Send character from arduino to python decode error - python-3.x

I'm trying to send a string from an Arduino to Python over Bluetooth.
The connection seems to work, but the received data isn't what I expect.
This is my code:
[Arduino]
void Send_Joystick(int X, int Y)
{
    if (800 <= X && X < 1023 && 700 <= Y && Y < 1025) { BTSerial.write(byte(10)); }
    else if (600 <= X && X < 800 && 700 <= Y && Y < 1025) { BTSerial.write(byte(11)); }
    else if (400 <= X && X < 600 && 700 <= Y && Y < 1025) { BTSerial.write(byte(12)); }
    else if (200 <= X && X < 400 && 700 <= Y && Y < 1025) { BTSerial.write(byte(13)); }
    else if (0 <= X && X < 200 && 700 <= Y && Y < 1025) { BTSerial.write(byte(14)); }
    else if (800 <= X && X < 1025 && 300 <= Y && Y < 700) { BTSerial.write(byte(15)); }
    else if (600 <= X && X < 800 && 300 <= Y && Y < 700) { BTSerial.write(byte(16)); }
    else if (400 <= X && X < 600 && 300 <= Y && Y < 700) { BTSerial.write(byte(17)); }
    else if (200 <= X && X < 400 && 300 <= Y && Y < 700) { BTSerial.write(byte(18)); }
    else if (0 <= X && X < 200 && 300 <= Y && Y < 700) { BTSerial.write("19>"); }
    else if (800 <= X && X < 1025 && 0 <= Y && Y < 300) { BTSerial.write("20>"); }
    else if (600 <= X && X < 800 && 0 <= Y && Y < 300) { BTSerial.write("21>"); }
    else if (400 <= X && X < 600 && 0 <= Y && Y < 300) { BTSerial.write("22>"); }
    else if (200 <= X && X < 400 && 0 <= Y && Y < 300) { BTSerial.write("23>"); }
    else if (0 <= X && X < 200 && 0 <= Y && Y < 300) { BTSerial.write("24>"); }
}
This is only part of my code, and the BTSerial calls look inconsistent because I tried many different approaches.
[Python3]
import bluetooth

bd_addr = "98:D3:37:00:8D:39"  # The address of the HC-05 sensor
port = 1
sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
sock.connect((bd_addr, port))
while True:
    try:
        data = sock.recv(1024)
        print(data)
    except KeyboardInterrupt:
        break
sock.close()

# while True:
#     try:
#         data = sock.recv(1024)
#         print("received [%s]" % data)
#     except KeyboardInterrupt:
#         break
# sock.close()

# below this was in the main code, below the "data = sock.recv(1024)" line
# data_end = data.find('>')
# if data_end != -1:
#     rec = data[:data_end]
#     print(rec)
#     data = data[data_end+1:]
That was my Python code. When I run it, the Python shell shows me something like this:
b'\xc3\xcc\xcf'
b'\xc3'
b'\xec\xcf'
b'\xc3'
b'\xec\xce'
b'\xc3\xec\xcf'
b'\xc3\xcc'
b'\xcf'
b'\xc3'
b'\xec\xcf'
b'\xc3\xec\xcf'
b'\xc3\xcc\xcf'
and when I change my Python code to
data = sock.recv(1024).decode
the output looks like this:
<built-in method decode of bytes object at 0x2d1bd40>
<built-in method decode of bytes object at 0x2d1bd70>
<built-in method decode of bytes object at 0x2d1bda0>
<built-in method decode of bytes object at 0x2d1bdd0>
<built-in method decode of bytes object at 0x2d1be00>
<built-in method decode of bytes object at 0x2d1be30>
<built-in method decode of bytes object at 0x2d1be60>
I want to receive the data exactly as I sent it from the Arduino, but nothing I've tried has worked.
How can I get this working?
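A sketch of the kind of parsing the commented-out code above seems to aim at (the `read_messages` helper and the sample chunks are hypothetical, not from the original code): `.decode()` has to be called with parentheses, and because `recv()` can split one message across reads, the bytes should be buffered and split on the `>` delimiter.

```python
def read_messages(chunks):
    """Reassemble '>'-terminated messages from arbitrary byte chunks.

    `chunks` stands in for successive sock.recv(1024) results; in the
    real script you would call sock.recv() in a loop instead.
    """
    buffer = b""
    messages = []
    for chunk in chunks:
        buffer += chunk
        while True:
            end = buffer.find(b">")
            if end == -1:
                break
            # .decode() must be *called*; a bare .decode only names the method
            messages.append(buffer[:end].decode("ascii"))
            buffer = buffer[end + 1:]
    return messages

# Messages may arrive split across recv() calls:
print(read_messages([b"19>2", b"0>21", b">"]))  # ['19', '20', '21']
```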

Related

Make nn.Transformer work for Text Generation

I am trying to make a Transformer work for paraphrase generation, but the generations are not useful (the same every time, full of BOS tokens or "?" tokens).
I followed this tutorial for reference. My implementation is embedded into a framework which requires an Encoder and a Decoder.
The encoder is like this:
class TransformerEncoder(nn.Module):
    def __init__(
        self,
        vocab_size,
        pad_token_id=None,
        embedding_size=256,
        num_heads=8,
        num_layers=3,
        ffnn_size=512,
        dropout=0.1,
    ):
        super(TransformerEncoder, self).__init__()
        self.vocab_size = vocab_size
        self.pad_token_id = pad_token_id
        self.embedding_size = embedding_size
        self.num_heads = num_heads
        self.num_layers = num_layers
        self.ffnn_size = ffnn_size
        self.embed_tokens = TokenEmbedding(vocab_size, embedding_size)
        self.embed_positions = PositionalEmbedding(embedding_size, dropout=dropout)
        encoder_layer = nn.TransformerEncoderLayer(
            embedding_size,
            num_heads,
            ffnn_size,
            dropout,
        )
        encoder_norm = nn.LayerNorm(embedding_size)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers, encoder_norm)

    def forward(
        self,
        input_ids,
    ):
        # seq_len = input_ids.shape[1]
        # device = next(self.parameters()).device
        embedded_tokens = self.embed_positions(self.embed_tokens(input_ids))
        # B x T x C -> T x B x C
        embedded_tokens = embedded_tokens.transpose(0, 1)
        memory = self.encoder(embedded_tokens)
        return (memory,)
The decoder is like this:
class TransformerDecoder(nn.Module):
    def __init__(
        self,
        vocab_size,
        pad_token_id=None,
        embedding_size=256,
        num_heads=8,
        num_layers=3,
        ffnn_size=512,
        dropout=0.1,
    ):
        super(TransformerDecoder, self).__init__()
        self.vocab_size = vocab_size
        self.pad_token_id = pad_token_id
        self.embedding_size = embedding_size
        self.num_heads = num_heads
        self.num_layers = num_layers
        self.ffnn_size = ffnn_size
        self.dropout_module = nn.Dropout(p=dropout)
        self.embed_tokens = TokenEmbedding(vocab_size, embedding_size)
        self.embed_positions = PositionalEmbedding(embedding_size, dropout=dropout)
        decoder_layer = nn.TransformerDecoderLayer(
            embedding_size, num_heads, ffnn_size, dropout
        )
        decoder_norm = nn.LayerNorm(embedding_size)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers, decoder_norm)
        self.fc_out = nn.Linear(embedding_size, vocab_size)

    def forward(
        self,
        input_ids,
        encoder_out,
    ):
        seq_len = input_ids.shape[1]
        device = next(self.parameters()).device
        mask = generate_square_subsequent_mask(seq_len).to(device)
        embedded_tokens = self.embed_positions(self.embed_tokens(input_ids))
        # B x T x C -> T x B x C
        embedded_tokens = embedded_tokens.transpose(0, 1)
        output = self.decoder(embedded_tokens, encoder_out[0], tgt_mask=mask)
        # T x B x C -> B x T x C
        output = output.transpose(1, 0)
        return (self.fc_out(output),)
TokenEmbedding and PositionalEmbedding are as in the tutorial.
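generate_square_subsequent_mask also comes from the tutorial; as a point of reference, a plain-Python sketch of the mask it produces (0.0 where attention is allowed, -inf at future positions) looks like this:

```python
import math

def causal_mask(size):
    """size x size causal mask: row t may attend to columns 0..t only."""
    return [[0.0 if col <= row else -math.inf for col in range(size)]
            for row in range(size)]

for row in causal_mask(3):
    print(row)
# [0.0, -inf, -inf]
# [0.0, 0.0, -inf]
# [0.0, 0.0, 0.0]
```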
The main model just invokes encoder and decoder like:
encoder_outputs = self.encoder(input_ids=input_ids, **kwargs)
decoder_outputs = self.decoder(
    input_ids=decoder_input_ids,
    encoder_out=encoder_outputs,
    **kwargs,
)
The labels are shifted one token to the right to be fed to the decoder using:
def shift_tokens_right(self, input_ids: torch.Tensor, decoder_start_token_id: int):
    shifted_input_ids = input_ids.new_zeros(input_ids.shape)
    shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
    shifted_input_ids[:, 0] = decoder_start_token_id
    return shifted_input_ids
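The shift can be sanity-checked without torch; this plain-list version (with made-up token ids) mirrors the tensor code above:

```python
def shift_tokens_right(input_ids, decoder_start_token_id):
    """Plain-list version of the tensor shift: drop the last token of
    each row and prepend the decoder start token."""
    return [[decoder_start_token_id] + row[:-1] for row in input_ids]

# Hypothetical ids: 0 = <s>, 2 = </s>, 1 = <pad>
batch = [[5, 6, 7, 2], [8, 9, 2, 1]]
print(shift_tokens_right(batch, 0))  # [[0, 5, 6, 7], [0, 8, 9, 2]]
```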
The loss is calculated as:
loss_fct = nn.CrossEntropyLoss(ignore_index=self.pad_token_id)
loss = loss_fct(logits.reshape(-1, logits.shape[-1]), targets.reshape(-1))
The loss is going down, but the generations are really bad. The following is an example of the generations:
Source: < s > Can I jailbreak iOS 10 ? < /s > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad >
Preds: < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s > < s >
Target: < s > Can you jailbreak iOS 10 ? < /s > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad > < pad >
As you can see, the predictions in this case are only BOS tokens. The output of the decoder at each decoding step is almost the same on every iteration. The model does not seem to be learning. I have tried learning rates from 0.1 to 1e-4. For a brief moment around the second or third epoch, intelligible sentences were produced, but quickly after that the generations reverted to just BOS or PAD tokens.
Do you have an intuition on what might be wrong? Sorry for the question not being self-contained. Thanks in advance for any help you can provide.

CS50 Wk4 Blur Pset

void blur(int height, int width, RGBTRIPLE image[height][width])
{
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            int red_total = 0;
            int blue_total = 0;
            int green_total = 0;
            int number_counted = 0;
            for (int k = -1; k <= 1; k++)
            {
                for (int l = -1; l <= 1; l++)
                {
                    if (i + k <= height && i + k >= 0 && j + l <= width && j + l >= 0)
                    {
                        blue_total += image[i+k][j+l].rgbtBlue;
                        red_total += image[i+k][j+l].rgbtRed;
                        green_total += image[i+k][j+l].rgbtGreen;
                        number_counted++;
                    }
                }
            }
            image[i][j].rgbtBlue = blue_total / number_counted;
            image[i][j].rgbtRed = red_total / number_counted;
            image[i][j].rgbtGreen = green_total / number_counted;
        }
    }
    return;
}
Why does that section use && operators?
if (i + k <= height && i + k >= 0 && j + l <= width && j + l >= 0)
I ran it with || operators because my understanding was that, given the problem, if any of those conditions is satisfied there is no neighbouring pixel to add. Yet when I run it with || it returns a segmentation fault, whereas with && the problem works out. Why is that?
Thank you for answering!
All of those conditions have to be true or else the array accesses will be invalid.
e.g. if i+k > height then image[i+k] is invalid.
Also I think you have some "off by one" problems: image is [height][width], so the valid indices are [0..height-1] and [0..width-1], and the checks should be more like if (i + k < height && i + k >= 0 && j + l < width && j + l >= 0)
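If it helps, the same bounds test can be played with outside C; this Python sketch (the `in_bounds` helper is invented for illustration) shows why all four conditions must hold at once before an element is touched:

```python
def in_bounds(i, j, height, width):
    # All four conditions must hold (&&): a single failing index is
    # enough to make grid[i][j] an invalid access.
    return 0 <= i < height and 0 <= j < width

height, width = 3, 3
# With 'and', the corner pixel (0, 0) only averages the cells that exist:
neighbours = [(i, j)
              for i in range(-1, 2)
              for j in range(-1, 2)
              if in_bounds(0 + i, 0 + j, height, width)]
print(len(neighbours))  # 4: the corner itself plus 3 in-bounds neighbours
```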

Get a certain combination of numbers in Python

Is there an efficient and convenient way in Python to do something like this:
Find the largest combination of two numbers x and y, subject to the following conditions:
0 < x < 1000
0 < y < 2000
x/y = 0.75
x and y are integers
It's easy to do with a simple graphing calculator, but I'm trying to find the best way to do it in Python.
import pulp

My_optimization_prob = pulp.LpProblem('My_Optimization_Problem', pulp.LpMaximize)

# Creating the variables
x = pulp.LpVariable("x", lowBound=1, cat='Integer')
y = pulp.LpVariable("y", lowBound=1, cat='Integer')

# Adding the constraints
My_optimization_prob += x + y            # Objective: maximize x + y
My_optimization_prob += x <= 999         # x < 1000
My_optimization_prob += y <= 1999        # y < 2000
My_optimization_prob += x - 0.75*y == 0  # x/y = 0.75

# Printing the problem and constraints
print(My_optimization_prob)
My_optimization_prob.solve()

# Printing x and y
print('x = ', pulp.value(x))
print('y = ', pulp.value(y))
Probably just:
z = [(x, y) for x in range(1, 1000) for y in range(1, 2000) if x/y == 0.75]
z.sort(key=lambda pair: sum(pair), reverse=True)
z[0]
# Returns (999, 1332)
This is convenient, though I'm not sure it's the most efficient way.
Another possible, relatively efficient solution:
x_upper_limit = 1000
y_upper_limit = 2000
x = 0
y = 0
temp_variable = 0
ratio = 0.75
for i in range(x_upper_limit, 0, -1):
    temp_variable = i/ratio
    if temp_variable.is_integer() and temp_variable < y_upper_limit:
        x = i
        y = int(temp_variable)
        break
print(x, y)
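Since 0.75 is exactly 3/4, another option is to skip the search entirely: every integer solution has the form x = 3k, y = 4k, and the largest valid k is fixed by the tighter of the two bounds. A sketch of that idea (the helper name is mine, not from the answers above):

```python
from fractions import Fraction

def largest_ratio_pair(ratio, x_max, y_max):
    """Largest integers with 0 < x < x_max, 0 < y < y_max and x/y == ratio."""
    r = Fraction(ratio).limit_denominator()
    p, q = r.numerator, r.denominator        # x = p*k, y = q*k
    k = min((x_max - 1) // p, (y_max - 1) // q)
    return (p * k, q * k) if k > 0 else None

print(largest_ratio_pair(0.75, 1000, 2000))  # (999, 1332)
```

This runs in constant time instead of scanning up to two million candidate pairs.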

Rewrite code using generate statement (Verilog HDL)

I'm trying to rewrite this code using generate statements (Verilog HDL):
integer j;
always @(posedge cpu_clk) begin
    // ACCU_RST
    if (RAM[3][7]) begin
        RAM[3][7] <= 1'b0;
        for (j = 10; j <= 15; j = j + 1)
            RAM[j] <= 8'b0;
    end
    // CPU write
    RAM[addr + 0] <= in_valid && cmd && (addr + 0 <= 9 || addr + 0 >= 16) ? data_in[8 * 0 + 7:8 * 0] : RAM[addr + 0];
    RAM[addr + 1] <= in_valid && cmd && (addr + 1 <= 9 || addr + 1 >= 16) ? data_in[8 * 1 + 7:8 * 1] : RAM[addr + 1];
    RAM[addr + 2] <= in_valid && cmd && (addr + 2 <= 9 || addr + 2 >= 16) ? data_in[8 * 2 + 7:8 * 2] : RAM[addr + 2];
    RAM[addr + 3] <= in_valid && cmd && (addr + 3 <= 9 || addr + 3 >= 16) ? data_in[8 * 3 + 7:8 * 3] : RAM[addr + 3];
    // CPU read
    out_valid <= !cmd && in_valid;
    out_data[8 * 0 + 7:8 * 0] <= !cmd && in_valid ? RAM[addr + 0] : out_data[8 * 0 + 7:8 * 0];
    out_data[8 * 1 + 7:8 * 1] <= !cmd && in_valid ? RAM[addr + 1] : out_data[8 * 1 + 7:8 * 1];
    out_data[8 * 2 + 7:8 * 2] <= !cmd && in_valid ? RAM[addr + 2] : out_data[8 * 2 + 7:8 * 2];
    out_data[8 * 3 + 7:8 * 3] <= !cmd && in_valid ? RAM[addr + 3] : out_data[8 * 3 + 7:8 * 3];
end
Yet I receive the following errors when I try this:
// CPU write
for (i = 0; i <= 3; i = i + 1) begin
    if (in_valid && cmd && (addr + i <= 9 || addr + i >= 16))
        RAM[addr + i] <= data_in[8 * i + 7:8 * i];
end
// CPU read
out_valid <= !cmd && in_valid;
for (i = 0; i <= 3; i = i + 1) begin
    if (in_valid && !cmd)
        out_data[8 * i + 7:8 * i] <= RAM[addr + i];
end
ERROR: i is not a constant value.
(The error points to data_in[8 * i + 7:8 * i] and out_data[8 * i + 7:8 * i].)
Another attempt, using two always blocks (one for the generate, one for ACCU_RST), yields multiple drivers for RAM (duh).
Last try:
genvar i;
always @(posedge cpu_clk) begin
    if (ACCU_RST) begin
        RAM[3][7] <= 1'b0;
        for (j = 10; j <= 15; j = j + 1)
            RAM[j] <= 8'b0;
    end
    // CPU write cmd
    for (i = 0; i <= 3; i = i + 1) begin :CPU_W
        if (in_valid && cmd && (addr + i <= 9 || addr + i >= 16))
            RAM[addr + i] <= data_in[8 * i + 7:8 * i];
    end
    // CPU read cmd
    out_valid <= !cmd && in_valid;
    for (i = 0; i <= 3; i = i + 1) begin :CPU_R
        if (in_valid && !cmd)
            out_data[8 * i + 7:8 * i] <= RAM[addr + i];
    end
end
That yields:
ERROR: Procedural assignment to a non-register i is not permitted,
left-hand side should be reg/integer/time/genvar
(and points to i = 0 and to i = i + 1).
For this you shouldn't use a generate block. A generate for loop must exist outside of an always block, and a value must only be assigned in one always block to be synthesizable. Take the example below: RAM[2] can be assigned when addr==0 on the third iteration (i==2), when addr==1 on the second iteration (i==1), and when addr==2 on the first iteration (i==0). That is three separate always blocks driving the same register, which is a synthesis error.
genvar i;
generate
    for (i = 0; i < 4; i++) begin
        always @(posedge clk)
            if (in_valid && cmd && (addr + i <= 9 || addr + i >= 16))
                RAM[addr + i] <= data_in[8*i + 7 : 8*i];
    end
endgenerate
Skip the generate and use a standard for loop inside the always block. Use indexed part-select (references here and here):
integer i; // <-- not genvar
always @(posedge cpu_clk) begin
    /* ... your other code ... */
    // CPU write cmd
    for (i = 0; i < 4; i = i + 1) begin :CPU_W
        if (in_valid && cmd && (addr + i <= 9 || addr + i >= 16))
            RAM[addr + i] <= data_in[8*i +: 8];
    end
    // CPU read cmd
    out_valid <= !cmd && in_valid;
    for (i = 0; i < 4; i = i + 1) begin :CPU_R
        if (in_valid && !cmd)
            out_data[8*i +: 8] <= RAM[addr + i];
    end
end

What is wrong with my function in Octave?

I just tried to create my first function in Octave; it looks as follows:
function hui(x)
    if (0 <= x && x < 2)
        retval = (1.5 * x + 2)
    elseif (2 <= x && x < 4)
        retval = (-x + 5)
    elseif (4 <= x && x < 6)
        retval = (0.5 * x)
    elseif (6 <= x && x < 8)
        retval = (x - 3)
    elseif (8 <= x && x <= 10)
        retval = (2 * x - 11)
    endif
endfunction
but when I try to plot it using x = 0:0.1:10; plot(x, hui(x));
it shows a plot which seems a little bit strange.
What did I do wrong?
Thanks in advance,
John
You'll have to pardon my rustiness with the package, but you need to change the code around a bit. Notably, the condition 0 <= x is better written x >= 0, and since hui is operating on a vector, you need to take that into account when constructing your return value.
I'm sure there are more effective ways of vectorising this, but basically: while stepping over the input vector, I append the latest value onto the return vector, and at the end I lop off the initial 0 that I put in. I also put in a sentinel value in case the input didn't fulfil any of the criteria (your code was always falling through without assigning anything, so putting something there could have alerted you that something was wrong).
function [retval] = hui(x)
    retval = 0
    for i = 1:size(x, 2)
        if (x(i) >= 0 && x(i) < 2)
            retval = [retval (1.5 * x(i) + 2)];
        elseif (x(i) >= 2 && x(i) < 4)
            retval = [retval (-1 * x(i) + 5)];
        elseif (x(i) >= 4 && x(i) < 6)
            retval = [retval (0.5 * x(i))];
        elseif (x(i) >= 6 && x(i) < 8)
            retval = [retval (x(i) - 3)];
        elseif (x(i) >= 8 && x(i) <= 10)
            retval = [retval (2 * x(i) - 11)];
        else
            retval = -999;
        endif
    endfor
    retval = retval(2:size(retval, 2));
endfunction
x is a vector, so you either need to loop through it or vectorise your code to remove the need.
As you're using Octave, it's worth vectorising everything you possibly can. The easiest way I can think of to do this is:
x = 0:0.1:10;
y = x;
y(x >= 0 & x < 2) = x(x >= 0 & x < 2) * 1.5 + 2;
y(x >= 2 & x < 4) = x(x >= 2 & x < 4) * -1 + 5;
y(x >= 4 & x < 6) = x(x >= 4 & x < 6) * 0.5;
y(x >= 6 & x < 8) = x(x >= 6 & x < 8) - 3;
y(x >= 8 & x < 10) = x(x >= 8 & x < 10) * 2 - 11;
The y(x >= a & x < b) syntax is logical indexing. Alone, x >= a & x < b gives you a vector of logical values, but combined with another vector you get the values which meet the condition. Octave will also let you do assignments like this.
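For comparison, the same logical-indexing approach translates almost line for line to Python with NumPy (this translation is mine, not part of the original answer; linspace stands in for the 0:0.1:10 range):

```python
import numpy as np

x = np.linspace(0, 10, 101)  # like Octave's 0:0.1:10
y = x.copy()

# Boolean masks play the role of Octave's logical indexing
y[(x >= 0) & (x < 2)] = x[(x >= 0) & (x < 2)] * 1.5 + 2
y[(x >= 2) & (x < 4)] = x[(x >= 2) & (x < 4)] * -1 + 5
y[(x >= 4) & (x < 6)] = x[(x >= 4) & (x < 6)] * 0.5
y[(x >= 6) & (x < 8)] = x[(x >= 6) & (x < 8)] - 3
y[(x >= 8) & (x <= 10)] = x[(x >= 8) & (x <= 10)] * 2 - 11

print(y[0], y[-1])  # 2.0 at x=0, 9.0 at x=10
```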
