How to send a bluetooth ACK signal (standard formats?) - node.js

I'm trying to communicate with a Bluetooth thermometer. It's not BLE; it uses serial ports. I've made it as far as receiving REQ signals from the device, but it requires an ACK signal or it cuts the connection after a few seconds.
The problem is, I can't decipher what the ACK signal is supposed to be. Going off the documentation, it says:
<ACK Format> ADH,01H
<REQ Format> ADH,00H,n
The third byte of REQ, n, can be multiplied by 0.01310547 to get the voltage of the battery
<Data Format> ADH,03H,1EH," IRSTP3xx.yyy.HhhSss,nnn,tt.t"+0D+0A
xx: LotNo.(base 16) "01"~"FF"
yyy: S/N(base 16) "001"~"FFF"
...
...
Nothing explains the first 3 bytes(?) of the Data Format either.
That's pretty much all I've got to work with. I've tried decoding REQ with different encodings like ASCII and UTF-8 to see if I can get it to match the REQ format, and then using the same encoding to format and send the ACK, but I haven't had any luck.
Is the format just in some kind of standard notation that I'm not familiar with?

The H apparently stands for hexadecimal. ADH is the byte 0xAD, written as the two hex digits A and D; likewise 01H is the byte 0x01. So the ACK is the two-byte message 0xAD 0x01. I had not seen that notation before.
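
Under that reading, here is a minimal sketch of the exchange, shown with pyserial purely to illustrate the byte values (the port name and baud rate are assumptions; from node.js the same two ACK bytes can be written as a Buffer):

import serial

# Assumed ACK bytes from the hex-suffix reading of the docs: ADH,01H -> 0xAD 0x01
ACK = bytes([0xAD, 0x01])

# "/dev/rfcomm0" and 9600 baud are placeholders for your actual port settings
ser = serial.Serial("/dev/rfcomm0", 9600, timeout=1)

req = ser.read(3)                       # <REQ Format> ADH,00H,n -> 3 bytes
if len(req) == 3 and req[0] == 0xAD and req[1] == 0x00:
    print("battery voltage:", req[2] * 0.01310547)
    ser.write(ACK)                      # answer the REQ so the device keeps the link open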

Related

How do you send a number when you answer a GET request with CoAP

I've been reading RFC 7252 for a while and I am probably blind, but I can't find how to send a simple number (integer or float) when you answer a GET request for a resource (for example the sensor /light). Where do you write it in the packet?
I think it's in the payload, so I tried to send this packet:
the option Content-Format text/plain; charset=utf-8, length 1
then I write 255 (0xFF) in the packet
then I write 0x34 in the packet (payload part).
But obviously it's not working. First, I don't think I should use this option (probably another one, but I can't find the right one to send either an integer or a float number). I'm not sure I'm on the right track, and honestly not sure anymore of what I am doing, so that's why I'm asking.
Thanks for help,
Good bye
EDIT: Here is more info:
I'm using microcoap on an Arduino Mega 2560, connected to the computer with an Ethernet cable.
wireshark info
After reviewing your Wireshark trace and seeing the response in Copper, I think I see the problem. When you say that the Content-Format is text/plain, you are saying that you are sending ASCII data across. You say you send [0xFF 0x34] in your post, but in the trace what you are actually sending is [0xFF 0x33]. Copper is showing you exactly what you are sending: 0xFF doesn't resolve as ASCII here, and 0x33 is the ASCII for 3, which is shown in the Wireshark trace and in your Copper output window. If you want to send 2 raw bytes of data that shouldn't be interpreted as text, you want to set your Content-Format to application/octet-stream.
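
To make the distinction concrete, here is a small sketch (in Python rather than microcoap, purely as an illustration) of how the same one-byte payload is read under the two Content-Format values registered by RFC 7252:

# Content-Format IDs from the RFC 7252 registry
TEXT_PLAIN = 0         # text/plain; charset=utf-8
OCTET_STREAM = 42      # application/octet-stream

payload = b'\x34'      # the byte written after the 0xFF payload marker

# With text/plain the peer decodes the payload as text:
print(payload.decode('utf-8'))   # '4'

# With application/octet-stream the peer keeps the raw numeric value:
print(payload[0])                # 52 (0x34)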

Sending NULL data over a socket

I am currently attempting to communicate with an external application over a TCP/IP socket. I have successfully established a connection with the client and received some data. This manual here states that
After this command is received, the client must read an acknowledgement
octet from the daemon. A positive acknowledgement is an octet of zero bits. A negative acknowledgement is an octet of any other pattern.
I would like to send a positive acknowledgment and I am sending it this way
My server listening code was obtained from here
void WriteData(std::string content)
{
    // write the string's characters to the connected socket
    send(newsockfd, content.c_str(), content.length(), 0);
}
WriteData("00000000");
My question is whether I am sending this data correctly (an octet of zero bits)?
Update:
I have read this post here,
which states that send only allows sending a char* array, so I am not sure how I can send a single byte over a socket. I know I could do something like this:
std::bitset<8> b1; // [0,0,0,0,0,0,0,0]
but I am not sure how I would send that over a socket.
Try
WriteData(std::string("\0",1));
using your function or even:
const char null_data(0);
send(newsockfd,&null_data,1,0);
to send it directly.
WriteData("00000000");
Will actually sends 8 octets of 48 [decimal] (assuming your platform is ASCII which all modern systems are compatible with).
However \0 is the escape sequence used in string literals to represent the null character (that is the zero octet).
There's a conflation (mixing together) in C++ between the notions of characters and bytes that dates back to the earliest days of C and is pretty much baked in.
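
As a quick sanity check of what the two literals actually put on the wire (shown in Python only to print the octets; the fix itself is the C++ above):

# Eight ASCII '0' characters versus the single zero octet the daemon expects
print(list(b"00000000"))   # [48, 48, 48, 48, 48, 48, 48, 48]
print(list(b"\x00"))       # [0]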

Problems sending bytes greater than 0x7F over a Python 3 serial port

I'm working with Python 3 and can't find an answer to my little problem.
My problem is sending a byte greater than 0x7F over the serial port from my Raspberry Pi.
example:
import serial
ser=serial.Serial("/dev/ttyAMA0")
a=0x7F
ser.write(bytes(chr(a), 'UTF-8'))
works fine! The receiver gets 0x7F
if a equals 0x80
a=0x80
ser.write(bytes(chr(a), 'UTF-8'))
the receiver gets two bytes: 0xC2 0x80
If I change the encoding to UTF-16, the receiver reads
0xFF 0xFE 0x80 0x00
The receiver should get only 0x80!
What's wrong? Thanks for your answers.
The UTF-8 specification says that words that are 1 byte/octet long start with a 0 bit. Because 0x80 is "10000000" in binary, it needs to be preceded by a C2, giving "11000010 10000000" (2 bytes/octets). 0x7F is "01111111", so when reading it, the decoder knows it is only 1 byte/octet long.
UTF-16 represents all words as 2 bytes/octets and has a Byte Order Mark which essentially tells the reader which one is the most significant octet (i.e., the endianness).
Check the UTF-8 specification for the full details, but essentially you are moving from the end of the 1-byte range to the start of the 2-byte range.
I don't understand why you want to send your own custom 1-byte words, but what you are really looking for is any SBCS (Single Byte Character Set) which has a character for those bytes you specify. UTF-8/UTF-16 are MBCS, which means when you encode a character, it may give you more than a single byte.
Before UTF-? came along, everything was SBCS, which meant that any code page you selected was coded using 8-bits. The problem arose when 256 characters were not enough, and they had to make code pages like IBM273 (IBM EBCDIC Germany) and ISO-8859-1 (ANSI Latin 1; Western European) to interpret what "0x2C" meant. Both the sender and receiver needed to set their code page identifier to the same, or they wouldn't understand each other. There is further confusion because these SBCS code pages don't always use the full 256 characters, so "0x7F" may not even exist / have a meaning.
What you could do is encode it to something like code page 737 / IBM 00737, send the "Α" (Greek alpha) character, and it should be encoded as 0x80.
If it doesn't work, I'm not sure whether you can send the raw byte through pyserial, as the write() method seems to require an encoding; you may need to look into the source code for the lower-level details.
a=0x80
ser.write(bytes(chr(a), 'ISO-8859-1'))
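
For what it's worth, a quick check of what each approach puts on the wire; note that in Python 3 pyserial's write() also accepts a bytes object directly, so the character-set detour can be skipped:

a = 0x80
print(chr(a).encode('utf-8'))        # b'\xc2\x80' -> the two bytes you observed
print(chr(a).encode('iso-8859-1'))   # b'\x80'     -> the single byte you want
# ser.write(bytes([a]))              # also sends exactly 0x80, no text encoding involved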

Interpreting data from RDM630 RFID reader

I am trying to build an RFID-based door opener with an ATtiny2313 and an RDM630 RFID reader. There has been no problem with programming or getting the two ICs to talk to each other via UART. The problem is the interpretation of the data.
I wasn't able to make any sense of what the RDM630 had sent to the ATtiny, so I hooked it up via an RS232/USB adapter, and this is what I get on my PC:
Display = ASCII:
Display set to HEX:
Written on the Card is:
0000714511 010,59151
Can anyone help me make sense of the Data?
Most of the bytes that the RDM630 RFID reader module sends are ASCII characters of hex digits ('0'-'9', 'A'-'F'), which means 0x30-0x39, 0x41-0x46.
It looks like your RS232/USB adapter inverts the bits compared to a direct TTL connection.
(RS232 is inverted TTL. It also has different voltage levels, but that's OK as long as the TTL transmit output feeds the RS232 receive input, as in your case. The other way around is more tricky.)
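
As an illustration of how those ASCII hex digits line up with the numbers printed on the card (the digit string 0AE70F below is inferred from the printed numbers, not taken from your dump):

tag_hex = "0AE70F"              # trailing hex digits of the tag, received as ASCII characters
print(int(tag_hex, 16))         # 714511 -> the "0000714511" printed on the card
print(int(tag_hex[:2], 16))     # 10     -> the "010" part
print(int(tag_hex[2:], 16))     # 59151  -> the "59151" part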

Pyserial converting bytes to normal string

I am receiving a packet through a serial port, but when I receive the packet it is of class bytes and looks like this:
b'>0011581158NNNNYNNN +6\r'
How do I convert this to a normal string? When I try to take information from this string, it appears to come out as a decimal representation.
You can call decode on the bytes object to convert it to a string, but that only works if the bytes object actually represents text:
>>> bs = b'>0011581158NNNNYNNN +6\r'
>>> bs.decode('utf-8')
'>0011581158NNNNYNNN +6\r'
To really parse the input, you need to know the format and what it actually means. To do that, identify the device that is connected to the serial port (a scanner? a robot? a receiver of some kind?) and look up its protocol. In your case it may be a text-based protocol, but you'll often find that the bytes stand for numeric values, in which case you'll probably want to have a look at the struct module.
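
For instance, if the device had sent a value as raw binary instead of text, struct.unpack would decode it; the two-byte big-endian field below is a made-up example, not your device's actual format:

import struct

raw = b'\x04\x86'                     # hypothetical 2-byte big-endian integer field
(value,) = struct.unpack('>H', raw)
print(value)                          # 1158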
