NMEA 2000, configure repetition of messages

According to the NMEA 2000 standard it is possible to configure the repetition time of messages, which is specified by the manufacturer of the receiver. This is done by sending the Group Function message (PGN 126208). Because this message is larger than eight bytes, a transport protocol is needed. My question is: which protocol is used? Is it TP (SAE J1939), ETP (ISOBUS), or Fast Packet (NMEA 2000)?
Thanks for the help

The PGN is composed of information from the 29-bit identifier format. The contents are:
EDP (Extended Data Page): 1 bit
DP (Data Page): 1 bit
PDU-F (PDU Format): 8 bits
PDU-S (PDU Specific): 8 bits
The PGN you provided is 126208, which is 0x1ED00 in hex. Decoding it shows that EDP is 0 and DP is 1, so this is an NMEA 2000 message. Use the Fast Packet transport protocol.
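If it helps, here is a small Python sketch of that decoding (field layout exactly as listed above):

def decode_pgn(pgn: int):
    # Split an 18-bit PGN into EDP, DP, PDU-F and PDU-S (layout as above).
    edp = (pgn >> 17) & 0x01      # Extended Data Page
    dp = (pgn >> 16) & 0x01       # Data Page
    pdu_f = (pgn >> 8) & 0xFF     # PDU Format
    pdu_s = pgn & 0xFF            # PDU Specific
    return edp, dp, pdu_f, pdu_s

print(decode_pgn(126208))  # (0, 1, 237, 0) i.e. EDP=0, DP=1, PDU-F=0xED, PDU-S=0x00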
There's a good primer provided by Vector CANtech here:
http://vector.com/portal/medien/cmc/application_notes/AN-ION-1-3100_Introduction_to_J1939.pdf
(The DP & EDP bits are discussed on page 4 of that PDF)

Related

Decode weight in data packets from wireshark with bluetooth low energy

I've been doing some reverse engineering on different BLE-based devices, and I have a weight scale where I can't find a pattern to find/decode/interpret the weight value that I can get from the Android app. I was also able to get the services and characteristics of this device, but did not find a SIG standard match for the UUIDs on the Bluetooth site.
I'm using an nRF51 dongle to sniff data packets between the Android app and the weight scale, and I can see the BLE traffic, but there are several events during the communication whose relation I can't really understand, and I haven't been able to convert those values into a readable weight in kg or pounds.
My target value is 71.3 kg, read from the weight scale app.
Let me show you what I get from the BLE sniffer.
First I see that the master is sending notification/indication requests to the handles 0x0009 (notify), 0x000c (indication) and 0x000f (notify), one in each characteristic of one service.
Then I start to read notification/indication values mixed with write commands. At the end of the communication I see some packets that I feel are the ones with the weight scale data and BMI.
Packets number 574, 672 and 674 in the image.
So that gives us the following candidates:
1st. packet_number_574 = '000002c9070002'
2nd. packet_number_672 = '420001000000005ed12059007f02c9011d01f101'
3rd. packet_number_674 = '42018c016b00070237057d001a01bc001d007c'
The 1st packet looks more like a counter/RT/clock than a real measurement, judging by how the data behaves, so I feel this one is not a real option.
The 2nd and 3rd look more like real candidates. I have split them and converted them to decimal values and have not found a relation, even when combining bytes, in case these values are floating-point data types. So my real question is: am I missing something that you might see in this information? Do you know if there is a relation between these data packets and my target?
Thank you for taking the time to read this; any help would be appreciated, thanks!
EDIT:
I have a python script that allows me to check the services and their characteristics hierarchy and some useful data like properties, their handles and descriptors.
Service 'fff0' (0000fff0-0000-1000-8000-00805f9b34fb):
Characteristic 'fff1' (0000fff1-0000-1000-8000-00805f9b34fb):
Handle: 8 (8)
Readable: False
Properties: WRITE NOTIFY
Descriptor: Descriptor <Client Characteristic Configuration> (handle 0x9; uuid 00002902-0000-1000-8000-00805f9b34fb)
Characteristic 'fff2' (0000fff2-0000-1000-8000-00805f9b34fb):
Handle: 11 (b)
Readable: False
Properties: WRITE NO RESPONSE INDICATE
Descriptor: Descriptor <Client Characteristic Configuration> (handle 0xc; uuid 00002902-0000-1000-8000-00805f9b34fb)
Characteristic 'fff3' (0000fff3-0000-1000-8000-00805f9b34fb):
Handle: 14 (e)
Readable: False
Properties: NOTIFY
Descriptor: Descriptor <Client Characteristic Configuration> (handle 0xf; uuid 00002902-0000-1000-8000-00805f9b34fb)
These are the characteristics related to the notifications and indications that I see in Wireshark. I think packet number 574 (whose characteristic has only a notify property) is more important than I first thought.
I solved it by myself.
This post gave me the idea to take the value (2 bytes) and multiply it by 0.1; that way I could get the weight.
In bytes, I could look for my target value without the decimal point, 713, which in hex is 0x02c9.
If we look at the packet number 574:
000002c9070002 and split it 00:00:02:c9:07:00:02
I could see that the bytes at offsets 2 and 3 (0x02, 0xc9) match this pattern!
The only thing left to do is group those bytes and multiply the decimal value: 713 x 0.1 = 71.3. I ran different tests and found that this pattern is constant, so I feel it is accurate for my solution. Hope this helps someone in the future.
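A minimal Python sketch of that decoding, assuming the weight always sits at byte offsets 2-3 as a big-endian value scaled by 0.1:

def decode_weight(packet_hex: str) -> float:
    # Weight in kg: 16-bit big-endian value at byte offsets 2-3, divided by 10.
    raw = bytes.fromhex(packet_hex)
    return int.from_bytes(raw[2:4], "big") / 10

print(decode_weight("000002c9070002"))  # 71.3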

How does BLE handle sending multiple packets per interval with a 1-bit sequence number?

As far as I understand, BLE uses two 1-bit fields for applying sequence numbers to packets (SN, NESN). From my (admittedly basic) knowledge about (wireless) communication, a 1-bit sequence number is perfectly fine as long as the sender does not continue sending data until the last message is acknowledged by the receiver.
Because of that, it is trivial to understand how BLE works with a one-packet-per-interval scheme. However, BLE allows multiple packets in a single connection interval. So far I couldn't find any information on how this scenario is handled without allocating more bits for larger sequence numbers.
Any pointers in the right direction on where I'm going wrong or where I can read up on this would be appreciated.
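(For illustration only, here is a toy Python sketch of the alternating-bit / stop-and-wait idea described above - one outstanding packet, one sequence bit. It is not the actual BLE link-layer state machine.)

class StopAndWaitSender:
    # Toy alternating-bit sender: only one unacknowledged packet at a time.
    def __init__(self):
        self.sn = 0                   # 1-bit sequence number of the current packet

    def send(self, payload):
        return (self.sn, payload)     # transmit, then wait for the acknowledgement

    def on_ack(self, peer_next_expected_sn):
        # The peer acknowledges by reporting the sequence number it expects next.
        if peer_next_expected_sn == self.sn:
            return False              # not acknowledged yet: retransmit the same packet
        self.sn ^= 1                  # acknowledged: toggle the 1-bit sequence number
        return True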

What exactly is the class byte in JavaCard?

I've started to work with Java Cards and am trying to grasp the meaning of the CLA byte.
Reading section 5.4.1, Class byte, of ISO/IEC 7816-4:
5.4.1 Class byte
According to table 8 used in conjunction with table 9, the class byte
CLA of a command is used to indicate to what extent the command and the response comply with this part of ISO/IEC 7816 and when applicable (see table 9), the format of secure messaging and the logical channel number.
So... the CLA byte is used for indication, but indication of what exactly? The table and its description are rather difficult for a beginner; I understand that usually the following CLA bytes are used: 0x00, 0x80, 0x84.
For example, reading the content from the table:
'0X' Structure and coding of command and response according to this part of ISO/IEC 7816 (for coding of 'X' see table 9)
'10' to '7F' RFU
Reserved for PTS
I understand that for serious development I should read the GlobalPlatform specification, the specification for the exact card (mine is an NXP one) and other related materials, but I have to admit that the content is difficult to understand.
I expected something like the following (pseudo-table):
0x00 -> for reading streams from file system
0x01 -> for writing byte buffer to memory blocks
0x02 -> call AES/RSA methods
The CLASS byte is defined in ISO 7816-4. If the most significant bit is 0, the command belongs to the interindustry class, and Java Card applets normally operate with this interindustry coding. GlobalPlatform is another specification, used to manage and maintain the card, and all of its commands have a class byte in the range 0x80 - 0x8F. Class byte 0xFF is used for communication with the card reader in some cases and is otherwise invalid for a card.
The interindustry meaning for the CLA serves 3 major functions:
Function 1: Chaining
Bit 5 = 1 signals that the current command is not the last command of a chain, meaning that multiple APDUs belong together and the card may therefore do additional things.
Function 2: Secure Messaging
Bits 4 and 3 signal the secure messaging status of the current command. This means that the APDU is authenticated (e.g. MACed) and the data is encrypted (e.g. with a block cipher). The command header is never encrypted.
Function 3: Logical Channel
Bits 2 and 1 identify the logical channel number. Logical channels are parallel communication interfaces to the card, so an applet A can be selected on channel 0 and an applet B can be selected on channel 1 while both applets keep their internal state (no RAM is reset). Most cards do not support logical channels, or you have to enable them explicitly.
The CLA byte is a typical trap for Java Card beginners, and it's usually best to leave it at 0x00 to start with.
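As an illustration of those three functions, here is a rough Python sketch that decodes a basic (first) interindustry CLA byte; it deliberately ignores proprietary classes and the extended logical-channel encodings:

def decode_cla(cla: int):
    # Bit numbering as above: bit 8 = MSB ... bit 1 = LSB.
    if cla & 0xE0:
        raise ValueError("not basic interindustry coding (bits 8-6 are not 000)")
    return {
        "chaining": bool(cla & 0x10),           # bit 5: not the last command of a chain
        "secure_messaging": (cla >> 2) & 0x03,  # bits 4-3: secure messaging indication
        "logical_channel": cla & 0x03,          # bits 2-1: logical channel number 0-3
    }

print(decode_cla(0x00))  # {'chaining': False, 'secure_messaging': 0, 'logical_channel': 0}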

error detection/correction/recovery in serial protocols

I have some designing to do for a serial protocol and am running into some questions that I figure must have been considered elsewhere.
So I'm wondering if there are some recommendations for best practices in designing serial protocols. (Please either state a fact that is easily verifiable, or cite a reputable source if you make a claim.) General recommendations for websites/books are also welcome.
In particular I have to deal with issues like
parsing a stream of bytes into packets
verifying a packet is correct (easy with a CRC, for instance)
identifying reasonable types of errors that can occur (e.g. in a point-to-point serial stream, sporadic single-bit errors and dropped runs of bytes are both likely, but extra phantom bytes are unlikely; whereas with a record stored in flash memory or on a disk drive, the types of errors that predominate are different)
error correction or recovery (if I detect an error in a packet, can I correct it? If not, can I resync to the boundary of the next packet?)
how to make variable-length packets robust to error correction / recovery.
Any suggestions?
Packet delimiting
For syncing to packet boundaries, you typically have a byte or byte sequence that identifies the packet boundary and cannot occur within the packet itself. If the packet data happens to contain that identifier, then you have to "escape" (a.k.a. byte-stuff) it.
Examples:
PPP Encapsulation
Consistent Overhead Byte Stuffing (COBS), or maybe COBS/R, which encodes data packets so no zero bytes are present, thus you can use zero bytes for packet delimiting
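For illustration, here is a rough Python sketch of a COBS encoder (not production code): the output contains no zero bytes, so a single 0x00 can then delimit each packet on the wire.

def cobs_encode(data: bytes) -> bytes:
    # Replace each zero byte with a "distance to the next zero" code byte.
    out, block = bytearray(), bytearray()
    for byte in data:
        if byte == 0:
            out.append(len(block) + 1)   # code byte: length of the block + 1
            out += block
            block = bytearray()
        else:
            block.append(byte)
            if len(block) == 254:        # longest run allowed without a zero
                out.append(0xFF)
                out += block
                block = bytearray()
    out.append(len(block) + 1)           # final (possibly empty) block
    out += block
    return bytes(out)

cobs_encode(b"\x11\x22\x00\x33")  # b'\x03\x11\x22\x02\x33'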
Packet verification
Various options are:
Checksum
Adler-32
Fletcher
CRC (the more bits the better the check)
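As a minimal example of the verification step, a payload can be wrapped with a CRC-32 (using Python's zlib here; any of the checks above slots in the same way):

import zlib

def add_crc(payload: bytes) -> bytes:
    # Append a big-endian CRC-32 of the payload.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def strip_crc(frame: bytes):
    # Return the payload if its CRC-32 matches, otherwise None.
    payload, crc = frame[:-4], frame[-4:]
    return payload if zlib.crc32(payload).to_bytes(4, "big") == crc else None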
Error correction etc
Good questions. I've not had much experience with that.
Have you considered FEC (Forward Error Correction)?
This procedure is very often used in "physical" level communication protocols such as WDM (Wavelength Division Multiplexing) / OTN (Optical Transport Network).
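To make the FEC idea concrete, here is a toy Hamming(7,4) sketch in Python that corrects a single flipped bit per 7-bit codeword; real links typically use far stronger codes (Reed-Solomon, LDPC, etc.):

def hamming74_encode(nibble: int) -> int:
    # Encode 4 data bits into a 7-bit codeword (positions 1..7 = p1 p2 d1 p3 d2 d3 d4).
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(codeword: int) -> int:
    # Compute the syndrome, flip the indicated bit (if any), return the 4 data bits.
    bits = [(codeword >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]   # parity over positions 1,3,5,7
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]   # parity over positions 2,3,6,7
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]   # parity over positions 4,5,6,7
    error_pos = s1 + 2 * s2 + 4 * s3             # 0 means no single-bit error detected
    if error_pos:
        bits[error_pos - 1] ^= 1
    return sum(b << i for i, b in enumerate([bits[2], bits[4], bits[5], bits[6]]))

corrupted = hamming74_encode(0b1011) ^ (1 << 4)  # flip one bit of the codeword
assert hamming74_decode(corrupted) == 0b1011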

To pad or not to pad - creating a communication protocol

I am creating a protocol to have two applications talk over a TCP/IP stream and am figuring out how to design a header for my messages. Using the TCP header as an initial guide, I am wondering if I will need padding. I understand that when we're dealing with a cache, we want to make sure that the data being stored fits in a cache line so that it can be retrieved efficiently. However, I do not understand how it makes sense to pad a header, considering that an application will parse a stream of bytes and store it how it sees fit.
For example: I want to send over a message header consisting of a 3 byte field followed by a 1 byte padding field for 32 bit alignment. Then I will send over the message data.
In this case, the receiver will just take 3 bytes from the stream, throw away the padding byte, and then start reading the message data. As I see it, the receiver is going to store the 3 bytes and the message data however it wants anyway. The whole point of byte alignment is so that data can be retrieved efficiently, but if the receiver doesn't care about the padding, how is the retrieval any more efficient?
Without the padding, the receiver just takes the 3 header bytes from the stream and then takes the data bytes. Since the receiver stores these bytes however it wants, how does it matter whether or not the padding is done?
Maybe I'm missing the point of padding.
It's slightly hard to extract a question from this post, but with what I've said you guys can probably point out my misconceptions.
Please let me know what you guys think.
Thanks,
jbu
If word alignment of the message body is of some use, then by all means, pad the message to avoid other contortions. The padding will be of benefit if most of the message is processed as machine words with decent intensity.
If the message is a stream of bytes, for instance xml, then padding won't do you a whole heck of a lot of good.
As far as actually designing a wire protocol, you should probably consider using a plain text protocol with compression (including the header), which will probably use less bandwidth than any hand-designed binary protocol you could possibly invent.
I do not understand how it makes sense to pad a header considering that an application will parse a stream of bytes and store it how it sees fit.
If I'm a receiver, I might pass a buffer (i.e. an array of bytes) to the protocol driver (i.e. the TCP stack) and say, "give this back to me when there's data in it".
What I (the application) get back, then, is an array of bytes which contains the data. Using C-style tricks like "casting" and so on, I can treat portions of this array as if they were words and double-words (not just bytes) ... provided that they're suitably aligned (which is where padding may be required).
Here's an example of a statement which reads a DWORD from an offset in a byte buffer:
#include <stdint.h>

typedef uint8_t  byte;   // assuming an 8-bit byte
typedef uint32_t DWORD;  // assuming a 32-bit DWORD

DWORD getDword(const byte* buffer)
{
    // we want the DWORD which starts at byte-offset 8
    buffer += 8;
    // dereference as if it were pointing to a DWORD
    // (this would fail on some machines if the pointer
    // weren't pointing to a DWORD-aligned boundary)
    return *((const DWORD*)buffer);
}
Here's the corresponding access in Intel assembly; note that it's a single opcode, i.e. quite an efficient way to access the data - more efficient than reading and accumulating separate bytes:
mov eax,DWORD PTR [esi+8]
One reason to consider padding is if you plan to extend your protocol over time. Some of the padding can be intentionally set aside for future assignment.
Another reason to consider padding is to save a couple of bits in length fields: if the length is always a multiple of 4 or 8, you can drop 2 or 3 bits from the length field.
One other good reason that TCP has padding (which probably does not apply to you) is that it allows dedicated network-processing hardware to easily separate the data from the header. As the data always starts on a 32-bit boundary, it's easier to separate the header from the data when the packet gets routed.
If you have a 3 byte header and align it to 4 bytes, then designate the unused byte as 'reserved for future use' and require the bits to be zero (rejecting messages where they are not as malformed). That leaves you some extensibility. Or you might decide to use the byte as a version number - initially zero, and then incrementing it if (when) you make incompatible changes to the protocol. Don't let the value be 'undefined' and "don't care"; you'll never be able to use it if you start out that way.
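As a sketch of that suggestion (the exact field layout here is hypothetical), a 3-byte header - say a 1-byte message type plus a 16-bit length - padded to 4 bytes with a version/reserved byte that must be zero for now:

import struct

# '!' = network byte order, no implicit padding; the 4th byte is the explicit pad/version.
HEADER = struct.Struct("!BHB")   # type (1 byte), length (2 bytes), version/reserved (1 byte)

def pack_header(msg_type: int, length: int) -> bytes:
    return HEADER.pack(msg_type, length, 0)           # version/reserved is always 0 for now

def unpack_header(data: bytes):
    msg_type, length, version = HEADER.unpack(data[:HEADER.size])
    if version != 0:
        raise ValueError("unknown protocol version")  # reject as malformed
    return msg_type, length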
