We are working on an application that will use SPP (Serial Port Profile) over Bluetooth, and the developers are debating whether to use some type of packet protocol with delivery guarantees, versus just streaming the data without any form of ACK, sequence, or size information.
Does Bluetooth provide the guaranteed delivery and data integrity so that we do not need the overhead of a packet protocol design? Can we rely on just Bluetooth to ensure the data was delivered?
Is delivery guaranteed?
The order of delivery is guaranteed. This is due to the acknowledgement/sequence-numbering scheme built into the lower layers of the Bluetooth protocol, so at a lower layer a packet is retransmitted until it is acknowledged. Note that this is equivalent to a stop-and-wait ARQ scheme. If a packet is not acknowledged within the supervision timeout, the connection is considered lost (generally 30 seconds).
Is data integrity guaranteed?
Bluetooth 4.2 introduced BT Secure Connections. This includes a Message Integrity Check (MIC) with each transmitted packet; a MIC mismatch at the receiving end will trigger a retransmission, and a number of MIC mismatches may cause the connection to be dropped.
So if you are not using the Secure Connections feature, integrity is not guaranteed. There is a 16-bit CRC scheme used to protect data, but it is known that over a long time period there will be CRC escapes (bits flipped in such a way that the CRC still checks out). This is relatively rare and happens in noisy environments. If your application requires very high data integrity, then either use Secure Connections or introduce application-level integrity checks.
Note that the SPP profile in itself does not have any error/sequence checking. RFCOMM has an 8-bit FCS (Frame Check Sequence), which only checks for header corruption. L2CAP streaming/retransmission mode has an optional 16-bit FCS which covers the L2CAP header and data; note that basic L2CAP mode has no FCS at all.
If you have the option to enable the L2CAP FCS, then the 16-bit CRC at the lower level + the 16-bit FCS at the L2CAP layer + the 8-bit FCS at the RFCOMM level together provide data integrity that is good enough for most applications. However, as mentioned earlier, if integrity is really critical then you need to introduce additional application-level checks.
In essence, BT has its own safety mechanisms for transfer. However, and this is just as important: in order for YOU to know where data starts and ends, you should use a packet-type transmission, such as STX and ETX to delimit each packet. There are dongles that adhere to the bad habit of repeating the last sent byte if there is a time lapse in the transmission, but they will stop when an ETX or EOT is encountered.
And, for your system's safety, you might as well include a checksum at the end of the packet. Then you can be pretty sure.
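For illustration, here is a minimal sketch of such framing in C. The STX/ETX/DLE values follow ASCII; the one-byte additive checksum and the helper name frame_packet are assumptions for the example only, and a CRC would be a stronger choice:

#include <stddef.h>
#include <stdint.h>

#define STX 0x02 /* start of packet */
#define ETX 0x03 /* end of packet */
#define DLE 0x10 /* escape byte for control values inside the payload */

/* Wrap a payload as STX ... escaped payload ... checksum ... ETX.
 * `out` must hold up to 2*len + 4 bytes. Returns the framed length. */
size_t frame_packet(const uint8_t *payload, size_t len, uint8_t *out) {
    size_t n = 0;
    uint8_t sum = 0;
    out[n++] = STX;
    for (size_t i = 0; i < len; i++) {
        sum += payload[i]; /* checksum over the raw payload */
        if (payload[i] == STX || payload[i] == ETX || payload[i] == DLE)
            out[n++] = DLE; /* escape control bytes */
        out[n++] = payload[i];
    }
    if (sum == STX || sum == ETX || sum == DLE)
        out[n++] = DLE; /* the checksum byte itself may need escaping */
    out[n++] = sum;
    out[n++] = ETX;
    return n;
}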
I have a simple application which receives packets of a fixed EtherType via a raw socket (the transport is Ethernet) and sends two duplicates over another interface (via a raw socket):
n = recvfrom(rx_sock, buf, sizeof(buf), 0, NULL, NULL); /* blocking */
/* make duplicate, add tail */
sendto(tx_sock, packet1, n, 0, (struct sockaddr *)&dst, sizeof(dst));
sendto(tx_sock, packet2, n, 0, (struct sockaddr *)&dst, sizeof(dst));
I want to increase throughput. I need at least 4000 frames/second and can't change the packet size. The system is embedded (AM335x SoC), kernel 4.14.40... How can I increase the performance?
A few considerations:
I know you said you can't change "packet size", but using buffered writes and infrequent flushes might really help performance
You could enable jumbo frames
You could disable Nagle's algorithm (see the sketch below)
Usually people don't want to change their packet sizes, because they expect a 1-to-1 correspondence between their send()'s and recv()'s. This is not a good thing, because TCP specifically does not ensure that your send()'s and recv()'s will have a 1-1 correspondence. They usually will be 1-1, but this is not guaranteed. Transmitting data with Nagle enabled, or over many router hops, or without Path MTU Discovery enabled, makes the 1-1 relationship less likely.
So if you use buffering and frame your data somehow (e.g. terminate messages with a NUL byte if your data cannot otherwise contain one, or transfer lengths as network shorts so you know how much to read), you'll likely be killing two birds with one stone: you'll get better speed and better reliability.
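A minimal sketch of both ideas in C, assuming an already-connected TCP socket fd (the helper names disable_nagle and send_framed are illustrative, not from the question's code):

#include <arpa/inet.h>
#include <netinet/tcp.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Disable Nagle's algorithm so small writes are not delayed for coalescing. */
static int disable_nagle(int fd) {
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}

/* Length-prefix framing: send a 16-bit big-endian length, then the payload,
 * so the receiver knows exactly how many bytes belong to one message even
 * when TCP splits or merges the writes. */
static ssize_t send_framed(int fd, const void *msg, uint16_t len) {
    uint16_t hdr = htons(len);          /* "network short" length */
    struct iovec iov[2] = {
        { &hdr, sizeof(hdr) },
        { (void *)msg, len }
    };
    return writev(fd, iov, 2);          /* one syscall per frame */
}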
I'm new to BLE, and bluetooth in general, but I'm on a project that includes communication via BT 5.
As the BLE communication has to transmit anywhere from around 2 bytes to 1 MB at a time, I'm looking for a way to optimize the transmission time.
I know the pros and cons of the lowest transmission rate (125 kbps) and the highest (2 Mbps), and of the DLE of 251 PDU bytes, but from what I see in different forums and articles, the throughput mostly depends on the connection parameters, such as the connection interval and the packets per connection event. But where does the transmission rate come in?
I've tried searching this forum for an answer, and several others, and even the BT core specification, but I haven't been able to find a solution for my problem.
If you read my answer at Why is BLE 4.2 faster than BLE 4.1, you can see that there are many components affecting the overall transfer speed.
You first have the radio transmission rate itself, which sets the upper limit.
You then have the overhead between all packets, which becomes less apparent the longer your packets are.
The connection interval and the length of each connection event can be important if you want the throughput to be high. If there is only one connection and the Bluetooth chip is not too stupid, the connection event will fill the whole connection interval, and therefore the connection interval doesn't really matter. However, if there are other conflicting radio events scheduled in a way that forces the connection event to be closed, the transmission cannot continue until the next connection event. In that case, the throughput will be higher if you lower the connection interval. So as a summary, it highly depends on which Bluetooth stack the chip runs, how it's configured by the host, and how many active connections you have.
The transmission rate sets your underlying bit rate, but on top of that sit the different layers of the BLE protocol, which reduce the realizable throughput. This article has a general derivation of how the different layers impact throughput, in case that's useful!
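As a rough illustration of how those parameters combine, here is a back-of-the-envelope estimate in C (all numbers are assumptions for the example, not from any particular stack):

#include <stdio.h>

/* Back-of-the-envelope BLE throughput: payload carried per connection
 * event divided by the connection interval. All figures are illustrative. */
int main(void) {
    double conn_interval_ms = 15.0;  /* negotiated connection interval */
    int packets_per_event   = 6;     /* PDUs the stack fits in one event */
    int payload_per_packet  = 244;   /* 251-byte DLE PDU minus L2CAP/ATT headers */

    double bytes_per_sec = packets_per_event * payload_per_packet
                           * (1000.0 / conn_interval_ms);
    printf("~%.0f kbps application-level throughput\n",
           bytes_per_sec * 8.0 / 1000.0);
    return 0;
}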
I have an Android app that writes several bytes to a Bluetooth device.
Looking at btsnoop_hci.log I see that, when a large number of bytes is sent to the BLE device, the app uses Prepare Write Request several times and then Execute Write Request: Immediately Write All.
Now my problem is how to perform this from my own application using an RN4870 module.
At this moment I can connect, read services and characteristics, and write using the CHW command as described in the manual when there are few bytes.
But I cannot write in the way the remote BLE device expects when there are a lot of bytes.
Thank you for your support,
Marco
This is the Microchip answer:
Hello,
The core specifications are handled by the firmware.
The user doesn't have access at this level, so there is nothing you can set.
Regarding the long data question:
"Does RN4870 module support the Data Length Extension feature? "
RN4870 rev 1.28 supports DLE, but only partially. The normal packet size in BLE without DLE is 20 bytes.
With the standard DLE feature, the packet size should be 251 bytes.
However, in RN4870 rev 1.28 the packet size is 151 bytes, so it is not a full implementation of DLE.
The DLE (Data Length Extension) feature is embedded in the lower levels of the Bluetooth stack, and there are no specific commands to enable or disable it. Essentially, if the peer device also supports DLE, then DLE will be enabled.
So there are no specific commands you need to issue to increase throughput through DLE.
Regards,
In other words, there is nothing to do!
In an Android application you can't directly set the DLE length; instead you should set the MTU size. The Android Bluetooth stack will calculate the DLE length based on the MTU. The maximum data length supported by the BT protocol is 251 bytes, but it can be anywhere between 27 and 251 depending on the BT controller's hardware capability. During connection setup, the BT device will negotiate with the peer device (if the peer supports DLE) to use the maximum DLE size supported by both devices.
To increase your throughput you can use the maximum supported MTU size of 512. You can also write without response and do error checking on the data using your own logic, such as a parity check or CRC, and retransmit from the application for better throughput.
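A rough sketch of that application-level approach in C, assuming a hypothetical transport function ble_write_no_rsp() and the MTU figures above (the chunk size and CRC placement are assumptions for the example):

#include <stdint.h>
#include <string.h>

#define CHUNK_PAYLOAD (512 - 3 - 2) /* ATT MTU 512 minus ATT header minus our 2-byte CRC */

/* CRC-16-CCITT, used here as the application-level integrity check. */
static uint16_t crc16_ccitt(const uint8_t *p, size_t n) {
    uint16_t crc = 0xFFFF;
    while (n--) {
        crc ^= (uint16_t)(*p++) << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Split data into MTU-sized chunks, append a CRC to each, and hand every
 * chunk to the transport send function. The receiver recomputes the CRC
 * and requests a resend on mismatch. */
int send_chunked(const uint8_t *data, size_t len,
                 int (*ble_write_no_rsp)(const uint8_t *, size_t)) {
    uint8_t frame[CHUNK_PAYLOAD + 2];
    while (len > 0) {
        size_t n = len < CHUNK_PAYLOAD ? len : CHUNK_PAYLOAD;
        memcpy(frame, data, n);
        uint16_t crc = crc16_ccitt(frame, n);
        frame[n]     = (uint8_t)(crc >> 8);
        frame[n + 1] = (uint8_t)(crc & 0xFF);
        if (ble_write_no_rsp(frame, n + 2) != 0)
            return -1; /* let the application retry */
        data += n;
        len  -= n;
    }
    return 0;
}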
I am trying to figure out what the maximum throughput of a Bluetooth 2.1 SPP connection is.
I found 2 publications concerned with the topic (1, 2), and they both show diagrams of the throughput as a function of the signal-to-noise ratio (which I can assume to be perfect for my consideration) and the kind of ACL packet used. My problem is, I have no idea which ACL packets are used. How is this decision made? Is it made on the fly, as in "whatever is needed to transfer the current data is used"?
Furthermore, in the Serial Port Profile specification (chapter 2.3) I found this sentence:
This profile requires support for one-slot packets only. This means that this profile
ensures that data rates up to 128 kbps can be used. Support for higher rates is optional.
The last sentence really confuses me. How do I find out whether this "option" applies in my case?
This means that in SPP mode, all Bluetooth modules should work at up to 128 kbps, and some modules may work even faster.
Under SPP sits RFCOMM, which tries to deliver the packets as quickly as possible. If only one packet is sent per time slot, you get the 128 kbps. The firmware of the Bluetooth module, or the HCI driver, can however do things differently.
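For a rough sense of where the 128 kbps figure comes from: a one-slot DH1 packet carries at most 27 bytes of payload, and together with the return slot used for the acknowledgement each round trip takes 2 × 625 µs = 1.25 ms, so the raw ACL payload rate is 27 × 8 bits / 1.25 ms = 172.8 kbps. L2CAP and RFCOMM headers plus polling overhead bring the usable rate down toward the quoted 128 kbps; the FEC-protected one-slot DM1 packet (17-byte payload) tops out at 17 × 8 / 1.25 ms = 108.8 kbps by the same arithmetic.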
SPP speeds of up to 480 kbps have been reported; however, this requires that both SPP modules are from the same vendor (e.g. BlueGiga iWrap modules can reach this speed).
At the other end, if you're connecting an unknown device, for example a BT112 or an RN41 module, to an Android device, the actual usable SPP bandwidth can be much lower than 128 kbps (I have measurements as low as 10 kbps).
In the case of some old-generation iPhones, the usable SPP bandwidth is around 8 kbps.
It is a wise idea to treat "standards" and "datasheets" very conservatively, and to do your own measurements if actual net data bandwidth is critical.
Even though BT and BT+EDR have theoretical over-the-air bit rates of up to 3 Mbps, the actual usable net data bandwidth is way smaller.
I have some designing to do for a serial protocol and am running into some questions that I figure must have been considered elsewhere.
So I'm wondering if there are some recommendations for best practices in designing serial protocols. (Please either state a fact that is easily verifiable, or cite a reputable source if you make a claim.) General recommendations for websites/books are also welcome.
In particular I have to deal with issues like
parsing a stream of bytes into packets
verifying a packet is correct (easy with a CRC, for instance)
identifying reasonable types of errors that can occur (e.g. in a point-to-point serial stream, sporadic single bit errors, and dropped series of bytes, are both likely, but extra phantom bytes are unlikely; whereas with a record stored in flash memory or on a disk drive the types of errors that predominate are different)
error correction or recovery (if I detect an error in a packet, can I correct it? If not, can I resync to the boundary of the next packet?)
how to make variable-length packets robust for error correction / recovery.
Any suggestions?
Packet delimiting
For syncing to packet boundaries, typically you have a byte or byte sequence that identifies the packet boundary, which cannot occur within the packet itself. If the packet data happens to contain that identifier, then you have to "escape" (aka byte stuff) it.
Examples:
PPP Encapsulation
Consistent Overhead Byte Stuffing (COBS), or maybe COBS/R, which encodes data packets so no zero bytes are present; you can then use a zero byte for packet delimiting (see the sketch below)
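For illustration, a compact COBS encoder in C, as a sketch (the function name and buffer contract are mine, not from the references above):

#include <stddef.h>
#include <stdint.h>

/* COBS-encode `len` bytes from src into dst so the output contains no
 * zero bytes; a zero byte can then delimit packets unambiguously.
 * dst must hold at least len + len/254 + 1 bytes. Returns encoded size. */
size_t cobs_encode(const uint8_t *src, size_t len, uint8_t *dst) {
    size_t read = 0, write = 1, code_pos = 0;
    uint8_t code = 1;
    while (read < len) {
        if (src[read] == 0) {          /* zero found: close current block */
            dst[code_pos] = code;
            code = 1;
            code_pos = write++;
        } else {
            dst[write++] = src[read];
            if (++code == 0xFF) {      /* block at max length: close it */
                dst[code_pos] = code;
                code = 1;
                code_pos = write++;
            }
        }
        read++;
    }
    dst[code_pos] = code;              /* close the final block */
    return write;
}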
Packet verification
Various options are:
Checksum
Adler-32
Fletcher (sketched below)
CRC (the more bits, the better the check)
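For example, Fletcher-16 is nearly as cheap as a plain checksum but, unlike a simple sum, is sensitive to byte order. A sketch:

#include <stddef.h>
#include <stdint.h>

/* Fletcher-16: two running sums make the result order-sensitive, catching
 * swapped bytes that a plain additive checksum would miss. */
uint16_t fletcher16(const uint8_t *data, size_t len) {
    uint16_t sum1 = 0, sum2 = 0;
    for (size_t i = 0; i < len; i++) {
        sum1 = (uint16_t)((sum1 + data[i]) % 255);
        sum2 = (uint16_t)((sum2 + sum1) % 255);
    }
    return (uint16_t)((sum2 << 8) | sum1);
}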
Error correction etc
Good questions. I've not had much experience with that.
Have you considered FEC (Forward Error Correction)?
This technique is very often used in physical-level communication protocols such as WDM (Wavelength Division Multiplexing) / OTN (Optical Transport Network).