I'm currently researching Bluetooth Low Energy. Can anyone show me how to get the Connection Interval (min) and Connection Event (max) values on Windows? (just for BLE)
Thanks so much!
I want to make sure the latency between my app and the Bluetooth headphones is accounted for, but I have absolutely no idea how to get this value. The closest thing I found was:
BluetoothLEPreferredConnectionParameters.ConnectionLatency, which is only available on Windows 11... Otherwise there isn't much to go on.
Any help would be appreciated.
Thanks,
Peter
It's very difficult to get the exact latency because it is affected by many parameters, but you're on the right track in guessing that the connection parameters are part of the equation. I don't have much knowledge of UWP, but I can give you the general parameters that affect speed/latency, and you can then check their availability in the API or even contact the Windows technical team to see if these are supported.
When you make a connection with a remote device, the following factors impact the speed/latency of the connection:-
Connection Interval: this specifies the interval at which the packets are sent during a connection. The lower the value, the higher the speed. The minimum value as per the Bluetooth spec is 7.5ms.
Slave Latency: this is the value you originally mentioned - it specifies the number of connection events the slave is allowed to skip when it has nothing to send (it is the supervision timeout, not this value, that determines when a connection is considered lost). A value of 0 means the slave listens at every connection event, giving you the fastest, most responsive connection.
Connection PHY: this is the modulation on which the packets are sent. If both devices support 2MPHY, then the connection should be quicker.
Data Length/MTU Extension: these are two separate features, but I am lumping them together because the effect is the same - more bytes are sent per packet, which results in higher throughput. The maximum value is 251 bytes per packet.
You can find more information about these parameters here:-
A Practical Guide to BLE Throughput
Maximizing BLE Throughput: Everything You Need to Know
Bluetooth 5 Speed - How to Achieve Maximum Throughput
And below are some other links that might help you understand what is supported on UWP:-
Bluetooth Developer FAQ
BluetoothLEConnectionParameters.OptimizedProperty
Bluetooth LE Preferred Connection Parameter Class
Bluetooth LE Connection PHY class
How to Change MTU Size and PHY on Windows UWP C++
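If you can rely on the Windows 11 additions to Windows.Devices.Bluetooth, a minimal C++/WinRT sketch like the one below should let you read the parameters of an already-established connection. Treat it as a sketch only: I haven't tested it on UWP, and the GetConnectionParameters() call and unit conversions are based on my reading of the documentation, not on verified code.

    // Sketch only: assumes Windows 11 (SDK 22000 or later) and C++/WinRT,
    // where BluetoothLEDevice::GetConnectionParameters() is documented.
    #include <winrt/Windows.Devices.Bluetooth.h>
    #include <iostream>

    using namespace winrt;
    using namespace Windows::Devices::Bluetooth;

    void PrintConnectionParameters(BluetoothLEDevice const& device)
    {
        // The spec encodes the interval in 1.25 ms units and the
        // supervision (link) timeout in 10 ms units, so convert for display.
        auto params = device.GetConnectionParameters();
        std::wcout << L"Connection interval: "
                   << params.ConnectionInterval() * 1.25 << L" ms\n"
                   << L"Slave latency:       "
                   << params.ConnectionLatency() << L" connection events\n"
                   << L"Supervision timeout: "
                   << params.LinkTimeout() * 10 << L" ms\n";
    }

On older Windows versions there is, as far as I know, no public API that exposes these values, which is why the links above mostly point at the newer classes.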
Screen grab from Wireshark showing traffic when the problem occurs
Short question: referring to the Wireshark image, what is causing the master to send LL_CHANNEL_MAP_IND, and why is it taking so long?
We are working on a hardware/software project that uses the TI WL18xx as a Bluetooth controller. One of the main functions of this product is to communicate with our sensor hardware over a Bluetooth Low Energy connection. We have encountered an issue that we have had difficulty pinpointing, but feel may reside in the TI WL18xx hardware/firmware. Intermittently, after a second Bluetooth Low Energy device is connected, the response times for writes and notifications on one of the characteristics of one of the connected devices become very long.
Details
The host product device is running our own embedded Linux image on a TI AM4376x processor. The kernel is 4.14.79 and our communication stack sits on top of BlueZ 5. The Wi-Fi/Bluetooth chip is the Jorjin WG7831-BO, running TIInit_11.8.32.bts firmware version 4.5; it is based on the TI WL1831. The sensor devices that we connect to are our own, and we use a custom command protocol which uses two characteristics to perform command handshakes. These devices work very well on a number of other platforms, including Mac, Windows, Linux, Chrome, etc.
The workflow that is causing problems is this:
A user space application allows the user to discover and connect to our sensor devices over BLE, one device at a time.
The initial connection requires a flurry of command/response type communication over the aforementioned BLE characteristics.
Once connected, the traffic is reduced significantly to occasional notifications of new measurements, and occasional command/response exchanges triggered by the user.
A single device always seems stable and performant.
When the user connects to a second device, the initial connection proceeds as expected.
However, once the second device's connection process completes, we start to see the command/response round-trip times on the initially connected device become hundreds of times longer.
The second device communication continues at expected speeds.
This problem only occurs with the first device about 30% of the time we follow this workflow.
Traces
Here is a short snippet of the problem, taken from a trace log that mixes our library's debug output with btmon traces.
Everything seems fine until line 4102, at which point we see the following:
ACL Data TX: Handle 1025 flags 0x00 dlen 22 #1081 [hci0] 00:12:48.654867
ATT: Write Command (0x52) len 17
Handle: 0x0014
Data: 580fd8c71bff00204e000000000000
D2PIO_SDK: GMBLNGIBlobSource.cpp(1532) : Blob cmd sent: 1bh to GDX-FOR 07100117; length = 15; rolling counter = 216; timestamp = 258104ms .
HCI Event: Number of Completed Packets (0x13) plen 5 #1082 [hci0] 00:12:49.387892
Num handles: 1
Handle: 1025
Count: 1
ACL Data RX: Handle 1025 flags 0x02 dlen 23 #1083 [hci0] 00:12:51.801225
ATT: Handle Value Notification (0x1b) len 18
Handle: 0x0016
Data: 9810272f1bd8ff00204e000000000000
D2PIO_SDK: GMBLNGIBlobSource.cpp(1745) : GetNextResponse(GDX-FOR 07100117) returns 1bh cmd blob after 3139=(261263-258124) milliseconds.
The elapsed time reported by GetNextResponse() for most cmds should be < 30 milliseconds. Response times were short when we opened and sent a bunch of cmds to device A.
The response times remained short when we opened and sent a bunch of cmds to device B. But on the first subsequent cmd sent to device A, the response time is more than 3 seconds!
Note that a Bluetooth radio can only do one thing at a time: receive or transmit, on one single frequency. If you have two connections and two connection events happen at the same time, the firmware must decide which one to prioritize and which one to skip. Maybe the firmware isn't smart enough to handle your specific case. Try other connection parameters to see if something gets better. You can also try another Bluetooth dongle from a different manufacturer.
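On the BlueZ side, one cheap way to experiment with the host-requested connection interval is via the kernel's Bluetooth debugfs entries. This is a sketch under the assumption that your kernel exposes them under /sys/kernel/debug/bluetooth/hci0/ and that you run it as root before initiating the connections:

    // Sketch, assuming a kernel that exposes the Bluetooth debugfs entries
    // under /sys/kernel/debug/bluetooth/hci0/ (requires root and a mounted
    // debugfs). Values are in 1.25 ms units, so 12..24 means 15..30 ms.
    // The new defaults apply to connections initiated after the write.
    #include <fstream>

    static void set_default_conn_interval(int min_units, int max_units)
    {
        std::ofstream("/sys/kernel/debug/bluetooth/hci0/conn_min_interval")
            << min_units;
        std::ofstream("/sys/kernel/debug/bluetooth/hci0/conn_max_interval")
            << max_units;
    }

    int main()
    {
        // Illustrative values only: request a shorter interval so the two
        // connections' events have more room to interleave.
        set_default_conn_interval(12, 24);
        return 0;
    }

Connection parameters can also be renegotiated after setup (connection parameter update), but the debugfs route is the quickest way to see whether different intervals make the two-connection case behave better.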
I'm new to BLE, and Bluetooth in general, but I'm on a project that includes communication via BT 5.
As the BLE link has to transmit anywhere from around 2 bytes to 1 MB at a time, I'm looking for a way to optimize the transmission time.
I know the pros and cons of the lowest transmission rate (125 kbps) and the highest (2 Mbps), and of DLE with 251 PDU bytes, but from what I see in different forums and articles, the throughput mostly depends on the connection parameters, i.e. the connection interval and the packets per connection event. But where does the transmission rate come in?
I've tried searching this forum for an answer, and several others, and even the BT core specification, but I haven't been able to find a solution for my problem.
If you read my answer at Why is BLE 4.2 faster than BLE 4.1, you can see that there are many components affecting the overall transfer speed.
You first have the radio transmission rate itself, which sets the upper limit.
You then have the overhead between all packets, which becomes less apparent the longer your packets are.
The connection interval and the length of each connection event can be important if you want the throughput to be high. If there is only one connection and the Bluetooth chip is not too stupid, the connection events will fill the connection interval and therefore the connection interval doesn't really matter. However, if there are other conflicting radio events scheduled in a way that forces the connection event to be closed, the transmission cannot continue until the next connection event. In that case, the throughput will be higher if you lower the connection interval. So as a summary, it highly depends on which Bluetooth stack the chip runs, how it's configured by the host, and how many active connections you have.
The transmission rate sets your underlying bitrate, but on top of that sit different layers of the BLE protocol which reduce the realizable throughput. This article has a general derivation of how the different layers impact throughput, in case that's useful!
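As a back-of-the-envelope illustration of that layering: application throughput is roughly the payload bytes per packet times the packets per connection event, divided by the connection interval. The interval, packets-per-event and payload values below are illustrative assumptions you would replace with whatever your stack actually negotiates.

    // Rough throughput estimate; values are assumptions, not measurements.
    #include <cstdio>

    int main()
    {
        const double conn_interval_ms  = 30.0; // negotiated connection interval
        const int    packets_per_event = 6;    // stack/buffer dependent
        const int    att_payload_bytes = 244;  // 251-byte DLE payload - 4 (L2CAP) - 3 (ATT)

        const double bytes_per_second =
            packets_per_event * att_payload_bytes / (conn_interval_ms / 1000.0);
        std::printf("~%.0f kB/s (~%.0f kbit/s)\n",
                    bytes_per_second / 1000.0, bytes_per_second * 8.0 / 1000.0);
        return 0;
    }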
I've been looking for answers on the net for 2 days, and it seems like I can't find my answer, so I'm finally posting it here hoping I've just missed something.
I'm designing a BLE slave device to log humidity in a room twice a day. This device has to run for at least 2 years before being recharged.
What is the BLE logic to ensure long battery life?
1) Is a long advertisement/connection interval enough?
2) Do I need to implement an RTC with an interrupt to wake up my device and start advertising to get connected?
3) Do I have to use advertising packets only, and include my data in them?
I think I'm just missing something about Bluetooth Low Energy, and it's getting in the way of creating a BLE device.
Thank you very much for your help, and have a good day!
You could calculate power consumption at https://devzone.nordicsemi.com/power/ for Nordic's chips. If the device is neither advertising nor in an active connection (i.e. it's just sleeping), it consumes almost no power at all and will definitely run for 2 years even on a CR2016 battery. So if possible, if you have for example a button that can start the advertising only when needed, that would be good.
Otherwise, if you want it to be always available, you must advertise. How long the advertisement interval should be depends on how much connection setup latency you can accept. If you have a BLE scanner that scans 100% of the time, the advertisement interval will be equal to the connection setup time. If you have a low-power BLE scanner that only scans, for example, 10% of the time, you'll have to multiply your advertisement interval by 10 to get the expected setup time. It all comes down to simple math :)
I'd suggest creating a connection rather than just putting the data in the advertisement packets, since then you can get an acknowledgement that the data has arrived.
Note that if you have a connection interval of 4 seconds and have a stable constant connection you can get several years battery time on a coin cell battery.
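To put numbers on the duty-cycle point above (the interval and duty cycle here are just illustrative assumptions, not recommendations):

    // Expected connection setup time ~= advertising interval / scanner duty
    // cycle, per the reasoning above. Both values are illustrative.
    #include <cstdio>

    int main()
    {
        const double adv_interval_ms    = 1000.0; // advertise once per second
        const double scanner_duty_cycle = 0.10;   // scanner listens 10% of the time

        std::printf("Expected setup time: ~%.1f s\n",
                    (adv_interval_ms / scanner_duty_cycle) / 1000.0);
        return 0;
    }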
Is there a limit on the maximum number of packets (LE_DATA) that can be sent by either slave or master during one connection interval?
If this limit exists, are there any specific conditions for this limit (e.g. only x number of ATT data packets)?
Are master/slave required or allowed to impose such a limit by specification?
(I hope I'm not reviving a dead post, but I think section 4.5.1 is better suited to answer this than 4.5.6.)
The spec doesn't define a limit of packets. It just states the following:
4.5.1 Connection Events - BLUETOOTH SPECIFICATION Version 4.2 [Vol 6, Part B]
(...)
The start of a connection event is called an anchor point. At the anchor point, a master shall start to transmit a Data Channel PDU to the slave. The start of connection events are spaced regularly with an interval of connInterval and shall not overlap. The master shall ensure that a connection event closes at least T_IFS before the anchor point of the next connection event. The slave listens for the packet sent by its master at the anchor point.
T_IFS is the "Inter Frame Space" time and shall be 150 μs. Simply put, it's the master's job to solve this problem. As far as I know, iOS limits the packet number to 4 per connection event, for instance. Android may have other hard-coded limits depending on the OS version.
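To get a feel for why such limits exist, here is a rough calculation for the un-extended case at 1M PHY. The packet sizes follow from the link-layer framing (8 µs per byte on air), T_IFS is the 150 µs from the quote above, and everything else is illustrative arithmetic:

    // Back-of-the-envelope for 27-byte LL payloads at 1M PHY.
    #include <cstdio>

    int main()
    {
        const double data_packet_us = (1 + 4 + 2 + 27 + 3) * 8.0; // 296 us on air
        const double empty_ack_us   = (1 + 4 + 2 + 0 + 3) * 8.0;  // 80 us on air
        const double t_ifs_us       = 150.0;
        const double interval_us    = 7500.0; // minimum connection interval

        // One master data packet + T_IFS + empty slave ACK + T_IFS:
        const double exchange_us = data_packet_us + t_ifs_us + empty_ack_us + t_ifs_us;

        // The event must close at least T_IFS before the next anchor point.
        const int max_exchanges =
            static_cast<int>((interval_us - t_ifs_us) / exchange_us);
        std::printf("~%d data packets fit in a 7.5 ms connection event\n",
                    max_exchanges);
        return 0;
    }

So even without any stack-imposed cap, only around ten such exchanges fit into the minimum connection interval.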
There is a maximum data rate that can be achieved on both BT and BLE. You can tweak the effective data rate by changing the MTU (maximum transmission unit - packet size) up to the maximum MTU both ends of the transmission can handle. But AFAIK there is no hard constraint on the number of packets, besides the physical ones imposed by the data rate.
You can find more in the spec.
I could find the following in Bluetooth Spec v4.2:
4.5.6 Closing Connection Events
The MD bit of the Header of the Data Channel PDU is used to indicate
that the device has more data to send. If neither device has set the
MD bit in their packets, the packet from the slave closes the
connection event. If either or both of the devices have set the MD
bit, the master may continue the connection event by sending another
packet, and the slave should listen after sending its packet. If a
packet is not received from the slave by the master, the master will
close the connection event. If a packet is not received from the
master by the slave, the slave will close the connection event.
Two consecutive packets received with an invalid CRC match within a
connection event shall close the event.
This means both the slave and the master have a self-imposed limit on the number of packets they want to transmit during a CI. When either party doesn't wish to send more data, it just sets this bit to 0 and the other one understands. This should usually be driven by the number of pending packets on either side.
Since I was looking for logical limits due to spec or protocol, this probably answers my question.
Physical limits on the number of packets per CI would be governed by the data rate and, as #morynicz mentioned, by the MTU, etc.
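To make the quoted rule concrete, here is a tiny sketch of the close decision. All the names are mine (hypothetical), and it models only the MD bit, missing packets and the two-CRC-error case from section 4.5.6:

    // Models only the closing rule quoted above; buffer handling is left out.
    struct Packet {
        bool received; // was a packet received from the peer at all?
        bool md;       // "More Data" bit in the Data Channel PDU header
    };

    // Returns true if the connection event may continue after this exchange.
    bool event_may_continue(const Packet& from_master, const Packet& from_slave,
                            int consecutive_crc_errors)
    {
        if (!from_master.received || !from_slave.received)
            return false;                // a missing packet closes the event
        if (consecutive_crc_errors >= 2)
            return false;                // two bad CRCs close the event
        return from_master.md || from_slave.md; // closes if neither side set MD
    }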
From my understanding, the limit is: min{max master event length, max slave event length, connection interval}.
To clarify, both the master and slave devices (specifically, the BLE stack thereof) typically have event length or "GAP event length" times. This time limit may be used to allow a central and/or advertiser and/or broadcaster to schedule the "phase offset" of more than one BLE radio activity, and/or limit the CPU usage of the BLE stack for application processing needs. E.g. a Nordic SoftDevice stack may have a default event length of 3.75ms that is indefinitely extendable (up to the connection interval) based on other demands on the SoftDevice's scheduler. In Android and iOS BLE implementations, this value may be opaque or not specified (e.g. Android may set this value to "0", which leaves the decision up to the controller implementation associated with the BLE chip of that device).
Note also that either the master or the slave may effectively "drop out" of a connection event earlier than these times if their TX/RX buffers are filled (e.g. Nordic SoftDevice stack may have a buffer size of 6 packets/frames). This may be implemented by not setting the MD bit (if TX buffer is exhausted) and/or "nacking" with the NESN bit (if RX buffer is full). However, while the master device can actually "drop out" by ending the connection event (not sending any more packets), the slave device must listen for master packets as long as at least one of master and slave have the MD bit set and the master continues to transmit packets (e.g. the slave could keep telling the master that it has no more data and also keep NACKing master packets because it has no more buffer space for the current connection event, but the master may keep trying to send for as long as it wants during the connection interval; not sure how/if the controller stack implements any "smarts" regarding this).
If there are no limits from either device in terms of stack-specified event length or buffer size, then presumably packets could be transmitted back and forth the entire connection interval (assuming at least one side had data to send and therefore set the MD bit). Just note for throughput calculation purposes that there is a T_IFS spacing (currently specified at 150us) between each packet and before the end of the connection interval.
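Putting the pieces above together, a rough way to estimate the practical per-interval packet budget looks like the following; every concrete number is a placeholder assumption standing in for whatever your two stacks actually report:

    // Rough per-interval packet budget: min of the two event lengths and the
    // connection interval, divided by per-packet-pair airtime, then capped
    // by the TX buffer depth.
    #include <algorithm>
    #include <cstdio>

    int main()
    {
        const double master_event_len_us = 3750.0;  // e.g. a stack's default event length
        const double slave_event_len_us  = 7500.0;
        const double conn_interval_us    = 30000.0;
        const int    tx_buffer_packets   = 6;       // e.g. a 6-frame TX queue
        const double packet_pair_us      = 676.0;   // data + ACK + 2 * T_IFS at 1M PHY

        const double usable_us = std::min({master_event_len_us,
                                           slave_event_len_us,
                                           conn_interval_us});
        const int airtime_limit = static_cast<int>(usable_us / packet_pair_us);
        const int budget        = std::min(airtime_limit, tx_buffer_packets);
        std::printf("~%d packets per connection interval\n", budget);
        return 0;
    }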