Slow BLE response times after second BLE device is connected - bluetooth

Screen grab from Wireshark showing the traffic when the problem occurs.
Short question: referring to the Wireshark capture, what is causing the Master to send LL_CHANNEL_MAP_IND, and why is it taking so long?
We are working on a hardware/software project that uses the TI WL18xx as a Bluetooth controller. One of the main functions of this product is to communicate with our sensor hardware over a Bluetooth Low Energy connection. We have encountered an issue that we have had difficulty pinpointing, but which we feel may reside in the TI WL18xx hardware/firmware. Intermittently, after a second Bluetooth Low Energy device is connected, the response times for writes and notifications on one of the characteristics of one of the connected devices become very long.
Details
The host product device is running our own embedded Linux image on a TI AM4376x processor. The kernel is 4.14.79 and our communication stack sits on top of BlueZ 5. The Wi-Fi/Bluetooth chip is the Jorjin WG7831-BO, running TIInit_11.8.32.bts firmware version 4.5. It is based on the TI WL1831. The sensor devices that we connect to are our own, and we use a custom command protocol that uses two characteristics to perform command handshakes. These devices work very well on a number of other platforms, including Mac, Windows, Linux, Chrome, etc.
The workflow that is causing problems is this:
A user space application allows the user to discover and connect to our sensor devices over BLE, one device at a time.
The initial connection requires a flurry of command/response type communication over the aforementioned BLE characteristics.
Once connected, the traffic is reduced significantly to occasional notifications of new measurements, and occasional command/response exchanges triggered by the user.
A single device always seems stable and performant.
When the user connects to a second device, the initial connection proceeds as expected.
However, once the second device's connection process completes, command/response times on the initially connected device become hundreds of times longer.
The second device communication continues at expected speeds.
This problem only affects the first device, and it occurs in roughly 30% of the runs through this workflow.
Traces
Here is a short snippet of the problem, taken from a trace log that mixes our library's debug output with btmon traces.
Everything seems fine until line 4102, at which point we see the following:
ACL Data TX: Handle 1025 flags 0x00 dlen 22 #1081 [hci0] 00:12:48.654867
ATT: Write Command (0x52) len 17
Handle: 0x0014
Data: 580fd8c71bff00204e000000000000
D2PIO_SDK: GMBLNGIBlobSource.cpp(1532) : Blob cmd sent: 1bh to GDX-FOR 07100117; length = 15; rolling counter = 216; timestamp = 258104ms .
HCI Event: Number of Completed Packets (0x13) plen 5 #1082 [hci0] 00:12:49.387892
Num handles: 1
Handle: 1025
Count: 1
ACL Data RX: Handle 1025 flags 0x02 dlen 23 #1083 [hci0] 00:12:51.801225
ATT: Handle Value Notification (0x1b) len 18
Handle: 0x0016
Data: 9810272f1bd8ff00204e000000000000
D2PIO_SDK: GMBLNGIBlobSource.cpp(1745) : GetNextResponse(GDX-FOR 07100117) returns 1bh cmd blob after 3139=(261263-258124) milliseconds.
The elapsed time reported by GetNextResponse() for most cmds should be < 30 milliseconds. Response times were short when we opened and sent a bunch of cmds to device A.
The response times remained short when we opened and sent a bunch of cmds to device B. But on the first subsequent cmd sent to device A, the response time is more than 3 seconds!

Note that a Bluetooth radio can only do one thing at a time: receive or transmit, on one single frequency. If you have two connections and two connection events fall at the same time, the firmware must decide which one to prioritize and which one to skip. Maybe the firmware isn't smart enough to handle your specific case. Try other connection parameters to see if anything improves. You can also try another Bluetooth dongle from a different manufacturer.
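If you want to experiment with the connection parameters from the Linux host, one option is to request a connection parameter update over HCI. Below is a minimal sketch using BlueZ's hci_lib.h; the connection handle (0x0401, i.e. 1025 as seen in the btmon trace) and the interval/latency/timeout values are examples only and must be adapted. The kernel also exposes default interval settings for new connections under /sys/kernel/debug/bluetooth/hci0/ (conn_min_interval, conn_max_interval), which may be the easier knob to turn.

/* Sketch: ask the controller to update the BLE connection parameters.
 * Build with: gcc conn_update.c -o conn_update -lbluetooth (needs root/CAP_NET_ADMIN).
 * All values below are examples; pick intervals that suit your traffic. */
#include <stdio.h>
#include <stdint.h>
#include <bluetooth/bluetooth.h>
#include <bluetooth/hci.h>
#include <bluetooth/hci_lib.h>

int main(void)
{
    int dev_id = hci_get_route(NULL);     /* first available adapter (hci0) */
    int dd = hci_open_dev(dev_id);
    if (dd < 0) {
        perror("hci_open_dev");
        return 1;
    }

    uint16_t handle  = 0x0401;  /* ACL handle 1025, as in the btmon trace */
    uint16_t min_int = 24;      /* 24 * 1.25 ms = 30 ms */
    uint16_t max_int = 40;      /* 40 * 1.25 ms = 50 ms */
    uint16_t latency = 0;       /* no slave latency */
    uint16_t timeout = 400;     /* 400 * 10 ms = 4 s supervision timeout */

    if (hci_le_conn_update(dd, htobs(handle), htobs(min_int), htobs(max_int),
                           htobs(latency), htobs(timeout), 5000) < 0)
        perror("hci_le_conn_update");

    hci_close_dev(dd);
    return 0;
}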

Related

Gigabit Ethernet takes ~370ms to report link down in MII register

To give you some context, I am trying to configure some network redundancy on Ubuntu 20.04.
I decided to use NIC bonding with:
2 Gigabit interfaces,
active-backup mode,
link state monitoring with MII (polling every 10ms).
It works as expected: when the active slave fails (wire unplugged), the bonding switches to the other slave. However, this transition is slower than I would like, as it takes up to 400 ms.
In order to investigate, I had a look at the MII registers using this command: mii-tool -vv ens33 (-vv prints the raw MII register contents).
Using a basic bash script, I print the current time plus the value of the second MII register (the BMSR), which contains the link state. As this is a "while true" loop with no pause, I get a status roughly every 20 ms, which gives me enough reactivity for monitoring. (A C version of the same polling is sketched below.)
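As an aside, the same polling can be done without spawning mii-tool in a loop: a small C program using the SIOCGMIIPHY/SIOCGMIIREG ioctls (which is what mii-tool uses internally) can timestamp every change of the BMSR link-status bit. This is only a sketch; the interface name is hard-coded as an example and the MII ioctls typically require root.

/* Poll the MII Basic Status Register (BMSR) and timestamp link changes.
 * Mirrors the bash loop around mii-tool, but with ~1 ms resolution. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <net/if.h>
#include <linux/mii.h>
#include <linux/sockios.h>

int main(void)
{
    const char *ifname = "ens33";               /* interface under test */
    int sk = socket(AF_INET, SOCK_DGRAM, 0);
    struct ifreq ifr;
    struct mii_ioctl_data *mii = (struct mii_ioctl_data *)&ifr.ifr_data;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    if (ioctl(sk, SIOCGMIIPHY, &ifr) < 0) {     /* fills in mii->phy_id */
        perror("SIOCGMIIPHY");
        return 1;
    }

    int last = -1;
    for (;;) {
        mii->reg_num = MII_BMSR;                /* register 1: basic status */
        ioctl(sk, SIOCGMIIREG, &ifr);
        int link = !!(mii->val_out & BMSR_LSTATUS);
        if (link != last) {
            struct timeval tv;
            gettimeofday(&tv, NULL);
            printf("%ld.%06ld link %s\n", (long)tv.tv_sec, (long)tv.tv_usec,
                   link ? "up" : "down");
            last = link;
        }
        usleep(1000);                           /* ~1 ms polling period */
    }
}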
Observed behavior:
when the interfaces are configured at 100 Mbit/s, Full Duplex => the MII register is updated almost immediately (no "visible" delay between unplugging the wire and the register value changing on screen).
however, when configured at 1 Gbit/s, Full Duplex (via auto-negotiation) => the update of the MII register takes much longer (it is "visible" with the bash script). By sending pings every 10 ms over the bond interface and logging packets with Wireshark on the receiver side, the observed delay is around 370 ms.
The Ethernet controller is an Intel I210, and I have not found anything relevant in the datasheet.
My questions:
Do you see any reason why detecting link down takes so much longer at 1 Gbit/s than at 100 Mbit/s (370 ms vs. 50 ms)?
Any advice to improve the reactivity?
Thanks for your help :)
Every generation of Ethernet speed has its own link-detection method and requirements.
10BaseT: "link pulses" are sent every 16 ±8 ms.
100BaseT: "link pulses" are not used; instead, the idle ("/I/") 4B/5B code words are monitored.
1000BaseT: link failure is determined by the maxwait_timer, which IEEE 802.3 defines as 750 ±10 ms for Master ports and 350 ±10 ms for Slave ports.
As you can see, your measured 370 ms is exactly what you would expect from a Slave port.
Sources:
https://www.microchip.com/forums/FindPost/417307
https://ww1.microchip.com/downloads/en/Appnotes/VPPD-03240.pdf

How to monitor XBee GPIO data through X-CTU console?

I'm using a PIR sensor for motion detection and XBee S2C modules for transmission. The remote (transmitting) XBee, connected to the PIR, is configured as below:
CE=0
DH=0
DL=0
D4=3
IR=3E8 (500ms)
IC=FF (Change Detection on all pins)
The base (receiving) XBee is configured as below:
CE=1
DH=0
DL=FFFF
D4=5
At the base, GPIO4 is connected to an LED. I have performed a simple test as mentioned here to check whether the GPIO is working or not; it works as described with the DH & DL values given above. As D4 is configured to 5, the LED glows all the time. In theory, whenever the PIR sends high, the LED should be off, and vice versa. But I am having two problems:
The console of the remote XBee does not show any frames being sent, but the console of the base XBee does show the received frames, and they contain the correct PIR data.
Pin D4 of the base does not follow the data being received, i.e., it stays high all the time.
I have observed the frames being received, and they show the PIR response as intended. Why is pin D4 not following the frames being received? I have followed this tutorial on XBee DIO line passing.
Make sure you're running the 802.15.4 (ATVR=0x20XX) or DigiMesh firmware (0x90XX) and not the ZigBee firmware (0x40XX). Looking at the options in XCTU, I don't think ZigBee firmware supports I/O line passing.
And looking at that knowledge base article, you might need to set ATIT on the remote and ATT4 and ATIA on the base. If those registers aren't available, then you're probably running a firmware version that doesn't support I/O line passing.
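If the modules are running API firmware (AP=1 or 2), the remote-side registers mentioned above can also be set over the air with a Remote AT Command Request frame (type 0x17). The sketch below only illustrates the frame layout and checksum; the 64-bit destination address, the choice of ATIT and the parameter value are placeholders to adapt to your setup.

/* Build an XBee API "Remote AT Command Request" frame (type 0x17).
 * Example: set ATIT=1 on a remote node. The address is a placeholder. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static size_t build_remote_at(uint8_t *out, const uint8_t dest64[8],
                              const char cmd[2], uint8_t param)
{
    uint8_t data[32];
    size_t n = 0;

    data[n++] = 0x17;                    /* frame type: Remote AT Command */
    data[n++] = 0x01;                    /* frame ID (non-zero => response) */
    memcpy(&data[n], dest64, 8); n += 8; /* 64-bit destination address */
    data[n++] = 0xFF; data[n++] = 0xFE;  /* 16-bit address unknown (0xFFFE) */
    data[n++] = 0x02;                    /* option: apply changes immediately */
    data[n++] = cmd[0]; data[n++] = cmd[1];
    data[n++] = param;                   /* parameter value */

    size_t m = 0;
    out[m++] = 0x7E;                     /* start delimiter */
    out[m++] = (uint8_t)(n >> 8);        /* length MSB */
    out[m++] = (uint8_t)(n & 0xFF);      /* length LSB */
    memcpy(&out[m], data, n); m += n;

    uint8_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += data[i];
    out[m++] = (uint8_t)(0xFF - sum);    /* checksum over the frame data only */
    return m;
}

int main(void)
{
    /* Placeholder 64-bit address -- use the remote module's serial number. */
    const uint8_t dest[8] = { 0x00, 0x13, 0xA2, 0x00, 0x00, 0x00, 0x00, 0x00 };
    uint8_t frame[40];
    size_t len = build_remote_at(frame, dest, "IT", 0x01);

    for (size_t i = 0; i < len; i++)
        printf("%02X ", frame[i]);
    printf("\n");
    return 0;
}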

Transmitting odd number of bits serially

I'm implementing the LIN protocol on a Linux SBC that transmits over a UART. I don't have time to develop a complete LIN stack, so I'm just implementing the frame structure for messages as defined by the protocol. The problem is that the protocol requires a "Break" field, which makes the slave devices on the bus listen. This field consists of zeros for 13 bit-times. Any ideas how to send zeros for 13 bit-times over a UART, when serial data transmission works in complete bytes?
Per Wiki:
LIN (Local Interconnect Network) is a serial network protocol used for
communication between components in vehicles. The need for a cheap
serial network arose as the technologies and the facilities
implemented in the car grew, while the CAN bus was too expensive to
implement for every component in the car. European car manufacturers
started using different serial communication topologies, which led to
compatibility problems.
If you had paid attention in class, you would have known that:
Data is transferred across the bus in fixed form messages of
selectable lengths. The master task transmits a header that consists
of a break signal followed by synchronization and identifier fields.
The slaves respond with a data frame that consists of between 2, 4 and
8 data bytes plus 3 bytes of control information.
Sending a single 0x00 byte at the normal baud rate is not enough: a UART frame only gives you 9 dominant bit-times (the start bit plus 8 data bits) before the stop bit pulls the line high again, and the break field needs at least 13.
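One common workaround is the baud-rate trick: drop the UART to a lower baud rate, transmit a single 0x00 byte, then restore the nominal rate. For example, at 9600 baud the 9 dominant bit-times of that byte last as long as 18 bit-times at 19200 baud, comfortably more than the required 13. Below is a rough termios sketch under those assumptions (19200-baud LIN bus, /dev/ttyS1 as the UART); some UART drivers can also generate a break directly via ioctl(fd, TIOCSBRK)/ioctl(fd, TIOCCBRK), but the achievable duration is driver-dependent.

/* Send a LIN break (>= 13 dominant bit-times) from a Linux UART by temporarily
 * lowering the baud rate, writing 0x00, then restoring the nominal rate.
 * Device path and baud rates are examples; adjust for your bus. */
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

static void set_baud(int fd, speed_t speed)
{
    struct termios tio;
    tcgetattr(fd, &tio);
    cfsetispeed(&tio, speed);
    cfsetospeed(&tio, speed);
    tcsetattr(fd, TCSADRAIN, &tio);   /* wait for pending output first */
}

static void lin_send_break(int fd)
{
    const unsigned char zero = 0x00;

    set_baud(fd, B9600);              /* 9 bit-times here = 18 at 19200 baud */
    write(fd, &zero, 1);
    tcdrain(fd);                      /* make sure the break byte left the wire */
    set_baud(fd, B19200);             /* back to the nominal LIN rate */
}

int main(void)
{
    int fd = open("/dev/ttyS1", O_RDWR | O_NOCTTY);  /* example UART */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    lin_send_break(fd);
    /* ...then write the sync byte (0x55), the protected identifier, data, checksum... */

    close(fd);
    return 0;
}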

BLE: Max number of packets in Connection interval

Is there a limit on the maximum number of packets (LE_DATA) that can be sent by either the slave or the master during one connection interval?
If this limit exists, are there any specific conditions for this limit (e.g. only x number of ATT data packets)?
Are master/slave required or allowed to impose such a limit by specification?
(I hope I'm not reviving a dead post, but I think section 4.5.1 is better suited to answer this than 4.5.6.)
The spec doesn't define a limit on the number of packets. It just states the following:
4.5.1 Connection Events - BLUETOOTH SPECIFICATION Version 4.2 [Vol 6, Part B]
(...)
The start of a connection event is called an anchor point. At the anchor point, a master shall start to transmit a Data Channel PDU to the slave. The start of connection events are spaced regularly with an interval of connInterval and shall not overlap. The master shall ensure that a connection event closes at least T_IFS before the anchor point of the next connection event. The slave listens for the packet sent by its master at the anchor point.
T_IFS is the "Inter Frame Space" time and shall be 150 μs. Simply put, it is the master's job to solve this problem. As far as I know, iOS for instance limits it to 4 packets per connection event. Android may have other hard-coded limits depending on the OS version.
There is a maximum data rate that can be achieved on both BT and BLE. You can tweak this data rate by changing the MTU (maximum transmission unit, i.e. packet size) up to the maximum MTU both ends of the transmission can handle. But AFAIK there is no direct constraint on the number of packets, besides the physical ones imposed by the data rate.
You can find more in the spec.
I could find the following in Bluetooth Spec v4.2:
4.5.6 Closing Connection Events
The MD bit of the Header of the Data Channel PDU is used to indicate
that the device has more data to send. If neither device has set the
MD bit in their packets, the packet from the slave closes the
connection event. If either or both of the devices have set the MD
bit, the master may continue the connection event by sending another
packet, and the slave should listen after sending its packet. If a
packet is not received from the slave by the master, the master will
close the connection event. If a packet is not received from the
master by the slave, the slave will close the connection event.
Two consecutive packets received with an invalid CRC match within a
connection event shall close the event.
This means both slave and master have a self-imposed limit on the number of packets they want to transmit during a connection interval (CI). When either party doesn't wish to send more data, it simply sets this bit to 0 and the other one understands. This should usually be driven by the number of pending packets on either side.
Since I was looking for logical limits imposed by the spec or protocol, this probably answers my question.
Physical limits on the number of packets per CI would be governed by the data rate and, as @morynicz mentioned, by the MTU, etc.
From my understanding, the limit is: min{max master event length, max slave event length, connection interval}.
To clarify, both the master and slave devices (specifically, the BLE stack thereof) typically have event length or "GAP event length" times. This time limit may be used to allow a central and/or advertiser and/or broadcaster to schedule the "phase offset" of more than one BLE radio activity, and/or limit the CPU usage of the BLE stack for application processing needs. E.g. a Nordic SoftDevice stack may have a default event length of 3.75ms that is indefinitely extendable (up to the connection interval) based on other demands on the SoftDevice's scheduler. In Android and iOS BLE implementations, this value may be opaque or not specified (e.g. Android may set this value to "0", which leaves the decision up to the controller implementation associated with the BLE chip of that device).
Note also that either the master or the slave may effectively "drop out" of a connection event earlier than these times if their TX/RX buffers are filled (e.g. Nordic SoftDevice stack may have a buffer size of 6 packets/frames). This may be implemented by not setting the MD bit (if TX buffer is exhausted) and/or "nacking" with the NESN bit (if RX buffer is full). However, while the master device can actually "drop out" by ending the connection event (not sending any more packets), the slave device must listen for master packets as long as at least one of master and slave have the MD bit set and the master continues to transmit packets (e.g. the slave could keep telling the master that it has no more data and also keep NACKing master packets because it has no more buffer space for the current connection event, but the master may keep trying to send for as long as it wants during the connection interval; not sure how/if the controller stack implements any "smarts" regarding this).
If there are no limits from either device in terms of stack-specified event length or buffer size, then presumably packets could be transmitted back and forth the entire connection interval (assuming at least one side had data to send and therefore set the MD bit). Just note for throughput calculation purposes that there is a T_IFS spacing (currently specified at 150us) between each packet and before the end of the connection interval.
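To put rough numbers on that, consider the 1M PHY without data length extension or encryption: a full data packet is 1 (preamble) + 4 (access address) + 2 (header) + 27 (payload) + 3 (CRC) = 37 octets, i.e. 296 µs on air, and an empty acknowledgement is 10 octets, i.e. 80 µs. The sketch below estimates how many such packet pairs fit into a connection event given an event-length budget; all the figures are illustrative assumptions, not spec-mandated limits.

/* Rough estimate of how many notification-sized packets fit in one BLE
 * connection event on the 1M PHY (no data length extension, no encryption).
 * All numbers are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    const double us_per_octet = 8.0;           /* 1 Mbit/s PHY */
    const double t_ifs_us     = 150.0;         /* inter frame space */

    double data_pkt_us  = (1 + 4 + 2 + 27 + 3) * us_per_octet;  /* 296 us */
    double empty_pkt_us = (1 + 4 + 2 +  0 + 3) * us_per_octet;  /*  80 us */
    double pair_us = data_pkt_us + t_ifs_us + empty_pkt_us + t_ifs_us;

    double conn_interval_us = 30000.0;         /* 30 ms connection interval */
    double event_length_us  = 3750.0;          /* e.g. a 3.75 ms event budget */

    double budget = event_length_us < conn_interval_us ? event_length_us
                                                       : conn_interval_us;
    int pairs = (int)(budget / pair_us);       /* master TX + slave TX per pair */

    printf("packet pairs per event: %d (~%d payload bytes)\n",
           pairs, pairs * 27);
    return 0;
}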

Making an IOCTL for transmission of packets using "usb_bulk_msg" to the USB endpoint

I have an application that sends 8000 audio packets per second. Initially, for experimentation, I am preparing a buffer of 8 audio packets and then making an ioctl call to pass the buffer to my driver.
I am using a USB analyser, which shows that the inter-packet gap (IPG) is usually around 20-40 µs. That is fine. But sometimes the IPG is 200-300 µs. Is the USB subsystem (USB core/HCD) responsible, or the implementation of the ioctl? Making around 1000 ioctl and copy_from_user() calls per second may be the culprit behind the late submission of packets. Moreover, I am using USB 3.0, which supports a 5 Gbps data rate.
The code flow in my driver looks like this:
switch (cmd)
{
case SEND_AUDIO:
    /* buffer, pipe and size names below are illustrative placeholders */
    if (copy_from_user(drv->buf, (void __user *)arg, 8 * AUDIO_PKT_SIZE))
        return -EFAULT;
    for (i = 0; i < 8; i++) {   /* note: no stray ';' after the for */
        usb_bulk_msg(drv->udev, usb_sndbulkpipe(drv->udev, drv->out_ep),
                     drv->buf + i * AUDIO_PKT_SIZE, AUDIO_PKT_SIZE,
                     &actual_len, 1000 /* ms timeout */);
    }
    break;
}
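For reference, the user-space side of such a call might look like the sketch below; the device node, ioctl number and packet size are hypothetical placeholders for whatever the driver actually defines.

/* Hypothetical user-space caller: pack 8 audio packets into one buffer and
 * hand them to the driver with a single ioctl. All names/sizes are examples. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>
#include <unistd.h>

#define AUDIO_PKT_SIZE   192                   /* example packet size */
#define PKTS_PER_IOCTL   8
#define SEND_AUDIO       _IOW('a', 1, char *)  /* placeholder ioctl number */

int main(void)
{
    int fd = open("/dev/myaudio0", O_RDWR);    /* placeholder device node */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[PKTS_PER_IOCTL * AUDIO_PKT_SIZE];
    memset(buf, 0, sizeof(buf));               /* fill with real audio data */

    /* 8000 packets/s in batches of 8 -> one ioctl roughly every millisecond. */
    if (ioctl(fd, SEND_AUDIO, buf) < 0)
        perror("ioctl SEND_AUDIO");

    close(fd);
    return 0;
}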
