Decoding Bluetooth HCI log for Cycling Trainer

I'm trying to figure out how to control the trainer. I've captured these Bluetooth HCI logs, but I'm unsure whether there's a method to "unlock" the trainer before sending the command.
This is the sequence I see the manufacturer's app send to the trainer.
I don't quite understand this particular output. It looks to me like this characteristic UUID has the Indicate property. Is the app actually sending anything to the trainer? I don't see a value being written. (Is it correct to assume that every write carries a hex value?)
This is sending a 100 W target power to the trainer (0x0064, transmitted least-significant byte first as 64 00).
This is sending a 300 W target power to the trainer (0x012C, transmitted least-significant byte first as 2C 01).
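
For what it's worth, that byte order is easy to reproduce. Below is a minimal sketch assuming the trainer follows the standard FTMS Fitness Machine Control Point layout, where opcode 0x05 means Set Target Power followed by a little-endian signed 16-bit watt value; the opcode is an assumption from the FTMS spec, not something these logs confirm:

import struct

def set_target_power(watts):
    # Assumed FTMS layout: opcode 0x05 (Set Target Power), then sint16 watts,
    # least-significant byte first.
    return bytes([0x05]) + struct.pack("<h", watts)

print(set_target_power(100).hex())  # 056400 -> ends with the 64 00 seen for 100 W
print(set_target_power(300).hex())  # 052c01 -> ends with the 2c 01 seen for 300 W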
Here are the two btsnoop logs from two different apps, if anyone is willing to look further:
https://www.dropbox.com/s/86wox4ywz3sjr2v/btsnoop_hci%28TrainerRoad%29.log?dl=0
https://www.dropbox.com/s/pgs6j4fg6opjrij/btsnoop_hci1%28Saris%29.log?dl=0

Related

Slow BLE response times after second BLE device is connected

[Screen grab from Wireshark showing traffic when the problem occurs]
Short question: referring to the Wireshark image, what is causing the master to send LL_CHANNEL_MAP_IND, and why is it taking so long?
We are working on a hardware/software project that uses the TI WL18xx as a Bluetooth controller. One of the main functions of this product is to communicate with our sensor hardware over a Bluetooth Low Energy connection. We have encountered an issue that we have had difficulty pinpointing, but which may reside in the TI WL18xx hardware/firmware: intermittently, after a second Bluetooth Low Energy device is connected, the response times for writes and notifications on one of the characteristics of the first connected device become very long.
Details
The host device runs our own embedded Linux image on a TI AM4376x processor. The kernel is 4.14.79 and our communication stack sits on top of BlueZ 5. The Wi-Fi/Bluetooth chip is the Jorjin WG7831-BO, running TIInit_11.8.32.bts firmware version 4.5; it is based on the TI WL1831. The sensor devices we connect to are our own, and we use a custom command protocol that performs command handshakes over two characteristics. These devices work very well on a number of other platforms, including Mac, Windows, Linux, and Chrome.
The workflow that causes problems is this:
A user space application allows the user to discover and connect to our sensor devices over BLE, one device at a time.
The initial connection requires a flurry of command/response type communication over the aforementioned BLE characteristics.
Once connected, the traffic is reduced significantly to occasional notifications of new measurements, and occasional command/response exchanges triggered by the user.
A single device always seems stable and performant.
When the user connects to a second device, the initial connection proceeds as expected.
However, once the second device's connection process completes, we start to see command/response round-trip times become hundreds of times longer on the initially connected device.
Communication with the second device continues at the expected speed.
This problem occurs only on the first device, and only about 30% of the times we follow this workflow.
Traces
Here is a short snippet of the problem, taken from a trace log that mixes our library debug output with btmon traces.
Everything seems fine until line 4102, at which point we see the following:
ACL Data TX: Handle 1025 flags 0x00 dlen 22 #1081 [hci0] 00:12:48.654867
ATT: Write Command (0x52) len 17
Handle: 0x0014
Data: 580fd8c71bff00204e000000000000
D2PIO_SDK: GMBLNGIBlobSource.cpp(1532) : Blob cmd sent: 1bh to GDX-FOR 07100117; length = 15; rolling counter = 216; timestamp = 258104ms .
HCI Event: Number of Completed Packets (0x13) plen 5 #1082 [hci0] 00:12:49.387892
Num handles: 1
Handle: 1025
Count: 1
ACL Data RX: Handle 1025 flags 0x02 dlen 23 #1083 [hci0] 00:12:51.801225
ATT: Handle Value Notification (0x1b) len 18
Handle: 0x0016
Data: 9810272f1bd8ff00204e000000000000
D2PIO_SDK: GMBLNGIBlobSource.cpp(1745) : GetNextResponse(GDX-FOR 07100117) returns 1bh cmd blob after 3139=(261263-258124) milliseconds.
The elapsed time reported by GetNextResponse() for most commands should be under 30 milliseconds. Response times were short when we opened device A and sent it a batch of commands, and they remained short when we opened device B and sent it a batch of commands. But on the first subsequent command sent to device A, the response time is more than 3 seconds!
Note that a Bluetooth radio can only do one thing at a time: receive or transmit, on one single frequency. If you have two connections and two connection events fall at the same time, the firmware must decide which one to prioritize and which one to skip. Maybe the firmware isn't smart enough to handle your specific case. Try other connection parameters to see if anything improves. You can also try a Bluetooth dongle from a different manufacturer.
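
On a BlueZ host you can experiment with the default connection interval through debugfs before connecting. A minimal sketch, assuming debugfs is mounted, the adapter is hci0, and you have root; the values are illustrative and only affect connections made after the change:

from pathlib import Path

hci = Path("/sys/kernel/debug/bluetooth/hci0")

# Units of 1.25 ms: request a 30-50 ms window so the controller has room
# to interleave the connection events of two links.
(hci / "conn_min_interval").write_text("24")  # 24 * 1.25 ms = 30 ms
(hci / "conn_max_interval").write_text("40")  # 40 * 1.25 ms = 50 ms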

BLE: Clarifying the Read and Indicate Operations

I'm writing code for Pycom's LoPy4 board and have created a BLE service for environmental sensing, which currently has one characteristic: temperature. I take the temperature as a float and attempt to update the characteristic value every two seconds.
When I use a BLE scanner app and try to read, I get the value "temperature10862", which is the characteristic's name and UUID. Yet when I press the indicate button, the value shows the correct temperature string, updating automatically every two seconds.
I'm a bit confused overall. Is this a problem with my code on the Pycom device, or am I simply misunderstanding what a BLE read is supposed to be? The temperature values are obviously being updated on the device, so why does the client (the app) only show these values with an indication rather than a read?
I'm sorry for any vagueness in the question; any help or guidance would be appreciated.
[Screenshot: read attempt]
[Screenshot: indicate attempt]
Returning "temperature10862" as a read response is obviously incorrect. Sending the temperature as a string is in this case also incorrect, since you use the Bluetooth SIG-defined Temperature characteristic (https://www.bluetooth.com/xml-viewer/?src=https://www.bluetooth.com/wp-content/uploads/Sitecore-Media-Library/Gatt/Xml/Characteristics/org.bluetooth.characteristic.temperature.xml). According to that, the value should be a signed 16-bit integer in units of 0.01 degrees Celsius.
If you look at the Environmental Sensing service definition (https://www.bluetooth.com/xml-viewer/?src=https://www.bluetooth.com/wp-content/uploads/Sitecore-Media-Library/Gatt/Xml/Services/org.bluetooth.service.environmental_sensing.xml), you will see that it is mandatory to support Read and optional to support Notifications. Indications, however, are not permitted, so you should change your indicate property to notify instead.
The value sent should be the same regardless of whether it is sent as a notification or as a read response.
Be sure you read the Environmental Sensing specs and follow the rest of the GATT service structure.
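
As a minimal sketch of the required encoding (plain Python rather than LoPy4-specific code; the function name is just for illustration):

import struct

def encode_temperature(celsius):
    # SIG Temperature characteristic (0x2A6E): signed 16-bit integer,
    # resolution 0.01 degrees Celsius, little-endian on the wire.
    return struct.pack("<h", round(celsius * 100))

print(encode_temperature(23.45))  # b'\x29\x09', i.e. 2345 -> 23.45 degrees C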

Decode weight in data packets from Wireshark with Bluetooth Low Energy

I've been doing some reverse engineering on different BLE-based devices, and I have a weight scale where I can't find a pattern to decode or interpret the weight value that the Android app reports. I was also able to get the services and characteristics of this device, but did not find a SIG standard match for the UUIDs on the Bluetooth site.
I'm using an nRF51 dongle to sniff the data packets between the Android app and the weight scale, and I can see the BLE traffic, but there are several events during the communication whose relationship I can't work out, and I haven't been able to convert the values into a readable weight in kg or pounds.
My target value is 71.3 kg, as read from the weight scale app.
Let me show you what I get from the BLE sniffer.
First I see that the master sends notification/indication requests to handles 0x0009 (notify), 0x000c (indicate) and 0x000f (notify), one for each characteristic of a single service.
Then I start to see notification/indication values mixed with write commands. At the end of the communication, I see some packets that I suspect carry the weight and BMI data.
These are packets number 574, 672 and 674 in the image.
That gives us the following candidates:
1st. packet_number_574 = 000002c9070002
2nd. packet_number_672 = 420001000000005ed12059007f02c9011d01f101
3rd. packet_number_674 = 42018c016b00070237057d001a01bc001d007c
The 1st packet looks more like a counter/real-time clock than an actual measurement, judging by how the data behaves during the exchange, so I felt it was not a real option.
The 2nd and 3rd look like more realistic candidates. I split them up and converted them to decimal values but found no relation, even when combining bytes in case the values are floating point. So my real question is: am I missing something that you might see in this information? Do you know of a relation between these data packets and my target value?
Thank you for taking the time to read this; any help would be appreciated!
EDIT:
I have a Python script that lets me check the services and their characteristic hierarchy, along with some useful data such as properties, handles and descriptors.
Service 'fff0' (0000fff0-0000-1000-8000-00805f9b34fb):
Characteristic 'fff1' (0000fff1-0000-1000-8000-00805f9b34fb):
Handle: 8 (8)
Readable: False
Properties: WRITE NOTIFY
Descriptor: Descriptor <Client Characteristic Configuration> (handle 0x9; uuid 00002902-0000-1000-8000-00805f9b34fb)
Characteristic 'fff2' (0000fff2-0000-1000-8000-00805f9b34fb):
Handle: 11 (b)
Readable: False
Properties: WRITE NO RESPONSE INDICATE
Descriptor: Descriptor <Client Characteristic Configuration> (handle 0xc; uuid 00002902-0000-1000-8000-00805f9b34fb)
Characteristic 'fff3' (0000fff3-0000-1000-8000-00805f9b34fb):
Handle: 14 (e)
Readable: False
Properties: NOTIFY
Descriptor: Descriptor <Client Characteristic Configuration> (handle 0xf; uuid 00002902-0000-1000-8000-00805f9b34fb)
These are the characteristics related to the notifications and indications that I see in Wireshark. I now think packet number 574 (whose characteristic has only the NOTIFY property) is more important than I first thought.
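
(For readers following along: the notification/indication requests at the start of the capture are just writes to these Client Characteristic Configuration descriptors at handles 0x9, 0xc and 0xf. The enable values are fixed by the GATT spec:)

import struct

# Values a client writes to a Client Characteristic Configuration
# descriptor (UUID 0x2902): a little-endian uint16 bitfield.
ENABLE_NOTIFY   = struct.pack("<H", 0x0001)  # b'\x01\x00'
ENABLE_INDICATE = struct.pack("<H", 0x0002)  # b'\x02\x00'
DISABLE_ALL     = struct.pack("<H", 0x0000)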
I solved it myself.
This post gave me the idea to take the values (2 bytes) and multiply them by 0.1; that way I could get the weight.
In the bytes I could look for my target value without the decimal point, 713, which in hex is 0x02c9.
If we look at packet number 574:
000002c9070002, split as 00:00:02:c9:07:00:02,
I could see that the bytes at offsets 2 and 3 match this pattern!
The only thing left to do is group these bytes and multiply the decimal value: 713 x 0.1 = 71.3. I ran different tests and found that this pattern is constant, so I am confident in this solution. I hope this helps someone in the future.
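
In code, the pattern described above could look like this (a sketch under the stated assumptions: big-endian uint16 at byte offsets 2-3, in units of 0.1 kg):

def decode_weight(payload):
    # Weight is a big-endian uint16 at byte offsets 2-3, in units of 0.1 kg.
    return int.from_bytes(payload[2:4], "big") * 0.1

print(decode_weight(bytes.fromhex("000002c9070002")))  # 0x02c9 = 713 -> 71.3 kg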

How to monitor XBee GPIO data through X-CTU console?

I'm using a PIR sensor for motion detection and XBee S2C modules for transmission. The remote (transmitting) XBee, connected to the PIR, is configured as below:
CE=0
DH=0
DL=0
D4=3
IR=3E8 (1000 ms)
IC=FF (Change Detection on all pins)
The base (receiving) XBee is configured as below:
CE=1
DH=0
DL=FFFF
D4=5
At the base, GPIO4 is connected to an LED. I performed the simple test mentioned here to check whether the GPIO is working, and it works as described with the DH and DL values given above. As D4 is configured to 5, the LED glows all the time. Theoretically, whenever the PIR sends high, the LED should be off, and vice versa. But I am having two problems:
The console of the remote XBee is not showing any frames being sent, but the console of the base XBee is showing the received frames, and they carry the correct PIR data.
Pin D4 of the base is not following the data being received; it stays high all the time.
I have observed the frames being received and they show the PIR response as intended. Why is pin D4 not following the frames being received? I have followed this tutorial for XBee DIO line passing.
Make sure you're running the 802.15.4 firmware (ATVR = 0x20XX) or DigiMesh firmware (0x90XX) and not the ZigBee firmware (0x40XX). Looking at the options in XCTU, I don't think the ZigBee firmware supports I/O line passing.
And looking at that knowledge base article, you might need to set ATIT on the remote, and ATT4 and ATIA on the base. If those registers aren't available, then you're probably running a firmware version that doesn't support I/O line passing.
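
If the registers are available, they can be set from any serial terminal, or scripted. A sketch using pyserial; the port name, baud rate and register values are assumptions, so check the 802.15.4 product manual for what IA and T4 mean on your firmware:

import serial, time

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
    time.sleep(1.1)
    port.write(b"+++")           # enter AT command mode (guard times matter)
    time.sleep(1.1)
    for cmd in (b"ATIA FFFF\r",  # assumed: accept I/O samples from any address
                b"ATT4 FF\r",    # assumed: D4 output timeout, units of 100 ms
                b"ATWR\r",       # save settings to flash
                b"ATCN\r"):      # leave command mode
        port.write(cmd)
        print(cmd, port.read(3)) # expect b'OK\r'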

Linux (uClinux 2.4.x)

I have a question.
I'm using uClinux 2.4.x, in which I have my own adapter code for reading frames from the ingress port.
I added a debug log at the exact place where frames are read word by word, and I'm certain that I receive every frame transmitted by the peer
(at Layer 2, I am receiving all frames).
From there I invoke netif_rx to pass all received frames to the upper layers (the network and transport layers).
Problem: I observe that some packets are dropped at the UDP (transport) layer.
How I confirmed it: I added packet counters at Layer 1 and at Layer 3 (the UDP layer), and the two counts are not equal.
That means that even though all frames arrive from Layer 1, somewhere on the way up to UDP a drop is happening.
Could anyone suggest where exactly the problem could be, and how to check whether memory is full or skb allocation is failing for further packets at the UDP layer?
Any suggestion would be of great help and support.
Please let me know if you need more information.
BR
Karn
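
One place to start is the kernel's own UDP counters in /proc/net/snmp, which exist on 2.4 kernels (the exact columns vary by version, so treat the field names here as assumptions to verify):

def udp_counters(path="/proc/net/snmp"):
    # /proc/net/snmp carries two "Udp:" lines: a header row and a value row.
    with open(path) as f:
        rows = [line.split()[1:] for line in f if line.startswith("Udp:")]
    return dict(zip(rows[0], map(int, rows[1])))

stats = udp_counters()
# InErrors counts datagrams dropped before delivery, e.g. when a socket's
# receive buffer is full and no more skbs can be queued for it.
print(stats.get("InDatagrams"), stats.get("InErrors"))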
