Is BLE SPP full duplex?

I'm working on a project that requires BLE Serial Profile.
I have successfully implemented it, but now I'm wondering what happens when the server is sending data to the client and the client wants to send data back (while the server is still sending). Is this handled at a low level with a queue or something similar?
Is there any risk that messages will get lost?
Thanks for any help.

Bluetooth provides the effect of full-duplex transmission through time division duplex (TDD). In principle, transmission and reception do not happen at the same time, so in your case there is no risk of collision (and therefore loss) of data packets.
The central and the peripheral each get a window of 625 us during which they transmit.
For further details you can read the "Timeslots" chapter of the Baseband specification in the Bluetooth Core Specification:
https://www.bluetooth.com/specifications/specs/core-specification-5-3/
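
As a toy illustration of that slot structure (a simulation only, not real radio code; the slot length is the 625 us value mentioned above), the following Python snippet alternates slots between the two sides. Each direction queues its data at any time but transmits only in its own slot, so nothing collides:

from collections import deque

SLOT_US = 625  # one transmit window, as described above

# each side queues outgoing packets whenever it likes...
queues = {
    "central": deque(["C1", "C2", "C3"]),
    "peripheral": deque(["P1", "P2"]),
}

t = 0
turn = "central"  # ...but only transmits during its own slot
while any(queues.values()):
    if queues[turn]:
        print(f"t={t:5d}us {turn} sends {queues[turn].popleft()}")
    t += SLOT_US
    turn = "peripheral" if turn == "central" else "central"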

Related

Reverse engineering Bluetooth LE - device sends weird responses back

I recently acquired a Segway Ninebot ES2 electric scooter. I can connect to the scooter via Bluetooth LE and grab information such as battery status, current mileage, temperature, and so on. This is all done through an application.
On my Android device, I've successfully extracted the HCI log file, which I imported into Wireshark. I can see all the requests and commands sent back and forth between my phone and the scooter. However, the requests and responses all look like garbage and I have no idea how to interpret them.
Example of a sent command (info says Sent Write Command, Handle: 0x000e (Nordic UART Service: Nordic UART Tx))
Example of the received value I got right after (info says Rcvd Handle Value Notification, Handle: 0x000b (Nordic UART Service: Nordic UART Rx))
How am I supposed to interpret these responses? If the battery status was 59%, I would expect it to return something like 0x3b (0x3b hex is 59 decimal). But honestly, I have no idea how this works. Maybe they're returning a bunch of data in a data type only their app knows how to interpret? Like JSON for web.
Here's an example from the nRF Connect for Mobile application, where I hit the down arrow on all the characteristics: https://i.imgur.com/hREDomP.jpg (large image)
And probably more important: How do I replicate a request or command in nRF Connect? I've tried sending a byte array that looks like 0x {02410011000d.....} (from the Write Command) in the application, but I have no idea how to read the response.
If someone is still interested, I did the same research for this scooter.
That's standard BLE communication: the device offers BLE "services" and "characteristics". A service can contain one or more characteristics, through which you communicate with the device. Each characteristic can allow different types of interaction: writing to it, reading from it, subscribing to notifications (so you don't have to read it manually; the device pushes data to your app), and more (read here, for example).
Take a look at your Wireshark screenshot: you can see the Service UUID, the Handle UUID (the characteristic), and the handle ID. You can communicate with the device via UUID or ID, depending on your programming language or library (more about UUIDs).
In this particular scooter there are two characteristics: one allows writing to it, the other allows subscribing to it. Together they act like the RX and TX wires in a UART: you write data into one and read from the other. So, to begin communicating with the scooter you must establish a connection, subscribe to notifications from one characteristic, and write data to the other.
As for the protocol: look again at the screenshots. "UART Tx" is the actual payload that was sent to the scooter and "UART Rx" was the response. Yes, it's binary data that only the app understands. Luckily, the protocol has been reverse engineered and is well documented. In your example the app requests the serial number, and it's returned in the response: "N2GWX...". In order to request the battery percentage you must build another payload according to the protocol.
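To make that concrete, here is a minimal sketch using Python and the bleak library (pip install bleak). The device address is a placeholder, the UUIDs are the commonly published Nordic UART Service values, and the request bytes are just the truncated command from the question, not a complete frame:

import asyncio
from bleak import BleakClient

ADDRESS = "AA:BB:CC:DD:EE:FF"  # placeholder, use your scooter's address
# standard Nordic UART Service characteristic UUIDs
WRITE_CHAR = "6e400002-b5a3-f393-e0a9-e50e24dcca9e"   # write requests here
NOTIFY_CHAR = "6e400003-b5a3-f393-e0a9-e50e24dcca9e"  # responses arrive here

def on_notify(sender, data: bytearray):
    print("response:", data.hex())

async def main():
    async with BleakClient(ADDRESS) as client:
        # subscribe first, then write the request
        await client.start_notify(NOTIFY_CHAR, on_notify)
        # incomplete example bytes from the question; a real frame must be
        # built according to the reverse-engineered protocol
        request = bytes.fromhex("02410011000d")
        await client.write_gatt_char(WRITE_CHAR, request, response=False)
        await asyncio.sleep(2.0)  # give the scooter time to answer

asyncio.run(main())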
I'm not sure if it's still relevant, but this is for anyone still interested in the topic.
You can try the following to understand how to interpret responses from the device.
One option is to fetch the manufacturer's mobile app (APK), either via adb or from sites like apkmirror, etc.
Then apply a reverse-engineering tool like JADX.
If you're lucky and the code is somewhat readable, search for something that deals with responses (like a ResponseParser) and try to find the algorithm used to interpret them.
However, the very first attempt should always be to search GitHub/Google for whether somebody has already done it for your device, unless it's very niche.
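As a rough Python sketch of that workflow (the decompiled/ directory and the keywords are assumptions, not Ninebot specifics; run jadx -d decompiled app.apk first):

import os

# guessed keywords for response-parsing code; adjust for your device
KEYWORDS = ("responseparser", "parseresponse", "checksum", "battery")

for root, _dirs, files in os.walk("decompiled"):
    for name in files:
        if not name.endswith(".java"):
            continue
        path = os.path.join(root, name)
        with open(path, encoding="utf-8", errors="ignore") as f:
            for lineno, line in enumerate(f, 1):
                if any(k in line.lower() for k in KEYWORDS):
                    print(f"{path}:{lineno}: {line.strip()}")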

Changing the connection interval in a Bluez-based GATT-oriented application

We are currently working on an application on Linux (among others, a Raspberry Pi running the latest Debian Jessie) that connects to a BLE device (developed by us). This tool has evolved from cherry-picking files from the BlueZ (5.46) stack and adding an application layer on top. This all works quite nicely, except that connecting is incredibly slow. From the output of our tool, I understand that a truckload of messages needs to be exchanged to communicate the GATT services and characteristics, and each of those costs one connection interval of time. Since it is a low-power device, we want the connection interval to be relatively long, hence the high delay.
When connecting with Android BLE Scanner, I see (on the device side) that BLE Scanner lowers the connection interval to a small value, gets all the requested data, and then sets the connection interval back to its original value. Note, by the way, that neither BLE Scanner nor our BlueZ-derived application takes the preferred connection parameters into account.
Now I want our application to do the same: set the connection interval to 8 ms, get all the info about characteristics and services, and set the connection interval back. In the BlueZ stack I even found a nice function for this in the HCI layer: hci_le_conn_update.
But now the challenge: the rest of the application is built on top of the GATT functionality, and even though the BLE specification defines a hierarchy between those two (with some layers in between), in code they seem absolutely independent of each other.
There are two parameters to the hci_le_conn_update function that are HCI-specific: 'dd' (a file descriptor to the device) and 'handle' (a value that identifies the connection). hcitool tells me that when I create a connection, the first handle is 64, so I tried that value. For 'dd' I used hci_dev_open to get a file descriptor for the device. This worked. Sort of.
As I said before, the min/max values are not entirely taken into account. When I set them to 6/10, I get 11, and when I set them to 6/50, I get 60. This is a bit too nondeterministic for my taste; I would prefer a function that directly changes the connection interval instead of taking a range that is mostly ignored anyway. Also, the fact that I have to use the hardcoded magic number 64 gives me a bad itch. I can control the connection interval on the embedded device's side, but I want the control on the side of the client application.
The goal is to update the connection interval in a Bluez-GATT-based application. Within certain limits, I do not mind that much how I get there. Any suggestions?
In the official D-Bus API there is no method to change the connection parameters (see https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc/gatt-api.txt and https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc/device-api.txt). The key is therefore to send the Connection Parameter Update Request from the peripheral side. You can of course experiment with sending a raw HCI command, but that is a bit "hacky" and there is no guarantee that it won't confuse the BlueZ daemon.
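For what it's worth, that raw experiment can be sketched from Python with ctypes against libbluetooth, under the same caveats: the adapter name and the magic handle 64 come from the question, the call usually needs root, intervals are in units of 1.25 ms, and the supervision timeout is in units of 10 ms.

import ctypes

bt = ctypes.CDLL("libbluetooth.so.3")

dev_id = bt.hci_devid(b"hci0")   # adapter assumed to be hci0
dd = bt.hci_open_dev(dev_id)     # file descriptor to the HCI device
if dd < 0:
    raise OSError("cannot open HCI device (root needed?)")

handle = 64          # first connection handle, as observed with hcitool
min_iv = max_iv = 6  # 6 * 1.25 ms = 7.5 ms, the closest step to 8 ms
latency = 0          # no skipped connection events
sup_timeout = 200    # 200 * 10 ms = 2 s supervision timeout
to_ms = 1000         # how long to wait for the command to complete

ret = bt.hci_le_conn_update(dd, handle, min_iv, max_iv, latency, sup_timeout, to_ms)
print("hci_le_conn_update returned", ret)
bt.hci_close_dev(dd)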
If you would like to discuss BlueZ features such as a connection parameter update request API, you should do that on the BlueZ mailing list (http://www.bluez.org/contact/) rather than here.

How to handle DISABLE_NOTIFICATION_VALUE?

If a Bluetooth server has a notifying characteristic and the remote client writes the value DISABLE_NOTIFICATION_VALUE to the descriptor of this characteristic, how must the server handle this descriptor write request?
Must the server software refrain from calling GattServer.notifyCharacteristicChanged(), or will the Bluetooth stack prevent notifications from being sent to the client after GattServer.notifyCharacteristicChanged() has been called by the server software?
The server should only send a notification over the air if the corresponding notification bit in the descriptor is 1. Whether the application software should do this check or the Bluetooth stack does it for you is implementation-specific. I guess most Bluetooth stacks do not do it for you.
As for Android, I can't find clear documentation on whether this is done, so you'd better do the check yourself.
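A stack-agnostic Python sketch of that bookkeeping (the class and callback names are illustrative, not part of any particular stack's API; the two descriptor values are the ones the Bluetooth spec defines):

ENABLE_NOTIFICATION_VALUE = b"\x01\x00"   # CCCD: notifications on
DISABLE_NOTIFICATION_VALUE = b"\x00\x00"  # CCCD: notifications off

class NotifyGate:
    """Tracks per-client CCCD writes and suppresses notifications."""
    def __init__(self):
        self.subscribed = set()

    def on_descriptor_write(self, client, value):
        # call this from the stack's descriptor-write callback
        if value == ENABLE_NOTIFICATION_VALUE:
            self.subscribed.add(client)
        elif value == DISABLE_NOTIFICATION_VALUE:
            self.subscribed.discard(client)

    def notify(self, send, value):
        # `send` wraps the stack's notify call,
        # e.g. GattServer.notifyCharacteristicChanged() on Android
        for client in self.subscribed:
            send(client, value)

gate = NotifyGate()
gate.on_descriptor_write("AA:BB:CC:DD:EE:FF", ENABLE_NOTIFICATION_VALUE)
gate.on_descriptor_write("AA:BB:CC:DD:EE:FF", DISABLE_NOTIFICATION_VALUE)
gate.notify(lambda c, v: print("notify", c, v), b"\x3b")  # prints nothing now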

Beacon Payload Analysis

I am analyzing the traffic that beacons generate, using tshark and iptraf. I know they are mainly used to determine the proximity of a device, and like any other network device, the traffic they generate must contain header and payload information.
The header info can be identified (where the packet is being sent, and so on), but what is the best way to find out the payload information? How can we classify the payload of a beacon signal and the information it contains? Is it the same as ordinary web traffic sent and received on a network, or is it different since beacons use Bluetooth?
Any pointers regarding this would be useful.
Bluetooth LE beacon transmissions are much simpler than the HTTP protocol. They are transmit-only and have no real headers, although the transmissions contain short segments, called PDUs, that serve a similar purpose.
To see an example of a beacon transmission, see my answer here:
What is the iBeacon Bluetooth Profile
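As an illustration of those segments, here is a Python sketch that walks the length-prefixed AD structures of a made-up iBeacon advertising payload (the UUID is a widely cited example value, not a capture):

import struct

# made-up payload: flags AD structure + Apple manufacturer-specific AD (iBeacon)
sample = bytes.fromhex(
    "020106"                            # flags AD: len=2, type=0x01, value=0x06
    "1aff4c000215"                      # len=26, type=0xFF, Apple 0x004C, iBeacon 0x02 0x15
    "e2c56db5dffb48d2b060d0f5a71096e0"  # 16-byte proximity UUID (example value)
    "0001"                              # major = 1
    "0002"                              # minor = 2
    "c5"                                # measured power = -59 dBm
)

i = 0
while i < len(sample):
    length = sample[i]                  # each AD structure is length-prefixed
    ad_type = sample[i + 1]
    data = sample[i + 2 : i + 1 + length]
    if ad_type == 0xFF and data[:4] == b"\x4c\x00\x02\x15":
        uuid = data[4:20].hex()
        major, minor = struct.unpack(">HH", data[20:24])
        (power,) = struct.unpack("b", data[24:25])
        print(f"iBeacon uuid={uuid} major={major} minor={minor} power={power} dBm")
    i += 1 + length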

How to send data over a UDP socket as it is being written?

I have connected two Linux machines over WLAN using netcat in a server-client design, and I am now able to send and receive messages between them. On the server I create a UDP socket:
$ nc -u -l 3333
and on the client side I connect using the destination IP and port number:
$ nc -u 192.168.178.160 3333
This gives a bidirectional connection between server and client. One can't really tell, but I guess it is fairly real-time.
Now I want to extend this and try to establish a real-time speech connection between the two sides. Recording via microphone is feasible with arecord commands, which write the speech data to a .wav file. Transmitting the .wav file is only possible after it has been fully recorded, but that is of no use, since what is desired is real-time communication. Of course, the received speech has to be played back instantly on the other end.
Does anyone have any idea how to make this real-time?
Fidelity means a large buffer count to preserve sound continuity despite network latency and latency variation; low sound delay, approximating real time, means a small buffer count to reduce overall latency. You cannot have both.
IME, you need to keep at most ~250 ms of sound buffered at both ends to maintain an illusion of 'real-time' speech. This queue of buffers needs to be emptied at the fixed rate necessary to reproduce the speech and kept topped up by the network protocol as necessary. If the network cannot keep buffer pools of that size topped up, the buffer pool has to be made larger, the queue longer, and the perceived real-time performance will suffer.
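A minimal Python sketch of that idea, assuming Linux with ALSA's arecord/aplay on both ends (the 20 ms chunk size and the ~250 ms prebuffer are the illustrative numbers from above; error handling is omitted):

import socket
import subprocess
import sys

PORT = 3333
CHUNK = 320          # 20 ms of 8 kHz 16-bit mono PCM
PREBUFFER = 12       # ~240 ms queued before playback starts

ALSA_FMT = ["-f", "S16_LE", "-r", "8000", "-c", "1", "-t", "raw"]

if sys.argv[1] == "send":                       # python audio.py send <dest-ip>
    rec = subprocess.Popen(["arecord", *ALSA_FMT], stdout=subprocess.PIPE)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        data = rec.stdout.read(CHUNK)           # one datagram per 20 ms chunk
        if not data:
            break
        sock.sendto(data, (sys.argv[2], PORT))
else:                                           # python audio.py recv
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))
    play = subprocess.Popen(["aplay", *ALSA_FMT], stdin=subprocess.PIPE)
    buffered = []
    while len(buffered) < PREBUFFER:            # fill the queue before playing
        data, _ = sock.recvfrom(4096)
        buffered.append(data)
    play.stdin.write(b"".join(buffered))
    while True:                                 # then keep the queue fed
        data, _ = sock.recvfrom(4096)
        play.stdin.write(data)
        play.stdin.flush()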
The TCP/UDP issue is a red herring on most network connections.
Just be thankful that you are not streaming video:)
