Changing the connection interval in a BlueZ-based GATT-oriented application - Linux

We are currently working on an application on Linux (among others, a RasPi running the latest Debian Jessie) that connects to a BLE device (developed by us). This tool has evolved from cherry-picking files from the BlueZ (5.46) stack and adding an application layer on top. This all works quite nicely, except for the fact that connecting is incredibly slow. From the output of our tool, I understand that a truckload of messages needs to be exchanged to discover the GATT services and characteristics, and each of those costs one connection interval of time. Since it is a low-power device, we want the connection interval to be relatively long, hence the high delay.

When connecting with the Android app BLE Scanner, I see (on the device side) that BLE Scanner changes the connection interval to a low value, gets all the requested data, and then sets the connection interval back to its original value. Note, by the way, that neither BLE Scanner nor our BlueZ-derived application takes the preferred connection parameters into account.

Now I want our application to do the same: set the connection interval to 8 ms, get all the information about characteristics and services, and set the connection interval back. In the BlueZ stack I even found a nice function in the HCI layer for this: hci_le_conn_update.

But now the challenge: the rest of the application is built on top of the GATT functionality, and even though the BLE specification defines a hierarchy between those two (with some layers in between), in code they seem absolutely independent of each other.

There are two parameters to the hci_le_conn_update function that are HCI-specific: 'dd' (a file descriptor for the device) and 'handle' (a value that identifies the connection). hcitool tells me that when I create a connection, the first handle is 64, so I tried that value. For 'dd' I used hci_open_dev to get a file descriptor for the device. This worked. Sort of.
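For reference, a minimal sketch of that experiment (BlueZ C API, linked with -lbluetooth; adapter index 0 and handle 64 are the assumptions described above, and the intervals are in units of 1.25 ms):

    #include <stdio.h>
    #include <stdint.h>
    #include <bluetooth/bluetooth.h>
    #include <bluetooth/hci.h>
    #include <bluetooth/hci_lib.h>

    int main(void)
    {
        int dd = hci_open_dev(0);   /* hci0; hci_get_route() can look it up */
        uint16_t handle = 64;       /* first connection handle, per hcitool */

        if (dd < 0) {
            perror("hci_open_dev");
            return 1;
        }

        /* Intervals in units of 1.25 ms: 6 = 7.5 ms, 10 = 12.5 ms.
           Slave latency 0, supervision timeout 200 * 10 ms = 2 s,
           and a 2000 ms timeout for the HCI command itself. */
        if (hci_le_conn_update(dd, htobs(handle), htobs(6), htobs(10),
                               htobs(0), htobs(200), 2000) < 0)
            perror("hci_le_conn_update");

        hci_close_dev(dd);
        return 0;
    }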
As I said before, the min/max values are not entirely taken into account. So when I set the range to 6/10, I get 11, and when I set it to 6/50, I get 60. This is a bit too nondeterministic for my taste, and I would prefer a function that directly sets the connection interval instead of giving a range that is mostly ignored anyway. Also, the fact that I have to use the hardcoded magic number 64 gives me a bad itch. I can actually control the connection interval on the embedded device's side, but I want the control to be at the side of the client application.

The goal is to update the connection interval in a BlueZ-GATT-based application. Within certain limits, I do not mind that much how I get there. Any suggestions?

In the official D-Bus API there is no method to change the connection parameters (see https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc/gatt-api.txt and https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc/device-api.txt). The key is therefore to send the Connection Parameter Update Request from the peripheral side. You can of course experiment with sending a raw HCI command, but that is a bit "hacky" and there is no guarantee that it will not mess up the BlueZ daemon.

If you would like to discuss BlueZ features, such as a connection parameter update request API, you should do that on the BlueZ mailing list (http://www.bluez.org/contact/) rather than here.

Related

HM-10 BLE Module - connect to other Devices

First of all: what I am trying to do is only for private interest.

I'd like to connect an AT-09/HM-10 BLE module with firmware 6.01 to another device which also provides a BLE module, one that is not based on the CC254x chip.

I am able to communicate with this device using my laptop with integrated Bluetooth, Linux, and the bluepy-helper. I am also able to make a connection using the HM-10 through a USB-RS232 module and "HTerm", but after that I am quite stuck.

By "reverse-engineering" the Android application that controls this particular device, I found a set of commands, stored as strings in hex format. The Java application sends out the particular command combined with a CRC16-Modbus value, together with a request (whatever that is), to a particular service and characteristic UUID.
I also have a Wireshark capture pulled from my Android phone while the application was connected to the device, but I am unable to find the commands extracted from the .apk in this capture.

This is where I get stuck. After making a connection and sending out the command + CRC16 value, I get no response at all, so I suspect my approach is wrong. I am also not quite sure how the HM-10 firmware handles/maps the service and characteristic UUIDs of the destination device.

Are there perhaps any special AT commands which would fit my needs?

I am absolutely not into the technical depths of Bluetooth and its communication layers. The only thing I know is that the HM-10 connects to a selected BLE device and after that provides serial I/O, and data flows between the endpoints.

I have no clue how, and whether, it can handle data flow to specific service/characteristic UUIDs on the destination endpoint, although it seems to have GATT, the L2CAP services and so on built in. Surely it handles all the necessary communication by itself, but I don't know where I get access to the "front end" at all.

Best regards!

Reverse engineering Bluetooth LE - device sends weird responses back

I recently acquired a Segway Ninebot ES2 electric scooter. I can connect to the scooter via Bluetooth LE and grab information such as battery status, current mileage, temperature, and so on. This is all done through an application.

On my Android device, I've successfully extracted the HCI log file, which I imported into Wireshark. I can see all the requests and commands sent back and forth between my phone and the scooter. However, the requests and responses all look like garbage and I have no idea how to interpret them.
Example of a sent command (info says Sent Write Command, Handle: 0x000e (Nordic UART Service: Nordic UART Tx))
Example of the received value I got right after (info says Rcvd Handle Value Notification, Handle: 0x000b (Nordic UART Service: Nordic UART Rx))
How am I supposed to interpret these responses? If the battery status was 59%, I would expect it to return something like 0x3b (0x3b hex is 59 decimal). But honestly, I have no idea how this works. Maybe they're returning a bunch of data in a data type only their app knows how to interpret? Like JSON for web.
Here's an example from the nRF Connect for Mobile application, where I hit the down arrow on all the characteristics: https://i.imgur.com/hREDomP.jpg (large image)
And probably more important: How do I replicate a request or command in nRF Connect? I've tried sending a byte array that looks like 0x {02410011000d.....} (from the Write Command) in the application, but I have no idea how to read the response.
If someone is still interested, I did the same research for this scooter.
That's standard BLE communication: a device offers BLE "services" and "characteristics". A service can contain one or more characteristics, through which you communicate with the device. Each characteristic can allow different types of interaction: writing into it, reading from it, subscribing to notifications (so you don't have to read manually, it kind of pushes data to your app), and more (read here, for example).

Take a look at your Wireshark screenshot: you can see the service UUID, the handle UUID (the characteristic), and the handle ID. You can communicate with the device via UUID or ID, depending on your programming language or library (more about UUIDs).

In this particular scooter there are two characteristics: one allows writing into it, the other allows subscribing to it. Together they act like the RX and TX wires in a UART: you write data into one and read from the other. So, to begin communication with the scooter you must establish a connection to it, subscribe to notifications from one characteristic, and write data to the other (see the gatttool sketch below).
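If you want to try that outside the vendor app, the same dance can be done with BlueZ's gatttool in interactive mode. The handles below are taken from your screenshots (0x000e is the write characteristic, 0x000b the notify one); the assumption that its client characteristic configuration descriptor sits at 0x000c, and the random address type, are guesses you would need to verify in nRF Connect:

    gatttool -b XX:XX:XX:XX:XX:XX -t random -I
    > connect
    > char-write-req 0x000c 0100              # enable notifications (assumed CCCD)
    > char-write-req 0x000e 02410011000d...   # the captured command bytes

Responses should then show up as handle value notifications from 0x000b.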
As for the protocol: look again at the screenshots. "UART Tx" is the actual payload that was sent to the scooter and "UART Rx" is the response. Yes, it's binary data that only the app understands. Luckily, the protocol has been reverse-engineered and is well documented. In your example, the app requests the serial number, and it is returned in the response: "N2GWX...". In order to request the battery percentage you must build another payload according to that protocol.
I'm not sure if it's still relevant, but this is at least for those who will be interested in the topic.

You can try the following to understand how to interpret responses from the device.

An option to consider is to fetch the manufacturer's mobile app (.apk), either via adb or from sites like apkmirror, etc.

Then apply a reverse-engineering tool like JADX.

If you're lucky and the code is somewhat readable, search for something that has to do with the response (like a ResponseParser) and try to find the algorithm that is used to interpret the response.

However, the very first attempt should always be to search on GitHub/Google whether somebody has done it already for your device, unless it's very niche.

No Authentication after Probe Response modified for security

I am working on an 802.11i security implementation and I am stuck quite at the beginning. My work is based on an existing framework which broadcasts an open WiFi network, and devices are successfully able to connect.

In order to inform STAs of access restrictions, I added an RSN (Robust Security Network) element to the tagged parameters of Beacon and Probe Response frames. I defined both cipher suites (group and pairwise) with the 802.11 standardized values, set to AES, and the AKM set to both PSK and 802.1X.
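For comparison, this is the canonical on-the-wire layout of a single-suite WPA2-PSK RSN element (with two AKM suites, as described above, the AKM count and the element length grow accordingly); a single wrong count, length, or suite OUI is enough for STAs to silently give up:

    /* RSN information element: WPA2-PSK, CCMP only. */
    static const unsigned char rsn_ie[] = {
        0x30, 0x14,              /* Element ID 48 (RSN), length 20         */
        0x01, 0x00,              /* Version 1                              */
        0x00, 0x0f, 0xac, 0x04,  /* Group cipher suite: 00-0F-AC:4 (CCMP)  */
        0x01, 0x00,              /* Pairwise suite count: 1                */
        0x00, 0x0f, 0xac, 0x04,  /* Pairwise cipher suite: CCMP            */
        0x01, 0x00,              /* AKM suite count: 1                     */
        0x00, 0x0f, 0xac, 0x02,  /* AKM suite: 00-0F-AC:2 (PSK)            */
        0x00, 0x00               /* RSN capabilities (copy a working AP's) */
    };

Given that the iPad does not even show the network as secured, it is also worth double-checking that the Privacy bit in the capability field of your Beacons and Probe Responses is set consistently with the RSN element.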
After this frame is sent, no Authentication Request is generated by the STAs, and I get a response on all of them (Win7, Android and iOS) that the device cannot connect. I compared Wireshark traces of multiple connection establishments and cannot figure out what doesn't add up. The only thing I modified is the addition of the RSN element, and this RSN element is exactly the same as in other production connection attempts.

What I thought is that either the device rejects the frame because of an improper implementation/format, or it doesn't support the offered security mechanisms. But then: removing the RSN field (making the network open) allows STAs to proceed with the Authentication Request; and looking into Probe Responses from real APs, they have the exact same structure and contents of the RSN element.

Another interesting behaviour is that the Apple device (iPad) doesn't even recognize the network as secured when the RSN field is present, while both Win7 and Android do.

Any ideas what I am missing or misunderstanding?

Screenshots: the first shows working real communication, the second my implemented modification.

Does X11 have a lifesign or constant stream?

I have a fault-tolerant application where an X server requests to start an application on a remote client (by some other mechanism) and then receives and displays its X window. Fault tolerance means that the server needs to detect the loss of the connection to the client, then call a different back-up client, start the application there, and show its window.

My question is whether there exists a mechanism in the X11 protocol that allows an X11 server to reliably detect whether the connection has been broken or not.

Experiments show that when unplugging a cable connection, it takes some TCP timeout to detect the connection loss at the socket level. This is very OS-dependent; in our case it was about 30 minutes, after which the X server eventually closed the window.

So another assumption could be that the X11 stream constantly delivers some commands, and the server could implement logic like this: if the X11 stream does not deliver any traffic for some timeout y (e.g. 3 seconds), we assume the connection is lost, actively close the window, and establish the connection to the fall-back client.

Is that assumption true? I did not see any such statement in the X11 protocol about how to detect connection loss. Is there any explicit lifesign that is regularly transmitted? Or is the assumption valid that there is constant traffic? Or could there be longer periods of inactivity where nothing is transmitted at all while the connection is perfectly up and running?

There is a NoOperation request from the client that could be used for such a purpose. But do clients usually implement something like that as a lifesign?
I have a fault-tolerant application, where an X server needs to start an application...
I don't think that an X server can "start an application". Maybe some setup allows something similar to that, but normally that is not so.
...whether there exists a mechanism in the X11 protocol that allows an X11 server to reliably detect whether the connection has been broken or not.
No, it does not exist. The X11 protocol runs on top of TCP/IP, which does not directly provide this kind of "heartbeat". I think the assumption is that, if you click on or otherwise stimulate an X11 window, the TCP layer will time out or throw another error if the client application is gone.
I did not see any statement in the X11 protocol about how to detect connection loss.

There is a NoOperation request from the client that could be used for such a purpose. But do clients usually implement something like that as a lifesign?
Maybe some application uses that NoOperation, but the purpose would be different from what you need. I mean, the X11 server is like an extension from the point of view of an application; the application may have an interest in knowing whether the server is up and working, but the contrary is not true. And anyway, even if the server could detect that the application is gone, there is probably no way to tell the server to launch another application.

Probably a special proxy could be deployed; it could launch the application, monitor the connection (in both directions), and take the required steps in case the application goes away. But then again, who would monitor the proxy application?
First of all, the X protocol relies completely on TCP to send and receive information.

You cannot safely put a timeout-capable transaction in place to detect a timeout in TCP. TCP is designed to retransmit only those segments that have already been sent but not acknowledged. The X protocol is completely asynchronous, in the sense that you send a command and can receive many responses or events unrelated to that command before you receive its response. There is no heartbeat mechanism in the X protocol (the NoOperation request can be used to synchronize operations with the server, but you cannot overuse it, as that severely slows down the X connection; launch any client with the -synchronous option to see this, see X(7)). You can even have TCP connections alive for years without exchanging a single packet. There is a mechanism, activated by the SO_KEEPALIVE socket option, that makes TCP employ such a heartbeat on a connection that has no data to transmit, but the X11 protocol normally does not make use of it.

You don't post any code, nor a description of how the system is configured. The standard X server never starts a connection by itself, except when launched specifically to negotiate with an XDMCP server (which is done over UDP) in order to serve as an X terminal.
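As an illustration of that socket option (this is generic sockets code, not something the X server does for you; the probe timings are arbitrary example values and the fine-tuning options are Linux-specific):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Enable TCP keepalive on an already-connected socket, so a dead
       peer is declared lost after roughly idle + count * interval
       seconds (about 60 s with these values) instead of hours. */
    static int enable_keepalive(int fd)
    {
        int on = 1, idle = 30, interval = 10, count = 3;

        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
            return -1;
        /* Linux-specific tuning of the probe timing. */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
        return 0;
    }

An Xlib client could apply this to its own display connection with enable_keepalive(ConnectionNumber(dpy)).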
From your words, you probably don't know that the roles of server and client are exchanged in the X protocol: the client is the remote application that connects to the server to display its output, and the server is the application that controls your display, mouse and keyboard. There is no means for the server to create a new client, so you must be creating this connection by some other means (probably through SSH, but that is not described).
By the way, when you say:
Experiments show that when unplugging a cable connection, it takes some TCP timeout to detect the connection loss at the socket level. This is very OS-dependent; in our case it was about 30 minutes, after which the X server eventually closed the window.

That is not OS-dependent. It is precisely the standard behaviour: when there is no traffic to send, no packets are exchanged, so no detection is made (unless your client ---remember, this is the remote application program that wants to show its data on your local server--- activates the SO_KEEPALIVE option, and that requires several lost probes before declaring the connection lost). In your case the amount of time varies because the timers don't start until some data is sent over the unplugged connection, and this is what makes it variable (not OS-dependent).

On the other side, you cannot expect the server to turn on your monitor in case you leave the office and turn it off by mistake or by accident. What is the fault-tolerance specification in that case?

IMHO, as far as the presentation protocol is concerned, the application should be ready to show you as much information about the system as possible as soon as you activate the connection (but the connection must be something that is allowed to fail). What is important is the means you develop for the application to be fault tolerant, even in the case that you are not there to see the display. Will somebody be advised that no one is looking at the screen? Are you going to detect the absence of operators in that case? Don't take this as a flame, but common sense should rule here.

In case you need to ensure that connectivity to the remote host is available, you need to use other means to check for it. I recommend a simple application that pings the remote host and alerts you in case you don't get a positive result. Or you can open a connection to the server and then close it as soon as you get a positive response from the server (the first packet, for example). This leads us to the next step, which is to ensure that some human is looking at the (turned-on) screen of the display :)

For example, you can run a client in parallel to the one you are interested in, and force a heartbeat by asking for some server atom name (or a root window property value) in a loop with some delay. This will make the connection fail, or your client can raise an alert in case it doesn't receive the answer within some configurable time.
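A minimal sketch of such a watchdog client (the atom name, timings, and fail-over policy are all arbitrary example choices; XInternAtom forces a full round trip to the server, and the alarm catches the case where the reply never comes back):

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <X11/Xlib.h>

    static void no_reply(int sig)
    {
        (void)sig;
        _exit(2);    /* round trip timed out: assume the link is dead */
    }

    static int io_error(Display *dpy)
    {
        (void)dpy;
        fprintf(stderr, "X connection lost\n");
        exit(1);     /* Xlib terminates the process after an I/O error anyway */
    }

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;
        XSetIOErrorHandler(io_error);
        signal(SIGALRM, no_reply);

        for (;;) {
            alarm(5);                               /* reply deadline       */
            XInternAtom(dpy, "_HEARTBEAT", False);  /* any round trip works */
            XSync(dpy, False);
            alarm(0);
            sleep(3);                               /* probe interval       */
        }
    }

A supervising script can then watch the exit status and trigger the switch to the fall-back client.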

Multiple BLE Connections using Linux and Bluez 5.0

I am currently attempting to connect to multiple BLE devices using BlueZ 5.0 and Linux. I have one host BLE adapter, and I have modified gatttool to connect and perform this function. If I run one instance of the modified gatttool, I successfully connect and receive notification data from the BLE device. If I run another instance of the modified gatttool and connect to a second BLE device, this new instance starts receiving notification data from both BLE devices, and the initial instance no longer receives any data. I believe this is due to the socket setup, where both applications configure their sockets with the same address and PSM (the newest instance receives the data whereas the other is starved). Is there a way to prevent this condition? Ideally, I want one application that connects to multiple devices. I assume the application can only have one socket, for the reason that multiple sockets would have the same issue as the multiple instances above. My BLE device is a TI CC2540 keyfob acting as a heart-rate monitor.
I started an answer so I could have more space...
I'm using a combination of Python and C to get my code to work, so my "code" may look funny because it could be from either. Also, I used BlueZ 4, as 5 didn't support the kernel I was using. Let me know if there's an issue and I can clarify.

It seems like there are several ways of doing things, but I ended up opening separate sockets for different tasks. You can open a single socket and then set the socket options to turn filtering off, and you should get all the packets in one place. However, that was my initial way of doing it, and I found that my connections would die within seconds.

To scan for connections I opened a socket(AF_BLUETOOTH, SOCK_RAW, BTPROTO_HCI) and then did a bind on device 0 (there's a function called hci_get_route to get an available device number). You can then call hci_le_set_scan_parameters to set the options, setsockopt(SOL_HCI, HCI_FILTER, filter) to let through just the LE scan events, and then call hci_le_set_scan_enable to turn scanning on. A sketch of that setup follows below.
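Roughly like this in C (BlueZ library calls; error handling trimmed, and the interval/window values are just examples):

    #include <stdint.h>
    #include <sys/socket.h>
    #include <bluetooth/bluetooth.h>
    #include <bluetooth/hci.h>
    #include <bluetooth/hci_lib.h>

    /* Open a raw HCI socket on the first adapter and start passive LE
       scanning, with the socket filter reduced to LE meta events. */
    int open_le_scan_socket(void)
    {
        struct hci_filter flt;
        int dev_id = hci_get_route(NULL);   /* first available adapter */
        int dd = hci_open_dev(dev_id);
        if (dd < 0)
            return -1;

        /* Passive scan, interval = window = 0x0010 * 0.625 ms = 10 ms,
           public own address, no whitelist, 1 s command timeout. */
        hci_le_set_scan_parameters(dd, 0x00, htobs(0x0010), htobs(0x0010),
                                   0x00, 0x00, 1000);

        hci_filter_clear(&flt);
        hci_filter_set_ptype(HCI_EVENT_PKT, &flt);
        hci_filter_set_event(EVT_LE_META_EVENT, &flt);
        setsockopt(dd, SOL_HCI, HCI_FILTER, &flt, sizeof(flt));

        hci_le_set_scan_enable(dd, 0x01, 0x00, 1000);  /* on, no dedup */
        return dd;   /* read() now yields LE advertising meta events */
    }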
Each device connection was made with a socket(AF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_L2CAP), which you then tell to connect to a particular device by calling connect on the socket with a struct sockaddr_l2 that holds that device's address. On that socket you should only get packets from that one device. (One caveat: I found that my dongle wouldn't allow a connection while active scanning was taking place. I had to shut scanning off temporarily just before connecting and then turn it back on; otherwise I got a BUSY error from errno.) A sketch follows below.
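Along these lines (newer kernels want the l2_bdaddr_type field set; the address string and address type are placeholders for your device):

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <bluetooth/bluetooth.h>
    #include <bluetooth/l2cap.h>

    #define ATT_CID 4   /* fixed L2CAP channel used by ATT/GATT */

    /* Connect an L2CAP/ATT socket to one device; reads on the returned
       socket then only ever see packets from that device. */
    int connect_le_device(const char *addr_str)
    {
        struct sockaddr_l2 dst;
        int s = socket(AF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_L2CAP);
        if (s < 0)
            return -1;

        memset(&dst, 0, sizeof(dst));
        dst.l2_family = AF_BLUETOOTH;
        dst.l2_cid = htobs(ATT_CID);
        dst.l2_bdaddr_type = BDADDR_LE_PUBLIC;   /* or BDADDR_LE_RANDOM */
        str2ba(addr_str, &dst.l2_bdaddr);

        /* May fail with EBUSY while active scanning is running (see above). */
        if (connect(s, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
            close(s);
            return -1;
        }
        return s;
    }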
After saying all that, though... I think the way you're supposed to do everything in BlueZ 5 is to use D-Bus. Unfortunately that wasn't really an option for what I was doing. The functions I mentioned are in the shared library that apparently isn't installed by default in 5 (you have to explicitly ask for it with configure). They stopped installing the shared library by default because they wanted to encourage people to use D-Bus instead.
We have combined the code from hcitool and gatttool. The code works well for two devices (scan, hci_le_create_conn and gatt_connect). I believe there is no limitation on the number of devices used.

1. Start cmd_lescan (from hcitool.c).
2. For each device scanned:
   - cmd_lecc (from hcitool.c)
   - gatt_connect (from gatttool.c)

This way one process can manage multiple BLE devices. We do not have to turn OFF the scanning; we just ignore non-advertisement messages:

    if (meta->subevent != 0x02)
        continue;
Thanks and looking forward to comments.
