I've never worked with Bluetooth before. I have to send data via BLE, and I've run into the limit of 20 bytes per chunk.
The sender is an Arduino and the receiver could be either an Android app or a Node.js app on a PC.
I have to send 9 values, stored as floats, so 4 bytes * 9 = 36 bytes; I need 2 chunks for all my data. The receiving side needs both chunks to process them. If some data is lost, I don't care.
I'm no expert in network protocols, and I think I should give each message an incremental timestamp so that the receiver can glue together the two chunks carrying the same timestamp, or discard the older chunk when a newer timestamp arrives. But I'm not sure how to do a checksum, whether I really need one, or whether - for a simple beta version of my system - I can ignore all of these problems.
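To make my idea concrete, here is a rough sketch of the receiver side for the Android case, in Kotlin. The 1-byte sequence counter and 1-byte part index at the front of each chunk are just my own assumptions, not anything standard:

import java.nio.ByteBuffer
import java.nio.ByteOrder

// Hypothetical chunk layout: [seq: 1 byte][part: 0 or 1][payload: 18 bytes].
// Two 20-byte chunks carry the 9 floats (36 bytes of payload).
class Reassembler {
    private var pendingSeq = -1
    private val parts = arrayOfNulls<ByteArray>(2)

    /** Feed one 20-byte chunk; returns the 9 floats once both parts of a sequence are in. */
    fun onChunk(chunk: ByteArray): FloatArray? {
        val seq = chunk[0].toInt() and 0xFF
        val part = chunk[1].toInt() and 0xFF
        if (part !in 0..1) return null     // malformed chunk
        if (seq != pendingSeq) {           // a newer sequence arrived: drop the stale half
            pendingSeq = seq
            parts.fill(null)
        }
        parts[part] = chunk.copyOfRange(2, chunk.size)
        val first = parts[0] ?: return null
        val second = parts[1] ?: return null
        val buf = ByteBuffer.wrap(first + second).order(ByteOrder.LITTLE_ENDIAN)
        return FloatArray(9) { buf.getFloat() }   // 9 * 4 = 36 bytes
    }
}

On the Arduino side I would then just prepend the same two header bytes in front of each 18-byte half before writing it to the characteristic.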
Can anyone give me some advice, or point me to examples of similar situations handled with BLE communication?
You can get around the size limitation using the "Read Blob Request" of ATT. It allows you to read an attribute while also giving an offset. So you can read the attribute with an offset of 0; if there are more than ATT_MTU bytes, you can request again with the offset at ATT_MTU*1; if there's still more, at ATT_MTU*2, and so on. (See section 3.4.4.5 of the Bluetooth v4.1 specification; it's in the 4.0 spec too, but I don't have that in front of me right now.)
If the value changes between requests, I'm not sure how you could go about detecting that. You could have the attribute send notifications on change, to interrupt the process in case the value changes in the middle of a read.
I am trying to build an Android app that interfaces with the ESP32 using BLE. I am using the RxBluetoothKotlin library from Vincent Masselis for the Android side. For the ESP32 side, I am using the default Kolban libraries that are included in the Arduino IDE. My phone is a OnePlus 5T and my ESP32 is a MH ET Live ESP32DevKIT. My Android app can be found here, and my ESP32 program here.
The whole system works pretty much perfectly for me in terms of pure functionality. That is to say, every button does what it's supposed to do, and I get the exact behaviour I had expected to get. However, the communication itself is very slow. Around 200 bytes/second. My test button in the Android app requests a bunch of text data from the ESP32, and displays this in a dialog. It also lists a number which represents the time between request and reception in milliseconds. Using this, I get around 2 seconds for 440 bytes of data. When I send less data, the time decreases approximately linearly with data size. 40 bytes of data will take around 200ms, and 20 bytes or under typically takes less than 100ms.
This seems rather slow to me. From what I understand, I should be able to get at least a few kilobytes per second. I have tried checking the speed using nRF Connect, but I get the same 2-second timespan for my data transfer, which suggests the problem is not in my app, since it also occurs with a completely different app. I also moved the code in my main loop into callbacks instead (which I probably should have done in the first place), but this didn't change things at all. And I have tried taking the microcontroller and my phone to a few different locations, hoping to eliminate interference.

I have tried messing with BLEDevice::setPower and BLEDevice::setMTU, as well as setting RxBluetoothGatt.requestMtu(500) on the Android side. Everything so far seems to have had little to no effect. The only thing that did anything was adding the line "pServer->updatePeerMTU(0,500);" in my loop during the connection phase. This caused the first 23 bytes of data to be repeated whenever I pressed the test button in my app, and made the data transfer take about 3 seconds. If I'm lucky, I can get maybe a bit under 1.8 seconds for 440 bytes, but this is a very small change when I'm expecting an order of magnitude of difference, and it might even be down to pure chance rather than anything I did.
Does anyone have an idea of how to increase my transfer speed?
The data transmission speed is mainly influenced by the Bluetooth LE connection interval (between 7.5 ms and 4 seconds), which is negotiated between the master (the central) and the peripheral device. The master establishes the connection with one parameter set, and the peripheral can propose a change to it; in the end, however, the central decides which parameter set is used.
But the connection interval cannot be changed by an Android application directly, even though the phone normally acts in the central role. Instead, the app can request a connection priority, which is known to influence the connection interval.
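For example, on Android (API 21 and up) the only knob an app has is a priority hint; the final parameters are still decided by the stack and the peer:

import android.bluetooth.BluetoothGatt

// Hint to the Android stack that we want a short connection interval.
// On many devices CONNECTION_PRIORITY_HIGH maps to roughly 11.25-15 ms.
fun speedUp(gatt: BluetoothGatt) {
    val accepted = gatt.requestConnectionPriority(BluetoothGatt.CONNECTION_PRIORITY_HIGH)
    // `accepted` only means the request was issued, not that the interval changed.
}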
I'm writing code for Pycom's Lopy4 board and have created a BLE service for environmental sensing which currently has one characteristic, temperature. I'm taking the temperature as a float and attempting to update the characteristic value every two seconds.
When I use a BLE scanner app, whenever I try to read, I read a value of "temperature10862," which is the characteristic's name and uuid. Yet when I press the indicate button, the value shows the correct temperature string, updating automatically every two seconds.
I'm a bit confused overall. Is this a problem with my code on the Pycom device, or am I simply misunderstanding what a BLE read is supposed to be? The temperature values are obviously being updated on the device, so why does the client (the app) only show these values with an indication rather than a read?
I am sorry for any vagueness in the question, but any help or guidance would be appreciated.
(Screenshots: Read Attempt, Indicate Attempt)
Returning "temperature10862" as a read response is obviously incorrect. Sending the temperature as a string is in this case also incorrect, since you use the Bluetooth SIG-defined characteristic https://www.bluetooth.com/xml-viewer/?src=https://www.bluetooth.com/wp-content/uploads/Sitecore-Media-Library/Gatt/Xml/Characteristics/org.bluetooth.characteristic.temperature.xml. According to that the value should consist of a signed 16-bit integer in units of 0.01 degrees Celcius.
If you look at https://www.bluetooth.com/xml-viewer/?src=https://www.bluetooth.com/wp-content/uploads/Sitecore-Media-Library/Gatt/Xml/Services/org.bluetooth.service.environmental_sensing.xml, you will see that it's mandatory to support Read and optional to support Notifications. Indications, however, are not permitted. So you should change your indicate property to notify instead.
The value sent should be the same regardless of whether it is sent as a notification or as a read response.
Be sure you read the Environmental Sensing specs and follow the rest of the GATT service structure.
I am currently working on a graduation project where I want to transmit a session token using BLE. On the server side I am using Node.js and Bleno to create the connection. After the client subscribes to the notification, the server pushes the token.
A small part of the code is:
const buf1 = Buffer.from(info, 'utf8');
updateValueCallback(buf1);
At this point I am using nRF Connect to check that everything is working. It works as intended, except that I see only the first 20 characters being transferred (exactly the packet size).
My question concerns the buffer size. When I eventually connect from an Android app, will the whole string be transmitted? That would mean the underlying protocols cut the string and reassemble it on the other side, and the buffer size wouldn't matter. Or must I negotiate the MTU to be the size of the string? In other words, must the buffer size be the size of the transmitted packet?
If the buffer is smaller than the whole string, can the whole string still be transmitted?
GATT requires that a notification is at most MTU - 3 bytes long. The default MTU is 23, hence the maximum notification value length is 20 bytes by default. By negotiating a larger MTU you can send longer notifications (if your BLE stack supports that).
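If you do want longer notifications, the MTU request has to come from the Android side after connecting. A one-line sketch, assuming bluetoothGatt is your connected BluetoothGatt instance:

import android.bluetooth.BluetoothGatt

// Request a larger ATT MTU; the peer and the stack may grant less.
// The result is reported via BluetoothGattCallback.onMtuChanged().
fun requestLargerMtu(bluetoothGatt: BluetoothGatt) {
    bluetoothGatt.requestMtu(185)   // with MTU 185, notifications can carry 182 bytes
}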
I haven't used Bleno, but with every stack I have used I needed to slice the data myself, 20 bytes at a time, and collect the slices on the receiver side and put them together again.
The stacks have been good about buffering the data and transmitting it one chunk at a time, so I have looped the send function (like your updateValueCallback()) until all the slices of my data were sent.
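On the receiving side you then collect the slices and concatenate them. Here's a rough Kotlin sketch for the Android client you mention, under the assumption (my own framing, not anything GATT provides) that the sender prepends a 2-byte little-endian total length to the stream:

import java.io.ByteArrayOutputStream

// Reassembles 20-byte notification slices into one message.
// Framing assumption (not part of GATT): the stream starts with a
// 2-byte little-endian payload length.
class SliceCollector {
    private val received = ByteArrayOutputStream()
    private var expected = -1

    /** Feed one notification; returns the full payload once it is complete. */
    fun onNotification(slice: ByteArray): ByteArray? {
        received.write(slice)
        val bytes = received.toByteArray()
        if (expected < 0 && bytes.size >= 2) {
            expected = (bytes[0].toInt() and 0xFF) or ((bytes[1].toInt() and 0xFF) shl 8)
        }
        return if (expected >= 0 && bytes.size >= expected + 2)
            bytes.copyOfRange(2, expected + 2)
        else null
    }
}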
Hope it works for you.
I'm writing a small app to test out how torrent P2P works. I created a sample torrent and am seeding it from my Deluge client, and from my app I'm trying to connect to Deluge and download the file.
The torrent in question is a single-file torrent (file is called A - without any extension), and its data is the ASCII string Test.
Referring to this, I was able to send the initial handshake and get a valid response back.
Immediately afterwards, Deluge sends even more data. From the 5th byte onward it looks like a bitfield message, but I'm not sure what to make of it. I've read that torrent clients may send a mixture of bitfield and have messages to show which parts of the torrent they possess. (My client isn't sending any bitfield, since it assumes it doesn't have any part of the file in question.)
If my understanding is correct, the message states that its size is 2: one byte for the identifier plus the payload. If that's the case, why is it sending so much more data, and what is that data supposed to be?
The same thing happens after my app sends an interested message: Deluge responds with a 1-byte unchoke message, but then again appends more data.
And finally, when it actually submits the piece, I'm not sure what to make of the data. The first underlined byte is 84, which corresponds to the letter T, as expected, but I cannot make much sense of the rest of the data.
Note that the linked page does not really specify in what order clients should send messages once the initial handshake is completed. I just sent interested and request based on what seemed to make sense to me, but I might be completely off.
I don't think Deluge is sending the additional bytes you're seeing.
If you look at them, you'll notice that all of the extra bytes are bytes that already existed in the handshake message, which should have been the longest message you had received up to that point.
I think you're reading new messages into the same buffer, without zeroing it out or anything, so you're seeing bytes from earlier messages again, following the bytes of the latest message you read.
Check whether the network API you're using has a way to report the number of bytes actually received, and only look at that slice of the buffer, not at the entire thing.
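As a sketch in Kotlin, assuming a plain blocking socket (names made up):

import java.io.InputStream

// Only the first `n` bytes of the buffer are valid after each read;
// anything past that is leftover from earlier messages.
fun readChunk(input: InputStream, buffer: ByteArray): ByteArray {
    val n = input.read(buffer)
    check(n >= 0) { "peer closed the connection" }
    return buffer.copyOf(n)
}

Keep in mind that TCP doesn't preserve message boundaries either, so a single read may still contain a partial BitTorrent message or several back to back; the 4-byte length prefix of each message is what tells you where to split.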
I'm using Bluetooth Low Energy to connect a device in Central mode to several devices in Peripheral mode. The Peripheral device would need to send 4 strings (all fewer than 20 characters) to the Central device.
Is it better to create 4 characteristics and have the Peripheral make 4 write requests to the Central? Or is it better to have 1 characteristic and combine all 4 strings into a JSON object, so as to make only 1 write request?
Simply put - is it better in this instance to send small chunks of data multiple times, or to send a larger chunk of data once?
Which approach would be better for allowing as many Peripherals to connect to a Central as possible? Does it matter?
Thanks.
Since you have four separate strings, I would recommend that you create four characteristics as well. Data packets in BLE usually carry about 20 bytes (23 bytes minus some ATT overhead), so each of your strings fits in a single packet. And when you do a read long (on a single long attribute), you get a single read response packet back before you ask for the next chunk of data, so you would probably end up with the same number of packets going back and forth either way. Obviously, discovering four characteristics instead of one takes a few extra packets, but that's nothing to worry about.
Simply put - is it better in this instance to send small chunks of data multiple times, or to send a larger chunk of data once?
Sending a larger chunk once only pays off if your data strings are considerably smaller than 20 bytes, so that several of them fit into a single packet.
You talk about having the peripheral write its data to the central (presumably right after connecting). Normally this is done the other way around: the central first discovers the peripheral's database (after connecting, of course) and then reads the data it's interested in.
You can also consider using indications rather than reads. This is typical for a peripheral sensor that is logging data: the central connects, discovers the database, and then enables indications for a given attribute. The peripheral then sends each data sample (perhaps packed in some way) with indications until all of the locally logged data has been transferred. With indications, every transfer is acknowledged, so the peripheral can wait for the confirmation before deleting the data locally, making sure you don't lose any logged samples.
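On an Android central, for example, enabling indications is a two-step operation. A sketch, using the standard Client Characteristic Configuration descriptor (0x2902):

import android.bluetooth.BluetoothGatt
import android.bluetooth.BluetoothGattCharacteristic
import android.bluetooth.BluetoothGattDescriptor
import java.util.UUID

// Standard Client Characteristic Configuration Descriptor (CCCD) UUID.
val CCCD: UUID = UUID.fromString("00002902-0000-1000-8000-00805f9b34fb")

fun enableIndications(gatt: BluetoothGatt, ch: BluetoothGattCharacteristic) {
    gatt.setCharacteristicNotification(ch, true)     // deliver updates to the app
    val descriptor = ch.getDescriptor(CCCD)
    descriptor.value = BluetoothGattDescriptor.ENABLE_INDICATION_VALUE
    gatt.writeDescriptor(descriptor)                 // ask the peripheral to indicate
}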
Which approach would be better for allowing as many Peripherals to connect to a Central as possible? Does it matter?
The way you organize and transfer the data does not affect how many peripherals your central can connect to. But if your peripherals write into the central's database, you need to make sure they don't write to the same handle(s). It's better to use reads or indications/confirmations initiated by the central.