I just got a Bluetooth LE/Smart bathroom scale (model Sanitas SBF 70). I can read data from it using the following commands:
gatttool --device=(btaddr) -I
connect
Then when I stand on it, I get multiple notification messages like this:
"Notification handle = 0x002e value: e7 58 01 05 e9"
where the last two bytes are the mass in 50 g increments.
I'd like to integrate this into a few applications via a TCP or UDP socket service that broadcasts these messages to any listening clients.
But after some research I have no idea what the best way is to stay connected all the time (the connection times out after a few minutes), or alternatively to re-establish a connection whenever the scale is used (I notice lots of activity from 'hcitool lescan' whenever someone steps on it).
I don't care what language / library is used. If I can push this to a TCP/UDP socket it will be trivial for other applications to consume the information.
The answer is straightforward: You don't.
Your scale is most likely battery powered, so Bluetooth communication will only be enabled for a short period of time after it has measured your weight. Your application just needs to try connecting to the scale over and over (catching any "unable to connect" timeouts) until you step on it, and once connected, get the data from it before BLE is shut down again. In pseudocode:
    while true:
        while not_connected:
            try to connect
        receive notifications
        disconnect
gatttool, wrapped by the Python module pygatt, is perfectly usable to solve this challenge.
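A minimal sketch of that loop, assuming pygatt's GATTToolBackend and a plain UDP broadcast for the socket side (the MAC address, characteristic UUID and port are placeholders, not the real Sanitas SBF 70 values):

    # Keep retrying the connection and rebroadcast every notification over UDP.
    import socket
    import time

    import pygatt
    from pygatt.exceptions import NotConnectedError

    SCALE_ADDRESS = "00:11:22:33:44:55"                        # placeholder MAC
    WEIGHT_CHAR_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"  # placeholder UUID
    UDP_PORT = 9876

    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

    def on_notification(handle, value):
        # Forward the raw notification bytes to any listening clients.
        udp.sendto(bytes(value), ("<broadcast>", UDP_PORT))

    adapter = pygatt.GATTToolBackend()
    adapter.start()
    try:
        while True:
            try:
                device = adapter.connect(SCALE_ADDRESS, timeout=10)
            except NotConnectedError:
                time.sleep(1)          # scale is asleep; keep trying
                continue
            try:
                device.subscribe(WEIGHT_CHAR_UUID, callback=on_notification)
                time.sleep(60)         # notifications arrive on a background thread
                device.disconnect()
            except NotConnectedError:
                pass                   # the scale closed the link itself; just retry
    finally:
        adapter.stop()

Listening clients then only need a UDP socket bound to the same port to receive the raw notification payloads.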
In my case the scale data (the preceding 30 weighings) is transferred after enabling indications on 3 different characteristics.
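With pygatt that comes down to passing indication=True when subscribing; a rough sketch, where the three UUIDs and the address are placeholders rather than the scale's real values:

    # Enable indications on three characteristics; UUIDs and address are placeholders.
    import pygatt

    HISTORY_CHAR_UUIDS = [
        "0000fff1-0000-1000-8000-00805f9b34fb",
        "0000fff2-0000-1000-8000-00805f9b34fb",
        "0000fff4-0000-1000-8000-00805f9b34fb",
    ]

    def on_indication(handle, value):
        print("handle 0x%04x: %s" % (handle, bytes(value).hex()))

    adapter = pygatt.GATTToolBackend()
    adapter.start()
    device = adapter.connect("00:11:22:33:44:55")
    for uuid in HISTORY_CHAR_UUIDS:
        # indication=True asks pygatt to enable indications instead of notifications.
        device.subscribe(uuid, callback=on_indication, indication=True)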
I'm developing a system to control a range of IoT devices. Each set of devices is grouped into a "system" that monitors/controls a real-world process. For example system A may be managing process A and have:
3 cameras
1 accelerometer
1 magnetometer
5 thermocouples
The webserver maintains socket connections to each device. Users can connect (via a UI - again with WebSockets) to the webserver and receive updates about systems to which they are subscribed.
When a user wants to begin process A, they should press a 'start' button on the interface. This will start up the cameras, accelerometer, magnetometer, and thermocouples. These will begin sending data to the server. It also triggers the server to set the recording mode to true for each device, which means the server will write output to a database. My question:
Should I send a single 'start' request from the JavaScript code in my UI to the server, and allow the server to start each device individually (how do I then handle an error, for example, if a single sensor isn't working - and what if two sensors don't work?)? Or do I send individual requests from the UI to the server for each device, i.e. start camera 1, start camera 2, start accelerometer, start recording camera 1, etc., and handle each success/error state individually?
My preference throughout the system so far has been the latter approach - one request, one response; with an HTTP error code. However, programming becomes more complex when there are many devices to control, for example - System B has 12 thermocouples.
Some components of the system are not vital - e.g. if one camera fails we can continue; however, if the accelerometer fails the whole system cannot run, and so human monitoring is required. If the server started the devices individually from a single 'start' message, should I return an array of errors, or should the server know which components are vital and return a single error if a vital component fails? And in a failure state, should the server then handle stopping each sensor and returning to the original state - and what if that then fails? I foresee this code becoming quite complex with this approach.
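For illustration only, the aggregated response to a single 'start' request could carry a per-device result plus a flag marking vital components; everything in this sketch is a made-up example, not part of my current design:

    # Hypothetical shape of an aggregated "start" response; all names are invented.
    start_response = {
        "system": "A",
        "devices": [
            {"id": "camera-1",      "vital": False, "ok": True},
            {"id": "camera-2",      "vital": False, "ok": False, "error": "timeout"},
            {"id": "accelerometer", "vital": True,  "ok": False, "error": "offline"},
        ],
    }

    # One possible rule: the start fails as a whole only if a vital device failed.
    start_response["started"] = all(
        d["ok"] for d in start_response["devices"] if d["vital"]
    )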
I've been going back and forth over the best way to approach this for months, but I can't find much advice online around building complex, production-ready IoT systems for the real world. If anybody has any advice or could point me towards any papers/books/etc. I would really appreciate it.
Thanks in advance,
Tom
I'm setting up a device to advertise as a server (peripheral) and a mobile phone to act as the client (central). My issue is: when my central 'reads' from the peripheral, how many packets can the peripheral respond with for a single request?
What I have seen so far is that the peripheral may respond with a 20-byte packet and then indicate another 20-byte packet. I don't see how this could achieve the stated data rates.
From your question I understand that what you actually want to know is how to achieve the maximum BLE data rate, right?
Have a look here:
https://stackoverflow.com/search?q=BLE+throughput
Especially here: "BLE peripheral throughput limit" and here: "How can I increase the throughput of my BLE application?"
In general, the key is not the number of packets in the first place. First have a look at the MTU size and the connection interval. After that, yes, it is possible to send multiple packets per connection interval (I'd have to guess here: usually 3 or 4, but I'm not sure).
Moreover, with newer Bluetooth standard versions you can check whether your device supports data packet length extension.
For further reading I suggest
https://punchthrough.com/pt-blog-post/maximizing-ble-throughput-on-ios-and-android/
and
https://punchthrough.com/pt-blog-post/maximizing-ble-throughput-part-3-data-length-extension-dle/
Using notifications instead of read requests will give you the best throughput.
A shorter connection interval will then also usually increase the throughput.
Using LE Data Length Extension and a large MTU will increase it even further.
The last step is to switch from the 1 Mbit/s PHY to the 2 Mbit/s PHY to do even better.
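As a very rough back-of-the-envelope estimate (the packets-per-event, interval and payload numbers below are assumptions for illustration, not guaranteed values):

    # Rough BLE application-throughput estimate; all numbers are illustrative.
    def estimate_throughput(conn_interval_ms, packets_per_event, payload_bytes):
        """Approximate application payload bytes per second for one connection."""
        return packets_per_event * payload_bytes / (conn_interval_ms / 1000.0)

    # Default 23-byte MTU (20-byte notifications), 4 packets per 30 ms event:
    print(estimate_throughput(30.0, 4, 20))    # ~2.7 kB/s
    # Data length extension with a large MTU (244-byte notifications), 15 ms interval:
    print(estimate_throughput(15.0, 4, 244))   # ~65 kB/s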
From the BT snoop log below, I found that the BLE central device and peripheral device got connected after a few rounds of negotiation over connection parameters, including the connection interval, connection latency, supervision timeout, etc.
As seen in the BT snoop log, the connection interval is set to 1 second. My questions are:
Why doesn't the connection between them disappear 1 second after they connect?
What is the real meaning of the connection interval?
As you know, a pillar of BLE is low energy consumption.
The main rule is: turn the radio on as little as possible and turn it off as soon as possible.
When a connection is established, the radio isn't always active, even when a peer wants to transmit. During the transmission phase the radio is turned on and off several times.
The connection interval is the time between two connection events, and packets are transmitted inside each connection event.
Suppose the peer wants to transmit 10 packets: the radio is turned on for packet transmission (say a maximum of 6 packets per event), then turned off for the duration of the connection interval; now 6 packets have been transmitted. After the connection interval, the radio is turned on again to transmit the last 4 packets, and so on.
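A small sketch of that scheduling, using the same 6-packets-per-event figure as above (the real per-event limit depends on the stack and the connection parameters):

    # Queued packets are drained in bursts of at most max_per_event per connection
    # event, with the radio off for one connection interval between events.
    def connection_events(num_packets, max_per_event=6):
        events = []
        while num_packets > 0:
            sent = min(max_per_event, num_packets)
            events.append(sent)
            num_packets -= sent
        return events

    print(connection_events(10))   # [6, 4]: 6 packets in one event, 4 in the next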
The connection interval can range from 7.5 ms to 4 s, and it is agreed on by both peers.
Of course, a shorter connection interval means a higher data rate but more power consumption, and vice versa.
Paolo.
BLE is a radio communication protocol that works in the 2.4 GHz spectrum.
If you measure the radio current on a CRO while the device is in a connection, you will get a graph similar to the one shown above.
The peaks indicate that the radio is turned ON. Between the peaks the device is sleeping to save power.
To put it simply, the connection interval is the time between the peaks.
In other words, it is the time for which the device sleeps after sending a packet before it wakes up to send again.
This timing is synchronized between the two communicating devices. It is like two people agreeing to meet at a particular time and place.
Is it possible to send notification messages to nearby Bluetooth devices without pairing? I have found a protocol for this - OBEX Object Push - but I am not clear whether it is feasible without a pairing request. Are there any demo apps for reference?
Yes and no.
If you are actually talking about connecting but not pairing, then, yes.
If you are talking about no connection at all, then no.
When creating a Bluetooth connection between two or more devices the following steps are taken.
Inquiry – If two Bluetooth devices know absolutely nothing about each other, one must run an inquiry to try to discover the other. One device sends out the inquiry request, and any device listening for such a request will respond with its address, and possibly its name and other information. The closest located device is not necessarily the fastest to respond and any device that hears the call will try to respond.
Paging – Paging is the process of forming a connection between two Bluetooth devices. Before this connection can be initiated, each device needs to know the address of the other (found in the inquiry process).
Connection – After a device has completed the paging process, it enters the connection state. While connected, a device can either be actively participating or it can be put into a low power sleep mode.
• Active Mode – This is the regular connected mode, where the device is actively transmitting or receiving data.
• Sniff Mode – This is a power-saving mode, where the device is less active. It’ll sleep and only listen for transmissions at a set interval (e.g. every 100ms).
• Hold Mode – Hold mode is a temporary, power-saving mode where a device sleeps for a defined period and then returns to active mode when that interval has passed. The master can command a slave device to hold.
• Park Mode – Park is the deepest of sleep modes. A master can command a slave to “park”, and that slave will become inactive until the master tells it to wake back up.
Two devices can be bonded together through a one-time process called pairing. When two devices are paired, they store each other’s addresses, names and profiles in memory, allowing them to automatically establish a connection as soon as they are in range of each other.
It is not possible to send OPP (or other) communication between two devices before connecting.
It is possible to send communication between two devices after connection but before pairing.
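As a rough sketch of the inquiry and connection steps from Python (using the PyBluez library; whether the connection is accepted without pairing depends entirely on the remote device's security settings, and the service lookup below is only an assumption about how the target advertises Object Push):

    # Inquiry, then an RFCOMM connection, without requesting pairing.
    import bluetooth

    # Inquiry: discover nearby devices and their names.
    nearby = bluetooth.discover_devices(duration=8, lookup_names=True)
    for addr, name in nearby:
        print(addr, name)

    if nearby:
        addr, _ = nearby[0]
        # Look up the OBEX Object Push service (UUID 0x1105) to find its channel.
        services = bluetooth.find_service(uuid="1105", address=addr)
        if services:
            sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
            sock.connect((addr, services[0]["port"]))   # paging + connection
            # Actually pushing an object would additionally need an OBEX client
            # library (e.g. PyOBEX) on top of this socket.
            sock.close()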
I'm implementing a WebSocket Secure (wss://) service for an online game where all users will be connected to the service as long as they are playing the game. This will use a high number of simultaneous connections, although the traffic won't be a big problem, as the service is used for chat, storage and notifications... not for real-time data synchronization.
I wanted to use Alchemy-Websockets, but it doesn't support TLS (wss://), so I have to look for another service like Fleck (or other).
Alchemy has been tested with a high number of simultaneous connections, but I didn't find similar tests for Fleck, so I need to get some real info from users of Fleck.
I know that Fleck is non-blocking and uses async calls, but I need some real info, because it might be abusing threads, the garbage collector, or some other aspect that won't be visible at a lower number of connections.
I will use C# for the client as well, so I need neither hybi-XX compatibility nor a fallback; I just need scalability and TLS support.
I finally added Mono support to WebSocketListener.
Check here how to run WebSocketListener in Mono.
10K connections is no small thing. WebSocketListener is asynchronous and it scales well. I have done tests with 10K connections and it should be fine.
My tests show that WebSocketListener is almost as fast and scalable as the Microsoft one, and that it performs better than Fleck, Alchemy and others.
I made a test on a Windows machine with a Core 2 Duo E8400 processor and 4 GB of RAM.
The results were not encouraging, as Fleck started delaying handshakes after it reached ~1000 connections, i.e. it would take about one minute to accept a new connection.
These results improved when I used XSockets, which reached 8000 simultaneous connections before the same thing happened.
I tried to test on a Linux VPS with Mono, but I don't have enough experience with Linux administration, and a few system settings related to TCP etc. needed to be changed in order to allow a high number of concurrent connections, so I could only reach ~1000 on the default settings; after that the app crashed (in both the Fleck test and the XSockets test).
On the other hand, I tested node.js, and it seemed simpler to manage a very high number of connections, as node didn't crash when it reached the TCP limits.
All the tests were echo tests: the server sends the same message back to the client that sent it and to one other randomly chosen connected client, and each connected client sends a random ~30-character text message to the server at a random interval between 0 and 30 seconds.
I know my tests are not generic enough and I encourage anyone to run their own tests instead, but I just wanted to share my experience.
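For reference, one echo-test client along those lines could look roughly like this (sketched here with Python's websockets package rather than the C# client I actually used; the URI is a placeholder):

    # One echo-test client: send a random ~30-character message at a random
    # 0-30 s interval and read whatever the server sends back.
    import asyncio
    import random
    import string

    import websockets

    URI = "wss://example.com:8443/test"   # placeholder

    async def client():
        async with websockets.connect(URI) as ws:
            while True:
                await asyncio.sleep(random.uniform(0, 30))
                message = "".join(random.choices(string.ascii_letters, k=30))
                await ws.send(message)
                print(await ws.recv())    # our echo, or another client's message

    asyncio.run(client())

To simulate many concurrent connections you would run many of these clients (or many such tasks) in parallel.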
When we decided to try Fleck, we implemented a wrapper for the Fleck server and a JavaScript client API so that we could send acknowledgment messages back to the server. We wanted to test the performance of the server - message delivery time, percentage of lost messages, etc. The results were pretty impressive for us and we are currently using Fleck in our production environment.
We have 4000 - 5000 concurrent connections during peak hours. On average 40 messages are sent per second. The acknowledged-message ratio (acknowledged messages / total sent messages) never drops below 0.994. The average round trip for messages is around 150 milliseconds (the duration between the server sending the message and receiving its ack). Finally, we have not had any memory-related problems with the Fleck server even under heavy usage.