I've been reading this article on Bluetooth 5 & BLE maximum throughput. It provides data on maximum throughput across different devices and configurations. As far as I understand, these measurements refer to the connection between two devices and their respective data rates.
When establishing connections to more than one device, do these data rates apply to each connection independently? Or is the data rate shared between all of the connections?
For example: if I have a device with a maximum throughput of 1000 kbps and connect it to two peripherals, will each connection have a throughput of 1000 kbps, or will the rate be split across the two connections at 500 kbps each?
All Bluetooth chips I'm aware of have only one radio and one antenna. That means the connections are time-slotted. So if your connections use the 1 Mbit/s PHY, the total throughput won't exceed 1 Mbit/s.
How much each connection gets depends heavily on how the scheduler is implemented. If two connections have the same connection interval, a scheduler will usually allocate a newly established connection right after the first connection's connection events, which can result in the first connection only being able to send one or two packets per connection event while the second connection gets all of the remaining radio time.
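As a rough back-of-the-envelope illustration, assuming an idealized scheduler that splits radio time evenly between connections (which, as noted above, real schedulers don't guarantee) and ignoring protocol overhead:

```java
// Rough upper-bound estimate: one radio, time-slotted across N connections.
// Assumes an even split of radio time and ignores headers, IFS and ACKs,
// so real throughput per connection will be lower.
public class ThroughputSplit {
    public static void main(String[] args) {
        double phyRateKbps = 1000.0;   // 1 Mbit/s PHY
        int connections = 2;

        double aggregateCeiling = phyRateKbps;                 // never exceeded: one radio
        double perConnectionEstimate = phyRateKbps / connections;

        System.out.printf("Aggregate ceiling: %.0f kbps%n", aggregateCeiling);
        System.out.printf("Per-connection estimate (even split): %.0f kbps%n",
                perConnectionEstimate);
    }
}
```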
Related
I am using a timeout mechanism to close my connection sockets, which closes connections that have been inactive for more than 15 s. But my Linux system seems to only allow me to create 1024 sockets at the same time.
So I want to ask
(1) Is there a limit on the number of connection sockets?
(2) If there is an upper limit, does that mean the timeout mechanism cannot cope with a large number of concurrent requests arriving in the same period?
(3) I am using a dual-core virtual machine with 4 GB of RAM. Will upgrading the configuration increase the number of sockets I can create?
(4) If (2) is correct, what method (or what timeout value) should the server use to close sockets?
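For reference, this is roughly the kind of idle-timeout handling I mean (a minimal sketch of a plain blocking TCP server; the port and buffer size are arbitrary):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Minimal sketch: close a client socket after 15 s of inactivity.
// setSoTimeout() makes the blocking read throw SocketTimeoutException
// once no data has arrived for the given period.
public class IdleTimeoutServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start();
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client; InputStream in = c.getInputStream()) {
            c.setSoTimeout(15_000);          // 15 s idle timeout
            byte[] buf = new byte[1024];
            while (in.read(buf) != -1) {
                // process incoming data ...
            }
        } catch (SocketTimeoutException e) {
            // inactive for more than 15 s -> socket closed by try-with-resources
        } catch (IOException ignored) {
        }
    }
}
```

Note that each open socket still consumes a file descriptor while it is open, and the default per-process limit on Linux is commonly 1024 (ulimit -n).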
I'm setting up a device to advertise as a server (peripheral) and a mobile phone to act as the client (central). My issue is: when my central 'reads' from the peripheral, how many packets can the peripheral respond with for a single request?
What I have seen so far is that the peripheral may respond with a 20-byte packet and then indicate another 20-byte packet. I don't see how this could achieve the stated data rates.
From your question I understand that what you actually want to know is how to achieve the maximum BLE data rate, right?
Have a look here:
https://stackoverflow.com/search?q=BLE+throughput
Especially here: BLE peripheral throughput limit and here How can I increase the throughput of my BLE application?
In general, the number of packets is not the key factor in the first place. Start by looking at the MTU size and the connection interval. Beyond that, yes, it is possible to send multiple packets per connection event (I have to guess here, usually 3 or 4, but I'm not sure).
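For example, if your central is an Android phone (an assumption on my part), the MTU can be requested like this; 247 is just a commonly used maximum, not something your peripheral is guaranteed to grant:

```java
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;

// Sketch for an Android central: request a larger ATT MTU right after the
// services have been discovered. The peripheral may grant less than requested;
// the negotiated value arrives in onMtuChanged().
public class MtuCallback extends BluetoothGattCallback {
    @Override
    public void onServicesDiscovered(BluetoothGatt gatt, int status) {
        if (status == BluetoothGatt.GATT_SUCCESS) {
            gatt.requestMtu(247);   // commonly used maximum on many stacks
        }
    }

    @Override
    public void onMtuChanged(BluetoothGatt gatt, int mtu, int status) {
        // mtu minus 3 bytes of ATT header are available per notification/write
    }
}
```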
Moreover, in newer Bluetooth versions you can check whether your device supports Data Length Extension.
For further reading I suggest
https://punchthrough.com/pt-blog-post/maximizing-ble-throughput-on-ios-and-android/
and
https://punchthrough.com/pt-blog-post/maximizing-ble-throughput-part-3-data-length-extension-dle/
Using notifications instead of read requests will give you the best throughput.
A lower connection interval will also usually increase the throughput.
Using LE Data Length Extension and a large MTU will increase it even further.
The last step is to switch from the 1 Mbit/s PHY to the 2 Mbit/s PHY to get even higher throughput.
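If your central happens to be an Android phone, the knobs above map roughly to the following calls (a sketch; the GATT and characteristic objects are assumed to come from your own connection code, and Data Length Extension itself is negotiated by the stacks rather than requested from app code, as far as I know):

```java
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCharacteristic;
import android.bluetooth.BluetoothGattDescriptor;
import java.util.UUID;

// Sketch of the throughput-related knobs from an Android central's side.
public class ThroughputTuning {
    // Client Characteristic Configuration descriptor (standard UUID)
    private static final UUID CCCD =
            UUID.fromString("00002902-0000-1000-8000-00805f9b34fb");

    public static void tune(BluetoothGatt gatt, BluetoothGattCharacteristic ch) {
        // 1. Notifications instead of reads: no request/response round trip.
        gatt.setCharacteristicNotification(ch, true);
        BluetoothGattDescriptor cccd = ch.getDescriptor(CCCD);
        cccd.setValue(BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE);
        gatt.writeDescriptor(cccd);

        // 2. Ask for a short connection interval.
        gatt.requestConnectionPriority(BluetoothGatt.CONNECTION_PRIORITY_HIGH);

        // 3. Prefer the 2 Mbit/s PHY (API 26+, only if both sides support it).
        gatt.setPreferredPhy(BluetoothDevice.PHY_LE_2M_MASK,
                BluetoothDevice.PHY_LE_2M_MASK,
                BluetoothDevice.PHY_OPTION_NO_PREFERRED);
    }
}
```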
From the btsnoop log below, I found that the BLE central device and peripheral device got connected after a few rounds of negotiation over
connection parameters, including the connection interval, connection latency, supervision timeout, etc.
As seen in the btsnoop log, the connection interval is set to 1 second. My questions are:
Why doesn't the connection between them disappear 1 second after they connect?
What is the real meaning of the connection interval?
As you know, a pillar of BLE is low energy consumption.
The main rule is to turn the radio on as little as possible and turn it off as soon as possible.
When a connection is established, the radio isn't always active, even when a peer wants to transmit. During the transmission phase the radio turns on and off several times.
The connection interval is the time between two connection events, and packets are transmitted inside each connection event.
Suppose the peer wants to transmit 10 packets: the radio is turned on for packet transmission (say, a maximum of 6 packets per connection event), then turned off for the duration of the connection interval ... now 6 packets have been transmitted. After the connection interval, the radio is turned on again to transmit the last 4 packets, and so on.
The connection interval can range from 7.5 ms to 4 s, and it is negotiated between the two peers.
Of course, a shorter connection interval means a higher data rate but more power consumption, and vice versa.
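To put some purely illustrative numbers on this (assuming 6 packets per connection event, 20 bytes of application payload per packet, and no retransmissions):

```java
// Back-of-the-envelope illustration of why the connection interval dominates
// throughput. All numbers are illustrative assumptions, not measured values.
public class ConnectionIntervalMath {
    public static void main(String[] args) {
        int packetsPerEvent = 6;
        int payloadBytes = 20;

        double intervalMs = 1000.0;   // the 1 s interval from the question
        double bytesPerSecond = packetsPerEvent * payloadBytes * (1000.0 / intervalMs);
        System.out.printf("~%.0f B/s at a %.1f ms interval%n", bytesPerSecond, intervalMs);

        intervalMs = 7.5;             // minimum allowed interval
        bytesPerSecond = packetsPerEvent * payloadBytes * (1000.0 / intervalMs);
        System.out.printf("~%.0f B/s at a %.1f ms interval%n", bytesPerSecond, intervalMs);
    }
}
```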
Paolo.
BLE is a radio communication protocol that works in the 2.4 GHz spectrum.
If you measure the radio current on a CRO while the device is in a connection, you will get a graph similar to the one shown above.
The peaks indicate that radio is turned ON. Between the peaks the device is sleeping to save power.
To put it simply, connection interval is the time interval between the peaks.
In other words, it is the time the device sleeps after sending a packet before waking up to send a packet again.
This timing is synchronized between the two communicating devices. It is like two people agreeing to meet at a particular time and place.
I've recently read a lot about best practices with JMS, Spring (and TIBCO EMS) around connections, sessions, consumers & producers.
When working within the Spring world, the prevailing wisdom seems to be:
for consuming/incoming flows - to use an AbstractMessageListenerContainer with a number of consumers/threads.
for producing/publishing flows - to use a CachingConnectionFactory underneath a JmsTemplate to maintain a single connection to the broker and then cache sessions and producers.
For producing/publishing, this is what my (largeish) server application is now doing, where previously it was creating a new connection/session/producer for every single message it was publishing (bad!) due to use of the raw connection factory under JmsTemplate. The old behaviour would sometimes lead to 1,000s of connections being created and closed on the broker in a short period of time in high peak periods and even hitting socket/file handle limits as a result.
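For context, the producing side now looks roughly like this (a sketch; bean names, the qualifier and the cache size are illustrative, and the raw EMS connection factory is assumed to be defined elsewhere):

```java
import javax.jms.ConnectionFactory;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

// One underlying broker connection; sessions and producers are cached
// instead of being recreated for every send.
@Configuration
public class JmsProducerConfig {

    @Bean
    public CachingConnectionFactory cachingConnectionFactory(
            @Qualifier("emsConnectionFactory") ConnectionFactory rawCf) {
        CachingConnectionFactory ccf = new CachingConnectionFactory(rawCf);
        ccf.setSessionCacheSize(10);   // how many sessions to keep cached
        ccf.setCacheProducers(true);   // the default, stated here for clarity
        return ccf;
    }

    @Bean
    public JmsTemplate jmsTemplate(CachingConnectionFactory ccf) {
        return new JmsTemplate(ccf);   // send() now reuses cached sessions/producers
    }
}
```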
However, when switching to this model I am having trouble understanding what the performance limitations/considerations are with the use of a single TCP connection to the broker. I understand that the JMS provider is expected to ensure it can be used in the multi-threaded way etc - but from a practical perspective
it's just a single TCP connection
the JMS provider to some degree needs to co-ordinate writes down the pipe so they don't end up an interleaved jumble, even if it has some chunking in its internal protocol
surely this involves some contention between threads/sessions using the single connection
with certain network semantics (high latency to broker? unstable throughput?) surely a single connection will not be ideal?
On the assumption that I'm somewhat on the right track
Am I off base here and misunderstanding how the underlying connections work and are shared by a JMS provider?
Is any contention a problem that is mitigated by having more connections, or does it just move the contention to the broker?
Does anyone have any practical experience of hitting such a limit they could share? Either with particular message or network throughput, or even caused by the number of threads/sessions sharing a connection in parallel.
Should one be concerned in a single-connection scenario about sessions that write very large messages blocking other sessions that write small messages?
Would appreciate any thoughts or pointers to more reading on the subject or experience even with other brokers.
When thinking about the bottleneck, keep in mind two facts:
TCP is a streaming protocol, and almost all JMS providers use a TCP-based protocol
lots of the actions from the TIBCO EMS client to the EMS server are in the form of request/reply. For example, when you publish a message / acknowledge a received message / commit a transactional session, what's happening under the hood is that some TCP packets are sent out from the client and the server responds with some packets as well. Because of the nature of TCP streaming, those actions have to be serialised if they are initiated from the same connection -- otherwise, say, if from one thread you publish a message and at exactly the same time from another thread you commit a session, the packets would be mixed on the wire and there would be no way for the server to interpret the right message from them. [ Note: the synchronisation is done at the EMS client library level, hence the user can feel free to share one connection with multiple threads/sessions/consumers/producers ]
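To illustrate the sharing the spec allows (a plain JMS sketch, not EMS-specific: one Connection, i.e. one TCP socket, with each Session used by exactly one thread; the queue name is a placeholder):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

// One Connection shared by several threads; each thread creates and uses its
// own Session/MessageProducer. The provider serialises the writes on the wire.
public class SharedConnectionExample {

    public static void run(ConnectionFactory factory) throws Exception {
        Connection connection = factory.createConnection();
        connection.start();

        Runnable worker = () -> {
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("example.queue");
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage(
                        "hello from " + Thread.currentThread().getName()));
                session.close();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        };

        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        connection.close();
    }
}
```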
My own experience is that multiple connections always outperform a single connection. In a lossy network situation, using multiple connections is definitely a must. Under the best network conditions, with multiple connections, a single client can nearly saturate the network bandwidth between client and server.
That said, it really depends on your clients' performance requirements; a single connection over a good network can already provide good enough performance.
Even if you use one connection and 100 sessions, it means you are ultimately using 100 threads; it is the same as using 10 connections * 10 sessions = 100 threads.
You are good until you reach your system resource limits.
I'm working on a server architecture for sending/receiving messages from remote embedded devices, which will be hosted on Windows Azure. The front-facing servers are going to be maintaining persistent TCP connections with these devices, and I need a way to communicate with them on the backend.
Problem facts:
Devices: ~10,000
Frequency of messages device is sending up to servers: 1/min
Frequency of messages originating server side (e.g. from user actions, scheduled triggers, etc.): 100/day
Average size of message payload: 64 bytes
Upward communication
The devices send up messages very frequently (sensor readings). The constraints for that data are not very strong, due to the fact that we can aggregate/insert those sensor readings in a batched manner, and that they don't require in-order guarantees. I think the best way of handling them is to put them in a Storage Queue, and have a worker process poll the queue at intervals and dump that data. Of course, I'll have to be careful about making sure the worker process does this frequently enough so that the queue doesn't infinitely back up. The max batch size of Azure Storage Queues is 32, but I'm thinking of potentially pulling in more than that: something like publishing to the data store every 1,000 readings or 30 seconds, whichever comes first.
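A rough sketch of the "every 1,000 readings or 30 seconds, whichever comes first" batching I have in mind (the queue poller feeding it and the data-store flush are placeholders; as noted, each individual poll against the Storage Queue is capped at 32 messages, so a batch accumulates across several polls):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Accumulate readings and flush when either the size or the time limit is hit.
public class ReadingBatcher implements Runnable {
    private static final int MAX_BATCH = 1_000;
    private static final long MAX_WAIT_MS = 30_000;

    private final BlockingQueue<byte[]> incoming = new LinkedBlockingQueue<>();

    public void offer(byte[] reading) {          // called by the queue poller
        incoming.add(reading);
    }

    @Override
    public void run() {
        List<byte[]> batch = new ArrayList<>();
        long deadline = System.currentTimeMillis() + MAX_WAIT_MS;
        while (true) {
            try {
                long wait = Math.max(1, deadline - System.currentTimeMillis());
                byte[] reading = incoming.poll(wait, TimeUnit.MILLISECONDS);
                if (reading != null) {
                    batch.add(reading);
                }
                boolean full = batch.size() >= MAX_BATCH;
                boolean timedOut = System.currentTimeMillis() >= deadline;
                if ((full || timedOut) && !batch.isEmpty()) {
                    flush(batch);                // batched insert into the data store
                    batch = new ArrayList<>();
                }
                if (full || timedOut) {
                    deadline = System.currentTimeMillis() + MAX_WAIT_MS;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    private void flush(List<byte[]> batch) {
        // placeholder: bulk insert into SQL / table storage
    }
}
```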
Downward communication
The server sends down updates and notifications much less frequently. This is a slightly harder problem, as I can see two viable paradigms here (with some blending in between). Could either:
Create a Service Bus Queue for each device (or one queue with thousands of subscriptions - the limit on the number of queues is 10,000)
Have a state table housed in a DB that contains the latest "state" of a specific message type to be sent to the devices
With option 1, the application server simply enqueues a message in a fire-and-forget manner. On the front-end servers, however, there are quite a few things that have to happen. Concerns I can see include:
Monitoring 10k queues (or many subscriptions off of a queue - the Azure SDK apparently reuses connections for subscriptions to the same queue)
Connection Management
Should stop monitoring a queue if the device disconnects.
Need to expire messages if a device is disconnected for an extended period of time (so that the queue doesn't back up)
Need to enable some type of "refresh" mechanism to update a device's complete state when it comes back online
The good news is that Service Bus queues are durable, and with sessions, messages can be made to arrive in FIFO order.
With option 2, the DB would house a table that maintains the state for all of the devices. This table would be checked periodically by the front-facing servers (every few seconds or so) for state changes written to it by the application server. The front-facing servers would then dispatch to the devices. This removes the requirement for FIFO queueing, the reasoning being that each message contains the latest state and doesn't have to compete with other messages destined for the same device. The message is ephemeral: if it fails, it will be resent when the device reconnects and requests a refresh, or at the next check interval of the front-facing server.
In this scenario, the need for queues seems to be removed, but the DB becomes the bottleneck here, and I fear it's not as scalable.
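To make option 2 concrete, this is roughly what the front-facing server's polling could look like (a sketch only; the device_state table, its columns, and dispatch() are hypothetical placeholders, not an existing schema or API):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

// Poll a hypothetical device_state table every few seconds and dispatch the
// latest state to the devices this server currently holds connections for.
public class StateTablePoller {
    private final DataSource dataSource;
    private long lastSeenVersion = 0;           // high-water mark of applied changes

    public StateTablePoller(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void pollOnce() throws Exception {
        String sql = "SELECT device_id, message_type, payload, version "
                   + "FROM device_state WHERE version > ? ORDER BY version";
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(sql)) {
            ps.setLong(1, lastSeenVersion);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    dispatch(rs.getString("device_id"),
                             rs.getString("message_type"),
                             rs.getBytes("payload"));
                    lastSeenVersion = rs.getLong("version");
                }
            }
        }
    }

    private void dispatch(String deviceId, String type, byte[] payload) {
        // placeholder: write to the device's open TCP socket if it is connected
    }
}
```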
These are both viable approaches, and I feel this question is already becoming too large (although I can provide more descriptions if necessary). Just wanted to get a feel for what's possible, what's usually done, if there's something fundamental I'm missing, and what things in the cloud can I take advantage of to not reinvent the wheel.
If you can identify the device (maybe by device id/IMEI/MAC address) from the message it sends, then you can reduce the number of queues from 10,000 to 1 and avoid having 10,000 subscriptions as well. This could also help you with the downward communication, as you will be able to identify the device and send the message to the appropriate socket.
Since, as you mentioned, the connections are long-lived, you could deliver commands to the devices that are connected and decide what to do with the commands for the devices that are not connected.
Hope it helps