I want to make sure the latency between my app and the Bluetooth headphones is accounted for, but I have absolutely no idea how I can get this value. The closest thing I found was:
BluetoothLEPreferredConnectionParameters.ConnectionLatency which is only available on Windows 11... Otherwise there isn't much to go on.
Any help would be appreciated.
Thanks,
Peter
It's very difficult to get the exact latency because it is affected by many parameters - but you're on the right track in guessing that the connection parameters are part of the equation. I don't have much knowledge of UWP, but I can give you the general parameters that affect speed/latency, and then you can check their availability in the API or even contact the Windows technical team to see if they are supported.
When you make a connection with a remote device, the following factors impact the speed/latency of the connection:-
Connection Interval: this specifies the interval at which the packets are sent during a connection. The lower the value, the higher the speed. The minimum value as per the Bluetooth spec is 7.5ms.
Slave Latency: this is the value you originally mentioned - it specifies the number of connection events the peripheral is allowed to skip when it has no data to send. A value of 0 means the peripheral listens at every connection event, which gives you the fastest, most responsive connection.
Connection PHY: this is the modulation on which the packets are sent. If both devices support the 2M PHY, then the connection should be quicker.
Data Length/MTU Extension: these are two separate features, but I am lumping them together because the effect is the same - more bytes are sent per packet, which results in a higher throughput. The maximum value is 251 bytes per packet. A rough example of how these parameters combine into a throughput estimate is sketched below.
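To get a feel for how these parameters combine, here is a rough back-of-the-envelope sketch. The packet count per event and the usable payload are assumptions of mine, not values read from the UWP API, so treat the result as an order-of-magnitude estimate only.

/* Rough best-case BLE throughput estimate from assumed connection parameters. */
#include <stdio.h>

int main(void)
{
    double interval_ms    = 7.5;   /* connection interval (spec minimum) */
    int packets_per_event = 4;     /* assumed; depends on the stack and the PHY */
    int payload_bytes     = 244;   /* 251-byte DLE payload minus ATT/L2CAP headers */

    double bits_per_second = packets_per_event * payload_bytes * 8
                             / (interval_ms / 1000.0);
    printf("best-case throughput ~ %.0f kbit/s\n", bits_per_second / 1000.0);
    return 0;
}

With these assumed numbers it prints roughly 1000 kbit/s; with a longer interval or fewer packets per event the figure drops proportionally.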
You can find more information about these parameters here:-
A Practical Guide to BLE Throughput
Maximizing BLE Throughput: Everything You Need to Know
Bluetooth 5 Speed - How to Achieve Maximum Throughput
And below are some other links that might help you understand what is supported on UWP:-
Bluetooth Developer FAQ
BluetoothLEConnectionParameters.OptimizedProperty
Bluetooth LE Preferred Connection Parameter Class
Bluetooth LE Connection PHY class
How to Change MTU Size and PHY on Windows UWP C++
Related
I'm new to BLE, and bluetooth in general, but I'm on a project that includes communication via BT 5.
As the BLE communication has to transmit anywhere from around 2 bytes to 1 MB at a time, I'm looking for a way to optimize the transmission time.
I know the pros and cons of the lower transmission frequency (125 kbps) and of the highest transmission frequency (2 Mbps), and of the DLE of 251 PDU bytes, but from what I see in different forums and articles, the throughput mostly depends on the connection parameters, such as the connection interval and the packets per connection event. But where does the transmission frequency come in?
I've tried searching this forum for an answer, and several others, and even the BT core specification, but I haven't been able to find a solution for my problem.
If you read my answer at Why is BLE 4.2 faster than BLE 4.1, you can see that there are many components affecting the overall transfer speed.
You first have the radio transmission rate itself, which sets the upper limit.
You then have the overhead between all packets, which becomes less apparent the longer your packets are.
The connection interval and the length of each connection event can be important if you want the throughput to be high. If there is only one connection and the Bluetooth chip is not too stupid, the connection event length will fill the connection interval, and therefore the connection interval doesn't really matter. However, if there are other conflicting radio events scheduled in a way that forces the connection event to be closed, the transmission cannot continue until the next connection event. In that case, the throughput will be higher if you lower the connection interval. So, as a summary, it highly depends on which Bluetooth stack the chip runs, how it's configured by the host, and how many active connections you have.
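To make the per-packet overhead point concrete, here is a rough sketch for the 1M PHY. The framing sizes (preamble, access address, header, CRC) and the 150 µs inter-frame space come from the BLE spec, but modelling every exchange as one full data packet followed by an empty acknowledgement is my own simplification.

/* Rough effect of PDU length on 1M PHY throughput (simplified model). */
#include <stdio.h>

static double throughput_kbps(int payload_bytes)
{
    /* airtime in microseconds at 1 Mbit/s:
       preamble(1) + access address(4) + header(2) + payload + CRC(3) */
    double data_us = (1 + 4 + 2 + payload_bytes + 3) * 8.0;
    double ack_us  = (1 + 4 + 2 + 0 + 3) * 8.0;  /* empty PDU from the peer */
    double ifs_us  = 150.0;                      /* T_IFS, paid twice per exchange */
    int att_bytes  = payload_bytes - 4 - 3;      /* minus L2CAP and ATT headers */

    return att_bytes * 8.0 / (data_us + ifs_us + ack_us + ifs_us) * 1000.0;
}

int main(void)
{
    printf("27-byte PDUs (no DLE): ~%.0f kbit/s\n", throughput_kbps(27));   /* ~237 */
    printf("251-byte PDUs (DLE):   ~%.0f kbit/s\n", throughput_kbps(251));  /* ~791 */
    return 0;
}

The jump from roughly 240 to roughly 790 kbit/s is entirely due to the fixed overhead being amortized over a longer payload; the raw bitrate is 1 Mbit/s in both cases.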
The transmission rate controls your underlying bitrate, but on top of that sit different layers of the BLE protocol which reduce the realizable throughput. This article has a general derivation of how the different layers impact throughput, in case that's useful!
I have an Android app that writes several bytes to a Bluetooth device.
Looking at btsnoop_hci.log I see that, when a large number of bytes is sent to the BLE device, the app uses Prepare Write Request several times and then Execute Write Request: Immediately Write All.
Now my problem is how to perform this from my application using an RN4870 module.
At this moment I can connect, read services and characteristics, and write using the CHW command as described in the manual when there are only a few bytes.
But I cannot write the way the remote BLE device expects when there are a lot of bytes.
Thank you for your support
Marco
This is the Microchip answer:
Hello,
The core specifications are handled by the firmware.
The user doesn't have access at this level, so there is nothing you can set.
Regarding the long data question:
"Does RN4870 module support the Data Length Extension feature? "
RN4870 rev 1.28 supports DLE, but only partially. The normal packet size in BLE without DLE is 20 bytes.
With a standard DLE feature, the normal packet size should be 251 bytes.
However, in RN4870 Rev 1.28, the packet size is 151 bytes. So it is not a full implementation of the DLE.
The DLE feature (Data Length Extension) is embedded into the lower levels of the Bluetooth stack and there are no specific commands to enable or disable DLE. Essentially, if the peer device also supports DLE, then the DLE will be enabled.
So there are no specific commands that you need to send to increase throughput through DLE.
Regards,
In other words there is nothing to do!
In an Android application you can't directly set the DLE length; instead you should set the MTU size. The Android Bluetooth stack will calculate the DLE length based on the MTU. The maximum Data Length supported by the BT protocol is 251 bytes, but it can be anywhere between 27 and 251 depending on the BT controller's hardware capability. During connection the BT device will negotiate with the peer device (if the peer device supports DLE) to use the maximum DLE size supported by both devices.
To increase your throughput you can use the maximum supported MTU size of 512. You can also write without response, do your own error checking on the data (for example a parity check or CRC), and re-transmit from the application for better throughput.
I'm implementing a protocol over serial ports on Linux. The protocol is based on a request answer scheme so the throughput is limited by the time it takes to send a packet to a device and get an answer. The devices are mostly arm based and run Linux >= 3.0. I'm having troubles reducing the round trip time below 10ms (115200 baud, 8 data bit, no parity, 7 byte per message).
What IO interfaces will give me the lowest latency: select, poll, epoll or polling by hand with ioctl? Does blocking or non blocking IO impact latency?
I tried setting the low_latency flag with setserial. But it seemed like it had no effect.
Are there any other things I can try to reduce latency? Since I control all devices it would even be possible to patch the kernel, but it's preferred not to.
---- Edit ----
The serial controller used is a 16550A.
Request/answer schemes tend to be inefficient, and it shows up quickly on a serial port. If you are interested in throughput, look at a windowed protocol, like the Kermit file-sending protocol.
Now if you want to stick with your protocol and reduce latency, select, poll, read will all give you roughly the same latency, because as Andy Ross indicated, the real latency is in the hardware FIFO handling.
If you are lucky, you can tweak the driver behaviour without patching, but you still need to look at the driver code. However, having the ARM handle a 10 kHz interrupt rate will certainly not be good for the overall system performance...
Another option is to pad your packets so that you hit the FIFO threshold every time. It will also confirm whether or not it is a FIFO threshold problem.
10 msec @ 115200 baud is enough to transmit 100 bytes (assuming 8N1), so what you are seeing is probably because the low_latency flag is not set. Try
setserial /dev/<tty_name> low_latency
It will set the low_latency flag, which is used by the kernel when moving data up in the tty layer:
void tty_flip_buffer_push(struct tty_struct *tty)
{
unsigned long flags;
spin_lock_irqsave(&tty->buf.lock, flags);
if (tty->buf.tail != NULL)
tty->buf.tail->commit = tty->buf.tail->used;
spin_unlock_irqrestore(&tty->buf.lock, flags);
if (tty->low_latency)
flush_to_ldisc(&tty->buf.work);
else
schedule_work(&tty->buf.work);
}
The schedule_work call might be responsible for the 10 msec latency you observe.
Having talked to some more engineers about the topic, I came to the conclusion that this problem is not solvable in user space. Since we need to cross the bridge into kernel land, we plan to implement a kernel module which talks our protocol and gives us latencies < 1 ms.
--- edit ---
Turns out I was completely wrong. All that was necessary was to increase the kernel tick rate. The default 100 Hz tick rate added the 10 ms delay. A 1000 Hz tick rate and a negative nice value for the serial process give me the timing behavior I wanted to reach.
Serial ports on Linux are "wrapped" into unix-style terminal constructs, which hits you with one tick of lag, i.e. 10 ms. Try whether stty -F /dev/ttySx raw low_latency helps; no guarantees though.
On a PC, you can go hardcore and talk to standard serial ports directly: issue setserial /dev/ttySx uart none to unbind the Linux driver from the serial port hardware and control the port via inb/outb to the port registers. I've tried that, and it works great.
The downside is you don't get interrupts when data arrives and you have to poll the register. Often.
You should be able to do the same on the ARM device side; it may be much harder on exotic serial port hardware.
Here's what setserial does to set low latency on a file descriptor of a port:
struct serial_struct serial;

ioctl(fd, TIOCGSERIAL, &serial);    /* read the current settings */
serial.flags |= ASYNC_LOW_LATENCY;  /* flag defined in <linux/serial.h> */
ioctl(fd, TIOCSSERIAL, &serial);    /* write them back */
In short: Use a USB adapter and ASYNC_LOW_LATENCY.
I've used an FT232RL-based USB adapter on Modbus at 115.2 kbps.
I get about 5 transactions (to 4 devices) in about 20 ms total with ASYNC_LOW_LATENCY. This includes two transactions to a slow-poke device (4 ms response time).
Without ASYNC_LOW_LATENCY the total time is about 60 ms.
With FTDI USB adapters, ASYNC_LOW_LATENCY sets the inter-character timer on the chip itself to 1 ms (instead of the default 16 ms).
I'm currently using a home-brewed USB adapter and I can set the latency for the adapter itself to whatever value I want. Setting it to 200 µs shaves another millisecond off that 20 ms.
None of those system calls have an effect on latency. If you want to read and write one byte as fast as possible from userspace, you really aren't going to do better than a simple read()/write() pair. Try replacing the serial stream with a socket from another userspace process and see if the latencies improve. If they don't, then your problems are CPU speed and hardware limitations.
Are you sure your hardware can do this at all? It's not uncommon to find UARTs with a buffer design that introduces many bytes worth of latency.
At those line speeds you should not be seeing latencies that large, regardless of how you check for readiness.
You need to make sure the serial port is in raw mode (so you do "noncanonical reads") and that VMIN and VTIME are set correctly. You want to make sure that VTIME is zero so that an inter-character timer never kicks in. I would probably start with setting VMIN to 1 and tune from there.
The syscall overhead is nothing compared to the time on the wire, so select() vs. poll(), etc. is unlikely to make a difference.
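For reference, here is a minimal sketch of the raw-mode setup described above, with VMIN=1 and VTIME=0. The device path and baud rate are placeholders; error handling is kept to a minimum.

/* Open a serial port in raw, noncanonical mode with VMIN=1, VTIME=0. */
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_raw(const char *dev)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) {
        close(fd);
        return -1;
    }
    cfmakeraw(&tio);              /* noncanonical mode, no echo, no signal chars */
    cfsetispeed(&tio, B115200);   /* placeholder baud rate */
    cfsetospeed(&tio, B115200);
    tio.c_cc[VMIN]  = 1;          /* read() returns as soon as one byte arrives */
    tio.c_cc[VTIME] = 0;          /* never start the inter-character timer */
    if (tcsetattr(fd, TCSANOW, &tio) < 0) {
        close(fd);
        return -1;
    }
    return fd;                    /* e.g. open_raw("/dev/ttyS0") */
}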
I am trying to figure out what the maximum throughput of a Bluetooth 2.1 SPP connection is.
I found 2 publications concerned with the topic (1, 2) and they both show diagrams that plot the throughput as a function of the signal-to-noise ratio (which I can assume to be perfect for my consideration) and the kind of ACL packet used. My problem is, I have no idea which ACL packets are used. How is this decision made? Is it made on the fly, like "whatever is needed to transfer the current data is used"?
Furthermore, in the Serial Port Profile specification (chapter 2.3) I found this sentence:
This profile requires support for one-slot packets only. This means that this profile
ensures that data rates up to 128 kbps can be used. Support for higher rates is optional.
The last sentence really confuses me. How do I find out whether this "option" applies in my case?
This means that in SPP mode, all bluetooth modules should work up to 128kbps, and some modules may work even faster.
Under SPP is RFCOMM, which tries to deliver the packets as quickly as possible. If only one packet is sent in one timeslot, you get the 128kbps. The firmware of the bluetooth module, or the HCI driver however can do things differently.
There are SPP speeds up to 480kbps reported - however this requires that both SPP modules are from the same vendor (e.g. BlueGiga iWrap modules can do this speed).
On the other end, if you're connecting to an unknown device, for example a BT112, or an RN41 module to an Android device, the actual usable SPP bandwidth can be much lower than 128 kbps (I have measurements as low as 10kbps).
In case of some old generation iPhones, the usable SPP bandwidth is around 8 kbps.
It is a wise idea to treat "standards" and "datasheets" very conservative, and do your own measurements if actual net data bandwidth is critical.
Even though BT and BT+EDR have theoretical on-the-air bitrates of up to 3 Mbps, the actual usable net data bandwidth is way smaller.
I would like to track a large number of beacons (~500) at once within a 50-100 m radius via an app on an iPhone (5s). I've had a look at the spec and online, and I can't see whether there is any limit on the number of beacons you can track at once using BLE. Does anyone know if such a limitation exists, or whether an iPhone 5s would be up to the task of tracking that many beacons?
You used the word track, but iOS has two different methods: monitoring and ranging.
You can set a maximum of 20 regions to monitor. (Found in documentation for the startMonitoringForRegion: method.) Region limits mostly come into play if your app is in the background. The OS will alert your app when you enter or leave a region that you're monitoring (give or take a few minutes). The OS will even launch your app just to let it know what happened (although only for a short time).
The other method is ranging, which is to find all the beacons within the Bluetooth range of the device (typically around 100 feet give or take). If your beacons are spread out over 100 miles, then you probably won't run into any practical limit here. I have not found any documentation for this, and I have only four beacons that I'm testing with, and four at a time works.
Here's one way to handle your situation. Make all your 500 beacons use the same UUID, and make a beacon region using the initWithProximityUUID:identifier: method. (The identifier is just for you -- it doesn't affect anything.) Start monitoring for that beacon region. That way, your app will be notified whenever one of your 500 beacons is found (give or take a few minutes). Once notified, you can use startRangingBeaconsInRegion: to find all the beacons around that area, then use the major and minor values to figure out which beacons the user is near.
I'll add to Tim Tisdall's answer, which sets out the right framework. I can't speak to the specific capabilities of the iPhone 5s, or iOS in general, but I don't see any reason why it wouldn't return every ADV_IND packet (i.e. beacon transmission) that it receives.
The question is, will the 500 beacons be able to transmit their ADV_IND packets without collisions?
It takes about 0.128ms to transmit an ADV_IND packet. The time between advertising transmissions is configurable between 20ms and 10240ms (at intervals of 0.625ms), so the probability of collisions depends on the configuration of the beacons.
Based on the Poisson distribution, the probability of a collision for any given ADV_IND packet is 1-exp(-2*N*(0.128/AI)), where N is the number of beacons within range, AI is the time in milliseconds of the advertising interval (assuming all the beacons are configured the same), and the 0.128 is the time in milliseconds it takes to send the ADV_IND packet. (See http://www3.cs.stonybrook.edu/~jgao/CSE590-fall09/aloha-analysis.pdf if you want an explanation.)
For 500 beacons with the maximum advertising interval of about 10 seconds, there will be a collision about once every 81 packets (or about 6 out of 500). If you're willing to wait for a couple intervals (i.e. 30 seconds), there's a good chance you'll be able to receive all 500 ADV_IND packets.
On the other hand, if the advertising interval is smaller, say 500ms, you'll have a collision about 23% of the time (or 113 out of 500). You'd have to wait for several more intervals to improve the probability that you'd see the broadcasts from all the beacons.
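As a quick sanity check on those numbers, here is the same formula evaluated for both advertising intervals (the 0.128 ms ADV_IND airtime is the figure assumed above; compile with -lm):

/* Collision probability estimate for N beacons sharing the advertising channels. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double n  = 500.0;                    /* beacons within range */
    double tx = 0.128;                    /* ADV_IND airtime in ms */
    double ai[] = { 10240.0, 500.0 };     /* advertising intervals in ms */

    for (int i = 0; i < 2; i++) {
        double p = 1.0 - exp(-2.0 * n * (tx / ai[i]));
        printf("AI = %7.1f ms: collision probability ~%.1f%% (~%.0f of 500 packets)\n",
               ai[i], 100.0 * p, p * n);
    }
    return 0;
}

This reproduces the figures above: about 1.2% (roughly 6 of 500) at a 10240 ms interval and about 23% (roughly 113 of 500) at 500 ms.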
The other way to look at it is that the more beacons you have, the longer you have to wait to make sure you receive all their packets. (The math to calculate the delay to receive the packets with a certain probability from the number of beacons and the advertising interval is too much for me today.)
One caveat: if you want to connect to these beacons, as opposed to just receiving the ADV_IND packet, that requires an exchange of two more packets on the advertising channels, and the probability of a collision in the advertising channels goes up a bit.
If I am reading your question right, you want to put all 500 iBeacons within 100 meters of each other, meaning their transmissions will overlap. You will probably run into radio congestion problems long before you run into any limitations of iOS7 or your phone.
I have successfully tested 20 iBeacons in close proximity without problems, but 500 iBeacons is an extreme density. This discussion of the hardware issue suggests you may run into trouble.
At a minimum, the collisions of the transmissions of 500 iBeacons will make it take longer for your iOS device to see each iBeacon. Normally, iOS 7 provides a ranging update once per second for each iOS device, but you may find that you get updates much less often. It all depends on your application whether or not less frequent updates are acceptable.
Even if delays are acceptable, I would absolutely test this before counting on it working at all. Unfortunately, that means getting your hands on lots of iBeacons.
I don't agree. It is true that BLE beacons only transmit advertising data, but the transmission of such data lasts about 3 ms (considering the three advertising channels).
With 500 beacons, WITHOUT considering any collisions, the scanner will take 1.5 s to see them all.
But if all beacons are configured the same way (same advertising interval), it is inevitable to have collisions, which lead to undiscovered beacons. Even if the advertising interval differs between beacons, collisions occur. To lower the collision probability one should use a longer advertising interval, but this leads to longer discovery latency.
This reasoning is very rough; it doesn't take many effects into account and is just an order-of-magnitude calculation.
By the way, the question is not easy; there are many parameters which play a role, some known and some unknown. But I have been working with BLE for about a year and, to me, 500 is a huge number, and there is a real possibility that you won't see the majority of nodes because of collisions.
I was doing some research into iBeacons because of this question (I had no idea what it was about).
It seems that on the "beacon" side of things all that happens is that general advertising packets are sent out. It's similar to how a device advertises that you can connect to it. However, you don't actually connect to iBeacons; the phone just reads those advertising packets. There's no built-in limitation on how many advertising packets a device can receive.
So, it wouldn't surprise me if 500 iBeacons ran with no issues. The advertising packets are small and are spaced out (time-wise, they are repeated every X ms). There's no communication going from the phone to the iBeacon; the phone is simply receiving the packets it hears. If there's interference on one packet it'll likely manage to get the next one.