Bluetooth SPP throughput - bluetooth

I am trying to figure out what the maximum throughput of a Bluetooth 2.1 SPP connection is.
I found two publications concerned with the topic (1, 2), and both show diagrams of the throughput as a function of the signal-to-noise ratio (which I can assume to be perfect for my purposes) and the kind of ACL packet used. My problem is, I have no idea which ACL packets are used. How is this decision made? Is it made on the fly, as in "whatever is needed to transfer the current data is used"?
Furthermore, in the Serial Port Profile specification (chapter 2.3) I found this sentence:
This profile requires support for one-slot packets only. This means that this profile
ensures that data rates up to 128 kbps can be used. Support for higher rates is optional.
The last sentence really confuses me. How do I find out whether this "option" applies in my case?

This means that in SPP mode, all Bluetooth modules should work up to 128 kbps, and some modules may work even faster.
Beneath SPP sits RFCOMM, which tries to deliver the packets as quickly as possible. If only one packet is sent per timeslot, you get the 128 kbps. The firmware of the Bluetooth module, or the HCI driver, can however do things differently.
There are SPP speeds up to 480kbps reported - however this requires that both SPP modules are from the same vendor (e.g. BlueGiga iWrap modules can do this speed).
On the other hand, if you're connecting to an unknown device, for example a BT112 or an RN41 module talking to an Android device, the actual usable SPP bandwidth can be much lower than 128 kbps (I have measurements as low as 10 kbps).
In the case of some older-generation iPhones, the usable SPP bandwidth is around 8 kbps.
It is wise to treat "standards" and "datasheets" very conservatively and to do your own measurements if the actual net data bandwidth is critical.
Even though BT and BT+EDR have theoretical on-air bit rates of up to 3 Mbps, the actual usable net data bandwidth is far smaller.
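If you do measure it yourself, a crude timing test over the SPP virtual serial port gives a usable ballpark figure. Below is a minimal sketch using pyserial; the /dev/rfcomm0 device node and the transfer size are assumptions for a typical Linux setup, not something mandated by SPP.

```python
# Rough SPP throughput measurement over a Bluetooth virtual serial port.
# Assumes pyserial is installed and the remote side sinks or echoes the data.
# "/dev/rfcomm0" and the transfer sizes are placeholders for your setup.
import time
import serial

PORT = "/dev/rfcomm0"      # hypothetical RFCOMM device node
PAYLOAD = bytes(1024)      # 1 KiB of zeros per write
TOTAL_BYTES = 256 * 1024   # send 256 KiB in total

with serial.Serial(PORT, timeout=5) as ser:   # baud rate is ignored for RFCOMM
    sent = 0
    start = time.monotonic()
    while sent < TOTAL_BYTES:
        sent += ser.write(PAYLOAD)
    ser.flush()            # wait until the OS buffer is drained
    elapsed = time.monotonic() - start

print(f"net TX throughput: {sent * 8 / elapsed / 1000:.1f} kbps")
```

Note that flush() only guarantees the data has left the local buffers, not that the remote side has received it; for a stricter figure, have the peer echo or acknowledge the data.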

Related

Get Latency of Bluetooth Headphones UWP C++

I want to make sure the latency between my app and the bluetooth headphones is accounted for, but I have absolutely no idea how I can get this value. The closest thing I found was:
BluetoothLEPreferredConnectionParameters.ConnectionLatency which is only available on Windows 11... Otherwise there isn't much to go on.
Any help would be appreciated.
Thanks,
Peter
It's very difficult to get the exact latency because it is affected by many parameters - but you're on the right track in guessing that the connection parameters are a factor in this equation. I don't have much knowledge of UWP, but I can give you the general parameters that affect the speed/latency, and then you can check their availability in the API or even contact the Windows technical team to see if they are supported.
When you make a connection with a remote device, the following factors impact the speed/latency of the connection:-
Connection Interval: this specifies the interval at which the packets are sent during a connection. The lower the value, the higher the speed. The minimum value as per the Bluetooth spec is 7.5ms.
Slave Latency: this is the value you originally mentioned - it specifies the number of connection events the peripheral is allowed to skip when it has no data to send (it is the supervision timeout, not this value, that determines when a connection is considered lost). A value of 0 means that you have the fastest, most robust connection.
Connection PHY: this is the modulation on which the packets are sent. If both devices support 2MPHY, then the connection should be quicker.
Data Length/MTU Extension: these are two separate features, but I am lumping them together because the effect is the same - more bytes are sent per packet, which results in a higher throughput. The maximum value is 251 bytes per packet (see the rough throughput sketch after this list).
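To get a feel for how these parameters combine, here is a back-of-the-envelope throughput estimate. This is a sketch only, not a UWP API call: the packets-per-event count is an assumption (real stacks negotiate it dynamically), and the 244-byte payload assumes data length extension with a 251-byte PDU minus the L2CAP/ATT headers.

```python
# Rough BLE throughput estimate from connection parameters.
# The packets-per-event figure is an assumption; real stacks negotiate it,
# so treat the result as an upper-bound ballpark only.

def ble_throughput_kbps(conn_interval_ms: float,
                        payload_bytes: int = 244,    # 251-byte PDU minus L2CAP/ATT headers
                        packets_per_event: int = 4) -> float:
    bytes_per_event = payload_bytes * packets_per_event
    return bytes_per_event * 8 / conn_interval_ms    # bits per ms == kbps

# Example: 7.5 ms interval with data length extension and 4 packets per event
print(ble_throughput_kbps(7.5))    # ~1040 kbps
print(ble_throughput_kbps(30.0))   # ~260 kbps with a longer interval
```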
You can find more information about these parameters here:-
A Practical Guide to BLE Throughput
Maximizing BLE Throughput: Everything You Need to Know
Bluetooth 5 Speed - How to Achieve Maximum Throughput
And below are some other links that might help you understand what is supported on UWP:-
Bluetooth Developer FAQ
BluetoothLEConnectionParameters.OptimizedProperty
Bluetooth LE Preferred Connection Parameter Class
Bluetooth LE Connection PHY class
How to Change MTU Size and PHY on Windows UWP C++

Wifi station Bandwidth 160MHz / 80MHz control (linux)

I observe the wifi station TX bandwidth being reduced from 160 MHz to 80 MHz when the station is farther away from the AP compared to when it is closer. I'm using the "iw wlan0 station dump" command to check that. The AP is forced to 160 MHz and actually uses 160 MHz for the downlink in both cases. But the AX200 station uses 80 MHz for the uplink once the RSSI drops below roughly -60 dBm.
I've checked this with an Intel AX200 card. To confirm it is not card-related, I also checked a Broadcom Xeon 1200 card. Same behaviour. A number of different APs were also tested. All results are consistent.
Since the Intel AX200 uses Intel's proprietary rate control algorithm "iwl-mvm-rs" and Broadcom uses a different one, I figure the bandwidth limitation must be introduced by Linux itself (mac80211 / cfg80211?). Which part could it be? Can I fix it to 160 MHz?
This bandwidth reduction is probably part of the rate control algorithm, but the strange thing is that the AP downlink bitrate is, for example, 500 Mbit/s (160 MHz) while at the same time the uplink is 250 Mbit/s (80 MHz). At closer locations the bitrate is the same, e.g. 1000 Mbit/s (160 MHz), for both downlink and uplink. So this might be some kind of bug that reduces the bandwidth too early.
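One thing that may help narrow it down is logging the reported signal and TX bitrate together over time, to see exactly at which RSSI the switch from 160 MHz to 80 MHz happens. A rough sketch follows; the field names are based on typical `iw station dump` output and may differ between iw versions, so the parsing is best-effort.

```python
# Poll `iw wlan0 station dump` and log signal vs. TX bitrate over time,
# to see at which RSSI the rate control drops from 160MHz to 80MHz.
# Field names are based on typical iw output and may vary per version.
import re
import subprocess
import time

IFACE = "wlan0"

while True:
    out = subprocess.run(["iw", IFACE, "station", "dump"],
                         capture_output=True, text=True).stdout
    signal = re.search(r"signal:\s*(-?\d+)", out)
    tx_rate = re.search(r"tx bitrate:\s*(.+)", out)
    if signal and tx_rate:
        print(f"{time.strftime('%H:%M:%S')}  "
              f"signal={signal.group(1)} dBm  tx={tx_rate.group(1).strip()}")
    time.sleep(1)
```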

Linux/Qt auto detect baud rate?

I'm in a situation where we are hooking up to a device that may speak a variety of different baud rates depending on model. Some of which may be non-standard, like 10000, but that's another problem for another day.
Ideally I could use Qt to auto detect the baud rate, but from my research that's likely not possible for a few reasons, which I'm okay with. However, is there any native Linux based method to auto detect the baud rate of the connected device? Even a 3rd party open source application could suffice.
Linux serial drivers don't support autobauding, because most hardware doesn't support it, because there's no agreement on how it might work. It's highly application-specific.
If you're using FTDI serial adapters, most of them support bit-bang mode, and you can use them in that mode as a digital oscilloscope to capture a bitstream that's very easy to autobaud on.
On other devices, the simplest way towards autobauding is to set the device to 2-3x the highest baud rate you expect, then treat the input data like a chunked digital oscilloscope trace, taking framing errors into account, and use heuristics to detect the baud rate. It will succeed in a surprising number of cases, but you must get the statistical model of the data source right. I don't know of any pre-canned solutions for that.
Some additional kernel support could be added to better timestamp the input from the UART (whether hardware or USB) and thus decrease the uncertainty in your data, and with it the number of samples you need to take to detect the baud rate.
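If the oscilloscope-style approach is more than you need, a cruder heuristic that is often good enough in practice is to cycle through the candidate baud rates and score how plausible the received bytes look at each one. This is not the oversampling technique described above, just a brute-force alternative; the candidate list, the /dev/ttyUSB0 port name, and the "mostly printable ASCII" scoring are assumptions about your particular device.

```python
# Brute-force baud rate detection: try each candidate rate and score how
# plausible the received data looks. Assumes the device sends mostly
# printable ASCII; adjust the scoring for binary protocols.
import serial

CANDIDATES = [9600, 19200, 38400, 57600, 115200, 10000]  # include odd rates

def printable_ratio(data: bytes) -> float:
    if not data:
        return 0.0
    good = sum(1 for b in data if 32 <= b <= 126 or b in (9, 10, 13))
    return good / len(data)

def guess_baud(port: str) -> int | None:
    best_rate, best_score = None, 0.0
    for rate in CANDIDATES:
        with serial.Serial(port, rate, timeout=1) as ser:
            ser.reset_input_buffer()
            data = ser.read(256)          # whatever arrives within the timeout
        score = printable_ratio(data)
        if score > best_score:
            best_rate, best_score = rate, score
    return best_rate if best_score > 0.8 else None

print(guess_baud("/dev/ttyUSB0"))   # hypothetical device node
```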
Some of which may be non-standard, like 10000, but that's another problem for another day.
No biggie. I figured it out 16 years ago :) This is the answer you're looking for. If you think that the API is sick as in very, very sick, then you'd be right.

iBeacon / Bluetooth Low Energy (BLE devices) - maximum number of beacons

I would like to track a large number of beacons (~500) at once within a 50-100 m radius via an app on an iPhone (5s). I've had a look at the spec and online, and I can't see whether there is any limit on the number of beacons you can track at once using BLE. Does anyone know if a limitation on the number of beacons you can track exists, or whether an iPhone 5s would be up to the task of tracking that many beacons?
You used the word track, but iOS has two different methods: monitoring and ranging.
You can set a maximum of 20 regions to monitor. (Found in documentation for the startMonitoringForRegion: method.) Region limits mostly come into play if your app is in the background. The OS will alert your app when you enter or leave a region that you're monitoring (give or take a few minutes). The OS will even launch your app just to let it know what happened (although only for a short time).
The other method is ranging, which is to find all the beacons within the Bluetooth range of the device (typically around 100 feet, give or take). If your beacons are spread out over 100 miles, then you probably won't run into any practical limit here. I have not found any documented limit for ranging; I only have four beacons that I'm testing with, and four at a time works.
Here's one way to handle your situation. Make all your 500 beacons use the same UUID, and make a beacon region using the initWithProximityUUID:identifier: method. (The identifier is just for you -- it doesn't affect anything.) Start monitoring for that beacon region. That way, your app will be notified whenever one of your 500 beacons is found (give or take a few minutes). Once notified, you can use startRangingBeaconsInRegion: to find all the beacons around that area, then use the major and minor values to figure out which beacons the user is near.
I'll add to Tim Tisdall's answer, which sets out the right framework. I can't speak to the specific capabilities of the iPhone 5s, or iOS in general, but I don't see any reason why it wouldn't return every ADV_IND packet (i.e. beacon transmission) that it receives.
The question is, will the 500 beacons be able to transmit their ADV_IND packets without collisions?
It takes about 0.128ms to transmit an ADV_IND packet. The time between advertising transmissions is configurable between 20ms and 10240ms (at intervals of 0.625ms), so the probability of collisions depends on the configuration of the beacons.
Based on the Poisson distribution, the probability of a collision for any given ADV_IND packet is 1-exp(-2*N*(0.128/AI)), where N is the number of beacons within range, AI is the time in milliseconds of the advertising interval (assuming all the beacons are configured the same), and the 0.128 is the time in milliseconds it takes to send the ADV_IND packet. (See http://www3.cs.stonybrook.edu/~jgao/CSE590-fall09/aloha-analysis.pdf if you want an explanation.)
For 500 beacons with the maximum advertising interval of about 10 seconds, there will be a collision about once every 81 packets (or about 6 out of 500). If you're willing to wait for a couple intervals (i.e. 30 seconds), there's a good chance you'll be able to receive all 500 ADV_IND packets.
On the other hand, if the advertising interval is smaller, say 500ms, you'll have a collision about 23% of the time (or 113 out of 500). You'd have to wait for several more intervals to improve the probability that you'd see the broadcasts from all the beacons.
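For reference, those figures come straight out of that formula; a few lines of Python reproduce them:

```python
# Collision probability for an ADV_IND packet (ALOHA-style estimate),
# reproducing the figures above: P = 1 - exp(-2 * N * t / AI)
import math

T_ADV = 0.128  # ms to transmit one ADV_IND packet

def collision_probability(n_beacons: int, adv_interval_ms: float) -> float:
    return 1 - math.exp(-2 * n_beacons * T_ADV / adv_interval_ms)

for interval in (10240, 500):
    p = collision_probability(500, interval)
    print(f"AI={interval} ms: P={p:.3f} (~{round(500 * p)} of 500 beacons)")
# AI=10240 ms: P=0.012 (~6 of 500 beacons)
# AI=500 ms:   P=0.226 (~113 of 500 beacons)
```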
The other way to look at it is that the more beacons you have, the longer you have to wait to make sure you receive all their packets. (The math to calculate the delay to receive the packets with a certain probability from the number of beacons and the advertising interval is too much for me today.)
One caveat: if you want to connect to these beacons, as opposed to just receiving the ADV_IND packet, that requires an exchange of two more packets on the advertising channels, and the probability of a collision in the advertising channels goes up a bit.
If I am reading your question right, you want to put all 500 iBeacons within 100 meters of each other, meaning their transmissions will overlap. You will probably run into radio congestion problems long before you run into any limitations of iOS7 or your phone.
I have successfully tested 20 iBeacons in close proximity without problems, but 500 iBeacons is an extreme density. This discussion on the hardware issue suggests you may run into trouble.
At a minimum, the collisions of the transmissions of 500 iBeacons will make it take longer for your iOS device to see each iBeacon. Normally, iOS 7 provides a ranging update once per second for each beacon, but you may find that you get updates much less often. It all depends on your application whether or not less frequent updates are acceptable.
Even if delays are acceptable, I would absolutely test this before counting on it working at all. Unfortunately, that means getting your hands on lots of iBeacons.
I don't agree. It is true that BLE beacons only transmit advertising data, but the transmission of such data lasts about 3 ms (considering the three advertising channels).
With 500 beacons, WITHOUT considering any collisions, the scanner would take about 1.5 s to see them all.
But if all the beacons are configured the same way (same advertising interval), collisions are inevitable, which leads to undiscovered beacons. Even if the advertising interval differs between beacons, collisions still occur. To reduce the collision probability one should use a longer advertising interval, but this leads to a longer discovery latency.
This reasoning is very rough; it doesn't account for many effects and is just an order-of-magnitude calculation.
By the way, the question is not easy; there are many parameters that play a role, some known and some unknown. But I've been working with BLE for about a year and, to me, 500 is a huge number, and there is a real possibility that you won't see the majority of nodes because of collisions.
I was doing some research into iBeacon's because of this question (I had no idea what it was about).
It seems that on the "beacon" side of things, all that happens is that general advertising packets are sent out. It's similar to how a device advertises that you can connect to it. However, you don't actually connect to iBeacons; the phone just reads those advertising packets. There's no built-in limitation on how many advertising packets a device can receive.
So, it wouldn't surprise me if 500 iBeacons ran with no issues. The advertising packets are small and are spaced out (time-wise, they are repeated every X ms). There's no communication going from the phone to the iBeacon; the phone is simply receiving the packets it hears. If there's interference on one packet, it'll likely manage to get the next one.

How to prove working of RS-232 full modem and RS-422, PC to PC and loopback

Hello, I am a newbie trying to prove the working of an RS-232 full modem port and also an RS-422 port (RX, TX, RTS, CTS).
These two ports are on a custom-designed board and I need to prove they are working.
I am able to confirm the operation at register level, but I need to prove it using software like Minicom or another custom program.
How can I prove the working of these ports from one PC to a different PC using DB9 connections, and with a loopback too?
Can someone help me with this? Do I need any extra hardware to prove this works in Linux?
The most common type of serial port test is probably a loopback test. Create a test fixture that connects output pins of the port to the input pins (TX->RX, RTS->CTS, etc). If you do not have matching input pins for every output pin, you will need to do a three-way connection.
After you create the loopback, you will need to write software that exercises the pins. If TX and RX are connected, you can send a byte and verify that it was echoed back. For the control pins, toggle them and make sure the other side of the connection saw the transition. Make sure you exercise every pin of the serial port.
Note that you should run the TX->RX data loopback at multiple baud rates. It is possible for there to be a signal integrity issue in the design that only shows up at higher baud rates. It is also possible for there to be a bad signal connection on the board that is masked by inductance and capacitance at higher baud rates. Therefore, it is a good idea to run a data loopback at the slowest baud rate, the fastest baud rate, and 1-2 rates in the middle.
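A minimal version of such a loopback test might look like the sketch below (pyserial, with a hypothetical port name and rate list); it runs the data check at several baud rates and toggles RTS while watching CTS through the fixture.

```python
# Serial loopback test: data echo at several baud rates plus an RTS->CTS check.
# Assumes a loopback fixture wiring TX->RX and RTS->CTS on a hypothetical
# /dev/ttyS0; adjust the port and rates for your board.
import time
import serial

PORT = "/dev/ttyS0"
PATTERN = bytes(range(256))              # exercise all byte values

def data_loopback(baud: int) -> bool:
    with serial.Serial(PORT, baud, timeout=2) as ser:
        ser.reset_input_buffer()
        ser.write(PATTERN)
        return ser.read(len(PATTERN)) == PATTERN

def rts_cts_loopback() -> bool:
    with serial.Serial(PORT) as ser:
        for state in (True, False, True):
            ser.rts = state
            time.sleep(0.05)             # give the line time to settle
            if ser.cts != state:
                return False
        return True

for baud in (1200, 9600, 57600, 115200): # slowest, middle, and fastest rates
    print(f"{baud:>7} baud: {'OK' if data_loopback(baud) else 'FAIL'}")
print("RTS->CTS:", "OK" if rts_cts_loopback() else "FAIL")
```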
Another thing you should do is a baud rate accuracy test. This will prove the clock driving the UART is running at the right frequency and being divided properly. Transmit X amount of bytes at a certain baud rate and verify they arrived in the expected amount of time. To get an accurate number, you will need to bypass any buffering in the OS serial port driver (e.g. use direct register I/O), and make sure to account for any start/stop bit overhead.
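A rough userspace approximation of that timing check (measured through the OS driver and a loopback, so far less precise than direct register I/O, but enough to catch a grossly mis-clocked UART) could look like this:

```python
# Rough baud rate accuracy check over a loopback: send N bytes and compare
# the measured transfer time against the theoretical time, assuming
# 10 bits on the wire per byte (1 start + 8 data + 1 stop).
import time
import serial

PORT = "/dev/ttyS0"        # hypothetical loopback-wired port
BAUD = 9600
N_BYTES = 2048

with serial.Serial(PORT, BAUD, timeout=10) as ser:
    start = time.monotonic()
    ser.write(bytes(N_BYTES))
    received = ser.read(N_BYTES)
    elapsed = time.monotonic() - start

expected = N_BYTES * 10 / BAUD          # seconds, 10 bits per byte
error_pct = (elapsed - expected) / expected * 100
print(f"received {len(received)} bytes in {elapsed:.3f}s "
      f"(expected ~{expected:.3f}s, {error_pct:+.1f}%)")
```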
However, a loopback test is not exhaustive. It only proves the device can talk to itself. The device may still have some flaw (e.g. voltage levels) that cannot be detected locally. So, you should also run some tests with external hardware. Cable your board to another system and run a test (e.g. with minicom). Make sure they can talk to each other.
Even an external communication test can miss something. You can still have poor signal quality from your board, but it happens to be good enough for the other device. To accurately verify the signal quality/timing, you need an oscilloscope.
While you are running communication tests, connect the scope probes to the various signals and verify the signal integrity. Make sure that voltage levels are valid, that you see clean low/high bit transitions, and that the timing of the data pin appears correct for the specified baud rate. (A scope can be a more accurate way to measure baud rate than the software-based method described earlier.)
