I've read in many places that RSSI is highly environment-specific (e.g., affected by walls or weather), which can make it difficult to infer which beacon is closest in a Euclidean sense. I also gather that RSSI is reported in arbitrary units from 0 (good connection strength) to -100 (bad connection strength). In spite of these challenges, I have questions about the following two thought experiments related to the reliability of RSSI values for beacon <--> device communications.
Experiment 1. Given a particular beacon and two devices located at the exact same location, will those two devices register the same RSSI for that beacon?
Experiment 2. Given a particular device and two beacons located at the exact same location, will those two beacons register the same RSSI for that device?
To formalize this in a statistical sense: will p(signal | beacon1, device1) = p(signal | beacon2, device2) if beacon1-device1 are placed in exactly the same environment as beacon2-device2?
Since different antennas and devices have different RF properties, I'm going to go ahead and say that unless your beacons/devices are identical to each other, then no, you should not expect the same RSSI reading, even if their locations are identical. This is because the device cannot know how much power is in an RF signal before it passes through its circuitry, and better and bigger antennas will transmit/receive better than crappier ones.
That said, an RSSI of 0 should read as 0 on both devices, and likewise the maximum RSSI, assuming that the two devices use the same RSSI scale, which doesn't have to be the case. As Wikipedia says: "As an example, Cisco Systems cards have a RSSI_Max value of 100 and will report 101 different power levels, where the RSSI value is 0 to 100. Another popular Wi-Fi chipset is made by Atheros. An Atheros based card will return an RSSI value of 0 to 127 (0x7f) with 128 (0x80) indicating an invalid value."
Anyway, if your devices are identical, then I would expect the readings to be identical as well, or at least very close to each other.
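If you do need to compare readings across chipsets, here's a minimal sketch of the idea in Python, assuming you know each vendor's RSSI_Max; the two values below are taken from the Wikipedia excerpt above:

    # Normalize vendor-specific raw RSSI readings onto a common 0.0-1.0 scale
    # so values from different chipsets can be compared.
    RSSI_MAX = {
        "cisco": 100,    # Cisco cards report 0..100
        "atheros": 127,  # Atheros cards report 0..127 (0x80 = invalid)
    }

    def normalize_rssi(raw: int, vendor: str) -> float:
        """Map a raw RSSI reading onto [0.0, 1.0] using the vendor's RSSI_Max."""
        rssi_max = RSSI_MAX[vendor]
        if not 0 <= raw <= rssi_max:
            raise ValueError(f"invalid RSSI {raw} for vendor {vendor!r}")
        return raw / rssi_max

    print(normalize_rssi(50, "cisco"))    # 0.5
    print(normalize_rssi(64, "atheros"))  # ~0.504

Note this only aligns the scales; it does not make the underlying hardware measurements comparable, for the antenna/circuitry reasons above.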
Besides the differences in hardware and transmission power, timing also matters. If the interval between two measurements, whether made by the same device/beacon or by different ones, exceeds the channel coherence time, RSSI may vary drastically. Coherence time is on the order of 1 s indoors, and 10-100 times smaller outdoors.
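In practice this means that if you want a stable RSSI estimate, you should average samples spaced wider apart than the coherence time, so each sample sees a roughly independent fading state. A minimal sketch, where read_rssi is a hypothetical zero-argument callable standing in for whatever your BLE stack exposes:

    import time
    from statistics import mean, stdev

    def stable_rssi(read_rssi, n_samples=10, spacing_s=1.5):
        # Space samples wider than the ~1 s indoor coherence time so that
        # fast fading decorrelates between samples.
        samples = []
        for _ in range(n_samples):
            samples.append(read_rssi())
            time.sleep(spacing_s)
        return mean(samples), stdev(samples)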
I want to make sure the latency between my app and the Bluetooth headphones is accounted for, but I have absolutely no idea how I can get this value. The closest thing I found was:
BluetoothLEPreferredConnectionParameters.ConnectionLatency, which is only available on Windows 11... Otherwise there isn't much to go on.
Any help would be appreciated.
Thanks,
Peter
It's very difficult to get the exact latency because it is affected by many parameters, but you're on the right track: the connection parameters are part of this equation. I don't have much knowledge of UWP, but I can give you the general parameters that affect speed/latency, and then you can check their availability in the API or even contact the Windows technical team to see if they are supported.
When you make a connection with a remote device, the following factors impact the speed/latency of the connection:
Connection Interval: this specifies the interval at which the packets are sent during a connection. The lower the value, the higher the speed. The minimum value as per the Bluetooth spec is 7.5ms.
Slave Latency: this is the value you originally mentioned. It specifies the number of connection events the slave (peripheral) is allowed to skip when it has no data to send; note that it is the supervision timeout, not the slave latency, that determines when a connection is considered lost. A value of 0 means the slave listens on every connection event, giving you the fastest, most responsive connection (see the sketch after this list).
Connection PHY: this is the modulation on which the packets are sent. If both devices support 2MPHY, then the connection should be quicker.
Data Length/MTU Extension: these are two separate features, but I am lumping them together because the effect is the same: more bytes are sent per packet, which results in higher throughput. The maximum value is 251 bytes per packet.
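To get a feel for how the first two parameters combine, here is a rough sketch of the arithmetic only (not UWP code; the function name and defaults are illustrative): with slave latency L and connection interval I, outbound data can wait up to I * (L + 1) before the peripheral is guaranteed to be listening.

    def ble_latency_bounds_ms(conn_interval_ms: float, slave_latency: int):
        """Rough best/worst-case wait before data moves on a BLE connection.

        Best case: the data goes out at the next connection event.
        Worst case: the peripheral skips its allowed slave_latency events,
        so data may wait conn_interval_ms * (slave_latency + 1) ms.
        """
        best = conn_interval_ms
        worst = conn_interval_ms * (slave_latency + 1)
        return best, worst

    # Spec-minimum interval of 7.5 ms with slave latency 0:
    print(ble_latency_bounds_ms(7.5, 0))   # (7.5, 7.5)
    print(ble_latency_bounds_ms(30.0, 4))  # (30.0, 150.0)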
You can find more information about these parameters here:
A Practical Guide to BLE Throughput
Maximizing BLE Throughput: Everything You Need to Know
Bluetooth 5 Speed - How to Achieve Maximum Throughput
And below are some other links that might help you understand what is supported on UWP:
Bluetooth Developer FAQ
BluetoothLEConnectionParameters.OptimizedProperty
Bluetooth LE Preferred Connection Parameter Class
Bluetooth LE Connection PHY class
How to Change MTU Size and PHY on Windows UWP C++
I've been doing some reverse engineering on different BLE-based devices, and I have a weight scale where I can't find a pattern to decode/interpret the weight value shown by the Android app. I was also able to get the services and characteristics of this device, but did not find a match with any SIG-standard UUID from the Bluetooth site.
I'm using an nRF51 dongle to sniff the data packets between the Android app and the weight scale, and I can see the BLE traffic, but there are several events during the communication whose relationship I can't understand, and I haven't been able to convert those values into a readable weight in kg or pounds.
My target value is 71.3 kg, as read from the weight scale app.
Let me show you what I get from the BLE sniffer.
First, I see that the master sends notification/indication requests to the handles 0x0009 (notify), 0x000c (indication) and 0x000f (notify), one in each characteristic of a single service.
Then I start to see notification/indication values mixed with write commands. At the end of the communication, I see some packets that I suspect carry the weight and BMI data.
Packets number 574, 672 and 674 in the image.
That gives us the following candidates:
1st. packet_number_574 = '000002c9070002'
2nd. packet_number_672 = '420001000000005ed12059007f02c9011d01f101'
3rd. packet_number_674 = '42018c016b00070237057d001a01bc001d007c'
The 1st packet looks more like a counter/real-time clock than a real measurement, judging by how the data behaves, so I don't think it is a real option.
The 2nd and 3rd look like more realistic candidates. I have split them up and converted them to decimal values, and I have not found a relation, even when combining bytes in case these values are floating-point data types. So my real question is: am I missing something that you might see in this information? Do you know if there is a relation between these data packets and my target?
Thank you for taking the time to read this; any help would be welcome, thanks!
EDIT:
I have a python script that lets me check the service and characteristic hierarchy, along with some useful data such as properties, handles and descriptors.
Service 'fff0' (0000fff0-0000-1000-8000-00805f9b34fb):
Characteristic 'fff1' (0000fff1-0000-1000-8000-00805f9b34fb):
Handle: 8 (8)
Readable: False
Properties: WRITE NOTIFY
Descriptor: Descriptor <Client Characteristic Configuration> (handle 0x9; uuid 00002902-0000-1000-8000-00805f9b34fb)
Characteristic 'fff2' (0000fff2-0000-1000-8000-00805f9b34fb):
Handle: 11 (b)
Readable: False
Properties: WRITE NO RESPONSE INDICATE
Descriptor: Descriptor <Client Characteristic Configuration> (handle 0xc; uuid 00002902-0000-1000-8000-00805f9b34fb)
Characteristic 'fff3' (0000fff3-0000-1000-8000-00805f9b34fb):
Handle: 14 (e)
Readable: False
Properties: NOTIFY
Descriptor: Descriptor <Client Characteristic Configuration> (handle 0xf; uuid 00002902-0000-1000-8000-00805f9b34fb)
These are the characteristics related to the notifications and indications that I see in Wireshark. I now think that packet number 574 (whose characteristic has only a NOTIFY property) is more important than I first thought.
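For anyone trying to reproduce this, here is a minimal sketch of the kind of capture script I'm using, written here with the bleak library (any Python BLE library would do); the device address is a placeholder:

    import asyncio
    from bleak import BleakClient

    DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"  # placeholder: the scale's address
    CHAR_UUIDS = [
        "0000fff1-0000-1000-8000-00805f9b34fb",  # NOTIFY   (CCCD handle 0x9)
        "0000fff2-0000-1000-8000-00805f9b34fb",  # INDICATE (CCCD handle 0xc)
        "0000fff3-0000-1000-8000-00805f9b34fb",  # NOTIFY   (CCCD handle 0xf)
    ]

    def on_data(sender, data: bytearray):
        # Dump every notification/indication payload as hex for offline analysis.
        print(f"{sender}: {data.hex()}")

    async def main():
        async with BleakClient(DEVICE_ADDRESS) as client:
            for uuid in CHAR_UUIDS:
                await client.start_notify(uuid, on_data)  # also covers indications
            await asyncio.sleep(30)  # step on the scale while this runs

    asyncio.run(main())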
I solved it myself.
This post gave me the idea to take the values (2 bytes) and multiply them by 0.1; that way I could get the weight.
In the bytes, I could look for my target value without the decimal point, 713, which in hex is 0x02c9.
If we look at packet number 574:
000002c9070002, and split it: 00:00:02:c9:07:00:02
I can see that the bytes at offsets 2-3 (0x02, 0xc9) match this pattern!
The only thing left to do is to group these bytes and multiply the decimal value by 0.1: 713 x 0.1 = 71.3. I ran different tests and found that this pattern is constant, so I am confident in this solution. I hope this helps someone in the future.
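Here is the decoding as a minimal sketch, assuming (as in my tests) that the weight always sits at byte offsets 2-3 of that notification as a big-endian integer in units of 0.1 kg:

    def decode_weight(packet_hex: str) -> float:
        # Bytes at offsets 2-3 hold the weight as a big-endian
        # integer in tenths of a kilogram (observed pattern).
        payload = bytes.fromhex(packet_hex)
        raw = int.from_bytes(payload[2:4], byteorder="big")
        return raw * 0.1

    print(decode_weight("000002c9070002"))  # 71.3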
I am trying to figure out what the maximum throughput of a Bluetooth 2.1 SPP connection is.
I found two publications on the topic (1, 2), and they both show diagrams of throughput as a function of the signal-to-noise ratio (which I can assume to be perfect for my purposes) and of the kind of ACL packet used. My problem is, I have no idea which ACL packets are used. How is this decision made? Is it made on the fly, as in "whatever is needed to transfer the current data is used"?
Furthermore, in the Serial Port Profile specification (chapter 2.3) I found this sentence:
This profile requires support for one-slot packets only. This means that this profile
ensures that data rates up to 128 kbps can be used. Support for higher rates is optional.
The last sentence really confuses me. How do I find out whether this "option" applies in my case?
This means that in SPP mode, all Bluetooth modules should work at up to 128 kbps, and some modules may work even faster.
Under SPP sits RFCOMM, which tries to deliver the packets as quickly as possible. If only one packet is sent per timeslot, you get the 128 kbps. The firmware of the Bluetooth module, or the HCI driver, may however do things differently.
There are reports of SPP speeds up to 480 kbps; however, this requires that both SPP modules are from the same vendor (e.g. BlueGiga iWrap modules can do this speed).
On the other hand, if you're connecting to an unknown device, for example a BT112 or an RN41 module talking to an Android device, the actual usable SPP bandwidth can be much lower than 128 kbps (I have measurements as low as 10 kbps).
In the case of some old-generation iPhones, the usable SPP bandwidth is around 8 kbps.
It is wise to treat "standards" and "datasheets" very conservatively, and to do your own measurements if actual net data bandwidth is critical.
Even though BT and BT+EDR have theoretical on-the-air bitrates of 3 Mbps, the actual usable net data bandwidth is way smaller.
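If net bandwidth matters, measuring it is straightforward, since an SPP link typically shows up as an ordinary serial port (e.g. /dev/rfcomm0 on Linux). A minimal sketch assuming the pyserial package and a peer that streams data continuously; the port name is a placeholder:

    import time
    import serial  # pyserial

    PORT = "/dev/rfcomm0"  # placeholder; adjust for your system
    ser = serial.Serial(PORT, timeout=1)

    total = 0
    start = time.monotonic()
    while time.monotonic() - start < 10.0:  # measure for ~10 seconds
        total += len(ser.read(4096))
    elapsed = time.monotonic() - start
    print(f"net throughput: {total * 8 / elapsed / 1000:.1f} kbps")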
I'm new to programming the BeagleBone Black and to Linux in general, so I'm trying to figure out what's happening when I set up an SPI connection. I'm running Linux beaglebone 3.8.13-bone47.
I have set up an SPI connection using a Device Tree overlay, and I'm now running spidev_test.c to test it. For the application I'm making, I need a quite specific frequency, but when I run spidev_test and measure the frequency of the bits shifted out, I don't get the expected frequency.
I'm sending an SPI packet containing 0xAA, and in spidev_test I've modified spi_ioc_transfer.speed_hz to 4000000 (4 MHz). But I'm measuring a data transfer frequency of 2.98 MHz. I see the same kind of result with other speeds as well; deviations are usually around 25-33%.
How come the measured speed doesn't match the assigned speed?
How is the speed assigned in "speed_hz" defined?
How precise should I expect the frequency to be?
Thank you :)
Actually, if you look closely on the DSO you can see that each clock cycle takes approximately 312.5 ns, which makes the clock frequency 3.2 MHz.
As for the variation between the expected and actual speed: in the microcontrollers I've worked with, all the peripherals, including SPI, derive their clocks from the master clock supplied to the MCU (in your case, MPU). The master frequency divided by some prescaler gives the frequency for peripheral operation, and each peripheral then applies its own prescaler to control the baud rate. Because these prescalers are integer (often power-of-two) dividers, only certain discrete frequencies are achievable, and the driver typically picks the closest one that does not exceed the requested speed.
So in your case, if the master frequency is not what you assume, or the requested speed falls between two achievable steps, this could lead to the behavior mentioned above.
So you have two options:
1. Correct the MPU core frequency.
2. Use trial and error to find the value that has to be given to the SPI test program to get the desired frequency.
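As a sketch of why option 2 works, assuming the SPI block divides a fixed 48 MHz reference clock by powers of two (which is my understanding of how the AM335x McSPI is commonly configured; treat these numbers as assumptions, not a statement about your exact setup):

    REF_CLK_HZ = 48_000_000  # assumed SPI reference clock

    def achievable_spi_hz(requested_hz: int, max_div_exp: int = 12) -> int:
        # Return the highest power-of-two-divided rate <= requested_hz;
        # this mirrors a driver that never exceeds the requested speed.
        for exp in range(max_div_exp + 1):
            rate = REF_CLK_HZ >> exp  # divide by 2**exp
            if rate <= requested_hz:
                return rate
        return REF_CLK_HZ >> max_div_exp

    print(achievable_spi_hz(4_000_000))  # 3000000, matching the ~3 MHz measured

Under that assumption, requesting 4 MHz yields 48 MHz / 16 = 3 MHz, which is consistent with the ~2.98 MHz seen on the scope.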
I would like to track a large number of beacons (~500) at once, within a 50-100 m radius, via an app on an iPhone 5s. I've had a look at the spec and online, and I can't see whether there is any limit on the number of beacons you can track at once using BLE. Does anyone know if such a limit exists, or whether an iPhone 5s would be up to the task of tracking that many beacons?
You used the word track, but iOS has two different methods: monitoring and ranging.
You can set a maximum of 20 regions to monitor. (Found in documentation for the startMonitoringForRegion: method.) Region limits mostly come into play if your app is in the background. The OS will alert your app when you enter or leave a region that you're monitoring (give or take a few minutes). The OS will even launch your app just to let it know what happened (although only for a short time).
The other method is ranging, which finds all the beacons within the Bluetooth range of the device (typically around 100 feet, give or take). If your beacons are spread out over 100 miles, then you probably won't run into any practical limit here. I have not found any documented limit; I have only four beacons to test with, and four at a time works.
Here's one way to handle your situation: make all 500 of your beacons use the same UUID, and make a beacon region using the initWithProximityUUID:identifier: method (the identifier is just for you; it doesn't affect anything). Start monitoring for that beacon region. That way, your app will be notified whenever one of your 500 beacons is found (give or take a few minutes). Once notified, you can use startRangingBeaconsInRegion: to find all the beacons around that area, then use the major and minor values to figure out which beacons the user is near.
I'll add to Tim Tisdall's answer, which sets out the right framework. I can't speak to the specific capabilities of the iPhone 5s, or iOS in general, but I don't see any reason why it wouldn't return every ADV_IND packet (i.e. beacon transmission) that it receives.
The question is, will the 500 beacons be able to transmit their ADV_IND packets without collisions?
It takes about 0.128ms to transmit an ADV_IND packet. The time between advertising transmissions is configurable between 20ms and 10240ms (at intervals of 0.625ms), so the probability of collisions depends on the configuration of the beacons.
Based on the Poisson distribution, the probability of a collision for any given ADV_IND packet is 1-exp(-2*N*(0.128/AI)), where N is the number of beacons within range, AI is the time in milliseconds of the advertising interval (assuming all the beacons are configured the same), and the 0.128 is the time in milliseconds it takes to send the ADV_IND packet. (See http://www3.cs.stonybrook.edu/~jgao/CSE590-fall09/aloha-analysis.pdf if you want an explanation.)
For 500 beacons with the maximum advertising interval of about 10 seconds, there will be a collision about once every 81 packets (or about 6 out of 500). If you're willing to wait for a couple intervals (i.e. 30 seconds), there's a good chance you'll be able to receive all 500 ADV_IND packets.
On the other hand, if the advertising interval is smaller, say 500ms, you'll have a collision about 23% of the time (or 113 out of 500). You'd have to wait for several more intervals to improve the probability that you'd see the broadcasts from all the beacons.
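Here is the same formula in runnable form, reproducing those numbers:

    import math

    def collision_probability(n_beacons: int, adv_interval_ms: float,
                              packet_ms: float = 0.128) -> float:
        # Slotted-ALOHA-style estimate: 1 - exp(-2 * N * t / AI).
        return 1 - math.exp(-2 * n_beacons * packet_ms / adv_interval_ms)

    for ai in (10240, 500):
        p = collision_probability(500, ai)
        print(f"AI={ai} ms: p={p:.3f}, ~{500 * p:.0f} of 500 packets lost, "
              f"one collision every ~{1 / p:.0f} packets")
    # AI=10240 ms: p=0.012, ~6 of 500, one every ~81 packets
    # AI=500 ms:   p=0.226, ~113 of 500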
The other way to look at it is that the more beacons you have, the longer you have to wait to make sure you receive all their packets. (The math to calculate the delay to receive the packets with a certain probability from the number of beacons and the advertising interval is too much for me today.)
One caveat: if you want to connect to these beacons, as opposed to just receiving the ADV_IND packet, that requires an exchange of two more packets on the advertising channels, and the probability of a collision in the advertising channels goes up a bit.
If I am reading your question right, you want to put all 500 iBeacons within 100 meters of each other, meaning their transmissions will overlap. You will probably run into radio congestion problems long before you run into any limitations of iOS7 or your phone.
I have successfully tested 20 iBeacons in close proximity without problems, but 500 iBeacons is an extreme density. This discussion of the hardware issue suggests you may run into trouble.
At a minimum, the collisions among the transmissions of 500 iBeacons will make it take longer for your iOS device to see each one. Normally, iOS 7 provides a ranging update once per second, but you may find that you get updates much less often. Whether or not less frequent updates are acceptable depends on your application.
Even if delays are acceptable, I would absolutely test this before counting on it working at all. Unfortunately, that means getting your hands on lots of iBeacons.
I don't agree. It is true that BLE beacons only transmit advertising data, but the transmission of such data lasts about 3 ms (considering the three advertising channels).
With 500 beacons, WITHOUT considering any collisions, the scanner would take about 1.5 s to see them all (500 x 3 ms).
But if all the beacons are configured the same way (same advertising interval), collisions are inevitable, and they lead to undiscovered beacons. Even if the advertising interval differs between beacons, collisions still occur. To reduce the collision probability one should use a longer advertising interval, but that leads to longer discovery latency.
This reasoning is very rough; it doesn't account for many effects, it is just an order-of-magnitude calculation.
By the way, the question is not easy; many parameters play a role, some known and some unknown. But I've been working with BLE for about a year and, to me, 500 is a huge number, and there is a real possibility that you won't see the majority of the nodes because of collisions.
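As a rough sanity check of this reasoning, here is a small Monte Carlo sketch; it is my own simplification (one shared channel, one advertising event per beacon per interval, no advDelay jitter), so read it as an order-of-magnitude check only:

    import random

    def colliding_fraction(n_beacons=500, interval_ms=500.0,
                           tx_ms=0.128, trials=200):
        # Place each beacon's advertising event at a random phase in a shared
        # interval and count beacons whose transmissions overlap a neighbor's.
        lost = 0
        for _ in range(trials):
            starts = sorted(random.uniform(0, interval_ms)
                            for _ in range(n_beacons))
            for i, s in enumerate(starts):
                if (i > 0 and s - starts[i - 1] < tx_ms) or \
                   (i < n_beacons - 1 and starts[i + 1] - s < tx_ms):
                    lost += 1
        return lost / (trials * n_beacons)

    print(f"~{colliding_fraction():.0%} of beacons collide per 500 ms interval")
    # roughly 20-25%, the same order as the analytic estimate above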
I was doing some research into iBeacons because of this question (I had no idea what they were about).
It seems that on the "beacon" side of things, all that happens is that general advertising packets are sent out. It's similar to how a device advertises that you can connect to it; however, you don't actually connect to iBeacons, the phone just reads those advertising packets. There's no built-in limitation on how many advertising packets a device can receive.
So, it wouldn't surprise me if 500 iBeacons ran with no issues. The advertising packets are small and spaced out in time (they are repeated every X ms). There's no communication going from the phone to the iBeacon; the phone simply receives the packets it hears. If there's interference on one packet, it'll likely catch the next one.