I use a Raspberry Pi with a Bluetooth dongle to simulate a beacon. I want to measure how long a mobile app needs to detect the beacon's UUID after I change it on the Raspberry Pi. With this code I found the NTP server the smartphone uses to synchronize its time:
// "config_ntpServer" is an Android framework resource, so resolve and
// read it via the system Resources rather than the app's own.
final int id = Resources.getSystem().getIdentifier(
        "config_ntpServer", "string", "android");
final String defaultServer = Resources.getSystem().getString(id);
Then I synchronized the time on the Raspberry Pi with:
sudo ntpdate -u 2.android.pool.ntp.org
Before I change the UUID of the beacon, I print the time:
timestamp() {
date +"%T,%3N"
}
timestamp # print timestamp
sudo hcitool -i hci0 cmd 0x08 0x0008 1e 02 01 1.....
Then I compare the time when I changed the UUID with the time in logcat when the new UUID was first seen, and the result is always negative:
UUID changed at 15:33:03,276 and detected at 15:33:02,301.
Is this a synchronization problem? Is there a better way to do this?
A few thoughts:
Both the Android device and the Pi will, by default, sync their time to an NTP server automatically if they have a network connection. You should not have to do anything.
The NTP daemon doesn't always change the clock immediately -- it adjusts it slowly over time so as not to upset Linux processes with a sudden jump. Since the Raspberry Pi has no real-time clock, it always has incorrect time at boot. You may need to wait minutes after boot before it is in sync with the Android device.
NTP is not perfect. Don't expect to get your clocks synchronized to better than a few tens of milliseconds when using internet time servers. Since Bluetooth detection times can be very fast (also in the range of tens of milliseconds), if you are getting detection times of -100 ms, this may be within the limits of this setup.
What you are showing is a detection time of about -1.0 second. This suggests that the time is not synchronized well. I would suspect the Pi is the problem and troubleshoot there. It might be helpful to show the time to the millisecond on both devices side by side to troubleshoot.
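To make that comparison less error-prone, you can compute the signed difference directly. A small bash helper (a sketch; the function name is mine) that diffs two timestamps in the "HH:MM:SS,mmm" format printed by date +"%T,%3N":

```shell
# Signed difference in ms (second minus first) between two
# "HH:MM:SS,mmm" timestamps from the same day.
ts_diff_ms() {
  echo "$1 $2" | awk '{
    split($1, a, "[:,]"); split($2, b, "[:,]")
    t1 = (a[1]*3600 + a[2]*60 + a[3]) * 1000 + a[4]
    t2 = (b[1]*3600 + b[2]*60 + b[3]) * 1000 + b[4]
    print t2 - t1
  }'
}

ts_diff_ms "15:33:03,276" "15:33:02,301"   # prints -975
```

For the timestamps in the question this gives -975 ms, which matches the roughly -1.0 second offset being discussed.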
Related
To give you some context, I am trying to configure some network redundancy on Ubuntu 20.04.
I decided to use NIC bonding with:
2 Gigabit interfaces,
active-backup mode,
link state monitoring with MII (polling every 10ms).
It works as expected: when the active slave fails (wire unplugged), the bond switches to the other slave. However, this transition is slower than I'd like, taking up to 400 ms.
In order to investigate, I had a look at the MII registers using this command: mii-tool -vv ens33 (-vv prints the raw MII register contents). It generates this kind of output:
With a basic bash script, I print the current time plus the value of the 2nd MII register, which contains the link state. As this is a "while true" loop with no pause, I get a status every 20 ms, which is reactive enough for monitoring.
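A minimal sketch of such a polling loop (the parsing is an assumption about mii-tool -vv's output layout, where the line after "registers for MII" holds registers 0-7; adjust for your version):

```shell
# Extract the 2nd MII register (BMSR, which holds the link-status bit
# 0x0004) from `mii-tool -vv` output. The layout is an assumption.
parse_bmsr() {
  awk '/registers for MII/ { getline; print $2; exit }'
}

# Live monitoring (needs root); prints time + BMSR in a tight loop:
#   while true; do
#     printf '%s %s\n' "$(date +%T.%3N)" "$(mii-tool -vv ens33 | parse_bmsr)"
#   done

# Offline check against a canned register dump:
sample='ens33: negotiated 1000baseT-FD flow-control, link ok
  registers for MII PHY 1:
    1140 796d 0141 0cc2 05e1 c5e1 000d 2001'
printf '%s\n' "$sample" | parse_bmsr   # prints 796d (0x0004 set = link up)
```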
Observed behavior:
when the interfaces are configured at 100 Mbit/s, Full Duplex => the MII register is updated almost immediately (no visible delay between unplugging the wire and the register value changing on screen).
however, when configured at 1 Gbit/s, Full Duplex (via auto-negotiation) => the update of the MII register takes much longer (visibly so with the bash script). By sending pings every 10 ms over the bond interface and logging packets with Wireshark on the receiver side, the observed delay is around 370 ms.
My questions:
The Ethernet controller is an Intel I210, and I have not found anything relevant in the datasheet.
Do you see any reason why detecting link-down takes so much longer at 1 Gbit/s than at 100 Mbit/s (370 vs 50 ms)?
Any advice to improve the reactivity?
Thanks for your help :)
Every generation of Ethernet speed has its own method and requirements.
10BaseT: "link pulses" are sent every 16 ±8 ms.
100BaseT: "link pulses" are not used; instead, Idle ("/I/") 4B/5B code words are monitored.
1000BaseT: link failure is determined by the maxwait_timer, which IEEE 802.3 defines as 750 ±10 ms for Master ports and 350 ±10 ms for Slave ports.
As you can see, your measured 370 ms is exactly what you would expect from a Slave port.
Sources:
https://www.microchip.com/forums/FindPost/417307
https://ww1.microchip.com/downloads/en/Appnotes/VPPD-03240.pdf
Screen grab from Wireshark showing traffic when the problem occurs
Short question - referring to the Wireshark image, what is causing the Master to send LL_CHANNEL_MAP_IND, and why is it taking so long?
We are working on a hardware/software project that is utilizing the TI WL18xx as a Bluetooth controller. One of the main functions of this product is to communicate with our sensor hardware over a Bluetooth Low Energy connection. We have encountered an issue that we have had difficulty pinpointing, but feel may reside in the TI WL18xx hardware/firmware. Intermittently, after a second Bluetooth Low Energy device is connected, the response times from writing and notification of one of the characteristics on one of the connected devices becomes very long.
Details
The host product device is running our own embedded Linux image on a TI AM4376x processor. The kernel is 4.14.79 and our communication stack sits on top of Bluez5. The wifi/bluetooth chip is the Jorjin WG7831-BO, running TIInit_11.8.32.bts firmware version 4.5. It is based on the TI WL1831. The sensor devices that we connect to are our own and we use a custom command protocol which uses two characteristics to perform command handshakes. These devices work very well on a number of other platforms, including Mac, Windows, Linux, Chrome, etc.
The workflow that is causing problems is this:
A user space application allows the user to discover and connect to our sensor devices over BLE, one device at a time.
The initial connection requires a flurry of command/response type communication over the aforementioned BLE characteristics.
Once connected, the traffic is reduced significantly to occasional notifications of new measurements, and occasional command/response exchanges triggered by the user.
A single device always seems stable and performant.
When the user connects to a second device, the initial connection proceeds as expected.
However, once the second device's connection process completes, we start to see that the command/response response times become hundreds of times longer on the initially connected device.
The second device communication continues at expected speeds.
This problem only occurs with the first device about 30% of the time we follow this workflow.
Traces
Here is a short snippet of the problem that is formed from a trace log that is a mix of our library debug and btmon traces.
Everything seems fine until line 4102, at which point we see the following:
ACL Data TX: Handle 1025 flags 0x00 dlen 22 #1081 [hci0] 00:12:48.654867
ATT: Write Command (0x52) len 17
Handle: 0x0014
Data: 580fd8c71bff00204e000000000000
D2PIO_SDK: GMBLNGIBlobSource.cpp(1532) : Blob cmd sent: 1bh to GDX-FOR 07100117; length = 15; rolling counter = 216; timestamp = 258104ms .
HCI Event: Number of Completed Packets (0x13) plen 5 #1082 [hci0] 00:12:49.387892
Num handles: 1
Handle: 1025
Count: 1
ACL Data RX: Handle 1025 flags 0x02 dlen 23 #1083 [hci0] 00:12:51.801225
ATT: Handle Value Notification (0x1b) len 18
Handle: 0x0016
Data: 9810272f1bd8ff00204e000000000000
D2PIO_SDK: GMBLNGIBlobSource.cpp(1745) : GetNextResponse(GDX-FOR 07100117) returns 1bh cmd blob after 3139=(261263-258124) milliseconds.
The elapsed time reported by GetNextResponse() for most cmds should be < 30 milliseconds. Response times were short when we opened and sent a bunch of cmds to device A.
The response times remained short when we opened and sent a bunch of cmds to device B. But on the first subsequent cmd sent to device A, the response time is more than 3 seconds!
Note that a Bluetooth radio can only do one thing at a time: receive or transmit, on a single frequency. If you have two connections and two connection events happen at the same time, the firmware must decide which one to prioritize and which one to skip. Maybe the firmware isn't smart enough to handle your specific case. Try other connection parameters to see if anything improves. You can also try a Bluetooth dongle from a different manufacturer.
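If you want to experiment with connection parameters from the Linux side, one option is hcitool's lecup subcommand (the handle 64 and hci0 below are placeholders for your setup; look the handle up with "hcitool con"). Intervals are given in 1.25 ms units and the supervision timeout in 10 ms units, so a small converter helps:

```shell
# Convert a desired connection interval in ms to BLE 1.25 ms units.
ms_to_units() {
  echo $(( $1 * 4 / 5 ))
}

ms_to_units 30   # prints 24

# Then request new parameters for an existing connection (needs root):
#   sudo hcitool -i hci0 lecup --handle 64 \
#     --min $(ms_to_units 30) --max $(ms_to_units 50) \
#     --latency 0 --timeout 200
```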
I have a Raspberry Pi managing and writing to an FPGA. I use the SPI bus and some GPIO's to get the data over the link. The SPI write happens in bursts of variable length - from a few tens of bytes up to about 8kB.
The FPGA SPI receive code has a timeout which I initially set to around 12ms (which was a guesstimated value). This is almost always OK, but today I found out that very occasional timeouts on the FPGA seem to be caused by the fact that RPi will occasionally pause for about 15ms in the middle of sending a byte.
The RPi python driver code looks pretty much like this:
for b in byte_array:
    send_byte(b)
where send_byte() is a simple function that calls the SPI byte-write function in the GPIO library, with some checking of a BUSY line and retrying.
Generally, bytes go out a few hundred microseconds apart, so this sudden pause is odd. I am thinking it is probably caused by some sort of context switch - but the Pi is not doing much else (and running the stock Linux distro).
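A rough way to look for such pauses from the shell is to stream millisecond timestamps through a gap filter (a sketch; the 5 ms threshold is arbitrary):

```shell
# Print any gap (in ms) between consecutive timestamps above a threshold.
report_gaps() {
  awk -v thr="$1" 'NR > 1 && $1 - prev > thr { print $1 - prev } { prev = $1 }'
}

# Live use on the Pi (Ctrl-C to stop):
#   while true; do date +%s%3N; done | report_gaps 5

# Offline check with fake millisecond timestamps:
printf '%s\n' 0 1 2 20 21 | report_gaps 5   # prints 18
```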
None of this is a big problem; I can easily increase the timeout. But I am curious: what causes a chip running Linux at several GHz to suddenly stop doing the only thing it is being asked to do for about 15 ms - an eternity in this context?
If it WAS a problem - what could I do about it?
I've been looking for answers on the net for two days, and it seems like I can't find my answer, so I'm finally posting it here, hoping I've just missed something.
I'm designing a BLE slave device to log humidity in a room twice a day. This device has to run for at least 2 years before being recharged.
What is the BLE logic to ensure long battery life?
1) Is a long advertisement/connection interval enough?
2) Do I need to implement a RTC with interrupt to wake up my device and start advertising to get connected?
3) Do I have to use advertising packets only, and include my data into it?
I think I'm just missing something about Bluetooth Low Energy, and it's keeping me from creating a BLE device.
Thank you very much for your help, and have a good day!
You could calculate power consumption at https://devzone.nordicsemi.com/power/ for Nordic's chips. If the device does not advertise or have an active connection (i.e. it's just sleeping), it consumes almost no power at all and will definitely run for 2 years even on a CR2016 battery. So if possible, if you have for example a button that can start advertising only when needed, that would be good.
Otherwise, if you want the device to always be available, you must advertise. How long the advertisement interval should be depends on how much connection-setup latency you can accept. If you have a BLE scanner that scans 100% of the time, the advertisement interval will equal the connection setup time. If you have a low-power BLE scanner that only scans, for example, 10% of the time, you'll have to multiply your advertisement interval by 10 to get the expected setup time. It all comes down to simple math :)
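That math can be made concrete (a sketch; duty cycle taken as an integer percentage):

```shell
# Expected connection-setup time ≈ advertising interval / scanner duty cycle.
expected_setup_ms() {   # args: adv_interval_ms scan_duty_percent
  echo $(( $1 * 100 / $2 ))
}

expected_setup_ms 1000 100   # full-time scanner: prints 1000 (= the interval)
expected_setup_ms 1000 10    # 10% scanner: prints 10000 (10x the interval)
```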
I'd suggest creating a connection rather than just putting the data in the advertisement packets, since then you can acknowledge that the data has arrived.
Note that with a connection interval of 4 seconds and a stable constant connection, you can get several years of battery life on a coin-cell battery.
I am trying to adapt a soft-power circuit from the Raspberry Pi to the Cubieboard (http://www.mosaic-industries.com/embedded-systems/microcontroller-projects/electronic-circuits/push-button-switch-turn-on/latching-toggle-power-switch). It appears to work just fine, but it needs a pin on the board to be held high while the board is running (actually the description calls for an input with pull-up, but that's not the problem).
I have managed to set the PH15 pin as output-high in the U-Boot script via "gpio set 263". However, about 3 seconds into the boot process, something sets it low again. The fex file is set correctly to configure it as output-high (I tried input with pull-up too), but that appears to kick in several seconds later (about 7-8 seconds into the boot). As a result, during the boot process the soft-power circuit simply dies, reacting to the low state of the pin.
So that is the puzzling question: how do I set a pin on the Cubieboard immediately during boot and keep it that way? What brings it down?
Using kernel 3.4.79 with some config options (rtc, drivers, etc - nothing exotic).
Any help appreciated. Modifications to the circuit acceptable too!