WiFi station bandwidth 160MHz / 80MHz control (Linux)

I observe the WiFi station TX bandwidth being reduced from 160MHz to 80MHz when the station is farther away from the AP, compared to when it is closer. I'm using the "iw wlan0 station dump" command to check this. The AP is forced to 160MHz and it actually uses 160MHz for the downlink in both cases. But the AX200 station switches to 80MHz for the uplink once the RSSI drops below roughly -60dBm.
I've checked this with an Intel AX200 card. To confirm it is not card-related I also checked a Broadcom Xeon 1200 card, with the same result. A number of different APs were tested as well. All results are consistent.
Since the Intel AX200 uses Intel's proprietary rate control algorithm "iwl-mvm-rs" and Broadcom uses a different one, the bandwidth limitation must be introduced by Linux itself (mac80211 / cfg80211?). Which part could it be? Can I fix it to 160MHz?
This bandwidth reduction is probably part of the rate control algorithm, but the strange thing is that the AP downlink bitrate is, for example, 500Mbit/s (160MHz) while at the same time the uplink is 250Mbit/s (80MHz). At closer locations the bitrate is the same for both downlink and uplink, e.g. 1000Mbit/s (160MHz). So this might be some kind of bug that reduces the bandwidth too early.
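For reference, a small monitoring sketch like the following can log the reported signal level next to the TX bitrate over time, to pin down the RSSI level at which the uplink drops from 160MHz to 80MHz. This is Python and assumes the usual "signal:" and "tx bitrate:" fields of `iw dev wlan0 station dump`; the exact formatting varies by driver and iw version, so the parsing may need adjusting.

    #!/usr/bin/env python3
    # Poll `iw dev wlan0 station dump` and print the reported signal level
    # together with the TX bitrate line, so the bandwidth switch can be
    # correlated with RSSI over time. Diagnostic only, not a fix.
    import re
    import subprocess
    import time

    IFACE = "wlan0"  # adjust to your interface

    def sample(iface):
        out = subprocess.run(["iw", "dev", iface, "station", "dump"],
                             capture_output=True, text=True, check=True).stdout
        signal, tx_bitrate = None, None
        for line in out.splitlines():
            line = line.strip()
            if line.startswith("signal:") and signal is None:
                m = re.search(r"-?\d+", line)
                signal = int(m.group(0)) if m else None
            elif line.startswith("tx bitrate:"):
                tx_bitrate = line.split(":", 1)[1].strip()
        return signal, tx_bitrate

    if __name__ == "__main__":
        while True:
            sig, tx = sample(IFACE)
            bw = re.search(r"(\d+)MHz", tx or "")
            print(f"signal={sig} dBm  bandwidth={bw.group(1) if bw else '?'}MHz  tx={tx}")
            time.sleep(2)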

Related

Get Latency of Bluetooth Headphones UWP C++

I want to make sure the latency between my app and the Bluetooth headphones is accounted for, but I have absolutely no idea how I can get this value. The closest thing I found was
BluetoothLEPreferredConnectionParameters.ConnectionLatency, which is only available on Windows 11... Otherwise there isn't much to go on.
Any help would be appreciated.
Thanks,
Peter
It's very difficult to get the exact latency because it is affected by many parameters, but you're on the right track in guessing that the connection parameters are a factor in this equation. I don't have much knowledge of UWP, but I can give you the general parameters that affect speed/latency, and then you can check their availability in the API or even contact the Windows technical team to see if they are supported.
When you make a connection with a remote device, the following factors impact the speed/latency of the connection:-
Connection Interval: this specifies the interval at which the packets are sent during a connection. The lower the value, the higher the speed. The minimum value as per the Bluetooth spec is 7.5ms.
Slave Latency: this is the value you originally mentioned - it specifies the number of connection events the peripheral is allowed to skip when it has no data to send (the supervision timeout, not this value, decides when the connection is considered lost). A value of 0 means the peripheral listens on every connection event, which gives you the fastest, most robust connection.
Connection PHY: this is the modulation on which the packets are sent. If both devices support 2MPHY, then the connection should be quicker.
Data Length/MTU Extension: these are two separate features, but I am lumping them together because the effect is the same - more bytes are sent per packet, which results in a higher throughput. The maximum value is 251 bytes per packet.
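To get a feel for how these parameters combine, here is a rough back-of-the-envelope sketch in Python. All the inputs (packets per connection event, a 244-byte payload, the latency value) are assumptions for illustration rather than values read from any API; real stacks add L2CAP/ATT overhead and decide themselves how many packets fit into one connection event.

    # Back-of-the-envelope sketch of how the parameters above combine.
    # The inputs are assumptions to play with, not values read from any API.

    def ble_throughput_bps(conn_interval_s, packets_per_event, payload_bytes):
        """Application-level throughput if `packets_per_event` packets of
        `payload_bytes` each are delivered every connection event."""
        return packets_per_event * payload_bytes * 8 / conn_interval_s

    def worst_case_delay_s(conn_interval_s, slave_latency):
        """Rough upper bound on the extra wait a packet can see when the
        peripheral skips events: (1 + slave latency) connection intervals."""
        return (1 + slave_latency) * conn_interval_s

    if __name__ == "__main__":
        interval = 0.0075   # 7.5 ms, the spec minimum mentioned above
        payload = 244       # assumed ATT payload with DLE (251-byte PDU minus headers)
        print(ble_throughput_bps(interval, 4, payload) / 1e6,
              "Mbit/s, assuming 4 packets per event")
        print(worst_case_delay_s(interval, 4) * 1000,
              "ms worst-case added latency with slave latency = 4")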
You can find more information about these parameters here:-
A Practical Guide to BLE Throughput
Maximizing BLE Throughput: Everything You Need to Know
Bluetooth 5 Speed - How to Achieve Maximum Throughput
And below are some other links that might help you understand what is supported on UWP:-
Bluetooth Developer FAQ
BluetoothLEConnectionParameters.OptimizedProperty
Bluetooth LE Preferred Connection Parameter Class
Bluetooth LE Connection PHY class
How to Change MTU Size and PHY on Windows UWP C++

Linux/Qt auto detect baud rate?

I'm in a situation where we are hooking up to a device that may speak a variety of different baud rates depending on the model, some of which may be non-standard, like 10000, but that's another problem for another day.
Ideally I could use Qt to auto detect the baud rate, but from my research that's likely not possible for a few reasons, which I'm okay with. However, is there any native Linux based method to auto detect the baud rate of the connected device? Even a 3rd party open source application could suffice.
Linux serial drivers don't support autobauding, because most hardware doesn't support it either; there's no agreement on how it would even work, since it's highly application-specific.
If you're using FTDI serial adapters, most of them support bit-bang mode, and you can use them in that mode as a digital oscilloscope to capture a bitstream that's very easy to autobaud on.
On other devices, the simplest path to autobauding is to set the port to 2-3x the highest baud rate you expect, then treat the input data like a chunked digital oscilloscope capture, taking the error bits into account, and use heuristics to detect the baud rate. It will succeed in a surprising number of cases, but you must get the statistical model of the data source right. I don't know of any pre-canned solutions for that.
Some additional kernel support could be added to better timestamp the input from the UART (whether hardware or USB) and thus decrease the uncertainty in your data, and with it the number of samples you need to detect the baud rate.
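As a very rough illustration of that heuristic, here is a minimal Python sketch. It assumes the RX line has already been captured as timestamped edge events (e.g. via FTDI bit-bang mode or a logic analyzer) and only shows the core of the estimation: take the shortest pulse as roughly one bit time and vote among standard rates. It is not a ready-made tool.

    # Minimal sketch of the oversampled-bitstream heuristic described above.
    # Input: (timestamp, level) edge events already captured from the RX line.
    from collections import Counter

    STANDARD_BAUDS = [1200, 2400, 4800, 9600, 19200, 38400, 57600, 115200]

    def estimate_baud(edges, tolerance=0.15):
        """edges: list of (timestamp_seconds, level) pairs, one per transition.
        Returns the most plausible standard baud rate, or None."""
        widths = [t2 - t1 for (t1, _), (t2, _) in zip(edges, edges[1:]) if t2 > t1]
        if not widths:
            return None
        # The shortest pulse is roughly one bit time; only consider rates
        # whose bit time is not much shorter than that.
        shortest = min(widths)
        candidates = [b for b in STANDARD_BAUDS if 1.0 / b >= shortest * (1 - tolerance)]
        votes = Counter()
        for baud in candidates:
            bit = 1.0 / baud
            for w in widths:
                n = round(w / bit)
                # Count pulses that are close to an integer number of bit times.
                if n >= 1 and abs(w - n * bit) <= tolerance * bit:
                    votes[baud] += 1
        if not votes:
            return None
        baud, score = votes.most_common(1)[0]
        return baud if score >= 0.9 * len(widths) else None

The real work, as noted above, is in the statistical model: the heuristic only behaves well if the captured data actually exercises short pulses (i.e. contains bit transitions at the true rate).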
"Some of which may be non-standard, like 10000, but that's another problem for another day."
No biggie. I figured it out 16 years ago :) This is the answer you're looking for. If you think that the API is sick as in very, very sick, then you'd be right.

PCIe cards interfere with each others function

Hello Good People of the Internet!
First time asking...
I have a modern PC running Fedora 24 with a real-time patch (CCRMA audio tools) and an ASUS Essence STX II stereo sound card installed on PCIe. With it we run a playback/capture application. We also need to integrate CAN and BLE into the system and have a PCIe card for each of these functions. The CAN PCIe card is from PEAK and the BLE card is an Intel 8260 M.2 card that HP has put on a PCIe card (AFAIK).
With only the audio card installed, everything works fine (using ALSA as the API). When the CAN and BLE cards are installed, the following is observed:
Playback works as before.
One capture channel returns only zero (0) or minus one (-1) in all samples.
The other capture channel returns values in the range -2..2, and when our application's signal processing is applied, low-quality but detectable versions of the expected results are produced.
The ALSA API reports no problems during setup and configuration.
CAN and BLE function as expected.
Without any deeper PCIe experience, I suspect that the CAN and/or BLE PCIe cards jumble the mapping of the sound card functions.
Can someone:
- Tell me if my hunch is in the ballpark?
- Inform me on where I might go for information on how to rectify the problem?
- ...or, share a solution?
Thanks!

Bluetooth SPP throughput

I am trying to figure out what the maximum throughput of a Bluetooth 2.1 SPP connection is.
I found two publications concerned with the topic (1, 2), and they both show diagrams of the throughput as a function of the signal-to-noise ratio (which I can assume to be perfect for my consideration) and the kind of ACL packet used. My problem is, I have no idea which ACL packets are used. How is this decision made? Is it made on the fly, as in "whatever is needed to transfer the current data is used"?
Furthermore, in the Serial Port Profile specification (chapter 2.3) I found this sentence:
This profile requires support for one-slot packets only. This means that this profile ensures that data rates up to 128 kbps can be used. Support for higher rates is optional.
The last sentence really confuses me. How do I find out whether this "option" applies in my case?
This means that in SPP mode, all Bluetooth modules should work at up to 128kbps, and some modules may work even faster.
Underneath SPP sits RFCOMM, which tries to deliver the packets as quickly as possible. If only one packet is sent per timeslot, you get the 128kbps. The firmware of the Bluetooth module, or the HCI driver, can however do things differently.
There are reports of SPP speeds up to 480kbps - however, this requires that both SPP modules are from the same vendor (e.g. BlueGiga iWrap modules can reach this speed).
On the other hand, if you're connecting to an unknown device, for example a BT112 or an RN41 module to an Android device, the actual usable SPP bandwidth can be much lower than 128 kbps (I have measurements as low as 10kbps).
In the case of some old-generation iPhones, the usable SPP bandwidth is around 8 kbps.
It is wise to treat "standards" and "datasheets" very conservatively, and to do your own measurements if the actual net data bandwidth is critical.
Even though BT with EDR has a theoretical on-air bitrate of 3Mbps, the actual usable net data bandwidth is far smaller.
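For a rough sense of where the quoted 128 kbps figure sits, here is the basic-rate arithmetic for one-slot ACL packets. These are upper bounds only, ignoring L2CAP/RFCOMM overhead, polling and retransmissions.

    # Rough arithmetic behind the one-slot figure quoted from the SPP spec.
    # Basic-rate numbers only; treat the results as upper bounds.

    SLOT_S = 625e-6          # one Bluetooth slot
    ONE_SLOT_PAYLOAD = {     # maximum user payload of one-slot ACL packets (bytes)
        "DM1": 17,           # FEC-protected
        "DH1": 27,           # no FEC
    }

    for pkt, payload in ONE_SLOT_PAYLOAD.items():
        # One data slot in each direction means one packet every two slots.
        rate_kbps = payload * 8 / (2 * SLOT_S) / 1000
        print(f"{pkt}: about {rate_kbps:.0f} kbit/s one-way")

This works out to roughly 109 kbit/s for DM1 and 173 kbit/s for DH1, so the 128 kbps quoted by the spec sits between the FEC-protected and unprotected one-slot cases.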

Does RSSI depend on beacon and device?

I've read in many places that RSSI is highly environment-specific (e.g., walls or weather), which can make it difficult to infer which beacon is closest in a Euclidean sense. I also gather that RSSI is measured in arbitrary units from 0 (good connection strength) to -100 (bad connection strength). In spite of these challenges, I have questions about the following two thought experiments related to the reliability of RSSI values for beacon <--> device communications.
Experiment 1. Given a particular beacon and two devices located at the exact same location, will those two devices register the same RSSI for that beacon?
Experiment 2. Given a particular device and two beacons located at the exact same location, will those two beacons register the same RSSI for that device?
To formalize this in a statistical sense, will p(signal | beacon1, device1) = p(signal | beacon2, device2) if beacon1-device1 are placed in exactly the same environment as beacon2-device2?
Since different antennas and devices have different RF properties, I'm going to go ahead and say that unless your beacons/devices are identical to each other, then no, you should not expect the same RSSI reading even if their locations are identical. This is because a device cannot know how much power was in an RF signal before it passed through its circuitry, and bigger, better antennas will transmit/receive better than crappier ones.
That said, RSSI values of 0 will be read as 0 by both devices, and the same goes for maximum RSSI values, assuming that the two devices use the same RSSI scale - which doesn't have to be the case, as Wikipedia notes: "As an example, Cisco Systems cards have a RSSI_Max value of 100 and will report 101 different power levels, where the RSSI value is 0 to 100. Another popular Wi-Fi chipset is made by Atheros. An Atheros based card will return an RSSI value of 0 to 127 (0x7f) with 128 (0x80) indicating an invalid value."
Anyway, if your devices are identical, then I would expect the readings to be identical as well, or at least very close to each other.
Besides the differences in hardware and transmission power, timing is also important. If the interval between two measurements, whether taken by the same device/beacon or by different ones, exceeds the channel coherence time, the RSSI may vary drastically. Coherence time in an indoor environment is on the order of 1s, and 10-100 times smaller outdoors.
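As a practical sketch of both points (vendor-specific scales and coherence time), the Python below normalizes raw readings to a common unit before comparing chipsets and only ranks beacons on averages taken over more than one coherence time. The scale ranges are illustrative, taken from the Wikipedia quote above; they are not official vendor conversion formulas.

    # Normalize vendor-specific RSSI scales before comparing readings, and
    # average over a window longer than the coherence time (~1 s indoors per
    # the answer above) before deciding which beacon is "closest".
    from statistics import mean

    RSSI_RANGE = {
        "cisco": (0, 100),     # reports 0..100
        "atheros": (0, 127),   # reports 0..127, 128 meaning "invalid"
    }

    def to_percent(chipset, raw):
        lo, hi = RSSI_RANGE[chipset]
        return 100.0 * (raw - lo) / (hi - lo)

    def closest_beacon(samples_by_beacon, margin=5.0):
        """samples_by_beacon: beacon id -> list of normalized samples gathered
        over more than one coherence time. Returns the beacon with the strongest
        average, or None when the lead is within the noise margin."""
        averages = {b: mean(s) for b, s in samples_by_beacon.items() if s}
        if len(averages) < 2:
            return next(iter(averages), None)
        ranked = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)
        (best, best_avg), (_, runner_up_avg) = ranked[0], ranked[1]
        return best if best_avg - runner_up_avg >= margin else None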
