trace of VoIP packet delay - statistics

I am interested in packet delays due to network jitter in VoIP calls. The distribution of the delay should be something like a Gaussian with a heavy tail to the right of the peak, truncated on the left. Does anyone know where I can download a trace of the packets sent during a VoIP call?

Why don't you simply install two free VoIP clients and capture a Wireshark trace yourself?
For H.323 you could use Spranto; for SIP, X-Lite.
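Once you have such a capture, the delay statistics are straightforward to extract. Here is a minimal sketch using scapy (the file name and RTP port below are placeholders; identify the actual port in Wireshark):

from scapy.all import rdpcap, UDP

RTP_PORT = 5004  # placeholder; check the real RTP port in Wireshark

pkts = [p for p in rdpcap("call.pcap") if UDP in p and p[UDP].dport == RTP_PORT]
arrivals = [float(p.time) for p in pkts]
gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]

# With a fixed packetization interval (e.g. 20 ms for G.711), deviations
# of these gaps from that interval approximate the jitter distribution.
for gap in gaps[:10]:
    print("%.2f ms" % (gap * 1000))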

Related

Constant Delay in Bluetooth Low Energy (BLE) data transmission

I am trying to evaluate the suitability of different wireless interfaces for our project on two Raspberry Pi 4 boards, and currently I'm evaluating Bluetooth Low Energy. To that end I have written a central and a peripheral application with the Qt framework (5.15). In my case the latency between messages is important, because of some security aspects. The message size of each command is around 80-100 bytes. In one of my tests I sent 80-byte commands every 80 ms. Ideally the messages should be received on the other device at 80 ms intervals as well. For the LAN (TCP) interface this test works well.
For Bluetooth Low Energy I observed that messages sent from peripheral to central work quite well, with no significant delay. I got different results for the central-to-peripheral direction: there, the messages arrived at intervals of 100 ms to 150 ms, quite consistently. It seems there can't be much magic behind this, so is there a plausible explanation? I tested it with a Python script as well and observed the same results, so the Qt implementation shouldn't be the problem.
During my research I found out that the connection interval may influence this, but in Qt the QLowEnergyConnectionParameterRequest (QLowEnergyConnectionParameters Class | Qt Bluetooth 5.15.4) doesn't work for me. Is there any command with which I can set the connection interval for test purposes on the Linux command line?
Kind regards,
BenFR
It is possible that your code is slower from central to peripheral because WRITE is used instead of WRITE WITHOUT RESPONSE. The difference is that WRITE waits for an acknowledgement, therefore slowing the communication down, while WRITE WITHOUT RESPONSE is very much like how notifications/indications work in that there's no ACK at the ATT layer. You can change this by changing the write mode of your application and ensuring that the peripheral's characteristic supports WriteNoResponse.
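Since you mentioned testing from Python as well, here is a minimal sketch of the difference using the bluepy library (the address and handle are placeholders); in Qt the corresponding write mode is QLowEnergyService::WriteWithoutResponse:

from bluepy.btle import Peripheral

dev = Peripheral("AA:BB:CC:DD:EE:FF")  # placeholder address
handle = 0x000e                        # placeholder value handle

# WRITE: blocks until the ATT-layer acknowledgement arrives.
dev.writeCharacteristic(handle, b"\x00" * 80, withResponse=True)

# WRITE WITHOUT RESPONSE: no ACK round-trip, so back-to-back writes
# are limited only by the connection interval.
dev.writeCharacteristic(handle, b"\x00" * 80, withResponse=False)

dev.disconnect()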
Regarding changing the connection interval, the change needs to be accepted from the remote side in order for it to take effect. In other words, if you are requesting the connection parameter change from the peripheral, then the central needs to have code to receive this connection parameter change request and accept it.
Have a look at the links below for more information:
How does BLE parameter negotiation work
Understand BLE connection intervals and events
The different types of BLE write

Should I be using BULKIO to output VITA49 packets from a REDHAWK device?

I feel like I am missing something: all of the VITA49 examples seem to use TCP or UDP.
Is there a specification or standard way of providing VITA49 packets for consumption?
Should I be performing the conversion and providing standard complex samples with Keywords?
I have looked at the rh.vita49 loopback demo waveform and the MSDD device source, as well as the sourceVITA49 and sinkVITA49 components. All of these use either a TCP or UDP packet stream.
If the standard is to use sockets to pass VITA49 packets, where should I look to understand how to construct a device that adheres to the standard?
ANSWER
I was able to talk to an experienced REDHAWK developer.
There is no standard per se. That said, the approach I took was to make use of the socket.sourceVita49 asset. This asset consumes the VITA49 packets and inserts the appropriate keywords etc. based on the context packets. This required me to update my device to set the hardware up to send VITA49 via TCP. It actually turned out to be the easier solution for me, as I didn't have to bust the VRT apart.
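As a rough illustration, that flow can be exercised in the REDHAWK sandbox along these lines (a sketch only; the asset name and configuration details are assumptions, so check the asset's documentation and PRF):

from ossie.utils import sb

src = sb.launch("rh.SourceVITA49")  # assumed asset name
sink = sb.DataSink()

# Placeholder: configure the source's network properties here to point
# at the hardware's VITA49 TCP stream (see the asset's PRF for names).

src.connect(sink)
sb.start()

# The source emits time-stamped samples whose SRI keywords were derived
# from the VITA49 context packets.
data = sink.getData()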
Examples:
The best example I found of consuming VITA49 was the MSDD device asset.
NOTE:
After reviewing the MSDD, it does not look too difficult to create a device that consumes VITA49 VRL/VRT packets and produces time-stamped samples. I will be investigating that in the future.

BlueZ not reading advertisement packets

I'm trying to build a small program that reads the BLE beacons around my devices and parses the ones I'm interested in, to publish on MQTT. I'm using Raspberry Pis to run the code and I develop on my Mac. The language is JS (Node 10.x); my Pis run the latest Buster, i.e. BlueZ 5.50, with a fork of Noble to interface with the Bluetooth layer.
For some reason, on one of the Pis that I moved to an open area (in order to get clear readings), I only receive the scan response packets; I never receive the advertisement packets. I do sometimes receive the advertisement packet from one of the devices that is quite far away, which makes me suspect that signal strength comes into play here. On the Pis in the network cabinet (a small Faraday cage) I do inconsistently get both packets every now and then (the reason for dedicating a Pi to an open location).
Is there any way to force Bluez to always receive the Advertisement packet? Is there a bug somehow or a feature that I am not using properly?
EDIT
I installed tshark to monitor more closely, and I do see the advertisement packets reaching my device. This means that BlueZ is ignoring them. Is there any complete documentation on how to use bluetoothctl and how to configure the Bluetooth daemon/tools so that these packets are read?
After many days of investigating, I managed to get the desired result for my project. I first thought of using the bluewalker project to access the raw packets. With it you can scan in passive mode, meaning only the advertising packet is retrieved.
By looking more deeply into the noble project, the one I actually use to interface with the Bluetooth layer, I found a workaround to scan in passive mode (https://github.com/noble/noble/issues/701), but also an environment variable that captures both advertising and scan response packets: NOBLE_REPORT_ALL_HCI_EVENTS. Setting this to 1 gives me exactly what I need: both the advertising packet, which contains the data that changes more frequently, and the scan response, which contains more data, such as min/max 24h values. As a matter of fact, combining this setting with duplicates=false seems to give me only the advertising data, just like in passive mode.
Question still open:
Having this, I still don't know how to use bluetoothctl to display the advertising data along with the scan response, nor did I find a way to force the scan mode to passive. I could investigate hcitool and hciconfig further, but they are deprecated (although every article on the internet refers to them).

Why is my iBeacon signal not detected every time it is sent according to the interval?

I have a small BLE beacon which is configured to send iBeacon packets every 1000 ms.
In my use case I want to detect the signal on multiple receivers every time it is sent. However, the detection is not reliable, no matter which receiving device and software I use (phone, computer, Raspberry Pi). The signal is sometimes detected after 2 seconds, other times after 5, 6 or whatever; there seems to be no pattern behind it.
It also seems that sometimes the signal is received on one receiver but not on the others, while definitely being in range! And the area I am testing in is not "problematic".
What could be the problem?
Not 100% of transmitted beacon packets are ever received. There are a number of reasons for this, including radio noise, packet collisions and channel hopping. That said, the typical detection rate in a quiet radio environment is around 90 percent.
If you are seeing a much lower rate than this on multiple receivers, I would check the transmitter. First, use one of your devices to transmit a software beacon (Android and iOS have free apps like Locate Beacon that do this). If you get a higher detection rate with a different device transmitting, the issue may be your transmitter; a measurement sketch follows the list below.
A few possible issues with it:
A bad antenna sending out a very weak signal. (Measure the received RSSI when you do get a detection and verify it is over -60 dBm.)
A weak transmitter power setting. See if you can configure this higher.
Advertising on the wrong channels.
Advertising being stopped before the packet can go out. Leave the advertiser on for at least twice the advertising interval to ensure at least one packet gets out.
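To put numbers on the miss rate, here is a minimal measurement sketch, assuming the Python bleak library (any scanning stack will do; the beacon address is a placeholder):

import asyncio
import time
from bleak import BleakScanner

BEACON_ADDR = "AA:BB:CC:DD:EE:FF"  # placeholder
last_seen = None

def on_detection(device, advertisement_data):
    global last_seen
    if device.address != BEACON_ADDR:
        return
    now = time.monotonic()
    if last_seen is not None:
        # With a 1000 ms advertising interval, a gap of ~2000 ms means
        # one missed packet, ~3000 ms means two, and so on. Note the
        # host stack may coalesce duplicate reports, which inflates gaps.
        print("gap: %.0f ms, RSSI: %s dBm"
              % ((now - last_seen) * 1000, advertisement_data.rssi))
    last_seen = now

async def main():
    scanner = BleakScanner(on_detection)
    await scanner.start()
    await asyncio.sleep(60)  # measure for one minute
    await scanner.stop()

asyncio.run(main())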
I assume you use a large enough scan window for your scan interval, so that your receiver's radio is actually turned on most of the time. For example, with a 100 ms scan window and a 1000 ms scan interval the radio listens only 10% of the time, and most advertisements from a 1000 ms beacon will be missed.
You could try sending advertising packets of the type ADV_NONCONN_IND (non-connectable, with no scan response packet). That way, if the receiver radio scans at a 100% duty cycle, it should see the packet.
Otherwise, if you use normal ADV_IND packets, then at least Android always waits for the SCAN_RSP packet before it delivers anything to the scanning app. But if there are multiple scanners nearby, your peripheral can only respond to a single scanner's SCAN_REQ for each advertising packet. To avoid collisions of SCAN_REQ packets in the air, Bluetooth controllers also back off if they don't get a SCAN_RSP in return. With a BLE sniffer you can see all three kinds of packets and what happens when multiple scanners are nearby.
Read the BLE Link Layer part of the Bluetooth Core specification for more details.

Linux: packet reordering simulation

I would like to simulate packet reordering for UDP packets on Linux, to measure the performance and fault tolerance of my application.
Is there a simple way to do this?
Look into WANem. From its description:
WANem thus allows the application development team to set up a transparent application gateway which can be used to simulate WAN characteristics like network delay, packet loss, packet corruption, disconnections, packet re-ordering, jitter, etc.
You can use the netem feature built into the Linux kernel; it ships with most modern distributions. netem is a traffic control (queueing) discipline that deliberately delays, drops and reorders packets, and it is highly configurable.
This only works for outgoing packets (because the queues are outbound only), so you may wish to place a router host with netem between the two test machines and run netem on both of its interfaces (with differing parameters if you like).
The easiest way to achieve this is to run netem in a VM to route between two VM networks. I've found this to be quite handy.
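For reordering specifically, the canonical netem recipe sends a fraction of packets immediately and delays the rest. Here is a sketch of driving it from Python (the interface name and percentages are placeholders, and root privileges are required):

import subprocess

IFACE = "eth0"  # placeholder

def tc(*args):
    subprocess.run(["tc"] + list(args), check=True)

# 25% of packets (with 50% correlation) are sent immediately; the rest
# are delayed 10 ms, which reorders them relative to their neighbours.
tc("qdisc", "add", "dev", IFACE, "root", "netem",
   "delay", "10ms", "reorder", "25%", "50%")

# ... run the UDP test here ...

# Remove the qdisc to restore normal behaviour.
tc("qdisc", "del", "dev", IFACE, "root")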
You could try scapy. It's a Python library for manipulating packets. You could capture a pcap session with tcpdump, Wireshark, whatever, and then replay the captured packets in arbitrary order with scapy.
from scapy.all import rdpcap, sendp

a = rdpcap("/spare/captures/isakmp.cap")
# list.reverse() reverses in place and returns None; use reversed()
for pkt in reversed(a):
    sendp(pkt)
Depending on how you captured the packets, you may need send() (layer 3) rather than sendp() (layer 2).
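If strict reversal is too pathological, a random shuffle of the same capture gives milder, less deterministic reordering:

import random
from scapy.all import rdpcap, sendp

pkts = list(rdpcap("/spare/captures/isakmp.cap"))
random.shuffle(pkts)  # arbitrary order instead of strict reversal
for pkt in pkts:
    sendp(pkt)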
