How to measure audio delay between two Asterisk channels?

I have the following setup:
A CentOS machine running Asterisk, connected to a router via Ethernet. The router answers calls from my Asterisk console and is itself connected back to the CentOS machine via an analogue RJ11 cable and a Digium PCI card. (All of this is for testing the router, and everything runs fine.)
So I place the call and pick it up on the same machine.
How can I now measure the delay between the audio of both channels?
I already tried using the Monitor() application and estimating the delay from the signals with cross-correlation, but Monitor() does not seem to be an accurate tool for recording at a precise time. (It doesn't start recording at the moment I start the Playback() of my test file.)
Is there another way to measure the delay between caller and callee?
Thanks

You can use the Echo() application on one end and send a beep from the other end.
After that you can measure the delay between the beeps.
There is no application that performs such a measurement out of the box, but you could ask a developer to build one for you.
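If you do capture both legs as audio files (for example with Monitor(), whose -in and -out recordings for a single channel start together), the beep-to-beep delay can be estimated offline with cross-correlation. A minimal sketch in Python, assuming SciPy is available and the two recordings are mono WAV files at the same sample rate; the file names are placeholders:

import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

# Placeholder file names: one recording per leg of the call.
rate_a, a = wavfile.read("caller_leg.wav")
rate_b, b = wavfile.read("callee_leg.wav")
assert rate_a == rate_b, "both legs must share one sample rate"

# Work in float to avoid integer overflow during the correlation.
a = a.astype(np.float64)
b = b.astype(np.float64)

# Full cross-correlation; the peak marks the best alignment of the beeps.
c = correlate(a, b, mode="full")
lag = int(np.argmax(c)) - (len(b) - 1)  # positive lag: a trails b

print(f"delay: {lag} samples = {1000.0 * lag / rate_a:.1f} ms")

The number is only meaningful if both recordings start at a known common instant, which is exactly the Monitor() caveat from the question, so verify the start times before trusting the result.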

Related

Has anyone used bleak to connect multiple BLE devices and receive notifications from all of them simultaneously?

I am able to use bleak and get data from all 5 BLE sensors. The problem is that I am unable to identify which data came from which device; I need a string with the device's address alongside the data itself.
I was able to get data simultaneously from all the BLE modules using bleak on Windows, as well as on a Raspberry Pi.
The only caveat on Windows is the adapter: use Bluetooth 4.2 or above, which handles high data rates and connections to multiple devices much better. With a 4.0 adapter I ran into one or two exceptions every time I started the script, and the maximum number of connections I got was 3.
When I tried the script on a Raspberry Pi 3B+, whose on-board chip is Bluetooth 4.2, it delivered a high data rate and all 5 of my sensors stayed connected simultaneously.
Also, the two_devices example in the bleak source code on GitHub is a very good starting point for further coding.
If you run the script and see the data on the console but cannot tell which device it came from, use functools: it injects the client you are currently connected to into the callback function, which makes the work much easier.
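A minimal sketch of that functools approach, assuming the usual bleak notification callback signature; the characteristic UUID and device addresses are placeholders:

import asyncio
from functools import partial
from bleak import BleakClient

CHAR_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"  # placeholder UUID
ADDRESSES = ["AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02"]  # placeholders

def handle_notify(client, sender, data):
    # 'client' is injected by functools.partial, so every notification
    # arrives tagged with the address of the device that sent it.
    print(f"{client.address}: {data.hex()}")

async def run(address):
    async with BleakClient(address) as client:
        # Bind this client as the first argument of the callback.
        await client.start_notify(CHAR_UUID, partial(handle_notify, client))
        await asyncio.sleep(30.0)  # collect notifications for a while

async def main():
    await asyncio.gather(*(run(addr) for addr in ADDRESSES))

asyncio.run(main())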
Check this out on Github: https://github.com/hbldh/bleak/issues/601

Constant Delay in Bluetooth Low Energy (BLE) data transmission

I am evaluating the suitability of different wireless interfaces for our project on two Raspberry Pi 4s, and currently I'm looking at Bluetooth Low Energy. I have written a central and a peripheral application with the Qt framework (5.15). In my case the latency between messages matters because of some security aspects. Each command is around 80-100 bytes. In one of my tests I sent 80-byte commands every 80 ms; ideally the messages should be received on the other device at 80 ms intervals as well. Over the LAN (TCP) interface this test works well.
For Bluetooth Low Energy I observed that messages sent from peripheral to central arrive with no large delay. I got different results in the central-to-peripheral direction: there the messages arrived quite consistently at intervals of 100 ms to 150 ms. There can't be much magic behind this, so is there a plausible explanation? I tested it with a Python script as well and observed the same results, so the Qt implementation shouldn't be the problem.
During my research I found that the connection interval may influence this, but in Qt the QLowEnergyConnectionParameterRequest (QLowEnergyConnectionParameters Class | Qt Bluetooth 5.15.4) doesn't work for me. Is there a command to set the connection interval for test purposes on the Linux command line?
Kind regards,
BenFR
It is possible that your central-to-peripheral traffic is slower because WRITE is used instead of WRITE WITHOUT RESPONSE. The difference is that WRITE waits for an acknowledgement, which slows the communication down, while WRITE WITHOUT RESPONSE behaves much like notifications/indications in that there is no ACK at the ATT layer. You can change this by changing the write mode in your application and ensuring that the peripheral's characteristic supports WriteNoResponse.
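As an illustration only (the question uses Qt, where the equivalent is passing QLowEnergyService::WriteWithoutResponse as the write mode), here is how the flag looks in bleak, the Python library from the previous question; the address and characteristic UUID are placeholders:

import asyncio
from bleak import BleakClient

ADDRESS = "AA:BB:CC:DD:EE:FF"  # placeholder peripheral address
CMD_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"  # placeholder UUID

async def main():
    async with BleakClient(ADDRESS) as client:
        payload = bytes(80)  # an 80-byte command, as in the test above
        # response=True  -> ATT Write Request: each write waits for an ACK,
        #                   so it is paced by the connection interval.
        # response=False -> ATT Write Command: no ACK at the ATT layer,
        #                   so writes are not throttled by the round trip.
        await client.write_gatt_char(CMD_UUID, payload, response=False)

asyncio.run(main())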
Regarding changing the connection interval, the change needs to be accepted from the remote side in order for it to take effect. In other words, if you are requesting the connection parameter change from the peripheral, then the central needs to have code to receive this connection parameter change request and accept it.
Have a look at the links below for more information:
How does BLE parameter negotiation work
Understand BLE connection intervals and events
The different types of BLE write

Task schedule management

I am building a pet feeder with a React Native app, a Node.js server, and an Arduino (ESP32), but I can't figure out how to make it drop food at specific times.
I have looked into Node.js libraries like node-schedule and cron, but I can't work them out, or they don't seem to fit my needs.
At the moment I can make it drop food when I press a button in my app, but that alone is too simple (I want both manual AND automated feeding).
My intention is to schedule feeding times via the app, for example at 9:00, 15:00 and 21:00, with some sort of alarms, while also being able to check the schedule on demand and edit or delete entries.
Any ideas on how I could do this, please?
You don't necessarily have to trigger the "drop food" command from the Node app. I've written firmware for a device that connects to Wi-Fi, updates its internal date/time from an NTP server, then wakes up at a specified time every day to connect to a server and fetch setting updates. Ours is a battery-powered device, so it doesn't stay connected to the server all the time. I used the ESP-IDF, but the code was simple enough, and from my research you can do the same with an ESP32 using the Arduino Core.
Basic Idea
You could:
Set the times you want feedings to occur in the app, send those times to the device (over BLE or via your Node app), and store them in flash
Calculate the number of milliseconds until the next feeding (see the sketch after these steps)
Set a FreeRTOS timer to interrupt after that number of milliseconds to trigger a feeding event
Then after a feeding event occurs:
Check Flash for the next feeding event
Calculate the number of milliseconds
Set a FreeRTOS timer to interrupt and trigger a feeding event
Repeat
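The "milliseconds until the next feeding" arithmetic is the same whatever ends up running it. A minimal sketch in Python, using the example schedule from the question; on the ESP32 you would do the equivalent in C and hand the result to a FreeRTOS timer:

from datetime import datetime, timedelta

# Example schedule from the question, stored as (hour, minute) pairs.
FEEDINGS = [(9, 0), (15, 0), (21, 0)]

def ms_until_next_feeding(now: datetime) -> int:
    """Milliseconds from 'now' until the next scheduled feeding."""
    candidates = []
    for hour, minute in FEEDINGS:
        t = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
        if t <= now:  # already passed today, so schedule for tomorrow
            t += timedelta(days=1)
        candidates.append(t)
    return int((min(candidates) - now).total_seconds() * 1000)

print(ms_until_next_feeding(datetime.now()))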
Resources:
Setting Local Time on Arduino using NTP
Using FreeRTOS timer interrupts on Arduino

How to route microphone & speaker audio between virtual machines?

I'm trying to create an interactive voice-tree for an art project. Think of something like a choose-your-own-adventure, but over the phone and with voice commands. I already have a fair amount of experience working with Construct 2 (game-making software) and can easily build a branching, voice-controlled interaction loadable in a modern browser. For reasons relevant to the overall story, players need to connect to the interaction through a Google Voice number they will call.
I already have a GV number and have written an AutoHotKey script to auto-answer the Hangouts call, but I'm stuck trying to route the audio from the caller in Hangouts to the browser AND the audio response output of the browser back to the caller.
I know of an extremely primitive way to accomplish this, which I've illustrated with a diagram (image not included here).
Unfortunately, this is rather cumbersome, and I suspect I can achieve my goal through virtualization, or at the very least with some sort of attenuation cables between two physical machines (I tried running a generic AUX cable between two laptops, but couldn't get the speaker audio of one to feed the microphone input of the other).
I've been experimenting on Parallels running Windows 8.1 with Virtual Audio Cable (no luck), JACK (overkill for this), CheVolume (too limited), and IndieVolume (too limited).
I suspect VAC would be the best bet, but I can't seem to find a way to route Firefox's audio output to a microphone input that feeds Chrome, and vice versa. If I try to do it all in a single virtual machine, I have to use two different browsers for the voice-tree webpage and the Hangouts call, since Hangouts pushes its audio through Chrome (even in the stand-alone application).
Is there any way to route microphone input and speaker output separately between two virtual machines? If not, could I still accomplish this with a specific type of cable between two laptops running Windows 7/8 with generic audio jacks?

Convert voip audio to text for debugging

While working on VoIP apps, I usually end up picking up one phone, talking into it, then picking up the other phone and checking whether I hear myself. It gets even trickier when I'm building apps with three-way calling.
Using a softphone doesn't help.
Ideally, I want to be able to run multiple instances of some command-line SIP UA in which I can dial a number. Once the UA has dialed and the other party has picked up, both agents exchange audio. But instead of my having to listen to the audio, each app displays some text identifying the other end, perhaps derived from a frequency pattern that can be converted to text.
Can something like this be done? I'm building apps against FreeSWITCH. Ideas on how to debug VoIP apps are also welcome in the comments.
Yes, absolutely. The easiest approach is to have a separate FreeSWITCH server that is used for placing the test calls and sending/receiving your test signals.
tone_stream will generate the tones at frequencies that you need: https://freeswitch.org/confluence/display/FREESWITCH/Tone_stream
tone_detect can detect the frequencies and execute actions, or even better, generate events that you can catch over an ESL socket: https://freeswitch.org/confluence/display/FREESWITCH/mod_dptools%3A+tone_detect
The best way to generate such calls is to use a dialer script which communicates to FreeSWITCH via Event Socket. Here you can see some (working) examples that I made with Perl:
https://github.com/voxserv/rring/blob/master/lib/Rring/Caller/FreeSWITCH.pm -- this is part of a test suite that I built for testing a provider's SIP infrastructure. As you can see, it connects to FreeSWITCH, starts an event listener, and then originates the call while also expecting an inbound call. It then sends and analyzes DTMF.
https://github.com/voxserv/freeswitch-helper-scripts/tree/master/esl -- these are special-purpose dialers, you can also use them as examples.
https://github.com/voxserv/freeswitch-perf-dialer -- this one generates a series of calls, like SIPp does.
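If Perl isn't your thing, the same pattern works from Python via the ESL bindings that ship with FreeSWITCH. A rough sketch; the gateway, destination number, and tone are placeholders, and you should narrow the event subscription to whatever event your tone_detect configuration fires:

import ESL  # Event Socket Library bindings shipped with FreeSWITCH

# Connect to the event socket of the test FreeSWITCH server.
con = ESL.ESLconnection("127.0.0.1", "8021", "ClueCon")
if not con.connected():
    raise SystemExit("cannot reach the FreeSWITCH event socket")

con.events("plain", "ALL")  # coarse; filter down to the tone-detect event

# Originate a call that plays a 2-second 800 Hz test tone to the far end.
con.api("originate",
        "sofia/gateway/test_gw/1000 &playback(tone_stream://%(2000,0,800))")

# Print incoming events; the one reporting your detected tone identifies
# the far end without anyone having to listen to the audio.
while True:
    e = con.recvEvent()
    if e is None:
        break
    print(e.getHeader("Event-Name"))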
Another technique is to play a sample audio file, record the audio received at the other end (call recording), and then compare the two. This works for setups where the systems are located in different places and you are testing end-to-end quality.
There are plenty of audio comparison tools (such as PESQ) that should help you not just detect the presence of audio but also get statistics on how various parameters of the audio stream have degraded.
This can be extended to run test analyses of FreeSWITCH patches as they are released, and to enforce whatever other hooks or quality standards you want.
