Open source signal processing libraries - audio

Are there any open source libraries/projects that work in a similar way to http://www.tagattitude.fr/en/products/technology?
I am trying to understand the process. At first I thought it might work the way sending a fax to a fax machine does.
It basically uses the mobile phone's microphone as a sensor and its audio channel as a transport.
Are there any libraries for generating such a signal and then decoding it?

Take a look at this library: saajoby.
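To understand the process: this is essentially an acoustic modem, much like the fax analogy in the question. Here is a minimal sketch of one common scheme, audio FSK (one tone per bit value). The baud rate and the Bell-202-style mark/space frequencies are my own assumptions, not Tagattitude's actual protocol:

    import numpy as np
    from scipy.io import wavfile

    RATE = 44100
    BAUD = 100                    # bits per second
    F_MARK, F_SPACE = 1200, 2200  # Hz for a 1-bit and a 0-bit (Bell-202-style)

    def encode(bits):
        # One sine burst per bit, at the mark or space frequency.
        n = RATE // BAUD
        t = np.arange(n) / RATE
        return np.concatenate(
            [np.sin(2 * np.pi * (F_MARK if b else F_SPACE) * t) for b in bits])

    def decode(signal):
        # Classify each bit-sized chunk by its dominant frequency.
        n = RATE // BAUD
        bits = []
        for i in range(0, len(signal) - n + 1, n):
            spectrum = np.abs(np.fft.rfft(signal[i:i + n]))
            peak = np.fft.rfftfreq(n, 1 / RATE)[np.argmax(spectrum)]
            bits.append(1 if abs(peak - F_MARK) < abs(peak - F_SPACE) else 0)
        return bits

    payload = [1, 0, 1, 1, 0, 0, 1, 0]
    audio = encode(payload)
    wavfile.write("fsk.wav", RATE, (audio * 32767).astype(np.int16))
    assert decode(audio) == payload

A production system adds a sync preamble, error correction, and robustness to cheap speakers and microphones; open-source projects in this space, such as libquiet (https://github.com/quiet/quiet), handle those details.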

Related

Can I make a media player pause/play from another C program by using MPRIS/D-Bus or some other means? (on Linux)

I have a program written in C on Linux that can send/receive messages over BLE. I'd like this program to communicate with a media player running at the same time - specifically, to "pause" and "play" the media player depending on the messages it receives over the BLE connection. I looked into embedding a media player in the C program itself and found that this is no trivial task. So, how can I make my program communicate with another program such as a media player? I have read a bit about MPRIS/D-Bus and calling media player APIs; this seems like the way to go, but I'm unfamiliar with it, so I'm not sure whether it's possible and, if so, how I'd go about implementing it.
Edit: Would it be a better idea to try and make a media player with something like OpenCV?
Playerctl may come in handy.
Playerctl is a command-line utility and library for controlling media players that implement the MPRIS D-Bus Interface Specification.
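Since the asker's program is in C, the simplest route is to shell out to the playerctl CLI, or to call the MPRIS D-Bus method org.mpris.MediaPlayer2.Player.Pause directly via GDBus/libdbus. A minimal Python sketch of the shape of it, with the BLE side reduced to a hypothetical callback:

    import subprocess

    def on_ble_message(msg: bytes) -> None:
        # Hypothetical BLE callback: map incoming messages to playerctl.
        # "pause" and "play" are real playerctl subcommands; the message
        # values are placeholders for whatever your protocol defines.
        if msg == b"PAUSE":
            subprocess.run(["playerctl", "pause"], check=False)
        elif msg == b"PLAY":
            subprocess.run(["playerctl", "play"], check=False)

From C, the equivalent is system("playerctl pause"); no media player needs to be embedded in your program at all.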

Getting data from audio mixer

I am trying to build an open-source in-ear monitoring system. I have created the UI, and I am wondering how to get hold of the channels on an audio mixing console so that I can edit them and stream them to each musician. Is there a certain protocol that all mixers use? You can find the project at https://gitlab.com/openstagemix. We would love to have contributors.
I can't really test whether this is the correct answer, as I'm stuck at home during the coronavirus period. But many mixers support something called OSC (Open Sound Control), a protocol for communication between mixers, synthesizers, and computers. You can find more information here: http://opensoundcontrol.org/introduction-osc.
Update:
As it turns out, I'm not going with OSC: I'll use the AES67 standard to receive the audio from my mixer and process it from there, since my mixer is Ethernet-capable.
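For the OSC route, sending a control message is only a few lines with the python-osc package. The IP address, port, and fader path below are assumptions modeled on a Behringer X32-style console; check your mixer's OSC documentation for the real address scheme:

    from pythonosc.udp_client import SimpleUDPClient

    # Assumed values: X32-style consoles listen for OSC on UDP port 10023
    # and expose channel faders at paths like /ch/01/mix/fader.
    client = SimpleUDPClient("192.168.0.64", 10023)
    client.send_message("/ch/01/mix/fader", 0.75)  # set channel 1 fader to 75%

Note the distinction behind the update above: OSC carries control data (fader moves, mutes), while AES67 carries the audio streams themselves over Ethernet.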

Access to audio from audio card with WebRTC

I'd like to capture the audio from my computer's sound card and dispatch it with WebRTC. However, I'm not sure whether it's possible to access the audio produced directly by the computer.
According to this repo, https://github.com/niklasenbom/RecordingApp/blob/master/app.js, there is some system-audio handling, but I'm not sure whether it's what I'm looking for.
You can do it using NAudio. I actually did the same project myself and will put it on GitHub in a few weeks, then update this answer. You can configure the frequency etc. and use its DataAvailable event to dispatch the sound to registered clients.
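NAudio is a C#/Windows library (its WasapiLoopbackCapture is the usual way to grab system audio there). As a rough sketch of the same idea in Python, the soundcard package can open a speaker as a loopback microphone; loopback support varies by platform, so treat this as a starting point:

    import soundcard as sc

    # Open the default speaker as a loopback "microphone" so we capture
    # what the machine is playing rather than the physical mic input.
    speaker = sc.default_speaker()
    loopback = sc.get_microphone(speaker.name, include_loopback=True)

    with loopback.recorder(samplerate=48000) as rec:
        frames = rec.record(numframes=48000)  # one second of system audio
        # `frames` is a float32 NumPy array; from here, hand the samples
        # to your WebRTC stack (e.g. a custom aiortc MediaStreamTrack).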

Convert VoIP audio to text for debugging

While working on VoIP apps, I usually end up picking up one phone, talking into it, then picking up the other phone to check whether I hear myself. This gets even trickier when I'm building apps with three-way calling.
Using a softphone doesn't help.
Ideally, I want to run multiple instances of some command-line SIP UA from which I can dial a number. Once the UA has dialed and the other party has picked up, both agents exchange audio. But instead of my having to listen to that audio, each app would display some text identifying the other end - possibly a frequency pattern that can be converted to text and shown in the app.
Can something like this be done? I'm building apps against FreeSWITCH; ideas on how to debug VoIP apps in general are also welcome in the comments.
Yes, absolutely. The easiest approach would be a separate FreeSWITCH server that is used for placing the test calls and sending/receiving your test signals.
tone_stream will generate the tones at frequencies that you need: https://freeswitch.org/confluence/display/FREESWITCH/Tone_stream
tone_detect can detect the frequencies and execute actions, or even better, generate events that you can catch over an ESL socket: https://freeswitch.org/confluence/display/FREESWITCH/mod_dptools%3A+tone_detect
The best way to generate such calls is a dialer script that talks to FreeSWITCH via the Event Socket (there is a minimal Python sketch after the examples below). Here are some working examples that I made with Perl:
https://github.com/voxserv/rring/blob/master/lib/Rring/Caller/FreeSWITCH.pm -- this is part of a test suite that I built for testing a provider's SIP infrastructure. As you can see, it connects to FreeSWITCH, starts an event listener, then originates a call while also expecting an inbound call, and finally sends and analyzes DTMF.
https://github.com/voxserv/freeswitch-helper-scripts/tree/master/esl -- these are special-purpose dialers, you can also use them as examples.
https://github.com/voxserv/freeswitch-perf-dialer -- this one generates a series of calls, like SIPp does.
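Tying the pieces together, here is a minimal Python sketch of such a dialer, assuming FreeSWITCH's bundled ESL module, the default event-socket credentials, and a placeholder gateway/extension:

    import ESL  # the ESL binding that ships with FreeSWITCH

    con = ESL.ESLconnection("127.0.0.1", "8021", "ClueCon")
    if not con.connected():
        raise SystemExit("cannot reach the FreeSWITCH event socket")

    # tone_detect fires DETECTED_TONE events when no action is attached.
    con.events("plain", "DETECTED_TONE")

    # Originate a test call that plays a 1004 Hz tone to the far end.
    # "sofia/gateway/testgw/3000" is a placeholder for your own route.
    con.api("originate",
            "sofia/gateway/testgw/3000 &playback(tone_stream://%(2000,0,1004))")

    while True:
        e = con.recvEvent()
        if e and e.getHeader("Event-Name") == "DETECTED_TONE":
            print("tone detected on call", e.getHeader("Unique-ID"))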
Another technique is to play a sample audio file, record the audio received at the other end (call recording), and then compare the two. This works for setups where the systems are located in different places and you are testing end-to-end quality.
There are plenty of audio comparison tools (PESQ, for example) that help you not just detect the presence of audio but also get statistics on how various parameters of the audio stream have degraded.
This can be extended to regression-testing FreeSWITCH patches as they are released, and to any other hooks or quality standards you want to enforce.
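For the comparison step, the Python pesq package (an ITU-T P.862 implementation) reduces this to a few lines. The file names are hypothetical, and wideband mode expects 16 kHz WAV files:

    from scipy.io import wavfile
    from pesq import pesq  # pip install pesq

    rate_ref, ref = wavfile.read("sent.wav")      # sample played into the call
    rate_deg, deg = wavfile.read("received.wav")  # recording from the far end

    # Wideband PESQ returns a MOS-LQO score, roughly 1.0 (bad) to 4.6 (clean).
    print("PESQ:", pesq(rate_ref, ref, deg, "wb"))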

How to capture Siri's audio data

I'm currently developing a Cydia tweak for speaker recognition on the iPhone. The tweak identifies whether the current user is the phone's owner (after training). It has already been implemented on Android, and we have already compiled and tested the core library. The only difficulty we are facing is how to capture the audio data from Siri. We have tried:
Hooking "- (void)_tellSpeechDelegateRecordingWillBegin" and "- (void)_tellSpeechDelegateRecordingDidEnd" and using an AVAudioRecorder to record the audio - this failed because every AVAudioSession is interrupted while Siri is recording.
Hooking "- (void)startSpeechRequestWithSpeechFileAtURL:(id)arg1". This function seemed to be related to the audio file, but we couldn't get it hooked with the Logos tweak framework.
There are two possible ways we are considering:
Implementing a low-level audio recorder that can bypass Siri's interruption (something like a call recorder).
Implementing an HTTP(S) proxy server on the iPhone and capturing the requests forwarded to Siri's server.
But we have little experience with either option. Does anyone have an idea how to capture the audio from Siri on the phone itself, rather than through an external server?
Update (Feb 12 2014)
Check this.
I found a class named "AFSpeechRecorder" that was used by Siri, so I guess it must be related to the audio data. Unluckily, this class was removed in iOS 7, and I can't figure out what replaced it.
