We are looking for a way to develop an automated system for our ALSA/Linux-based product to test our codec card automatically. For this we envision the following steps:
1. Play an existing, known WAV file.
2. Record sound from the MIC.
3. Since the speaker and MIC are placed close to each other, if both are working fine, the recorded file should contain some version of the played sound.
4. Automatically analyse whether the recorded file contains some version of the played sound.
Question: Is there an API that can help analyse the recording to find the sound level or other parameters, so as to detect that both the MIC and the speaker in proximity are working fine?
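The play/record loop itself does not need a special API; the ALSA command-line tools can drive it, and a few lines of Python can do a first-pass level check. A minimal sketch, assuming aplay/arecord are installed, the default ALSA device works, and the file names are placeholders:

```python
# Loopback smoke test: play a known file through the speaker while
# recording from the MIC, then check that the recording is not silence.
# Sketch only; assumes ALSA's aplay/arecord and 16-bit mono capture.
import array
import subprocess
import wave

REFERENCE_WAV = "reference.wav"   # known test file (placeholder)
RECORDED_WAV = "recorded.wav"

# Steps 1+2: start recording, then play the known file.
rec = subprocess.Popen(["arecord", "-f", "S16_LE", "-r", "44100",
                        "-c", "1", "-d", "5", RECORDED_WAV])
subprocess.run(["aplay", REFERENCE_WAV], check=True)
rec.wait()

# Steps 3+4: crude analysis. A dead speaker or MIC yields near-silence,
# so compare the RMS level of the recording against a silence threshold.
with wave.open(RECORDED_WAV, "rb") as w:
    samples = array.array("h", w.readframes(w.getnframes()))
rms = (sum(s * s for s in samples) / max(len(samples), 1)) ** 0.5

SILENCE_THRESHOLD = 500           # tune empirically for your hardware
print("RMS level:", rms)
print("PASS" if rms > SILENCE_THRESHOLD else "FAIL: nothing captured")
```

A level check only proves that something was captured; to verify that the recording actually contains the played file rather than random noise, cross-correlate it against the reference (see the cross-correlation sketch further below).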
Is there a scenario where we can use the Google Resonance Audio SDK not with headphones, but with real speakers (e.g. mounted in a 360° circle setting)?
Or do the algorithms simply not work for real speaker outputs?
Thank you!
Currently, Resonance Audio is optimized for headphone playback. For example, HRTF processing is done in the Ambisonics domain, without generating (virtual) speaker signals, because this is a much more efficient way of generating binaural output.
However, in the Resonance Audio open source release, the Ambisonic Codec class can readily be used to decode Ambisonics to any arbitrary loudspeaker array. To use that with the rest of the Resonance Audio system, however, it would be necessary to modify/extend the audio processing graph by adding a new decoder node.
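For illustration, the math inside such a decoder node is small. Below is a generic first-order Ambisonic decode to an arbitrary horizontal speaker ring using a pseudo-inverse decode matrix; this is a NumPy sketch of the textbook technique, not the actual Resonance Audio AmbisonicCodec API, and the 6-speaker layout is a placeholder:

```python
# Generic first-order Ambisonics (ACN order: W, Y, Z, X; SN3D) decoded to
# an arbitrary horizontal speaker ring via a pseudo-inverse decode matrix.
# Textbook sketch only; NOT the Resonance Audio AmbisonicCodec API.
import numpy as np

def encoding_matrix(azimuths_deg):
    """First-order encoding coefficients for horizontal directions."""
    az = np.radians(azimuths_deg)
    return np.stack([np.ones_like(az),   # ACN 0: W
                     np.sin(az),         # ACN 1: Y
                     np.zeros_like(az),  # ACN 2: Z (zero at 0 elevation)
                     np.cos(az)])        # ACN 3: X

speaker_az = np.arange(0, 360, 60)       # e.g. 6 speakers in a 360° circle
E = encoding_matrix(speaker_az)          # shape (4, n_speakers)
D = np.linalg.pinv(E)                    # decode matrix, (n_speakers, 4)

# b_format: (n_samples, 4) Ambisonic input -> (n_samples, n_speakers) feeds
b_format = np.zeros((1024, 4))           # placeholder audio buffer
speaker_feeds = b_format @ D.T
```

Hooking something like this into Resonance Audio would still require the new decoder node mentioned above.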
Please feel free to add a feature request and, depending on popularity, we might consider adding that in the future!
I am working on a film analysis program which retrieves data in real time from a movie that is playing in the same sketch. For analysing the sound I tried the minim library, but I can't figure out how to get the audio signal from the movie. All I could do was access an audio file that I loaded into the sketch manually, or the line-in through the mic.
Thanks a lot!
Although GStreamer (used by the processing-video library) has access to audio, the processing-video library itself doesn't expose it at the moment.
For now you will need a workaround:
Extract the audio from your movie and load it straight into minim; you can trigger audio playback at the same time as movie playback if you need to (see the sketch after this list).
Or use a tool that exposes the system audio output as an input (minim's getLineIn()). On OS X you can use Soundflower; another option is JACK and its patch interface.
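For the first workaround, extracting the audio track is straightforward, for example with a small Python wrapper around ffmpeg (assuming ffmpeg is on your PATH; the file names are placeholders):

```python
# Extract the audio track of a movie into a WAV that minim can load.
# Assumes ffmpeg is installed and on PATH; file names are placeholders.
import subprocess

subprocess.run(["ffmpeg",
                "-i", "movie.mp4",        # the movie played in the sketch
                "-vn",                    # drop the video stream
                "-acodec", "pcm_s16le",   # plain 16-bit PCM audio
                "movie_audio.wav"],
               check=True)
```

In the sketch you would then start the movie's playback and minim's playback of movie_audio.wav at the same time so the two stay in sync.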
I'm working on a WinRT project in which I'm playing multiple video files at the same time. I have 3 audio devices attached to the machine, each of which will be used to render the audio of one of the playing video files. The maximum number of videos that can be played simultaneously is 3, so each audio device is used to render audio from its corresponding video file, i.e. audio device 1 plays video 1 and so on. That's the requirement I have.
So far, I have come across two approaches. First, use Dolby or any other API to route audio channels to the corresponding devices, i.e. the left channel is rendered to device 1, the middle/center to device 2, and the right to device 3. I've tried the Dolby Audio sample app for Windows 10, but they've done the channelling in the embedded video, not in code, and I couldn't find documentation for the Windows 10 Dolby API. So, for this approach: can I render audio in the form of a channel to a particular audio device? I don't want to merge audio in any way.
Second, use 3 sound cards and attach an audio device to each one, choosing the device to play audio on by providing a device ID. I've tried this approach with XAudio2 by calling the createMasteringVoice() method with the device ID I want. That worked for a single audio file; however, I want to render the audio of multiple videos that are being played.
Neither approach has fully solved the core requirement yet. Considering the scenario, what is the best approach to fulfil the requirement?
I would say you can go with XAudio2, as you mentioned in the second approach. Since you can pass a device ID to createMasteringVoice(), you can create multiple instances of UniversalAudioPlayer and pass a different ID to each one. This way multiple sounds can be played concurrently. Take a look at the function definition and community additions here.
I am starting a project to test audio performance on Linux.
What I need to do is play audio on our web system and check the audio quality (or just check that there is audio output) on Linux.
I am going to record the audio on Linux with ffmpeg. Is there a better choice?
I don't know how to automatically check that what I recorded is what I played, nor how to judge the quality of the recorded audio.
I think what you need is PESQ (Perceptual Evaluation of Speech Quality). However, I have not found an implementation that is open source/free and works out of the box.
You can download the recommendation from here:
http://www.itu.int/rec/T-REC-P.862-200511-I!Amd2/en
Basically, this is the reference implementation of PESQ.
Sevana has an audio quality analyser, AQuA, which is not an ITU standard:
http://www.sevana.fi/aqua_wiki.php
It is available for Linux, but I think you have to pay for it.
You can also check the similarity of two audio files with cross-correlation; please refer to:
https://dsp.stackexchange.com/questions/736/how-do-i-implement-cross-correlation-to-prove-two-audio-files-are-similar
I just learned that a lot of people are using Matlab or Octave to generate the necessary data, for example:
http://bagustris.blogspot.ie/2011/11/calculate-time-lag-from-cross.html
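If you would rather not depend on Matlab/Octave, the same cross-correlation check is only a few lines with NumPy/SciPy. A sketch, assuming both files are mono WAVs at the same sample rate; the file names are placeholders and the 0.1 threshold is a guess to tune on real recordings:

```python
# Check that recorded.wav contains the signal from reference.wav by
# cross-correlation: a clear peak means the played sound is present,
# and the peak position gives the time lag. Sketch only; assumes
# mono WAV files at the same sample rate (resample first otherwise).
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

rate_ref, ref = wavfile.read("reference.wav")
rate_rec, rec = wavfile.read("recorded.wav")
assert rate_ref == rate_rec, "resample first if the rates differ"

ref = ref.astype(np.float64)
rec = rec.astype(np.float64)

xcorr = correlate(rec, ref, mode="full")
lag = int(np.argmax(np.abs(xcorr))) - (len(ref) - 1)

# Normalise the peak (Cauchy-Schwarz bounds this by 1) so the score is
# comparable across recordings of different levels.
score = np.max(np.abs(xcorr)) / (np.linalg.norm(ref) * np.linalg.norm(rec))
print(f"lag: {lag} samples, normalised peak: {score:.3f}")
print("MATCH" if score > 0.1 else "NO MATCH")  # threshold: tune empirically
```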
I would like to hook up several piezos to an Arduino so that, when they are activated, each piezo plays/triggers a separate tone. For instance, with five piezos connected to the Arduino, applying pressure to each one plays a separate note, either through a software interface on a computer or from the piezos themselves. Basically an Arduino synth using piezos as keys.
I'm just not quite sure how to go about doing this. I'm sure it's possible, I just need a push in the right direction. Any ideas? Thanks!
The practical difficulty of using one device as both an input sensor and an output device is that, once it is activated to output a sound, you would have to stop using it as an input for some fixed time. Something more responsive would be to use separate sensors for the keys and just one speaker for all sounds. The good folks who came up with the Arduino tutorials have a 3-key sensor player example here:
http://arduino.cc/en/Tutorial/Tone3
and another example of using a piezo as a sound sensor here:
http://www.arduino.cc/en/Tutorial/KnockSensor
I can help you with the software interface: you can use your smartphone to play sounds for each piezo sensor.
See this app: https://play.google.com/store/apps/details?id=ram.mere.DoDuino
You can connect the Arduino to this app using serial (Android 3.1 and higher) or Bluetooth.
To use the Sound Action, follow this tutorial:
https://www.youtube.com/watch?v=RQhx6qBElVk
So you specify which sound should be played on your Android phone, and when you detect which piezo was hit you send data to the Android, and the specified sound will be played.
For example, if the Android app receives #p1; it will play the sound associated with piezo one, and when you send #s1; it will stop playing that sound, etc. (A host-side sketch of this framing is shown below.)
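If it helps, the framing is easy to prototype from a PC before the Arduino side is wired up. A sketch with Python and pyserial, where the port name is a placeholder and only the #p1;/#s1; commands come from the convention above:

```python
# Host-side demo of the #p<n>;/#s<n>; framing described above, using
# pyserial. In the real setup the Arduino sends these frames; here a PC
# stands in so the protocol can be tested. The port name is a placeholder.
import time
import serial

port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

def play(piezo_id):
    port.write(f"#p{piezo_id};".encode())   # app plays this piezo's sound

def stop(piezo_id):
    port.write(f"#s{piezo_id};".encode())   # app stops that sound

play(1)            # piezo 1 pressed
time.sleep(0.5)
stop(1)            # piezo 1 released
port.close()
```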
Hope this helps someone :D