Audio signal comparison from Radar - audio

I am working on a student project. We have a radar that gives an audio indication through headphones of the type of target. Target types are e.g. car/truck/man. The radar distinguishes between these targets based on their Doppler variation, down-converts it into the audible range, and the operator hears it through the headphones. The system provides sample audio files corresponding to each type of target (man/car/truck) to train the operator to recognise what he is hearing when the live signal is fed in and accordingly decide what the target is.
I intend for software to do the job of this operator.
I want to compare the live audio signal from the radar with 7 different test audio files and have the software tell me which file matches the input.
Kindly educate me: can audio fingerprinting software do this job?

What you're trying to do can be implemented in GNU Radio in a lot of ways.
You could, for example, take the audio signal as input to an audio source, connect that to a set of frequency-xlating FIR filters, which you'd design using the gr_filter_design tool; you would then estimate the power of the (potentially decimated) signal in these bands by converting the complex samples to their power (Complex to Mag^2), further low-pass filter and decimate, and then select the band with the highest energy. All of this can be done in a nice graphical way in the GNU Radio Companion (gnuradio-companion), which will then generate Python code that sets up the signal flow graph based on the C++ GNU Radio framework.
I recommend you read the Guided Tutorials and see where you get from there.
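If you'd like to prototype the band-energy idea outside GNU Radio first, here is a minimal plain-NumPy sketch of the same approach; the sample rate, band edges and the random "live" frame are placeholders you would replace with values derived from your own reference recordings:

# Rough sketch of the band-energy classification idea in plain NumPy.
# The sample rate, band edges and the random test frame are placeholders;
# real band edges would come from the Doppler signatures of the sample files.
import numpy as np

def band_energies(frame, fs, bands):
    """Return the energy of `frame` in each (low, high) frequency band in Hz."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

fs = 8000                                      # down-converted audio sample rate
bands = [(20, 200), (200, 800), (800, 2000)]   # hypothetical man/car/truck bands
frame = np.random.randn(4096)                  # stand-in for one block of live audio
energies = band_energies(frame, fs, bands)
print("most energetic band:", bands[int(np.argmax(energies))])

The structure maps directly onto the flow graph described above: one xlating filter per band, a power estimate per band, and a selection of the band with the highest energy.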

Related

Fieldwork audio recording for acoustic analysis: stereo or mono? appropriate gain?

I work in the field of phonetics and often need to record human speech for acoustic analysis. I have two questions that I couldn't find answers to:
If I record in stereo, I need to convert to mono later on to proceed with annotation, so in principle a mono signal is good enough. Are there reasons that stereo should be used (e.g. would the signal be better)?
Also, we were warned that the gain level should be kept low so that the recording level doesn't exceed the maximum, which leads to signal cutoff. However, I was also criticised when a recording showed too low an amplitude (even though it was still very clear), because that leads to a low SNR. How do people choose an appropriate gain level?
As the act of recording is involved, the Sound Design forum might be your best bet.
I can't think of anything that might be gained, in terms of frequency analysis, by having a stereo signal. Stereo is more about locating the source of a sound in 3D space. Does the source of the sound emit different frequency profiles in different directions? Does the environment filter the sound differently over the course of the two paths to the stereo inputs? If the answer is "not significantly", then mono should be fine.
Choosing an appropriate gain level is mostly a matter of knowing your equipment. Ideally, your recording setup will provide feedback (usually a visual meter of some sort) that shows the signal strength. The "best" would be (theoretically) the loudest level that does not distort. So you have to know at what level distortion happens on all the elements of the recording chain.
There can be some fudging on this, given that the loudest peak on a recorded segment may be an outlier.
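If your recorder can export the take, a short Python script (assuming the soundfile package; the filename and the near-full-scale threshold are placeholders) can double-check the level after the fact, both the peak in dBFS and whether any samples sit at full scale:

# Sketch: downmix a take to mono, report its peak level in dBFS, and flag
# samples at or near full scale (likely clipping).
# "take.wav" and the 0.999 threshold are placeholders.
import numpy as np
import soundfile as sf

audio, fs = sf.read("take.wav")                  # float samples in [-1.0, 1.0]
mono = audio.mean(axis=1) if audio.ndim == 2 else audio
peak = np.max(np.abs(mono))
peak_dbfs = 20 * np.log10(peak) if peak > 0 else float("-inf")
near_clip = int(np.sum(np.abs(audio) >= 0.999))  # check the raw channels, not the downmix
print(f"peak: {peak_dbfs:.1f} dBFS, samples at/near full scale: {near_clip}")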

Audio processing in LabVIEW (is stream processing possible?)

I am quite new to LabVIEW and NI devices.
I am working on an Active Noise Cancellation project, where I will be using two microphone inputs and one loudspeaker as output. I have NI myRIO 1900 and cDAQ 9178 devices in our university lab. I need to do real-time audio processing: I will collect data from one microphone and process it with the filtered-x LMS (FxLMS) algorithm to produce anti-noise from the loudspeaker, while the other microphone serves as the error microphone. I want to process the data quickly enough (within 1.7 ms) to get a real-time response at a 44100 Hz sample rate. My questions are: is this possible with LabVIEW, is stream processing possible in LabVIEW, and can I achieve audio latencies as small as mentioned above?
I have searched for audio processing objects in the LabVIEW help. I can only find 'Acquire Sound' and 'Play Waveform', and surprisingly the 'Acquire Sound' configuration only works for a duration of at least 1 second, not less; I can't enter the time in milliseconds. (I am still facing problems installing the myRIO, so I have used a host-computer VI to do this.)
Please help! Thank you.
The thing you should be looking into is the FPGA part of the myRIO. You’re never going to be able to get 1.7ms response time via the host computer. The FPGA can access the Analogue inputs and outputs, so if you can get your algorithm to compile onto the FPGA then it should work.
Yes, it is possible with LabVIEW, insofar as any algorithm you want to code up can be executed by LabVIEW. If you're asking whether there is a library that already exists to do the filtering you want, you may want to explore the NI Sound & Vibration toolkit, which is sold separately from LabVIEW, or explore third-party libraries.
The raw waveform mathematics abilities that come with LabVIEW are fairly extensive. You should be able to code whatever transforms you want if you know the base math.
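LabVIEW itself is graphical, but for reference the core of the filtered-x LMS update is only a few lines of math. Below is a minimal offline NumPy sketch with synthetic signals; the primary path, secondary path and step size are made-up placeholders, and a real system would use the two microphone signals and the loudspeaker output instead:

# Offline FxLMS sketch with synthetic signals (all paths and constants are
# placeholders). x = reference mic, d = noise at the error mic, S = secondary
# path from loudspeaker to error mic, S_hat = its (here perfect) estimate.
import numpy as np

rng = np.random.default_rng(0)
N, L = 20000, 64                       # samples to simulate, adaptive filter length
x = rng.standard_normal(N)             # reference-microphone signal (the noise)
P = rng.standard_normal(32) * 0.1      # unknown primary path (placeholder)
S = np.array([0.6, 0.3, 0.1])          # secondary path (placeholder)
S_hat = S.copy()                       # assume a perfect secondary-path estimate
d = np.convolve(x, P)[:N]              # noise as heard at the error microphone

w = np.zeros(L)                        # adaptive anti-noise filter
mu = 1e-3                              # step size (placeholder)
x_buf = np.zeros(L)                    # recent reference samples
fx_buf = np.zeros(L)                   # recent filtered-reference samples
y_buf = np.zeros(len(S))               # recent anti-noise samples
e = np.zeros(N)

for n in range(N):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = x[n]
    y = w @ x_buf                      # anti-noise sample sent to the loudspeaker
    y_buf = np.roll(y_buf, 1)
    y_buf[0] = y
    e[n] = d[n] - S @ y_buf            # residual measured at the error microphone
    fx = S_hat @ x_buf[:len(S_hat)]    # reference filtered by the path estimate
    fx_buf = np.roll(fx_buf, 1)
    fx_buf[0] = fx
    w += mu * e[n] * fx_buf            # FxLMS weight update

print("residual power, first vs last 1000 samples:",
      np.mean(e[:1000] ** 2), np.mean(e[-1000:] ** 2))

On the myRIO you would implement this per-sample loop on the FPGA, as suggested above, to stay within the 1.7 ms budget.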

How can I look for certain sounds in a live sound input?

I've combed Stack Overflow and the web for questions on whistle detection and the like, and many people have explained, as much as they could, how they go about detecting such sounds:
capturing sound for analysis and visualizing frequences in android
analyzing whistle sound for pitch note
But what I don't get is how does FFT help you to detect certain sounds in a given sample audio data?
Here's what I understand so far from some stuff I found here and there.
-The sine wave is more or less the building block of ALL signals, musical or not
-Three parameters - FREQUENCY, AMPLITUDE, and INITIAL PHASE - characterize every steady sine wave completely.
-They make each and any kind of wave unique.
-Fourier transform can be used to inspect what kinds of sine waves there are in a signal
SOURCE: Audio signal processing basics
The audio data that the computer receives from the mic or other input source for live processing is an array of amplitude samples, captured (or stored) at a particular sample rate.
So how does one go from that to detecting whistles and claps?
And complex things such as say, a short period of whistling to a particular song?
My theory of detection is that we examine our whistles on a spectrogram and record their particular frequency and amplitude characteristics. Then, if those particular characteristics are repeated in the input, we've detected a whistle.
Am I right or wrong?
This sound processing stuff is a little complicated.
Forgot to mention this - I'm using Python. Java is also okay, since most of the example code I found was for Android, which is in Java, and I can work in Java too. Any mention of libraries or APIs would be helpful too.
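To make the spectrogram idea concrete, here is a minimal NumPy sketch that flags a frame as whistle-like when most of its energy sits in one narrow spectral peak; the frequency range and the 0.6 energy-ratio threshold are arbitrary placeholders to tune against your own recordings:

# Sketch: call a frame "whistle-like" if most of its energy is concentrated
# in a single narrow peak inside a plausible whistle frequency range.
# fmin/fmax and ratio_threshold are placeholders, not measured values.
import numpy as np

def looks_like_whistle(frame, fs, fmin=500.0, fmax=5000.0, ratio_threshold=0.6):
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    peak_bin = int(np.argmax(spectrum))
    lo, hi = max(peak_bin - 2, 0), peak_bin + 3      # a few bins around the peak
    peak_ratio = spectrum[lo:hi].sum() / spectrum.sum()
    in_range = fmin <= freqs[peak_bin] <= fmax
    return in_range and peak_ratio > ratio_threshold

fs = 44100
t = np.arange(4096) / fs
whistle = np.sin(2 * np.pi * 1800 * t)               # synthetic 1.8 kHz "whistle"
noise = np.random.randn(4096)
print(looks_like_whistle(whistle, fs), looks_like_whistle(noise, fs))  # True False

A clap, by contrast, is a short broadband burst, so for claps you would look for a sudden jump in total energy spread across many frequencies rather than a single narrow peak.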

Audio content analysis for online audiovisual data

I want to work on a project where I have to segment and classify online audiovisual data based on its audio content, i.e. different parts of the audiovisual data will be segmented and classified as silence, music, speech, speech + background music, etc.
I am aware that I have to obtain the audio part from the audiovisual data and extract features like zero crossing, spectral peaks, etc. and find out segment boundaries in order to segment audio data.
But I'm lost at the very beginning.
I do not know how to start off with the project. The output of the software should be segments of audiovisual data under different categories like silence, speech, music, etc.
It will be really helpful if someone lets me know:
Which programming language is convenient for this purpose?
What steps should I follow in order to develop this software?
I have no background in digital signal processing. It'll be really helpful if I get some guidance.
I'd suggest looking into a multimedia framework such as GStreamer. It is cross-platform, but easiest to get started with on Linux, where it originated. It already comes with all kinds of plugins to receive, demux and decode audio and video. It also has a couple of analyzers (such as level and spectrum analyzers for audio, as well as voice activity detection). Those could be a good starting point for your experiments. GStreamer itself is written in C, but applications can use the language bindings for Python, Perl, C#, C++, Java, ...
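If you end up doing the analysis part in Python (for example on audio that GStreamer has decoded), a minimal sketch of the first step, computing short-time energy and zero-crossing rate per frame, could look like the following; the frame length, threshold and labels are placeholders, not a finished classifier:

# Sketch: frame-wise short-time energy and zero-crossing rate (ZCR), the kind
# of low-level features you would feed into segmentation/classification.
# The frame length and the silence threshold are placeholders.
import numpy as np

def frame_features(signal, fs, frame_ms=25):
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    feats = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        energy = float(np.mean(frame ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)
        feats.append((energy, zcr))
    return feats

def crude_label(energy, silence_energy=1e-4):
    # Energy thresholding alone only separates silence from non-silence;
    # speech vs music needs more features (ZCR, spectral peaks, ...).
    return "silence" if energy < silence_energy else "non-silence"

fs = 16000
signal = np.random.randn(fs)            # stand-in for one second of decoded audio
for energy, zcr in frame_features(signal, fs)[:5]:
    print(crude_label(energy), round(energy, 4), round(zcr, 3))

Real systems smooth such features over time and combine several of them before deciding on segment boundaries and categories.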

Real Time Audio Analysis In Linux

I'm wondering what is the recommended audio library to use?
I'm attempting to make a small program that will aid in tuning instruments. (Piano, Guitar, etc.). I've read about ALSA & Marsyas audio libraries.
I'm thinking the idea is to sample data from the microphone, do analysis on chunks of 5-10 ms (from what I've read), then perform an FFT to figure out which frequency contains the largest peak.
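For concreteness, the chunk-then-FFT-peak idea can be sketched in a few lines of NumPy; the chunk size, sample rate and synthetic input below are placeholders, and a real tuner would read chunks from the microphone through something like JACK or PortAudio:

# Sketch of the chunk-then-FFT idea: take one chunk of samples and report the
# frequency of the largest spectral peak. The 4096-sample chunk and the
# synthetic 440 Hz input are placeholders for real microphone data.
import numpy as np

def peak_frequency(chunk, fs):
    windowed = chunk * np.hanning(len(chunk))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(chunk), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 44100
chunk = np.sin(2 * np.pi * 440.0 * np.arange(4096) / fs)   # fake A4 input
print(peak_frequency(chunk, fs))           # ~440 Hz, to within one FFT bin

Note that 5-10 ms chunks at 44.1 kHz give an FFT bin spacing of roughly 100-200 Hz, far too coarse for tuning, so in practice you need longer chunks, interpolation around the peak, or a proper fundamental-frequency estimator such as the YIN algorithm mentioned below.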
This guide should help. Don't use ALSA for your application. Use a higher level API. If you decide you'd like to use JACK, http://jackaudio.org/applications has three instrument tuners you can use as example code.
Marsyas would be a great choice for doing this, it's built for exactly this kind of task.
For tuning an instrument, what you need is an algorithm that estimates the fundamental frequency (F0) of a sound. There are a number of algorithms to do this; one of the newest and best is the YIN algorithm, which was developed by Alain de Cheveigne. I recently added the YIN algorithm to Marsyas, and using it is dead simple.
Here's the basic code that you would use in Marsyas:
MarSystemManager mng;

// A Series network: each MarSystem feeds its output to the next one.
MarSystem* net = mng.create("Series", "series");

// Read audio, produce overlapping chunks, and estimate F0 with YIN.
net->addMarSystem(mng.create("SoundFileSource", "src"));
net->addMarSystem(mng.create("ShiftInput", "si"));
net->addMarSystem(mng.create("AubioYin", "yin"));

// Tell the SoundFileSource which file to read.
net->updctrl("SoundFileSource/src/mrs_string/filename", inAudioFileName);

while (net->getctrl("SoundFileSource/src/mrs_bool/notEmpty")->to<mrs_bool>()) {
  net->tick();
  // The (0,0) element of the processed data is the current F0 estimate.
  realvec r = net->getctrl("mrs_realvec/processedData")->to<mrs_realvec>();
  cout << r(0,0) << endl;
}
This code first creates a Series network that we add components to. In a Series, each component receives the output of the previous MarSystem. We then add a SoundFileSource, into which you can feed a .wav or .mp3 file. Next we add the ShiftInput object, which outputs overlapping chunks of audio; these are fed into the AubioYin object, which estimates the fundamental frequency of each chunk. We then tell the SoundFileSource that we want to read the file inAudioFileName. The while loop runs until the SoundFileSource runs out of data. Inside the loop, we take the data that the network has processed and output the (0,0) element, which is the fundamental frequency estimate.
This is even easier when you use the Python bindings for Marsyas.
http://clam-project.org/
CLAM is a full-fledged software framework for research and application development in the Audio and Music Domain. It offers a conceptual model as well as tools for the analysis, synthesis and processing of audio signals.
They have a great API, nice GUI and a few finished apps where you can see everything.
ALSA is sort of the default standard for Linux now, by virtue of its drivers being included in the kernel and OSS being deprecated. However, there are alternatives to the ALSA userspace API, like JACK, which seems to be aimed at low-latency, professional-type applications. Its API seems nicer; although I've not used it, my brief exposure to the ALSA API would make me think that almost anything would be better.
Audacity includes a frequency plot feature and has built-in FFT filters.
