Simple audio filter-bank

I'm new to audio filters, so please excuse me if I'm saying something wrong.
I'd like to write code that can split audio stored as PCM samples into two or three frequency bands, do some manipulation on them (like modifying their levels) or analysis, and then reconstruct audio samples from the output.
As far as I have read on the internet, for this task I could use FFT/IFFT and manipulate the complex form, or use a time-domain filter bank such as the one used by the MP2 audio encoding format. Maybe a filter bank is the better choice; at least I read somewhere it can be more CPU-friendly in real-time streaming environments. However, I'm having a hard time understanding the mathematical stuff behind a filter bank. I'm trying to find some source code (preferably in Java or C/C++) on this topic, so far with no success.
Can somebody give me tips or links that would get me closer to an example filter bank?

Using an FFT to split an audio signal into a few bands is overkill.
What you need is one or two Linkwitz-Riley filters. These filters split a signal into a high- and a low-frequency part.
A nice property of this filter is that if you add the low- and high-frequency parts together you get almost the original signal back. There will be a little bit of phase shift, but the ear will not be able to hear it.
If you need more than two bands you can chain the filters. For example, if you want to separate the signal at 100 and 2000 Hz, it would look in pseudo-code somewhat like this:
low = linkwitz-riley-low (100, input-samples)
temp = linkwitz-riley-high (100, input-samples)
mids = linkwitz-riley-low (2000, temp)
highs = linkwitz-riley-high (2000, temp)
and so on...
After splitting the signal you can, for example, amplify the three output bands (low, mids and highs) and later add them back together to get your processed signal.
The filter sections themselves can be implemented as IIR filters. A Google search for "Linkwitz-Riley digital IIR" should give lots of good hits.
http://en.wikipedia.org/wiki/Linkwitz-Riley_filter
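If it helps, here is a minimal sketch of that idea in Python with NumPy/SciPy (you asked for Java or C/C++, but the structure ports directly). It uses the standard construction of a 4th-order Linkwitz-Riley section as two identical 2nd-order Butterworth filters in series; the helper name lr4_split is mine, not a library function.

import numpy as np
from scipy.signal import butter, sosfilt

def lr4_split(x, fc, fs):
    """Split x into (low, high) bands at crossover fc using 4th-order
    Linkwitz-Riley filters (two cascaded 2nd-order Butterworths)."""
    sos_lo = butter(2, fc, btype='lowpass', fs=fs, output='sos')
    sos_hi = butter(2, fc, btype='highpass', fs=fs, output='sos')
    low = sosfilt(sos_lo, sosfilt(sos_lo, x))    # low-pass applied twice
    high = sosfilt(sos_hi, sosfilt(sos_hi, x))   # high-pass applied twice
    return low, high

# Three bands as in the pseudo-code above: split at 100 Hz and 2000 Hz.
fs = 44100
x = np.random.randn(fs)                          # stand-in for your PCM samples
low, temp = lr4_split(x, 100.0, fs)
mids, highs = lr4_split(temp, 2000.0, fs)
processed = 1.0*low + 2.0*mids + 0.5*highs       # per-band gains, then recombine

Summing low + high from a single split should give back the original signal apart from the phase shift mentioned above, which is an easy sanity check for your own implementation.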

You should look up wavelets, especially Daubechies wavelets. They will do the trick: they're FIR filters and they're really short.
Update
Downvoting with no explanation isn't cool. Additionally, I'm right. Wavelets are filter banks and their job is to do precisely what is described in the question. IMHO, that is. I've done it many times myself.
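For what it's worth, a small sketch of this approach using the PyWavelets package in Python (just one way to try it): a two-level Daubechies DWT gives three bands whose coefficients you can scale before reconstructing.

import numpy as np
import pywt  # PyWavelets

fs = 44100
x = np.random.randn(fs)                    # stand-in for your PCM samples

# Two-level DWT with a Daubechies-4 wavelet: one approximation band plus two
# detail bands, covering roughly [0..fs/8], [fs/8..fs/4] and [fs/4..fs/2].
cA2, cD2, cD1 = pywt.wavedec(x, 'db4', level=2)

cA2 = cA2 * 2.0                            # boost the lowest band
cD1 = cD1 * 0.5                            # attenuate the top band

y = pywt.waverec([cA2, cD2, cD1], 'db4')   # reconstruct the processed signal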

There's a lot of filter source code to be found here

How to create and play a vector from samples of tones?

I'm reading through Mark Owen's "Practical Signal Processing" and an exercise in the second chapter says to "Create a vector containing samples of several seconds of a 400Hz sine wave at 32kHz."(question 2.3)
Since the book doesn't endorse any one technology, I'm trying to do this in Supercollider with:
"Pbind(\freq, Pseq([400,400,400,400,400,400,400,400,400,400,]), \dur, 0.15;).play;"
but I have two problems: how do I remove the gap between the notes during pattern playback and how do I generate the tones in the pattern at a particular sample rate?
Thank you!
It sounds like you're working at the "wrong" level. Using Pbind is very high-level, specifying patterns of musical events, whereas the author presumably wants you to think about the maths involved in generating individual audio data samples.
Since it's an exercise for the reader I won't give a complete answer, but: SuperCollider has a sin() operator as do many other languages. You can generate a list of values and then apply sin() e.g. via
sin([0,1,2,3,4,5])
or
sin((0..100))
Those are simple examples; they don't get the frequency or sample rate or duration you specify.
The question doesn't seem to ask you to play back the result, but if you wanted to do that you could do it by loading your calculated audio into a Buffer:
x = sin((0..1000));
b = Buffer.sendCollection(s, x);
b.play

Determining the 'amount' of speaking in a video

I'm working on a project to transcribe lecture videos. We are currently just using humans to do the transcriptions as we believe it is easier to transcribe than editing ASR, especially for technical subjects (not the point of my question, though I'd love any input on this). From our experiences we've found that after about 10 minutes of transcribing we get anxious or lose focus. Thus we have been splitting videos into ~5-7 minute chunks based on logical breaks in the lecture content. However, we've found that the start of a lecture (at least for the class we are piloting) often has more talking than later on, which often has time where the students are talking among themselves about a question. I was thinking that we could do signal processing to determine the rough amount of speaking throughout the video. The idea is to break the video into segments containing roughly the same amount of lecturing, as opposed to segments that are the same length.
I've done a little research into this, but everything seems to be a bit overkill for what I'm trying to do. The videos for this course, though we'd like to generalize, contain basically just the lecturer with some occasional feedback and distant student voices. So can I just simply look at the waveform and roughly use the spots containing audio over some threshold to determine when the lecturer is speaking? Or is an ML approach really necessary to quantify the lecturer's speaking?
Hope that made sense, I can clarify anything if necessary.
Appreciate the help as I have no experience with signal processing.
Although there are machine learning methods that are very good at discriminating voice from other sounds, you don't seem to require that sort of accuracy for your application. A simple level-based method similar to the one you proposed should be good enough to get you an estimate of speaking time.
Level-Based Sound Detection
Goal
Given an audio sample, discriminate the portions with a high amount of sounds from the portions that consist of background noise. This can then be easily used to estimate the amount of speech in a sound file.
Overview of Method
Rather than looking at raw levels in the signal, we will first convert it to a sliding-window RMS. This gives a simple measure of how much audio energy is at any given point of the audio sample. By analyzing the RMS signal we can automatically determine a threshold for distinguishing between background noise and speech.
Worked Example
I will be working this example in MATLAB because it makes the math easy to do and lets me create illustrations.
Source Audio
I am using President Kennedy's "We choose to go to the moon" speech. I'm using the audio file from Wikipedia, and just extracting the left channel.
% Load the speech audio and keep just the left channel.
imported = importdata('moon.ogg');
audio = imported.data(:,1);

% Plot the raw waveform against time in seconds.
plot((1:length(audio))/imported.fs, audio);
title('Raw Audio Signal');
xlabel('Time (s)');
Generating RMS Signal
Although you could technically implement an overlapping per-sample sliding window, it is simpler to avoid the overlap and you'll get very similar results. I broke the signal into one-second chunks and stored the RMS values in a new array, with one entry per second of audio.
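In other words, with N = imported.fs samples per window, each entry is the square root of the mean squared sample value over that second:

\mathrm{RMS}_k = \sqrt{\frac{1}{N}\sum_{n=0}^{N-1} x[kN + n]^2}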
% One RMS value per non-overlapping one-second window.
audioRMS = [];
for i = 1:imported.fs:(length(audio)-imported.fs)
    audioRMS = [audioRMS; rms(audio(i:(i+imported.fs-1)))];
end
plot(1:length(audioRMS), audioRMS);
title('Audio RMS Signal');
xlabel('Time (s)');
This results in a much smaller array, full of positive values representing the amount of audio energy or "loudness" per second.
Picking a Threshold
The next step is to determine how "loud" is "loud enough." You can get an idea of the distribution of noise levels with a histogram:
histogram(audioRMS, 50);
I suspect that the lower shelf is the general background noise of the crowd and recording environment. The next shelf is probably the quieter applause. The rest is speech and loud crowd reactions, which will be indistinguishable to this method. For your application, the loudest areas will almost always be speech.
The minimum value in my RMS signal is 0.0233, and as a rough guess I'm going to use three times that value as my criterion for noise. That seems like it will cut off the whole lower shelf and most of the next one.
A simple check against that threshold gives a count of 972 seconds of speech:
>> sum(audioRMS > 3*min(audioRMS))
ans =
972
To test how well it actually worked, we can listen to the audio that was eliminated.
% Logical mask of "loud" seconds, then gather the audio from the quiet ones.
speech = audioRMS > 3*min(audioRMS);
clippedAudio = [];
for i = 1:length(speech)
    if(~speech(i))
        clippedAudio = [clippedAudio; audio(((i-1)*imported.fs+1):i*imported.fs)];
    end
end
>> sound(clippedAudio, imported.fs);
Listening to this gives a bit over a minute of background crowd noise and sub-second clips of portions of words, due to the one-second windows used in the analysis. No significant lengths of speech are clipped. Doing the opposite gives audio that is mostly the speech, with clicks heard as portions are skipped. The louder applause breaks also make it through.
This means that for this speech, the threshold of three times the minimum RMS worked very well. You'll probably need to fiddle with that ratio to get good automatic results for your recording environment, but it seems like a good place to start.

Comparing audio recordings

I have 5 recorded wav files. I want to compare the new incoming recordings with these files and determine which one it resembles most.
In the final product I need to implement it in C++ on Linux, but now I am experimenting in Matlab. I can see FFT plots very easily. But I don't know how to compare them.
How can I compute the similarity of two FFT plots?
Edit: There is only speech in the recordings. Actually, I am trying to identify the responses of the answering machines of a few telecom companies. It's enough to distinguish two messages such as "this person cannot be reached at the moment" and "this number is not used anymore".
This depends a lot on your definition of "resembles most". Depending on your use case this can mean a lot of things. If you just want to compare the bare spectra of the whole files, you can simply correlate the values returned by the two FFTs.
However, spectra tend to change a lot when the files get warped in time. To deal with this, you need to do a windowed FFT and compare the spectra for each window. This then defines the difference function you can use in a dynamic time warping (DTW) algorithm.
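As a rough illustration of that idea (a sketch, not tuned for your recordings), in Python/SciPy: compute one magnitude spectrum per window, then use a plain dynamic-time-warping distance between the two sequences of spectra. Both helper names are my own.

import numpy as np
from scipy.signal import stft

def windowed_spectra(x, fs, nperseg=1024):
    """One magnitude spectrum (row) per time window."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    return np.abs(Z).T

def dtw_distance(A, B):
    """Plain O(n*m) dynamic time warping over two sequences of spectra."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])   # local spectral distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# The stored recording with the smallest warped distance to the incoming one
# is the best match:
# best = min(range(len(refs)),
#            key=lambda k: dtw_distance(windowed_spectra(incoming, fs),
#                                       windowed_spectra(refs[k], fs)))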
If you need perceptual resemblance an FFT probably does not get you what you need. An MFCC of the recordings is most likely much closer to this problem. Again, you might need to calculate windowed MFCCs instead of MFCCs of the whole recording.
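If you go the MFCC route, a library such as librosa (one option, assuming Python for prototyping) gives you per-window MFCCs directly; you could feed these into the same DTW comparison instead of raw spectra.

import numpy as np
import librosa

fs = 8000
x = np.random.randn(5 * fs).astype(np.float32)   # stand-in for one recording

# One 13-dimensional MFCC vector per analysis frame (rows after the transpose).
mfcc = librosa.feature.mfcc(y=x, sr=fs, n_mfcc=13).T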
If you have musical recordings you need completely different approaches again. There is a blog posting that describes how Shazam works, so you might be able to find it on Google. Or if you want real musical similarity, have a look at this book.
EDIT:
The best solution for the problem specified above would be the one described here (the "Shazam algorithm" mentioned above). It is, however, a bit complicated to implement, and an easier solution might do well enough.
If you know that there are only 5 different possible incoming files, I would suggest first trying something as simple as the Euclidean distance between the two signals (in the time or frequency domain). It is likely to give you good results.
Edit: Since the recordings can start at different offsets, try cross-correlating the incoming recording with each reference and see which one gives the highest peak.
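A minimal sketch of that cross-correlation check in Python/SciPy (the normalization and the helper name best_match are my own choices):

import numpy as np
from scipy.signal import correlate

def best_match(incoming, references):
    """Index of the reference whose normalized cross-correlation with the
    incoming signal has the highest peak (tolerates different start offsets)."""
    scores = []
    for ref in references:
        xc = correlate(incoming, ref, mode='full')
        scores.append(np.max(np.abs(xc)) /
                      (np.linalg.norm(incoming) * np.linalg.norm(ref) + 1e-12))
    return int(np.argmax(scores))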
I suggest you compute a simple sound parameter like the fundamental frequency. There are several methods of getting this value - I tried autocorrelation and the cepstrum, and for voice signals they worked fine. With such a function working you can do a time analysis and compare two signals (the base signal you compare against and the incoming signal you want to match) on their fundamental frequency over given intervals. Comparing several intervals on this criterion can tell you which base sample matches best.
Of course everything depends on what you mean by "resembles most". To refine the comparison you can introduce other parameters like volume, noise, clicks, pitches...
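As an illustration of the autocorrelation approach for a single frame (a rough sketch, not a robust pitch tracker; the 60-400 Hz search range is an assumption for speech):

import numpy as np

def f0_autocorr(frame, fs, fmin=60.0, fmax=400.0):
    """Rough fundamental-frequency estimate of one frame via autocorrelation."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]  # lags >= 0
    lo, hi = int(fs / fmax), int(fs / fmin)     # plausible pitch-period range
    lag = lo + np.argmax(ac[lo:hi])             # strongest peak in that range
    return fs / lag

Computing this per interval for both the base and the incoming recording gives two pitch tracks you can compare directly.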

What properties of sound can be represented / computed in code?

This one is probably for someone with some knowledge of music theory. Humans can identify certain characteristics of sounds, such as pitch, frequency, etc. Based on these properties, we can compare one sound to another and get a measure of likeness. For instance, it is fairly easy to distinguish the sound of a piano from that of a guitar, even if both are playing the same note.
If we were to go about the same process programmatically, starting with two audio samples, what properties of the sounds could we compute and use for our comparison? On a more technical note, are there any popular APIs for doing this kind of stuff?
P.S.: Please excuse me if I've made any elementary mistakes in my question or if I sound like a complete music noob. It's because I am a complete music noob.
There are two sets of properties.
The "Frequency Domain" -- the amplitude of each overtone present in a specific sample.
The "Time Domain" -- the sequence of amplitude samples through time.
You can, using Fourier Transforms, convert between the two.
The time domain is what sound "is" -- a sequence of amplitudes. The frequency domain is what we "hear" -- a set of overtones and pitches that determine instruments, harmonies, and dissonance.
A mixture of the two -- frequencies varying through time -- is the perception of melody.
The Echo Nest has easy-to-use analysis APIs to find out all you might want to know about a piece of music.
You might find the analyze documentation (warning, pdf link) helpful.
Any and all properties of sound can be represented / computed - you just need to know how. One of the more interesting is spectral analysis / spectrogramming (see http://en.wikipedia.org/wiki/Spectrogram).
Any properties you want can be measured or represented in code. What do you want?
Do you want to test if two samples came from the same instrument? That two samples of different instruments have the same pitch? That two samples have the same amplitude? The same decay? That two sounds have similar spectral centroids? That two samples are identical? That they're identical but maybe one has been reverberated or passed through a filter?
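For instance, one of the simpler properties above, the spectral centroid (a common "brightness" measure), is just the amplitude-weighted mean frequency of the spectrum; a sketch in Python/NumPy:

import numpy as np

def spectral_centroid(x, fs):
    """Amplitude-weighted mean frequency of the magnitude spectrum."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.sum(freqs * mag) / np.sum(mag)

A piano note and a guitar note at the same pitch will typically give different centroids, because their overtone balance differs.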
Ignore all the arbitrary human-created terms that you may be unfamiliar with, and consider a simpler description of reality.
Sound, like anything else that we perceive, is simply a spatial-temporal pattern, in this case "of movement"... of atoms (air particles, piano strings, etc.). Movement of objects leads to movement of air that creates pressure waves in our ears, which we interpret as sound.
Computationally, this is easy to model; however, because this movement can be any pattern at all -- from a violent random shaking to a highly regular oscillation -- there often is no constant identifiable "frequency", because it's often not a perfectly regular oscillation. The shape of the moving object, waves reverberating through it, etc. all cause very complex patterns in the air... like the waves you'd see if you punched a pool of water.
The problem reduces to identifying common patterns and features of movement (at very high speeds). Because the patterns are arbitrary, you really need a system that learns and classifies common patterns of movement (i.e. movement represented numerically in the computer) into conceptual buckets of some sort.

Audio normalization/fixation?

I am using an audio fingerprinting technique to mark songs in long recordings, for example recordings of radio shows. The fingerprinting mechanism works fine, but I have a problem with normalization (or downsampling).
Here you can see the same song twice, but with different waveforms. I know I should fix the DC offset and apply some high-pass and low-pass filtering. I already do this with SoX using highpass 1015 and lowpass 1015, and I use WaveGain to fix the volume and the DC offset. In that case the waveforms turn into the one below:
But even then I can't get the same fingerprint. (I am not expecting them to be 100% the same, but at least 50% would be good.)
So, what do you think? What can I do to fix the recordings so that they produce the same fingerprints? Maybe some audio filtering would work, but I don't know which filter to use. Can you help me?
By the way, here is the explanation of fingerprinting technique.
http://wiki.musicbrainz.org/Future_Proof_Fingerprint
http://wiki.musicbrainz.org/Future_Proof_Fingerprint_Function
Your input waveforms appear to be clipping, so no amount of filtering is going to result in a meaningful "fingerprint". Make sure you collect valid input samples that have a reasonable dynamic range but which do not clip.
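For the pre-conditioning steps mentioned in the question (DC-offset removal and level normalization), here is a hedged sketch in Python/NumPy; this is generic preprocessing, not the Future Proof Fingerprint code, and the 0.99/1% clipping heuristic is an arbitrary choice.

import numpy as np

def precondition(x):
    """Flag likely clipping, remove the DC offset, then peak-normalize.
    Assumes x is a float signal scaled to [-1, 1]."""
    if np.mean(np.abs(x) > 0.99) > 0.01:        # >1% of samples near full scale
        print("warning: input looks clipped; re-capture at a lower level")
    x = x - np.mean(x)                          # DC offset removal
    peak = np.max(np.abs(x))
    return x / peak if peak > 0 else x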
