Filter for audio signal processing?

I would like to ask what a good filter for audio signal processing is, particularly for note onset detection.
What I need is a filter that makes peaks sharper while smoothing out the rest, something like in the image below:
I am not sure whether low/high-pass filters are what I need, because I know those filters work in the frequency domain, and I want to work in the time domain. I am only working on monophonic signals, recorded as 44.1 kHz, 16-bit mono .WAV files.
Thanks!

I would suggest a non-linear approach - effectively you want to do envelope detection with a short time constant.
y_1 = 0; // init y_1 = previous value of output signal, y
loop
y = abs(x); // rectify input signal
y = k * y + (1.0 - k) * y_1; // apply single pole recursive low pass filter
y_1 = y; // save output value for next iteration
end
Choosing k (NB: 0.0 < k < 1.0) is the tricky part and may require some experimentation. If k is too small then you will have a large time constant, and this may result in too much lag in your onset detection. If k is too large then the time constant may be too small and you may get false positives. (In the latter case, though, you may be able to improve results by rejecting onsets that fall within a given minimum time window of the previous "real" onset, e.g. 10 ms.) Start with, say, k = 0.1 and then perhaps try reducing it until the lag becomes unacceptable.
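For reference, here is a minimal Python/NumPy sketch of the pseudocode above; the function name and the assumption that x is a 1-D float array are mine:

import numpy as np

def envelope(x, k=0.1):
    # Full-wave rectify, then apply a single-pole recursive low-pass filter.
    # k is the smoothing coefficient, 0.0 < k < 1.0 (larger k = shorter time constant).
    y = np.empty(len(x), dtype=float)
    y_1 = 0.0                                # previous output value
    for n, sample in enumerate(np.abs(x)):   # rectify input signal
        y_1 = k * sample + (1.0 - k) * y_1   # single-pole low-pass
        y[n] = y_1
    return y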

Instead of a filter for your desired audio processing, you might want to try some form of AGC (automatic gain control) to normalize the signal's envelope amplitude, with a time constant somewhere in the neighborhood of 1 beat time.
But accurate note onset detection may require more advanced signal processing and pattern matching techniques. There seem to be more than a few research papers on the topic.
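For what it's worth, here is a rough Python sketch of the AGC idea; the one-pole envelope estimate, the 0.5 s "beat" length and the target level are placeholder choices to tune, not a recommendation:

import numpy as np

def agc(x, sample_rate=44100, beat_seconds=0.5, target_rms=0.1, eps=1e-6):
    # Normalize the signal's envelope amplitude with a roughly one-beat time constant.
    k = 1.0 / (beat_seconds * sample_rate)                # per-sample smoothing coefficient
    power = 0.0
    out = np.empty(len(x), dtype=float)
    for n, s in enumerate(x):
        power = k * s * s + (1.0 - k) * power             # running mean-square estimate
        out[n] = s * target_rms / (np.sqrt(power) + eps)  # drive the level toward the target
    return out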

Related

Reducing a FFT spectrum range

I am currently running Python's NumPy FFT on 44100 Hz audio samples, which gives me a working frequency range of 0 Hz to 22050 Hz (thanks, Nyquist). Once I run the FFT on those time-domain values, I have 128 points in my FFT spectrum, giving me a frequency bin size of 172 Hz.
I would like to tighten the frequency bin to 86 Hz and still keep to only 128 FFT points, instead of increasing my FFT count to 256 through an adjustment to how I'm creating my samples.
The question I have is whether or not this is theoretically possible. My thought would be to run the FFT on values between 0 Hz and 11025 Hz only. I don't care about anything above that anyway. This would cut my working spectrum in half and put my frequency bins at 86 Hz while keeping my 128 spectrum bins. Perhaps this could be accomplished via a window function in the time domain?
Currently the code I'm using to create my samples and then convert to fft is:
import numpy as np
import pyaudio

sample_rate = 44100
chunk = 128
record_seconds = 2

audio = pyaudio.PyAudio()
stream = audio.open(format=pyaudio.paInt16, channels=1,
                    rate=sample_rate, input=True, frames_per_buffer=6300)

sample_list = []
for i in range(0, int(sample_rate / chunk * record_seconds)):
    data = stream.read(chunk)
    # np.fromstring is deprecated for binary data; frombuffer does the same job
    sample_list.append(np.frombuffer(data, dtype=np.int16))

### then later ###:
for samp in sample_list:
    samp_fft = np.fft.fft(samp)  # ...
I hope I worded this clearly enough. Let me know if I need to adjust my explanation or terminology.
What you are asking for is not possible. As you mentioned in a comment, you require a short time window. I assume this is because you're trying to detect when a signal arrives at a certain frequency (I answered your earlier question on the subject) and you want the detection to be time sensitive. However, it seems your bin size is too large for your requirements.
There are only two ways to decrease the bin size: 1) increase the length of the FFT, which unfortunately means it will take longer to acquire the data; or 2) lower the sample rate (either by sample-rate conversion or at the hardware level), but since the samples then arrive more slowly, it will also take longer to acquire the data.
I'm going to suggest a third option, which from what I've gleaned from this and your other questions may be a better solution: perform the frequency detection in the time domain. This would require a time-domain bandpass filter followed by an RMS meter. Implementation-wise, the filter would be one or more biquads, which you could implement in Python; there are probably implementations already available. The tricky part is designing the filter, but I'd be happy to help you in chat. The RMS meter basically takes the square root of the sum of the squares of the output samples from the filter.
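As a rough sketch of that approach, using SciPy's ready-made Butterworth design rather than hand-rolled biquad coefficients (the 700-900 Hz band is just an example to replace with your target frequency):

import numpy as np
from scipy.signal import butter, sosfilt

def band_rms(x, sample_rate=44100, low_hz=700.0, high_hz=900.0):
    # Band-pass with second-order (biquad) sections, then measure the RMS level.
    sos = butter(2, [low_hz, high_hz], btype='bandpass', fs=sample_rate, output='sos')
    y = sosfilt(sos, x)
    return np.sqrt(np.mean(y ** 2))   # RMS of the filtered output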
Doubling the size of the FFT would be the obvious thing to do, but if there is a good reason why you can't, then consider 2x downsampling prior to the FFT to get the effective sample rate down to 22050 Hz (a sketch follows the list):
- Apply a low-pass filter with cut-off at 11 kHz
- Discard every other sample from the filtered output
- Apply the FFT to the down-sampled data
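A sketch of those three steps with NumPy/SciPy; the 8th-order Butterworth and the zero-phase sosfiltfilt call are my assumptions, not the only reasonable choices:

import numpy as np
from scipy.signal import butter, sosfiltfilt

def fft_after_downsampling(samples, sample_rate=44100):
    x = np.asarray(samples, dtype=float)
    sos = butter(8, 11000.0, btype='lowpass', fs=sample_rate, output='sos')
    filtered = sosfiltfilt(sos, x)      # low-pass (anti-alias) filter at 11 kHz
    decimated = filtered[::2]           # discard every other sample: rate is now 22050 Hz
    return np.fft.rfft(decimated)       # bins now cover 0..11025 Hz

Note that decimation also halves the number of samples, so you would need to collect twice as many input samples to keep the same number of FFT points.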
If you are not trying to resolve between close adjacent frequency peaks or noise then, to halve the frequency bin spacing, you can zero-pad your data to double the FFT length without having to wait for more data. Then, if you only want the lower half of the frequency range 0..Fs/2, just throw away the middle half of the FFT result vector (which is usually far more efficient than trying to compute the lower half of the frequency range via non-FFT means).
Note that zero-padding gives the same result as high-quality interpolation (as in smoothing a plot of the original FFT result points). It does not increase peak separation resolution, but might make it easier to pick out more precise peak locations in the plot if the noise level is low enough.
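A small NumPy sketch of the zero-padding idea; for simplicity it returns only the positive-frequency bins up to Fs/4 rather than literally discarding the middle half of the full FFT vector:

import numpy as np

def zero_padded_low_spectrum(samples):
    x = np.asarray(samples, dtype=float)
    n = len(x)
    padded = np.concatenate([x, np.zeros(n)])   # 2x zero-padding halves the bin spacing
    spectrum = np.fft.fft(padded)               # length 2n, bin spacing Fs / (2n)
    return spectrum[: n // 2 + 1]               # positive frequencies from 0 up to Fs/4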

Note Onset Detection using Spectral Difference

I'm fairly new to onset detection. I read some papers about it and know that when working only in the time domain it is possible to get a large number of false positives/negatives, and that it is generally advisable to work with either the frequency domain alone or both the time and frequency domains.
Regarding this, I am a bit confused: I am having trouble understanding how the spectral energy or the results from the FFT bins can be used to determine note onsets. Aren't note onsets represented by sharp peaks in amplitude?
Can someone enlighten me on this? Thank you!
This is the easiest way to think about note onsets:
Think of a music signal as a flat, constant signal. When an onset occurs, you see a large, rapid CHANGE in the signal (a positive or negative peak).
What this means in the frequency domain:
The spectrum of a steady, unchanging signal is itself steady; it barely changes from one frame to the next.
When the onset event occurs there is a rapid increase in spectral content.
You may think "Well, you're actually talking about the peak of the onset, right?" Not at all. We are not actually interested in the peak of the onset, but rather the rising edge of the signal. When there is a sharp increase in the signal, the high-frequency content increases.
One way to do this is using the spectral difference function (sketched in code after these steps):
Take your time-domain signal and cut it up into overlapping frames (typically 50% overlap).
Apply a Hamming/Hann window (this reduces spectral smearing; remember that cutting the signal into windows is like multiplying it by a rectangular pulse, which in the frequency domain is like convolving the spectrum with a sinc function).
Apply the FFT to two successive windows.
For each DFT bin, calculate the difference between the X_n and X_(n-1) bins; if it is negative, set it to zero.
Square the results and sum all the bins together.
Repeat until the end of the signal.
Look for peaks in the resulting detection function using median thresholding, and there are your onset times!
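Here is a compact NumPy sketch of that recipe; the frame length, hop size, median-window width and threshold factor are assumptions you would tune for your material:

import numpy as np

def spectral_difference_onsets(x, frame_len=1024, hop=512, scale=1.5):
    # Half-wave rectified spectral difference with simple median thresholding.
    window = np.hanning(frame_len)
    frames = [x[i:i + frame_len] * window
              for i in range(0, len(x) - frame_len, hop)]       # 50% overlap
    mags = np.array([np.abs(np.fft.rfft(f)) for f in frames])   # magnitude spectra

    diff = np.maximum(mags[1:] - mags[:-1], 0.0)    # per-bin difference, negatives zeroed
    flux = np.sum(diff ** 2, axis=1)                # square and sum all the bins

    onsets = []
    half = 8                                        # median window: 8 frames each side
    for n in range(len(flux)):
        lo, hi = max(0, n - half), min(len(flux), n + half + 1)
        neighbourhood = flux[lo:hi]
        if flux[n] >= neighbourhood.max() and flux[n] > scale * np.median(neighbourhood):
            onsets.append((n + 1) * hop)            # sample index near the detected onset
    return onsets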
Source:
https://adamhess.github.io/Onset_Detection_Nov302011.pdf
and
http://www.elec.qmul.ac.uk/people/juan/Documents/Bello-TSAP-2005.pdf
You can treat sharp differences in amplitude at a specific frequency as suspected sound onsets. For instance, if a flute switches from playing a G5 to playing a C, there will be a sharp drop in the amplitude of the spectrum at around 784 Hz.
If you don't know what frequency to examine, the magnitude of an FFT vector will give you the amplitude of every frequency over some window in time (with a resolution dependent on the length of the time window). Pick your frequency, or a bunch of frequencies, and diff two FFTs of two different time windows. That might give you something that can be used as part of a likelihood estimate for a sound onset or change somewhere between the two time windows. Sliding the windows or successive approximation of their location in time might help narrow down the time of a suspected note onset or other significant change in the sound.
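As a small illustration of diffing FFTs at one frequency (784 Hz here, to match the flute example above; the window length and Hann weighting are my choices):

import numpy as np

def magnitude_change_at(freq_hz, window_a, window_b, sample_rate=44100):
    # Change in FFT magnitude at one frequency between two equal-length time windows.
    n = len(window_a)
    k = int(round(freq_hz * n / sample_rate))   # nearest FFT bin to the target frequency
    taper = np.hanning(n)
    mag_a = np.abs(np.fft.rfft(window_a * taper))[k]
    mag_b = np.abs(np.fft.rfft(window_b * taper))[k]
    return mag_b - mag_a   # strongly negative suggests the note at freq_hz just ended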
"Because, aren't note onsets represented by sharp peaks in amplitude?"
A: Not always. On percussive instruments (including piano) this is true, but for violin, flute, etc. notes often "slide" into each other as frequency changes without sharp amplitude increases.
If you stick to a single instrument like the piano, onset detection is doable. Generalized onset detection is a much more difficult problem. There are about a dozen primitive features that have been used for onset detection. Once you code them, you still have to decide how best to use them.

Programmatically increase the pitch of an array of audio samples

Hello kind people of the audio computing world,
I have an array of samples that represent a recording. Let us say that it is 5 seconds at 44100 Hz. How would I play this back at an increased pitch? And is it possible to increase and decrease the pitch dynamically? Like have the pitch slowly increase to double the speed and then back down.
In other words I want to take a recording and play it back as if it is being 'scratched' by a d.j.
Pseudocode is always welcomed. I will be writing this up in C.
Thanks,
EDIT 1
Allow me to clarify my intentions. I want to keep the playback at 44100Hz and so therefore I need to manipulate the samples before playback. This is also because I would want to mix the audio that has an increased pitch with audio that is running at a normal rate.
Expressed in another way, maybe I need to shrink the audio over the same number of samples somehow? That way when it is played back it will sound faster?
EDIT 2
Also, I would like to do this myself. No libraries please (unless you feel I could pick through the code and find something interesting).
EDIT 3
A sample piece of code written in C that takes 2 arguments (array of samples and pitch factor) and then returns an array of the new audio would be fantastic!
PS: I've started a bounty on this not because I think the answers already given aren't valid; I just thought it would be good to get more feedback on the subject.
AWARD OF BOUNTY
Honestly I wish I could distribute the bounty over several different answers as they were quite a few that I thought were super helpful. Special shoutout to Daniel for passing me some code and AShelly and Hotpaw2 for putting in such detailed responses.
Ultimately though I used an answer from another SO question referenced by datageist and so the award goes to him.
Thanks again everyone!
Take a look at the "Elephant" paper in Nosredna's answer to this (very similar) SO question:
How do you do bicubic (or other non-linear) interpolation of re-sampled audio data?
Sample implementations are provided starting on page 37, and for reference, AShelly's answer corresponds to linear interpolation (on that same page). With a little tweaking, any of the other formulas in the paper could be plugged into that framework.
For evaluating the quality of a given interpolation method (and understanding the potential problems with using "cheaper" schemes), take a look at this page:
http://www.discodsp.com/highlife/aliasing/
For more theory than you probably want to deal with (with source code), this is a good reference as well:
https://ccrma.stanford.edu/~jos/resample/
One way is to keep a floating point index into the original wave, and mix interpolated samples into the output wave.
// Simulate scratching of `inwave`:
//   `rate` is the speedup/slowdown factor,
//   `inputLen` is the number of samples in `inwave`,
//   the result is mixed into `outwave`.
// "Sample" is a typedef for the raw audio type.
void ScratchMix(Sample* outwave, Sample* inwave, int inputLen, float rate)
{
    float index = 0;
    while (index < inputLen - 1)   // -1 keeps inwave[i+1] in bounds
    {
        int i = (int)index;
        float frac = index - i;    // will be between 0 and 1
        Sample s1 = inwave[i];
        Sample s2 = inwave[i + 1];
        *outwave++ += s1 + (s2 - s1) * frac;  // do clipping here if needed
        index += rate;
    }
}
If you want to change rate on the fly, you can do that too.
If this creates noisy artifacts when rate > 1, try replacing *outwave++ += s1 + (s2-s1)*frac; with this technique (from this question)
*outwave++ = InterpolateHermite4pt3oX(inwave+i-1,frac);
where
float InterpolateHermite4pt3oX(Sample* x, float t)
{
    float c0 = x[1];
    float c1 = 0.5f * (x[2] - x[0]);
    float c2 = x[0] - (2.5f * x[1]) + (2.0f * x[2]) - (0.5f * x[3]);
    float c3 = (0.5f * (x[3] - x[0])) + (1.5f * (x[1] - x[2]));
    return (((((c3 * t) + c2) * t) + c1) * t) + c0;
}
Example of using the linear interpolation technique on "Windows Startup.wav" with a factor of 1.1. The original is on top, the sped-up version is on the bottom:
It may not be mathematically perfect, but it sounds as it should and ought to work fine for the OP's needs.
Yes, it is possible.
But this is not a small amount of pseudo code. You are asking for a time pitch modification algorithm, which is a fairly large and complicated amount of DSP code for decent results.
Here's a Time Pitch stretching overview from DSP Dimensions. You can also Google for phase vocoder algorithms.
ADDED:
If you want to "scratch", as a DJ might do with an LP on a physical turntable, you don't need time-pitch modification. Scratching changes the pitch and the speed of play by the same amount (not independently as would require time-pitch modification).
And the resulting array won't be of the same length, but will be shorter or longer by the amount of pitch/speed change.
You can change the pitch, as well as make the sound play faster or slower by the same ratio, just by resampling the signal using properly filtered interpolation. Instead of advancing the read position by 1.0 per output sample, advance it (in floating point) by your desired rate change, then filter and interpolate the data at that point. Interpolation using a windowed sinc kernel, with a low-pass transition frequency below the lower of the original and interpolated local sample rates, will work fairly well. Searching for "windowed sinc interpolation" on the web returns lots of suitable results.
You need an interpolation method that includes a low-pass filter, or else you will hear horrible aliasing noise. (The exception to this might be if your original sound file is already severely low-pass filtered, a decade or more below the sample rate.)
If you want this done easily, see AShelly's suggestion [edit: as a matter of fact, try it first anyway]. If you need good quality, you basically need a phase vocoder.
The very basic idea of a phase vocoder is to find the frequencies that the sound consists of, change those frequencies as needed and resynthesize the sound. So a brutal simplification would be:
run FFT
change all frequencies by a factor
run inverse FFT
If you're going to implement this yourself, you definitely should read a thorough explanation of how a phase vocoder works. The algorithm really needs many more considerations than the three-step simplification above.
Of course, ready-made implementations exist, but from the question I gather you want to do this yourself.
To decrease and increase the pitch is as simple as playing the sample back at a lower or higher rate than 44.1kHz. This will produce the slower/faster record sound but you'll need to add the 'scratchiness' of real records.
This helped me with resampling, which is the same thing you need, just looked at from the opposite side.
If you can't find code, ping me, I have a nice C routine for this.

Measure audio noise level

I'm trying to get a qualitative handle on the amount of static or noise present in an audio stream. The normal content of the stream is voice or music.
I've been experimenting with taking the stddev of the samples, and that does give me some handle on the presence of voice vs. empty-channel noise (i.e. a high stddev usually indicates voice or music).
Was wondering if anyone else had some pointers on this.
Doesn't the peak value give you the answer? If you're looking at a signal from a good ADC, the ambient level should be in the 1's or 10's of counts, while voice or music will get up into the thousands of counts. Is there some kind of automatic gain control that makes this strategy not work?
If you need something more complex, the peak to RMS ratio might be a bit more reliable than simply RMS level (RMS = stddev). Pure noise will have a ratio of around 3-5, while sinusoids, for instance, have a peak to RMS ratio of 1.4. However, you can get more discrimination by looking at the spectrum of the signal. Static is usually spectrally smooth or even flat, while voice and music are spectrally structured. So a Fourier transform might be what you're looking for. Assuming a signal x that contains, say 0.5 seconds worth of data, here's some Matlab code:
Sx = fft(x .* hann(length(x), 'periodic'))
The HANN function applies a Hann window to reduce spectral leakage, while the FFT function quickly calculates the Fourier transform. Now you have a couple of choices. If you want to determine whether the signal x consists of static or voice/music, take the peak to RMS ratio of the spectrum:
pk2rms = max(abs(Sx))/sqrt(sum(abs(Sx).^2)/length(Sx))
I'd expect pure static to have a peak to RMS ratio around 3-5 (again), while voice/music would be at least an order of magnitude higher. This takes advantage of the fact that pure white noise has the same "structure" in time and frequency domains.
If you want to get a numerical estimate of the noise level, you can calculate the power in Sx over time, using an average:
Gxx = ((k-1)*Gxx + Sx.*conj(Sx))/k
Over time, the peaks in Gxx should come and go, but you should see a constant minimum value corresponding to the noise floor. In general, audio spectra are easier to look at on a dB (log vertical) scale.
Some notes:
1. I picked 0.5 seconds for the length of x, but I'm not sure what an optimal value here is. If you pick a value that's too short, x will not have much structure. In that case, the DC component of the signal will have a lot of energy. I expect you can still use the peak to RMS discriminator, though, if you first toss out the bin in Sx corresponding to DC.
2. I'm not sure what a good value for k is, but that equation corresponds to exponential averaging. You can probably experiment with k to figure out an optimal value. This might work best with a short x.
There are different kinds of noise: white, pink, brown. Noise can come from many places. Is a 60 Hz hum noise or signal?
For white noise, I'd look at the FFT and find the lowest value to see what your noise floor is.
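A minimal NumPy sketch of that idea; using a low percentile rather than the strict minimum, and the Hann window, are my choices:

import numpy as np

def noise_floor_db(x):
    # Estimate the noise floor from the quiet end of the magnitude spectrum.
    mags = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    floor = np.percentile(mags[1:], 5)        # skip the DC bin, take a low percentile
    return 20.0 * np.log10(floor + 1e-12)     # relative dB, not calibrated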

Bandlimited waveform generation [closed]

I am writing a software synthesizer and need to generate bandlimited, alias free waveforms in real time at 44.1 kHz samplerate. Sawtooth waveform would do for now, since I can generate a pulse wave by mixing two sawtooths together, one inverted and phase shifted.
So far I've tried the following approaches:
Precomputing one-cycle perfectly bandlimited waveform samples at different bandlimit frequencies at startup, then playing back the two closest ones mixed together. Works okay I guess, but does not feel very elegant. A lot of samples are needed or the "gaps" between them will be heard. Interpolating and mixing is also quite CPU intensive.
Integrating a train of DC compensated sinc pulses to get a sawtooth wave. Sounds great except that the wave drifts away from zero if you don't get the DC compensation exactly right (which I found to be really tricky). The DC problem can be reduced by adding a bit of leakage to the integrator, but then you lose the low frequencies.
So, my question is: What is the usual way this is done? Any suggested solution must be efficient in terms of CPU, since it must be done in real time, for many voices at once.
One fast way to generate band-limited waveforms is by using band-limited steps (BLEPs). You generate the band-limited step itself:
and store that in a wavetable, then replace each transition with a band-limited step, to create waveforms that look like this:
See the walk-through at Band-Limited Sound Synthesis.
Since this BLEP is non-causal (meaning it extends into the future), for generating real-time waveforms, it's better to use the minimum-phase band-limited step, called a MinBLEP, which has the same frequency spectrum, but only extends into the past:
MinBLEPs take the idea further and take a windowed sinc, perform a minimum phase reconstruction and then integrate the result and store it in a table. Now to make an oscillator you just insert a MinBLEP at each discontinuity in the waveform. So for a square wave you insert a MinBLEP where the waveform inverts, for saw wave you insert a MinBLEP where the value inverts, but you generate the ramp as normal.
There are a lot of ways to approach the bandlimited waveform generation. You will end up trading computational cost against quality as usual.
I suggest that you take a look at this site here:
http://www.musicdsp.org/
Check out the archive! It's full of good material. I just did a search on the keyword "bandlimited". The material that pops up should keep you busy for at least a week.
Btw, I don't know if that's what you're looking for, but I did alias-reduced (i.e. not really band-limited) waveform generation a couple of years ago. I just calculated the integral between the last and the current sample position. For traditional synth waveforms you can do that rather easily if you split your integration interval at the singularities (e.g. where the sawtooth gets its reset). The CPU load was low and the quality acceptable for my needs.
I had the same drift problems, but applying a high-pass with a very low cutoff frequency to the integral got rid of that effect. Real analog synths don't go down into the sub-hertz region anyway, so you won't miss much.
This is what I came up with, inspired by Nils' ideas; I'm pasting it here in case it is useful for someone else. I simply box-filter a sawtooth wave analytically, using the change in phase since the last sample as the kernel size (or cutoff). It works fairly well; there is some audible aliasing at the very highest notes, but for normal usage it sounds great.
To reduce aliasing even more the kernel size can be increased a bit, making it 2*phaseChange for example sounds good as well, though you lose a bit of the highest frequencies.
Also, here is another good DSP resource I found when browsing SP for similar topics: The Synthesis ToolKit in C++ (STK). It's a class library that has lots of useful DSP tools. It even has ready-to-use bandlimited waveform generators. The method they use is to integrate sinc, as I described in my first post (though I guess they do it better than me...).
#include <math.h>   // for fmod

float getBoxFilteredSaw(float phase, float kernelSize);   // forward declaration

float getSaw(float phaseChange)
{
    static float phase = 0.0f;
    phase = fmod(phase + phaseChange, 1.0f);
    return getBoxFilteredSaw(phase, phaseChange);
}

float getPulse(float phaseChange, float pulseWidth)
{
    static float phase = 0.0f;
    phase = fmod(phase + phaseChange, 1.0f);
    return getBoxFilteredSaw(phase, phaseChange)
         - getBoxFilteredSaw(fmod(phase + pulseWidth, 1.0f), phaseChange);
}

float getBoxFilteredSaw(float phase, float kernelSize)
{
    float a, b;
    // Check if kernel is longer than one cycle
    if (kernelSize >= 1.0f) {
        return 0.0f;
    }
    // Remap phase and kernelSize from [0.0, 1.0] to [-1.0, 1.0]
    kernelSize *= 2.0f;
    phase = phase * 2.0f - 1.0f;
    if (phase + kernelSize > 1.0f)
    {
        // Kernel wraps around edge of [-1.0, 1.0]
        a = phase;
        b = phase + kernelSize - 2.0f;
    }
    else
    {
        // Kernel fits nicely in [-1.0, 1.0]
        a = phase;
        b = phase + kernelSize;
    }
    // Integrate and divide by kernelSize
    return (b * b - a * a) / (2.0f * kernelSize);
}
The DC offset from a BLIT can be reduced with a simple high-pass filter, much like a real analogue circuit where a DC-blocking cap is used!
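For reference, a DC-blocking high-pass is only a few lines; here is a sketch in Python for brevity (the coefficient r is a tuning assumption: values closer to 1.0 keep more low end):

def dc_block(x, r=0.995):
    # Simple DC blocker: y[n] = x[n] - x[n-1] + r * y[n-1]
    y = [0.0] * len(x)
    prev_x = prev_y = 0.0
    for n, s in enumerate(x):
        prev_y = s - prev_x + r * prev_y
        prev_x = s
        y[n] = prev_y
    return y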

Resources