Overlap-add in audio analysis-synthesis

I wrote some code that takes an audio signal (currently a sine wave) as an input and does the following:
Take frames of n (1024) samples
Apply FFT
Apply iFFT
Play output
With this process the output signal is basically the same as the input signal.
Now, in a second attempt I do:
Take overlapping frames from the input
Apply a window function
FFT
iFFT
Overlap the output frames
In step 1, if I take overlapping frames using a hop size (the number of samples to jump ahead to take the next frame) that is a power of 2 (4, 8, 256...), the output sound is smooth and resembles the original input sound, but with any other hop size the sound starts to crackle. This happens for any frequency of the input signal. Question 1: Why is the sound smooth only if the hop size is 2^n?
Currently I use a Hanning window. When the hop size is large (e.g. 512) the output sound has a lower volume than when the hop size is small (e.g. 64). This seems like expected behavior, because a small hop size means that each sample is reconstructed from more frames, so more signals are added together. Question 2: Is there a way to properly scale the output signal so that its volume matches the original signal?
Thank you!

This should not be happening; the overlap-add method can reconstruct your signal without the problems you describe. We don't know exactly what you are doing, but I did this some time ago and it works for any hop size and window size. A small secret is to pad your signal with zeros before and after, to ensure a continuous signal at the edges. If you look carefully you will realize that your window function works like a fade-in/fade-out: if you just concatenate the frames you will notice some clicks, or the output signal will sound like a vibrato. It gets a little tricky to tell where your problem actually is!
Just for debugging, skip the FFT and iFFT steps and see if your signal is reconstructed correctly. If it is, your overlap-add process works and your problem is probably in the FFT/iFFT ...
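A minimal sketch of that debug step, assuming NumPy; the frame length, hop size and test sine are arbitrary choices, and the window-sum division at the end is what makes the reconstruction exact for any hop size:

```python
import numpy as np

frame_len, hop = 1024, 512                                   # 50% overlap
fs = 44100
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)             # 1 s test sine at 440 Hz

win = np.hanning(frame_len)
out = np.zeros(len(x) + frame_len)
norm = np.zeros(len(x) + frame_len)                          # running sum of the windows

for start in range(0, len(x) - frame_len, hop):
    frame = x[start:start + frame_len] * win                 # analysis window, no FFT/iFFT yet
    out[start:start + frame_len] += frame                    # overlap-add
    norm[start:start + frame_len] += win                     # how much window weight each sample got

out = out[:len(x)] / np.maximum(norm[:len(x)], 1e-12)        # undo the window gain
print(np.max(np.abs(out[frame_len:-frame_len] - x[frame_len:-frame_len])))   # ~1e-16 away from the edges
```

If that difference is not essentially zero, the overlap-add bookkeeping is the problem; if it is zero, look at the FFT/iFFT step instead.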

Overlap-add is normally done without using a non-rectangular window function. Zero-pad instead.
If you do use a window function, then you have to make sure that all the offset window functions sum to a constant level, which for a Von Hann window happens at certain offsets (except at the very beginning and end of the series). Two Hann windows offset by half a window length sum to a constant because cos(x + pi) = -cos(x), so
(1 - cos(x)) + (1 - cos(x + pi)) = 2 - (cos(x) + cos(x + pi)) = 2
If you sum more overlapping windows into the result without any scaling, the level of the sum will of course increase.
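A quick numerical check of this, assuming NumPy; the frame length and hop sizes are just examples. With the "periodic" form of the Hann window, the shifted windows sum to a constant for any hop equal to the frame length divided by an integer of 2 or more, and that constant is what you divide the overlap-added output by to bring the volume back to the original level (Question 2 above):

```python
import numpy as np

N = 1024
n = np.arange(N)
win = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)       # "periodic" Hann: the variant that sums exactly

for hop in (N // 2, N // 4, 64):
    acc = np.zeros(8 * N)
    for start in range(0, len(acc) - N + 1, hop):
        acc[start:start + N] += win               # pile up the shifted windows
    mid = acc[2 * N:4 * N]                        # ignore the ramp at the very beginning/end
    print(hop, mid.min(), mid.max(), 0.5 * N / hop)   # min ~= max ~= 0.5 * N / hop
```

Dividing the summed output by 0.5 * N / hop (or, more robustly, by the running sum of the windows) keeps the level independent of the hop size.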

Related

Realtime STFT and ISTFT in Julia for Audio Processing

I'm new to audio processing and dealing with data that's being streamed in real-time. What I want to do is:
listen to a built-in microphone
chunk together samples into 0.1-second chunks
convert the chunk into a periodogram via the short-time Fourier transform (STFT)
apply some simple functions
convert back to time series data via the inverse STFT (ISTFT)
play back the new audio on headphones
I've been looking around for "real time spectrograms" to give me a guide on how to work with the data, but no dice. I have, however, discovered some interesting packages, including PortAudio.jl, DSP.jl and MusicProcessing.jl.
It feels like I'd need to make use of multiprocessing techniques to just store the incoming data into suitable chunks, whilst simultaneously applying some function to a previous chunk, whilst also playing another previously processed chunk. All of this feels overcomplicated, and has been putting me off from approaching this project for a while now.
Any help will be greatly appreciated, thanks.
As always, start with a simple version of what you really need ... ignore pulling in audio from a microphone for now; instead write some code to synthesize a sine curve of a known frequency and use that as your input audio, or read in audio from a WAV file - the benefit here is that it's known and reproducible, unlike microphone audio
this post shows how to use some of the libs you mention http://www.seaandsailor.com/audiosp_julia.html
You speak of a "real time spectrogram" ... this is simply repeatedly processing a window of audio, so let's initially simplify that as well ... once you are able to read in the WAV audio file, send it into an FFT call, which will return that audio curve in its frequency-domain representation ... as you correctly state, this frequency-domain data can then be sent into an inverse FFT call to give you back the original time-domain audio curve
After you get the above working, wrap it in a call that supplies a sliding window of audio samples, which gives you the "real time" ability to parse incoming audio from your microphone ... keep in mind you always use a power-of-2 number of audio samples in the window you feed into your FFT and iFFT calls ... let's say your window is 16384 samples ... your Julia server will need to juggle multiple demands: (1) pluck the next buffer of samples from your microphone feed, (2) send a window of samples into your FFT and iFFT calls ... be aware that the number of audio samples in your sliding window will typically be larger than the size of your incoming microphone buffer - hence the notion of a sliding window ... over time, add each mic buffer to the front of this window and remove the same number of samples from its tail
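The thread is about Julia, but the round trip itself is language-agnostic; here is a minimal sketch of the "synthesize a sine, FFT it, inverse-FFT it, compare" step in Python/NumPy (the 16384-sample window is the size mentioned above, and the 440 Hz tone is arbitrary) - the same structure carries over to the Julia packages mentioned in the question:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)              # known, reproducible input instead of a microphone

N = 16384                                    # power-of-2 window size
window = x[:N]

spectrum = np.fft.rfft(window)               # time domain -> frequency domain
# ... apply your "simple functions" to `spectrum` here ...
roundtrip = np.fft.irfft(spectrum, N)        # frequency domain -> back to the time domain

print(np.max(np.abs(roundtrip - window)))    # ~1e-16: the round trip is lossless
```

The "real time" part is then just repeating this on a sliding window: append each new microphone buffer to the front of the window and drop the same number of samples from its tail, as described above.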

NI Reaktor Audio Input Vertical Line

I recently restarted patching in Reaktor and found something strange. In one of my Primary structures there is an input pin that switches its icon to a vertical line every now and then when I edit the structure. It somehow depends on the audio processing sequence.
Unfortunately I cannot post an image of the structure, because that needs more reputation... however, does anyone know what is going on with this port? It feels spooky. Thanks!
Sounds to me like an audio loop indicator.
It appears when the signal flow is fed back on itself, so that it cannot be processed within the same audio clock tick. In that case the feedback signal is fed in one audio clock later.
That means this port has a delay of 1 sample.
If you want to decide yourself where this delay applies, you can insert a Unit Delay module, which does the same job (a 1-sample delay) but at the place you choose.
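Reaktor aside, the underlying rule is easy to see in plain code: a feedback path can only read the output from the previous audio tick, which is exactly a 1-sample delay (a generic sketch, not Reaktor-specific; the names are made up):

```python
def feedback_loop(x, g=0.5):
    """y[n] = x[n] + g * y[n-1]; the y[n-1] term is the implicit 1-sample delay."""
    y, prev = [], 0.0
    for sample in x:
        out = sample + g * prev       # the feedback uses the value from one clock earlier
        y.append(out)
        prev = out                    # becomes the delayed value for the next sample
    return y

print(feedback_loop([1.0, 0.0, 0.0, 0.0]))   # [1.0, 0.5, 0.25, 0.125]
```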

Creating audio level meter - signal normalization

I have a program which tracks an audio signal in real time. For every processed sample I can read its value, which lies in the range [-1, 1].
I would like to create (and later display) an audio level meter. From what I understand, to do this I need to keep converting my audio signal, in real time and per channel, to dB and then display the dB values for each channel in some graphical form of bars.
I am a bit lost how to do it, and it should be a simple matter. Would normalizing from [-1, 1] to [0, 1] (like (sample + 1) / 2) and then calculating 20*log10 of each incoming sample do it?
You can't plot the signal directly, as it is always varying between positive and negative values.
Therefore you need to average out the strength of the signal every so many samples.
Say you're sampling at 44.1kHz, perhaps you might choose 4410 samples so you're updating your display 10 times per second.
So you calculate the RMS of your 4410 samples - see http://en.wikipedia.org/wiki/Root_mean_square
The RMS value is always positive.
You can then convert this to dB:
dBV = 20 * log10(Vrms)
This assumes that your maximum signal -1 to +1 corresponds to -1 to +1 volt. You will need to do further adjustments if not.
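A minimal sketch of that block-RMS meter, assuming NumPy; the 4410-sample block is the figure from above, and the -100 dB floor for silent blocks is an arbitrary choice:

```python
import numpy as np

def level_db(block):
    """RMS of one block of samples in [-1, 1], converted to dB (full scale = 0 dB)."""
    rms = np.sqrt(np.mean(np.square(block)))
    return 20 * np.log10(rms) if rms > 0 else -100.0     # floor for completely silent blocks

fs = 44100
block_size = fs // 10                                    # 4410 samples -> 10 meter updates per second
x = 0.5 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # test tone at half amplitude

for i in range(0, len(x) - block_size + 1, block_size):
    print(round(level_db(x[i:i + block_size]), 1))       # ~ -9.0 dB (RMS of 0.5 / sqrt(2))
```

Note that the shift from [-1, 1] to [0, 1] suggested in the question is not needed: squaring inside the RMS already removes the sign.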

Digital delay decay

I am developing a digital delay on a microcontroller and I am stuck with the delay decay. The delay is implemented with a comb filter.
Here it is: http://www.tonmeister.ca/main/textbook/intro_to_sound_recording837x.png
The delay line, "emulating the tape", is implemented as a circular buffer. The effect can be killed, and that case is not an issue; when the effect is turned off, though, the tail of the delay is left in the buffer to process, as if the delay had been frozen and the tail were slowly decaying (depending on the feedback gain).
My question is: how many times do I have to recirculate the samples through the buffer?
One way I thought to approach this is by modelling the physical process ... assume that the input sequence has a loudness of 0 dB for its entire duration and that, after going through the delay line, its amplitude is attenuated by a factor of 1/10. In terms of loudness this corresponds to a drop of 20 dB (since power = voltage^2) every time the sequence goes through the feedback path. The weakest audible sound has a loudness of about −130 dB but, taking ambient noise into consideration as well, −120 dB is sufficient as the least reference power. Hence, after the echoes have been through the feedback path 6 times (120 dB / 20 dB) they will no longer be audible.
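As a sanity check, here is that calculation in code (the helper name is arbitrary; the 1/10 gain and -120 dB floor are the figures above):

```python
import math

def passes_until_inaudible(feedback_gain, floor_db=-120.0):
    """Trips through the feedback path before the tail drops below floor_db."""
    drop_per_pass_db = 20 * math.log10(feedback_gain)    # e.g. gain 0.1 -> -20 dB per pass
    return math.ceil(floor_db / drop_per_pass_db)

print(passes_until_inaudible(0.1))    # 6, as computed above
print(passes_until_inaudible(0.5))    # 20: a stronger feedback keeps the tail audible longer
```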
Is there a more efficient way?
Thank you!

Does ADPCM has some sample rate?

ADPCM is adaptive, so it has a variable sample rate. But does it have some average rate or something? Does it have frames of fixed time duration?
You misunderstood it here :-). "Adaptive" doesn't mean that the sample rate is adjusted according to the signal it contains.
"Adaptive" means that the limited available delta steps (4 bits = only 16 possible values to encode a sample) are adapted to the signal by prediction. The codec estimates, from a given sample, which value the next sample may have, and adapts the delta steps to that.
If the signal changes little from sample to sample, the steps are chosen closer together than if the signal changes a lot. It is very unlikely that the signal goes from strongly oscillating to quiet from one sample to the next.
You can see that behaviour if you encode a 100 Hz square wave with such an algorithm and re-open it in an audio editor that shows the waveform. When the waveform changes from one polarity to the other, the signal "speeds up" (the steps get further and further apart) until it reaches the other end, and then it slows down again (the steps get closer and closer together).
It still has a fixed sample rate: the one you give it. In a RIFF WAVE file, the sample rate is stored in the header.
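To make the "adaptive step" idea concrete, here is a deliberately simplified toy codec (not the real IMA ADPCM step tables and 4-bit codes, just the adaptation principle): it transmits one bit per sample and grows or shrinks its step size depending on how fast the signal is changing.

```python
import math

def adm_encode(samples, step=0.01, step_up=1.5, step_down=0.6):
    """Toy adaptive delta modulation: 1 bit per sample, step size adapts to the signal."""
    bits, predicted = [], 0.0
    for x in samples:
        bit = 1 if x >= predicted else 0                 # only the direction is transmitted
        bits.append(bit)
        predicted += step if bit else -step
        # two equal bits in a row -> signal is moving fast -> widen the steps;
        # alternating bits -> signal is roughly flat -> narrow the steps
        step *= step_up if len(bits) >= 2 and bits[-1] == bits[-2] else step_down
    return bits

def adm_decode(bits, step=0.01, step_up=1.5, step_down=0.6):
    """The decoder mirrors the encoder's adaptation, so no step sizes need to be sent."""
    out, value = [], 0.0
    for i, bit in enumerate(bits):
        value += step if bit else -step
        out.append(value)
        step *= step_up if i >= 1 and bits[i] == bits[i - 1] else step_down
    return out

# 100 Hz square wave at a fixed 8 kHz sample rate (the rate is ours, not the codec's)
square = [1.0 if math.sin(2 * math.pi * 100 * n / 8000) >= 0 else -1.0 for n in range(400)]
decoded = adm_decode(adm_encode(square))
# After each polarity flip, `decoded` ramps toward the new level with growing steps and then
# settles with shrinking steps - the "speeds up / slows down" behaviour described above.
```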
