I have some questions about the FFT. I want to use the FFT to analyse the frequency content of a WAV file: 16-bit, dual channel, 44100 Hz. I want to analyse it every 50 ms, so I have 2205 samples at a time. So:
Do I use the FFT with 2205 samples as the input array? And is the output an array with 2205 elements too?
I want to draw the spectrum of the WAV file as some media players do, but do I use the whole output array or just one value from it?
This question isn't very clear, and I may have misunderstood, but I don't think you were asking how to perform an FFT.
But you should use all your data samples as input to the FFT, and draw a spectrum using all your output data.
Basically, your sampling rate is 44100 Hz, so the maximum frequency you can calculate without aliasing is half the sampling rate, i.e. 22050 Hz. With 2205 input samples, only the first 1103 output bins (up to that Nyquist limit) are unique for real input, and they are spaced 44100/2205 = 20 Hz apart.
For drawing the spectrum you need the frequency of each bin and the corresponding magnitude (not just the real part) of the FFT output.
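As a minimal sketch of that (assuming NumPy, and that the two channels have been mixed down to a mono float array), each 50 ms chunk could be analysed like this; the normalisation is one common convention, not the only one:

import numpy as np

sample_rate = 44100
chunk_size = 2205  # 50 ms at 44100 Hz

def magnitude_spectrum(chunk):
    """Return (frequencies, magnitudes) for one 2205-sample mono chunk."""
    spectrum = np.fft.rfft(chunk)        # 1103 unique bins for real input
    freqs = np.fft.rfftfreq(chunk_size, d=1.0 / sample_rate)  # 20 Hz apart
    mags = 2.0 * np.abs(spectrum) / chunk_size  # one common normalisation
    return freqs, mags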
I need to create a piece of software that can capture sound (from a NOAA satellite with an RTL-SDR). The problem is not capturing the sound; the problem is how to convert the audio or waves into an image. I have read about many things, the Fast Fourier Transform, the Hilbert transform, etc., but I don't know how.
If you can give me an idea it would be fantastic. Thank you!
Over the past year I have been writing code which makes FFT calls and have amassed 15 pages of notes, so the topic is vast; however, I can boil it down.
Open up your WAV file ... parse the 44-byte header and note the given bit depth and endianness attributes ... then read across the payload which is everything after that header ... understand the notions of bit depth and endianness ... typically a WAV file has a bit depth of 16 bits so each point on the audio curve is stored across two bytes ... typically a WAV file is little endian not big endian ... knowing what that means you take the next two bytes, shift the second (high order) byte left by 8 bits (if little endian), then bit OR the pair into a signed 16-bit integer which varies from -32768 to +32767 ... divide that by 32768.0 to get its floating point equivalent so your audio curve points now vary from -1 to +1 ... do that conversion for each pair of bytes across your payload buffer
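As a sketch of that parsing step (hedged: this leans on Python's standard wave module to parse the header rather than doing the byte shifting by hand, and assumes 16-bit little-endian PCM; the file name is a placeholder):

import wave
import numpy as np

with wave.open("input.wav", "rb") as wf:           # placeholder file name
    n_channels = wf.getnchannels()
    assert wf.getsampwidth() == 2                  # 16 bits = two bytes per sample
    payload = wf.readframes(wf.getnframes())       # everything after the header

samples = np.frombuffer(payload, dtype="<i2")      # little-endian signed 16-bit
if n_channels == 2:
    samples = samples.reshape(-1, 2).mean(axis=1)  # mix stereo down to mono
pcm = samples / 32768.0                            # audio curve now -1.0 .. +1.0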
Once you have the WAV audio curve as a buffer of floats, which is called raw audio or PCM audio, then perform your FFT api call ... all languages have such libraries ... the output of the FFT call will be a set of complex numbers ... pay attention to the notion of the Nyquist Limit ... it will influence how you make use of the output of your FFT call
Now you have a collection of complex numbers ... the indices 0 to N of that collection correspond to frequency bins ... the size of the PCM buffer you send to the FFT api call determines how granular your frequency bins are: bin spacing = sample_rate / number_of_samples ... in general, the more samples in the input buffer, the finer the granularity of the output frequency bins ... essentially this means that as you walk across this collection of complex numbers, each index increments the frequency assigned to that index by one bin spacing
To visualize this just feed it into a 2D plot where the X axis is frequency and the Y axis is magnitude ... calculate the magnitude of each complex number using
curr_mag = 2.0 * math.sqrt(curr_real*curr_real + curr_imag*curr_imag) / number_of_samples
For simplicity we will sweep under the carpet the phase shift information available to you in your complex number buffer
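Putting the FFT call, the frequency bins and the magnitude formula together, a minimal sketch (assuming NumPy and matplotlib; pcm is the float buffer from the parsing sketch above, stubbed here so the snippet runs on its own):

import numpy as np
import matplotlib.pyplot as plt

fs = 44100
pcm = np.random.randn(fs)               # placeholder; use your parsed buffer
n = len(pcm)
spectrum = np.fft.rfft(pcm)             # complex bins from 0 Hz up to the Nyquist Limit
freqs = np.fft.rfftfreq(n, d=1.0 / fs)  # bin spacing = sample_rate / number_of_samples
mags = 2.0 * np.abs(spectrum) / n       # the magnitude formula above

plt.plot(freqs, mags)                   # X axis frequency, Y axis magnitude
plt.xlabel("frequency (Hz)")
plt.ylabel("magnitude")
plt.show()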
This only scratches the surface of what you need to master to properly render a WAV file into a 2D plot of its frequency domain representation ... there are libraries which perform parts or all of this however now you can appreciate some of the magic involved when the rubber hits the road
A great explanation of trade offs between frequency resolution and number of audio samples fed into your call to an FFT api https://electronics.stackexchange.com/questions/12407/what-is-the-relation-between-fft-length-and-frequency-resolution
Do yourself a favor and check out https://www.sonicvisualiser.org/ which is one of many audio workstations that can perform what I described above. Just hit menu File -> Open -> choose a local WAV file, then Layer -> Add Spectrogram ... and it will render the visual representation of the Fourier transform of your input audio file.
I am trying to use the librosa library to compute the MFCC of my time series. The time series comes directly from data collected by a device at a sampling rate of 50 Hz.
Could someone help clarify what values I could use for n_fft, hop_length, win_length and window, and what they mean?
Thanks in advance
MFCC is based on the short-time Fourier transform (STFT); n_fft, hop_length, win_length and window are the parameters of the STFT.
The STFT divides a longer time signal into shorter segments of equal length and then computes the Fourier transform separately on each segment. The Fourier transform maps the signal from the time domain to the frequency domain.
n_fft is the length in samples of each segment that is Fourier transformed, which sets the number of frequency bins (n_fft/2 + 1 for real input). Its value depends on the type of the signal and relates to the sampling rate; it is typically a power of two. In your case it's hard to say what value is appropriate, since I don't know what the signal is. hop_length is the number of samples between the starts of two consecutive segments (so the overlap is win_length - hop_length); it is typically chosen to be 1/2 or 1/4 of n_fft. window is the window function applied to each segment before the transform. If you are not familiar with signal processing, you can leave it at its default (a Hann window).
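As a hedged sketch of a call (the parameter values are illustrative guesses for a 50 Hz signal, not recommendations; note that librosa's defaults assume audio sample rates, so sr=50 must be passed explicitly and n_mels reduced, otherwise the mel filterbank, whose default fmax is sr/2 = 25 Hz, can end up with empty filters):

import numpy as np
import librosa

sr = 50                                           # device sampling rate (Hz)
y = np.random.randn(sr * 60).astype(np.float32)   # placeholder: 60 s of data

mfcc = librosa.feature.mfcc(
    y=y, sr=sr,
    n_mfcc=13,        # number of coefficients to keep
    n_fft=128,        # 128 samples = 2.56 s per segment at 50 Hz
    hop_length=64,    # advance half a segment per frame (50% overlap)
    win_length=128,   # window spans the whole segment (the default)
    window="hann",    # the default window
    n_mels=20,        # fewer mel bands, since only 0-25 Hz is available
)
print(mfcc.shape)     # (13, number_of_frames)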
I am currently running Python's NumPy fft on 44100 Hz audio samples, which gives me a working frequency range of 0 Hz to 22050 Hz (thanks, Nyquist). Once I use fft on those time domain values, I have 128 points in my fft spectrum, giving me 172 Hz for each frequency bin.
I would like to tighten the frequency bins to 86 Hz while still keeping only 128 fft points, instead of increasing my fft count to 256 through an adjustment to how I'm creating my samples.
The question I have is whether or not this is theoretically possible. My thought would be to run the fft on values between 0 Hz and 11025 Hz only; I don't care about anything above that anyway. This would cut my working spectrum in half and put my frequency bins at 86 Hz while keeping my 128 spectrum bins. Perhaps this can be accomplished via a window function in the time domain?
Currently the code I'm using to create my samples and then convert to fft is:
import numpy as np
import pyaudio

sample_rate = 44100
chunk = 128
record_seconds = 2
audio = pyaudio.PyAudio()
stream = audio.open(format=pyaudio.paInt16, channels=1,
                    rate=sample_rate, input=True, frames_per_buffer=6300)
sample_list = []
for i in range(0, int(sample_rate / chunk * record_seconds)):
    data = stream.read(chunk)
    # np.fromstring is deprecated; frombuffer reads the same int16 samples
    sample_list.append(np.frombuffer(data, dtype=np.int16))
### then later ###:
for samp in sample_list:
    samp_fft = np.fft.fft(samp) ...
I hope I worded this clearly enough. Let me know if I need to adjust my explanation or terminology.
What you are asking for is not possible. As you mentioned in a comment you require a short time window. I assume this is because you're trying to detect when a signal arrives at a certain frequency (as I've answered your earlier question on the subject) and you want the detection to be time sensitive. However, it seems your bin size is too large for your requirements.
There are only two ways to decrease the bin size: 1) increase the length of the FFT, but unfortunately this also means it will take longer to acquire the data; or 2) lower the sample rate (either by sample rate conversion or at the hardware level), but since the samples then arrive more slowly it will also take longer to acquire the data.
I'm going to suggest a 3rd option (which, from what I've gleaned from this and your other questions, is possibly a better solution): perform the frequency detection in the time domain. What this would require is a time-domain bandpass filter followed by an RMS meter. Implementation-wise this would be one or more biquad filters, which you could implement in Python; there are probably implementations already available. The tricky part would be designing the filter, but I'd be happy to help you in chat. The RMS meter basically takes the square root of the mean of the squares of the output samples from the filter.
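A sketch of that time-domain approach (hedged: this uses SciPy's Butterworth design in second-order sections, i.e. cascaded biquads, and the band edges and order are illustrative, not a worked filter design):

import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
low, high = 950.0, 1050.0          # hypothetical band around a 1 kHz target
sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")

def band_rms(samples):
    """Bandpass the samples, then return the RMS of the filter output."""
    filtered = sosfilt(sos, samples)
    return np.sqrt(np.mean(filtered ** 2))

# A tone inside the band should read far higher than one outside it:
t = np.arange(fs) / fs
print(band_rms(np.sin(2 * np.pi * 1000 * t)))   # in band: large RMS
print(band_rms(np.sin(2 * np.pi * 3000 * t)))   # out of band: near zero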
Doubling the size of the FFT would be the obvious thing to do, but if there is a good reason why you can't do this then consider 2x downsampling prior to the FFT to get the effective sample rate down to 22050 Hz:
- Apply low pass filter with cut off at 11 kHz
- Discard every other sample from filtered output
- Apply FFT to down-sampled data
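A sketch of those three steps (hedged: scipy.signal.decimate bundles the anti-aliasing low-pass filter and the discarding of every other sample into one call):

import numpy as np
from scipy.signal import decimate

fs = 44100
samples = np.random.randn(fs)      # placeholder: 1 s of input at 44100 Hz

downsampled = decimate(samples, 2) # steps 1 and 2: filter, keep every other sample
chunk = downsampled[:128]          # effective sample rate is now 22050 Hz
spectrum = np.fft.fft(chunk)       # step 3
# Bin spacing is effective_sample_rate / fft_length, so halving the sample
# rate halves the bin spacing for the same FFT length.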
If you are not trying to resolve between close adjacent frequency peaks or noise, then, to halve the frequency bin spacing, you can zero-pad your data to double the FFT length, without having to wait for more data. Then, if you only want the lower half of the frequency range 0..Fs/2, just throw away the middle half of the FFT result vector (which is usually far more efficient than trying to compute the lower half of the frequency range via non-FFT means).
Note that zero-padding gives the same result as high-quality interpolation (as in smoothing a plot of the original FFT result points). It does not increase peak separation resolution, but might make it easier to pick out more precise peak locations in the plot if the noise level is low enough.
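A sketch of the zero-padding idea (hedged: NumPy zero-pads implicitly when the requested FFT length n exceeds the input length; rfft is used here, so "throwing away the middle half" becomes keeping the lower half of the unique bins):

import numpy as np

fs = 44100
chunk = np.random.randn(128)                 # the original 128 samples

padded_spectrum = np.fft.rfft(chunk, n=256)  # zero-pads 128 -> 256 samples
freqs = np.fft.rfftfreq(256, d=1.0 / fs)     # bin spacing halved to fs/256

keep = freqs <= fs / 4                       # only the range 0 .. Fs/4
lower_spectrum = padded_spectrum[keep]       # finer spacing, same input data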
I'd like to build an audio visualizer display using LED strips to be used at parties. Building the display and programming the rendering engine is fairly straightforward, but I don't have any experience in signal processing, aside from rendering PCM samples.
The primary feature I'd like to implement would be animation driven by audible frequency. To keep things super simple and get the hang of it, I'd like to start by simply rendering a color according to audible frequency of the input signal (e.g. the highest audible frequency would be rendered as white).
I understand that reading input samples as PCM gives me the amplitude of air pressure (intensity) with respect to time and that using a Fourier transform outputs the signal as intensity with respect to frequency. But from there I'm lost as to how to resolve the actual frequency.
Would the numeric frequency need to be resolved as the inverse of the Fourier transform (e.g. the intensity is the argument and the frequency is the result)?
I understand there are different types of Fourier transforms that are suitable for different purposes. Which is useful for such an application?
You can transform the samples from the time domain to the frequency domain using the DFT or FFT. The output is a set of frequencies and their intensities; you actually get a whole set of frequencies, not just one. The LED strips can be lit based on that. See DFT spectrum tracer.
"The frequency", as in a single numeric audio frequency spectrum value, does not exist for almost all sounds. That's why an FFT gives you all N/2 frequency bins of the full audio spectrum, up to half the sample rate, with a resolution determined by the length of the FFT.
I've seen the various FFT questions on here, but I'm confused on part of the implementation. Instead of performing the FFT in real time, I want to do it offline. Let's say I have the raw data in float[] audio. The sampling rate is 44100, so audio[0] to audio[44099] will contain 1 second's worth of audio. If my FFT function handles the windowing (e.g. Hanning), do I simply put the entire audio buffer into the function in one go? Or do I have to cut the audio into chunks of 4096 (my window size) and then input those into the FFT, which will then apply the windowing function on top?
You may need to copy your input data to a separate buffer and get it in the correct format, e.g. if your FFT is in-place, or if it requires interleaved complex data (real/imaginary). However if your FFT routine can take a purely real input and is not in-place (i.e. non-destructive) then you may just be able to pass a pointer to the original sample data, along with an appropriate size parameter.
Typically for 1s of audio, e.g. speech or music, you would pick an FFT size which corresponds to a reasonably stationary chunk of audio, e.g. 10 ms or 20 ms. So at 44.1 kHz your FFT size might be say 512 or 1024. You would then generate successive spectra by advancing through your buffer and doing a new FFT at each starting point. Note that it's common practice to overlap these successive buffers, typically by 50%. So if N = 1024 your first FFT would be for samples 0..1023, your second would be for samples 512..1535, then 1024..2047, etc.
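A sketch of that overlapped analysis (assuming NumPy; N = 1024, Hann window, 50% overlap as described):

import numpy as np

fs = 44100
audio = np.random.randn(fs)          # placeholder for the 1 s float buffer
n = 1024
hop = n // 2                         # 50% overlap: advance 512 samples
window = np.hanning(n)

spectra = []
for start in range(0, len(audio) - n + 1, hop):
    frame = audio[start:start + n] * window   # window first, then transform
    spectra.append(np.fft.rfft(frame))

spectra = np.array(spectra)          # shape: (number_of_frames, n//2 + 1)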
The choice of whether to calculate one FFT over the entire data set (in the OP's case, 44100 samples representing 1-second of data), or whether to do a series of FFT's over smaller subsets of the full data set, depends on the data, and on the intended purpose of the FFT.
If the data is relatively static spectrally over the full data set, then one FFT over the entire data set is probably all that's needed.
However, if the data is spectrally dynamic over the data set, then multiple sliding FFT's over small subsets of the data would create a more accurate time-frequency representation of the data.
The plot below shows the power spectrum of an acoustic guitar playing an A4 note. The audio signal was sampled at 44.1 kHz and the data set contains 131072 samples, almost 3 seconds of data. This data set was pre-multiplied with a Hann window function.
The plot below shows the power spectrum of a subset of 16384 samples (0 to 16383) taken from the full data set of the acoustic guitar A4 note. This subset was also pre-multiplied with a Hann window function.
Notice how the spectral energy distribution of the subset is significantly different from the spectral energy distribution of the full data set.
If we were to extract subsets from the full data set, using a sliding 16384 sample frame, and calculate the power spectrum of each frame, we would create an accurate time-frequency picture of the full data set.
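A sketch of that sliding-frame analysis (hedged: scipy.signal.spectrogram is one packaged way to do it; the Hann window and 16384-sample frame follow the description above, while the 50% overlap is an assumption):

import numpy as np
from scipy.signal import spectrogram

fs = 44100
audio = np.random.randn(131072)      # placeholder for the guitar recording

f, t, power = spectrogram(audio, fs=fs, window="hann",
                          nperseg=16384, noverlap=8192)
# power[i, j] is the power in frequency bin f[i] Hz during frame t[j] s:
# the time-frequency picture of the full data set.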
References:
Real audio signal data, Hann window function, plots, FFT, and spectral analysis were done here:
Fast Fourier Transform, spectral analysis, Hann window function, audio data
The chunk size or window length you pick controls the frequency resolution and the time resolution of the FFT result. You have to determine which you want or what trade-off to make.
Longer windows give you better frequency resolution, but worse time resolution. Shorter windows, vice versa. Each FFT result bin will contain a frequency bandwidth of roughly 1 to 2 times the sample rate divided by the FFT length, depending on the window shape (rectangular, von Hann, etc.), not just one single frequency. If your entire data chunk is stationary (frequency content doesn't change), then you may not need any time resolution, and can go for 1 to 2 Hz frequency "resolution" in your 1 second of data. Averaging multiple short FFT windows might also help reduce the variance of your spectral estimations.
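A sketch of that averaging idea (hedged: scipy.signal.welch averages the periodograms of overlapping windowed segments, trading frequency resolution for lower variance exactly as described):

import numpy as np
from scipy.signal import welch

fs = 44100
audio = np.random.randn(fs)          # placeholder: 1 s of samples

# One long window: ~1 Hz bins, but a single noisy estimate per bin.
f_long, p_long = welch(audio, fs=fs, nperseg=fs, noverlap=0)

# Many short windows: ~43 Hz bins (44100 / 1024), averaged for lower variance.
f_short, p_short = welch(audio, fs=fs, nperseg=1024)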