"Winamp style" spectrum analyzer - audio

I have a program that plots the spectrum analysis (Amp/Freq) of a signal, which is pretty much the DFT converted to polar form. However, this is not exactly the sort of graph that, say, Winamp (right at the top-left corner), or effectively any other audio software, plots. I am not really sure what this sort of graph is called (if it has a distinct name at all), so I am not sure what to look for.
I am pretty positive about the frequency axis being base-two exponential; the amplitude axis puzzles me, though.
Any pointers?

Actually an interesting question. I know what you are saying; the frequency axis is certainly logarithmic. But what about the amplitude? In response to another poster, the amplitude can't simply be in units of dB alone, because dB has no concept of zero. This introduces the idea of quantization error, SNR, and dynamic range.
Assume that the received digitized (i.e., discrete time and discrete amplitude) time-domain signal, x[n], is equal to s[n] + e[n], where s[n] is the transmitted discrete-time signal (i.e., continuous amplitude) and e[n] is the quantization error. Suppose x[n] is represented with b bits, and for simplicity, takes values in [0,1). Then the maximum peak-to-peak amplitude of e[n] is one quantization level, i.e., 2^{-b}.
The dynamic range is defined, in decibels, as 20 log10 ((max peak-to-peak |s[n]|) / (max peak-to-peak |e[n]|)) = 20 log10 (1 / 2^{-b}) = 20b log10 2 ≈ 6.02b dB. For 16-bit audio, the dynamic range is about 96 dB. For 8-bit audio, it is about 48 dB.
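A quick empirical sanity check of the 6.02b dB rule (a NumPy sketch, not a rigorous SNR measurement; the uniform-noise "signal" here is just illustrative):

    import numpy as np

    b = 16
    s = np.random.uniform(0.0, 1.0, 100000)       # "continuous"-amplitude signal in [0, 1)
    x = np.floor(s * 2**b) / 2**b                 # quantize to b bits
    e = x - s                                     # quantization error, one level peak-to-peak
    print(20 * np.log10(np.ptp(s) / np.ptp(e)))   # close to 6.02 * 16 = 96.3 dB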
So how might Winamp plot amplitude? My guesses:
- The minimum amplitude is assumed to be -6.02b dB, and the maximum amplitude is 0 dB. Visually, Winamp draws the window with these thresholds in mind.
- Another nonlinear map, such as log(1+X), is used. This function is always nonnegative (for X >= 0), and when X is large, it approximates log(X).
Any other experts out there who know? Let me know what you think. I'm interested, too, in exactly how this is implemented.
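For what it's worth, here is a sketch of guess #1 (an assumption about the approach, not Winamp's actual code; the function name and pixel scaling are invented for illustration):

    import numpy as np

    def magnitude_to_bar_height(mag, bar_max=16, floor_db=-96.0):
        db = 20 * np.log10(np.maximum(mag, 1e-12))   # avoid log(0)
        db = np.clip(db, floor_db, 0.0)              # assume 0 dB = full scale
        return (1.0 - db / floor_db) * bar_max       # floor_db -> 0 px, 0 dB -> bar_max px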

To generate a power spectrum you need to do the following steps:
apply window function to time domain data (e.g. Hanning window)
compute FFT
calculate log of FFT bin magnitudes for N/2 points of FFT (typically 10 * log10(re * re + im * im))
This gives log magnitude (i.e. dB) versus linear frequency.
If you also want a log frequency scale then you will need to accumulate the magnitude from appropriate ranges of bins (and you will need a fairly large FFT to start with).
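A minimal NumPy sketch of these steps, assuming a mono float frame (the names and the 32-band count are illustrative):

    import numpy as np

    def bin_power(frame):
        windowed = frame * np.hanning(len(frame))    # 1. apply window function
        spectrum = np.fft.rfft(windowed)             # 2. compute FFT (N/2+1 bins)
        return spectrum.real**2 + spectrum.imag**2   # re*re + im*im per bin

    def to_db(power):
        return 10 * np.log10(power + 1e-12)          # 3. log magnitude in dB

    # For a log frequency axis, sum bin power over log-spaced bin ranges
    # before converting to dB:
    def log_bands_db(power, n_bands=32):
        edges = np.unique(np.geomspace(1, len(power), n_bands + 1).astype(int))
        return to_db(np.array([power[lo:hi].sum()
                               for lo, hi in zip(edges[:-1], edges[1:])]))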

Well, I'm not 100% sure what you mean, but surely it's just bucketing the data from an FFT?
If you want to get the data such that you have (for a 44 kHz file) frequency points at 22 kHz, 11 kHz, 5.5 kHz, etc., then you could use a wavelet decomposition, I guess ...
This thread may help ya a bit ...
Converting an FFT to a spectogram
Same sort of information as a spectrogram I'd guess ...

What you need is a power spectrum graph. You have to compute the DFT of your signal's current window, then square the magnitude of each bin.

Related

What does a sample of audio data represent?

I want to know what a single sample of audio data (uncompressed PCM) represents.
It is a number, but what exactly is that number and how come it can be converted back to audio?
For example if it is a 4-bit sample, does 0 represent absolute silence and 15 represent max volume?
If it is volume, what frequency are we talking about? How is the information about the frequency stored?
In songs we can hear various instruments (frequencies) at the same time, meaning each frequency is somehow stored in a single sample. How is that done?
Audio is just a curve which wobbles up/down with time going left/right. At a given point in time, a Sample is a measure of the curve height. Silence is when the curve does not wobble ... it just goes flatline ... at value zero with a Sample value of 0 (more accurately, the middle value of its range from max to min) ... when the curve reaches its maximum height up or down, that stretch of audio is the loudest possible
The notion of normalization is important ... the absolute range of curve values (maximum up or down) is arbitrary ... could be anything ... let's say max is 15 and minimum is 0 ... remember silence is no wobble, so silence would be about 7, the middle of the max up/down range
Curves can be encoded into any number of bits ... this roughly maps into how many horizontal lines you dice the curve into ... more bits means more lines, so greater accuracy in the value of your Sample of curve height
A sine or cosine curve is considered a pure tone ... Joseph Fourier proved an arbitrary curve (audio or otherwise) can be stored in the form of a set of sine curves of (A) various volumes (max up/down), (B) various frequencies, and (C) various phase offsets ... interestingly, this transformation works in either direction: from a curve of arbitrary shape into a set of the above (A/B/C), or from a set of (A/B/C) back into synthesizing a curve of arbitrary shape (this is how audio synthesizers work)
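A small NumPy sketch of that A/B/C idea, synthesizing one second of audio from three arbitrary example partials:

    import numpy as np

    rate = 44100
    t = np.arange(rate) / rate
    partials = [(1.0, 220.0, 0.0),        # (A) volume, (B) frequency Hz, (C) phase radians
                (0.5, 440.0, np.pi / 4),
                (0.25, 660.0, np.pi / 2)]
    curve = sum(a * np.sin(2 * np.pi * f * t + ph) for a, f, ph in partials)
    curve /= np.max(np.abs(curve))        # normalize so the biggest wobble is full scale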
Information about frequency storage is baked into the curve shape ... it's all about how often the curve wobbles up/down ... lazy wobbles taking a long time to cross from below to above the middle line are low frequency ... a stretch of tightly spaced squiggles implies a high-frequency squawk
When a microphone records multiple people all talking at once, or various instruments all emitting their own sounds, we have many simultaneous frequencies, yet the recording somehow just works - how? Think of what happens inside the microphone (or to your flat eardrum) ... its diaphragm can be considered a flat surface (a 2D surface) which can only get sloshed up or down, period ... it only moves back and forth ... this is an arbitrary curve ... one curve which at a point in time has a single value of its height as it progresses between max and min

(p5.js) FFT reports lower frequencies "too loud" and higher frequencies "mute"?

I have been experimenting with simple FFT using p5 sound and then plotting the bands of the spectrum visually.
One thing I noticed is that the lower frequencies appear very high in almost all tracks, while the high frequencies seem to be mute.
So, for instance, when doing an FFT with only 16 bands, most of the sound happens in only the first 4 bands, and the other frequencies (the higher ones) are reported to be "muted" or just too quiet.
You can see this in this example, for instance: http://p5js.org/reference/#/p5.FFT where even with relatively high frequencies the right side of the spectrum stays totally down; the lower frequencies are reported to be the highest, even though what you hear is more of a middle/higher-pitch kind of sound.
It seems that some sort of transformation has to be applied to the FFT result in order to have a visual representation that better matches what we are hearing?
Am I missing something? I'm surely missing some basic information about how the FFT works and how the frequencies are reported, but is this a common problem with a common solution?
The human auditory system is fundamentally logarithmic, base-2, in nature - each octave has twice the bandwidth of the one below it. As a consequence of this, the vast majority of the frequency content of human-perceivable sound is below 1 kHz, and signal power is spread more thinly between FFT bins at higher frequencies - which is precisely what your graph shows.
Spectrograms - which is what I suspect you're expecting to see here - are plotted with log(F) on the x-axis and signal power in dB on the y-axis. Your code draws a graph with both axes linear.
In addition, because you are not specifically applying a window function to the samples used to calculate the FFT, what you get by default is the rectangular window - very far from a good choice in this application.
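A sketch of both fixes, in NumPy rather than p5.js (the function name is invented; p5's own API differs):

    import numpy as np

    def log_log_spectrum(frame, rate):
        spectrum = np.fft.rfft(frame * np.hanning(len(frame)))  # Hann, not rectangular
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / rate)
        db = 20 * np.log10(np.abs(spectrum) + 1e-12)            # dB for the y-axis
        return np.log10(freqs[1:]), db[1:]                      # log(F) x-axis, skip the DC bin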

Reducing an FFT spectrum range

I am currently running Python's NumPy FFT on 44100 Hz audio samples, which gives me a working frequency range of 0 Hz to 22050 Hz (thanks, Nyquist). Once I use the FFT on those time-domain values, I have 128 points in my FFT spectrum, giving me 172 Hz for each frequency bin size.
I would like to tighten the frequency bin to 86 Hz and still keep to only 128 FFT points, instead of increasing my FFT count to 256 through an adjustment in how I'm creating my samples.
The question I have is whether or not this is theoretically possible. My thought would be to run the FFT on Hz values between 0 Hz and 11025 Hz only. I don't care about anything above that anyway. This would cut my working spectrum in half and put my frequency bins at 86 Hz while keeping to my 128 spectrum bins. Perhaps this can be accomplished via a window function in the time domain?
Currently the code I'm using to create my samples and then convert to fft is:
    import numpy as np
    import pyaudio

    sample_rate = 44100
    chunk = 128
    record_seconds = 2
    audio = pyaudio.PyAudio()  # was self.audio inside the original class
    stream = audio.open(format=pyaudio.paInt16, channels=1,
                        rate=sample_rate, input=True, frames_per_buffer=6300)
    sample_list = []
    for i in range(int(sample_rate / chunk * record_seconds)):
        data = stream.read(chunk)
        # np.fromstring is deprecated for binary input; np.frombuffer is the
        # drop-in replacement
        sample_list.append(np.frombuffer(data, dtype=np.int16))
    ### then later ###
    for samp in sample_list:
        samp_fft = np.fft.fft(samp)
        ...
I hope I worded this clearly enough. Let me know if I need to adjust my explanation or terminology.
What you are asking for is not possible. As you mentioned in a comment you require a short time window. I assume this is because you're trying to detect when a signal arrives at a certain frequency (as I've answered your earlier question on the subject) and you want the detection to be time sensitive. However, it seems your bin size is too large for your requirements.
There are only two ways to decrease the bin size: 1) Increase the length of the FFT. Unfortunately, this also means that it will take longer to acquire the data. 2) Lower the sample rate (either by sample rate conversion or at the hardware level); but since the samples arrive more slowly, it will also take longer to acquire the data.
I'm going to suggest a 3rd option (from what I've gleaned from this and your other questions, possibly a better solution), which is: perform the frequency detection in the time domain. What this would require is a time-domain bandpass filter followed by an RMS meter. Implementation-wise, this would be one or more biquad filters that you could implement in Python - there are probably implementations already available. The tricky part would be designing the filter, but I'd be happy to help you in chat. The RMS meter basically takes the square root of the mean of the squares of the output samples from the filter.
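A sketch of that suggestion using SciPy's second-order sections (biquads); the center frequency, bandwidth, and filter order here are placeholder values:

    import numpy as np
    from scipy.signal import butter, sosfilt

    def band_rms(samples, rate=44100, lo=950.0, hi=1050.0):
        sos = butter(4, [lo, hi], btype='bandpass', fs=rate, output='sos')
        filtered = sosfilt(sos, samples)        # time-domain bandpass (cascaded biquads)
        return np.sqrt(np.mean(filtered**2))    # RMS meter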
Doubling the size of the FFT would be the obvious thing to do, but if there is a good reason why you can't do this, then consider 2x downsampling prior to the FFT to get the effective sample rate down to 22050 Hz (a sketch follows these steps):
- Apply low pass filter with cut off at 11 kHz
- Discard every other sample from filtered output
- Apply FFT to down-sampled data
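A sketch of those three steps; scipy.signal.decimate applies an anti-aliasing low-pass filter and discards every other sample in one call:

    import numpy as np
    from scipy.signal import decimate

    def downsampled_fft(samples, factor=2):
        down = decimate(samples.astype(float), factor)   # steps 1 and 2
        return np.fft.fft(down)                          # step 3: FFT at the new Fs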
If you are not trying to resolve between close adjacent frequency peaks or noise, then, to halve the frequency bin spacing, you can zero-pad your data to double the FFT length without having to wait for more data. Then, if you only want the lower half of the frequency range 0..Fs/2, just throw away the middle half of the FFT result vector (which is usually far more efficient than trying to compute the lower half of the frequency range via non-FFT means).
Note that zero-padding gives the same result as high-quality interpolation (as in smoothing a plot of the original FFT result points). It does not increase peak separation resolution, but might make it easier to pick out more precise peak locations in the plot if the noise level is low enough.
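A sketch of the zero-padding trick (using the real-input FFT, so "throwing away the middle half" becomes keeping the first quarter of the bins):

    import numpy as np

    def padded_lower_half(samples):
        n = len(samples)
        spectrum = np.fft.rfft(samples, n=2 * n)   # zero-pads to 2n internally
        return spectrum[: n // 2 + 1]              # bins spaced Fs/(2n), covering 0..Fs/4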

Measure audio noise level

I'm trying to get a qualitative handle on the amount of static or noise present in an audio stream. The normal content of the stream is voice or music.
I've been experimenting with taking the stddev of the samples, and that does give me some handle on the presence of voice vs. empty-channel noise (i.e., a high stddev usually indicates voice or music).
Was wondering if anyone else had some pointers on this.
Doesn't the peak value give you the answer? If you're looking at a signal from a good ADC, the ambient level should be in the 1's or 10's of counts, while voice or music will get up into the thousands of counts. Is there some kind of automatic gain control that makes this strategy not work?
If you need something more complex, the peak to RMS ratio might be a bit more reliable than simply RMS level (RMS = stddev). Pure noise will have a ratio of around 3-5, while sinusoids, for instance, have a peak to RMS ratio of 1.4. However, you can get more discrimination by looking at the spectrum of the signal. Static is usually spectrally smooth or even flat, while voice and music are spectrally structured. So a Fourier transform might be what you're looking for. Assuming a signal x that contains, say 0.5 seconds worth of data, here's some Matlab code:
Sx = fft(x .* hann(length(x), 'periodic'))
The HANN function applies a Hann window to reduce spectral leakage, while the FFT function quickly calculates the Fourier transform. Now you have a couple of choices. If you want to determine whether the signal x consists of static or voice/music, take the peak to RMS ratio of the spectrum:
pk2rms = max(abs(Sx))/sqrt(sum(abs(Sx).^2)/length(Sx))
I'd expect pure static to have a peak to RMS ratio around 3-5 (again), while voice/music would be at least an order of magnitude higher. This takes advantage of the fact that pure white noise has the same "structure" in time and frequency domains.
If you want to get a numerical estimate of the noise level, you can calculate the power in Sx over time, using an average:
Gxx = ((k-1)*Gxx + Sx.*conj(Sx))/k
Over time, the peaks in Gxx should come and go, but you should see a constant minimum value corresponding to the noise floor. In general, audio spectra are easier to look at on a dB (log vertical) scale.
Some notes:
1. I picked 0.5 seconds for the length of x, but I'm not sure what an optimal value here is. If you pick a value that's too short, x will not have much structure. In that case, the DC component of the signal will have a lot of energy. I expect you can still use the peak to RMS discriminator, though, if you first toss out the bin in Sx corresponding to DC.
2. I'm not sure what a good value for k is, but that equation corresponds to exponential averaging. You can probably experiment with k to figure out an optimal value. This might work best with a short x.
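A Python sketch of the averaging idea (the use of a running average over all frames, rather than a fixed-k exponential average, is my choice here):

    import numpy as np

    def averaged_power_spectrum(frames):
        Gxx = None
        for k, x in enumerate(frames, start=1):
            Sx = np.fft.rfft(x * np.hanning(len(x)))
            Pxx = (Sx * np.conj(Sx)).real
            Gxx = Pxx if Gxx is None else ((k - 1) * Gxx + Pxx) / k
        return Gxx

    # noise_floor_db = 10 * np.log10(averaged_power_spectrum(frames).min())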
There are different kinds of noise: white, pink, brown. Noise can come from many places. Is a 60 Hz hum noise or signal?
For white noise, I'd look at the fft and find the lowest value to see what your noise floor is.

Identifying common periodic waveforms (square, sine, sawtooth, ...)

Without any user interaction, how would a program identify what type of waveform is present in a recording from an ADC?
For the sake of this question: triangle, square, sine, half-sine, or sawtooth waves of constant frequency. Level and frequency are arbitrary, and they will have noise, small amounts of distortion, and other imperfections.
I'll propose a few (naive) ideas, too, and you can vote them up or down.
You definitely want to start by taking an autocorrelation to find the fundamental.
With that, take one period (approximately) of the waveform.
Now take a DFT of that signal, and immediately compensate for the phase shift of the first bin (the first bin being the fundamental, your task will be simpler if all phases are relative).
Now normalise all the bins so that the fundamental has unity gain.
Now compare and contrast the rest of the bins (representing the harmonics) against a set of pre-stored waveshapes that you're interested in testing for. Accept the closest, and reject overall if it fails to meet some threshold for accuracy determined by measurements of the noise floor.
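A rough sketch of this pipeline, comparing harmonic magnitudes only (it skips the phase-compensation step above); the reference spectra are the ideal harmonic ratios for each shape, not measured data:

    import numpy as np

    REFERENCE = {
        'sine':     np.array([1, 0, 0, 0, 0, 0, 0]),
        'square':   np.array([1, 0, 1/3, 0, 1/5, 0, 1/7]),        # odd harmonics, 1/n
        'sawtooth': np.array([1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7]),  # all harmonics, 1/n
        'triangle': np.array([1, 0, 1/9, 0, 1/25, 0, 1/49]),      # odd harmonics, 1/n^2
    }

    def classify(one_period):
        spectrum = np.abs(np.fft.rfft(one_period))
        harmonics = spectrum[1:8] / spectrum[1]   # bin 1 is the fundamental; normalize to unity
        return min(REFERENCE, key=lambda name:
                   np.sum((REFERENCE[name] - harmonics) ** 2))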
Do an FFT, find the odd and even harmonic peaks, and compare the rate at which they decrease to a library of common waveforms' harmonic ratios.
Perform an autocorrelation to find the fundamental frequency, measure the RMS level, find the first zero-crossing, and then try subtracting common waveforms at that frequency, phase, and level. Whichever cancels out the best (and more than some threshold) wins.
This answer presumes no noise and that this is a simple academic exercise.
In the time domain, take the sample-by-sample difference of the waveform and histogram the results. If the distribution has a sharply defined peak (mode) at zero, it is a square wave. If the distribution has a sharply defined peak at a positive value, it is a sawtooth. If the distribution has two sharply defined peaks, one negative and one positive, it is a triangle. If the distribution is broad and peaked at either side, it is a sine wave.
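A sketch of that difference-histogram test, under the answer's no-noise assumption; the peak-detection thresholds are arbitrary guesses:

    import numpy as np

    def classify_by_diff(samples, bins=50):
        d = np.diff(samples)
        hist, edges = np.histogram(d, bins=bins)
        centers = (edges[:-1] + edges[1:]) / 2
        strong = hist > 0.5 * hist.max()               # bins forming the peak(s)
        sharp = hist[strong].sum() > 0.9 * hist.sum()  # nearly all mass in the peak(s)
        peaks = centers[strong]
        if sharp and len(peaks) == 1:
            near_zero = abs(peaks[0]) < 1e-3 * np.abs(d).max()
            return 'square' if near_zero else 'sawtooth'
        if sharp and len(peaks) == 2 and peaks[0] < 0 < peaks[1]:
            return 'triangle'
        return 'sine'                                  # broad, peaked at either side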
arm yourself with more information...
I am assuming that you already know that a theoretically perfect sine wave has no harmonic partials (i.e., only a fundamental) ... but since you are going through an ADC, you can throw the idea of a theoretically perfect sine wave out the window ... you have to fight against aliasing and determine what are "real" partials and what are artifacts ... good luck.
the following information comes from this link about csound.
(*) A sawtooth wave contains (theoretically) an infinite number of harmonic partials, each in the ratio of the reciprocal of the partial number. Thus, the fundamental (1) has an amplitude of 1, the second partial 1/2, the third 1/3, and the nth 1/n.
(**) A square wave contains (theoretically) an infinite number of harmonic partials, but only odd-numbered harmonics (1, 3, 5, 7, ...). The amplitudes are in the ratio of the reciprocal of the partial number, just as with sawtooth waves. Thus, the fundamental (1) has an amplitude of 1, the third partial 1/3, the fifth 1/5, and the nth 1/n.
I think that all of these answers so far are quite bad (including my own previous...)
after having thought the problem through a bit more I would suggest the following:
1) Take a 1-second sample of the input signal (it doesn't need to be this big, but it simplifies a few things).
2) Over the entire second, count the zero crossings. Since each cycle crosses zero twice, half that count is the cps (cycles per second), i.e., the frequency of the oscillator (in case that's something you wanted to know).
3) Now take a smaller segment of the sample to work with: take precisely 7 zero-crossings' worth. (So your work buffer should now, if visualized, look like one of the graphical representations you posted with the original question.) Use this small work buffer to perform the following tests. (Normalizing the work buffer at this point could make life easier.)
4) Test for square wave: zero crossings for a square wave are always very large differences; look for a large signal delta followed by little to no movement until the next zero crossing.
5) Test for saw wave: similar to the square wave, but the large signal delta will be followed by a constant, linear signal delta.
6) Test for triangle wave: constant, linear (small) signal deltas. Find the peaks, divide by the distance between them, and calculate what the (ideal) triangle wave should look like; now test the actual signal for deviance. Set a deviance tolerance threshold and you can determine whether you are looking at a triangle or a sine (or something parabolic).
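A sketch of step 2, which also makes the divide-by-two explicit (each full cycle crosses zero twice):

    import numpy as np

    def freq_from_zero_crossings(samples, rate=44100):
        crossings = np.count_nonzero(np.diff(np.sign(samples)))  # sign changes
        return crossings / (2 * (len(samples) / rate))           # cycles per second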
First find the base frequency and the phase; you can do that with an FFT. Normalize the sample. Then subtract from each sample the corresponding sample of the waveform you want to test (same frequency and same phase). Square the result, add it all up, and divide by the number of samples. The candidate giving the smallest number is the waveform you seek.
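A sketch of that procedure, with SciPy's waveform generators supplying the candidates (the frequency and phase are assumed to be already detected):

    import numpy as np
    from scipy.signal import square, sawtooth

    def best_match(x, freq, phase, rate=44100):
        t = np.arange(len(x)) / rate
        arg = 2 * np.pi * freq * t + phase
        x = x / np.max(np.abs(x))                    # normalize the sample
        candidates = {
            'sine':     np.sin(arg),
            'square':   square(arg),
            'sawtooth': sawtooth(arg),
            'triangle': sawtooth(arg, width=0.5),
        }
        return min(candidates,                       # smallest mean squared residual wins
                   key=lambda k: np.mean((x - candidates[k]) ** 2))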
