Does "16bit integer PCM data" mean it's signed or unsigned? - audio

I'm using FMOD to develop an application that, when the user clicks the Next/Prev button, immediately starts playing the next/previous sentence from its exact beginning in an MP3 file containing speech (no music). I got the PCM data of the MP3 file by calling Sound::lock, but Sound::getFormat only told me it was "16bit integer PCM data", without saying whether it is signed or unsigned. How would I know that?
Some articles on the Internet say that almost all 16-bit integer PCM data is signed. If my PCM data is signed, which range of values represents silence: values close to 0 (e.g. -10 to 10), or values close to -32768 (e.g. -32768 to -32750)? If it's the values close to 0, does that mean there's no difference in meaning between opposite numbers like -32767 and 32767?
I need to detect silences which are long enough, e.g. longer than 500ms, to determine where each sentence in the speech begins.
Could anyone give me any suggestions on how to detect silence between sentences?

16-bit audio is, by convention, usually signed.
Think about what PCM audio is: each sample is how far along its axis the speaker cone should sit at that moment in time. Therefore perfect silence is absolutely any repeating value, since that represents the speaker not moving.
0 is then the centre of the range, and usually where a microphone sits with no input. -32768 is the speaker as close to one end of its travel as it can be; 32767 is it at the other end. So -32767 and 32767 are not the same thing: they describe equal displacements in opposite directions.
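For illustration, here is my own minimal sketch (assuming the locked buffer really does hold signed 16-bit samples, and with an arbitrary threshold) of what "near silence" means on raw samples:
#include <stdint.h>
#include <stdlib.h>

/* Treat a block of the locked buffer as signed 16-bit samples and report
   whether every sample stays within a small band around the centre (0). */
int is_near_silence(const int16_t *samples, size_t count, int threshold)
{
    for (size_t i = 0; i < count; i++) {
        if (abs(samples[i]) > threshold)
            return 0;   /* something audible: the cone is displaced */
    }
    return 1;           /* all samples hug 0, i.e. the cone barely moves */
}
Real recordings are rarely perfectly quiet, though, which is why the spectral approach described next is more robust.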
The safest way to detect silence would be to run a spectral analysis over the relevant range and look for periods where there is no activity in any audible frequency range.
If you're looking for pauses between speech then the easiest thing would probably be to go to an online FIR filter designer, plug in an acceptable frequency range for speech (it's considered to be around 300 Hz to around 3500 Hz in telephony), your sampling rate and however many multiplications you think you can afford, and copy the coefficients it supplies. E.g. assuming 37 taps across the speech range with a 44100 Hz input, converted to a C array I got:
double coefficients[] = {
-0.000560, -0.001290, -0.002332, -0.003606, -0.004911, -0.005921, -0.006201,
-0.005256, -0.002610, 0.002106, 0.009059, 0.018139, 0.028924, 0.040691, 0.052479,
0.063203, 0.071794, 0.077351, 0.079274, 0.077351, 0.071794, 0.063203, 0.052479,
0.040691, 0.028924, 0.018139, 0.009059, 0.002106, -0.002610, -0.005256, -0.006201,
-0.005921, -0.004911, -0.003606, -0.002332, -0.001290, -0.000560};
If the input were doubles, then for each input sample index c I'd compute a sampled value:
double *inputWave = /* ... input, an infinite array for the purposes of the example ... */;
double sampledValue = 0.0;
for (size_t coeff = 0; coeff < numberOfTaps; coeff++) {
    sampledValue += coefficients[coeff] * inputWave[c + coeff];
}
// (where numberOfTaps = sizeof(coefficients) / sizeof(coefficients[0]),
//  i.e. the number of coefficients: 37 with the array given above)
What I've then got is a bandpass filter: only the part of the signal representing sound in the frequency range 300–3500 Hz should remain in the output values. In real life no such filter is perfect; increase the number of coefficients to increase the quality of your filter.
Having cut the irrelevant parts of the signal, I could then look for prolonged periods where sampledValue stays [close to] 0.0.
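As a rough sketch of that last step (my own illustration, not part of the original answer; the threshold and the 44100 Hz rate are assumptions): run the filter across the buffer and look for runs of small filtered values at least 500 ms long.
#include <math.h>
#include <stddef.h>

// Returns the index of the first sample that starts a silent run of at
// least minRunSamples filtered samples, or (size_t)-1 if none is found.
// 'input' holds the PCM converted to doubles; 'length' is its sample count.
size_t find_silence(const double *input, size_t length,
                    const double *coefficients, size_t numberOfTaps,
                    double threshold, size_t minRunSamples)
{
    size_t runStart = 0, runLength = 0;
    for (size_t c = 0; c + numberOfTaps <= length; c++) {
        double sampledValue = 0.0;
        for (size_t coeff = 0; coeff < numberOfTaps; coeff++)
            sampledValue += coefficients[coeff] * input[c + coeff];

        if (fabs(sampledValue) < threshold) {
            if (runLength == 0)
                runStart = c;          // a quiet stretch begins here
            runLength++;
            if (runLength >= minRunSamples)
                return runStart;       // long enough: 500 ms at 44100 Hz is 22050 samples
        } else {
            runLength = 0;             // speech energy present, reset the run
        }
    }
    return (size_t)-1;
}
The threshold has to be tuned against the recording's noise floor; with 44100 Hz input, minRunSamples = 22050 corresponds to the 500 ms pause you're after.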

Surprisingly, if I create DirectSound sound buffers with an 8-bit format, DirectSound expects the samples to be 8-bit signed (-127 to 127) on my machine, while when I create a 16-bit buffer DirectSound expects them to be 16-bit unsigned (0 to 65535). So at least on my machine the standard seems to be the opposite of Tommy's answer.

Related

Detecting a specific pattern from a FFT in Arduino

I have FFT output from a microphone and I want to detect a specific animal's howl from it (it howls in a characteristic frequency spectrum). Is there any way to implement a pattern-recognition algorithm on an Arduino to do that?
I already have the FFT part working with 128 samples at a 2 kHz sampling rate.
Look up audio fingerprinting. Essentially you probe the frequency-domain output of the FFT call and take a snapshot of the range of frequencies together with the magnitude of each frequency, then compare this snapshot between the known animal signal and the unknown signal and output a measurement of those differences.
Naturally this difference will approach zero when the unknown signal is actually your known signal.
Here is another layer: for better fidelity, instead of performing a single FFT over the entire available audio, make many FFT calls, each over a subset of the samples, sliding this window of samples further into the audio clip for each call. Say your audio clip is 2 seconds long but you only ever send 200 milliseconds' worth of samples into each FFT call; that gives you at least 10 FFT result sets instead of the single one you would get by gulping the entire clip. This gives you a notion of time specificity, an additional dimension with which to derive a richer difference between the known and unknown signals. Experiment to see whether it helps to slide the window just a little each time instead of lining the windows up end to end.
To be explicit: you have a range of frequencies spread across the X axis, and along the Y axis you have the magnitude of each frequency at different points in time, plucked from your audio clips as you vary the sample window as described above. So now you have a two-dimensional grid of data points.
Again, to increase your confidence you will want to perform all of the above across several different audio clips of your known animal's howl against each of your unknown signals, giving you a three-dimensional parameter landscape. As you can see, each additional dimension you can muster gives more traction and hence more accurate results.
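As a rough sketch of that windowed comparison (my own illustration, not from the original answer: the window length, hop size and naive DFT are placeholder choices, and on a real Arduino you would use a fixed-point FFT library instead of computing the DFT directly):
#include <math.h>
#include <stddef.h>

#define WINDOW 256   /* samples per FFT window (placeholder choice) */
#define HOP    128   /* slide the window by half its length         */

static const float PI_F = 3.14159265f;

/* Magnitude spectrum of one window via a naive DFT; fine for a sketch,
   far too slow for an Arduino, where an FFT library would be used. */
static void window_magnitudes(const float *x, float *mag)
{
    for (size_t k = 0; k < WINDOW / 2; k++) {
        float re = 0.0f, im = 0.0f;
        for (size_t n = 0; n < WINDOW; n++) {
            float phase = 2.0f * PI_F * (float)(k * n) / (float)WINDOW;
            re += x[n] * cosf(phase);
            im -= x[n] * sinf(phase);
        }
        mag[k] = sqrtf(re * re + im * im);
    }
}

/* Sum of absolute differences between the spectrogram grids of two equal-length
   clips; the smaller the result, the closer the unknown clip is to the known howl. */
float fingerprint_distance(const float *known, const float *unknown, size_t sampleCount)
{
    float magKnown[WINDOW / 2], magUnknown[WINDOW / 2];
    float distance = 0.0f;
    for (size_t start = 0; start + WINDOW <= sampleCount; start += HOP) {
        window_magnitudes(known + start, magKnown);
        window_magnitudes(unknown + start, magUnknown);
        for (size_t k = 0; k < WINDOW / 2; k++)
            distance += fabsf(magKnown[k] - magUnknown[k]);
    }
    return distance;
}
Averaging this distance over several known clips, as described above, adds the third dimension.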
Start with easily distinguished known audio against a very different unknown audio: say a 50 Hz sine tone for the known signal against an 8000 Hz sine wave for the unknown. Then try a single strum of a guitar as the known and, say, a trumpet as the unknown, and only then progress to using actual audio clips.
Audacity is an excellent free audio workhorse of the industry; it easily plots a WAV file to show its time-domain signal or an FFT spectrogram. Sonic Visualiser is also a top-shelf tool to use.
This is not a simple silver bullet; however, each layer you add to your solution can give you better results. It is a process you are crafting, not a one-dimensional trigger to squeeze.

How can I convert an audio (.wav) to satellite image

I need to create software that can capture sound (from a NOAA satellite with an RTL-SDR). The problem is not capturing the sound; the problem is how to convert the audio, or waves, into an image. I have read about many things (the Fast Fourier Transform, the Hilbert transform, etc.) but I don't know how to proceed.
If you can give me an idea it would be fantastic. Thank you!
Over the past year I have been writing code which makes FFT calls and have amassed 15 pages of notes, so the topic is vast, but I can boil it down.
Open up your WAV file and parse the 44-byte header, noting the bit depth and endianness it declares; the payload is everything after that header. Understand the notions of bit depth and endianness: typically a WAV file has a bit depth of 16 bits, so each point on the audio curve is stored across two bytes, and typically a WAV file is little endian rather than big endian. Knowing what that means, take each pair of bytes, shift the second (most significant, if little endian) byte left by 8 bits and bitwise-OR it with the first, interpret the result as a signed 16-bit integer (so it varies from -32768 to 32767), then convert that integer into its floating-point equivalent so your audio curve points vary from -1 to +1. Do that conversion for each pair of bytes, i.e. for each sample in your payload buffer.
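A minimal sketch of that conversion step (my own, assuming a canonical 44-byte header and 16-bit little-endian signed samples; a real parser should read the chunk sizes rather than hard-code the offset):
#include <stdint.h>
#include <stddef.h>

/* Convert the payload of a 16-bit little-endian WAV file (bytes after the
   44-byte header) into floats in the range -1.0 .. +1.0. 'byteCount' must
   be even; the function returns the number of samples written. */
size_t wav16le_to_float(const uint8_t *payload, size_t byteCount, float *out)
{
    size_t samples = byteCount / 2;
    for (size_t i = 0; i < samples; i++) {
        /* low byte first, high byte shifted left by 8, OR'd together */
        uint16_t raw = (uint16_t)payload[2 * i] | ((uint16_t)payload[2 * i + 1] << 8);
        int16_t  sample = (int16_t)raw;           /* reinterpret as signed */
        out[i] = (float)sample / 32768.0f;        /* scale into -1 .. +1   */
    }
    return samples;
}
Dividing by 32768 rather than 32767 keeps the most negative sample mapped exactly to -1.0.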
Once you have the WAV audio curve as a buffer of floats, which is called raw audio or PCM audio, perform your FFT API call; all languages have such libraries. The output of the FFT call will be a set of complex numbers. Pay attention to the notion of the Nyquist limit, as this will influence how you make use of the FFT output.
Now you have a collection of complex numbers. The indices 0 to N of that collection correspond to frequency bins, and the size of the PCM buffer you send to the FFT determines how granular those bins are: the bin width is sample_rate / number_of_samples, so in general the more samples you send to the FFT call, the finer the granularity of the output frequency bins. Essentially this means that as you walk across the collection of complex numbers, each index increments the frequency assigned to it; bin k corresponds to frequency k * sample_rate / N, up to the Nyquist limit at bin N/2.
To visualize this, just feed it into a 2D plot where the X axis is frequency and the Y axis is magnitude; calculate the magnitude for each complex number using
curr_mag = 2.0 * math.Sqrt(curr_real*curr_real+curr_imag*curr_imag) / number_of_samples
For simplicity we will sweep under the carpet the phase information also available to you in that buffer of complex numbers.
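As a small sketch of turning those magnitudes into image pixels (my own illustration: the 8-bit grayscale mapping and the linear scale are arbitrary choices, and real spectrogram displays usually map magnitude through a log/dB scale):
#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Turn one FFT result (arrays of real and imaginary parts, bins 0..N/2)
   into one row of grayscale pixels for a spectrogram image. */
void fft_to_pixel_row(const double *re, const double *im,
                      size_t numberOfSamples, uint8_t *rowOut)
{
    for (size_t k = 0; k <= numberOfSamples / 2; k++) {
        double curr_mag = 2.0 * sqrt(re[k] * re[k] + im[k] * im[k]) / numberOfSamples;
        double scaled   = curr_mag * 255.0;          /* crude linear mapping */
        if (scaled > 255.0) scaled = 255.0;
        rowOut[k] = (uint8_t)scaled;
    }
}
Calling this once per sliding window of samples and stacking the rows gives the 2D frequency-vs-time image (a spectrogram).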
This only scratches the surface of what you need to master to properly render a WAV file as a 2D plot of its frequency-domain representation. There are libraries which perform parts or all of this, but now you can appreciate some of the magic involved when the rubber hits the road.
A great explanation of the trade-off between frequency resolution and the number of audio samples fed into an FFT call: https://electronics.stackexchange.com/questions/12407/what-is-the-relation-between-fft-length-and-frequency-resolution
Do yourself a favor and check out https://www.sonicvisualiser.org/, one of many audio workstations which can perform what I described above. Just hit File -> Open -> choose a local WAV file, then Layer -> Add Spectrogram, and it will render the visual representation of the Fourier transform of your input audio file.

What do sample bytes in raw audio files mean?

Yes, I know that in, e.g., 16-bit signed integer format, every 2 bytes represent a "sample", which is an integer from -32768 to 32767, but I don't understand, and can't find information on, what the mapping is between the actual values and the sound (sound wave parameters, to be exact). Could anyone explain it to me or point me somewhere?
If you visualize a sound wave, it is a curve, a kind of line, and as we all know a line consists of infinitely many points. Since a hard drive is limited in space, it can't store infinitely many points; it can only store a few of them. So what can we do? We take a selection of points from this "line" and store those. Each of these points is a sample: the displacement of the audio wave at a specific time.
So if you've got a sound wave like this (waveform image, source: sourceforge.net):
A computer can't store the whole wave; it takes a selection of points from that wave and stores them. How many points it takes for each second of sound is the sample rate: the higher the sample rate, the higher the quality of the sound. If the sample rate were infinite, the quality would be nearly as good as the original wave. Why only nearly? Because a computer uses 8, 16, 24, 32, ... however many bits to store one sample, and the more bits it uses per sample, the better the quality. So we can say that, in theory, the stored sound would be exactly as good as the original if the sample rate were infinite AND the number of bits used to store each sample were infinite.
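For example, here is a small sketch (my own, with an arbitrary 440 Hz tone, a 44100 Hz sample rate and 16-bit depth) of how a continuous wave becomes a list of discrete samples:
#include <math.h>
#include <stdint.h>
#include <stddef.h>

/* Sample one second of a 440 Hz sine wave at 44100 samples per second,
   quantising each point of the continuous curve to a signed 16-bit value. */
void sample_sine_440(int16_t out[44100])
{
    const double PI         = 3.14159265358979;
    const double sampleRate = 44100.0;   /* samples stored per second */
    const double frequency  = 440.0;     /* pitch of the tone         */
    const double amplitude  = 0.5;       /* half of full scale        */
    for (size_t n = 0; n < 44100; n++) {
        double t = (double)n / sampleRate;                   /* time of this sample */
        double value = amplitude * sin(2.0 * PI * frequency * t);
        out[n] = (int16_t)(value * 32767.0);                 /* quantise to 16 bits */
    }
}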

Signal Processing and Audio Beat Detection

I am trying to do some work with basic beat detection (in C and/or Java) by following the guide from GameDev.net. I understand the logic behind the implementation of the algorithms, but I am confused as to how one would get the "sound amplitude" data for the left and right channels of a song (i.e. an MP3 or WAV).
For example, he starts with the following assumption:
In this model we will detect sound energy variations by computing the average sound energy of the signal and comparing it to the instant sound energy. Lets say we are working in stereo mode with two lists of values : (an) and (bn). (an) contains the list of sound amplitude values captured every Te seconds for the left channel, (bn) the list of sound amplitude values captured every Te seconds for the right channel.
He then proceeds to manipulate an and bn using the algorithms that follow. I am wondering how one would do the signal processing necessary to get an and bn every Te seconds for both channels, so that I can begin to follow his guide and mess around with some simple beat detection in songs.
An uncompressed audio file (a .wav or .aiff, for example) is for the most part a long array of samples. Each sample consists of the amplitude at a given point in time. When music is recorded, many of these amplitude samples are taken each second.
For stereo (2-channel) audio files, the samples in the array usually alternate channels: [sample1 left, sample1 right, sample2 left, sample2 right, etc...].
Most audio parsing libraries will already have a way of returning the samples separately for each channel.
Once you have the sample array for each channel, it is easy to find the samples for a particular second, as long as you know the sample rate, i.e. the number of samples per second. For example, if the sample rate of your file is 44100 samples per second and you want to capture the samples in the nth second, you would use the part of your array between (n * 44100) and ((n + 1) * 44100).
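A minimal sketch of this (mine, not from the GameDev.net guide; the names and the signed 16-bit assumption are illustrative) for pulling (an) and (bn) out of an interleaved stereo buffer one window of Te seconds at a time:
#include <stdint.h>
#include <stddef.h>

/* Copy the samples of window w (each window is Te seconds long) from an
   interleaved stereo buffer into separate left (an) and right (bn) arrays.
   windowSamples = Te * sampleRate, e.g. 1024 samples for Te of ~23 ms at 44100 Hz. */
void split_window(const int16_t *interleaved, size_t w, size_t windowSamples,
                  double *an, double *bn)
{
    const int16_t *frame = interleaved + 2 * w * windowSamples;  /* 2 channels per frame */
    for (size_t i = 0; i < windowSamples; i++) {
        an[i] = frame[2 * i]     / 32768.0;   /* left channel sample  */
        bn[i] = frame[2 * i + 1] / 32768.0;   /* right channel sample */
    }
}

/* The guide's "instant sound energy" for that window is then just
   sum(an[i]*an[i] + bn[i]*bn[i]) over the window. */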

Sound pressure display for WAVE PCM data

The digital sound is played through a DirectSound device. It is necessary to display the sound activity in decibels, like analog devices do.
What is the right way to calculate sound pressure from the WAVE PCM data (44100 Hz, 16-bit)?
If you just need an "idea" of the sound pressure, you can simply compute the log energy over some time frames of the signal: split the signal every N samples, compute 10*log10(sum(xn**2)) where the xn are the N samples, and you get a value in the dB domain. If you need to display a precise measure (that is, your 0 dB matching, say, a mixing table's 0 dB), it is a bit more complicated.
See here for more details:
http://music.columbia.edu/pipermail/music-dsp/2002-April/048341.html
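A quick sketch of that per-frame log-energy computation (my own illustration, assuming signed 16-bit samples and reporting dB relative to full scale):
#include <math.h>
#include <stdint.h>
#include <stddef.h>

/* Log-energy of one frame of N signed 16-bit samples, in dB relative to
   full scale. Silence gives a very negative number; 0 dB means full scale. */
double frame_level_db(const int16_t *frame, size_t N)
{
    double sumSquares = 0.0;
    for (size_t i = 0; i < N; i++) {
        double x = frame[i] / 32768.0;    /* normalise to -1 .. +1 */
        sumSquares += x * x;
    }
    double meanPower = sumSquares / N;    /* average power over the frame */
    if (meanPower <= 0.0)
        return -120.0;                    /* clamp pure digital silence   */
    return 10.0 * log10(meanPower);
}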
Sound pressure is a measure of force per unit area. To determine this you would have to have information about the speaker(s) on which the audio is played. You can obtain a decibel level with respect to an arbitrary reference (as opposed to the threshold of hearing) with the algorithm proposed by cournape.
Calculate the average signal power over a time interval, compute the base-10 logarithm and multiply by 10. The average power is calculated by averaging the square of each sample over the interval. Note that both positive and negative values are necessary (i.e. it must be an AC signal), so make sure the PCM values are either floating-point, 2's-complement or offset unsigned values, handled accordingly.
By applying Parseval's theorem and the Fourier transform, you can also generate signal levels for different frequency bands.
