I need to pass an audio recording from the mic to a buffer, and then from the buffer to the speakers (I send the buffer over the network).
My configuration: Mic->AudioFrameOutput->Network->AudioFrameInput->Speakers.
I need the recording to be in 16 bits/sample PCM (for the network).
The documentation of AudioGraph mentions that it only supports 32 bit float format.
How can I convert the 32 bit recording to 16 bit and then play the recording?
Thanks,
Tony
How to convert 32 bit float to 16 bit integer is a very common need in the world of streaming audio ... here we convert an element of your 32 bit float buffer (array) into a lossy (32 bits do not fit into 16 bits) unsigned 16 bit integer, with the input float varying from -1 to +1:
my_16_bit_unsigned_int = ((input_32_bit_floats[index] + 1.0) * 32768) - 1;
When playing with audio data at this most direct level you are exposed to many fundamental design decisions:
is input audio wave of floats varying from say -1 to +1, or -0.5 to +0.5, or from say 0 to +1 or other
do I want my output 16 bit PCM to be signed or unsigned (typically unsigned)
am I dealing with big endian or little endian byte ordering which is important when sending memory buffers over the wire (typically little endian) in particular when you might need to collapse a 16 bit integer buffer into a byte stream
Knowing these questions and having answers in mind while mulling over your data: the above equation assumes the input 32 bit float representation of the audio wave varies from -1.0 to +1.0 (typical).
You ask where did that value 32768 come from? ... well 16 bit integers have 2^16 distinct values which range from 0 to ( 2^16 - 1 ), so if your input float varies from -1 to +1 we first add 1 to make it vary from 0 to +2, which makes our output unsigned (no negative numbers), then we multiply values in that range by 32768 and subtract 1 so that the output integers range from 0 to ( 2^16 - 1 ) ... or 0 to 65535, which gives you a total of 2^16 distinct integer values
Let's break it down with concrete examples.
This time the input 32 bit floats vary from -1.0 to +1.0 ... actually the range is -1 < value < +1
example A
inputA = -0.999 # close to minimum possible value
outputA = int((input_32_bit_floats[index] + 1.0) * 32768) - 1;
outputA = int(( -0.999 + 1.0) * 32768) - 1;
outputA = int( 0.001 * 32768) - 1;
outputA = int( 32.768) - 1;
outputA = 32 - 1;
outputA = 31; # close to min possible value of 0
example B
inputB = 0.999 # almost max possible value
outputB = int((input_32_bit_floats[index] + 1.0) * 32768) - 1;
outputB = int((0.999 + 1.0) * 32768) - 1;
outputB = int(1.999 * 32768) - 1;
outputB = int(65503.232) - 1;
outputB = 65503 - 1;
outputB = 65502 # close to our max possible value of 65535
A note on the multiplication by 32768: a left shift by 15 bit positions only replaces a multiplication when you are working with integers, so you cannot simply convert the float to an int first and then shift ... int(input_32_bit_floats[index] + 1.0) collapses every sample to 0, 1 or 2 and throws away the fractional part, so
outputA = ( int(input_32_bit_floats[index] + 1.0) << 15) - 1;
would not give the same result as
outputA = int((input_32_bit_floats[index] + 1.0) * 32768) - 1;
In practice just keep the floating point multiplication; the compiler will generate efficient code for it.
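A minimal C sketch of the conversion described above, assuming the input floats stay strictly inside (-1, +1) and using the unsigned 16 bit convention chosen here; the function name and the little-endian byte packing are illustrative, not part of any particular API:

#include <stddef.h>
#include <stdint.h>

/* Convert 32 bit float samples in (-1, +1) to unsigned 16 bit PCM and
   pack them into a little-endian byte stream ready to send over the wire. */
void floats_to_u16le(const float *in, uint8_t *out, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        uint16_t sample = (uint16_t)(((in[i] + 1.0f) * 32768.0f) - 1.0f);
        out[2 * i]     = (uint8_t)(sample & 0xFF); /* low byte first (little endian) */
        out[2 * i + 1] = (uint8_t)(sample >> 8);   /* high byte second */
    }
}

No clamping is done here, per the assumption that samples never reach exactly -1.0 or +1.0; if they can, clamp before the cast.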
I use a Knowles SPH0645LM4H-B microphone to acquire data, which is in a 24-bit PCM format with 18 bits of precision. The 24-bit PCM data is then truncated to 18 bits, because the last 6 bits are always 0 according to the specification. After that, the 18-bit data is stored as a 32-bit unsigned integer: when the MSB is 0 it is a positive number, and when the MSB is 1 it is a negative number.
After that, I find all the data is positive, no matter which sound I use to test. I tested it with a dual-frequency tone and did an FFT, and the result is almost right, except that the low-frequency content around 0-100Hz is larger than expected. I also reconstructed the sound from the same data I used for the FFT; the reconstructed sound is almost right, but noisy.
I use a buffer to store the microphone data, which is transferred using DMA. The buffer is:
uint16_t fft_buffer[FFT_LENGTH*4];
The DMA configuration is as follows:
DMA_InitStructure.DMA_Channel = DMA_Channel_0;
DMA_InitStructure.DMA_PeripheralBaseAddr = (uint32_t)&(SPI2->DR);
DMA_InitStructure.DMA_Memory0BaseAddr = (uint32_t)fft_buffer;
DMA_InitStructure.DMA_DIR = DMA_DIR_PeripheralToMemory;
DMA_InitStructure.DMA_PeripheralInc = DMA_PeripheralInc_Disable;
DMA_InitStructure.DMA_MemoryInc = DMA_MemoryInc_Enable;
DMA_InitStructure.DMA_PeripheralDataSize = DMA_PeripheralDataSize_HalfWord;
DMA_InitStructure.DMA_MemoryDataSize = DMA_MemoryDataSize_HalfWord;
DMA_InitStructure.DMA_BufferSize = FFT_LENGTH*4;
DMA_InitStructure.DMA_Mode = DMA_Mode_Normal;
DMA_InitStructure.DMA_Priority = DMA_Priority_VeryHigh;
DMA_InitStructure.DMA_FIFOMode = DMA_FIFOMode_Disable;
DMA_InitStructure.DMA_FIFOThreshold = DMA_FIFOThreshold_Full;
DMA_InitStructure.DMA_MemoryBurst = DMA_MemoryBurst_Single;
DMA_InitStructure.DMA_PeripheralBurst = DMA_PeripheralBurst_Single;
Extract data from the buffer, truncate it to 18 bits, sign-extend it to 32 bits, and store it in fft_integer:
int32_t fft_integer[FFT_LENGTH];
fft_buffer stores the original data from one channel plus redundant data from the other channel. A sample's original data is stored in two elements of the array, e.g. fft_buffer[4] and fft_buffer[5], which are both 16 bits wide. fft_integer stores the data from just one channel, with each sample taking 32 bits. This is why the fft_buffer array has FFT_LENGTH*4 elements: two elements hold a sample from one channel and two elements hold the sample from the other channel. The fft_integer array only needs FFT_LENGTH elements, because only one channel is kept and an 18-bit sample fits in a single int32_t.
for (t = 0; t < FFT_LENGTH*4; t = t+4) {
    uint8_t first_8_bits, second_8_bits, last_2_bits;
    uint32_t store_int;
    /* get the first 8 bits, middle 8 bits and last 2 bits, combine it to a new value */
    first_8_bits  = fft_buffer[t] >> 8;
    second_8_bits = fft_buffer[t] & 0xFF;
    last_2_bits   = (fft_buffer[t+1] >> 8) >> 6;
    store_int = (first_8_bits << 10) + (second_8_bits << 2) + last_2_bits;
    /* convert it to signed integer number according to the MSB of value
     * if MSB is 1, then set all the bits before MSB to 1
     */
    const uint8_t negative = ((store_int & (1 << 17)) != 0);
    int32_t nativeInt;
    if (negative)
        nativeInt = store_int | ~((1 << 18) - 1);
    else
        nativeInt = store_int;
    fft_integer[cnt] = nativeInt;
    cnt++;
}
The microphone uses the I2S interface and it is a single mono microphone, which means only half of the data is valid: the microphone only drives half of each transmission frame. It works for about 128ms and then stops working.
This picture shows the data after conversion to an integer.
My question is why there are such large low-frequency components, even though a similar sound can be reconstructed from the data. I'm sure there is no problem with the hardware configuration.
I did an experiment to see which original data is stored in the buffer. I ran the following test:
uint8_t a, b, c, d;
for (t = 0; t < FFT_LENGTH*4; t = t+4) {
    a = (fft_buffer[t] & 0xFF00) >> 8;
    b = fft_buffer[t] & 0x00FF;
    c = (fft_buffer[t+1] & 0xFF00) >> 8;
    /* set the tri-state to 0 */
    d = fft_buffer[t+1] & 0x0000;
    printf("%.2x", a);
    printf("%.2x", b);
    printf("%.2x", c);
    printf("%.2x\n", d);
}
The PCM data looks like the following:
0ec40000
0ec48000
0ec50000
0ec60000
0ec60000
0ec5c000
...
0cf28000
0cf20000
0cf10000
0cf04000
0cef8000
0cef0000
0cedc000
0ced4000
0cee4000
0ced8000
0cec4000
0cebc000
0ceb4000
....
0b554000
0b548000
0b538000
0b53c000
0b524000
0b50c000
0b50c000
...
Raw data in Memory:
c4 0e ff 00
c5 0e ff 40
...
52 0b ff c0
50 0b ff c0
I use it as little endian.
The large low-frequency component starting from DC in the original data is due to the large DC offset caused by incorrectly translating the 24 bit two's complement samples to int32_t. A DC offset is inaudible unless it causes clipping or arithmetic overflow. There are not really any low frequencies up to 100Hz; that is merely an artefact of the FFT's response to the strong DC (0Hz) element, which is why you cannot hear any low frequencies.
Below I have stated a number of assumptions as clearly as possible so that the answer may perhaps be adapted to match the actualité.
Given:
Raw data in Memory:
c4 0e ff 00
c5 0e ff 40
...
52 0b ff c0
50 0b ff c0
I use it as little endian.
and
2 elements are used for data from one channel and 2 element is used for the other channel
and given the subsequent comment:
fft_buffer[0] stores the higher 16 bits, fft_buffer[1] stores the lower 16 bits
Then the data is in fact cross-endian such that for example, for:
c4 0e ff 00
then
fft_buffer[n] = 0x0ec4 ;
fft_buffer[n+1] = 0x00ff ;
and the reconstructed sample should be:
0x00ff0ec4
then the translation is a matter of reinterpreting fft_buffer as a 32 bit array, swapping the 16 bit word order, then a shift to move the sign-bit to the int32_t sign-bit position and (optionally) a re-scale, e.g.:
c4 0e ff 00 => 0x00ff0ec4
0x00ff0ec4 << 8 = 0xff0ec400
0xff0ec400 / 16384 = 0xfffffc3c (-964)
thus:
// Reinterpret DMA buffer as 32 bit samples
int32_t* fft_buffer32 = (int32_t*)fft_buffer ;
// For each even numbered DMA buffer sample...
for( t = 0; t < FFT_LENGTH * 2; t += 2 )
{
    // ... swap 16 bit word order (done in unsigned arithmetic so the
    // shifts are well defined and no sign bits leak into the low half)
    uint32_t word = (uint32_t)fft_buffer32[t] ;
    int32_t sample = (int32_t)( (word << 16) | (word >> 16) ) ;
    // ... from 24 to 32 bit 2's complement and rescale to
    // maintain original magnitude. Copy to single channel
    // fft_integer array.
    fft_integer[t / 2] = (sample << 8) / 16384 ;
}
I'm trying to convert a 24 bit USB audio stream into a 32 bit stream so my microcontroller's peripherals can play happily with the stream (like most MCUs, it can only handle 16 or 32 bit data...).
The following code is what I got from the MCU vendor... it didn't work as expected and I ended up getting really distorted audio.
// Function takes usb stream and processes the data for our peripherals
// #data - usb stream data
// #byte_count - size of stream
void process_usb_stream(uint8_t *data, uint16_t byte_count) {
// Etc code that gets buffers ready to read the stream...
// Conversion here!
int32_t *buffer;
int sample_count = 0;
for (int i = 0; i < byte_count; i += 3) {
buffer[sample_count++] = data[i] | data[i+1] << 8 | data[i+2] << 16;
}
// Send buffer to peripherals for them to use...
}
Any help with converting the data from a 24 bit stream to 32 bit stream would be super awesome! This area of work is very hard for me :(
data[...] is a uint8_t. You need to cast that to int32_t before shifting: the integer promotions only guarantee an int of at least 16 bits, so on a small target data[i+1] << 8 can overflow and data[i+2] << 16 is undefined, and either way the promoted type may be too narrow to hold the value you want.
Also, you need to shift by another 8 bits to get the full range and put the sign bit in the right place.
Also, you're treating the data as if it were in little-endian format. Make sure it is. I'll assume that's correct, so something like this works:
int32_t *buffer;
int sample_count = 0;
for (int i = 0; i+3 <= byte_count; ) {
    int32_t v = ((int32_t)data[i++]) << 8;
    v |= ((int32_t)data[i++]) << 16;
    v |= ((int32_t)data[i++]) << 24;
    buffer[sample_count++] = v;
}
Finally, note that this assumes that byte_count is divisible by 3 -- make sure that's true!
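One more detail worth pointing out (my addition, not part of the original answer): buffer in both snippets is never given any storage, so before the loop it must point at an array with room for byte_count / 3 samples, for example (with <stdlib.h> included for malloc):

/* illustrative only: one int32_t output sample per 3 input bytes */
int32_t *buffer = malloc((byte_count / 3) * sizeof *buffer);
int sample_count = 0;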
This is DSP stuff; consider also posting this question on http://dsp.stackexchange.com
In DSP the process of changing the bit depth is called scaling.
16 bit resolution has 65536 possible values, 24 bit resolution has 16777216, and 32 bit has 4294967296, so the factor between 24 bit and 32 bit is 256.
According to https://electronics.stackexchange.com/questions/229268/what-is-name-of-process-used-to-change-sample-bit-depth/229271
reduction from 24 bit to 16 bit is called scaling down and is done by dividing each value by 256.
This can be done by shifting every value right by 8 bits:
y = x >> 8. When scaling down this way the 8 least significant bits are lost.
Scaling up to 32 bit is more complicated and there are several approaches. The simplest is to push the 24 bit value into a 32 bit register and shift the whole value left by 8 bits (i.e. multiply by 256):
data32 = ((int32_t)data24) << 8;
The 8 least significant bits of the result end up as zero; you can leave them at zero, or try to estimate them (for example by interpolation).
Maybe there are much better scaling-up algorithms; ask on http://dsp.stackexchange.com
See also http://blog.bjornroche.com/2013/05/the-abcs-of-pcm-uncompressed-digital.html for the scaling up problem...
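As a rough sketch of both directions in C (the function names are mine; samples are assumed to be signed and already sign-extended into an int32_t):

#include <stdint.h>

/* Scale a 24 bit sample down to 16 bit: divide by 256, i.e. drop the low 8 bits. */
int16_t scale_24_to_16(int32_t s24)
{
    return (int16_t)(s24 >> 8);
}

/* Scale a 24 bit sample up to 32 bit: multiply by 256, leaving the low 8 bits zero. */
int32_t scale_24_to_32(int32_t s24)
{
    return (int32_t)((uint32_t)s24 << 8);
}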
I am trying to understand how the data obtained from XGetImage is laid out in memory:
XImage *img = XGetImage(display, root, 0, 0, width, height, AllPlanes, ZPixmap);
Now suppose I want to decompose each pixel value into red, green and blue channels. How can I do this in a portable way? The following is an example, but it depends on a particular configuration of the X server and does not work in every case:
for (int x = 0; x < width; x++)
    for (int y = 0; y < height; y++) {
        unsigned long pixel = XGetPixel(img, x, y);
        unsigned char blue = pixel & blue_mask;
        unsigned char green = (pixel & green_mask) >> 8;
        unsigned char red = (pixel & red_mask) >> 16;
        //...
    }
In the above example I am assuming a particular order of the RGB channels in pixel and also that pixels are 24 bits deep: in fact, I have img->depth=24 and img->bits_per_pixel=32 (the screen also has 24-bit depth). But this is not the generic case.
As a second step I want to get rid of XGetPixel and use or describe img->data directly. The first thing I need to know is whether there is anything in Xlib that gives me exactly the information I need to interpret how the image is built starting from the img->data field, namely:
the order of R,G,B channels in each pixel;
the number of bits for each pixel;
the number of bits for each channel;
if possible, a corresponding FOURCC
The shift is a simple function of the mask:
int get_shift (int mask) {
    int shift = 0;
    while (mask) {
        if (mask & 1) break;
        shift++;
        mask >>= 1;
    }
    return shift;
}
Number of bits in each channel is just the number of 1 bits in its mask (count them). The channel order is determined by the shifts (if the red shift is 0, then the first channel is R, etc).
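If you would rather count the mask bits in code than by eye, a helper in the same style could look like this (just a sketch; the name get_width is mine):

int get_width (int mask) {
    int width = 0;
    while (mask) {
        width += mask & 1;
        mask >>= 1;
    }
    return width;
}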
I think the valid values for bits_per_pixel are 1, 2, 4, 8, 15, 16, 24 and 32 (15 and 16 bits are the same 2 bytes per pixel format, but the former has 1 bit unused). I don't think it's worth anyone's time to support anything but 24 and 32 bpp.
X11 is not concerned with media files, so no 4CC code.
This can be read from the XImage structure itself.
the order of R,G,B channels in each pixel;
This is contained in this field of the XImage structure:
int byte_order; /* data byte order, LSBFirst, MSBFirst */
which tells you whether it's RGB or BGR (because it only depends on the endianness of the machine).
the number of bits for each pixel;
can be obtained from this field:
int bits_per_pixel; /* bits per pixel (ZPixmap) */
while the number of bits for each channel is basically the number of bits set in each of the channel masks:
unsigned long red_mask; /* bits in z arrangement */
unsigned long green_mask;
unsigned long blue_mask;
the number of bits for each channel;
See above, or you can use the code from #n.m.'s answer to count the bits yourself.
Yeah, it would be great if they put the bit shift constants in that structure too, but apparently they decided not to, since the pixels are aligned to bytes anyway, in "standard order" (RGB). Xlib makes sure to convert it to that order for you when it retrieves the data from the X server, even if they are stored internally in a different format server-side. So it's always in RGB format, byte-aligned, but depending on the endianness of the machine, the bytes inside an unsigned long can appear in a reverse order, hence the byte_order field to tell you about that.
So in order to extract these channels, just use the 0, 8 and 16 shifts after masking with red_mask, green_mask and blue_mask, just make sure you shift the right bytes depending on the byte_order and it should work fine.
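Putting both answers together, a portable extraction loop could look something like the sketch below. This is only an illustration: it reuses the get_shift helper from the earlier answer and assumes each channel is at most 8 bits wide, so it fits in an unsigned char.

int red_shift   = get_shift(img->red_mask);
int green_shift = get_shift(img->green_mask);
int blue_shift  = get_shift(img->blue_mask);
for (int x = 0; x < width; x++)
    for (int y = 0; y < height; y++) {
        unsigned long pixel = XGetPixel(img, x, y);
        /* mask out each channel, then shift it down to the low bits */
        unsigned char red   = (pixel & img->red_mask)   >> red_shift;
        unsigned char green = (pixel & img->green_mask) >> green_shift;
        unsigned char blue  = (pixel & img->blue_mask)  >> blue_shift;
        /* ... */
    }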
I am looking at the Nvidia SDK for the convolution FFT example (for large kernels), I know the theory behind fourier transforms and their FFT implementations (the basics at least), but I can't figure out what the following code does:
const int fftH = snapTransformSize(dataH + kernelH - 1);
const int fftW = snapTransformSize(dataW + kernelW - 1);
....//gpu initialization code
printf("...creating R2C & C2R FFT plans for %i x %i\n", fftH, fftW);
cufftSafeCall( cufftPlan2d(&fftPlanFwd, fftH, fftW, CUFFT_R2C) );
cufftSafeCall( cufftPlan2d(&fftPlanInv, fftH, fftW, CUFFT_C2R) );
printf("...uploading to GPU and padding convolution kernel and input data\n");
cutilSafeCall( cudaMemcpy(d_Kernel, h_Kernel, kernelH * kernelW * sizeof(float), cudaMemcpyHostToDevice) );
cutilSafeCall( cudaMemcpy(d_Data, h_Data, dataH * dataW * sizeof(float), cudaMemcpyHostToDevice) );
cutilSafeCall( cudaMemset(d_PaddedKernel, 0, fftH * fftW * sizeof(float)) );
cutilSafeCall( cudaMemset(d_PaddedData, 0, fftH * fftW * sizeof(float)) );
padKernel(
d_PaddedKernel,
d_Kernel,
fftH,
fftW,
kernelH,
kernelW,
kernelY,
kernelX
);
padDataClampToBorder(
d_PaddedData,
d_Data,
fftH,
fftW,
dataH,
dataW,
kernelH,
kernelW,
kernelY,
kernelX
);
I've never used the CUFFT library before, so I don't know what snapTransformSize does (here's the code):
int snapTransformSize(int dataSize){
    int hiBit;
    unsigned int lowPOT, hiPOT;
    dataSize = iAlignUp(dataSize, 16);
    for (hiBit = 31; hiBit >= 0; hiBit--)
        if (dataSize & (1U << hiBit)) break;
    lowPOT = 1U << hiBit;
    if (lowPOT == dataSize)
        return dataSize;
    hiPOT = 1U << (hiBit + 1);
    if (hiPOT <= 1024)
        return hiPOT;
    else
        return iAlignUp(dataSize, 512);
}
nor why the plans and padded buffers are initialized the way they are.
Can you provide explanation links or answers please?
It appears to be rounding up the FFT dimensions to the next power of 2, unless the dimension would exceed 1024, in which case it's rounded up to the next multiple of 512.
Having rounded up the FFT size you then of course need to pad your data with zeroes to make it the correct size for the FFT.
Note that the reason that we typically need to round up and pad for convolution is because each FFT dimension needs to be image_dimension + kernel_dimension - 1, which is not normally a convenient number, such as a power of 2.
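For example (my numbers, purely to illustrate the snapping): with dataH = 1000 and kernelH = 7 the required height is 1006; snapTransformSize first aligns that up to 1008, and since the next power of two, 1024, does not exceed the 1024 limit it returns 1024. With dataH = 2500 and kernelH = 13 the requirement is 2512; the next power of two would be 4096, which exceeds 1024, so it instead returns the next multiple of 512, i.e. 2560.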
What #Paul R says is correct. The reason it does that is that the Fast Fourier Transform runs fastest when the transform size is a power of two; see the Cooley-Tukey algorithm.
If you declare a matrix whose dimensions are already powers of two, you should not need that generic snapping code.
It rounds the FFT dimensions up to the next power of 2; once a dimension would exceed 1024, it is instead rounded up to the next multiple of 512. You should pad the data with zeroes to make it the correct size for the FFT.
If I wanted to reduce a WAV file's amplitude by 25%, I would write something like this:
for (int i = 0; i < data.Length; i++)
{
data[i] *= 0.75;
}
A lot of the articles I read on audio techniques, however, discuss amplitude in terms of decibels. I understand the logarithmic nature of decibel units in principle, but not so much in terms of actual code.
My question is: if I wanted to attenuate the volume of a WAV file by, say, 20 decibels, how would I do this in code like my above example?
Update: formula (based on Nils Pipenbrinck's answer) for attenuating by a given number of decibels (entered as a positive number e.g. 10, 20 etc.):
public void AttenuateAudio(float[] data, int decibels)
{
float gain = (float)Math.Pow(10, (double)-decibels / 20.0);
for (int i = 0; i < data.Length; i++)
{
data[i] *= gain;
}
}
So, if I want to attenuate by 20 decibels, the gain factor is .1.
I think you want to convert from decibel to gain.
The equations for audio are:
decibel to gain:
gain = 10 ^ (attenuation in db / 20)
or in C:
gain = powf(10, attenuation / 20.0f);
The equations to convert from gain to db are:
attenuation_in_db = 20 * log10 (gain)
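A tiny C sketch of both conversions (the function names are mine, just for illustration):

#include <math.h>

/* dB change (negative = attenuation) to linear amplitude gain */
float db_to_gain(float db)   { return powf(10.0f, db / 20.0f); }

/* linear amplitude gain to dB */
float gain_to_db(float gain) { return 20.0f * log10f(gain); }

For example, db_to_gain(-20.0f) returns 0.1f, matching the gain factor in the question's update.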
If you just want to adust some audio, I've had good results with the normalize package from nongnu.org. If you want to study how it's done, the source code is freely available. I've also used wavnorm, whose home page seems to be out at the moment.
One thing to consider: .WAV files have MANY different formats. The code above only works for WAVE_FORMAT_FLOAT. If you're dealing with PCM files, then your samples are going to be 8, 16, 24 or 32 bit integers (8 bit PCM uses unsigned integers from 0..255; 24 bit PCM can be packed or unpacked: packed means 3 byte values packed next to each other, unpacked means 3 byte values in a 4 byte package).
And then there's the issue of alternate encodings: for instance, in Win7 all the Windows sounds are actually MP3 files in a WAV container.
It's unfortunately not as simple as it sounds :(.
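If you do need to attenuate integer PCM directly, a rough sketch for 16 bit signed samples might look like this (the function name is mine; it assumes in-place processing and clamps to avoid wrap-around):

#include <stddef.h>
#include <stdint.h>

/* Apply a linear gain to 16 bit signed PCM samples, clamping to the int16_t range. */
void apply_gain_s16(int16_t *samples, size_t count, float gain)
{
    for (size_t i = 0; i < count; i++) {
        float v = samples[i] * gain;
        if (v >  32767.0f) v =  32767.0f;
        if (v < -32768.0f) v = -32768.0f;
        samples[i] = (int16_t)v;
    }
}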
Oops I misunderstood the question… You can see my python implementations of converting from dB to a float (which you can use as a multiplier on the amplitude like you show above) and vice-versa
https://github.com/jiaaro/pydub/blob/master/pydub/utils.py
In a nutshell it's:
10 ^ (db_gain / 20)
so to reduce the volume by 6 dB you would multiply the amplitude of each sample by:
10 ^ (-6 / 20) == 10 ^ (-0.3) == 0.501