LibAV - what approach to take for realtime audio and video capture?

I'm using libav to encode raw RGB24 frames to H.264 and mux them into FLV. This works
fine and I've streamed for more than 48 hours without any problems! My next step
is to add audio to the stream. I'll be capturing live audio and I want to encode it
in real time using Speex, MP3 or Nellymoser.
Background info
I'm new to digital audio and therefore I might be doing things wrong. Basically my application gets a float buffer with interleaved audio. This audioIn function gets called by the application framework I'm using. The buffer contains 256 samples per channel,
and I have 2 channels. Because I might be mixing up terminology, this is how I use the
data:
// input = array with audio samples
// bufferSize = 256
// nChannels = 2
void audioIn(float * input, int bufferSize, int nChannels) {
    // convert from float to S16
    short* buf = new signed short[bufferSize * 2];
    for (int i = 0; i < bufferSize; ++i) { // loop over all samples
        int dx = i * 2;
        buf[dx + 0] = (float)input[dx + 0] * numeric_limits<short>::max(); // convert frame of the first channel
        buf[dx + 1] = (float)input[dx + 1] * numeric_limits<short>::max(); // convert frame of the second channel
    }
    // add this to the libav wrapper.
    av.addAudioFrame((unsigned char*)buf, bufferSize, nChannels);
    delete[] buf;
}
Now that I have a buffer where each sample is 16 bits, I pass this short* buffer to my
wrapper's av.addAudioFrame() function. In that function I buffer the audio before I encode
it. From what I read, the AVCodecContext of the audio encoder sets frame_size, and this frame_size must match the number of samples in the buffer when calling avcodec_encode_audio2(). Why I think this is because of what is documented here,
especially the line:
If it is not set, frame->nb_samples must be equal to avctx->frame_size for all frames except the last. (Please correct me here if I'm wrong about this.)
After encoding I call av_interleaved_write_frame() to actually write the frame.
When I use MP3 as the codec my application runs for about 1-2 minutes and then my server, which is receiving the video/audio stream (FLV, TCP), disconnects with the message "Frame too large: 14485504". This message is generated because the RTMP server is getting a frame that is way too big, and that is probably due to the fact that I'm not interleaving correctly with libav.
Questions:
There are quite a few bits I'm not sure of, even after going through the source code of libav, so I hope someone has a working example of encoding audio that comes from a buffer from "outside" libav (i.e. your own application). I.e. how do you create a buffer which is large enough for the encoder? How do you make the "realtime" streaming work when you need to wait for this buffer to fill up?
As I wrote above, I need to keep track of a buffer before I can encode. Does someone else have code which does this? I'm using an AVAudioFifo now (see the sketch after this list). The functions which encode the audio and fill/read the buffer are here too: https://gist.github.com/62f717bbaa69ac7196be
I compiled with --enable-debug=3 and disabled optimizations, but I'm not seeing any
debug information. How can I make libav more verbose?
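Roughly, this is what I have in mind for the FIFO approach (a sketch only, based on the old avcodec_encode_audio2() API I'm using; codec_ctx, fifo and the actual muxing step are stand-ins for things that live in my wrapper):

// Sketch: queue incoming interleaved S16 samples and encode whenever the FIFO
// holds at least codec_ctx->frame_size samples, which is what
// avcodec_encode_audio2() expects for all frames except the last.
void addAudioFrame(AVCodecContext* codec_ctx, AVAudioFifo* fifo,
                   uint8_t* samples, int nb_samples)
{
    // Interleaved S16 has a single data plane, so one pointer is enough.
    void* src[1] = { samples };
    av_audio_fifo_write(fifo, src, nb_samples);

    // Drain the FIFO in frame_size chunks.
    while (av_audio_fifo_size(fifo) >= codec_ctx->frame_size) {
        AVFrame* frame = av_frame_alloc();
        frame->nb_samples     = codec_ctx->frame_size;
        frame->format         = codec_ctx->sample_fmt;
        frame->channel_layout = codec_ctx->channel_layout;
        av_frame_get_buffer(frame, 0);
        // frame->pts should come from a running sample counter for correct timing.

        av_audio_fifo_read(fifo, (void**)frame->data, codec_ctx->frame_size);

        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;

        int got_packet = 0;
        if (avcodec_encode_audio2(codec_ctx, &pkt, frame, &got_packet) == 0 && got_packet) {
            // Rescale pkt.pts/pkt.dts to the stream time base, set pkt.stream_index,
            // then call av_interleaved_write_frame() (omitted here).
        }
        av_free_packet(&pkt);
        av_frame_free(&frame);
    }
}

The FIFO would be allocated with av_audio_fifo_alloc() using the encoder's sample format and channel count, so the writes and reads above match what the encoder expects.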
Thanks!

Related

Configure SAI peripheral on STM32H7

I'm trying to play a sound on a single speaker (mono), from a .wav file on an SD card, using an STM32H7 controller in a FreeRTOS environment.
I currently manage to generate sound, but it is very dirty and jerky.
I'd like to show the parsed header content of my WAV file, but my reputation score is below 10.
The most important data are:
Format: PCM
Channels: 1
Sample rate: 44100
Bits per sample: 16
I initialize the SAI2 block A this way :
void MX_SAI2_Init(void)
{
  /* USER CODE BEGIN SAI2_Init 0 */
  /* USER CODE END SAI2_Init 0 */
  /* USER CODE BEGIN SAI2_Init 1 */
  /* USER CODE END SAI2_Init 1 */
  hsai_BlockA2.Instance = SAI2_Block_A;
  hsai_BlockA2.Init.AudioMode = SAI_MODEMASTER_TX;
  hsai_BlockA2.Init.Synchro = SAI_ASYNCHRONOUS;
  hsai_BlockA2.Init.OutputDrive = SAI_OUTPUTDRIVE_DISABLE;
  hsai_BlockA2.Init.NoDivider = SAI_MASTERDIVIDER_ENABLE;
  hsai_BlockA2.Init.FIFOThreshold = SAI_FIFOTHRESHOLD_EMPTY;
  hsai_BlockA2.Init.AudioFrequency = SAI_AUDIO_FREQUENCY_44K;
  hsai_BlockA2.Init.SynchroExt = SAI_SYNCEXT_DISABLE;
  hsai_BlockA2.Init.MonoStereoMode = SAI_MONOMODE;
  hsai_BlockA2.Init.CompandingMode = SAI_NOCOMPANDING;
  hsai_BlockA2.Init.TriState = SAI_OUTPUT_NOTRELEASED;
  if (HAL_SAI_InitProtocol(&hsai_BlockA2, SAI_I2S_STANDARD, SAI_PROTOCOL_DATASIZE_16BIT, 2) != HAL_OK)
  {
    Error_Handler();
  }
  /* USER CODE BEGIN SAI2_Init 2 */
  /* USER CODE END SAI2_Init 2 */
}
I think I set the clock frequency correctly, as I measure a frame sync clock of 43 kHz (the closest I can get to 44.1 kHz).
The file indicates it uses PCM. My init function uses SAI_I2S_STANDARD, but only because I was curious about the result with that parameter value; I get bad results in both cases.
And here is the part where I read the file and send the data to the SAI via DMA:
// Before the infinite loop I extract the overall file size in bytes.

// Infinite loop
for (;;)
{
  if (drv_sdcard_getDmaTransferComplete() == true)
  {
    // BufferRead[0] = 0xAA;
    // BufferRead[1] = 0xAA;
    //
    // ret = HAL_SAI_Transmit_DMA(&hsai_BlockA2, (uint8_t*)BufferRead, 2);
    // drv_sdcard_resetDmaTransferComplete();
    if ((firstBytesDiscarded == true) && (remainingBytes > 0))
    {
      // read the next BufferRead size audio samples
      if (remainingBytes < sizeof(BufferAudio))
      {
        remainingBytes -= drv_sdcard_readDataNoRewind(file_audio1_index, BufferAudio, remainingBytes);
      }
      else
      {
        remainingBytes -= drv_sdcard_readDataNoRewind(file_audio1_index, BufferAudio, sizeof(BufferAudio));
      }
      // send them to the SAI through DMA
      ret = HAL_SAI_Transmit_DMA(&hsai_BlockA2, (uint8_t*)BufferAudio, sizeof(BufferAudio));
      // reset transmit flag to forbid the next transmit
      drv_sdcard_resetDmaTransferComplete();
    }
    else
    {
      // discard the header's first bytes
      // (I removed this part here because it works properly on my side)
      firstBytesDiscarded = true;
    }
  }
}
I have one avenue for sound quality improvement: filtering the speaker input. Yesterday I tried cutting at 20 kHz and 44 kHz, but that cut the signal too much... So I want to try different cutoff frequencies until the sound is of good quality. It is a simple RC filter.
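(For reference, a standard result rather than anything specific to this post: the cutoff of a first-order RC low-pass is f_c = 1 / (2·π·R·C), so for example R = 1 kΩ with C = 10 nF gives roughly 15.9 kHz.)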
But I don't know what to do to fix the jerky part. To give you an idea of how the sound comes out, I would describe it like this:
we can hear a bit of melody
then scratchy sound [krrrrrrr]
then short silence
and this loops until the end of the file.
The BufferAudio size is 16*1024 bytes.
Thank you for your help
Problems
No double-buffering. You are reading data from the SD card into the same buffer that you are playing from, so you'll get some samples from the previous read and some samples from the new read.
Not checking when the DMA is complete. HAL_SAI_Transmit_DMA() returns immediately, and you cannot call it again until the previous DMA has completed.
Not checking the return values of HAL functions. You assign ret = HAL_SAI_Transmit_DMA() but then never check what ret is. You should check for an error and take appropriate action.
You seem to be driving things from how fast the SD card can DMA the data. It needs to be based on how fast the SAI is consuming it, otherwise you will get glitches.
Possible solution
The STM32's DMA controller can be configured to run in circular-buffer mode. In this mode, it will DMA all the data given to it, and then start again from the beginning.
It also provides interrupts for when the DMA is half complete, and when it is fully complete.
These two things together can provide a smooth data transfer with no gaps and glitches, if used with the SAI DMA. You'd read data into the entire buffer to start with, and kick off the DMA. When you get the half-complete interrupt, read half a buffer's worth of data into the first half of the buffer. When you get a fully complete interrupt, read half a buffer's worth of data into the second half of the buffer.
This is pseudo-code-ish, but hopefully shows what I mean:
const size_t buff_len = 16u * 1024u;
uint16_t buff[buff_len];

void start_playback(void)
{
    read_from_file(buff, buff_len);
    if (HAL_SAI_Transmit_DMA(&hsai_BlockA2, (uint8_t *)buff, buff_len) != HAL_OK)
    {
        // Handle error
    }
}

void sai_dma_tx_half_complete_interrupt(void)
{
    read_from_file(buff, buff_len / 2u);
}

void sai_dma_tx_full_complete_interrupt(void)
{
    read_from_file(buff + buff_len / 2u, buff_len / 2u);
}
You'd need to detect when you have consumed the entire file, and then stop the DMA (with something like HAL_SAI_DMAStop()).
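For instance, the two interrupt handlers above could be expanded along these lines (a rough sketch only; it assumes the hypothetical read_from_file() returns the number of samples it actually read, and that samples_remaining is initialised from the WAV data-chunk size before start_playback()):

#include <string.h> /* for memset */

static size_t samples_remaining; /* initialised from the WAV data-chunk size */

/* Refill one half of the circular buffer; pad with silence on a short read. */
static void refill(uint16_t *dest, size_t count)
{
    size_t n = read_from_file(dest, count); /* assumed to return samples actually read */
    samples_remaining -= n;
    if (n < count)
    {
        memset(dest + n, 0, (count - n) * sizeof(dest[0]));
    }
}

void sai_dma_tx_half_complete_interrupt(void)
{
    refill(buff, buff_len / 2u);
}

void sai_dma_tx_full_complete_interrupt(void)
{
    refill(buff + buff_len / 2u, buff_len / 2u);
    if (samples_remaining == 0u)
    {
        /* Simplified: in practice you would let the last buffer finish playing first. */
        HAL_SAI_DMAStop(&hsai_BlockA2);
    }
}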
You might want to read this similar question where I gave a similar answer. They were recording to SD-card rather than playing back, but the same principles apply. They also supplied their actual code for the solution they employed.

Combining multiple input channels into one output channel (live audio)

I am trying to make my own basic mixer and want to know how I could take multiple channels of input audio and output them as one mixed audio source, with controllable levels for each input channel. Right now I am trying to use pyo but I am unable to mix the channels in real time.
Here is some pseudo code (Go-flavoured) to combine multiple input channels into a single output channel, where each input channel has its own volume control in the array mix_volume:
// all_chan: two-dimensional slice where dimension 0 is the channel and
//           dimension 1 is the index into each channel's audio sample buffer
// mix_volume: per-channel multiplication factor to control volume,
//             each element a floating point value between 0.0 and 1.0
func mixChannels(all_chan [][]float64, mix_volume []float64) []float64 {
    num_channels := len(all_chan)
    max_index := len(all_chan[0])             // audio buffer size
    output_chan := make([]float64, max_index) // output channel buffer
    for index := 0; index < max_index; index++ {
        curr_sample := 0.0 // output audio curve height for current audio sample
        for curr_chan := 0; curr_chan < num_channels; curr_chan++ {
            curr_sample += all_chan[curr_chan][index] * mix_volume[curr_chan]
        }
        output_chan[index] = curr_sample / float64(num_channels)
    }
    return output_chan
}
The trick to doing the above on a live stream is to populate the all_chan audio buffers inside an event loop: copy the audio sample values for each channel into these buffers, then execute the code above from inside that event loop. Typically you will want your audio buffers to hold about 2^12 (4096) audio samples. Experiment with larger or smaller buffer sizes: too small and this event loop becomes very CPU intensive, too large and you incur an audible delay. Have fun.
You may want to use a compiled language like Go for this; YMMV.

How to detect a basic audio signal within a much bigger one (mpg123 output signal)

I am new to signal processing and I don't really understand the basics (and more). Sorry in advance for any mistakes in my understanding so far.
I am writing C code to detect a basic signal (an 18 Hz simple sinusoid, 2 sec duration; generating it with Audacity is pretty simple) within a much bigger MP3 file. I read the MP3 file and copy it until I match the sound signal.
The signal to match is { 1st channel: 18 Hz sine signal, 2nd channel: nothing/doesn't matter }.
To match the sound, I calculate the frequency content of the MP3 until I find a good percentage of 18 Hz frequency during ~2 sec. As this frequency is not very common, I don't have to match it very precisely.
I use mpg123 to convert my file and fill the buffers with what it returns. I initialised it to convert the MP3 to mono RAW audio:
init:

    int ret;
    const long *rates;
    size_t rate_count, i;

    mpg123_rates(&rates, &rate_count);
    mpg123_handle *m = mpg123_new(NULL, &ret);
    if (m == NULL)
    {
        // err
    }
    else
    {
        mpg123_format_none(m);
        for (i = 0; i < rate_count; ++i)
            mpg123_format(m, rates[i], MPG123_MONO, MPG123_ENC_SIGNED_32);
        mpg123_open_feed(m);
    }

(...)

    unsigned char out[8 * MAX_MP3_BUF_SIZE];
    ret = mpg123_decode(m, buf->data, buf->size, out, 8 * MAX_MP3_BUF_SIZE, &size);

(...)
But I have no idea how to handle the resulting buffer to calculate the FFT and get the frequency.
// FREQ calculation with libfftw3
int transform_size = MAX_MP3_BUF_SIZE * 2;
fftw_complex *fftout = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * transform_size);
double *fftin = (double*) fftw_malloc(sizeof(double) * transform_size); // r2c input is real (double)
fftw_plan p = fftw_plan_dft_r2c_1d(transform_size, fftin, fftout, FFTW_ESTIMATE);
I can get good RAW audio (PCM?) into a buffer; if I write it out, it can be read and converted into WAV with sox:
sox --magic -r 44100 -e signed -b 32 -c 1 rps.raw rps.wav
Any help is appreciated. My knowledge of signal processing is poor; I am not even sure what to do with the FFT to get the frequency of the signal. The code is just FYI; it is contained in a much bigger project (for which a simple grep is not an option).
Don't use MP3 for this. There's a good chance your 18 Hz will disappear or at least become distorted, since 18 Hz is well below the audible range. MP3 and other lossy algorithms use a variety of techniques to remove sounds that we're not going to hear.
Assuming PCM, since you only need one frequency band, consider using the Goertzel algorithm. It is more efficient than an FFT/DFT for this use case.
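As a rough illustration (a minimal sketch, not tied to any library; the function name and parameters are mine), Goertzel for a single target frequency looks like this:

#include <math.h>
#include <stddef.h>

/* Returns the squared magnitude ("power") of targetHz within a block of mono
 * samples. samples[] are PCM values converted to double, sampleRate in Hz. */
double goertzel_power(const double *samples, size_t n, double targetHz, double sampleRate)
{
    const double omega = 2.0 * M_PI * targetHz / sampleRate;
    const double coeff = 2.0 * cos(omega);

    double s_prev = 0.0, s_prev2 = 0.0;
    for (size_t i = 0; i < n; ++i) {
        double s = samples[i] + coeff * s_prev - s_prev2;
        s_prev2 = s_prev;
        s_prev  = s;
    }
    return s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2;
}

Run it over consecutive blocks of samples and compare the returned power against the total energy of the block (or against a couple of neighbouring frequencies); a sustained peak over roughly 2 seconds marks the 18 Hz tone. Keep in mind that one period of 18 Hz at 44100 Hz is about 2450 samples, so use blocks spanning at least a few periods.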

AAC stream resampled incorrectly

I have a very particular problem that I wish I could find the answer to.
I'm trying to read an AAC stream from a URL (an online streaming radio, e.g. live.noroc.tv:8000/radionoroc.aacp) with the NAudio library and get IEEE 32-bit floating point samples.
Besides that I would like to resample the stream to a particular sample rate and channel count (rate 5512, mono).
Below is the code which accomplishes that:
int tenSecondsOfDownloadedAudio = 5512 * 10;
float[] buffer = new float[tenSecondsOfDownloadedAudio];
using (var reader = new MediaFoundationReader(pathToUrl))
{
    var ieeeFloatWaveFormat = WaveFormat.CreateIeeeFloatWaveFormat(5512, 1); // mono
    using (var resampler = new MediaFoundationResampler(reader, ieeeFloatWaveFormat))
    {
        var waveToSampleProvider = new WaveToSampleProvider(resampler);
        int readSamples = 0;
        float[] tempBuffer = new float[5512]; // 1 second buffer
        while (readSamples <= tenSecondsOfDownloadedAudio)
        {
            int read = waveToSampleProvider.Read(tempBuffer, 0, tempBuffer.Length);
            if (read == 0)
            {
                Thread.Sleep(500); // allow streaming buffer to get loaded
                continue;
            }
            Array.Copy(tempBuffer, 0, buffer, readSamples, read); // copy only the samples actually read
            readSamples += read;
        }
    }
}
These particular samples are then written to a Wave audio file using the following simple method:
using (var writer = new WaveFileWriter("path-to-audio-file.wav", WaveFormat.CreateIeeeFloatWaveFormat(5512, 1)))
{
    writer.WriteSamples(samples, 0, samples.Length);
}
What I've encountered is that NAudio does not read 10 seconds of audio (as requested) but only 5, though the buffer array gets fully loaded with samples (which, at this rate and channel count, should correspond to 10 seconds of audio).
Thus the final audio file plays the stream twice as slowly as it should (the 5-second stream is played over 10 seconds).
Is this somehow related to different bit depths (should I record at 64 bits per sample as opposed to 32)?
I do my testing on Windows Server 2008 R2 x64, with the MFT codecs installed.
Would really appreciate any suggestions.
The problem seems to be that MediaFoundationReader fails to handle HE-AACv2 in an ADTS container, which is a standard online-radio stream format and most likely the one you are dealing with.
Adobe products have the same problem, mistreating this format in exactly the same way: stretching the first half of the audio to the whole duration. See: Corrupted AAC files recorded from online stream
Supposedly, it has something to do with an HE-AACv2 stereo stream actually being a mono stream with an additional info channel for Parametric Stereo.

C & Fmod Ex - playing a PCM array/buffer in Real Time

I use an array to process a radio signal and to obtain raw PCM audio. I am desperately trying to play this audio using FMOD Ex.
Basically, would it be possible to create a stream corresponding to my circular buffer that I could access in a thread-safe way? Any basic information about which methods to use would be greatly appreciated.
If not, could any other Windows 7 API do the trick, and how? (ASIO, WASAPI...)
Thx °-°
I'm assuming your data is continuous (always updating), so you would want to stream it into FMOD. To do this you could override the file callbacks for a particular sound; there is a good example of doing this in the FMOD API's usercreatedsound example. If you just want to play a static buffer, simply fill out a createsoundexinfo struct describing the data, use the FMOD_OPENMEMORY flag and pass a pointer to the data through createSound as name_or_data. Below is an example of the more complex stream case.
When creating the sound you use FMOD_CREATESOUNDEXINFO to specify the details of your data, then pass that to createStream. Note this is basically how you would do the static sample case, except that you use FMOD_OPENUSER, set the decode buffer size and specify callbacks to read the data, instead of using FMOD_OPENMEMORY and passing the data via the name_or_data param:
FMOD_CREATESOUNDEXINFO exinfo;

memset(&exinfo, 0, sizeof(FMOD_CREATESOUNDEXINFO));
exinfo.cbsize            = sizeof(FMOD_CREATESOUNDEXINFO);              /* Required. */
exinfo.decodebuffersize  = 44100;                                       /* Chunk size of stream update in samples. This will be the amount of data passed to the user callback. */
exinfo.length            = 44100 * channels * sizeof(signed short) * 5; /* Length of PCM data in bytes of whole song (for Sound::getLength). */
exinfo.numchannels       = channels;                                    /* Number of channels in the sound. */
exinfo.defaultfrequency  = 44100;                                       /* Default playback rate of sound. */
exinfo.format            = FMOD_SOUND_FORMAT_PCM16;                     /* Data format of sound. */
exinfo.pcmreadcallback   = pcmreadcallback;                             /* User callback for reading. */
exinfo.pcmsetposcallback = pcmsetposcallback;                           /* User callback for seeking. */

result = system->createStream(NULL, FMOD_OPENUSER, &exinfo, &sound);
ERRCHECK(result);
Here you are saying that you will provide PCM16 data at 44100 Hz (customize as required), and you give two callback functions, for reading and for setting the position, which FMOD will call to ask you to either seek within your buffer or read something from it:
FMOD_RESULT F_CALLBACK pcmreadcallback(FMOD_SOUND *sound, void *data, unsigned int datalen)
{
    // Read from your buffer here...
    return FMOD_OK;
}

FMOD_RESULT F_CALLBACK pcmsetposcallback(FMOD_SOUND *sound, int subsound, unsigned int position, FMOD_TIMEUNIT postype)
{
    // Seek to a location in your data; may not be required for what you want to do
    return FMOD_OK;
}
That should be everything you need to get FMOD playing back your buffer.
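If the data comes from another thread (your radio-processing code), the body of pcmreadcallback() could drain a simple single-producer/single-consumer ring buffer, something like the sketch below. All of the ring-buffer names (g_ring, ringWrite, RING_SIZE) are my own, not part of the FMOD API:

#include <atomic>
#include <cstring>

static const unsigned int RING_SIZE = 1u << 16;   // samples, power of two
static short g_ring[RING_SIZE];
static std::atomic<unsigned int> g_writePos(0), g_readPos(0);

// Called from the radio-processing thread.
void ringWrite(const short *src, unsigned int count)
{
    unsigned int w = g_writePos.load(std::memory_order_relaxed);
    for (unsigned int i = 0; i < count; ++i)
        g_ring[(w + i) & (RING_SIZE - 1)] = src[i];
    g_writePos.store(w + count, std::memory_order_release);
}

// FMOD calls this when it needs 'datalen' bytes of PCM16 data.
FMOD_RESULT F_CALLBACK pcmreadcallback(FMOD_SOUND *sound, void *data, unsigned int datalen)
{
    short *out = (short *)data;
    unsigned int wanted = datalen / sizeof(short);
    unsigned int r = g_readPos.load(std::memory_order_relaxed);
    unsigned int avail = g_writePos.load(std::memory_order_acquire) - r;

    unsigned int n = (avail < wanted) ? avail : wanted;
    for (unsigned int i = 0; i < n; ++i)
        out[i] = g_ring[(r + i) & (RING_SIZE - 1)];
    memset(out + n, 0, (wanted - n) * sizeof(short));  // pad any underrun with silence
    g_readPos.store(r + n, std::memory_order_release);
    return FMOD_OK;
}

Because only one thread advances g_writePos and only the callback advances g_readPos, the two atomic indices keep this safe without locks in the callback.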
