av_write_frame fails when encoding a larger audio file to .mpg - audio

I am encoding live rendered video data and an existing .wav file into an .mpg file.
To do that, I first write all audio frames, and then the video frames as they come in from the render engine. For smaller .wav files (< 25 seconds), everything works perfectly fine. But as soon as I use a longer .wav file, av_write_frame (when writing the audio frame) just returns -1 after writing some 100 frames. It is never the same frame at which it fails, and it is never the last frame.
All test files can be played perfectly with any player I tested.
I am following the muxing example (more or less).
Here is my function that writes an audio frame:
void write_audio_frame( Cffmpeg_dll * ptr, AVFormatContext *oc, AVStream *st, int16_t sample_val )
{
    AVCodecContext *c;
    AVPacket pkt = { 0 }; // data and size must be 0;
    AVFrame *frame = avcodec_alloc_frame();
    int got_packet;

    av_init_packet(&pkt);
    c = st->codec;

    get_audio_frame(ptr, ptr->samples, ptr->audio_input_frame_size, c->channels);
    frame->nb_samples = ptr->audio_input_frame_size;
    int result = avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
                                          (uint8_t *) ptr->samples,
                                          ptr->audio_input_frame_size *
                                          av_get_bytes_per_sample(c->sample_fmt) *
                                          c->channels, 0);
    if (result != 0)
    {
        av_log(c, AV_LOG_ERROR, "Error filling audio frame. Code: %i\n", result);
        exit(1);
    }

    result = avcodec_encode_audio2(c, &pkt, frame, &got_packet);
    if (result != 0)
    {
        av_log(c, AV_LOG_ERROR, "Error encoding audio. Code: %i\n", result);
        exit(1);
    }

    if (c->coded_frame && c->coded_frame->pts != AV_NOPTS_VALUE)
        pkt.pts = av_rescale_q(c->coded_frame->pts, c->time_base, st->time_base);
    pkt.flags |= AV_PKT_FLAG_KEY;
    pkt.stream_index = st->index;

    av_log(c, AV_LOG_ERROR, "Got? %i Pts: %i Dts: %i Flags: %i Side Elems: %i Size: %i\n",
           got_packet, pkt.pts, pkt.dts, pkt.flags, pkt.side_data_elems, pkt.size);

    /* write the compressed frame in the media file */
    result = av_write_frame(oc, &pkt);
    if (result != 0)
    {
        av_log(c, AV_LOG_ERROR, "Error while writing audio frame. Result: %i\n", result);
        exit(1);
    }
}
So "Error while writing audio frame. Result: -1" is what I always get after some frames.
And here is my get_audio_frame function:
void get_audio_frame( Cffmpeg_dll* ptr, int16_t* samples, int frame_size, int nb_channels, int16_t sample_val )
{
    fread( samples, sizeof( int16_t ), frame_size * nb_channels, ptr->fp_sound_input );
};
And finally, this is the loop in which I write all audio frames (don't worry about the .wav header, I skipped it before that loop):
while (!feof(ptr->fp_sound_input))
{
    write_audio_frame( ptr, ptr->oc, ptr->audio_st, -1 );
}
As you can see, I'm outputting almost everything in the packet and checking for any possible error. Other than av_write_frame failing after some time when I am encoding a longer audio file, everything seems perfectly fine. All the packet values I am tracking are 100% the same for all frames (except the data pointer, obviously). Also, as stated, the same procedure works flawlessly for shorter fp_sound_input files. avcodec_encode_audio2() and avcodec_fill_audio_frame() also never fail.
The codecs I use for encoding are CODEC_ID_MPEG2VIDEO (video) and CODEC_ID_MP2 (audio). The .wav files are saved in PCM 16 LE (all use the exact same encoding).
What could be wrong here?

Try using av_interleaved_write_frame() instead of av_write_frame(). av_write_frame() passes each packet straight to the muxer, and the MPEG muxer expects audio and video packets to arrive roughly interleaved; writing the entire audio stream before any video can exhaust its internal buffering, which would explain the -1 after a certain number of frames. av_interleaved_write_frame() buffers the packets and interleaves them by timestamp for you.
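A minimal sketch of the change, keeping the same packet setup as in the question (only the write call differs):

/* write the compressed frame in the media file, letting libavformat
 * buffer packets and interleave them across streams by timestamp */
result = av_interleaved_write_frame(oc, &pkt);
if (result != 0)
{
    av_log(c, AV_LOG_ERROR, "Error while writing audio frame. Result: %i\n", result);
    exit(1);
}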

Related

FFmpeg: How to use multithreading?

I want to decode H.264 with FFmpeg, but I found that the decode function only uses one CPU core.
[screenshot: system monitor showing a single core in use]
Environment: Ubuntu 14.04, FFmpeg 3.2.4, CPU i7-7500U
So I searched for FFmpeg multithreading and decided to use all CPU cores for decoding.
I set up the AVCodecContext like this:
//Init works
//codecId=AV_CODEC_ID_H264;
avcodec_register_all();
pCodec = avcodec_find_decoder(codecId);
if (!pCodec)
{
    printf("Codec not found\n");
    return -1;
}
pCodecCtx = avcodec_alloc_context3(pCodec);
if (!pCodecCtx)
{
    printf("Could not allocate video codec context\n");
    return -1;
}
pCodecParserCtx = av_parser_init(codecId);
if (!pCodecParserCtx)
{
    printf("Could not allocate video parser context\n");
    return -1;
}
pCodecCtx->thread_count = 4;
pCodecCtx->thread_type = FF_THREAD_FRAME;
pCodec->capabilities &= CODEC_CAP_TRUNCATED;
pCodecCtx->flags |= CODEC_FLAG_TRUNCATED;
if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
{
    printf("Could not open codec\n");
    return -1;
}
av_log_set_level(AV_LOG_QUIET);
av_init_packet(&packet);

//parse and decode
//after av_parser_parse2, the packet has a complete frame data
//in decode function, I just call avcodec_decode_video2 and do some frame copy work
while (cur_size > 0)
{
    int len = av_parser_parse2(
        pCodecParserCtx, pCodecCtx,
        &packet.data, &packet.size,
        cur_ptr, cur_size,
        AV_NOPTS_VALUE, AV_NOPTS_VALUE, AV_NOPTS_VALUE);
    cur_ptr += len;
    cur_size -= len;
    if (GetPacketSize() == 0)
        continue;
    AVFrame *pFrame = av_frame_alloc();
    int ret = Decode(pFrame);
    if (ret < 0)
    {
        continue;
    }
    if (ret)
    {
        //some works
    }
}
But nothing is different from before.
How can I use multithreading in FFmpeg? Any advice?
pCodec->capabilities &= CODEC_CAP_TRUNCATED;
And that's your bug. Please remove this line. The return value of avcodec_find_decoder() should, for all practical intents and purposes, be considered const.
Specifically, since CODEC_CAP_TRUNCATED is a single bit, this &= clears every other capability flag, including AV_CODEC_CAP_FRAME_THREADS, thus effectively disabling frame multithreading in the rest of the code.
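A minimal sketch of the corrected setup, using the same variables as the question: configure threading on the context and leave the codec's capability flags untouched.

/* request frame-based multithreading on the context only;
 * do not modify pCodec->capabilities */
pCodecCtx->thread_count = 4;              /* or 0 to let FFmpeg auto-detect */
pCodecCtx->thread_type  = FF_THREAD_FRAME;
pCodecCtx->flags |= CODEC_FLAG_TRUNCATED; /* keep only if you feed partial frames */
if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0)
{
    printf("Could not open codec\n");
    return -1;
}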

FFMPEG - How to save audio stream detached from video to file without transcoding

I tried to use fwrite() to save the audio stream, but the generated file cannot be opened.
At the same time, I also tried to use av_frame_write() to write the packets, but it cannot write.
Please help me with this problem: how do I write the audio stream to a file without transcoding?
/* open the input file with generic avformat function */
err = avformat_open_input(input_format_context, filename, NULL, NULL);
if (err < 0) {
    return err;
}

/* If not enough info to get the stream parameters, we decode the
   first frames to get it. (used in mpeg case for example) */
ret = avformat_find_stream_info(*input_format_context, 0);
if (ret < 0) {
    av_log(NULL, AV_LOG_FATAL, "%s: could not find codec parameters\n", filename);
    return ret;
}

/* dump the file content */
av_dump_format(*input_format_context, 0, filename, 0);

for (size_t i = 0; i < (*input_format_context)->nb_streams; i++) {
    AVStream *st = (*input_format_context)->streams[i];
    if (st->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
        FILE *file = NULL;
        file = fopen("C:\\Users\\MyPC\\Downloads\\test.aac", "wb");

        AVPacket reading_packet;
        av_init_packet(&reading_packet);
        while (av_read_frame(*input_format_context, &reading_packet) == 0) {
            if (reading_packet.stream_index == (int) i) {
                fwrite(reading_packet.data, 1, reading_packet.size, file);
            }
            av_free_packet(&reading_packet);
        }
        fclose(file);
        return 0;
    }
}
.aac files require that frames have ADTS headers. If the file you are reading from does not use ADTS frames (an MP4, for example), you will need to create these headers manually or use a bitstream filter. Also, your code does not check whether the codec is actually AAC.
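One way to avoid writing ADTS headers by hand is to remux the packets through libavformat's "adts" muxer, which emits a header per frame. A rough sketch in the same deprecated-era API as the question (error handling omitted; st, i, input_format_context and reading_packet are taken from the code above):

AVFormatContext *out_ctx = NULL;
avformat_alloc_output_context2(&out_ctx, NULL, "adts", "test.aac");
AVStream *out_st = avformat_new_stream(out_ctx, NULL);
avcodec_copy_context(out_st->codec, st->codec);  /* copy the AAC parameters */
avio_open(&out_ctx->pb, "test.aac", AVIO_FLAG_WRITE);
avformat_write_header(out_ctx, NULL);
while (av_read_frame(*input_format_context, &reading_packet) == 0) {
    if (reading_packet.stream_index == (int) i) {
        reading_packet.stream_index = 0;
        av_interleaved_write_frame(out_ctx, &reading_packet);
    }
    av_free_packet(&reading_packet);
}
av_write_trailer(out_ctx);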

How to use ALSA to play sounds simultaneously in C?

I'm using the ALSA lib in C under Linux.
I'd like to load several .wav files and play them depending on some test conditions.
I'm using the following code, but it needs to be improved:
// A simple C example to play a mono or stereo, 16-bit 44KHz
// WAVE file using ALSA. This goes directly to the first
// audio card (ie, its first set of audio out jacks). It
// uses the snd_pcm_writei() mode of outputting waveform data,
// blocking.
//
// Compile as so to create "alsawave":
// gcc -o alsawave alsawave.c -lasound
//
// Run it from a terminal, specifying the name of a WAVE file to play:
// ./alsawave MyWaveFile.wav
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>   // for open()/O_RDONLY used in waveLoad() below
// Include the ALSA .H file that defines ALSA functions/data
#include <alsa/asoundlib.h>
#pragma pack (1)
/////////////////////// WAVE File Stuff /////////////////////
// An IFF file header looks like this
typedef struct _FILE_head
{
    unsigned char ID[4];    // could be {'R', 'I', 'F', 'F'} or {'F', 'O', 'R', 'M'}
    unsigned int  Length;   // Length of subsequent file (including remainder of header). This is in
                            // Intel reverse byte order if RIFF, Motorola format if FORM.
    unsigned char Type[4];  // {'W', 'A', 'V', 'E'} or {'A', 'I', 'F', 'F'}
} FILE_head;

// An IFF chunk header looks like this
typedef struct _CHUNK_head
{
    unsigned char ID[4];    // 4 ascii chars that is the chunk ID
    unsigned int  Length;   // Length of subsequent data within this chunk. This is in Intel reverse byte
                            // order if RIFF, Motorola format if FORM. Note: this doesn't include any
                            // extra byte needed to pad the chunk out to an even size.
} CHUNK_head;

// WAVE fmt chunk
typedef struct _FORMAT {
    short          wFormatTag;
    unsigned short wChannels;
    unsigned int   dwSamplesPerSec;
    unsigned int   dwAvgBytesPerSec;
    unsigned short wBlockAlign;
    unsigned short wBitsPerSample;
    // Note: there may be additional fields here, depending upon wFormatTag
} FORMAT;
#pragma pack()
// Size of the audio card hardware buffer. Here we want it
// set to 1024 16-bit sample points. This is relatively
// small in order to minimize latency. If you have trouble
// with underruns, you may need to increase this, and PERIODSIZE
// (trading off lower latency for more stability)
#define BUFFERSIZE (2*1024)
// How many sample points the ALSA card plays before it calls
// our callback to fill some more of the audio card's hardware
// buffer. Here we want ALSA to call our callback after every
// 64 sample points have been played
#define PERIODSIZE (2*64)
// Handle to ALSA (audio card's) playback port
snd_pcm_t *PlaybackHandle;
// Handle to our callback thread
snd_async_handler_t *CallbackHandle;
// Points to loaded WAVE file's data
unsigned char *WavePtr;
// Size (in frames) of loaded WAVE file's data
snd_pcm_uframes_t WaveSize;
// Sample rate
unsigned short WaveRate;
// Bit resolution
unsigned char WaveBits;
// Number of channels in the wave file
unsigned char WaveChannels;
// The name of the ALSA port we output to. In this case, we're
// directly writing to hardware card 0,0 (ie, first set of audio
// outputs on the first audio card)
static const char SoundCardPortName[] = "default";
// For WAVE file loading
static const unsigned char Riff[4] = { 'R', 'I', 'F', 'F' };
static const unsigned char Wave[4] = { 'W', 'A', 'V', 'E' };
static const unsigned char Fmt[4] = { 'f', 'm', 't', ' ' };
static const unsigned char Data[4] = { 'd', 'a', 't', 'a' };
/********************** compareID() *********************
* Compares the passed ID str (ie, a ptr to 4 Ascii
* bytes) with the ID at the passed ptr. Returns TRUE if
* a match, FALSE if not.
*/
static unsigned char compareID(const unsigned char * id, unsigned char * ptr)
{
    register unsigned char i = 4;

    while (i--)
    {
        if ( *(id)++ != *(ptr)++ ) return(0);
    }
    return(1);
}
/********************** waveLoad() *********************
* Loads a WAVE file.
*
* fn = Filename to load.
*
* RETURNS: 0 if success, non-zero if not.
*
* NOTE: Sets the global "WavePtr" to an allocated buffer
* containing the wave data, and "WaveSize" to the size
* in sample points.
*/
static unsigned char waveLoad(const char *fn)
{
    const char  *message;
    FILE_head    head;
    register int inHandle;

    if ((inHandle = open(fn, O_RDONLY)) == -1)
        message = "didn't open";

    // Read in IFF File header
    else
    {
        if (read(inHandle, &head, sizeof(FILE_head)) == sizeof(FILE_head))
        {
            // Is it a RIFF and WAVE?
            if (!compareID(&Riff[0], &head.ID[0]) || !compareID(&Wave[0], &head.Type[0]))
            {
                message = "is not a WAVE file";
                goto bad;
            }

            // Read in next chunk header
            while (read(inHandle, &head, sizeof(CHUNK_head)) == sizeof(CHUNK_head))
            {
                // ============================ Is it a fmt chunk? ===============================
                if (compareID(&Fmt[0], &head.ID[0]))
                {
                    FORMAT format;

                    // Read in the remainder of chunk
                    if (read(inHandle, &format.wFormatTag, sizeof(FORMAT)) != sizeof(FORMAT)) break;

                    // Can't handle compressed WAVE files
                    if (format.wFormatTag != 1)
                    {
                        message = "compressed WAVE not supported";
                        goto bad;
                    }

                    WaveBits = (unsigned char)format.wBitsPerSample;
                    WaveRate = (unsigned short)format.dwSamplesPerSec;
                    WaveChannels = format.wChannels;
                }

                // ============================ Is it a data chunk? ===============================
                else if (compareID(&Data[0], &head.ID[0]))
                {
                    // Size of wave data is head.Length. Allocate a buffer and read in the wave data
                    if (!(WavePtr = (unsigned char *)malloc(head.Length)))
                    {
                        message = "won't fit in RAM";
                        goto bad;
                    }

                    if (read(inHandle, WavePtr, head.Length) != head.Length)
                    {
                        free(WavePtr);
                        break;
                    }

                    // Store size (in frames)
                    WaveSize = (head.Length * 8) / ((unsigned int)WaveBits * (unsigned int)WaveChannels);

                    close(inHandle);
                    return(0);
                }

                // ============================ Skip this chunk ===============================
                else
                {
                    if (head.Length & 1) ++head.Length; // If odd, round it up to account for pad byte
                    lseek(inHandle, head.Length, SEEK_CUR);
                }
            }
        }

        message = "is a bad WAVE file";
bad:    close(inHandle);
    }

    printf("%s %s\n", fn, message);
    return(1);
}
/********************** play_audio() **********************
* Plays the loaded waveform.
*
* NOTE: ALSA sound card's handle must be in the global
* "PlaybackHandle". A pointer to the wave data must be in
* the global "WavePtr", and its size of "WaveSize".
*/
static void play_audio(void)
{
    register snd_pcm_uframes_t count;
    register snd_pcm_sframes_t frames;  // must be signed: snd_pcm_writei() returns a negative value on error

    // Output the wave data
    count = 0;
    do
    {
        frames = snd_pcm_writei(PlaybackHandle, WavePtr + count, WaveSize - count);

        // If an error, try to recover from it
        if (frames < 0)
            frames = snd_pcm_recover(PlaybackHandle, frames, 0);
        if (frames < 0)
        {
            printf("Error playing wave: %s\n", snd_strerror(frames));
            break;
        }

        // Update our pointer
        count += frames;
    } while (count < WaveSize);

    // Wait for playback to completely finish
    //if (count == WaveSize)
    //    snd_pcm_drain(PlaybackHandle);
}
/*********************** free_wave_data() *********************
* Frees any wave data we loaded.
*
* NOTE: A pointer to the wave data be in the global
* "WavePtr".
*/
static void free_wave_data(void)
{
    if (WavePtr) free(WavePtr);
    WavePtr = 0;
}
int main(int argc, char **argv)
{
    // No wave data loaded yet
    WavePtr = 0;

    if (argc < 2)
        printf("You must supply the name of a 16-bit mono WAVE file to play\n");

    // Load the wave file
    else if (!waveLoad(argv[1]))
    {
        register int err;

        // Open audio card we wish to use for playback
        if ((err = snd_pcm_open(&PlaybackHandle, &SoundCardPortName[0], SND_PCM_STREAM_PLAYBACK, 0)) < 0)
            printf("Can't open audio %s: %s\n", &SoundCardPortName[0], snd_strerror(err));
        else
        {
            switch (WaveBits)
            {
                case 8:
                    err = SND_PCM_FORMAT_U8;
                    break;
                case 16:
                    err = SND_PCM_FORMAT_S16;
                    break;
                case 24:
                    err = SND_PCM_FORMAT_S24;
                    break;
                case 32:
                    err = SND_PCM_FORMAT_S32;
                    break;
            }

            // Set the audio card's hardware parameters (sample rate, bit resolution, etc)
            if ((err = snd_pcm_set_params(PlaybackHandle, err, SND_PCM_ACCESS_RW_INTERLEAVED, WaveChannels, WaveRate, 1, 100000)) < 0)
                printf("Can't set sound parameters: %s\n", snd_strerror(err));

            // Play the waveform
            else
                play_audio();

            usleep(10000);
            play_audio();
            play_audio();

            // Close sound card
            snd_pcm_close(PlaybackHandle);
        }
    }

    // Free the WAVE data
    free_wave_data();

    return(0);
}
As I would like to play multiple sounds simultaneously, I started by trying to play the same sound more than once, so I commented out the following lines in the play_audio function:
if (count == WaveSize)
    snd_pcm_drain(PlaybackHandle);
Unfortunately, that doesn't really work: playing the same sound more than once works, but if I insert a long delay before playing the sound again, nothing is played.
For instance, in the main function:
play_audio();
usleep(10000);
play_audio();
play_audio();
works, and I can hear the same sound three times. But if I use usleep(100000), I hear the sound only once.
Another problem is that it has to wait for the first sound to end before it starts playing the next one.
So I'd like to be able to send more than one sound and play several sounds at the same time. I would like to mix them manually (it's not really difficult; see the sketch below). The main function will contain a while loop with some tests to determine which sound(s) need to be played.
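For reference, a minimal sketch of that clipped-sum mixing, assuming 16-bit signed samples (the helper name and buffers are hypothetical, not part of the code above):

#include <limits.h>
#include <stddef.h>
#include <stdint.h>

/* Mix 'count' samples from two 16-bit PCM sources into 'out',
 * clamping the sum so loud passages saturate instead of wrapping. */
static void mix_s16(const int16_t *a, const int16_t *b, int16_t *out, size_t count)
{
    size_t n;
    for (n = 0; n < count; n++)
    {
        int32_t sum = (int32_t)a[n] + (int32_t)b[n];
        if (sum > SHRT_MAX) sum = SHRT_MAX;
        if (sum < SHRT_MIN) sum = SHRT_MIN;
        out[n] = (int16_t)sum;
    }
}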
I thought about putting play_audio in a thread running an infinite loop, and having the main thread modify (mix, etc.) WavePtr.
I just don't really know if this is the right way, or if there is a more efficient method.
Any suggestions? Thanks.

Encoding FLOAT PCM to OGG using libav

I am currently trying to convert a raw float PCM buffer to an OGG-encoded file. I tried several libraries to do the encoding and finally chose libavcodec.
What I precisely want to do is take the float buffer ([-1;1]) provided by my audio library and turn it into a char buffer of encoded OGG data.
I managed to encode the float buffer to a buffer of encoded MP2 with this (proof of concept) code:
static AVCodec *codec;
static AVCodecContext *c;
static AVPacket pkt;
static uint16_t* samples;
static AVFrame* frame;
static int frameEncoded;

FILE *file;

int main(int argc, char *argv[])
{
    file = fopen("file.ogg", "w+");
    long ret;

    avcodec_register_all();
    codec = avcodec_find_encoder(AV_CODEC_ID_MP2);
    if (!codec) {
        fprintf(stderr, "codec not found\n");
        exit(1);
    }
    c = avcodec_alloc_context3(NULL);

    c->bit_rate = 256000;
    c->sample_rate = 44100;
    c->channels = 2;
    c->sample_fmt = AV_SAMPLE_FMT_S16;
    c->channel_layout = AV_CH_LAYOUT_STEREO;

    /* open it */
    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "Could not open codec\n");
        exit(1);
    }

    /* frame containing input raw audio */
    frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "Could not allocate audio frame\n");
        exit(1);
    }

    frame->nb_samples = c->frame_size;
    frame->format = c->sample_fmt;
    frame->channel_layout = c->channel_layout;

    /* the codec gives us the frame size, in samples,
     * we calculate the size of the samples buffer in bytes */
    int buffer_size = av_samples_get_buffer_size(NULL, c->channels, c->frame_size,
                                                 c->sample_fmt, 0);
    if (buffer_size < 0) {
        fprintf(stderr, "Could not get sample buffer size\n");
        exit(1);
    }
    samples = av_malloc(buffer_size);
    if (!samples) {
        fprintf(stderr, "Could not allocate %d bytes for samples buffer\n",
                buffer_size);
        exit(1);
    }
    /* setup the data pointers in the AVFrame */
    ret = avcodec_fill_audio_frame(frame, c->channels, c->sample_fmt,
                                   (const uint8_t*)samples, buffer_size, 0);
    if (ret < 0) {
        fprintf(stderr, "Could not setup audio frame\n");
        exit(1);
    }
}
void myLibraryCallback(float *inbuffer, unsigned int length)
{
    for(int j = 0; j < (2 * length); j++) {
        if(frameEncoded >= (c->frame_size * 2)) {
            int avret, got_output;

            av_init_packet(&pkt);
            pkt.data = NULL; // packet data will be allocated by the encoder
            pkt.size = 0;

            avret = avcodec_encode_audio2(c, &pkt, frame, &got_output);
            if (avret < 0) {
                fprintf(stderr, "Error encoding audio frame\n");
                exit(1);
            }
            if (got_output) {
                fwrite(pkt.data, 1, pkt.size, file);
                av_free_packet(&pkt);
            }
            frameEncoded = 0;
        }
        samples[frameEncoded] = inbuffer[j] * SHRT_MAX;
        frameEncoded++;
    }
}
The code is really simple: I initialize libavcodec the usual way, then my audio library sends me processed PCM float [-1;1] samples, interleaved at 44.1 kHz, together with the number of floats per channel in inbuffer (usually 1024, with 2 channels for stereo). So inbuffer usually contains 2048 floats.
That was easy, since I just needed to convert my PCM to 16-bit, both interleaved, and a 16-bit interleaved buffer can be handed over as a single flat char buffer.
Now I would like to apply this to OGG, which needs a sample format of AV_SAMPLE_FMT_FLTP.
Since my native format is AV_SAMPLE_FMT_FLT, it should only require some de-interleaving, which is really easy to do (see the sketch below).
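For what it's worth, a minimal sketch of that de-interleaving step for the stereo case (nb, the per-channel sample count, is hypothetical; with AV_SAMPLE_FMT_FLTP, frame->data[0] is the left plane and frame->data[1] the right plane):

/* Sketch: split an interleaved float buffer (L R L R ...) into the two
 * planes of an AV_SAMPLE_FMT_FLTP frame set up with 2 channels and
 * nb samples per channel. */
float *left  = (float *)frame->data[0];
float *right = (float *)frame->data[1];
for (int n = 0; n < nb; n++) {
    left[n]  = inbuffer[2 * n];
    right[n] = inbuffer[2 * n + 1];
}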
The points I don't get are:
How can you send a float buffer through a char buffer? Do we treat them as-is (float* floatSamples = (float*) samples)? If so, what does the sample count avcodec gives you mean? Is it a number of floats or of chars?
How can you send data in two buffers (one for left, one for right) when avcodec_fill_audio_frame only takes a (uint8_t*) parameter and not a (uint8_t**) for multiple channels? Does it completely change the previous sample code?
I tried to find some answers myself and made a LOT of experiments so far, but I failed on these points. Since there is a huge lack of documentation on this, I would be very grateful if you had answers.
Thank you!

How do I convert ADPCM to PCM using FFmpeg?

I have a video feed that sends me audio using the ADPCM codec. However, Android only supports PCM format. How can I convert the ADPCM audio feed into a PCM audio feed?
The answer to this may be similar to the answer to this question.
I have successfully decoded the frame with this code:
int len = avcodec_decode_audio4(pAudioCodecCtx, pAudioFrame, &frameFinished, &packet);
Is the secret here to use a reverse encode function?
Here is what I have so far in my audio decode function:
if (packet_queue_get(env, javaThread, pAudioPacketQueue, &packet, 1) < 0) {
    LOGE("audio - after get packet failed");
    return;
}
LOGD("Dequeued audio packet");

// calculate frame size
int frameSize;
if (pPcmAudioCodecCtx->frame_size) {
    frameSize = pPcmAudioCodecCtx->frame_size;
} else {
    /* if frame_size is not set, the number of samples must be
     * calculated from the buffer size */
    int64_t nb_samples = (int64_t)AUDIO_PCM_OUTBUFF_SIZE * 8 /
                         (av_get_bits_per_sample(pPcmAudioCodecCtx->codec_id) *
                          pPcmAudioCodecCtx->channels);
    frameSize = nb_samples;
}
int pcmBytesPerSample = av_get_bytes_per_sample(pPcmAudioCodecCtx->sample_fmt);
int pcmFrameBytes = frameSize * pcmBytesPerSample * pPcmAudioCodecCtx->channels;

uint8_t *pDataStart = packet.data;
while (packet.size > 0) {
    int len = avcodec_decode_audio4(pAudioCodecCtx, pAudioFrame, &frameFinished, &packet);
    LOGD("Decoded ADPCM frame");
    if (len < 0) {
        LOGE("Error while decoding audio");
        return;
    }
    if (frameFinished) {
        // store frame data in FIFO buffer
        uint8_t *inputBuffer = pAudioFrame->data[0];
        int inputBufferSize = pAudioFrame->linesize[0];
        av_fifo_generic_write(fifoBuffer, inputBuffer, inputBufferSize, NULL);
        LOGD("Added ADPCM frame to FIFO buffer");

        // check if fifo buffer has enough data for a PCM frame
        while (av_fifo_size(fifoBuffer) >= pcmFrameBytes) {
            LOGI("PCM frame data in FIFO buffer");
            // read frame's worth of data from FIFO buffer
            av_fifo_generic_read(fifoBuffer, pAudioPcmOutBuffer, pcmFrameBytes, NULL);
            LOGD("Read data from FIFO buffer into pcm frame");

            avcodec_get_frame_defaults(pPcmAudioFrame);
            LOGD("Got frame defaults");
            pPcmAudioFrame->nb_samples = pcmFrameBytes / (pPcmAudioCodecCtx->channels *
                                                          pcmBytesPerSample);
            avcodec_fill_audio_frame(pPcmAudioFrame, pPcmAudioCodecCtx->channels,
                                     pPcmAudioCodecCtx->sample_fmt,
                                     pAudioPcmOutBuffer, pcmFrameBytes, 1);
            LOGD("Filled frame audio with data");

            // fill audio play buffer
            int dataSize = pPcmAudioFrame->linesize[0];
            LOGD("Data to output: %d", dataSize);
            jbyteArray audioPlayBuffer = (jbyteArray) env->GetObjectField(ffmpegCtx, env->GetFieldID(cls, "audioPlayBuffer", "[B"));
            jbyte *bytes = env->GetByteArrayElements(audioPlayBuffer, NULL);
            memcpy(bytes, pPcmAudioFrame->data[0], dataSize);
            env->ReleaseByteArrayElements(audioPlayBuffer, bytes, 0);
            LOGD("Copied data into Java array");
            env->CallVoidMethod(player, env->GetMethodID(playerCls, "updateAudio", "(I)V"), dataSize);
        }
It turns out that the audio decode functions already return 16-bit PCM, and I just didn't know how to access it properly.
Here is the altered code inside the packet loop that plays the audio based on avcodec_decode_audio4.
int len = avcodec_decode_audio4(pAudioCodecCtx, pAudioFrame, &frameFinished, &packet);
if (len < 0) {
    LOGE("Error while decoding audio");
    return;
}
if (frameFinished) {
    int planeSize;
    uint8_t *pcmBuffer = pAudioFrame->extended_data[0];
    int dataSize = av_samples_get_buffer_size(&planeSize, pAudioCodecCtx->channels,
                                              pAudioFrame->nb_samples,
                                              pAudioCodecCtx->sample_fmt, 1);

    // fill audio play buffer
    jbyteArray audioPlayBuffer = (jbyteArray) env->GetObjectField(ffmpegCtx, env->GetFieldID(cls, "audioPlayBuffer", "[B"));
    jbyte *bytes = env->GetByteArrayElements(audioPlayBuffer, NULL);
    memcpy(bytes, pcmBuffer, dataSize);
    env->ReleaseByteArrayElements(audioPlayBuffer, bytes, 0);
    env->CallVoidMethod(player, env->GetMethodID(playerCls, "updateAudio", "(I)V"), dataSize);
}
You can see sample code at http://ffmpeg.org/doxygen/trunk/doc_2examples_2decoding_encoding_8c-example.html
See the audio_encode_example function.
