Capture image with V4L2 on Jetson TX2

I am working on a computed tomography scanner. I use a Jetson TX2 for image acquisition and pre-processing; from the Jetson I control both the turntable and the camera. The camera is an FSM-IMX304M. I need access to the raw image pointer, so I want to drive the camera through V4L2 (I was advised not to use libargus for raw access, because the data is stored in the ISP and the ISP compresses it; can you confirm this?). My first problem is the V4L2 documentation: I could not find clear documentation for the C++ API. I need to control:
exposure time;
gain;
function to clear the buffer.
I found a sample on the internet that shows how V4L2 works:
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
#include <linux/ioctl.h>
#include <linux/types.h>
#include <linux/v4l2-common.h>
#include <linux/v4l2-controls.h>
#include <linux/videodev2.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <string.h>
#include <fstream>
#include <string>
using namespace std;
int main() {
// 1. Open the device
int fd; // A file descriptor to the video device
fd = open("/dev/video0",O_RDWR);
if(fd < 0){
perror("Failed to open device, OPEN");
return 1;
}
// 2. Ask the device if it can capture frames
v4l2_capability capability;
if(ioctl(fd, VIDIOC_QUERYCAP, &capability) < 0){
// something went wrong... exit
perror("Failed to get device capabilities, VIDIOC_QUERYCAP");
return 1;
}
// 3. Set Image format
v4l2_format imageFormat;
memset(&imageFormat, 0, sizeof(imageFormat)); // zero the struct so reserved fields are clean
imageFormat.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
imageFormat.fmt.pix.width = 1024;
imageFormat.fmt.pix.height = 1024;
imageFormat.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
imageFormat.fmt.pix.field = V4L2_FIELD_NONE;
// tell the device you are using this format
if(ioctl(fd, VIDIOC_S_FMT, &imageFormat) < 0){
perror("Device could not set format, VIDIOC_S_FMT");
return 1;
}
// 4. Request Buffers from the device
v4l2_requestbuffers requestBuffer = {0};
requestBuffer.count = 1; // one request buffer
requestBuffer.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; // request a buffer which we can use for capturing frames
requestBuffer.memory = V4L2_MEMORY_MMAP;
if(ioctl(fd, VIDIOC_REQBUFS, &requestBuffer) < 0){
perror("Could not request buffer from device, VIDIOC_REQBUFS");
return 1;
}
// 5. Query the buffer to get raw data, i.e. ask the device about the buffer
// we requested so we can mmap it into our address space
v4l2_buffer queryBuffer = {0};
queryBuffer.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
queryBuffer.memory = V4L2_MEMORY_MMAP;
queryBuffer.index = 0;
if(ioctl(fd, VIDIOC_QUERYBUF, &queryBuffer) < 0){
perror("Device did not return the buffer information, VIDIOC_QUERYBUF");
return 1;
}
// use a pointer to point to the newly created buffer
// mmap() will map the memory address of the device to
// an address in memory
char* buffer = (char*)mmap(NULL, queryBuffer.length, PROT_READ | PROT_WRITE, MAP_SHARED,
fd, queryBuffer.m.offset);
if(buffer == MAP_FAILED){
perror("Failed to mmap the buffer, MMAP");
return 1;
}
memset(buffer, 0, queryBuffer.length);
// 6. Get a frame
// Create a new buffer type so the device knows which buffer we are talking about
v4l2_buffer bufferinfo;
memset(&bufferinfo, 0, sizeof(bufferinfo));
bufferinfo.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
bufferinfo.memory = V4L2_MEMORY_MMAP;
bufferinfo.index = 0;
// Activate streaming
int type = bufferinfo.type;
if(ioctl(fd, VIDIOC_STREAMON, &type) < 0){
perror("Could not start streaming, VIDIOC_STREAMON");
return 1;
}
/***************************** Begin looping here *********************/
// Queue the buffer
if(ioctl(fd, VIDIOC_QBUF, &bufferinfo) < 0){
perror("Could not queue buffer, VIDIOC_QBUF");
return 1;
}
// Dequeue the buffer
if(ioctl(fd, VIDIOC_DQBUF, &bufferinfo) < 0){
perror("Could not dequeue the buffer, VIDIOC_DQBUF");
return 1;
}
// Frames get written after dequeuing the buffer
cout << "Buffer has: " << (double)bufferinfo.bytesused / 1024
<< " KBytes of data" << endl;
// Write the data out to file
ofstream outFile;
outFile.open("webcam_output.jpeg", ios::binary| ios::app);
int bufPos = 0, outFileMemBlockSize = 0; // the position in the buffer and the amount to copy from
// the buffer
int remainingBufferSize = bufferinfo.bytesused; // the remaining buffer size, is decremented by
// memBlockSize amount on each loop so we do not overwrite the buffer
char* outFileMemBlock = NULL; // a pointer to a new memory block
int itr = 0; // counts the number of iterations
while(remainingBufferSize > 0) {
bufPos += outFileMemBlockSize; // advance by the amount written on the previous loop
// (outFileMemBlockSize starts at 0, so the first
// iteration begins at the start of the buffer)
outFileMemBlockSize = 1024; // preferred output block size
// clamp the block size *before* copying, so the final partial
// block does not read or write past the end of the frame data
if(outFileMemBlockSize > remainingBufferSize)
outFileMemBlockSize = remainingBufferSize;
outFileMemBlock = new char[outFileMemBlockSize];
// copy outFileMemBlockSize bytes starting from buffer+bufPos
memcpy(outFileMemBlock, buffer+bufPos, outFileMemBlockSize);
outFile.write(outFileMemBlock,outFileMemBlockSize);
// subtract the amount just written from the remaining buffer size
remainingBufferSize -= outFileMemBlockSize;
// display the remaining buffer size
cout << itr++ << " Remaining bytes: "<< remainingBufferSize << endl;
delete[] outFileMemBlock; // new[] must be released with delete[]
}
// Close the file
outFile.close();
/******************************** end looping here **********************/
// end streaming
if(ioctl(fd, VIDIOC_STREAMOFF, &type) < 0){
perror("Could not end streaming, VIDIOC_STREAMOFF");
return 1;
}
close(fd);
return 0;
}
On the Jetson, the code compiles fine, but I can't run it: it blocks at this step:
// Dequeue the buffer
if(ioctl(fd, VIDIOC_DQBUF, &bufferinfo) < 0){
perror("Could not dequeue the buffer, VIDIOC_DQBUF");
return 1;
}
It is as if the code were stuck in an endless loop. I have tested the same sample on my personal computer running Ubuntu 18.04, and there it works fine.

I do not have this sensor, but I assume that:
Your pixel format is set incorrectly: imageFormat.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
Most likely this sensor outputs 12-bit raw data, so you want V4L2_PIX_FMT_Y12 or one of the formats it actually supports (Mono8/10/12/16, Bayer8/10/12/16, RGB8, YUV422, YUV411).
You can view the available formats in the Linux kernel headers here: https://elixir.bootlin.com/linux/v4.9.237/source/include/uapi/linux/videodev2.h#L499
Check the documentation for your sensor.
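You can also enumerate the formats the driver actually advertises from code, with VIDIOC_ENUM_FMT; a minimal sketch, reusing the fd opened in the sample above:
v4l2_fmtdesc fmtDesc;
memset(&fmtDesc, 0, sizeof(fmtDesc));
fmtDesc.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
// walk the index until the driver reports no more formats
while (ioctl(fd, VIDIOC_ENUM_FMT, &fmtDesc) == 0) {
// pixelformat is a FourCC packed into 32 bits
printf("format %u: %.4s (%s)\n", fmtDesc.index,
(const char*)&fmtDesc.pixelformat, (const char*)fmtDesc.description);
fmtDesc.index++;
}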
Since Nvidia developers have extended the v4l2 subsystem, you need to use the following controls to adjust exposure and gain: TEGRA_CAMERA_CID_EXPOSURE and TEGRA_CAMERA_CID_GAIN. See the file tegra-v4l2-camera.h.
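These are 64-bit controls, so they go through VIDIOC_S_EXT_CTRLS rather than VIDIOC_S_CTRL. A minimal sketch of setting one of them (the helper name is mine; the TEGRA_CAMERA_CID_* constants come from tegra-v4l2-camera.h in the L4T kernel sources, and the units and ranges are driver-specific):
// set a 64-bit V4L2 extended control on an already-open device fd
static int setCameraControl(int fd, unsigned int id, long long value) {
v4l2_ext_control ctrl;
v4l2_ext_controls ctrls;
memset(&ctrl, 0, sizeof(ctrl));
memset(&ctrls, 0, sizeof(ctrls));
ctrl.id = id;
ctrl.value64 = value; // 64-bit control value
ctrls.ctrl_class = V4L2_CTRL_ID2CLASS(id); // control class derived from the id
ctrls.count = 1;
ctrls.controls = &ctrl;
return ioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls);
}
// e.g. setCameraControl(fd, TEGRA_CAMERA_CID_EXPOSURE, 28000);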
And also check the sensor controls:
v4l2-ctl --list-ctrls
....
gain 0x009a2009 (int64): min = 0 max = 480 step = 1 default = 0 value = 0 flags = slider
exposure 0x009a200a (int64): min = 28 max = 1000000 step = 1 default = 27879 value = 28 flags = slider
.....
Examples of receiving raw data from the camera can also be found in Nvidia's multimedia API samples:
https://docs.nvidia.com/jetson/l4t-multimedia/mmapi_build.html
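One item from the question that the answer above does not address is the "function to clear the buffer". V4L2 has no dedicated flush ioctl; the usual approach (a sketch, not sensor-specific) is to toggle streaming, since VIDIOC_STREAMOFF removes all buffers from both the incoming and outgoing queues:
// drop any queued or filled frames, then restart the stream
int bufType = V4L2_BUF_TYPE_VIDEO_CAPTURE;
if (ioctl(fd, VIDIOC_STREAMOFF, &bufType) < 0)
perror("VIDIOC_STREAMOFF");
// re-queue the mmap'ed buffers with VIDIOC_QBUF here, as in step 6
if (ioctl(fd, VIDIOC_STREAMON, &bufType) < 0)
perror("VIDIOC_STREAMON");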

Related

ALSA buffer underrun

I am trying to write random noise to a device and let my loop sleep once I have written enough data. My understanding is that each call to snd_pcm_writei writes 162 bytes (81 frames), which at an 8 kHz rate in 16-bit format should be enough audio for ~10 ms. I have verified that ALSA does report 81 frames written.
I would expect to be able to sleep for a short time before waking up and pushing the next 10 ms worth of data. However, when I sleep for any amount, even a single ms, I start to get buffer underrun errors.
Obviously I have made an incorrect assumption somewhere. Can anyone point me to what I may be missing? I have removed most error checking to shorten the code, but there are no errors initializing the ALSA system on my end. I would like to be able to push 10 ms of audio and sleep (even for 1 ms) before pushing the next 10 ms.
#include <alsa/asoundlib.h>
#include <spdlog/spdlog.h>
#include <unistd.h> // usleep()
int main(int argc, char **argv) {
snd_pcm_t* handle;
snd_pcm_hw_params_t* hw;
unsigned int rate = 8000;
unsigned long periodSize = rate / 100; //period every 10 ms
int err = snd_pcm_open(&handle, "default", SND_PCM_STREAM_PLAYBACK, 0);
snd_pcm_hw_params_malloc(&hw);
snd_pcm_hw_params_any(handle, hw);
snd_pcm_hw_params_set_access(handle, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
snd_pcm_hw_params_set_format(handle, hw, SND_PCM_FORMAT_S16_LE);
snd_pcm_hw_params_set_rate(handle, hw, rate, 0);
snd_pcm_hw_params_set_channels(handle, hw, 1);
int dir = 1;
snd_pcm_hw_params_set_period_size_near(handle, hw, &periodSize, &dir);
snd_pcm_hw_params(handle, hw);
snd_pcm_uframes_t frames;
snd_pcm_hw_params_get_period_size(hw, &frames, &dir);
int size = frames * 2; // two bytes a sample
char* buffer = (char*)malloc(size);
unsigned int periodTime;
snd_pcm_hw_params_get_period_time(hw,&periodTime, &dir);
snd_pcm_hw_params_free(hw);
snd_pcm_prepare(handle);
char* randomNoise = new char[size];
for(int i = 0; i < size; i++)
randomNoise[i] = random() % 0xFF;
while(true) {
err = snd_pcm_writei(handle, randomNoise, size/2); // writei takes frames, not bytes; size/2 == frames for S16 mono
if(err > 0) {
spdlog::info("Write {} frames", err);
} else {
spdlog::error("Error write {}\n", snd_strerror(err));
snd_pcm_recover(handle, err, 0);
continue;
}
usleep(1000); // <---- This is what causes the buffer underrun
}
}
Try putting this in /etc/pulse/daemon.conf:
default-fragments = 5
default-fragment-size-msec = 2
and restart Linux.
What I don't understand is why you write a buffer of "size" bytes to the device while your timing calculations rely on the "periodSize" you declared. Write a buffer of "periodSize" frames to the device instead.
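Another thing worth checking, as a sketch rather than a certain fix: explicitly request a device buffer of several periods before calling snd_pcm_hw_params(), so that one 10 ms write plus a short sleep cannot drain the whole buffer. The factor of 4 below is an assumption:
// request a buffer of ~4 periods (about 40 ms at 8 kHz) so the
// device keeps playing while the writer thread sleeps;
// must be called before snd_pcm_hw_params(handle, hw)
snd_pcm_uframes_t bufferSize = periodSize * 4;
snd_pcm_hw_params_set_buffer_size_near(handle, hw, &bufferSize);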

sndio sio_onmove not calling back

I'm trying to write a fullduplex test that copies audio in to audio out. sio_onmove does not get called. I have no idea why. Here's my code so far:
#include <stdio.h>
#include <stdlib.h>
#include <sndio.h>
#include <unistd.h> // sleep()
unsigned char buf[0xffff];
struct sio_hdl *hdl;
void cb(void *arg, int delta) {
int l;
printf("call %d\n", delta);
for(;;) {
l = sio_read(hdl, buf, delta);
if(l==0) break;
sio_write(hdl, buf, l);
}
}
int main(void) {
int m, i;
struct sio_par par;
struct sio_cap cap;
hdl = sio_open("rsnd/0", SIO_PLAY | SIO_REC , 1);
sio_getcap(hdl, &cap);
sio_initpar( &par);
par.bits = cap.enc[0].bits;
par.bps = cap.enc[0].bps;
par.sig = cap.enc[0].sig;
par.le = cap.enc[0].le;
par.msb = cap.enc[0].msb;
par.rchan=cap.rchan[0];
par.pchan=cap.pchan[0];
par.rate =cap.rate[0];
par.appbufsz = 1024;
sio_setpar(hdl, &par);
sio_onmove(hdl, cb, NULL);
sio_start(hdl);
for(;;)
sleep(1);
}
I'm initializing rsnd/0 for recording and playback. I initialize the parameters from a getcap call, then set cb as the callback for onmove, and then start audio. From there I loop forever, doing nothing.
The sio_onmove() callback is called either from sio_revents(), if non-blocking I/O is used, or from blocking sio_read() or sio_write().
Since the above program calls sleep(1) instead, the callback is never invoked.
AFAIU, to do the full-duplex test you could use blocking I/O (set the last argument of the sio_open() function to 0) and follow these steps:
call sio_initpar() to initialize a sio_par structure, as you do
set your preferred parameters in the sio_par structure
call sio_setpar() to submit them to the device. devices exposed through the server (ex. "snd/0") will accept any parameters, while raw devices (ex. "rsnd/0") pick something close to whatever the hardware supports.
call sio_getpar() to get the parameters the device accepted, this is needed to get the device buffer size
possibly check if they are usable by your program
call sio_start()
prime the play buffer by writing par.bufsz samples with sio_write(). This corresponds to: par.bufsz * par.pchan * par.bps bytes.
At this stage, device starts and you could do the main-loop as with the following pseudo-code:
unsigned char *data;
size_t n, todo, blksz;
blksz = par.round * par.rchan * par.bps;
for (;;) {
/* read one block */
data = buf;
todo = blksz;
while (todo > 0) {
n = sio_read(hdl, data, todo);
if (n == 0)
errx(1, "failed");
todo -= n;
data += n;
}
/* write one block */
n = sio_write(hdl, buf, blksz);
if (n != blksz)
errx(1, "failed");
}
The sio_onmove() callback is not needed for pure audio programs. It's only useful to synchronize non-audio events (e.g. video, MIDI messages) to the audio stream.
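A minimal sketch of the priming step from the list above, assuming the blocking hdl and the par filled in by sio_getpar(), with errx() from <err.h> as in the pseudo-code:
/* prime the playback buffer with par.bufsz frames of silence
   (done right after sio_start()) so the device has data to play
   from the very first tick */
size_t primebytes = par.bufsz * par.pchan * par.bps;
unsigned char *silence = calloc(1, primebytes);
if (silence == NULL || sio_write(hdl, silence, primebytes) != primebytes)
    errx(1, "priming failed");
free(silence);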

Does OpenAL-Soft have an upper limit on the number of sources?

I'm using OpenAL-Soft for a project, and right now I'm trying to decide whether I need to implement OpenAL source pooling.
Source pooling is somewhat cumbersome (I need to write code to "allocate" sources, as well as somehow decide when they can be "freed"), but necessary if the number of sources that can be generated by OpenAL is limited.
Since OpenAL-Soft is a software implementation of the OpenAL API, I wonder if the number of sources it can generate is actually limited by the underlying hardware. Theoretically, since all mixing is done in software, there might be no need to actually use one hardware channel per source.
However, I'm not sure about it. How should I proceed?
It appears that OpenAL-Soft indeed does have an upper limit on the number of sources, which can be defined in a config file. The default seems to be 256. It makes sense to limit the number of sources because of the associated CPU and memory costs. Looks like I'll end up implementing a source pool after all.
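If you'd rather verify the limit at runtime than trust the config default, one approach is to keep generating sources until alGenSources() reports an error. A minimal sketch, assuming an open device and current context as in the code below:
// count how many sources this implementation will grant
#define MAX_PROBE 4096
ALuint probe[MAX_PROBE];
int granted = 0;
alGetError(); // clear any stale error state
while (granted < MAX_PROBE) {
    alGenSources(1, &probe[granted]);
    if (alGetError() != AL_NO_ERROR)
        break; // implementation refused another source
    granted++;
}
printf("implementation granted %d sources\n", granted);
alDeleteSources(granted, probe); // release them again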
I just took a peek at its header ... did not see anything pop out.
Here is working code which synthesizes and then renders audio buffer data ... you could experiment with it to see whether it accommodates the number of sources you need:
// gcc -o openal_play_wed openal_play_wed.c -lopenal -lm
#include <stdio.h>
#include <stdlib.h> // gives malloc
#include <math.h>
#include <unistd.h> // gives sleep(), used in the playback loop
#ifdef __APPLE__
#include <OpenAL/al.h>
#include <OpenAL/alc.h>
#elif __linux
#include <AL/al.h>
#include <AL/alc.h>
#endif
ALCdevice * openal_output_device;
ALCcontext * openal_output_context;
ALuint internal_buffer;
ALuint streaming_source[1];
int al_check_error(const char * given_label) {
ALenum al_error;
al_error = alGetError();
if(AL_NO_ERROR != al_error) {
printf("ERROR - %s (%s)\n", alGetString(al_error), given_label);
return al_error;
}
return 0;
}
void MM_init_al() {
const char * defname = alcGetString(NULL, ALC_DEFAULT_DEVICE_SPECIFIER);
openal_output_device = alcOpenDevice(defname);
openal_output_context = alcCreateContext(openal_output_device, NULL);
alcMakeContextCurrent(openal_output_context);
// setup buffer and source
alGenBuffers(1, & internal_buffer);
al_check_error("failed call to alGenBuffers");
}
void MM_exit_al() {
ALenum errorCode = 0;
// Stop the sources
alSourceStopv(1, & streaming_source[0]); // streaming_source
int ii;
for (ii = 0; ii < 1; ++ii) {
alSourcei(streaming_source[ii], AL_BUFFER, 0);
}
// Clean-up
alDeleteSources(1, &streaming_source[0]);
alDeleteBuffers(1, &internal_buffer); // delete the one buffer generated in MM_init_al()
errorCode = alGetError();
alcMakeContextCurrent(NULL);
errorCode = alGetError();
alcDestroyContext(openal_output_context);
alcCloseDevice(openal_output_device);
}
void MM_render_one_buffer() {
/* Fill buffer with Sine-Wave */
// float freq = 440.f;
float freq = 100.f;
float incr_freq = 0.1f;
int seconds = 4;
// unsigned sample_rate = 22050;
unsigned sample_rate = 44100;
double my_pi = 3.14159;
size_t buf_size = seconds * sample_rate;
short * samples = malloc(sizeof(short) * buf_size);
printf("\nhere is freq %f\n", freq);
int i=0;
for(; i<buf_size; ++i) {
samples[i] = 32760 * sin( (2.f * my_pi * freq)/sample_rate * i );
freq += incr_freq;
// incr_freq += incr_freq;
// freq *= factor_freq;
if (100.0 > freq || freq > 5000.0) {
incr_freq *= -1.0f;
}
}
/* upload buffer to OpenAL */
alBufferData( internal_buffer, AL_FORMAT_MONO16, samples, buf_size * sizeof(short), sample_rate); // size is in bytes, not samples
al_check_error("populating alBufferData");
free(samples);
/* Set-up sound source and play buffer */
// ALuint src = 0;
// alGenSources(1, &src);
// alSourcei(src, AL_BUFFER, internal_buffer);
alGenSources(1, & streaming_source[0]);
alSourcei(streaming_source[0], AL_BUFFER, internal_buffer);
// alSourcePlay(src);
alSourcePlay(streaming_source[0]);
// ---------------------
ALenum current_playing_state;
alGetSourcei(streaming_source[0], AL_SOURCE_STATE, & current_playing_state);
al_check_error("alGetSourcei AL_SOURCE_STATE");
while (AL_PLAYING == current_playing_state) {
printf("still playing ... so sleep\n");
sleep(1); // should use a thread sleep NOT sleep() for a more responsive finish
alGetSourcei(streaming_source[0], AL_SOURCE_STATE, & current_playing_state);
al_check_error("alGetSourcei AL_SOURCE_STATE");
}
printf("end of playing\n");
/* Dealloc OpenAL */
MM_exit_al();
} // MM_render_one_buffer
int main() {
MM_init_al();
MM_render_one_buffer();
}

PIC18F String to Bluetooth to Tera Term

I am trying to output a string from the PIC's USART and have it display in Tera Term. I am using:
PIC18F4331
Sparkfun Bluesmirf RN-42
MPLAB v8.85
Tera Term
I've been working on this code for a couple of days and I am not seeing a single response. A couple of things that I think may be causing the issue are the baud rate and/or the lack of an interrupt routine. But is there a need for an interrupt if I am only attempting to transmit? Please, can someone guide me? Also, when using printf, I do see a response through the Bluetooth, but in strange symbolic form, for example ÿÿÿ.
The code is a modification of one found online.
// Libraries
#include <p18f4331.h>
#include <stdio.h>
// Configurations
#pragma config OSC = XT
#pragma config WDTEN = OFF
#pragma config PWRTEN = OFF
#pragma config FCMEN = OFF
#pragma config IESO = OFF
#pragma config BOREN = ON
#pragma config BORV = 27
#pragma config WDPS = 128
#pragma config T1OSCMX = ON
#pragma config PWMPIN = ON
#pragma config MCLRE = ON
#pragma config LVP = OFF
#pragma config STVREN = OFF
#pragma config PWM4MX = RD5
// Definitions
#define _XTAL_FREQ 4000000
#define BAUDRATE 9600
void EUSART(void)
{
TRISC = 0b10000000;
SPBRG = 25;
TXSTAbits.CSRC = 0; // Baud Rate Generated Externally
TXSTAbits.TX9 = 0; // 8-Bit Transmission
TXSTAbits.TXEN = 1; // Transmit Enabled
TXSTAbits.SYNC = 0; // Asynchronous Mode
TXSTAbits.BRGH = 1; // High Baud Rate
TXSTAbits.TRMT = 0; // TRMT is a read-only TSR-empty status bit; writing it has no effect
RCSTAbits.SPEN = 1; // Serial Port Enabled
RCSTAbits.RX9 = 0; // 8-Bit Reception
RCSTAbits.CREN = 1; // Enables Receive
}
void SendByteSerially(unsigned char Byte) // Writes a character to the serial port
{
while(!PIR1bits.TXIF) ; // wait for previous transmission to finish
TXREG = Byte;
}
unsigned char ReceiveByteSerially(void) // Reads a character from the serial port
{
while(!PIR1bits.RCIF) continue; // Wait until a byte has been received
return RCREG;
}
void SendStringSerially(const rom unsigned char* st)
{
while(*st) SendByteSerially(*st++);
}
void delayMS(unsigned int x)
{
unsigned char y;
for(;x > 0; x--) for(y=0; y< 82;y++);
}
void main(void)
{
unsigned char SerialData;
EUSART();
SendStringSerially("Hello World");
while(1)
{
SerialData = ReceiveByteSerially();
SendByteSerially(SerialData);
delayMS(1000);
}
}
You are using a PIC18; make sure BRG16 equals 0, since you're using BRGH:
BAUDCTL.BRG16 = 0;
because SPBRGH is the high byte of the 16-bit baud-rate generator, and leaving BRG16 set would change the baud rate of your USART.
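For reference, as a sanity check on the values above: in asynchronous mode with BRGH = 1 and BRG16 = 0, the baud rate is Fosc / (16 * (SPBRG + 1)). With Fosc = 4 MHz and SPBRG = 25 that gives 4000000 / (16 * 26) ≈ 9615 baud, only about 0.16% off 9600, which is well within tolerance; so the generator setting itself looks right, provided the BlueSMiRF side is actually configured for 9600.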
Plus, be sure you're in the PIR1 bank. In MPLAB, that would be
banksel PIR1; //Not sure if there's an ending comma
My two functions to transmit via UART when properly initialized ( In MikroC ) :
void vTx232 (UC ucSend)
{
STATUS.RP0 = PIR1; // Make sure we're in the PIR1 bank
while (PIR1.TXIF == 0);//While last TX not done
TXREG = ucSend; //Input param into TXREG
}
void vTxString(UC *ucpString)
{
while (*ucpString!= 0x00) //While string not at its end
{
vTx232(*ucpString); //Send string character
ucpString++; //Increm string pointer
}
}

ffmpeg/libavcodec memory management

The libavcodec documentation is not very specific about when to free allocated data and how to free it. After reading through documentation and examples, I've put together the sample program below. There are some specific questions inlined in the source but my general question is, am I freeing all memory properly in the code below? I realize the program below doesn't do any cleanup after errors -- the focus is on final cleanup.
The testfile() function is the one in question.
extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
}
#include <cstdio>
using namespace std;
void AVFAIL (int code, const char *what) {
char msg[500];
av_strerror(code, msg, sizeof(msg));
fprintf(stderr, "failed: %s\nerror: %s\n", what, msg);
exit(2);
}
#define AVCHECK(f) do { int e = (f); if (e < 0) AVFAIL(e, #f); } while (0)
#define AVCHECKPTR(p,f) do { p = (f); if (!p) AVFAIL(AVERROR_UNKNOWN, #f); } while (0)
void testfile (const char *filename) {
AVFormatContext *format;
unsigned streamIndex;
AVStream *stream = NULL;
AVCodec *codec;
SwsContext *sws;
AVPacket packet;
AVFrame *rawframe;
AVFrame *rgbframe;
unsigned char *rgbdata;
av_register_all();
// load file header
AVCHECK(av_open_input_file(&format, filename, NULL, 0, NULL));
AVCHECK(av_find_stream_info(format));
// find video stream
for (streamIndex = 0; streamIndex < format->nb_streams && !stream; ++ streamIndex)
if (format->streams[streamIndex]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
stream = format->streams[streamIndex];
if (!stream) {
fprintf(stderr, "no video stream\n");
exit(2);
}
// initialize codec
AVCHECKPTR(codec, avcodec_find_decoder(stream->codec->codec_id));
AVCHECK(avcodec_open(stream->codec, codec));
int width = stream->codec->width;
int height = stream->codec->height;
// initialize frame buffers
int rgbbytes = avpicture_get_size(PIX_FMT_RGB24, width, height);
AVCHECKPTR(rawframe, avcodec_alloc_frame());
AVCHECKPTR(rgbframe, avcodec_alloc_frame());
AVCHECKPTR(rgbdata, (unsigned char *)av_mallocz(rgbbytes));
AVCHECK(avpicture_fill((AVPicture *)rgbframe, rgbdata, PIX_FMT_RGB24, width, height));
// initialize sws (for conversion to rgb24)
AVCHECKPTR(sws, sws_getContext(width, height, stream->codec->pix_fmt, width, height, PIX_FMT_RGB24, SWS_FAST_BILINEAR, NULL, NULL, NULL));
// read all frames from the file
while (av_read_frame(format, &packet) >= 0) {
int frameok = 0;
if (packet.stream_index == (int)streamIndex)
AVCHECK(avcodec_decode_video2(stream->codec, rawframe, &frameok, &packet));
av_free_packet(&packet); // Q: is this necessary or will next av_read_frame take care of it?
if (frameok) {
sws_scale(sws, rawframe->data, rawframe->linesize, 0, height, rgbframe->data, rgbframe->linesize);
// would process rgbframe here
}
// Q: is there anything i need to free here?
}
// CLEANUP: Q: am i missing anything / doing anything unnecessary?
av_free(sws); // Q: is av_free all i need here?
av_free_packet(&packet); // Q: is this necessary (av_read_frame has returned < 0)?
av_free(rgbframe);
av_free(rgbdata);
av_free(rawframe); // Q: i can just do this once at end, instead of in loop above, right?
avcodec_close(stream->codec); // Q: do i need av_free(codec)?
av_close_input_file(format); // Q: do i need av_free(format)?
}
int main (int argc, char **argv) {
if (argc != 2) {
fprintf(stderr, "usage: %s filename\n", argv[0]);
return 1;
}
testfile(argv[1]);
}
Specific questions (numbered to match the answers below):
1) Is there anything I need to free in the frame processing loop, or will libav take care of memory management there for me?
2) Is av_free the correct way to free an SwsContext?
3) The frame loop exits when av_read_frame returns < 0. In that case, do I still need to av_free_packet when it's done?
4) Do I need to call av_free_packet every time through the loop, or will av_read_frame free/reuse the old AVPacket automatically?
5) I can just av_free the AVFrames at the end, instead of reallocating them each time through the loop, correct? It seems to work fine, but I'd like to confirm that it works because it's supposed to, rather than by luck.
6) Do I need to av_free(codec) the AVCodec or do anything else after avcodec_close on the AVCodecContext?
7) Do I need to av_free(format) the AVFormatContext or do anything else after av_close_input_file?
I also realize that some of these functions are deprecated in current versions of libav. For reasons that are not relevant here, I have to use them.
Those functions are not just deprecated, they were removed entirely some time ago, so you should really consider upgrading.
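(For reference, the modern equivalents are: avformat_open_input() for av_open_input_file(); avformat_find_stream_info() for av_find_stream_info(); avcodec_open2() for avcodec_open(); av_frame_alloc()/av_frame_free() for avcodec_alloc_frame()/av_free() on frames; av_packet_unref() for av_free_packet(); and avformat_close_input() for av_close_input_file().)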
Anyway, as for your questions:
1) no, nothing more to free
2) no, use sws_freeContext()
3) no, if av_read_frame() returns an error then the packet does not contain any valid data
4) yes, you have to free the packet after you're done with it and before the next av_read_frame() call
5) yes, it's perfectly valid
6) no, the codec context itself is allocated by libavformat, so av_close_input_file() is responsible for freeing it; nothing more for you to do
7) no, av_close_input_file() frees the format context, so there should be nothing more for you to do
