I have set up the resizer in single shot mode to decode data using the user pointer interface in dvsdk_4_02_00_06.
It appears to work, but eventually the video output freezes because the thread gets stuck. It never returns with an error; it just hangs. I've searched the forums for ideas and found some posts referencing a clock divider for the resizer module. When I adjust the clock divider to slow down the clock, reliability improves, but the frame rate goes down, as I would expect.
The DM368 does not have a resizer on the Rx path. We use the IPIPE resizer for both the Tx path (encoder and PIP) and the Rx path (decoder) by putting the IPIPE into single-shot mode.
We are introducing a small delay of 5 ms between calls to the IPIPE resizer, i.e.:
while (1)
{
    Transmit (Tx path):
    camera ---> CCDC ---> mem ---> IPIPE --+--> mem ---> channel 1 ---> encoder
                                           +--> mem ---> channel 2 ---> PIP (display local image)
    5 ms sleep

    Receive (Rx path):
    decoder o/p ---> mem ---> IPIPE ---> memory
    5 ms sleep
}
It seems that the IPIPE hangs if we do not introduce the sleep between the forward and reverse paths. Please note that the IPIPE is reconfigured for the Tx and Rx paths on every frame, and the IPIPE input and output formats differ between the Tx and Rx paths. Because of the sleep, the frame rate is reduced.
Why is the sleep required between these operations?
Can it be avoided? How?
I have a Raspberry Pi managing and writing to an FPGA. I use the SPI bus and some GPIOs to get the data over the link. The SPI writes happen in bursts of variable length - from a few tens of bytes up to about 8 kB.
The FPGA's SPI receive code has a timeout which I initially set to around 12 ms (a guesstimated value). This is almost always OK, but today I found out that the very occasional timeouts on the FPGA seem to be caused by the RPi occasionally pausing for about 15 ms in the middle of sending a byte.
The RPi python driver code looks pretty much like this:
for b in byte_array:
    send_byte(b)
where send_byte() is a simple function that calls the SPI byte write function in the GPIO library, with some checking of a BUSY line and retrying.
Generally, bytes go out a few hundred microseconds apart, so this sudden pause is odd. I am thinking it is probably caused by some sort of context switch - but the Pi is not doing much else (and running the stock Linux distro).
None of this is a big problem; I can easily increase the timeout. But I am curious: what causes a chip running Linux at a gigahertz or more to suddenly stop doing the only thing it is being asked to do for about 15 ms - which is an eternity in this context?
If it WAS a problem - what could I do about it?
I have a very interesting problem.
I am running a custom movie player, built with an NDK/C++/CMake toolchain, that opens a streaming URL (MP4, H.264 video and stereo audio). To resume from a given position, the player opens the stream, buffers frames up to some length, then seeks to the new position and starts decoding and playing. This works fine every time except when we power-cycle the device and follow the same steps.
This was reproduced on a few versions of the software (plugin built against android-22 through android-26) and hardware (LG G6, LG G5, and LeEco). The issue does not happen if you keep the app open for 10 minutes.
I am looking for possible areas of concern. I have experimented with the decode logic (it is based on the approach described as synchronous processing using buffers).
Edit - More Information (4/23)
I modified the player to pick a stream and play only video instead of video+audio. This resulted in constant starvation and therefore rebuffering. The behavior appears to vary across Android versions (no firm data here). I do believe I am running into decoder starvation. Previously I had set timeouts of 0 for both AMediaCodec_dequeueInputBuffer and AMediaCodec_dequeueOutputBuffer; I then tried values of 1000 and 10000 on the input side, but it did not make much difference.
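For context, the decode path follows the usual synchronous NDK pattern; a simplified sketch (my own buffer management omitted; the timeouts shown are the values I tried) looks like this:

#include <media/NdkMediaCodec.h>
#include <media/NdkMediaExtractor.h>

// One iteration of the synchronous decode loop. 'codec' and 'extractor' are
// assumed to be already configured and started.
static void pump_once(AMediaCodec *codec, AMediaExtractor *extractor) {
    ssize_t inIdx = AMediaCodec_dequeueInputBuffer(codec, 1000 /* us */);
    if (inIdx >= 0) {
        size_t cap = 0;
        uint8_t *buf = AMediaCodec_getInputBuffer(codec, inIdx, &cap);
        ssize_t n = AMediaExtractor_readSampleData(extractor, buf, cap);
        if (n < 0) {
            AMediaCodec_queueInputBuffer(codec, inIdx, 0, 0, 0,
                                         AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM);
        } else {
            AMediaCodec_queueInputBuffer(codec, inIdx, 0, n,
                                         AMediaExtractor_getSampleTime(extractor), 0);
            AMediaExtractor_advance(extractor);
        }
    }

    AMediaCodecBufferInfo info;
    ssize_t outIdx = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000 /* us */);
    if (outIdx >= 0) {
        // Release (and render) once the frame's presentation time is due.
        AMediaCodec_releaseOutputBuffer(codec, outIdx, true);
    }
}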
My player is based on the NDK/C++ interface to MediaCodec; the CMake build passes -DANDROID_ABI="armeabi-v7a with NEON", -DANDROID_NATIVE_API_LEVEL="android-22", and c++_static for the STL.
Can anyone share what timeouts they have used successfully, or anything else that would help avoid starvation and the resulting rebuffering?
This is solved for now. The starvation was not caused on the decoding side; rather, frames were being consumed too quickly because the clock values returned were out of sync. I was using clock_gettime with the CLOCK_MONOTONIC clock id, which is the recommended approach, but it ran fast for the first 5-10 minutes after restarting the device. This device only had a Wi-Fi connection. Changing the clock id to CLOCK_REALTIME ensures correct presentation of frames and no starvation.
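For anyone hitting the same thing, here is a minimal sketch of the clock query in question (now_us is just an illustrative helper name, not from my real code):

#include <stdint.h>
#include <time.h>

// Presentation clock helper. Switching the clock id from CLOCK_MONOTONIC to
// CLOCK_REALTIME is what resolved the drift during the first minutes after boot.
static int64_t now_us(void) {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);   // was CLOCK_MONOTONIC
    return (int64_t)ts.tv_sec * 1000000LL + ts.tv_nsec / 1000;
}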
I'm using the hwmon/mxc_mma8451.c module to access an accelerometer. Using /sys/devices/virtual/input/input0/poll I can change the polling rate to some degree: if I set a larger millisecond value, the polling becomes slower. However, I cannot seem to get below around 30 ms per poll, even though the device driver source apparently allows as little as 1 ms per poll. The accelerometer itself supports an 800 Hz sample rate, so that is not the bottleneck. When I write a value of 1 to the file above, each sample occurs either 30 ms or 60 ms after the previous one, so it is not even consistent. Even 30 ms is unacceptably slow, as that is only about 33 Hz.
The kernel source for the module clearly shows that I should be able to use a value of 1:
#define POLL_INTERVAL_MIN 1
#define POLL_INTERVAL_MAX 500
#define POLL_INTERVAL 100 /* msecs */
...
mma8451_idev->poll_interval = POLL_INTERVAL;
mma8451_idev->poll_interval_min = POLL_INTERVAL_MIN;
mma8451_idev->poll_interval_max = POLL_INTERVAL_MAX;
I'm not familiar with exactly how Linux does this kind of polling, but this system has a 10 ms tick, so even if sampling is tied to ticks, why does it take 3 or 6 ticks per sample and nothing in between? Is there some kernel parameter somewhere else that is throttling how fast polling can occur?
Linux kernel version is 3.14.28 for IMX28 (ARM) if that makes any difference. This is the version available for the device in question, so I can't just up and use a different/newer one.
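For what it's worth, my reading of the generic input-polldev helper (which I believe this driver sits on top of) is that the poll work is re-queued roughly like the sketch below; I'm paraphrasing from memory, so treat it as illustrative rather than the exact 3.14 source:

/* Roughly how the polled-input core schedules the next poll: the millisecond
 * interval is converted to jiffies, so with HZ=100 the effective resolution
 * is the 10 ms tick, and msecs_to_jiffies(1) still means one full jiffy. */
static void queue_next_poll(struct input_polled_dev *dev)
{
        unsigned long delay = msecs_to_jiffies(dev->poll_interval);

        if (delay >= HZ)
                delay = round_jiffies_relative(delay);

        queue_delayed_work(system_freezable_wq, &dev->work, delay);
}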
I recently designed a sound recorder on a Mac using Audio Units. It was designed to behave like a video security system, recording continuously, with a graphical display of power levels for playback browsing.
I've noticed that every 85 minutes, distortion appears for about 3 minutes. After a day of elimination, it appears that the sound acquisition that occurs before the callback uses a circular buffer, and the callback's AudioUnitRender extracts from this buffer at a slightly slower rate, so the internal buffer's write pointer eventually wraps around and catches up with the AudioUnitRender reads. A duplex-operation test shows the latency increasing continuously; after 85 minutes you hear about 200-300 ms of latency, and the noise begins because the rendered frame combines buffer segments from the end and the beginning of the buffer, i.e. long and short latencies. As the pointers drift apart again, the noise disappears and you hear clean audio with the original short latency; then it repeats 85 minutes later. Even with low-impact callback processing this still happens. I've seen some posts about latency but none about this kind of clash. Has anyone seen this?
OS X 10.9.5, Xcode 6.1.1
Code details:
//modes 1=playback, 2=record, 3=both
AudioComponentDescription outputcd = {0}; // 10.6 version
outputcd.componentType = kAudioUnitType_Output;
outputcd.componentSubType = kAudioUnitSubType_HALOutput; //allows duplex
outputcd.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent comp = AudioComponentFindNext (NULL, &outputcd);
if (comp == NULL) {printf ("can't get output unit");exit (-1);}
CheckError (AudioComponentInstanceNew(comp, au),"Couldn't open component for outputUnit");
//tell the input bus that it's input, tell the output bus that it's output
if(mode==1 || mode==3) r=[self setAudioMode:*au :0];//play
if(mode==2 || mode==3) r=[self setAudioMode:*au :1];//rec
// register render callback
if(mode==1 || mode==3) [self setCallBack:*au :0];
if(mode==2 || mode==3) [self setCallBack:*au :1];
// if(mode==2 || mode==3) [self setAllocBuffer:*au];
// get default stream, change amt of channels
AudioStreamBasicDescription audioFormat;
UInt32 k=sizeof(audioFormat);
r = AudioUnitGetProperty(*au,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output,
                         1,
                         &audioFormat,
                         &k);
audioFormat.mChannelsPerFrame = 1;
r = AudioUnitSetProperty(*au,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output,
                         1,
                         &audioFormat,
                         k);
//start
CheckError (AudioUnitInitialize(outputUnit),"Couldn't initialize output unit");
//record callback
OSStatus RecProc(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData)
{
myView * mv2=(__bridge myView*)inRefCon;
AudioBuffer buffer,buffer2;
OSStatus status;
buffer.mDataByteSize = inNumberFrames *4 ;// buffer size
buffer.mNumberChannels = 1; // one channel
buffer.mData =mv2->rdata;
buffer2.mDataByteSize = inNumberFrames *4 ;// buffer size
buffer2.mNumberChannels = 1; // one channel
buffer2.mData =mv2->rdata2;
AudioBufferList bufferList;
bufferList.mNumberBuffers = 2;
bufferList.mBuffers[0] = buffer;
bufferList.mBuffers[1] = buffer2;
status = AudioUnitRender(mv2->outputUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
[mv2 recproc :mv2->rdata :mv2->rdata2 :inNumberFrames];
return noErr;
}
You seem to be using the HAL output unit for pulling input. There might not be a guarantee that the input device and output device sample rates are exactly locked. Any slight drift in the sample rate of either device could eventually cause a buffer underflow or overflow.
One solution might be to find and set an input device for a separate input audio unit instead of depending on the default output unit. Try a USB mic, for instance.
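A minimal sketch of what that might look like, assuming inputUnit is a second AUHAL instance created the same way as in your code; this is illustrative, not your existing setup:

// Pick the default input device and attach it to an input-only AUHAL unit.
AudioDeviceID deviceID = kAudioObjectUnknown;
UInt32 size = sizeof(deviceID);
AudioObjectPropertyAddress defIn = {
    kAudioHardwarePropertyDefaultInputDevice,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};
AudioObjectGetPropertyData(kAudioObjectSystemObject, &defIn, 0, NULL, &size, &deviceID);

UInt32 enable = 1, disable = 0;
AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &enable, sizeof(enable));    // enable input (element 1)
AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Output, 0, &disable, sizeof(disable)); // disable output (element 0)
AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_CurrentDevice,
                     kAudioUnitScope_Global, 0, &deviceID, sizeof(deviceID));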
According to this thread, https://www.native-instruments.com/forum/threads/latency-drift-problem-on-macbook.175551/, this problem appears to be a USB audio driver bug in Mavericks. I didn't find a kext replacement solution anywhere.
After building a sonar-type tester (a one-cycle 22 kHz square-wave click sent to the speaker every 600 ms, displaying the recorded frame number of the click), I could see the drift of 3 to 4 samples per second, along with the distortion/latency reset after about 1.5 hours. I then looked for a way to access the buffer pointers to stabilise the latency drift, but had no luck there either.
API latency queries also show no change as the latency drifts.
I did find that you can reset the latency with AudioOutputUnitStop followed by AudioOutputUnitStart (on the same thread), but it only worked if just one audio unit bus was active system-wide. Research also showed that the latency can be reset by toggling the hardware device's sample rate in Audio MIDI Setup; this is a bit aggressive and would be uncomfortable for some.
My design toggles the nominal sample rate (AudioObjectSetPropertyData with kAudioDevicePropertyNominalSampleRate) every 60 minutes (to 48000 and back to 44100), waiting for the change notification through a callback before continuing.
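The toggle itself is just a property write on the device; a stripped-down sketch (deviceID is assumed to have been looked up already, and the real code waits for the sample-rate change notification before toggling back):

// Force the hardware device to renegotiate its clock by changing its
// nominal sample rate; later set it back to 44100.0.
Float64 rate = 48000.0;
AudioObjectPropertyAddress addr = {
    kAudioDevicePropertyNominalSampleRate,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};
AudioObjectSetPropertyData(deviceID, &addr, 0, NULL, sizeof(rate), &rate);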
This causes a two-second gap in audio input and output every hour. Safari playing a YouTube video mutes and freezes for 1-2 seconds during this time; VLC mutes as well, but its video stays smooth during the two-second silence.
As I said, it won't work for everyone, but I chose a system-wide two-second mute every hour over a recording with 3 minutes of fuzzy audio every 1.5 hours. It has been posted that upgrading to Yosemite fixes this, although some have also found crackling after moving to Yosemite.
I want to capture audio on Linux with low latency in a program I'm writing.
I've run some experiments using the ALSA API, using snd_pcm_readi() to
capture sound, then immediately using snd_pcm_writei() to play it back.
I've tried playing with the number of frames captured, and the buffer size,
but I don't seem to be able to get the latency down to less than a second
or so.
Am I better off using PulseAudio or JACK? Can those be used to play the
captured audio?
To reduce capture latency, reduce the period size of the capture device.
To reduce playback latency, reduce the buffer size of the playback device.
Jack can play the captured audio (just connect the input ports to the output ports), but you still have to configure its periods/buffers.
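As a rough illustration of those two knobs (device handles, sizes, and format are placeholders; error handling omitted):

#include <alsa/asoundlib.h>

/* Sketch: request a small period on the capture side and a small buffer on
 * the playback side. 'capture' and 'playback' are snd_pcm_t* handles that
 * were already opened with snd_pcm_open(). */
static void set_low_latency(snd_pcm_t *capture, snd_pcm_t *playback)
{
    snd_pcm_hw_params_t *hw;
    unsigned int rate = 48000;
    snd_pcm_uframes_t period = 128;    /* ~2.7 ms at 48 kHz */
    snd_pcm_uframes_t bufsize = 512;   /* small playback ring buffer */

    snd_pcm_hw_params_alloca(&hw);

    /* Capture: a small period means data is handed to us sooner. */
    snd_pcm_hw_params_any(capture, hw);
    snd_pcm_hw_params_set_access(capture, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(capture, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(capture, hw, 2);
    snd_pcm_hw_params_set_rate_near(capture, hw, &rate, NULL);
    snd_pcm_hw_params_set_period_size_near(capture, hw, &period, NULL);
    snd_pcm_hw_params(capture, hw);

    /* Playback: a small buffer means what we write is heard sooner. */
    snd_pcm_hw_params_any(playback, hw);
    snd_pcm_hw_params_set_access(playback, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(playback, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(playback, hw, 2);
    snd_pcm_hw_params_set_rate_near(playback, hw, &rate, NULL);
    snd_pcm_hw_params_set_buffer_size_near(playback, hw, &bufsize);
    snd_pcm_hw_params(playback, hw);
}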
Also see Relation between period size of speaker and mic and Recording from ALSA - understanding memory mapping.
I've been doing some work on low-latency audio programming.
My experience is: first, your capture buffer should be small, e.g. a 10 ms period (let's assume you're using a 512-frame buffer at a 48000 Hz sample rate).
Then you should configure your output device's start_threshold to at least 2 * the period size (1 * the period size if you don't do much processing of the recorded data).
For the capture device, as CL. said, a relatively small period size is better, but not so small that it causes too many interrupts.
Also, you can change your process's scheduling policy to SCHED_FIFO.
Then, hopefully, you will get about 20 ms of total latency.
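A rough sketch of the start_threshold and scheduling suggestions (reading "period size" as the value negotiated in hw_params; error handling omitted):

#include <alsa/asoundlib.h>
#include <sched.h>

/* 'playback' and 'period' are assumed to come from your hw_params setup. */
static void tune_playback(snd_pcm_t *playback, snd_pcm_uframes_t period)
{
    snd_pcm_sw_params_t *sw;
    snd_pcm_sw_params_alloca(&sw);
    snd_pcm_sw_params_current(playback, sw);
    snd_pcm_sw_params_set_start_threshold(playback, sw, 2 * period);
    snd_pcm_sw_params(playback, sw);

    /* Real-time scheduling for the audio process (needs root/CAP_SYS_NICE). */
    struct sched_param sp = { .sched_priority = 50 };
    sched_setscheduler(0, SCHED_FIFO, &sp);
}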
I believe you should first ensure that you are running a Linux kernel that actually allows you to achieve low latency in the typical case.
There are several kernel compile-time configuration options which you might look into:
CONFIG_HZ_1000
CONFIG_IRQ_FORCED_THREADING
CONFIG_PREEMPT
CONFIG_PREEMPT_RT_FULL (available only with RT patch)
Apart from that, there are more things you can do to optimize your audio latency on Linux. Some starting reference points can be found here:
http://wiki.linuxaudio.org/wiki/real_time_info