I am looking for an approach to fetch frames from multiple v4l2 cameras using GStreamer at the best possible framerate. Would initiating multiple GStreamer pipelines in different threads created with the pthread library work? Running many pipelines from the command line results in an FPS drop. Would multithreading on a GPU have any effect on FPS stability?
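For what it's worth, each GStreamer pipeline already spins up its own streaming threads, so one approach is simply to build one pipeline per camera in a single process and let a GLib main loop service them all; explicit pthreads are usually only needed around your own processing code. Below is a minimal sketch using the GStreamer Python bindings (the device paths /dev/video0 and /dev/video1 and the fakesink element are placeholders for whatever devices and sinks you actually use):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# One pipeline per camera; GStreamer creates its own streaming threads internally.
descriptions = [
    "v4l2src device=/dev/video0 ! videoconvert ! fakesink sync=false",
    "v4l2src device=/dev/video1 ! videoconvert ! fakesink sync=false",
]
pipelines = [Gst.parse_launch(d) for d in descriptions]

for p in pipelines:
    p.set_state(Gst.State.PLAYING)

loop = GLib.MainLoop()
try:
    loop.run()    # streaming happens on GStreamer's own threads
finally:
    for p in pipelines:
        p.set_state(Gst.State.NULL)

If the framerate still drops with several cameras, the bottleneck is often USB bandwidth or the pixel format rather than a lack of threads.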
I'm using OpenCV in Python to load a video stream from a camera. I need to do multiple processing jobs on this stream; for instance, I might want to find objects in the image, do edge detection, change colors, etc., all on the same stream. I'd like to do this in parallel in many processes. The easiest solution would be to pickle the image frames and send them to all the processes, but for a high-quality video this can be very costly.
I would like to read a frame, store it in memory with pyarrow, and then have every process access this same frame in memory to do its trick. Then read another frame, and so on. A couple of problems: i) how to access the frame from all processes with pyarrow (I understand from the docs that this should be possible, but could not figure out how); ii) how to make sure that all processes are done with the frame before overwriting it with another one.
Thanks!
Plasma might be a good place to start for sharing the data.
As for image replacement/deletion with distributed workers: there isn't one answer to this, and any solution will have trade-offs. You might try something like Celery as a starting point.
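If you go the Plasma route, the flow is roughly: start a plasma store process, have the reader put each frame into it, and hand only the resulting object ID to the workers, which then map the same buffer read-only without copying. A rough sketch against the old pyarrow.plasma API (Plasma has since been deprecated in Arrow, so recent pyarrow releases may not ship it; the store socket /tmp/plasma is just an assumption):

# start the store first, e.g.:  plasma_store -m 1000000000 -s /tmp/plasma
import numpy as np
import pyarrow.plasma as plasma

client = plasma.connect("/tmp/plasma")

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # stand-in for a decoded frame
object_id = client.put(frame)                       # frame is stored once in shared memory

# in each worker process (each one calls plasma.connect() itself):
shared_frame = client.get(object_id)                # read-only view backed by shared memory

For (ii), a common pattern is to have each worker acknowledge the object ID on a queue and only delete or overwrite the object once all acknowledgements are in.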
I am considering different ways to stream a massive number of live videos to the screen on Linux/X11, using multiple independent processes.
I started the project with OpenGL/GLX and OpenGL textures, but that was a dead end. The reason: "context switching". It turns out that drivers (especially NVIDIA's) perform poorly when several independent processes manipulate textures at a fast pace using multiple contexts. This results in crashes, freezes, etc.
(see the following thread: https://lists.freedesktop.org/archives/nouveau/2017-February/027286.html)
I finally turned to Xvideo, and it seems to work very nicely. My initial tests show that Xvideo handles video dumping roughly 10 times more efficiently than OpenGL and does not crash. One can demonstrate this by running ~10 VLC clients at 720p@25fps and trying both the Xvideo and OpenGL outputs (remember to put them all in fullscreen).
However, I suspect that Xvideo uses OpenGL under the hood, so let's see if I am getting this right...
Both Xvideo and GLX are extension modules of X11, but:
(A) Dumping video through Xvideo:
XVideo considers the whole screen as a device port and manipulates it directly (it has these god-like powers, being an extension to X11)
.. so it only needs a single context from the graphics driver. Let's call it context 1.
Process 1 requests Xvideo services for a certain window .. Xvideo manages it into a certain portion of the screen, using context 1.
Process 2 requests Xvideo services for a certain window .. Xvideo manages it into a certain portion of the screen, using context 1.
(B) Dumping video "manually" through GLX and openGL texture dumping:
Process 1 requests a context from glx, gets context 1 and starts dumping textures with it.
Process 2 requests a context from glx, gets context 2 and starts dumping textures with it.
Am I getting this right?
Is there any way to achieve situation (A) using OpenGL directly?
.. one might have to drop GLX completely, which starts to be a bit hard-core.
It's been a while but I finally got it sorted out, using OpenGL textures and multithreading. This seems to be the optimal way:
https://elsampsa.github.io/valkka-core/html/process_chart.html
(disclaimer: I did that)
How can I increase OpenCV video FPS on Linux on an Intel Atom? The video seems to lag when processing with OpenCV libraries.
Furthermore, I'm trying to execute a program/file with OpenCV:
system("/home/file/image.jpg");
However, it shows "Access Denied".
There are several things you can do to improve performance: using OpenGL, using the GPU, or even just disabling certain functions within OpenCV. When you capture video you can also change the default FPS, which is sometimes set low. If you are getting "Access Denied" on that file I would check its permissions, but without the full error message it is hard to tell.
The first line is an example of disabling the RGB conversion, and the second sets the desired FPS. I think these defines were renamed in OpenCV 3, though (see the note after the snippet).
cap.set(CV_CAP_PROP_CONVERT_RGB, false);  // skip OpenCV's internal conversion to RGB
cap.set(CV_CAP_PROP_FPS, 60);             // request 60 FPS from the capture backend
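For reference, in OpenCV 3 and later the constants are cv::CAP_PROP_CONVERT_RGB and cv::CAP_PROP_FPS (cv2.CAP_PROP_* from Python). A quick sketch with the Python bindings (the device index 0 is a placeholder; whether the FPS request is honoured depends on the backend and driver):

import cv2

cap = cv2.VideoCapture(0)               # placeholder device index
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)    # skip the internal conversion to RGB
cap.set(cv2.CAP_PROP_FPS, 60)           # request 60 FPS; the backend may ignore it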
From your question, it seems your frame buffer is accumulating frames faster than you can clear them out, so you never catch up to the real-time frame, i.e. a frame captured now is processed several seconds later. Am I understanding correctly?
In this case, I'd suggest a couple of things:
Use a separate thread to grab the frames from VideoCapture and push them into a queue of limited size (a minimal sketch follows after this list). Of course this will lead to dropped frames, but if you are interested in real-time processing then this cost is often justified.
If you are using OOP, then I would suggest using a separate thread for each object, as this significantly speeds up the processing. You can see a several-fold increase depending on the application and the functions used.
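A minimal sketch of the grab-thread idea with OpenCV in Python (the device index 0 and the queue depth of 2 are arbitrary choices; when the queue is full, the oldest frame is dropped so processing always works on something close to real time):

import queue
import threading
import cv2

frames = queue.Queue(maxsize=2)            # small bound: better to drop frames than to lag

def grabber(cap):
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frames.full():
            try:
                frames.get_nowait()        # discard the oldest frame
            except queue.Empty:
                pass
        frames.put(frame)

cap = cv2.VideoCapture(0)                  # placeholder device index
threading.Thread(target=grabber, args=(cap,), daemon=True).start()

while True:
    frame = frames.get()                   # close to the latest captured frame
    # ... heavy processing goes here, decoupled from the capture rate ...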
I'm using a custom board with an imx6q processor and a tlv320aic3x audio codec.
Everything works OK after some bring-up, but I'm trying to improve the audio driver: whether I'm doing playback or capture, both the playback-related and capture-related amplifiers are switched on.
This causes side effects like noise in the speakers when I'm capturing audio, and it wastes power.
To solve this, I'm trying to define the data paths correctly in the driver, but I keep failing.
I find it hard to find resources online explaining how to write an ALSA driver using the predefined ALSA macros that exist in the kernel.
I've searched http://www.alsa-project.org/, the Linux docs, and a few other sources...
And to my questions:
Is there any decent tutorial out there? I'm specifically interested in DAPM and usage of control names.
Is it possible to "re-program" all driver data paths from userspace?
Is DAPM sufficient for decent power management? Or should I switch power to unused paths in the codec on and off from userspace between playbacks and captures?
Just to be clear: in userspace, using the standard driver, I am able to do playback and capture and to control mixers, switches, etc. However, I'm trying to achieve better automatic power management.
Thanks
We're currently building a chain of Linux tools to do some realtime encoding for video broadcast purposes. To achieve this, we created a program in C++ that spawns some ffmpeg decoder processes (for both audio and video), pipes their output to the encoders (ffmpeg and mpeg2enc) through FIFOs, and then pipes the encoded output to our muxer, which caches a few MB of data and then outputs the muxed result through an ASI output card.
On Debian 5, this setup works flawlessly and generally doesn't even create a high CPU load. On Debian 6 and Ubuntu 10.04, however, the internal buffer of the muxer gradually decreases until it hits zero, after which frequent output hiccups start to occur.
Using nice and ionice doesn't seem to fix this issue. I've also tried various custom kernel compile options (increased frequency, preemption, etc.), but this doesn't seem to work either.
Although it's possible that there has been a serious regression in either ffmpeg or mpeg2enc, I'm guessing that the problem has to do with the way the newer kernel/distro handles FIFOs.
Does anybody know what could be causing this problem? Or what recent changes in Debian, its kernel configuration (between versions 5 and 6), or Ubuntu could possibly have caused this undesired behaviour?