Using pyarrow to stream an OpenCV image to multiple processes - python-3.x

I'm using OpenCV in Python to load a video stream from a camera. I need to do multiple processing jobs on this stream; for instance, I might want to find objects in the image, do edge detection, apply color changes, etc., all on the same stream. I'd like to do this in parallel in many processes. The easiest solution would be to pickle the image frames and send them to all the processes, but for high-quality video this can be very costly.
I would like to read a frame, store it in memory with pyarrow, and then have every process access that same frame in memory to do its trick. Then read another frame, and so on. A couple of problems: i) how to access the frame from all processes with pyarrow (I understand from the docs that this should be possible, but could not figure out how); ii) how to make sure that all processes are done with the frame before overwriting it with another frame.
Thanks!

Plasma might be a good place to start for sharing the data between processes.
As for replacing or deleting a frame once the distributed workers are done with it: there isn't one answer to this, and any solution will have trade-offs. You might try using something like celery as a starting point.
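To make the first part concrete, here is a minimal sketch of sharing a frame through the Plasma object store. It assumes a pyarrow release that still ships pyarrow.plasma (the Plasma store was later deprecated and removed from pyarrow) and a plasma_store server already running; the socket path, store size, and frame shape are placeholders.

import numpy as np
import pyarrow.plasma as plasma

# Sketch only: assumes something like `plasma_store -m 1000000000 -s /tmp/plasma`
# is already running, and that this pyarrow version still includes pyarrow.plasma.

# --- producer process: read a frame and publish it ---
client = plasma.connect("/tmp/plasma")
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in for cap.read()
object_id = client.put(frame)                      # stored once in shared memory
# hand the small object_id to the workers over a cheap channel (e.g. a Queue)

# --- each worker process: attach and read the same frame ---
worker = plasma.connect("/tmp/plasma")
shared_frame = worker.get(object_id)               # read-only view, no pixel copy
print(shared_frame.shape)

For the second part (knowing when a frame can be overwritten), one simple scheme is to have each worker send back the object_id it has finished with, and only delete or replace the frame once every worker has acknowledged it; a task queue such as celery gives you that bookkeeping.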

Related

Circular buffer filling up faster than AVAudioSourceNode render block can read data from it

I am experimenting with AVAudioSourceNode, having connected it to the mixer node for output to the speaker. I am a bit of a newbie to iOS and audio programming so I apologize if this question is ignorant or unclear, but I will do my best to explain.
In the AVAudioSourceNode render block, I am attempting to retrieve received stream data that has been stored in a circular buffer (e.g. I currently use a basic implementation of a FIFO buffer but am considering moving to a TPCircularBuffer). I check whether the buffer has enough bytes for me to fill the audio buffer with, and if so I grab those bytes for output; if not, I either wait, or take what I can and fill the missing bytes with zeros.
In debugging, it appears I am running into a situation where the circular buffer is filling up a lot faster than the render block is calling into it to retrieve data. And understandably, after running OK for a few moments, once the circular buffer is full (I'm not even certain how large I should realistically make it, but I guess that's another question), the output becomes garbage.
It is as if the acts of filling the circular buffer with streaming data (and probably other tasks as well) are taking priority over the calls made within the render block. I thought that audio operations involving the audio nodes would automatically be prioritized but it may be that I haven't done what is needed to make this happen.
I have read these threads:
iOS - Streaming and receiving audio from a device to another ends in one only sending and the other only receiving
Synchronising with Core Audio Thread
which appear to raise similar issues in substance, but a little more current guidance and explanation for my level of understanding and situation would be helpful and very much appreciated!
For playing, the audio system will only ask for data at the specified sample rate. If you fill a circular buffer faster than that sample rate for an extended period of time, then it will overflow.
So you have to make sure your sample generator or incoming data stream complies with the sample rate for which the audio system is configured, no more and no less (other than strictly bounded bursting or latency jitter). The circular buffer needs to be sized large enough to cover the maximum burst size plus maximum latency jitter plus any pre-fill plus a safety margin.
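As a rough back-of-the-envelope illustration of that sizing rule (all numbers below are hypothetical, not taken from the question):

# Hypothetical figures, purely to illustrate the sizing rule above.
sample_rate = 48_000        # frames per second the audio system is configured for
max_burst_frames = 4_096    # largest chunk the incoming stream delivers at once
max_jitter_s = 0.020        # worst-case scheduling/latency jitter of the producer
prefill_frames = 2_048      # frames written before playback starts
safety_factor = 2.0         # margin for anything unaccounted for

min_capacity = (max_burst_frames
                + int(max_jitter_s * sample_rate)
                + prefill_frames) * safety_factor
print(f"circular buffer should hold at least {int(min_capacity)} frames")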
Another possible bug is trying to do too much inside the render block callback. This is why Apple recommends not using any code that requires memory management, locks, or semaphores inside real-time audio callbacks.

which is better to use in v4l2 framework, user pointer or mmap

After going through these links,
https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/userp.html
https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/mmap.html
I understood that there are two ways to create a buffer in the v4l2 framework:
User pointer buffer: the buffer will be created in user space.
Memory-mapped buffer: the buffer will be created in kernel space.
I am a bit confused about which one to use while doing v4l2 driver development. I mean, which is the better approach in terms of performance and buffer handling?
I will be using DMA-SG for data transfer in my hardware.
It depends on your requirements.
Case: Visualization of the video stream.
In this case, you might want to write the video data directly to memory that is accessible to the video driver, saving a copy operation. You will also get the shortest camera-to-display time. Here, a user pointer would be the way to go.
Case: Recording of the video stream.
In this case, you do not care about the timely delivery, but you do care about not missing frames. In this case, you can use memory mapped acquisition with multiple buffers.
Case: Single image acquisition for processing.
In this case, timely delivery and missing frames are both less important, so you could use either method, but buffered operation will give the fastest acquisition time, since there is always a buffer with recent image data available.

Is it possible to record audio to variable/RAM in LiveCode?

Is it possible to record audio to a variable/RAM in LiveCode?
Normal recording requires using a file, but I'm trying to figure out a way to avoid the extra step of writing to disk, only to then read it back from disk and send it through sockets.
This is currently impossible, and there is no good way to stream content from within LiveCode. When I tried to use video recording and sockets at the same time, I ran into a bug that caused LiveCode (Revolution at the time) to crash. Looking into the crash files, it appeared to me that using the recording routines and the socket routines at the same time caused a memory address conflict. After sending roughly 1000 recorded frames through a socket, Revolution would inevitably crash. To the best of my knowledge, this problem has never been fixed.
I would recommend dedicated software for streaming. A possibility might be VLC. You can use VLC from the command line, which means that you can set up a stream from within LiveCode, using the shell() function.

OpenCV FPS Optimisation

How can I increase OpenCV video FPS in Linux on an Intel Atom? The video seems to lag when processing with OpenCV libraries.
Furthermore, I am trying to execute a program/file with OpenCV:
system("/home/file/image.jpg");
however, it shows Access Denied.
There are several things you can do to improve performance: using OpenGL, GPUs, and even just disabling certain functions within OpenCV. When you capture video you can also change the FPS default, which is sometimes set low. If you are getting access denied on that file I would check the permissions, but without seeing the full error it is hard to figure out.
The first line below is an example of disabling the conversion and the second sets the desired FPS. I think these defines were changed in OpenCV 3, though.
cap.set(CV_CAP_PROP_CONVERT_RGB, false); // disable automatic color conversion of captured frames
cap.set(CV_CAP_PROP_FPS, 60);            // request 60 FPS from the capture backend
From your question, it seems your frame buffer is collecting a lot of frames which you are not able to clear out before catching up to the real-time frame, i.e. a frame captured now is processed several seconds later. Am I understanding correctly?
In this case, I'd suggest a couple of things:
Use a separate thread to grab the frames from VideoCapture and then push these frames into a queue of limited size (see the sketch after these suggestions). Of course this will lead to missing frames, but if you are interested in real-time processing then this cost is often justified.
If you are using OOP, then I may suggest using a separate thread for each object, as this significantly speeds up the processing. You can see a severalfold increase depending on the application and functions used.
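A minimal sketch of the first suggestion, written in Python/OpenCV for brevity (the snippets above use the C++ API, but the pattern is the same); the queue size, camera index, and window name are arbitrary:

import queue
import threading
import cv2

frames = queue.Queue(maxsize=2)          # small bound: stale frames get dropped, not queued up

def grabber(src=0):
    cap = cv2.VideoCapture(src)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frames.full():                # drop the oldest frame to stay near real time
            try:
                frames.get_nowait()
            except queue.Empty:
                pass
        frames.put(frame)

threading.Thread(target=grabber, daemon=True).start()

while True:
    frame = frames.get()                 # always a recent frame, never a backlog
    # ... run your processing here ...
    cv2.imshow("latest", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break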

Writing hundreds of AVI files in different threads using OpenCV 2.2

I am writing an application using OpenCV 2.2 under VC++. I am getting videos from different network streams and writing them frame by frame to AVI files, each in a separate thread. There are hundreds of video streams, so my application is writing hundreds of files to disk, which is very heavy. Can someone advise me on an optimized way to do this?
Thanks in advance
Oh dear. I hope you have plenty of RAM.
Writing multiple files is a real pain. The best you can do is to mitigate the write seeks by always writing as large a chunk of AVI frames (preferably a multiple of the sector size) as reasonably possible. Maybe:
1) A 'FrameBuf' frame-buffer class. Create a shitload of *FrameBuf at startup and pool them on a producer-consumer queue.
2) A 'FrameVec' container class for multiple *FrameBuf instances. You may need to pool these as well.
3) A threadpool for writing the contents of a *FrameVec to the disk system. This will contain very few threads, possibly only one, for best disk-write performance with few seeks. Best make the number of threads configurable/changeable at runtime to optimize overall throughput. Best make it all configurable - depth of the *FrameBuf pool, number of *FrameBuf in each *FrameVec - everything. (A rough sketch of this pipeline follows below.)
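The overall shape of that pipeline, sketched in Python for brevity (the original context is VC++/OpenCV 2.2, where these would be pools of FrameBuf pointers as described above); all names and sizes are illustrative:

import queue
import threading

FRAMES_PER_CHUNK = 64                    # batch size: aim for large sequential writes

free_frames = queue.Queue()              # the FrameBuf pool, filled at startup
for _ in range(1024):
    free_frames.put(bytearray(1920 * 1080 * 3))

write_queue = queue.Queue()              # filled FrameVec-style chunks waiting for disk
pending = {}                             # per-stream chunk currently being assembled

def writer():
    # Very few writer threads (here just one) keeps disk seeks to a minimum.
    while True:
        path, chunk = write_queue.get()
        with open(path, "ab") as f:      # in the real app this is the AVI writer
            for buf in chunk:
                f.write(buf)
                free_frames.put(buf)     # recycle the buffer back into the pool

threading.Thread(target=writer, daemon=True).start()

def on_frame(stream_path, pixels):
    # Called for each decoded frame of one network stream.
    buf = free_frames.get()              # blocks if the pool is exhausted (back-pressure)
    buf[:len(pixels)] = pixels
    chunk = pending.setdefault(stream_path, [])
    chunk.append(buf)
    if len(chunk) >= FRAMES_PER_CHUNK:
        write_queue.put((stream_path, chunk))
        pending[stream_path] = []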
If possible, use an SSD. If the system has any 'quiet' time, it could move the accumulated avi's to a big spinner, or networked disks, to free up the SSD for the next 'busy' time.
When moving your various instances about, remember these mantras:
'Stack-objects, copy ctors, any template class with no * bad', and 'pointers, pools, pointer containers good'.
Good luck..
