Linux Message Queue for OpenCV C++

I have one thread that gets new frames, two other threads that process the newly acquired image, and one that prints output based on the results of the processing threads.
The program cycle goes,
>thread 3, print an output based on the previous outputs of threads 1 and 2
>thread 0 get new image
>> thread 1, process image for color
>> thread 2, process image for haar cascade
going cyclically 3&0>1&2>3&0>1&2>
'>' indicates join before spawning the next set
How do I pass the OpenCV Mat from thread 0 to threads 1 & 2?
Also, how would I pass the results from threads 1 & 2 back to thread 3?
I would guess a message-queue system; how does one implement that?

I suspect I may get some abuse and down-votes for this because global variables are generally frowned upon, however I feel the situation is different in image-processing where:
the code is multi-threaded, and
the data structures (images) are large.
Here I think it is important to avoid expensive copying and "transmission" of data down sockets etc. when it is already available in memory that is shared and visible amongst threads.
So, in concrete terms, I would go for an array, or vector (according to your preference) of say 16 OpenCV Mats that is globally accessible.
Acquire into the first, notify the next thread when that buffer is full, then acquire into the next, and so on.
As regards the notification, you have several options. The cleanest, and most modern, is probably condition variables: let each processing thread (not the acquiring thread) wait on a condvar for "buffer full". Next is probably POSIX message queues, though if you are porting to macOS later you may regret that, as there is no support. Another, easily-programmed method is to use sockets but just send a single byte that is the index into the global array of 16 Mats - that way there is no problem with incomplete, multi-byte reads on sockets. The processing threads then just sit in a loop doing blocking reads on the socket to know which buffer to process. You can also define a special index that means "quit".
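For the condition-variable route, a minimal C++11 sketch of the idea might look like this - kSlots, g_ring, acquireLoop and processLoop are all illustrative names, and error handling is omitted:

#include <opencv2/opencv.hpp>
#include <condition_variable>
#include <mutex>
#include <vector>

constexpr int kSlots = 16;
std::vector<cv::Mat> g_ring(kSlots);   // the globally accessible Mats
std::mutex g_mtx;
std::condition_variable g_cv;
int g_newest = -1;                     // index of the last filled slot, -1 = none yet

void acquireLoop(cv::VideoCapture &cap) {
    for (int i = 0; ; i = (i + 1) % kSlots) {
        cv::Mat frame;
        cap >> frame;                  // grab outside the lock
        if (frame.empty()) break;
        {
            std::lock_guard<std::mutex> lk(g_mtx);
            g_ring[i] = frame;         // Mat assignment just bumps a refcount
            g_newest = i;
        }
        g_cv.notify_all();             // "buffer full" - wake both processors
    }
}

void processLoop() {                   // run one of these per processing thread
    int seen = -1;
    for (;;) {
        std::unique_lock<std::mutex> lk(g_mtx);
        g_cv.wait(lk, [&] { return g_newest != seen; });
        seen = g_newest;
        cv::Mat img = g_ring[seen];    // shallow copy; pixel data stays shared
        lk.unlock();
        // ... colour or Haar-cascade processing on img goes here ...
    }
}
// main() would spawn one acquireLoop thread, two processLoop threads, and join them.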
Check the size of your images in terms of width x height x channels x bytes per channel and get a feel for how much memory a global vector of 16 Mats will need, and be sure you have that available before using this strategy. For instance, 16 full-HD BGR frames (1920 x 1080 x 3 x 1 bytes each) come to roughly 100 MB, which may put you up against the wall on a Raspberry Pi, for example.

Related

Using fences to clean-up command buffers and synchronizing swap chain images at the same time

Say I have a swap chain consisting of n images and I allow k "frames in flight". I ensure correct synchronization between vkAcquireNextImageKHR, vkQueueSubmit and vkQueuePresentKHR by a set of semaphores imageAvailableSemaphore and renderFinishedSemaphore and a fence imageInFlight like it is done in this tutorial:
imageAvailableSemaphores.resize(MAX_FRAMES_IN_FLIGHT);
renderFinishedSemaphores.resize(MAX_FRAMES_IN_FLIGHT);
inFlightFences.resize(MAX_FRAMES_IN_FLIGHT);
The fences are needed to ensure that we don't use the semaphores again before the GPU has completed consuming the corresponding image. So, this fence needs to be specified in vkQueueSubmit.
On the other hand, I'm creating command buffers independently of the "frames in flight". They are "one-time submit" command buffers. Hence, once submitted, I add them to a "to-be-deleted" list. I need to know when the GPU has finished execution of the command buffers in this list.
But I cannot specify another fence in vkQueueSubmit. How can I solve this problem?
I allow k "frames in flight"
Well, that's your answer. Each thread that is going to contribute command buffers for a "frame" should have some multiple of k command buffers. They should use them in a ring-buffer fashion. These command buffers should be created from a transient-allocating pool. When they pick the least-recently used CB from the ring buffer, they should reset it before recording into it.
You ensure that no thread tries to reset a CB that is still in use by not starting any work for the next frame until the kth frame in the past has completed (using a fence).
If for some reason you absolutely cannot tell your threads what k is up front, you're still going to have to tell them something. When you start work on the thread, you need to tell it how many frames are still in flight. This allows it to check the size of its ring buffer against that number of frames. If the number of elements in the ring buffer is less than the number of frames, then the oldest CB in the ring buffer is not in use. Otherwise, it will have to allocate a new CB from the pool and shove that into the ring buffer.
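A rough sketch of that ring-buffer idea, assuming the pool was created with VK_COMMAND_POOL_CREATE_TRANSIENT_BIT | VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT (CbRing and acquireCb are illustrative names, not part of any API):

#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

struct CbRing {
    std::vector<VkCommandBuffer> cbs;    // some multiple of k entries
    std::vector<VkFence> fences;         // fence of the frame each CB was last submitted in
    size_t next = 0;
};

// Pick the least-recently-used CB; wait on its frame's fence before resetting,
// so we never reset a CB the GPU is still executing.
VkCommandBuffer acquireCb(VkDevice dev, CbRing &ring) {
    size_t i = ring.next;
    ring.next = (ring.next + 1) % ring.cbs.size();
    if (ring.fences[i] != VK_NULL_HANDLE)
        vkWaitForFences(dev, 1, &ring.fences[i], VK_TRUE, UINT64_MAX);
    vkResetCommandBuffer(ring.cbs[i], 0);
    return ring.cbs[i];
}
// After vkQueueSubmit, store that frame's inFlight fence back into ring.fences[i].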
You can use Timeline Semaphores for this. You can read in depth about them here: https://www.khronos.org/blog/vulkan-timeline-semaphores
Instead of being signaled or not signaled, timeline semaphores carry a specific value. The function of interest to you is vkGetSemaphoreCounterValue, which allows you to read the value of the semaphore without blocking.
To create a timeline semaphore you simply set the pNext value of your VkSemaphoreCreateInfo to a VkSemaphoreTypeCreateInfo looking like the following.
VkSemaphoreTypeCreateInfo timelineCreateInfo{VK_STRUCTURE_TYPE_SEMAPHORE_TYPE_CREATE_INFO};
timelineCreateInfo.semaphoreType = VK_SEMAPHORE_TYPE_TIMELINE;
timelineCreateInfo.initialValue = 0;
The pNext value of your VkSubmitInfo needs to be set to a VkTimelineSemaphoreSubmitInfo.
uint64_t signalValue = 1; // the value the semaphore will hold once this submit finishes
VkTimelineSemaphoreSubmitInfo timelineInfo{VK_STRUCTURE_TYPE_TIMELINE_SEMAPHORE_SUBMIT_INFO};
timelineInfo.signalSemaphoreValueCount = 1;
timelineInfo.pSignalSemaphoreValues = &signalValue;
submitInfo.pNext = &timelineInfo; // wire it into the VkSubmitInfo
After the command buffer is done, the value of the semaphore will be whatever you set signalValue to. After that you can query the value with:
uint64_t value;
vkGetSemaphoreCounterValue(device, semaphore, &value);
So, assuming you set signalValue to 1, value here will be either 1 (the work has finished) or 0 (the initial value we gave the semaphore in VkSemaphoreTypeCreateInfo). Once it reads 1, you can safely delete your one-time command buffer.
Note: Timeline semaphores are actually meant as a partial replacement for fences and binary semaphores and should be the main synchronization primitive you use. I think the only function that still requires a binary semaphore is vkAcquireNextImageKHR.
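Applied to the "to-be-deleted" list from the question, a sketch might look like the following, assuming each submit signals a strictly increasing signalValue on one timeline semaphore (Pending and collectFinished are made-up names):

#include <vulkan/vulkan.h>
#include <cstdint>
#include <deque>

struct Pending { VkCommandBuffer cb; uint64_t value; };  // value = what its submit signals
std::deque<Pending> pending;   // append {cb, signalValue} right after each vkQueueSubmit

// Call once per frame: frees every one-time CB whose signal value has been reached.
void collectFinished(VkDevice dev, VkSemaphore timeline, VkCommandPool pool) {
    uint64_t done = 0;
    vkGetSemaphoreCounterValue(dev, timeline, &done);    // non-blocking
    while (!pending.empty() && pending.front().value <= done) {
        vkFreeCommandBuffers(dev, pool, 1, &pending.front().cb);
        pending.pop_front();
    }
}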

Canonical way to "broadcast" data to multiple processes in linux?

I've got an application that needs to send a stream of data from one process to multiple readers, each of which needs to see its own copy of the stream. This is reasonably high-rate (100 MB/s is not uncommon), so I'd like to avoid duplication if possible. In my ideal world, Linux would have named pipes that supported multiple readers, with a fast path for the common single-reader case.
I'd like something that provides some measure of namespace isolation (e.g. broadcasting on 127.0.0.1 is open to any process, I believe). Unix domain sockets don't support broadcast, and UDP is "unreliable" anyway (the server will drop packets instead of blocking in my case). I suppose I could create a shared-memory segment and store the common buffers there, but that feels like reinventing the wheel. Is there a canonical way to do this in Linux?
I suppose I could create a shared-memory segment and store the common buffers there, but that feels like reinventing the wheel. Is there a canonical way to do this in Linux?
The short answer: No
The long answer: Yes [and you're on the right track]
I've had to do this before [for even higher speeds], so I had to research this. The following is what I came up with.
In the main process, create a pool of shared buffers [use SysV shm or a private mmap, as you choose]. Assign ID numbers to them (e.g. 1, 2, 3, ...). Now there is a mapping from bufid to buffer memory address. To make this accessible to the child processes, do it before you fork them. The children inherit the shared memory mappings, so there's not much work involved.
Now fork the children. Give each a unique process id. You can just start incrementally from a number: 2, 3, 4, ... [main is 1], or just use regular pids.
Open up a SysV msg channel (msgget et al.). Again, if you do this in the main process before the fork, it is available to the children [IIRC].
Now here's how it works:
main finds an unused buffer and fills it. For each child, main sends an IPC message via msgsnd (on the single common IPC channel) where the message payload [mtext] is the bufid number. Each message has the standard header's mtype field set to the destination child's pid.
After doing this, main remembers the buffer as "in flight" and not yet reusable.
Each child does a msgrcv with the mtype set to its pid. It then extracts the bufid from mtext and processes the buffer. When it's done, it sends an IPC message [again on the same channel] with mtype set to main's pid with an mtext of the bufid it just processed.
main's loop does a non-blocking msgrcv, noting all "release" messages for a given bufid. When all children have released the buffer, it's put back on the buffer "free queue". In main's service loop, it may fill new buffers and send more messages as appropriate [interspersed with the waits].
The child then does an msgrcv and the cycle repeats.
So, we're using [large] shared memory buffers and short [a few bytes] bufid descriptor IPC messages.
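Here is a stripped-down, runnable sketch of the scheme - one child, one buffer handed over and released, no error checking; BUF_COUNT and BUF_SIZE are illustrative:

#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

struct BufMsg { long mtype; int bufid; };   // mtype = destination pid, mtext = bufid

int main() {
    const int BUF_COUNT = 16;
    const size_t BUF_SIZE = 1 << 20;        // 1 MiB per buffer, say

    // Created before fork, so the child inherits both the mapping and the queue.
    int shmid = shmget(IPC_PRIVATE, BUF_COUNT * BUF_SIZE, IPC_CREAT | 0600);
    char *pool = (char *)shmat(shmid, nullptr, 0);
    int msqid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    long mainPid = getpid();

    pid_t child = fork();
    if (child == 0) {
        BufMsg m;
        for (;;) {
            msgrcv(msqid, &m, sizeof(int), getpid(), 0);   // block on my mtype
            if (m.bufid < 0) return 0;                     // "quit" sentinel
            printf("child: buffer %d starts with %d\n", m.bufid, pool[m.bufid * BUF_SIZE]);
            m.mtype = mainPid;                             // release it back to main
            msgsnd(msqid, &m, sizeof(int), 0);
        }
    }

    pool[0] = 42;                                          // "fill" buffer 0
    BufMsg m{child, 0};
    msgsnd(msqid, &m, sizeof(int), 0);                     // hand buffer 0 to the child
    msgrcv(msqid, &m, sizeof(int), mainPid, 0);            // wait for the release
    m = {child, -1};                                       // tell the child to quit
    msgsnd(msqid, &m, sizeof(int), 0);
    wait(nullptr);
    shmdt(pool); shmctl(shmid, IPC_RMID, nullptr); msgctl(msqid, IPC_RMID, nullptr);
}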
Okay, so the question you may be asking: "Why SysV IPC for the comm channel?" [vs. multiple pipes or sockets].
You already know that a shared buffer avoids sending multiple copies of your data.
So, that's the way to go. But, why not send the above bufid messages across sockets or pipes [or shared queues, condition variables, mutexes, etc]?
The answer is speed and the wakeup characteristics of the target process.
For a highly realtime response, when main sends out the bufid messages, you want the target process [if it's been sleeping] to wake up immediately and start processing the buffer.
I examined the linux kernel source and the only mechanism that has that characteristic is SysV IPC. All others have a [scheduling] lag.
When process A does msgsnd on a channel that process B has done msgrcv on, three things will happen:
process B will be marked runnable by the scheduler.
[IIRC] B will be moved to the front of its scheduling queue
Also, more importantly, this then causes an immediate reschedule of all processes.
B will start right away [as opposed to next timer interrupt or when some other process just happens to sleep]. On a single core machine, A will be put to sleep and B will run in its stead.
Caveat: All my research was done a few years back before the CFS scheduler, but, I believe the above should still hold. Also, I was using the RT scheduler, which may be a possible option if CFS doesn't work as intended.
UPDATE:
Looking at the POSIX message queue source, I think that the same immediate-wakeup behavior you discussed with the System V queues is going on, which gives the added benefit of POSIX compatibility.
The timing semantics are possible [and desirable] so I wouldn't be surprised. But, SysV is actually more standard and ubiquitous than POSIX mqueues. And, there are some semantic differences [See below].
For timing, you can build a unit-test program [just using msgs] with nsec timestamps. I used TSC stamps, but clock_gettime(CLOCK_REALTIME,...) might also work. Stamp the departure time and the arrival/wakeup time to see, and compare both SysV and mq.
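A sketch of such a test for the SysV side (single round trip, no averaging; CLOCK_MONOTONIC is used here to sidestep wall-clock adjustments - swap in an mq_send/mq_receive pair to compare):

#include <sys/msg.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>
#include <cstdio>

struct StampMsg { long mtype; timespec sent; };

int main() {
    int q = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
    if (fork() == 0) {
        StampMsg m;
        msgrcv(q, &m, sizeof(timespec), 1, 0);          // sleep until the msg arrives
        timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        long long ns = (now.tv_sec - m.sent.tv_sec) * 1000000000LL
                     + (now.tv_nsec - m.sent.tv_nsec);
        printf("send-to-wakeup: %lld ns\n", ns);
        return 0;
    }
    usleep(10000);                                      // let the child block first
    StampMsg m{1, {}};
    clock_gettime(CLOCK_MONOTONIC, &m.sent);            // stamp departure
    msgsnd(q, &m, sizeof(timespec), 0);
    wait(nullptr);
    msgctl(q, IPC_RMID, nullptr);
}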
With either SysV or mq you may need to bump up the max number of msgs, max msg size, and max number of queues via /proc/*. The default values are relatively small. If you don't, you may find tasks blocked waiting for a msg while the master can't send one [is blocked] because a msg-queue maximum parameter has been exceeded. I actually had such a bug, so I changed my code to bump up these values [it was running as root] during startup. So, you may need to do this in an RC boot script (or whatever the [atrocious ;-)] systemd equivalent is).
I looked at using mq to replace SysV in my own code. It didn't have the same semantics for a many-to-one return-to-free-pool msg. In my original answer, I had forgotten to mention that two msg queues are needed: master-to-children (e.g. work-to-do) and children-to-master (e.g. returning a now available buffer).
I had several different types of buffers (e.g. compressed video, compressed audio, uncompressed video, uncompressed audio) that had varying types and struct descriptors.
Also, multiple different buffer queues as these buffers got passed from thread to thread [different processing stages].
With SysV you can use a single msg queue for multiple buffer lists/queues, the buffer list ID is the msg mtype. A child msgrcv waits with mtype set to the ID value. The master waits on the return-to-free msg queue with mtype of 0.
mq* requires a separate mqd_t for each ID because it doesn't allow a wait on a msg subtype.
msgrcv allows IPC_NOWAIT on each call, but to get the same effect with mq_receive you have to open the queue with O_NONBLOCK or use the timed version. This gets used during the "shutdown" or "restart" phase (e.g. send a msg to children that no more data will arrive and they should terminate [or reconfigure, etc.]). The IPC_NOWAIT is handy for "draining" a queue during program startup [to get rid of stale messages from a prior invocation] or drain stale messages from a prior configuration during operation.
So, instead of just two SysV msg queues to handle an arbitrary number of buffer lists, you'll need a separate mqd_t for each buffer list/type.

If one thread writes to a location and another thread is reading, can the second thread see the new value then the old?

Start with x = 0. Note there are no memory barriers in any of the code below.
volatile int x = 0;
Thread 1:
while (x == 0) {}
print "Saw non-zer0"
while (x != 0) {}
print "Saw zero again!"
Thread 2:
x = 1
Is it ever possible to see the second message, "Saw zero again!", on any (real) CPU? What about on x86_64?
Similarly, in this code:
volatile int x = 0;
Thread 1:
while (x == 0) {}
x = 2
Thread 2:
x = 1
Is the final value of x guaranteed to be 2? Or could the CPU caches update main memory in some arbitrary order, so that although x = 1 gets into one CPU's cache where thread 1 can see it, thread 1 then gets moved to a different CPU where it writes x = 2 into that CPU's cache, and x = 2 gets written back to main memory before x = 1?
Yes, it's entirely possible. The compiler could, for example, have just written x to memory but still have the value in a register. One while loop could check memory while the other checks the register.
It doesn't happen due to CPU caches because cache coherency hardware logic makes the caches invisible on all CPUs you are likely to actually use.
Theoretically, the write race you talk about could happen due to posted write buffering and read prefetching. Miraculous tricks were used to make this impossible on x86 CPUs to avoid breaking legacy code. But you shouldn't expect future processors to do this.
Leaving aside for a second the tricks done by the compiler (even ones allowed by language standards), I believe you're asking how the micro-architecture could behave in such a scenario. Keep in mind that the code would most likely compile into a busy-wait loop of cmp [x] + jz or something similar, which hides a load inside it. This means that [x] is likely to live in the cache of the core running thread 1.
At some point, thread 2 would come and perform the store. If it resides on a different core, the line would first be invalidated completely from the first core. If these are 2 threads running on the same physical core - the store would immediately affect all chronologically younger loads.
Now, the most likely thing to happen on a modern out-of-order machine is that all the loads in the pipeline at that point will be different iterations of the same first loop (since any branch predictor facing so many repetitive "taken" resolutions is likely to assume the branch will continue being taken, until proven wrong). So the first load to encounter the new value modified by the other thread will cause the matching branch to flush the entire pipe of all younger operations, without the second loop ever having a chance to execute.
However, it's possible that for some reason you did get to the second loop (let's say the predictor issued a not-taken prediction just at the right moment, when the loop condition check saw the new value) - in that case, the question boils down to this scenario:
Time -->
----------------------------------------------------------------
thread 1:
    cmp [x],0      execute
    je  ...        execute (not taken)   ; exited the first loop: saw x != 0
    ...
    cmp [x],0      execute
    jne ...        execute (not taken)   ; exited the second loop: saw x == 0
Can_We_Get_Here:
    ...

thread 2:
    store [x],1    execute
In other words, given that most modern CPUs may execute instructions out of order, can a younger load be evaluated before an older one to the same address, allowing the store (from another thread) to change the value so that it is observed inconsistently by the two loads?
My guess is that the above timeline is quite possible given the nature of out-of-order execution engines today, as they simply arbitrate and perform whatever operation is ready. However, on most x86 implementations there are safeguards to protect against such a scenario, since the memory ordering rules strictly say -
8.2.3.2 Neither Loads Nor Stores Are Reordered with Like Operations
Such mechanisms may detect this scenario and flush the machine to prevent stale/wrong values becoming visible. So the answer is: no, it should not be possible, unless of course the software or the compiler changes the nature of the code in a way that prevents the hardware from noticing the relation. Then again, memory-ordering rules are sometimes flaky, and I'm not sure all x86 manufacturers adhere to the exact same wording, but this is a pretty fundamental example of consistency, so I'd be very surprised if one of them missed it.
The answer seems to be, "this is exactly the job of CPU cache coherency." x86 processors implement the MESI protocol, which guarantees that the second thread can't see the new value and then the old.
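For completeness, here is roughly what the first example looks like with well-defined semantics in C++11, using std::atomic (sequentially consistent by default) instead of volatile; the bounded second loop is only there so this sketch terminates:

#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> x{0};   // seq_cst loads/stores by default

int main() {
    std::thread t1([] {
        while (x.load() == 0) {}                // spin until the store is visible
        std::puts("Saw non-zero");
        for (long i = 0; i < 1000000; ++i)      // nothing ever resets x, so...
            if (x.load() == 0) {
                std::puts("Saw zero again!");   // ...coherence says this never fires
                return;
            }
    });
    std::thread t2([] { x.store(1); });
    t1.join();
    t2.join();
}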

Delphi threads and variables

So this is my question. Threads are so confusing to me. Let's say I have 5 threads, and 50 or 100 or more sites. As far as I've learned about threads, I can make a constructor create(link: string) and start new threads with different links, but then I would need as many threads as there are links to parse. So how can I make the variable link shared between threads, so that when thread one downloads link listbox1.Items[0] it tells the others that number 0 is downloaded, and the next thread asks which link it should download and gets listbox1.Items[1], and so on until all the links are downloaded, at which point the threads should terminate.
Can anyone provide a simple example of how this can be done? Threads are killing me :(
You could have a thread-safe list of URLs to process, and a static-sized pool of worker threads each taking an unprocessed URL from the list at a time, processing it (downloading and parsing) and adding any found new URLs to the list, in a loop, as long as there are any unprocessed items in the list. Keep finished URLs in the list, only mark them as done, to avoid recursion.
Sounds like you simply need to set up a critical section.
This needs to be set up around the code segment which reads the next URL. To do this you would typically place a semaphore at the start of the code so that only one thread can enter it at any time; the semaphore is reset at the end of the code. When a thread sees that the URL list is exhausted, it terminates.
Typically semaphores are boolean, but they can be integers for example if you want to allow a specific number of threads to enter the region at any time.
In your case you can simply set up a global boolean variable (visible to all threads), say "fSemaphore".
At the start of the region, the thread checks the flag. If it is false it sets it to true and enters the region (to get the next URL).
If it is true, then it loops - e.g. repeat sleep(0) until (not fSemaphore).
When it exits the region, it sets fSemaphore := False;
Obviously you need to make sure you guard against a possible infinite-loop scenario. Note, too, that the check-and-set must itself be atomic (e.g. InterlockedExchange, or a TCriticalSection), otherwise two threads can both see False and enter the region together.
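For reference, here is what the guarded "next URL" region boils down to, sketched in C++ (Delphi's TCriticalSection or InterlockedExchange plays the role that std::mutex plays here; urls stands in for the listbox items):

#include <mutex>
#include <string>
#include <thread>
#include <vector>

std::vector<std::string> urls;   // the listbox items
std::size_t nextUrl = 0;
std::mutex gate;                 // the critical section around "read next URL"

void worker() {
    for (;;) {
        std::string url;
        {
            std::lock_guard<std::mutex> lk(gate);  // only one thread in here at a time
            if (nextUrl >= urls.size()) return;    // list exhausted: terminate
            url = urls[nextUrl++];
        }
        // download(url) and parse it outside the lock ...
    }
}
// Start a fixed pool of, say, 5 std::thread(worker) instances and join them all.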
Define a 'TURI' class for the request URI, result, error message and anything else needed for the web query, except for the component to be used for the URI access. Descending from TObject should be fine. Create and initialize 100 of these and push them onto a producer-consumer queue (TObjectQueue, TCriticalSection and a semaphore should do fine). Hang a few TThreads off the queue that loop around and process the TURI instances until the queue is empty, whereupon they block.
You do not say what action you need taking with the processed TURI's - they will need freeing somewhere. If you wish to notify the main thread, PostMessage the completed URI's and free them in the message-handler.
Terminate the threads? Sure, if you really have to, then queue up some object that signals them to commit suicide (a NIL, maybe - the threads can check with 'Assigned' just after popping the queue). When doing something like this, I often just leave the threads lying around even if I don't need to process any more URIs during the app run - it's not worth the typing to terminate them.
Sadly, the Delphi examples and, I'm afraid, many textbooks, don't get much further than suspend/resume control of threads (don't do it) and 'TThread.Synchronize', 'TThread.WaitFor' and 'TThread.OnTerminate'. If you get a textbook like that, take it outside and burn it - you will learn next to nothing good.

multithreading: how to process data in a vector, while the vector is being populated?

I have a single-threaded linux app which I would like to make parallel. It reads a data file, creates objects, and places them in a vector. Then it calls a compute-intensive method (.5 second+) on each object. I want to call the method in parallel with object creation. While I've looked at qt and tbb, I am open to other options.
I planned to start the thread(s) while the vector was empty. Each one would call makeSolids (below), which has a while loop that would run until interpDone==true and all objects in the vector have been processed. However, I'm a n00b when it comes to threading, and I've been looking for a ready-made solution.
QtConcurrent::map(Iter begin,Iter end,function()) looks very easy, but I can't use it on a vector that's changing in size, can I? And how would I tell it to wait for more data?
I also looked at intel's tbb, but it looked like my main thread would halt if I used parallel_for or parallel_while. That stinks, since their memory manager was recommended (open cascade's mmgt has poor performance when multithreaded).
/** Intended to be called by a thread.
\param start the first item to get from the vector
\param incr how many to skip over (4 for 4 threads)
*/
void g2m::makeSolids(uint start, uint incr) {
    uint curr = start;
    while ((!interpDone) || (lineVector.size() > curr)) {
        if (lineVector.size() > curr) {
            if (lineVector[curr]->isMotion()) {
                ((canonMotion*)lineVector[curr])->setSolidMode(SWEPT);
                ((canonMotion*)lineVector[curr])->computeSolid();
            }
            lineVector[curr]->setDispMode(BEST);
            lineVector[curr]->display();
            curr += incr;
        } else {
            uio::sleep(); // wait a little bit for interp
        }
    }
}
EDIT: To summarize, what's the simplest way to process a vector at the same time that the main thread is populating the vector?
Firstly, to benefit from threading you need to find similarly slow tasks for each thread to do. You said your per-object processing takes .5s+; how long does your file reading / object creation take? It could easily be a tenth or a thousandth of that time, in which case your multithreading approach is going to produce negligible benefit.
If that's the case (yes, I'll answer your original question soon in case it's not), then think about simultaneously processing multiple objects. Given your processing takes quite a while, the thread-creation overhead isn't terribly significant, so you could simply have your main file-reading/object-creation thread spawn a new thread and point it at the newly created object. The main thread then continues reading/creating subsequent objects. Once all objects are read/created and all the processing threads launched, the main thread "joins" (waits for) the worker threads. If this would create too many threads (thousands), then put a limit on how far ahead the main thread is allowed to get: it might read/create 10 objects then join 5, then read/create 10, join 10, read/create 10, join 10, etc. until finished.
Now, if you really want the read/create to be in parallel with the processing, but the processing to be serialised, then you can still use the above approach but join after each object. That's kind of weird if you're designing this with only this approach in mind, but good because you can easily experiment with the object processing parallelism above as well.
Alternatively, you can use a more complex approach that involves just the main thread (the one the OS creates when your program starts) and a single worker thread that the main thread must start. They should be coordinated using a mutex (a primitive ensuring mutually-exclusive, i.e. not-concurrent, access to data) and a condition variable, which allows the worker thread to block efficiently until the main thread has provided more work. The terms mutex and condition variable are the standard terms in the POSIX threading that Linux uses, so they should appear in the documentation of whichever libraries you're interested in. Summarily, the worker thread waits until the main read/create thread broadcasts a wake-up signal indicating that another object is ready for processing. You may want a counter holding the index of the last fully created, ready-for-processing object, so the worker thread can maintain its own count of processed objects and work through the ready ones before once again checking the condition variable.
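A minimal sketch of that mutex-plus-condition-variable coordination - Object and computeSolid stand in for the real types, and the producer side is shown in the trailing comments:

#include <condition_variable>
#include <mutex>
#include <vector>

struct Object { void computeSolid() { /* the 0.5s+ method */ } };

std::vector<Object*> items;     // stands in for lineVector
bool done = false;              // stands in for interpDone
std::mutex m;
std::condition_variable cv;

void worker() {
    std::size_t curr = 0;
    for (;;) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return done || items.size() > curr; });
        if (curr >= items.size()) return;   // done == true and nothing left
        Object *o = items[curr++];
        lk.unlock();                        // heavy work happens outside the lock
        o->computeSolid();
    }
}

// Producer, per object: { std::lock_guard<std::mutex> lk(m); items.push_back(o); } cv.notify_one();
// At end of file:       { std::lock_guard<std::mutex> lk(m); done = true; } cv.notify_all();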
It's hard to tell if you have been thinking about this problem deeply and there is more than you are letting on, or if you are just over thinking it, or if you are just wary of threading.
Reading the file and creating the objects is fast; the one method is slow. The dependency is each consecutive ctor depends on the outcome of the previous ctor - a little odd - but otherwise there are no data integrity issues so there doesn't seem to be anything that needs to be protected by mutexes and such.
Why is this more complicated than something like this (in crude pseudo-code):
while (!eof)
{
    readfile;
    object O(data);
    push_back(O);
    pthread_create(...., O, makeSolid);
}

while (x < vector.size())
{
    pthread_join();
    x++;
}
If you don't want to loop on the joins in your main then spawn off a thread to wait on them by passing a vector of TIDs.
If the number of created objects/threads is insane, use a thread pool. Or put a counter in the creation loop to limit the number of threads that can be created before running ones are joined.
@Caleb: quite - perhaps I should have emphasized active threads. The GUI thread should always be counted as one.
