How to synchronize multiple threads to one?

I have a multithreaded application in which I want all but one of the threads to run concurrently. However, when that one specific thread wakes up, I need the rest of the threads to block.
My current implementation is:
#include <mutex>

std::mutex mutex;

void ManyBackgroundThreadsDoingWork()
{
    std::lock_guard<std::mutex> lock(mutex); // acquire the mutex
    DoTheBackgroundWork();
}                                            // mutex released here

void MainThread()
{
    std::lock_guard<std::mutex> lock(mutex);
    DoTheMainThreadWork();
}
This works, in that it does keep the background threads from operating inside the critical section while the main thread is doing its work. However, there is a lot of contention for the mutex amongst the background threads even when they don't actually need to exclude one another: the main thread runs only intermittently, and the background threads are able to run concurrently with each other, just not with the main thread.
What I've effectively done is reduce a multithreaded architecture to a single-threaded one using locks, which is silly. What I really want is an architecture that is multithreaded most of the time, but that waits while a small operation completes and then goes back to being multithreaded.
Edit: An explanation of the problem.
What I have is an application that displays multiple video feeds coming from PCIe capture cards. The PCIe capture card driver issues callbacks, on threads it manages, into what is effectively the ManyBackgroundThreadsDoingWork function. In this function I copy the captured video frames into buffers for rendering. The main thread is the render thread, which runs intermittently. The copy threads need to block during the render to prevent tearing of the video.
My initial approach was simply to double buffer, but that is not really an option because the capture card driver won't let me buffer frames without pushing them through system memory. The technique being used is AMD's "DirectGMA", which allows the capture card to push video frames directly into GPU memory. The only method for synchronization is to put a GL fence and a mutex around the actual rendering, as the capture card will be continuously streaming data into GPU memory. The driver offers no indication of when a frame transfer completes; the callback supplies enough information for me to know that a frame is ready to be transferred, at which point I trigger the transfer. However, I need to block transfers during the scene render to prevent tearing and artifacts in the video. The technique described above is the one suggested by the PCIe card manufacturer. It breaks down, however, when you want more than one video playing at a time. Thus, the question.

You need a lock that supports both shared and exclusive locking modes, sometimes called a readers/writer lock. This permits multiple threads to get read (shared) locks until one thread requests an exclusive (write) lock.
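For example, with C++17's std::shared_mutex; a minimal sketch reusing the function names from the question:

#include <shared_mutex>

std::shared_mutex rwLock;

void ManyBackgroundThreadsDoingWork()
{
    std::shared_lock<std::shared_mutex> lock(rwLock); // shared: background threads may overlap
    DoTheBackgroundWork();
}

void MainThread()
{
    std::unique_lock<std::shared_mutex> lock(rwLock); // exclusive: blocks all background threads
    DoTheMainThreadWork();
}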

Related

Choice between thread and process

I am designing a program in C++ running in Ubuntu Linux and developed with Eclipse.
This program is implemented as an infinite loop (main loop), communicating with several interfaces (GUI, RS485, etc.).
One of the tasks of the main loop is to read information from a DSP via the I2C interface every millisecond. Every reading operation through I2C takes around 0.5 ms.
My idea is to have a separate process or thread manage the I2C interface, while the main controller simply reads the data written by that separate thread. This avoids blocking the main loop for a long time.
The question is: is it better to implement this separate process as a separate program or as a thread?
My idea is that a thread would be much better, because it can use the same memory space as the main program and read/write the same memory buffer directly. The only care to be taken is to use a mutex to access the shared buffer safely.
With a separate program, the communication would instead need shared memory (shmget), sockets, pipes, etc.
So the choice seems almost too easy, but I want to be sure I am not missing any details. In which cases would it be better to use a separate program instead of a thread?
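For reference, a minimal sketch of the threaded variant described above (readDspOverI2c() is a hypothetical helper standing in for the real blocking I2C transaction):

#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

double readDspOverI2c();           // hypothetical: performs the ~0.5 ms blocking I2C read

std::mutex bufMutex;
double latestReading = 0.0;        // shared buffer (a single value, for brevity)
std::atomic<bool> running{true};

void i2cReaderThread()
{
    while (running) {
        double value = readDspOverI2c();
        {
            std::lock_guard<std::mutex> lock(bufMutex);
            latestReading = value;
        }
        // Crude pacing; a real loop would schedule against absolute deadlines.
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

double mainLoopRead()              // called from the main loop; never blocks on I2C
{
    std::lock_guard<std::mutex> lock(bufMutex);
    return latestReading;
}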

Which Vulkan functions are blocking?

In Vulkan, it is recommended to split API calls across separate threads for better throughput. I am unsure which categories of calls are the computationally expensive ones that would cause a thread to block, and thus should be issued asynchronously.
As I see it, these are the potential calls/family-of-calls that could take a long time to execute.
vkAcquireNextImageKHR()
vkQueueSubmit()
vkQueuePresentKHR()
memcpy into mapped memory
vkBegin/EndCommandBuffer
vkCmd* calls for drawing and compute
But the more I think about them, the more it seems that most would be fairly cheap to call. I'll explain my rationale, which is probably flawed.
vkAcquireNextImageKHR()
This could block if you choose a nonzero timeout. But it's likely that a sufficiently optimized app would call this function with a timeout of 0 and just do other work if the image is not yet available. So this function can be made effectively instant; there's no need to wait if the app is smart enough.
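For illustration, a polling acquire might look like this (a sketch; the device, swapchain, and semaphore are assumed to have been created elsewhere):

uint32_t imageIndex;
VkResult r = vkAcquireNextImageKHR(device, swapchain, 0 /* timeout: don't wait */,
                                   imageAvailableSem, VK_NULL_HANDLE, &imageIndex);
if (r == VK_NOT_READY) {
    // No image available yet: go do other useful work and try again later.
}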
vkQueueSubmit()
This function takes a fence, which will be signaled when the GPU has finished executing the command buffers, so it doesn't actually wait around while the GPU performs the work. I'm assuming this function is the one that starts the physical movement of the command buffer data to the GPU, but that it merely tells the hardware to read from some memory location and then returns as quickly as possible, without waiting around while the command buffers are sent to the GPU.
vkQueuePresentKHR()
Signal to the GPU to send some image to the window/monitor. It doesn't have to wait for much, does it?
memcpy into mapped memory
This is probably slow.
vkCmd* calls
This family of calls is the one I'm most unsure about. When I read about threads and Vulkan, it's usually these calls that get put onto the threads. But, what are these calls doing, really? Are they building some opcode buffer, made up of some ints and pointers, to be sent to the GPU? If so, that should be extremely fast. The actual work would be carrying out the operations described by those opcodes.
Define "block". The traditional definition of "block"ing is to wait on some internal synchronization, and thereby taking longer than would strictly be necessary for the operation. Doing a memcpy is not doing any synchronization; it's just copying data.
So you don't seem to be concerned about "block"ing; you're merely talking about what operations are expensive.
vkQueueSubmit does not block. But that doesn't mean it's not expensive. It is not "tell[ing] the hardware to read from some memory location." Just look at its interface. It doesn't take a single command buffer; it takes an arbitrary number of them, grouped into batches, with each batch waiting on semaphores before execution and signaling semaphores after execution, and the whole operation signaling a fence.
You cannot reasonably expect an implementation of such a thing to merely copy some pointers around.
And that doesn't even get into issues of different types of command buffers. Submitting SIMULTANEOUS_USE command buffers may require creating temporary copies of their buffered data, so that different batches can contain the same command buffer.
Now obviously, vkQueueSubmit is going to return well before any of the work it submits actually gets executed. But don't make the mistake of thinking that it's free to ship work off to the GPU. The Vulkan specification takes time out in a note to directly tell you not to call the function any more frequently than you can get away with:
Submission can be a high overhead operation, and applications should attempt to batch work together into as few calls to vkQueueSubmit as possible.
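To illustrate the point, batching looks something like this (a sketch; cmdBuffers, queue, and fence are assumed to exist):

// One call, N command buffers: far cheaper than N separate vkQueueSubmit calls.
VkSubmitInfo submit = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
submit.commandBufferCount = static_cast<uint32_t>(cmdBuffers.size());
submit.pCommandBuffers    = cmdBuffers.data();
vkQueueSubmit(queue, 1, &submit, fence);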
The reason to present on the same thread that submitted the CBs that generate the image being presented is not that any of those operations are necessarily slow. It's simple pragmatism; these three operations (acquire, submit, present) must happen in order, and the simplest and easiest way to ensure that is to do them on the same thread.
You cannot submit work that renders to a swapchain image until you have acquired it. Therefore, either you do it on the same thread, or you have to have some inter-thread communication pipe to tell the thread waiting to build the primary CB what the acquired image is. The two processes cannot overlap.
Unlike acquire, present is a queue operation. And both vkQueueSubmit and vkQueuePresentKHR require that access to their VkQueue parameters be "externally synchronized". That of course means that you cannot call them both from different threads, on the same VkQueue, at the same time. So if you tried to do these in parallel, you'd need a mutex or something to synchronize CPU access to the VkQueue.
Whereas if you do them on the same thread, there's no need.
Additionally, in order to present an image, you must provide a semaphore that the present will wait on. This semaphore will get signaled by the batch that generates data for the image. Vulkan requires semaphore signal/wait pairs to be ordered; you cannot perform a queue operation that waits on a semaphore until the operation that signals that semaphore has been submitted. Therefore, either you do it on the same thread in sequence, or you use some inter-thread communication pipe to tell whatever thread is waiting to present the image that the submit operation that renders to it has been issued.
So what is to be gained by splitting these operations up onto different threads? They have to happen in sequence, so you may as well do them in sequence the easiest way that exists: on the same thread.
While timeline semaphores now allow you to call the present function before submitting the work that increments the semaphore counter, you still can't call them on separate threads (without synchronization) because they affect the same queue. So you may as well issue them on the same thread (though not necessarily in acquire, submit, present order).
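Put together, the classic in-order version of one frame on a single thread looks roughly like this (a sketch; error handling omitted, and all handles assumed created elsewhere):

// Acquire -> submit -> present, in order, on one thread.
uint32_t imageIndex;
vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                      imageAvailableSem, VK_NULL_HANDLE, &imageIndex);

VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
VkSubmitInfo submit = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
submit.waitSemaphoreCount   = 1;
submit.pWaitSemaphores      = &imageAvailableSem;  // rendering waits for the acquire
submit.pWaitDstStageMask    = &waitStage;
submit.commandBufferCount   = 1;
submit.pCommandBuffers      = &cmdBuffers[imageIndex];
submit.signalSemaphoreCount = 1;
submit.pSignalSemaphores    = &renderFinishedSem;  // the present waits on this
vkQueueSubmit(queue, 1, &submit, inFlightFence);

VkPresentInfoKHR present = { VK_STRUCTURE_TYPE_PRESENT_INFO_KHR };
present.waitSemaphoreCount = 1;
present.pWaitSemaphores    = &renderFinishedSem;
present.swapchainCount     = 1;
present.pSwapchains        = &swapchain;
present.pImageIndices      = &imageIndex;
vkQueuePresentKHR(queue, &present);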
Ultimately, it's not clear what the point of this exercise is. Yes, an individual vkCmd* call will be pretty fast. So what? In a real scene, you will be calling these functions thousands of times per frame; spreading that work evenly across 4 cores can cut the time spent on it by roughly 4x.

Vulkan Queue Synchronization in Multithreading

In my application it is imperative that "state" and "graphics" are processed in separate threads. So for example, the "state" thread is only concerned with updating object positions, and the "graphics" thread is only concerned with graphically outputting the current state.
For simplicity, let's say that the entirety of the state data is contained within a single VkBuffer. The "state" thread creates a compute pipeline with a storage buffer backed by the VkBuffer, and periodically calls vkCmdDispatch to update the VkBuffer.
Concurrently, the "graphics" thread creates a graphics pipeline with a uniform buffer backed by the same VkBuffer, and periodically draws and presents (vkQueuePresentKHR).
Obviously there must be some sort of synchronization mechanism to prevent the "graphics" thread from reading from the VkBuffer whilst the "state" thread is writing to it.
The only idea I have is to hold a host mutex from vkQueueSubmit to vkWaitForFences in both threads.
I want to know: is there some other method that is more efficient, or is this considered OK?
Try using semaphores. They are used to synchronize operations solely on the GPU, which is much more efficient than waiting in the app and only submitting work after previous work is fully processed.
When you submit work, you can provide a semaphore that gets signaled when this work is finished. When you submit further work, you can provide the same semaphore for the second batch to wait on. Processing of the second batch will then start automatically when the semaphore gets signaled (the semaphore is also automatically unsignaled and can be reused).
(There are some constraints on using semaphores, associated with queues, but they shouldn't affect you here: when you use a semaphore as a wait semaphore during submission, no other queue can wait on the same semaphore.)
There are also events in Vulkan, which can be used for similar purposes, but their use is a little more complicated.
If you really need to synchronize the GPU with your application, use fences. They are signaled in a similar way to semaphores, but you can check their state on the app side, and you need to manually unsignal them before you can use them again.
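A sketch of that fence round-trip (the device and fence are assumed to exist, and the fence to have been passed to a vkQueueSubmit call):

// Block the CPU until the batch guarded by `fence` has finished on the GPU.
vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
vkResetFences(device, 1, &fence);   // fences must be manually unsignaled before reuse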
[EDIT]
I've added an image that more or less shows what I think you should do. One thread calculates state and, with each submission, adds a semaphore to the top of a list (or a ring buffer, as #NicolasBolas wrote). This semaphore gets signaled when the submission is finished (it is provided in pSignalSemaphores during the "compute" batch submission).
The second thread renders your scene. It manages its own list of semaphores, similarly to the compute thread. But when you want to render, you need to be sure that the compute thread has finished its calculations. That's why you take the latest "compute" semaphore and wait on it (provide it in pWaitSemaphores during the "render" batch submission). Once you have submitted rendering commands, the compute thread can't be allowed to start modifying the data, because that may influence the results of the rendering. So the compute thread also needs to wait until the most recent rendering is done, which is why it too provides a wait semaphore (the most recent "rendering" semaphore).
You just need to synchronize the submissions: the rendering thread cannot submit while the compute thread is submitting commands, and vice versa. That's why adding semaphores to the lists (and taking semaphores from them) should be synchronized. But this has nothing to do with Vulkan; some ordinary mutex will be helpful here (for example, a C++-ish std::lock_guard<std::mutex>). And this synchronization is only a problem when you have a single buffer.
Another question is what to do with old semaphores from both lists. You cannot directly check their state, and you cannot directly unsignal them. Their state can be checked by using additional fences provided with each submission: you don't wait on them, but from time to time you check whether a given fence is signaled, and if it is, you can destroy the old semaphore (since you cannot unsignal it from the application), or you can make an empty submission, with no command buffers, that uses the semaphore as a wait semaphore. This way the semaphore is unsignaled and can be reused. I don't know which solution is more optimal: destroying old semaphores and creating new ones, or unsignaling them with empty submissions.
When you have a single buffer, a one-element list/ring is probably enough. A more optimal solution would use some kind of ping-pong set of buffers: you read data from one buffer but store the results in another, and in the next step you swap them. That's why, in the image above, the lists of semaphores (rings) may have more elements depending on your setup. The more independent buffers and semaphores in the lists (up to some reasonable count), the better the performance, as you reduce time wasted on waiting. But this complicates your code, and it may also increase lag (the rendering thread gets data that is a bit older than the data currently being processed by the compute thread). So you may need to balance performance, code complexity, and rendering lag.
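A minimal sketch of the two cross-waiting submissions (one buffer, one semaphore in each direction; all setup, and the first-frame bootstrapping, is omitted):

// Compute batch: wait for the previous render, signal computeDoneSem when finished.
// (On the very first frame, skip the wait: nothing has signaled renderDoneSem yet.)
VkPipelineStageFlags computeWait = VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT;
VkSubmitInfo computeSubmit = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
computeSubmit.waitSemaphoreCount   = 1;
computeSubmit.pWaitSemaphores      = &renderDoneSem;
computeSubmit.pWaitDstStageMask    = &computeWait;
computeSubmit.commandBufferCount   = 1;
computeSubmit.pCommandBuffers      = &computeCmdBuf;
computeSubmit.signalSemaphoreCount = 1;
computeSubmit.pSignalSemaphores    = &computeDoneSem;
vkQueueSubmit(computeQueue, 1, &computeSubmit, computeFence);

// Render batch: wait for the latest compute result, signal renderDoneSem when finished.
VkPipelineStageFlags renderWait = VK_PIPELINE_STAGE_VERTEX_SHADER_BIT;
VkSubmitInfo renderSubmit = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
renderSubmit.waitSemaphoreCount   = 1;
renderSubmit.pWaitSemaphores      = &computeDoneSem;
renderSubmit.pWaitDstStageMask    = &renderWait;
renderSubmit.commandBufferCount   = 1;
renderSubmit.pCommandBuffers      = &graphicsCmdBuf;
renderSubmit.signalSemaphoreCount = 1;
renderSubmit.pSignalSemaphores    = &renderDoneSem;
vkQueueSubmit(graphicsQueue, 1, &renderSubmit, renderFence);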
How you do this depends on two factors:
Whether you want to dispatch the compute operation on the same queue as its corresponding graphics operation.
The ratio of compute operations to their corresponding graphics operations.
#2 is the most important part.
Even though they are generated in separate threads, there must be at least some notion that a particular graphics operation is being fed by a particular compute operation (otherwise, how would the graphics thread know where to read the data from?). So how do you do that?
At the end of the day, that part has nothing to do with Vulkan. You need to use some inter-thread communication mechanism to allow the graphics thread to ask, "which compute task's data should I be using?"
Typically, this would be done by having the compute thread add every compute operation it performs to some kind of circular buffer (thread-safe, of course, and non-locking). When the graphics thread goes to decide where to read its data from, it asks the circular buffer for the most recently added compute operation.
In addition to the "where to read its data from" information, this would also provide the graphics thread with an appropriate Vulkan synchronization primitive to use to synchronize its command buffer(s) with the compute operation's CB.
If the compute and graphics operations are being dispatched on the same queue, then this is pretty simple. There doesn't have to be an actual synchronization primitive. So long as the graphics CBs are issued after the compute CBs in the batch, all the graphics CBs need is a vkCmdPipelineBarrier at the front which waits on all memory operations from the compute stage.
srcStageMask would be STAGE_COMPUTE_SHADER_BIT, with dstStageMask being, well, pretty much everything (you could narrow it down, but it won't matter much, since at the very least your vertex shader stage will need to be there).
You would need a single VkMemoryBarrier in the pipeline barrier. Its srcAccessMask would be SHADER_WRITE_BIT, while the dstAccessMask would be however you intend to read it. If the compute operations wrote vertex data, you need VERTEX_ATTRIBUTE_READ_BIT. If they wrote uniform buffer data, you need UNIFORM_READ_BIT. And so on.
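Assuming the compute results are consumed as vertex data, that barrier would look like:

VkMemoryBarrier barrier = { VK_STRUCTURE_TYPE_MEMORY_BARRIER };
barrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;
barrier.dstAccessMask = VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT;

vkCmdPipelineBarrier(graphicsCmdBuf,
                     VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  // srcStageMask
                     VK_PIPELINE_STAGE_ALL_COMMANDS_BIT,    // dstStageMask ("pretty much everything")
                     0,                                     // no dependency flags
                     1, &barrier,                           // one global memory barrier
                     0, nullptr, 0, nullptr);               // no buffer/image barriers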
If you're dispatching these operations on separate queues, that's where you need an actual synchronization object.
There are several problems:
You cannot detect from user code whether a Vulkan semaphore has been signaled, nor can you set a semaphore to the unsignaled state from user code. Nor can you reasonably submit a batch containing a semaphore that is currently signaled with nobody waiting on it. You can do the latter, but it won't do the right thing.
In short, you can never submit a batch that signals a semaphore unless you are certain that some process is going to wait for it.
You cannot issue a batch that waits on a semaphore, unless a batch that signals it is "pending execution". That is, your graphics thread cannot vkQueueSubmit its batch until it is certain that the compute queue has submitted its signaling batch.
So what you have to do is this: when the graphics thread goes to get its compute data, it must signal the compute thread to add a semaphore to its next submit call. When the graphics thread submits its graphics operation, it then waits on that semaphore.
But to ensure proper ordering, the graphics thread cannot submit its operation until the compute thread has submitted the semaphore signaling operation. That requires a CPU-synchronization operation of some form. It could be as simple as the graphics thread polling an atomic variable set by the compute thread.
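For instance (a sketch; the flag name is made up):

#include <atomic>
#include <thread>

std::atomic<bool> computeSubmitted{false};

void computeThreadAfterSubmit()    // call immediately after vkQueueSubmit of the signaling batch
{
    computeSubmitted.store(true, std::memory_order_release);
}

void graphicsThreadBeforeSubmit()  // call before vkQueueSubmit of the waiting batch
{
    while (!computeSubmitted.load(std::memory_order_acquire))
        std::this_thread::yield(); // or park on a condition variable instead of spinning
}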

Can a single thread do everything that multiple threads can do?

My 1st Question: As per the title.
I am asking this because I came across a StackExchange question: What can multiple threads do that a single thread cannot?
One of the answers at that link states that whatever multiple threads can do, a single thread can do as well.
However, I don't think this is true. My argument is this: suppose we build a simple chat program with socket programming and run it via the command console. If the chat program is single-threaded, it is effectively half-duplex: we cannot listen and talk concurrently, so at any moment only one party can talk while the other listens. For both parties to be able to send and receive messages concurrently, we have to implement it with multiple threads.
My 2nd question: is my argument correct? Or did I miss some points, so that a single thread can still do everything multiple threads do?
Let's consider the computer as a whole; more precisely, consider your chat application bound together with the kernel (or the whole OS) as one piece we will call "the software".
Now consider that this "software" runs on a single core (say, an i386).
Then you can see that, even if you wrote your chat application using threads (which is probably overkill), the software as a whole runs on a single CPU core, which means that at any given moment it performs one single thing, even if parallel things seem to be happening.
This is nothing more than a Turing machine (using a single tape): https://en.wikipedia.org/wiki/Turing_machine
The parallelism is an illusion created by the kernel, which can switch between tasks fast enough. It is just like film, which seems to be a continuous picture on screen when there are actually only 24 images per second, which is enough to fool our brains.
So I would say that anything a multithreaded program does, a single-threaded one could do too.
Nevertheless, we now all use multi-core CPUs, which can to some extent be seen as multiple computers running at the same time (parallel computing); so you can probably find software that works on multiple cores but would not run single-threaded.
A good example is device drivers (in the kernel). With a poor implementation on a non-preemptive kernel, you can create a busy loop that waits for an event indefinitely. This usually deadlocks on a single core (you prevent the kernel from scheduling another task, and thus prevent the event from ever being sent), but it can work on a multi-core machine, as the event is usually, eventually, sent by another thread running on another core (hopefully).
I want to amend the existing answer (+1):
You absolutely can run multiple parallel IOs on a single thread. An IO is nothing more than a kernel data structure. When you start the IO, the OS talks to the hardware and tells it to do something; then the CPU is free to do whatever it wants. The hardware calls back into the OS when it's done, issuing an interrupt that hijacks a CPU core to process the completion notification.
This is called async IO, and all OSes provide it.
In fact, this is how socket programs with many connections run: they use async IO to multiplex large numbers of connections onto a small pool of threads.
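One common shape of this on POSIX systems is a single thread multiplexing all of its sockets with poll(); a sketch, with error handling omitted:

#include <poll.h>
#include <unistd.h>
#include <vector>

void eventLoop(std::vector<int>& sockets)
{
    std::vector<pollfd> fds;
    for (int s : sockets)
        fds.push_back(pollfd{ s, POLLIN, 0 });

    for (;;) {
        poll(fds.data(), static_cast<nfds_t>(fds.size()), -1); // one thread waits on every connection
        for (pollfd& p : fds) {
            if (p.revents & POLLIN) {
                char buf[4096];
                ssize_t n = read(p.fd, buf, sizeof buf);
                if (n > 0) { /* handle incoming data; writes can be multiplexed the same way */ }
            }
        }
    }
}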
The core reason why this argument is incorrect is subtle. While it's true that with only a single thread, or a single core, or a single network interface, that particular component can only be handling a send or a receive at any given instant, if it's not the critical path, it does not make sense to describe the overall system as half-duplex.
Consider a network link that is full-duplex and takes 1 ms to move a chunk of data from one end to the other. Now imagine we have a device that puts data on the link or removes data from the link, but cannot do both at the same time. So long as it takes much less than 1 ms to process a send or a receive, this single-file path that data in both directions must go through does not somehow make the link half-duplex. There will still be data moving in both directions at the same time.
In any realistic chat application, the CPU will not be the limiting factor, so its inability to do more than one thing at a time cannot make the system half-duplex. There can still be data moving in both directions at the same time.
For a typical chat application under typical load, the behavior of the system will not be significantly different whether the implementation uses a single thread or multiple threads with infinite CPU resources. The CPU just won't be the limiting factor.

Interesting thread synchronization problem

I am trying to come up with a synchronization model for the following scenario:
I have a GUI thread that is responsible for CPU-intensive animation plus blocking I/O. The GUI thread retrieves images from the network and puts them in a shared buffer; these images are processed (a CPU-intensive operation, done by a worker thread) and then animated (again CPU-intensive, done by the GUI thread).
The processing of images is done by a worker thread: it retrieves images from the shared buffer, processes them, and puts them in an output buffer.
There is only one CPU, and the GUI thread should not get scheduled out while it is animating the images (the animation has to be really smooth). This means the worker thread should get the CPU only while the GUI thread is waiting for an I/O operation to complete.
How do I go about achieving this? This looks like a classic producer-consumer problem, but I am not quite sure how I can guarantee that the animation will be as smooth as possible (I am open to using more threads).
I would like to use QThreads (Qt framework) for platform independence, but I can consider pthreads for more control (currently we are only aiming for Linux).
Any ideas?
EDIT:
I guess the problem boils down to one thing: how do I ensure that the animation thread is not interrupted while it is animating the images? (The animation runs when the user goes from one page to another; all the images on the new page are animated before being shown in their proper place. This is a small operation, but it must be really smooth.) The worker thread can only run when the animation is over.
Just thinking out loud here, but it sounds like you have two compute-intensive tasks, animation and processing, and you want animation always to have priority over processing. If that is correct, then maybe instead of putting these tasks in separate threads you could have a single thread that handles both animation and processing.
For instance, the thread could have two task queues, one for animation jobs and one for processing jobs, and it would only start a job from the processing queue when the animation queue is empty. But this will only work well if each individual processing job is relatively small and/or interruptible at arbitrary points (otherwise animation jobs will get delayed, which is not what you want).
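A sketch of that single-thread, two-queue approach (all names invented for illustration):

#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>

std::mutex queueMutex;
std::condition_variable queueCv;
std::deque<std::function<void()>> animationJobs;   // always served first
std::deque<std::function<void()>> processingJobs;  // only run when no animation is pending

void workerLoop()
{
    for (;;) {
        std::function<void()> job;
        {
            std::unique_lock<std::mutex> lock(queueMutex);
            queueCv.wait(lock, [] {
                return !animationJobs.empty() || !processingJobs.empty();
            });
            if (!animationJobs.empty()) {          // animation has strict priority
                job = std::move(animationJobs.front());
                animationJobs.pop_front();
            } else {
                job = std::move(processingJobs.front());
                processingJobs.pop_front();
            }
        }
        job();   // processing jobs must be short, or animation jobs will be delayed
    }
}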
The first big question is: do I really need threads? Qt's event system and network objects make it easy to avoid the technical burden of threads and all the snags that come with them.
Have a look at alternative ways to address these issues here and here. These techniques are great if you are sticking to pure Qt code and do not depend on a third-party library. If you must use a third-party lib that makes blocking calls, then sure, you can use threads.
Here is an example of a producer-consumer.
Also have a look at Advanced Qt Programming: Creating Great Software with C++ and Qt 4
My advice is to start without threads and see how it fares; you can always refactor to use threads later. So it's best to design your objects/architecture without too much coupling.
If you want, you can post some code to give more context.
