Efficient consumer thread with multiple producers - multithreading

I am trying to make a producer/consumer thread situation more efficient by skipping expensive event operations when possible, with something like:
//cas(variable, compare, set) is atomic compare and swap
//queue is already lock free
running = false

// Add item to queue – producer thread(s)
if(cas(running, false, true))
{
    // We effectively obtained a lock on signalling the event
    add_to_queue()
    signal_event()
}
else
{
    // Most of the time, if things are busy, we should not be signalling the event
    add_to_queue()
    if(cas(running, false, true))
        signal_event()
}
...
// Process queue, single consumer thread
reset_event()
while(1)
{
    wait_for_auto_reset_event() // Preferably IOCP
    for(int i = 0; i < SpinCount; ++i)
        process_queue()
    cas(running, true, false)
    if(queue_not_empty())
        if(cas(running, false, true))
            signal_event()
}
Obviously, getting these things correct is a little tricky(!), so: is the above pseudocode correct? A solution that signals the event more often than strictly needed is OK, but not one that does so for every item.

This falls into the sub-category of "stop messing about and go back to work" known as "premature optimisation". :-)
If the "expensive" event operations are taking up a significant portion of time, your design is wrong, and rather than use a producer/consumer you should use a critical section/mutex and just do the work from the calling thread.
I suggest you profile your application if you are really concerned.
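For illustration, a minimal sketch of that suggestion (Item and process() are hypothetical names, not from the question):

#include <mutex>

std::mutex work_mtx;

// Instead of queueing for a consumer thread, each producer does the
// work itself, serialized by a mutex; no event signalling is needed.
void submit(Item item)
{
    std::lock_guard<std::mutex> lk(work_mtx);
    process(item); // runs on the calling thread
}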
Updated:
Correct answer:
Producer
ProducerAddToQueue(pQueue, pItem){
    EnterCriticalSection(pQueue->pCritSec);
    if(IsQueueEmpty(pQueue)){
        SignalEvent(pQueue->hEvent);
    }
    AddToQueue(pQueue, pItem);
    LeaveCriticalSection(pQueue->pCritSec);
}
Consumer
nCheckQuitInterval = 100; // Every 100 ms the consumer checks whether it should quit.
ConsumerRun(pQueue)
{
    while(!ShouldQuit())
    {
        Item* pCurrentItem = NULL;
        EnterCriticalSection(pQueue->pCritSec);
        if(IsQueueEmpty(pQueue))
        {
            ResetEvent(pQueue->hEvent);
        }
        else
        {
            pCurrentItem = RemoveFromQueue(pQueue);
        }
        LeaveCriticalSection(pQueue->pCritSec);
        if(pCurrentItem){
            ProcessItem(pCurrentItem);
            pCurrentItem = NULL;
        }
        else
        {
            // Wait for items to be added.
            WaitForSingleObject(pQueue->hEvent, nCheckQuitInterval);
        }
    }
}
Notes:
The event is a manual-reset event.
The operations protected by the critical section are quick. The event is only set or reset when the queue transitions to/from empty state. It has to be set/reset within the critical section to avoid a race condition.
This means the critical section is only held for a short time, so contention will be rare.
Critical sections don't block unless they are contended, so context switches will be rare.
Assumptions:
This is a real problem, not homework.
Producers and consumers spend most of their time doing other stuff, i.e. getting the items ready for the queue, processing them after removing them from the queue.
If they are spending most of the time doing the actual queue operations, you shouldn't be using a queue. I hope that is obvious.

Went through a bunch of cases and can't see an issue, but it's somewhat complicated. I thought maybe you would have an issue with queue_not_empty / add_to_queue racing, but it looks like the post-dominating CAS in both paths covers this case.
CAS is expensive (not as expensive as signal). If you expect skipping the signal to be common, I would code the CAS as follows:
bool cas(variable, old_val, new_val) {
    if (variable != old_val) return false // cheap plain read; skip the expensive locked op
    return atomic_cmpxchg(variable, old_val, new_val) // the real CAS, e.g. LOCK CMPXCHG
}
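In C++ terms, this early-exit pattern (test-and-test-and-set) might look like the following sketch:

#include <atomic>

// Plain load first: skip the bus-locked read-modify-write when the
// CAS would fail anyway, which is the common case when busy.
bool cas(std::atomic<bool>& v, bool old_val, bool new_val)
{
    if (v.load(std::memory_order_relaxed) != old_val)
        return false;
    bool expected = old_val;
    return v.compare_exchange_strong(expected, new_val);
}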
Lock-free structures like this are the stuff that Jinx (the product I work on) is very good at testing. So you might want to use an eval license to test the lock-free queue and signal optimization logic.
Edit: maybe you can simplify this logic.
running = false

// add item to queue – producer thread(s)
add_to_queue()
if (cas(running, false, true)) {
    signal_event()
}

// Process queue, single consumer thread
reset_event()
while(1)
{
    wait_for_auto_reset_event() // Preferably IOCP
    for(int i = 0; i < SpinCount; ++i)
        process_queue()
    cas(running, true, false) // this could just be a memory-barriered store of false
    if(queue_not_empty())
        if(cas(running, false, true))
            signal_event()
}
Now that the cas/signal are always next to each other they can be moved into a subroutine.
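For instance, a sketch of that subroutine using std::atomic (signal_event() here stands in for whatever event API is in use, and is assumed, not defined by the post):

#include <atomic>

std::atomic<bool> running{false};
void signal_event(); // e.g. sets a Win32 auto-reset event (assumed)

// Producer: call after add_to_queue(). Consumer: call after observing
// a non-empty queue. Only the caller that flips running from false to
// true signals, so the event fires once per idle period.
void wake_if_idle()
{
    bool expected = false;
    if (running.compare_exchange_strong(expected, true))
        signal_event();
}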

Why not just associate a bool with the event? Use cas to set it to true, and if the cas succeeds, signal the event, because the event must have been clear. The waiter can then just clear the flag before it waits:
bool flag = false;

// producer
add_to_queue();
if(cas(flag, false, true))
{
    signal_event();
}

// consumer
while(true)
{
    while(queue_not_empty())
    {
        process_queue();
    }
    cas(flag, true, false); // clear the flag
    if(queue_is_empty())
        wait_for_auto_reset_event();
}
This way, you only wait if there are no elements on the queue, and you only signal the event once for each batch of items.

I believe you want to achieve something like in this question:
WinForms Multithreading: Execute a GUI update only if the previous one has finished. It is specific to C# and WinForms, but the structure may well apply to you.


How to implement a re-usable thread barrier with std::atomic

I have N threads performing various tasks, and these threads must be regularly synchronized with a thread barrier, as illustrated below with 3 threads and 8 tasks. The || indicates the temporal barrier: all threads have to wait until the completion of 8 tasks before starting again.
Thread#1 |----task1--|---task6---|---wait-----||-taskB--| ...
Thread#2 |--task2--|---task5--|-------taskE---||----taskA--| ...
Thread#3 |-task3-|---task4--|-taskG--|--wait--||-taskC-|---taskD ...
I couldn’t find a workable solution, though The Little Book of Semaphores (http://greenteapress.com/semaphores/index.html) was inspiring. I came up with a solution, shown below, which “seems” to be working using three std::atomic variables.
I am worried about my code breaking down on corner cases, hence the quoted verb. So can you share advice on verifying such code? Do you have simpler, foolproof code available?
std::atomic<int> barrier1(0);
std::atomic<int> barrier2(0);
std::atomic<int> barrier3(0);

void my_thread()
{
    while(1) {
        // pop task from queue
        ...
        // and execute task
        switch(task.id()) {
        case TaskID::Barrier:
            // Phase 1: reset barrier2 for the next phase, then arrive
            barrier2.store(0);
            barrier1++;
            while (barrier1.load() != NUM_THREAD) {
                std::this_thread::yield();
            }
            // Phase 2: reset barrier3 for the next phase
            barrier3.store(0);
            barrier2++;
            while (barrier2.load() != NUM_THREAD) {
                std::this_thread::yield();
            }
            // Phase 3: reset barrier1 for the next round
            barrier1.store(0);
            barrier3++;
            while (barrier3.load() != NUM_THREAD) {
                std::this_thread::yield();
            }
            break;
        case TaskID::Task1:
            ...
        }
    }
}
Boost offers a barrier implementation as an extension to the C++11 standard thread library. If using Boost is an option, you should look no further than that.
If you have to rely on standard library facilities, you can roll your own implementation based on std::mutex and std::condition_variable without too much of a hassle.
class Barrier {
    int wait_count;
    int const target_wait_count;
    std::mutex mtx;
    std::condition_variable cond_var;

public:
    Barrier(int threads_to_wait_for)
        : wait_count(0), target_wait_count(threads_to_wait_for) {}

    void wait() {
        std::unique_lock<std::mutex> lk(mtx);
        ++wait_count;
        if(wait_count != target_wait_count) {
            // not all threads have arrived yet; go to sleep until they do
            cond_var.wait(lk,
                [this]() { return wait_count == target_wait_count; });
        } else {
            // we are the last thread to arrive; wake the others and go on
            cond_var.notify_all();
        }
        // note that if you want to reuse the barrier, you will have to
        // reset wait_count to 0 now before calling wait again;
        // if you do this, be aware that the reset must be synchronized with
        // threads that are still stuck in the wait
    }
};
This implementation has the advantage over your atomics-based solution that threads waiting in condition_variable::wait should get sent to sleep by your operating system's scheduler, so you don't block CPU cores by having waiting threads spin on the barrier.
A few words on resetting the barrier: The simplest solution is to just have a separate reset() method and have the user ensure that reset and wait are never invoked concurrently. But in many use cases, this is not easy to achieve for the user.
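A minimal sketch of such a reset() for the Barrier class above, valid only under that no-concurrent-use guarantee:

    // Caller must guarantee that no thread is inside wait() when this runs.
    void reset() {
        std::lock_guard<std::mutex> lk(mtx);
        wait_count = 0;
    }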
For a self-resetting barrier, you have to consider races on the wait count: if the wait count is reset before the last thread has returned from wait, some threads might get stuck in the barrier. A clever solution here is to not have the terminating condition depend on the wait count variable itself. Instead, you introduce a second counter that is only increased by the thread calling the notify. The other threads then observe that counter for changes to determine whether to exit the wait:
// (assumes an additional member, e.g. unsigned int m_inter_wait_count = 0;)
void wait() {
    std::unique_lock<std::mutex> lk(mtx);
    unsigned int const current_wait_cycle = m_inter_wait_count;
    ++wait_count;
    if(wait_count != target_wait_count) {
        // wait condition must not depend on wait_count
        cond_var.wait(lk,
            [this, current_wait_cycle]() {
                return m_inter_wait_count != current_wait_cycle;
            });
    } else {
        // increasing the second counter allows waiting threads to exit
        ++m_inter_wait_count;
        cond_var.notify_all();
    }
}
This solution is correct under the (very reasonable) assumption that all threads leave the wait before m_inter_wait_count overflows.
With atomic variables, using three of them for a barrier is simply overkill that only serves to complicate the issue. You know the number of threads, so you can simply atomically increment a single counter every time a thread enters the barrier, and then spin until the counter becomes greater than or equal to N. Something like this:
void barrier(int N) {
    static std::atomic<unsigned int> gCounter(0);
    gCounter++;
    // unsigned arithmetic handles wraparound; spin until N arrivals
    while((int)(gCounter - N) < 0) std::this_thread::yield();
}
If you don't have more threads than CPU cores and expect waiting times to be short, you might want to remove the call to std::this_thread::yield(). This call is likely to be really expensive (more than a microsecond, I'd wager, but I haven't measured it). Depending on the size of your tasks, this may be significant.
If you want to do repeated barriers, just increment N as you go:

unsigned int lastBarrier = 0;
while(1) {
    switch(task.id()) {
    case TaskID::Barrier:
        barrier(lastBarrier += processCount);
        break;
    }
}
I would like to point out that in the solution given by @ComicSansMS,
wait_count should be reset to 0 before executing cond_var.notify_all().
This is because when the barrier is called a second time, the if condition will always fail if wait_count is not reset to 0.
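In the self-resetting variant above this is safe, because the wait predicate checks m_inter_wait_count rather than wait_count; its else branch then becomes:

    } else {
        // increasing the second counter allows waiting threads to exit
        ++m_inter_wait_count;
        wait_count = 0; // reset for the next cycle; safe here because
                        // waiters do not depend on wait_count
        cond_var.notify_all();
    }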

Consumer/Producer with order and constraint on consumed items

I have the following scenario
I am writing a server that processes files (jobs)
a file has a "prefix" and a time
the files should be processed according to time (older file first) but also take into account the prefix (files with same prefix can't be processed concurrently)
I have a thread (Task with Timer) that watches over a directory and adds files to a "queue" (producer)
I have several consumers that take the file from "queue" (consumer) - they should conform to the above rules.
the job of each task is kept in some list (this indicates the constraints)
There are several consumers, the number of consumers is determined at startup.
One of the requirement is to be able to gracefully stop the consumers (either immediately or let ongoing processes to finish).
I did something along these lines:
while (processing)
{
    // limits number of concurrent tasks
    _processingSemaphore.Wait(queueCancellationToken);
    // take next job when available, or wait for the cancel signal
    currentWork = workQueue.Take(taskCancellationToken);
    // check that it can actually process this work
    if (CanProcess(currentWork))
    {
        var task = CreateTask(currentWork);
        task.ContinueWith((t) => { /* release processing slot */ });
    }
    else
    {
        // release slot, return job? something else?
    }
}
The cancellation token sources are in the caller code and can be cancelled. There are two in order to be able to stop queuing while not cancelling running tasks.
I tried to implement the "queue" as a BlockingCollection wrapping a "safe" SortedSet. The general idea works (ordering by time) except for the case in which I need to find a new job that matches the constraint. If I return the job to the queue and try to take again, I will get the same one.
It is possible to take jobs from the queue until I find a proper one and then return the "illegal" jobs, but this may cause issues with other consumers processing jobs out of order.
Another alternative is to pass a simple collection along with a way to lock it, and just lock it and do a simple search according to the current constraints. Again, this means writing code that will possibly not be thread-safe.
Any other suggestion / pointers / data structures that can help?
I think Hans is right: if you already have a thread-safe SortedSet (that implements IProducerConsumerCollection, so it can be used in BlockingCollection), then all you need is to put only files that can be processed right now into the collection. If you finish a file which makes another file available for processing, add the other file to the collection at this point, not earlier.
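To make that concrete, here is a rough sketch of the gating idea (C++ for brevity, though the question's context is C#; all names are illustrative): track which prefixes are in flight, hold back files whose prefix is busy, and release the next one when a consumer finishes.

#include <map>
#include <mutex>
#include <queue>
#include <set>
#include <string>

// Hypothetical gate in front of the consumers' ready collection.
struct PrefixGate {
    std::mutex mtx;
    std::set<std::string> busy;                              // prefixes in flight
    std::map<std::string, std::queue<std::string>> pending;  // held back, in arrival (time) order

    // Returns true if the file may go into the ready collection now.
    bool try_admit(const std::string& prefix, const std::string& file) {
        std::lock_guard<std::mutex> lk(mtx);
        if (busy.insert(prefix).second) return true; // prefix was free
        pending[prefix].push(file);                  // hold it back
        return false;
    }

    // Called when a consumer finishes a file; outputs the next file of
    // that prefix to release, if any. Returns false if the prefix is idle.
    bool on_done(const std::string& prefix, std::string& next) {
        std::lock_guard<std::mutex> lk(mtx);
        auto it = pending.find(prefix);
        if (it == pending.end() || it->second.empty()) {
            busy.erase(prefix); // prefix is now free
            return false;
        }
        next = it->second.front();
        it->second.pop();
        return true; // prefix stays busy; caller admits `next`
    }
};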
I would have implemented your requirement(s) with TPL Dataflow. Look at the way you could implement the Producer-Consumer pattern with it. I believe this will meet all the requirements you have (including cancellation on the consumers).
EDIT (for those that do not like to read documentation, but who does...)
Here is an example of how you could implement the requirements with TPL Dataflow. The beauty of this implementation is that consumers are not bound to a single thread and only use a pool thread when they need to process data.
static void Main(string[] args)
{
    BufferBlock<string> source = new BufferBlock<string>();
    var cancellation = new CancellationTokenSource();
    LinkConsumer(source, "A", cancellation.Token);
    LinkConsumer(source, "B", cancellation.Token);
    LinkConsumer(source, "C", cancellation.Token);
    // Link an action that will process source values that are not processed by others
    source.LinkTo(new ActionBlock<string>((s) => Console.WriteLine("Default action")));
    while (cancellation.IsCancellationRequested == false)
    {
        ConsoleKey key = Console.ReadKey(true).Key;
        switch (key)
        {
            case ConsoleKey.Escape:
                cancellation.Cancel();
                break;
            default:
                Console.WriteLine("Posted value {0} on thread {1}.", key, Thread.CurrentThread.ManagedThreadId);
                source.Post(key.ToString());
                break;
        }
    }
    source.Complete();
    Console.WriteLine("Done.");
    Console.ReadLine();
}
private static void LinkConsumer(ISourceBlock<string> source, string prefix, CancellationToken token)
{
    // Link a consumer that will buffer and process all input of the specified prefix
    var consumer = new ActionBlock<string>(
        new Action<string>(Process),
        new ExecutionDataflowBlockOptions()
        {
            MaxDegreeOfParallelism = 1,
            SingleProducerConstrained = true,
            CancellationToken = token,
            TaskScheduler = TaskScheduler.Default
        });
    var linkDisposable = source.LinkTo(consumer, (p) => p == prefix);
    // Dispose the link (remove the link) when cancellation is requested.
    token.Register(linkDisposable.Dispose);
}
private static void Process(string arg)
{
    Console.WriteLine("Processed value {0} in thread {1}", arg, Thread.CurrentThread.ManagedThreadId);
    // Simulate work
    Thread.Sleep(500);
}

thread synchronization: making sure function gets called in order

I'm writing a program in which I need to make sure a particular function is not being executed in more than one thread at a time.
Here I've written some simplified pseudocode that does exactly what is done in my real program.
mutex _enqueue_mutex;
mutex _action_mutex;
queue _queue;
bool _executing_queue;

// called in multiple threads, possibly simultaneously
do_action() {
    _enqueue_mutex.lock();
    object o;
    _queue.enqueue(o);
    _enqueue_mutex.unlock();
    execute_queue();
}
execute_queue() {
    if (!_executing_queue) {
        _executing_queue = true;
        _enqueue_mutex.lock();
        bool is_empty = _queue.isEmpty();
        _enqueue_mutex.unlock();
        while (!is_empty) {
            _action_mutex.lock();
            _enqueue_mutex.lock();
            object o = _queue.dequeue();
            is_empty = _queue.isEmpty();
            _enqueue_mutex.unlock();
            // callback is called when "o" is done being used by
            // do_stuff_to_object_with_callback; also, this function doesn't
            // block, it is executed on its own thread (hence the need for
            // the callback to know when it's done)
            do_stuff_to_object_with_callback(o, &some_callback);
        }
        _executing_queue = false;
    }
}

some_callback() {
    _action_mutex.unlock();
}
Essentially, the idea is that _action_mutex is locked in the while loop (I should say that lock() is assumed to block until the mutex can be acquired), and expected to be unlocked when the completion callback is called (some_callback in the above code).
This does not seem to be working, though. What happens is that if do_action is called more than once at the same time, the program locks up. I think it might be related to the while loop executing more than once simultaneously, but I just can't see how that could be the case. Is there something wrong with my approach? Is there a better approach?
Thanks
A queue that is not specifically designed to be multithreaded (multi-producer multi-consumer) will need to serialize both enqueue and dequeue operations using the same mutex.
(If your queue implementation has a different assumption, please state it in your question.)
The check for _queue.isEmpty() will also need to be protected, since the check-then-dequeue sequence is prone to the time-of-check-to-time-of-use (TOCTOU) problem.
That is, the line
object o = _queue.dequeue();
needs to be surrounded by _enqueue_mutex.lock(); and _enqueue_mutex.unlock(); as well.
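In other words, the empty check and the dequeue should happen under the same lock, something like:

    // check and dequeue atomically, under the same mutex the producers use
    _enqueue_mutex.lock();
    bool is_empty = _queue.isEmpty();
    object o;
    if (!is_empty)
        o = _queue.dequeue();
    _enqueue_mutex.unlock();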
You probably only need a single mutex for the queue. Also, once you've dequeued the object, you can probably process it outside of the lock. This will prevent calls to do_action() from hanging too long.
mutex moo;
queue qoo;
bool keepRunning = true;

do_action():
{
    moo.lock();
    qoo.enqueue(something);
    moo.unlock(); // really need try-finally to make sure,
                  // but don't know which language we are using
}

process_queue():
{
    while(keepRunning)
    {
        object o = null;
        moo.lock();
        if(!qoo.isEmpty)
            o = qoo.dequeue();
        moo.unlock(); // again, try-finally needed
        if(o != null)
            haveFunWith(o);
        sleep(50);
    }
}
Then call process_queue() on its own thread.

How can I accomplish ThreadPool.Join?

I am writing a Windows service that uses ThreadPool.QueueUserWorkItem(). Each thread is a short-lived task.
When the service is stopped, I need to make sure that all the threads that are currently executing complete. Is there some way of waiting until the queue clears itself?
You could create an event (e.g. ManualResetEvent) in each thread, and keep it in a synchronised list (using the lock construct). Set the event or remove it from the list when the task is finished.
When you want to join, you can use WaitHandle.WaitAll (MSDN documentation) to wait for all the events to be signalled.
It's a hack, but I can't see how to reduce it to anything simpler!
Edit: additionally, you could ensure that no new work items get posted, then wait a couple of seconds. If they are indeed short-lived, you'll have no problem. Even simpler, but more hacky.
Finally, the service won't exit until all threads have died (unless they are background threads); so if it's just a short amount of time, the service control manager won't mind a second or so, and you can simply leave them to expire, in my experience.
The standard pattern for doing this is to use a counter which holds the number of pending work items and one ManualResetEvent that is signalled when the counter reaches zero. This is generally better than using a WaitHandle for each work item, as that does not scale very well when there are a lot of simultaneous work items. Plus, the static WaitHandle methods only accept a maximum of 64 handles anyway.
// Initialize to 1 because we are going to treat the current thread as
// a work item as well. This is to avoid a race that could occur when
// one work item gets queued and completed before the next work item
// is queued.
int count = 1;
var finished = new ManualResetEvent(false);
try
{
    while (...)
    {
        Interlocked.Increment(ref count);
        ThreadPool.QueueUserWorkItem(
            delegate(object state)
            {
                try
                {
                    // Your task goes here.
                }
                finally
                {
                    // Decrement the counter to indicate the work item is done.
                    if (Interlocked.Decrement(ref count) == 0)
                    {
                        finished.Set();
                    }
                }
            });
    }
}
finally
{
    // Decrement the counter to indicate the queueing thread is done.
    if (Interlocked.Decrement(ref count) == 0)
    {
        finished.Set();
    }
}
finished.WaitOne();

Question about thread synchronisation

I have a question about a threading situation.
Suppose I have 3 threads: producer, helper, and consumer.
The producer thread is in the running state (the other two are in the waiting state), and when it is done it calls invoke. The problem is that it has to invoke only the helper thread, not the consumer. How can it make sure that, after it releases the resources, they are fetched by the helper thread first and only then by the consumer thread?
Thanks in advance
Have you considered that sometimes having separate threads is more of a problem than a solution?
If you really want the operations in one thread to be strictly serialized with the operations in another thread, perhaps the simpler solution is to discard the second thread and structure the code so the first thread does the operations in the order desired.
This may not always be possible, but it's something to bear in mind.
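For illustration, a sketch of doing the three stages sequentially on one thread (produce(), help(), and consume() are hypothetical names, not from the question):

// One thread, strict ordering for free: produce, help, consume, repeat.
while (running) {
    Item item = produce();    // what the producer thread did
    Item helped = help(item); // what the helper thread did
    consume(helped);          // what the consumer thread did
}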
You could have, for instance, two mutexes (or whatever you are using): one for producer and helper, and another for producer and consumer.
Producer:
// lock helper
while true
{
    // lock consumer
    // do stuff
    // release and invoke helper
    // wait for helper to release
    // lock helper again
    // unlock consumer
    // wait consumer
}
The others just lock and unlock normally.
Another possible approach (maybe better) is using one mutex for producer/helper and another for helper/consumer; or maybe distribute the helper thread's tasks between the other two threads. Could you give more details?
The helper thread is really just a consumer/producer thread itself. Write some code for the helper like you would for any other consumer, to take the result of the producer. Once that's complete, write some code for the helper like you would for any other producer, and hook it up to your consumer thread.
You might be able to use queues, with locks around them, to help you with this.
Producer works on something, produces it, and puts it on the helper queue.
Helper takes it, does something with it, and then puts it on the consumer queue.
Consumer takes it, consumes it, and goes on.
Something like this:
Queue<MyDataType> helperQ = new Queue<MyDataType>();
Queue<MyDataType> consumerQ = new Queue<MyDataType>();
object hqLock = new object();
object cqLock = new object();

// producer thread
private void ProducerThreadFunc()
{
    while (true)
    {
        MyDataType data = ProduceNewData();
        lock (hqLock)
        {
            helperQ.Enqueue(data);
        }
    }
}

// helper thread
private void HelperThreadFunc()
{
    while (true)
    {
        MyDataType data;
        lock (hqLock)
        {
            data = helperQ.Dequeue();
        }
        data = HelpData(data);
        lock (cqLock)
        {
            consumerQ.Enqueue(data);
        }
    }
}

// consumer thread
private void ConsumerThreadFunc()
{
    while (true)
    {
        MyDataType data;
        lock (cqLock)
        {
            data = consumerQ.Dequeue();
        }
        Consume(data);
    }
}
NOTE: You will need to add more logic to this example to make it usable; don't expect it to work as-is. Mainly, use signals for one thread to let the other know that data is available in its queue (or, as a worst case, poll the queue size and sleep when it is 0; but signals are cleaner and more efficient).
This approach would let you process data at different rates (which can lead to memory issues).
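For the signalling the note mentions, a condition variable is the usual tool; here is a minimal sketch of one queue hop (shown in C++; the C# analogue would use Monitor.Wait/Monitor.Pulse):

#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class SignalledQueue {
    std::mutex mtx;
    std::condition_variable cv;
    std::queue<T> q;
public:
    void enqueue(T item) {
        {
            std::lock_guard<std::mutex> lk(mtx);
            q.push(std::move(item));
        }
        cv.notify_one(); // tell the downstream thread data is available
    }
    T dequeue() {
        std::unique_lock<std::mutex> lk(mtx);
        cv.wait(lk, [this] { return !q.empty(); }); // sleep, don't poll
        T item = std::move(q.front());
        q.pop();
        return item;
    }
};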
