In my project, I'm trying to synchronize multiple threads using condition variables. Each thread needs to sleep for a given amount of time after each movement; for example, after Thread A changes the data, it must sleep for 15 seconds. I implemented this timed sleep with pthread_cond_timedwait(), since the behavior of sleep() was unpredictable and unreliable for me.
However, in some cases I have to wake up all threads before the time expires. Suppose Thread A is currently sleeping for 15 seconds and only 7 seconds have elapsed. When I use pthread_cond_broadcast() or pthread_cond_signal() to wake up Thread A before the 15 seconds expire, the thread does not wake up. Is there an explanation for this? Can I signal a timed condition variable before its time expires?
The pseudocode looks like the following:
vector<pthread_cond_t> cond_list;

void signal_control_function() {
    // this function keeps track of time and sends wake-up signals in a loop
    if (time == SIGNAL_TIME) {
        for (int i = 0; i < cond_list.size(); i++) {
            pthread_cond_broadcast(&cond_list[i]);
        }
    }
}

void* thread_function(void* arg) {
    pthread_cond_t cv;
    cond_list.push_back(cv);
    for (int i = 0; i < 100; i++) {
        // make change
        // set up time variable
        pthread_cond_timedwait(&cv, &mutex, &time);
    }
}
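For comparison, here is a minimal, self-contained sketch (separate from my project; the names are made up for illustration) in which the waiting thread and the signalling thread share the same pthread_cond_t object. In this form, a broadcast does wake the timed wait well before the 15 seconds expire:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static int wake_up = 0; /* guards against spurious wakeups */

static void *waiter(void *arg)
{
    (void)arg;
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 15; /* absolute deadline, 15 seconds from now */

    pthread_mutex_lock(&mtx);
    int rc = 0;
    while (!wake_up && rc != ETIMEDOUT)
        rc = pthread_cond_timedwait(&cv, &mtx, &deadline);
    pthread_mutex_unlock(&mtx);

    printf("waiter woke up: %s\n", rc == ETIMEDOUT ? "timeout" : "signalled early");
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);

    sleep(2); /* crude way to let the waiter block first; fine for a sketch */

    pthread_mutex_lock(&mtx);
    wake_up = 1;
    pthread_cond_broadcast(&cv); /* wakes the timed wait well before 15 s */
    pthread_mutex_unlock(&mtx);

    pthread_join(t, NULL);
    return 0;
}

So in my real code I suspect I need to double-check that the condition variable I broadcast on is the very same object the sleeping thread passed to pthread_cond_timedwait().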
Thanks.
I am attempting to design a system in which I'll have multiple threads. All these threads share a common (atomic) variable: when each thread starts it increments that variable, and before the thread ends it decrements it. I would like a thread to block if the current value of that variable is greater than a specific limit value. Simple pseudocode for this is:
std::atomic<int> var;
int limit = 3; // limit value

void doWork() // this method is run by multiple threads simultaneously
{
    if (var.load() < limit)
    {
        // block until this condition is met
    }
    // increment
    var++;
    // do some work
    // ...
    // decrement before leaving
    var--;
}
Now two approaches come to mind. The first is to sleep and poll until var.load() is below the limit, but this looks sub-optimal. The second is to use a std::condition_variable. I am using that approach below, but I believe I have two issues here:
1. The first thread will block on cv.wait(lk, []{ return var.load() < limit; });
2. Multiple threads won't be doing work; only one will do work while the others are blocked.
I would appreciate it if someone could suggest how to remove these defects from the code below, or whether there is a better approach.
std::condition_variable cv;
std::mutex cv_m;
std::atomic<int> var;
int limit = 3; // limit value

void doWork()
{
    std::unique_lock<std::mutex> lk(cv_m);
    std::cerr << "Waiting... \n";
    cv.wait(lk, []{
        return var.load() < limit;
    });
    // Increment var
    var++;
    // Do some work
    // ...
    // Decrement var before leaving
    var--;
    cv.notify_all();
}
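For reference, here is a rough sketch of the direction I am considering (untested, and the exact structure is my guess): the mutex guards only the counter, and the work itself runs outside the lock, so up to limit threads should be able to work at the same time. Please correct me if this is also wrong.

#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

std::condition_variable cv;
std::mutex cv_m;
int var = 0;           // guarded by cv_m, so it no longer needs to be atomic
const int limit = 3;   // limit value

void doWork()
{
    {
        std::unique_lock<std::mutex> lk(cv_m);
        cv.wait(lk, [] { return var < limit; }); // block while the limit is reached
        ++var;
    }                   // lock released here, so other threads can enter too

    // Do some work, outside the lock, so several threads can work concurrently
    // ...

    {
        std::lock_guard<std::mutex> lk(cv_m);
        --var;
    }
    cv.notify_one();    // a slot is free, wake one waiter
}

int main()
{
    std::vector<std::thread> pool;
    for (int i = 0; i < 10; i++)
        pool.emplace_back(doWork);
    for (auto& t : pool)
        t.join();
}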
The documentation on GitHub has a section on multithreaded benchmarks; however, it requires placing the multithreaded code inside the benchmark definition, and the library then invokes that code with multiple threads itself.
I want to benchmark a function that creates threads internally. I am interested in optimizing only the multithreaded part, so I want to benchmark that part alone. Thus I want to pause the timer while the function's sequential code is running or while the internal threads are being created / destroyed and doing setup / teardown.
Use the thread barrier synchronization primitive to wait until all threads have been created, have finished setup, etc. This solution uses boost::barrier, but one could also use std::barrier since C++20, or implement a custom barrier. Be careful if implementing it yourself, as it's easy to get wrong, but this answer seems to have it right; a minimal sketch of such a barrier is also included at the end of this answer.
Pass benchmark::State& state to your function and to your threads so they can pause / unpause the timer when needed.
#include <thread>
#include <vector>
#include <benchmark/benchmark.h>
#include <boost/thread/barrier.hpp>
void work() {
volatile int sum = 0;
for (int i = 0; i < 100'000'000; i++) {
sum += i;
}
}
static void thread_routine(boost::barrier& barrier, benchmark::State& state, int thread_id) {
// do setup here, if needed
barrier.wait(); // wait until each thread is created
if (thread_id == 0) {
state.ResumeTiming();
}
barrier.wait(); // wait until the timer is started before doing the work
// do some work
work();
barrier.wait(); // wait until each thread completes the work
if (thread_id == 0) {
state.PauseTiming();
}
barrier.wait(); // wait until the timer is stopped before destructing the thread
// do teardown here, if needed
}
void f(benchmark::State& state) {
const int num_threads = 1000;
boost::barrier barrier(num_threads);
std::vector<std::thread> threads;
threads.reserve(num_threads);
for (int i = 0; i < num_threads; i++) {
threads.emplace_back(thread_routine, std::ref(barrier), std::ref(state), i);
}
for (std::thread& thread : threads) {
thread.join();
}
}
static void BM_AlreadyMultiThreaded(benchmark::State& state) {
for (auto _ : state) {
state.PauseTiming();
f(state);
state.ResumeTiming();
}
}
BENCHMARK(BM_AlreadyMultiThreaded)->Iterations(10)->Unit(benchmark::kMillisecond)->MeasureProcessCPUTime(); // NOLINT(cert-err58-cpp)
BENCHMARK_MAIN();
On my machine, this code outputs (skipping the header):
---------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations
---------------------------------------------------------------------------------------------
BM_AlreadyMultiThreaded/iterations:10/process_time 1604 ms 200309 ms 10
If I comment out all the state.PauseTiming() / state.ResumeTiming() calls, it outputs:
---------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations
---------------------------------------------------------------------------------------------
BM_AlreadyMultiThreaded/iterations:10/process_time 1680 ms 200102 ms 10
I consider the 80 ms of real time / 200 ms of CPU time difference to be statistically significant rather than noise, which supports the hypothesis that this example works correctly.
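For completeness, here is a minimal sketch of a reusable barrier built only on std::mutex and std::condition_variable, for the case where neither boost::barrier nor C++20 std::barrier is available (this is an illustration, not the code used in the benchmark above). The generation counter is what makes it safe to call wait() repeatedly on the same object:

#include <condition_variable>
#include <cstddef>
#include <mutex>

class Barrier {
public:
    explicit Barrier(std::size_t count) : threshold_(count), count_(count) {}

    void wait() {
        std::unique_lock<std::mutex> lk(m_);
        const std::size_t gen = generation_;
        if (--count_ == 0) {
            // Last thread to arrive: start a new generation and release everyone.
            ++generation_;
            count_ = threshold_;
            cv_.notify_all();
        } else {
            // Wait until the generation changes; this also guards against spurious wakeups.
            cv_.wait(lk, [&] { return gen != generation_; });
        }
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    const std::size_t threshold_;
    std::size_t count_;
    std::size_t generation_ = 0;
};

It can be dropped in where boost::barrier is used above: construct it with the number of participating threads and have each thread call wait() at every rendezvous point.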
I have a thread class with two functions:
using namespace System::Threading;
public ref class SecThr
{
public:
int j;
void wralot()
{
for (int i=0; i<=400000;i++)
{
j=i;
if ((i%10000)==0)
Console::WriteLine(j);
}
}
void setalot()
{
Monitor::Enter(this);
for (int i=0; i<=400000;i++)
{
j=7;
}
Monitor::Exit(this);
}
};
int main(array<System::String ^> ^args)
{
    Console::WriteLine("Hello!");
    SecThr^ thrcl = gcnew SecThr;
    Thread^ o1 = gcnew Thread(gcnew ThreadStart(thrcl, &SecThr::wralot));
    Thread^ o2 = gcnew Thread(gcnew ThreadStart(thrcl, &SecThr::setalot));
    o2->Start();
    o1->Start();
    o1->Join();
    o2->Join();
    return 0;
}
So, to lock the setalot function I've used a Monitor::Enter / Monitor::Exit block. But the output looks as if I'm running only one thread with the wralot function:
0
10000
20000
30000
40000
7 //here's "setalot" func
60000
etc.
Why isn't all the output data changed to the constant (7) by the setalot function?
I think you've misunderstood what Monitor::Enter does. It's only a cooperative lock - and as wralot doesn't try to obtain the lock at all, the actions of setalot don't affect it. It's not really clear why you expected to get a constant output of 7 due to setalot - if wralot did try to obtain the lock, it would just mean that one of them would "win" and the other would have to wait. If wralot had to wait, there'd be no output while setalot was running, and then wralot would go do its thing - including setting j to i on every iteration.
So basically, both threads are starting, and setalot very quickly sets j to 7 a lot of times... that's likely to complete in the twinkling of an eye in computer terms, compared with a Console.WriteLine call. I suggest you add a call to Console::WriteLine at the end of setalot so you can see when it's finished. Obviously after that, it's completely irrelevant - which is why you're seeing 60000, 70000 etc.
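To make the "cooperative" point concrete, here is the same idea sketched in standard C++ rather than C++/CLI (hypothetical names, not your original code): the lock taken in setalot only excludes other code that also takes the same lock, so wralot would have to acquire it too for the two loops to exclude each other. Even then, one of them would simply win the lock and run to completion while the other waits; it still wouldn't turn the whole output into 7.

#include <mutex>
#include <thread>

std::mutex m;  // plays the role of the object passed to Monitor::Enter(this)
int j = 0;

void setalot()
{
    std::lock_guard<std::mutex> lock(m); // holds the lock for the whole loop
    for (int i = 0; i <= 400000; i++)
        j = 7;
}

void wralot()
{
    for (int i = 0; i <= 400000; i++)
    {
        // Without taking the same lock here, setalot's lock would have no
        // effect on this code: the lock is cooperative, not mandatory.
        std::lock_guard<std::mutex> lock(m);
        j = i;
    }
}

int main()
{
    std::thread a(wralot), b(setalot);
    a.join();
    b.join();
}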
Using pthreads on Linux 2.6.30, I am trying to send a single signal which will cause multiple threads to begin execution. The broadcast seems to be received by only one thread. I have tried both pthread_cond_signal and pthread_cond_broadcast, and both seem to have the same behavior. For the mutex in pthread_cond_wait, I have tried both a common mutex and separate (local) mutexes with no apparent difference.
worker_thread(void *p)
{
// setup stuff here
printf("Thread %d ready for action \n", p->thread_no);
pthread_cond_wait(p->cond_var, p->mutex);
printf("Thread %d off to work \n", p->thread_no);
// work stuff
}
dispatch_thread(void *p)
{
// setup stuff
printf("Wakeup, everyone ");
pthread_cond_broadcast(p->cond_var);
printf("everyone should be working \n");
// more stuff
}
main()
{
pthread_cond_init(cond_var);
for (i=0; i!=num_cores; i++) {
pthread_create(worker_thread...);
}
pthread_create(dispatch_thread...);
}
Output:
Thread 0 ready for action
Thread 1 ready for action
Thread 2 ready for action
Thread 3 ready for action
Wakeup, everyone
everyone should be working
Thread 0 off to work
What's a good way to send signals to all the threads?
First off, you should have the mutex locked at the point where you call pthread_cond_wait(). It's generally a good idea to hold the mutex when you call pthread_cond_broadcast(), as well.
Second off, you should loop calling pthread_cond_wait() while the wait condition is true. Spurious wakeups can happen, and you must be able to handle them.
Finally, your actual problem: you are signaling all threads, but some of them aren't waiting yet when the signal is sent. Your main thread and dispatch thread are racing your worker threads: if the main thread can launch the dispatch thread, and the dispatch thread can grab the mutex and broadcast on it before the worker threads can, then those worker threads will never wake up.
You need a synchronization point prior to signaling where you wait to signal till all threads are known to be waiting for the signal. That, or you can keep signaling till you know all threads have been woken up.
In this case, you could use the mutex to protect a count of sleeping threads. Each thread grabs the mutex and increments the count. If the count matches the number of worker threads, then it's the last thread to increment the count, so it signals another condition variable (sharing the same mutex) to tell the sleeping dispatch thread that all threads are ready. The thread then waits on the original condition, which causes it to release the mutex.
If the dispatch thread isn't sleeping yet when the last worker thread signals on that condition, it will find that the count already matches the desired count and won't bother waiting; it will immediately broadcast on the shared condition to wake the workers, who are now guaranteed to all be sleeping.
Anyway, here's some working source code that fleshes out your sample code and includes my solution:
#include <stdio.h>
#include <pthread.h>
#include <err.h>
static const int num_cores = 8;
struct sync {
pthread_mutex_t *mutex;
pthread_cond_t *cond_var;
int thread_no;
};
static int sleeping_count = 0;
static pthread_cond_t all_sleeping_cond = PTHREAD_COND_INITIALIZER;
void *
worker_thread(void *p_)
{
struct sync *p = p_;
// setup stuff here
pthread_mutex_lock(p->mutex);
printf("Thread %d ready for action \n", p->thread_no);
sleeping_count += 1;
if (sleeping_count >= num_cores) {
/* Last worker to go to sleep. */
pthread_cond_signal(&all_sleeping_cond);
}
int err = pthread_cond_wait(p->cond_var, p->mutex);
if (err) warnc(err, "pthread_cond_wait");
printf("Thread %d off to work \n", p->thread_no);
pthread_mutex_unlock(p->mutex);
// work stuff
return NULL;
}
void *
dispatch_thread(void *p_)
{
struct sync *p = p_;
// setup stuff
pthread_mutex_lock(p->mutex);
while (sleeping_count < num_cores) {
pthread_cond_wait(&all_sleeping_cond, p->mutex);
}
printf("Wakeup, everyone ");
int err = pthread_cond_broadcast(p->cond_var);
if (err) warnc(err, "pthread_cond_broadcast");
printf("everyone should be working \n");
pthread_mutex_unlock(p->mutex);
// more stuff
return NULL;
}
int
main(void)
{
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond_var = PTHREAD_COND_INITIALIZER;
pthread_t worker[num_cores];
struct sync info[num_cores];
for (int i = 0; i < num_cores; i++) {
struct sync *p = &info[i];
p->mutex = &mutex;
p->cond_var = &cond_var;
p->thread_no = i;
pthread_create(&worker[i], NULL, worker_thread, p);
}
pthread_t dispatcher;
struct sync p = {&mutex, &cond_var, num_cores};
pthread_create(&dispatcher, NULL, dispatch_thread, &p);
pthread_exit(NULL);
/* not reached */
return 0;
}
I'm trying to write a simple thread pool program in pthread. However, it seems that pthread_cond_signal doesn't block, which creates a problem. For example, let's say I have a "producer-consumer" program:
#include <pthread.h>
#include <unistd.h>

pthread_cond_t my_cond = PTHREAD_COND_INITIALIZER;
pthread_mutex_t my_cond_m = PTHREAD_MUTEX_INITIALIZER;
void * liberator(void * arg)
{
// XXX make sure he is ready to be freed
sleep(1);
pthread_mutex_lock(&my_cond_m);
pthread_cond_signal(&my_cond);
pthread_mutex_unlock(&my_cond_m);
return NULL;
}
int main()
{
pthread_t t1;
pthread_create(&t1, NULL, liberator, NULL);
// XXX Don't take too long to get ready. Otherwise I'll miss
// the wake up call forever
//sleep(3);
pthread_mutex_lock(&my_cond_m);
pthread_cond_wait(&my_cond, &my_cond_m);
pthread_mutex_unlock(&my_cond_m);
pthread_join(t1, NULL);
return 0;
}
As described in the two XXX marks, if I take away the sleep calls, then main() may stall because it has missed the wake up call from liberator(). Of course, sleep isn't a very robust way to ensure that either.
In real life situation, this would be a worker thread telling the manager thread that it is ready for work, or the manager thread announcing that new work is available.
How would you do this reliably in pthread?
Elaboration
@Borealid's answer kind of works, but his explanation of the problem could be better. I suggest anyone looking at this question read the discussion in the comments to understand what's going on.
In particular, I would amend his answer and code example as follows to make this clearer (since Borealid's original answer, while it compiled and worked, confused me a lot).
// In main
pthread_mutex_lock(&my_cond_m);
// If the flag is not set, it means liberator has not
// been run yet. I'll wait for him through pthread's signaling
// mechanism
// If it _is_ set, it means liberator has been run. I'll simply
// skip waiting since I've already synchronized. I don't need to
// use pthread's signaling mechanism
if(!flag) pthread_cond_wait(&my_cond, &my_cond_m);
pthread_mutex_unlock(&my_cond_m);
// In liberator thread
pthread_mutex_lock(&my_cond_m);
// Signal anyone who's sleeping. If no one is sleeping yet,
// they should check this flag which indicates I have already
// sent the signal. This is needed because pthread's signals
// is not like a message queue -- a sent signal is lost if
// nobody's waiting for a condition when it's sent.
// You can think of this flag as a "persistent" signal
flag = 1;
pthread_cond_signal(&my_cond);
pthread_mutex_unlock(&my_cond_m);
Use a synchronization variable.
In main:
pthread_mutex_lock(&my_cond_m);
while (!flag) {
pthread_cond_wait(&my_cond, &my_cond_m);
}
pthread_mutex_unlock(&my_cond_m);
In the thread:
pthread_mutex_lock(&my_cond_m);
flag = 1;
pthread_cond_broadcast(&my_cond);
pthread_mutex_unlock(&my_cond_m);
For a producer-consumer problem, this would be the consumer sleeping when the buffer is empty, and the producer sleeping when it is full. Remember to acquire the lock before accessing the global variable.
I found the solution here. For me, the tricky bits in understanding the problem were that:
Producers and consumers must be able to communicate both ways. Either way is not enough.
This two-way communication can be packed into one pthread condition.
To illustrate, the blog post mentioned above demonstrated that this is actually meaningful and desirable behavior:
pthread_mutex_lock(&cond_mutex);
pthread_cond_broadcast(&cond);
pthread_cond_wait(&cond, &cond_mutex);
pthread_mutex_unlock(&cond_mutex);
The idea is that if both the producers and consumers employ this logic, it will be safe for either of them to be sleeping first, since each will be able to wake the other up. Put another way, in a typical producer-consumer scenario, if a consumer needs to sleep, it's because a producer needs to wake up, and vice versa. Packing this logic into a single pthread condition makes sense.
Of course, the above code has the unintended behavior that a worker thread will also wake up other sleeping worker threads when it actually just wants to wake the producer. This can be solved by a simple variable check, as @Borealid suggested:
while(!work_available) pthread_cond_wait(&cond, &cond_mutex);
Upon a worker broadcast, all worker threads will be awakened, but one by one (because each must re-acquire the mutex inside pthread_cond_wait before returning). Since one of the worker threads will consume the work (setting work_available back to false), by the time the other worker threads wake up and actually get to work, the work will be unavailable, so they will sleep again.
Here's some commented code I tested, for anyone interested:
// gcc -Wall -pthread threads.c -lpthread
#include <stdio.h>
#include <pthread.h>
#include <stdlib.h>
#include <assert.h>
pthread_cond_t my_cond = PTHREAD_COND_INITIALIZER;
pthread_mutex_t my_cond_m = PTHREAD_MUTEX_INITIALIZER;
int * next_work = NULL;
int all_work_done = 0;
void * worker(void * arg)
{
int * my_work = NULL;
while(!all_work_done)
{
pthread_mutex_lock(&my_cond_m);
if(next_work == NULL)
{
// Signal producer to give work
pthread_cond_broadcast(&my_cond);
// Wait for work to arrive
// It is wrapped in a while loop because the condition
// might be triggered by another worker thread intended
// to wake up the producer
while(!next_work && !all_work_done)
pthread_cond_wait(&my_cond, &my_cond_m);
}
// Work has arrived, cache it locally so producer can
// put in next work ASAP
my_work = next_work;
next_work = NULL;
pthread_mutex_unlock(&my_cond_m);
if(my_work)
{
printf("Worker %d consuming work: %d\n", (int)(pthread_self() % 100), *my_work);
free(my_work);
}
}
return NULL;
}
int * create_work()
{
int * ret = (int *)malloc(sizeof(int));
assert(ret);
*ret = rand() % 100;
return ret;
}
void * producer(void * arg)
{
int i;
for(i = 0; i < 10; i++)
{
pthread_mutex_lock(&my_cond_m);
while(next_work != NULL)
{
// There's still work, signal a worker to pick it up
pthread_cond_broadcast(&my_cond);
// Wait for work to be picked up
pthread_cond_wait(&my_cond, &my_cond_m);
}
// No work is available now, let's put work on the queue
next_work = create_work();
printf("Producer: Created work %d\n", *next_work);
pthread_mutex_unlock(&my_cond_m);
}
// Some workers might still be waiting, release them
pthread_cond_broadcast(&my_cond);
all_work_done = 1;
return NULL;
}
int main()
{
pthread_t t1, t2, t3, t4;
pthread_create(&t1, NULL, worker, NULL);
pthread_create(&t2, NULL, worker, NULL);
pthread_create(&t3, NULL, worker, NULL);
pthread_create(&t4, NULL, worker, NULL);
producer(NULL);
pthread_join(t1, NULL);
pthread_join(t2, NULL);
pthread_join(t3, NULL);
pthread_join(t4, NULL);
return 0;
}