thread::join() blocks when it shouldn't

To understand how to use atomics in C++11, I tried the following code snippet:
#include <iostream>
#include <thread>
#include <atomic>

using namespace std;

struct solution {
    atomic<bool> alive_;
    thread thread_;

    solution() : thread_([this] {
        alive_ = true;
        while (alive_);
    }) { }

    ~solution() {
        alive_ = false;
        thread_.join();
    }
};

int main() {
    constexpr int N = 1; // or 2
    for (int i = 0; i < N; ++i) {
        solution s;
    }
    cout << "done" << endl;
}
If N equals 1, the output is done. However, if I set it to 2, the main thread blocks at thread::join(). Why do you think we don't see done when N > 1?
Note: If I use the following constructor:
solution() : alive_(true), thread_([this] {
    while (alive_);
}) { }
it prints done for any value of N.

If you don't initialise alive_ and only set it once the thread starts, the following interleaving of execution is possible:
MAIN: s::solution()
MAIN: s.thread_(/*your args*/)
MAIN: schedule(s.thread_) to run
thread: waiting to start
MAIN: s::~solution()
MAIN: s.alive_ = false
thread: alive_ = true
MAIN: s.thread_.join()
thread: while(alive_) {}

atomic<bool> happens to be initialized to false on Visual Studio, but its initial value is left undefined by the standard.
So, the following sequence of events might happen:
A solution object is created, alive_ is initialized to false and thread_ is created (but not run).
The solution object is destroyed: the destructor runs, sets alive_ to false, then waits for thread_ to end (the thread has not done anything yet).
thread_ runs, sets alive_ to true, and then loops forever (because the main thread is waiting for it to terminate).
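This is also why the constructor in the question's note works: data members are initialized in declaration order, so alive_ is already true before thread_ (and therefore the new thread) is constructed. An equivalent sketch using a default member initializer:
struct solution {
    atomic<bool> alive_{true}; // initialized before thread_, because members are
                               // constructed in declaration order
    thread thread_;

    solution() : thread_([this] { while (alive_); }) { }

    ~solution() {
        alive_ = false;
        thread_.join();
    }
};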

Threads executing an operation a second

I'm trying to create concurrent code where I execute a function once per second; this function prints a character and then waits a second on that thread. The behaviour I expect is for each character to be printed one after another, but this doesn't happen; instead, all of the characters of the inner loop execution are printed at once. I'm not sure if this is somehow related to an I/O operation or whatnot.
I've also tried creating an array of threads where each thread is created during the execution of the inner loop, but the behaviour repeats, even without calling join(). What might be wrong with the code?
The following code is what I've tried; I used a clock to check whether it was waiting the correct amount of time:
#include <iostream>
#include <thread>
#include <chrono>
#include <string>

void print_char();

int main() {
    using Timer = std::chrono::high_resolution_clock;
    using te = std::chrono::duration<double>;
    using s = std::chrono::seconds;
    te interval;
    for (int i = 0; i < 100; i++) {
        auto a = Timer::now();
        for (int j = 0; j < i; j++) {
            std::thread t(print_char);
            t.join();
        }
        auto b = Timer::now();
        interval = b - a;
        std::cout << std::chrono::duration_cast<s>(interval).count();
        std::cout << std::endl;
    }
    return 0;
}

void print_char() {
    std::cout << "*";
    std::this_thread::sleep_for(std::chrono::seconds(1));
}
The behaviour I expect is for each character to be printed one after another, but this doesn't happen; instead, all of the characters of the inner loop execution are printed at once.
You need to flush the output stream in order to see the text:
std::cout << "*" << std::flush;
std::endl includes a call to std::flush, which is why you see whole lines displayed only once the inner loop is complete. Your threads do add the '*' characters once per second; you just don't see them being added until the stream is flushed.
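For example, a minimal tweak of the question's print_char is enough for the output to appear as it is produced (the serial-execution issue is a separate problem, discussed below):
void print_char() {
    std::cout << "*" << std::flush; // flush so the '*' is visible immediately
    std::this_thread::sleep_for(std::chrono::seconds(1));
}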
Consider the code
std::thread t(print_char);
t.join();
The first line creates and starts a thread. The second line immediately waits for that thread to end. That makes your program serial, not parallel. In fact, it's no different from calling the function directly instead of creating a thread.
If you want to have the thread operate in parallel and independently from your main thread, you should have the loop in the thread function itself instead. Perhaps something like
std::atomic<bool> keep_running{true}; // brace-init: copy-initializing an atomic doesn't compile before C++17

void print_char() {
    while (keep_running) {
        std::cout << "*";
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}
Then in the main function you just create the thread, and do something else until you want the thread to end.
std::thread t(print_char);
// Do something else...
keep_running = false;
t.join();
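Putting those two fragments together, a complete minimal program might look like the sketch below; the five-second run time is just an arbitrary stand-in for whatever else the main thread does, and the flush follows the earlier answer's advice.
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> keep_running{true};

void print_char() {
    while (keep_running) {
        std::cout << "*" << std::flush; // flush so each '*' appears as it is printed
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

int main() {
    std::thread t(print_char);
    // Do something else; here we just let the worker run for a while.
    std::this_thread::sleep_for(std::chrono::seconds(5));
    keep_running = false; // ask the worker to stop
    t.join();             // wait for it to exit its loop
    std::cout << "\ndone\n";
}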
In regard to your current code, it's really no different than
for (int i = 0; i < 100; i++) {
    auto a = Timer::now();
    for (int j = 0; j < i; j++) {
        print_char();
    }
    auto b = Timer::now();
    interval = b - a;
    std::cout << std::chrono::duration_cast<s>(interval).count();
    std::cout << std::endl;
}

thread sync using mutex and condition variable

I'm trying to implement a multi-threaded job with a producer and a consumer. Basically, what I want is this: when the consumer finishes its data, it notifies the producer so that the producer provides new data.
The tricky part is that in my current implementation the producer and the consumer both notify and wait for each other, and I don't know how to implement this part correctly.
For example, see the code below:
mutex m;
condition_variable cv;
vector<int> Q;     // this is the queue the consumer will consume
vector<int> Q_buf; // this is a buffer Q into which producer will fill new data directly

// consumer
void consume() {
    while (1) {
        if (Q.size() == 0) { // when consumer finishes data
            unique_lock<mutex> lk(m);
            // how to notify producer to fill up the Q?
            ...
            cv.wait(lk);
        }
        // for-loop to process the elems in Q
        ...
    }
}

// producer
void produce() {
    while (1) {
        // for-loop to fill up Q_buf
        ...
        // once Q_buf is fully filled, wait until consumer asks to give it a full Q
        unique_lock<mutex> lk(m);
        cv.wait(lk);
        Q.swap(Q_buf); // replace the empty Q with the full Q_buf
        cv.notify_one();
    }
}
I'm not sure the above code using mutex and condition_variable is the right way to implement my idea, so please give me some advice!
The code incorrectly assumes that vector<int>::size() and vector<int>::swap() are atomic. They are not.
Also, spurious wakeups must be handled by a while loop (or another cv::wait overload).
Fixes:
mutex m;
condition_variable cv;
vector<int> Q;

// consumer
void consume() {
    while (1) {
        // Get the new elements.
        vector<int> new_elements;
        {
            unique_lock<mutex> lk(m);
            while (Q.empty())
                cv.wait(lk);
            new_elements.swap(Q);
        }
        // for-loop to process the elems in new_elements
    }
}

// producer
void produce() {
    while (1) {
        vector<int> new_elements;
        // for-loop to fill up new_elements
        // publish new_elements
        {
            unique_lock<mutex> lk(m);
            Q.insert(Q.end(), new_elements.begin(), new_elements.end());
            cv.notify_one();
        }
    }
}
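As a side note, the while loop around wait in consume() above is exactly what the predicate overload of condition_variable::wait does internally; the same consumer could be sketched as:
// consumer, using the predicate overload of wait
void consume() {
    while (1) {
        vector<int> new_elements;
        {
            unique_lock<mutex> lk(m);
            cv.wait(lk, [] { return !Q.empty(); }); // loops internally until the predicate holds
            new_elements.swap(Q);
        }
        // for-loop to process the elems in new_elements
    }
}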
Maybe this is close to what you want to achieve. I used two condition variables to let producers and consumers notify each other, and introduced a variable denoting whose turn it is:
#include <ctime>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

template<typename T>
class ReaderWriter {
private:
    std::vector<std::thread> readers;
    std::vector<std::thread> writers;
    std::condition_variable readerCv, writerCv;
    std::queue<T> data;
    std::mutex readerMutex, writerMutex;
    size_t noReaders, noWriters;
    enum class Turn { WRITER_TURN, READER_TURN };
    Turn turn;

    void reader() {
        while (1) {
            {
                std::unique_lock<std::mutex> lk(readerMutex);
                while (turn != Turn::READER_TURN) {
                    readerCv.wait(lk);
                }
                std::cout << "Thread : " << std::this_thread::get_id() << " consumed " << data.front() << std::endl;
                data.pop();
                if (data.empty()) {
                    turn = Turn::WRITER_TURN;
                    writerCv.notify_one();
                }
            }
        }
    }

    void writer() {
        while (1) {
            {
                std::unique_lock<std::mutex> lk(writerMutex);
                while (turn != Turn::WRITER_TURN) {
                    writerCv.wait(lk);
                }
                srand(time(NULL));
                int random_number = std::rand();
                data.push(random_number);
                std::cout << "Thread : " << std::this_thread::get_id() << " produced " << random_number << std::endl;
                turn = Turn::READER_TURN;
            }
            readerCv.notify_one();
        }
    }

public:
    ReaderWriter(size_t noReadersArg, size_t noWritersArg) : noReaders(noReadersArg), noWriters(noWritersArg), turn(ReaderWriter::Turn::WRITER_TURN) {
    }

    void run() {
        int noReadersArg = noReaders, noWritersArg = noWriters;
        while (noReadersArg--) {
            readers.emplace_back(&ReaderWriter::reader, this);
        }
        while (noWritersArg--) {
            writers.emplace_back(&ReaderWriter::writer, this);
        }
    }

    ~ReaderWriter() {
        for (auto& r : readers) {
            r.join();
        }
        for (auto& w : writers) {
            w.join();
        }
    }
};

int main() {
    ReaderWriter<int> rw(5, 5);
    rw.run();
}
Here's a code snippet. Since the worker threads are already synchronized, the requirement of two buffers is ruled out, so a simple queue is used to simulate the scenario:
#include "conio.h"
#include <iostream>
#include <thread>
#include <mutex>
#include <queue>
#include <atomic>
#include <condition_variable>
using namespace std;
enum state_t{ READ = 0, WRITE = 1 };
mutex mu;
condition_variable cv;
atomic<bool> running;
queue<int> buffer;
atomic<state_t> state;
void generate_test_data()
{
const int times = 5;
static int data = 0;
for (int i = 0; i < times; i++) {
data = (data++) % 100;
buffer.push(data);
}
}
void ProducerThread() {
while (running) {
unique_lock<mutex> lock(mu);
cv.wait(lock, []() { return !running || state == WRITE; });
if (!running) return;
generate_test_data(); //producing here
lock.unlock();
//notify consumer to start consuming
state = READ;
cv.notify_one();
}
}
void ConsumerThread() {
while (running) {
unique_lock<mutex> lock(mu);
cv.wait(lock, []() { return !running || state == READ; });
if (!running) return;
while (!buffer.empty()) {
auto data = buffer.front(); //consuming here
buffer.pop();
cout << data << " \n";
}
//notify producer to start producing
if (buffer.empty()) {
state = WRITE;
cv.notify_one();
}
}
}
int main(){
running = true;
thread producer = thread([]() { ProducerThread(); });
thread consumer = thread([]() { ConsumerThread(); });
//simulating gui thread
while (!getch()){
}
running = false;
producer.join();
consumer.join();
}
Not a complete answer, but I think two condition variables could be helpful: one named buffer_empty that the producer thread will wait on, and another named buffer_filled that the consumer thread will wait on. I can't comment on the number of mutexes, how to loop, and so on, since I'm not sure about the details myself.
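A minimal sketch of that suggestion, assuming one shared hand-off buffer protected by a single mutex (buffer_empty and buffer_filled are the names suggested above; everything else is illustrative, and shutdown handling is omitted):
#include <condition_variable>
#include <mutex>
#include <vector>

std::mutex m;
std::condition_variable buffer_filled; // consumer waits on this
std::condition_variable buffer_empty;  // producer waits on this
std::vector<int> buffer;               // shared hand-off buffer, protected by m

void produce() {
    while (true) {
        std::vector<int> local = /* fill up a local batch, e.g. */ {1, 2, 3};
        std::unique_lock<std::mutex> lk(m);
        buffer_empty.wait(lk, [] { return buffer.empty(); }); // wait until the consumer took the last batch
        buffer = std::move(local);
        buffer_filled.notify_one();
    }
}

void consume() {
    while (true) {
        std::vector<int> local;
        {
            std::unique_lock<std::mutex> lk(m);
            buffer_filled.wait(lk, [] { return !buffer.empty(); }); // wait for a batch
            local = std::move(buffer);
            buffer.clear(); // moved-from state is unspecified, so clear explicitly
            buffer_empty.notify_one();
        }
        // process local without holding the lock
    }
}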
1. Accesses to shared variables should only be done while holding the mutex that protects them.
2. condition_variable::wait should check a condition.
3. The condition should be a shared variable protected by the mutex that you pass to condition_variable::wait.
4. The way to check the condition is to wrap the call to wait in a while loop, or to use the 2-argument overload of wait (which is equivalent to the while-loop version).
Note: These rules aren't strictly necessary if you truly understand what the hardware is doing. However, these problems get complicated quickly even with simple data structures, and it will be easier to prove that your algorithm works correctly if you follow them.
Your Q and Q_buf are shared variables. Due to Rule 1, I would prefer to have them as local variables declared in the function that uses them (consume() and produce(), respectively). There will be 1 shared buffer that will be protected by a mutex. The producer will add to its local buffer. When that buffer is full, it acquires the mutex and pushes the local buffer to the shared buffer. It then waits for the consumer to accept this buffer before producing more data.
The consumer waits for this shared buffer to "arrive", then it acquires the mutex and replaces its empty local buffer with the shared buffer. Then it signals to the producer that the buffer has been accepted so it knows to start producing again.
Semantically, I don't see a reason to use swap over move, since in every case one of the containers is empty anyway. Maybe you want to use swap because you know something about the underlying memory. You can use whichever you want and it will be fast and work the same (at least algorithmically).
This problem can be done with 1 condition variable, but it may be a little easier to think about if you use 2.
Here's what I came up with. Tested on Visual Studio 2017 (15.6.7) and GCC 5.4.0. I don't need to be credited or anything (it's such a simple piece), but legally I have to say that I offer no warranties whatsoever.
#include <thread>
#include <vector>
#include <mutex>
#include <condition_variable>
#include <chrono>

std::vector<int> g_deliveryBuffer;
bool g_quit = false;
std::mutex g_mutex; // protects g_deliveryBuffer and g_quit
std::condition_variable g_producerDeliver;
std::condition_variable g_consumerAccepted;

// consumer
void consume()
{
    // local buffer
    std::vector<int> consumerBuffer;
    while (true)
    {
        if (consumerBuffer.empty())
        {
            std::unique_lock<std::mutex> lock(g_mutex);
            while (g_deliveryBuffer.empty() && !g_quit) // if we beat the producer, wait for them to push to the delivery buffer
                g_producerDeliver.wait(lock);
            if (g_quit)
                break;
            consumerBuffer = std::move(g_deliveryBuffer); // get the buffer
        }
        g_consumerAccepted.notify_one(); // notify the producer that the buffer has been accepted
        // for-loop to process the elems in Q
        // ...
        consumerBuffer.clear();
        // ...
    }
}

// producer
void produce()
{
    std::vector<int> producerBuffer;
    while (true)
    {
        // for-loop to fill up Q_buf
        // ...
        producerBuffer = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
        // ...
        // once Q_buf is fully filled, wait until consumer asks to give it a full Q
        { // scope is for lock
            std::unique_lock<std::mutex> lock(g_mutex);
            g_deliveryBuffer = std::move(producerBuffer); // ok to push to the delivery buffer, it is guaranteed to be empty
            g_producerDeliver.notify_one();
            while (!g_deliveryBuffer.empty() && !g_quit)
                g_consumerAccepted.wait(lock); // wait for consumer to signal for more data
            if (g_quit)
                break;
            // We will never reach this point if the buffer is not empty.
        }
    }
}

int main()
{
    // spawn threads
    std::thread consumerThread(consume);
    std::thread producerThread(produce);
    // run for 5 seconds
    std::this_thread::sleep_for(std::chrono::seconds(5));
    // signal that it's time to quit
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        g_quit = true;
    }
    // one of the threads may be sleeping
    g_consumerAccepted.notify_one();
    g_producerDeliver.notify_one();
    consumerThread.join();
    producerThread.join();
    return 0;
}

How to prematurely kill std::async threads before they are finished *without* using a std::atomic_bool?

I have a function that takes a callback, and I used it to do work on 10 separate threads. However, it is often the case that not all of the work is needed. For example, if the desired result is obtained on the third thread, the work being done on the remaining threads should be stopped.
This answer here suggests that it is not possible unless you have the callback functions take an additional std::atomic_bool argument that signals whether the function should terminate prematurely.
This solution does not work for me. The workers are spun up inside a base class, and the whole point of this base class is to abstract away details of multithreading. How can I do this? I am anticipating that I will have to ditch std::async for something more involved.
#include <iostream>
#include <future>
#include <vector>

class ABC {
public:
    std::vector<std::future<int> > m_results;
    ABC() {};
    ~ABC() {};
    virtual int callback(int a) = 0;
    void doStuffWithCallBack();
};

void ABC::doStuffWithCallBack() {
    // start working
    for (int i = 0; i < 10; ++i)
        m_results.push_back(std::async(&ABC::callback, this, i));
    // analyze results and cancel all threads when you get the 1
    for (int j = 0; j < 10; ++j) {
        double foo = m_results[j].get();
        if (foo == 1) {
            break; // but threads continue running
        }
    }
    std::cout << m_results[9].get() << " <- this shouldn't have ever been computed\n";
}

class Derived : public ABC {
public:
    Derived() : ABC() {};
    ~Derived() {};
    int callback(int a) {
        std::cout << a << "!\n";
        if (a == 3)
            return 1;
        else
            return 0;
    };
};

int main(int argc, char **argv)
{
    Derived myObj;
    myObj.doStuffWithCallBack();
    return 0;
}
I'll just say that this should probably not be a part of a 'normal' program, since it could leak resources and/or leave your program in an unstable state, but in the interest of science...
If you have control of the thread loop, and you don't mind using platform features, you could inject an exception into the thread. With POSIX you can use signals for this; on Windows you would have to use SetThreadContext(). Though the exception will generally unwind the stack and call destructors, your thread may be in a system call or some other non-exception-safe place when the exception occurs.
Disclaimer: I only have Linux at the moment, so I did not test the Windows code.
#include <chrono>
#include <cstdio>
#include <string>
#include <thread>

#if defined(_WIN32)
# define ITS_WINDOWS
# include <windows.h>
#else
# define ITS_POSIX
#endif

#if defined(ITS_POSIX)
#include <signal.h>
#endif

void throw_exception() throw(std::string)
{
    throw std::string();
}

void init_exceptions()
{
    volatile int i = 0;
    if (i)
        throw_exception();
}

bool abort_thread(std::thread &t)
{
#if defined(ITS_WINDOWS)
    bool bSuccess = false;
    HANDLE h = t.native_handle();
    if (INVALID_HANDLE_VALUE == h)
        return false;
    if (INFINITE == SuspendThread(h))
        return false;
    CONTEXT ctx;
    ctx.ContextFlags = CONTEXT_CONTROL;
    if (GetThreadContext(h, &ctx))
    {
#if defined( _WIN64 )
        ctx.Rip = (DWORD64)(DWORD_PTR)throw_exception;
#else
        ctx.Eip = (DWORD)(DWORD_PTR)throw_exception;
#endif
        bSuccess = SetThreadContext(h, &ctx) ? true : false;
    }
    ResumeThread(h);
    return bSuccess;
#elif defined(ITS_POSIX)
    pthread_kill(t.native_handle(), SIGUSR2);
#endif
    return false;
}

#if defined(ITS_POSIX)
void worker_thread_sig(int sig)
{
    if (SIGUSR2 == sig)
        throw std::string();
}
#endif

void init_threads()
{
#if defined(ITS_POSIX)
    struct sigaction sa;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sa.sa_handler = worker_thread_sig;
    sigaction(SIGUSR2, &sa, 0);
#endif
}

class tracker
{
public:
    tracker() { printf("tracker()\n"); }
    ~tracker() { printf("~tracker()\n"); }
};

int main(int argc, char *argv[])
{
    init_threads();
    printf("main: starting thread...\n");
    std::thread t([]()
    {
        try
        {
            tracker a;
            init_exceptions();
            printf("thread: started...\n");
            std::this_thread::sleep_for(std::chrono::minutes(1000));
            printf("thread: stopping...\n");
        }
        catch (std::string s)
        {
            printf("thread: exception caught...\n");
        }
    });
    printf("main: sleeping...\n");
    std::this_thread::sleep_for(std::chrono::seconds(2));
    printf("main: aborting...\n");
    abort_thread(t);
    printf("main: joining...\n");
    t.join();
    printf("main: exiting...\n");
    return 0;
}
Output:
main: starting thread...
main: sleeping...
tracker()
thread: started...
main: aborting...
main: joining...
~tracker()
thread: exception caught...
main: exiting...

Readers / writer implementation using std::atomic (mutex free)

Below is an attempt at a multiple-reader/writer shared data structure which uses std::atomic and busy-waiting instead of mutexes and condition variables to synchronize between readers and writers. I am puzzled as to why the asserts in it are being hit. I'm sure there's a bug somewhere in the logic, but I'm not certain where it is.
The idea behind the implementation is that reading threads spin until the writer is done writing. As they enter the read function they increment the m_numReaders count, and while they are waiting for the writer they increment the m_numWaiting count.
The idea is that m_numWaiting should then always be smaller than or equal to m_numReaders, provided m_numWaiting is always incremented after m_numReaders and decremented before m_numReaders.
There shouldn't be a case where m_numWaiting is bigger than m_numReaders (or I'm not seeing it), since a reader always increments the reader counter first and only sometimes increments the waiting counter, and the waiting counter is always decremented first.
Yet this seems to be what's happening, because the asserts are being hit.
Can someone point out the logic error, if you see it?
Thanks!
#include <iostream>
#include <thread>
#include <atomic>
#include <cstdint>
#include <assert.h>

template<typename T>
class ReadWrite
{
public:
    ReadWrite() : m_numReaders(0), m_numWaiting(0), m_writing(false)
    {
        m_writeFlag.clear();
    }

    template<typename functor>
    void read(functor& readFunc)
    {
        m_numReaders++;
        std::atomic<bool> waiting(false);
        while (m_writing)
        {
            if (!waiting)
            {
                m_numWaiting++; // m_numWaiting should always be increased after m_numReaders
                waiting = true;
            }
        }
        assert(m_numWaiting <= m_numReaders);
        readFunc(&m_data);
        assert(m_numWaiting <= m_numReaders); // <-- These asserts get hit ?
        if (waiting)
        {
            m_numWaiting--; // m_numWaiting should always be decreased before m_numReaders
        }
        m_numReaders--;
        assert(m_numWaiting <= m_numReaders); // <-- These asserts get hit ?
    }

    //
    // Only a single writer can operate on this at any given time.
    //
    template<typename functor>
    void write(functor& writeFunc)
    {
        while (m_writeFlag.test_and_set());
        // Ensure no readers present
        while (m_numReaders);
        // At this point m_numReaders may have been increased !
        m_writing = true;
        // If a reader entered before the writing flag was set, wait for
        // it to finish
        while (m_numReaders > m_numWaiting);
        writeFunc(&m_data);
        m_writeFlag.clear();
        m_writing = false;
    }

private:
    T m_data;
    std::atomic<int64_t> m_numReaders;
    std::atomic<int64_t> m_numWaiting;
    std::atomic<bool> m_writing;
    std::atomic_flag m_writeFlag;
};

int main(int argc, const char * argv[])
{
    const size_t numReaders = 2;
    const size_t numWriters = 1;
    const size_t numReadWrites = 10000000;
    std::thread readThreads[numReaders];
    std::thread writeThreads[numWriters];
    ReadWrite<int> dummyData;

    auto writeFunc = [&](int* pData) { return; }; // intentionally empty
    auto readFunc = [&](int* pData) { return; };  // intentionally empty

    auto readThreadProc = [&]()
    {
        size_t numReads = numReadWrites;
        while (numReads--)
        {
            dummyData.read(readFunc);
        }
    };

    auto writeThreadProc = [&]()
    {
        size_t numWrites = numReadWrites;
        while (numWrites--)
        {
            dummyData.write(writeFunc);
        }
    };

    for (std::thread& thread : writeThreads) { thread = std::thread(writeThreadProc); }
    for (std::thread& thread : readThreads)  { thread = std::thread(readThreadProc); }
    for (std::thread& thread : writeThreads) { thread.join(); }
    for (std::thread& thread : readThreads)  { thread.join(); }
}

Keeping number of threads constant with pthread in C

I tried to find a solution to keep the number of working threads constant under Linux in C using pthreads, but I can't seem to figure out what's wrong with the following code:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

#define MAX_JOBS 50
#define MAX_THREADS 5

pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
int jobs = MAX_JOBS;
int worker = 0;
int counter = 0;

void *functionC() {
    pthread_mutex_lock(&mutex1);
    worker++;
    counter++;
    printf("Counter value: %d\n",counter);
    pthread_mutex_unlock(&mutex1);
    // Do something...
    sleep(4);
    pthread_mutex_lock(&mutex1);
    jobs--;
    worker--;
    printf(" >>> Job done: %d\n",jobs);
    pthread_mutex_unlock(&mutex1);
}

int main(int argc, char *argv[]) {
    int i=0, j=0;
    pthread_t thread[MAX_JOBS];
    // Create threads if the number of working threads doesn't exceed MAX_THREADS
    while (1) {
        if (worker > MAX_THREADS) {
            printf(" +++ In queue: %d\n", worker);
            sleep(1);
        } else {
            //printf(" +++ Creating new thread: %d\n", worker);
            pthread_create(&thread[i], NULL, &functionC, NULL);
            //printf("%d",worker);
            i++;
        }
        if (i == MAX_JOBS) break;
    }
    // Wait all threads to finish
    for (j=0;j<MAX_JOBS;j++) {
        pthread_join(thread[j], NULL);
    }
    return(0);
}
A while (1) loop keeps creating threads if the number of working threads is under a certain threshold. A mutex is supposed to lock the critical sections every time the global counter of the working threads is incremented (thread creation) and decremented (job is done). I thought it could work fine and for the most part it does, but weird things happen...
For instance, if I comment out the printf (as in this snippet) //printf(" +++ Creating new thread: %d\n", worker);, the while (1) loop seems to create a random number of threads at a time (18-25 in my experience) instead of respecting the if condition inside the loop (functionC prints "Counter value:" from 1 up to 18-25...). If I include the printf, the loop seems to behave "almost" the right way... This seems to hint that there's a missing "mutex" condition that I should add to the loop in main() to effectively block thread creation when MAX_THREADS is reached, but after changing this code a LOT of times over the past few days I'm a bit lost now. What am I missing?
Please let me know what I should change in order to keep the number of threads constant; it doesn't seem that I'm too far from the solution... Hopefully... :-)
Thanks in advance!
Your problem is that worker is not incremented until the new thread actually starts and gets to run - in the meantime, the main thread loops around, checks worker, finds that it hasn't changed, and starts another thread. It can repeat this many times, creating far too many threads.
So, you need to increment worker in the main thread, when you've decided to create a new thread.
You have another problem - you should be using condition variables to let the main thread sleep until it should start another thread, not using a busy-wait loop with a sleep(1); in it. The complete fixed code would look like:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>

#define MAX_JOBS 50
#define MAX_THREADS 5

pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond1 = PTHREAD_COND_INITIALIZER;
int jobs = MAX_JOBS;
int workers = 0;
int counter = 0;

void *functionC() {
    pthread_mutex_lock(&mutex1);
    counter++;
    printf("Counter value: %d\n",counter);
    pthread_mutex_unlock(&mutex1);
    // Do something...
    sleep(4);
    pthread_mutex_lock(&mutex1);
    jobs--;
    printf(" >>> Job done: %d\n",jobs);
    /* Worker is about to exit, so decrement count and wake up main thread */
    workers--;
    pthread_cond_signal(&cond1);
    pthread_mutex_unlock(&mutex1);
    return NULL;
}

int main(int argc, char *argv[]) {
    int i=0, j=0;
    pthread_t thread[MAX_JOBS];
    // Create threads if the number of working threads doesn't exceed MAX_THREADS
    while (i < MAX_JOBS) {
        /* Block on the condition variable until fewer than MAX_THREADS workers are running */
        pthread_mutex_lock(&mutex1);
        while (workers >= MAX_THREADS)
            pthread_cond_wait(&cond1, &mutex1);
        /* Another worker will be running shortly */
        workers++;
        pthread_mutex_unlock(&mutex1);
        pthread_create(&thread[i], NULL, &functionC, NULL);
        i++;
    }
    // Wait all threads to finish
    for (j=0;j<MAX_JOBS;j++) {
        pthread_join(thread[j], NULL);
    }
    return(0);
}
Note that even though this works, it isn't ideal - it's best to create the number of threads you want up-front, and have them loop around, waiting for work. This is because creating and destroying threads has significant overhead, and because it often simplifies resource management. A version of your code rewritten to work like this would look like:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>

#define MAX_JOBS 50
#define MAX_THREADS 5

pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
int jobs = MAX_JOBS;
int counter = 0;

void *functionC()
{
    int running_job;

    pthread_mutex_lock(&mutex1);
    counter++;
    printf("Counter value: %d\n",counter);
    while (jobs > 0) {
        running_job = jobs--;
        pthread_mutex_unlock(&mutex1);
        printf(" >>> Job starting: %d\n", running_job);
        // Do something...
        sleep(4);
        printf(" >>> Job done: %d\n", running_job);
        pthread_mutex_lock(&mutex1);
    }
    pthread_mutex_unlock(&mutex1);
    return NULL;
}

int main(int argc, char *argv[]) {
    int i;
    pthread_t thread[MAX_THREADS];

    for (i = 0; i < MAX_THREADS; i++)
        pthread_create(&thread[i], NULL, &functionC, NULL);

    // Wait all threads to finish
    for (i = 0; i < MAX_THREADS; i++)
        pthread_join(thread[i], NULL);

    return 0;
}
