Producer-Consumer using semaphores in child processes - Linux

I had implemented the bounded-buffer problem (buffer size 5) using three semaphores: two counting semaphores (with a maximum count of 5) and one binary semaphore for the critical section.
The producer and consumer were separate processes sharing the buffer.
Then I moved on to try the same problem, this time with one parent process that sets up the shared memory (buffer) and two child processes that act as producer and consumer.
I copied almost everything from the earlier code into the new one (the producer goes into the ret == 0 and i == 0 block and the consumer goes into the ret == 0 and i == 1 block; here i is the count of child processes).
However, my processes block. The pseudo-implementation of the code is as follows.
Please suggest whether the steps are correct. I think I may be going wrong with the sharing of the semaphores and their values. The shared memory, in any case, gets shared implicitly between the parent and both child processes.
struct shma { readindex, writeindex, buf_max, char buf[5], used_count };
main()
{
    struct shma* shm;
    shmid = shmget();
    shm = shmat(shmid);
    init_shma(0,0,5,0);
    while(i++ < 2)
    {
        ret = fork();
        if(ret > 0)
            continue;
        if(ret == 0)
        {
            if(i == 0)
            {
                char value;
                sembuf[3]; semun u1; values[] = {5,1,0}; // Sem Numbers 1,2,3
                semid = semget(3 semaphores);
                semctl(SETALL, values);
                while(1)
                {
                    getValuefromuser(&value);
                    decrement(1);
                    decrement(2); // Critical section
                    *copy value to shared memory*
                    increment(2);
                    increment(3); // used count
                }
            }
            if(i == 1)
            {
                char value;
                sembuf[3]; semun u1; values[] = {5,1,0}; // Sem Numbers 1,2,3
                semid = semget(3 semaphores);
                semctl(SETALL, values);
                while(1)
                {
                    decrement(3); // Used count
                    decrement(2); // Critical section
                    read and print(&value); // From shared memory
                    increment(2);
                    increment(1); // free slots
                }
            }
        }
    } // while
    Cleanup code.
} // main
Should I get the semaphore IDs in both child processes, or is there something else missing?

The pseudo-code implementation would be something like this: get the semaphore ID in the child process using the same key (either via ftok or a hard-coded key), then obtain the current value of the semaphore and perform the appropriate operations.
struct shma { readindex, writeindex, buf_max, char buf[5], used_count };
main()
{
    struct shma* shm;
    shmid = shmget();
    shm = shmat(shmid);
    init_shma(0,0,5,0);
    sembuf[3]; semun u1; values[] = {5,1,0}; // Sem Numbers 1,2,3
    semid = semget(3 semaphores);
    semctl(SETALL, values);
    while(i++ < 2)
    {
        ret = fork();
        if(ret > 0)
            continue;
        if(ret == 0)
        {
            if(i == 0)
            {
                char value;
                sembuf[3]; semun u1; values[];
                semid = semget(3 semaphores); // same key as the parent, no SETALL here
                while(1)
                {
                    getValuefromuser(&value);
                    decrement(1);
                    decrement(2); // Critical section
                    *copy value to shared memory*
                    increment(2);
                    increment(3); // used count
                }
            }
            if(i == 1)
            {
                char value;
                sembuf[3]; semun u1; values[];
                semid = semget(3 semaphores); // same key as the parent, no SETALL here
                while(1)
                {
                    decrement(3); // Used count
                    decrement(2); // Critical section
                    read and print(&value); // From shared memory
                    increment(2);
                    increment(1); // free slots
                }
            }
        }
    } // while
    Cleanup code.
} // main
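The decrement()/increment() steps above are just semop() calls on the semaphore set. Roughly like this (the helper names are mine, not from your code, and semnum is whichever of the three semaphores you mean):

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* Illustrative helpers: block-and-decrement / increment one semaphore in the set. */
static int sem_dec(int semid, unsigned short semnum)
{
    struct sembuf op = { semnum, -1, 0 };   /* waits until the value is > 0, then subtracts 1 */
    return semop(semid, &op, 1);            /* returns -1 on error */
}

static int sem_inc(int semid, unsigned short semnum)
{
    struct sembuf op = { semnum, +1, 0 };   /* adds 1, waking a blocked waiter if any */
    return semop(semid, &op, 1);
}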

I ultimately found a problem with my understanding.
I have to create and set up the three semaphores in the parent process, and then obtain them in the respective producer and consumer child processes and use them accordingly.
Earlier, I was creating the semaphores in both the producer and the consumer.
Silly me!
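In other words, the arrangement that ended up working is roughly the following: the parent creates the semaphore set and runs SETALL exactly once before fork(), and each child only looks the set up with the same key (or simply reuses the inherited semid). A trimmed-down sketch of that; the key, the union definition and the skipped error handling are just placeholders:

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <unistd.h>

union semun { int val; struct semid_ds *buf; unsigned short *array; };

int main(void)
{
    key_t key = ftok("/tmp", 'P');                  /* same key for parent and children */

    /* Parent: create the set of 3 semaphores and set the initial values once. */
    int semid = semget(key, 3, IPC_CREAT | 0666);
    unsigned short init[3] = { 5, 1, 0 };           /* free slots, mutex, used count */
    union semun u;
    u.array = init;
    semctl(semid, 0, SETALL, u);

    if (fork() == 0) {
        /* Child (producer or consumer): look the existing set up, never SETALL again. */
        int child_semid = semget(key, 3, 0666);
        (void)child_semid;                          /* ... semop() loop would go here ... */
        _exit(0);
    }
    /* ... fork the second child, wait(), then semctl(semid, 0, IPC_RMID) in cleanup ... */
    return 0;
}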

Related

Is this valid logic for a lock free queue based on lists and iterators (c++11)

Can anyone tell me if there is a flaw in the logic of my lock free queue code below? Specifically, I'm concerned that having one thread modify a std::list while other threads are simultaneously manipulating iterators to that list may not be OK. My application based around this method appears to be working fine, but the more I think about it the more I'm worried that I've misunderstood the relation between std::list and iterators and that there's a problem hiding.
Instead of explaining the code here, I tried to write it to be very easy to read. Generally only the producer thread modifies any of the std::lists, and the various consumer threads only work with iterators. Shared pointers are used to automate memory management. The entire example code is below, it should compile and run as-is on Visual Studio 2015.
// shared_pointers.cpp : Defines the entry point for the console application.
//
/* This process based on the lock free queues article found here: http://www.drdobbs.com/parallel/lock-free-queues/208801974
* The goal is to have one thread generate data, and multiple threads consume the data, without requiring the overhead of locks.
* This design is based on the premise that list iterators are guaranteed to be valid, even after the list has been modified.
* The parent thread is the ONLY thread to modify the list, the child threads all work with iterators.
*
*
* The queue:
* [BeeeeeeeeeeeeeeeHwwwwwwwwwwwwwwwwwwwwT]
*  ^       ^       ^         ^          ^
*  |       |       |         |          |
*  Begin   Erase   Head      Waiting    Tail
*
* Begin - The start of the queue, this is a dummy element that won't be read
* Erase - These elements have already been read and are waiting to be erased
* Head - This is the first unread element in the queue
* Waiting - These elements are waiting to be read
* Tail - This is the end of the list, same as vector::end()
*
* We will combine lists, iterators and shared pointers to make a lock free single-producer multiple-consumer queue.
* The main thread (producer thread) will allocate buffers and keep a list of shared pointers to the buffers.
* Every worker (consumer) thread will get list of the same shared pointers. As each consumer thread consumes the buffer, the
* buffer references will be removed from that threads list. Periodically the main thread will check the number of references
* to each shared pointer, and when it's down to 1 then all consumer threads are finished with the pointer and it will be released,
* freeing the buffer memory. The goal is to eliminate the overhead of locks and memory management.
*
*/
#include "stdafx.h"
#include <list>
#include <memory>
#include <thread>
#include <vector>
#include <windows.h>
#include <algorithm>
#include <iostream>
typedef std::shared_ptr<BYTE> SharedBytePtr;
typedef std::list<SharedBytePtr> SharedBytePtrList;
#define BUFFER_SIZE_BYTES (1024 * 1024 * 24) /* 24MB */
#define MAX_NUMBER_BUFFERS 20
#define NUMBER_CONSUMER_THREADS 50
#define NUMBER_DATA_READS 10000 /* Total number of buffers to "fill with data" by the producer thread, to be consumed by the worker threads */
class DataConsumer
{
public:
void StartInfiniteLoop();
void ConsumeData(SharedBytePtr &pData);
bool bExitNow; // Set true to cause the thread to exit
// Lock free queue stuff. Iterators should always be valid, but only 1 thread can edit the list!
SharedBytePtrList m_lMyBuffers;
SharedBytePtrList::iterator m_iHead, m_iTail;
DataConsumer()
{
bExitNow = false;
// Add an empty item to the list, we will always think this has already been read
std::shared_ptr<BYTE> pDummy;
m_lMyBuffers.push_back(pDummy);
m_iHead = m_lMyBuffers.begin(); // Dummy element, just a place holder
m_iTail = m_lMyBuffers.end(); // Always vector::end()
} // ctor()
};
/* Sit in this loop, consuming data buffers as they become available */
void DataConsumer::StartInfiniteLoop()
{
while (true)
{
if (bExitNow)
{
return;
}
// Do we have a buffer ready to be consumed? Remember buffer 0 is a dummy, so check for size > 1
if (m_lMyBuffers.size() > 1)
{
// Access the list elements using ONLY iterators! Only the main producer thread can access the list directly.
// Iterators are guaranteed by definition to be valid, even if the list is modified.
SharedBytePtrList::iterator iNextBuf = m_iHead;
++iNextBuf; // m_iHead is a dummy entry or already read
// Loop through all available buffers and consume them
while (iNextBuf != m_iTail)
{
m_iHead = iNextBuf;
ConsumeData(*m_iHead);
iNextBuf++;
}
}
else
{
// Nothing to do, unload the CPU until more data is ready
Sleep(1);
}
} // while
} // ::StartInfiniteLoop
void DataConsumer::ConsumeData(SharedBytePtr &pData)
{
// Do stuff....
Sleep(10); // Pretend that we're doing something with the data..
return;
}
class WorkerThreadClass
{
public:
std::thread theThread; // Reference to the actual thread
DataConsumer* dataConsumer; // Reference to the DataConsumer object where "theThread" will run..
};
int main()
{
// Make a local list to store a reference to every buffer allocated in the main (producer) thread
std::list<std::shared_ptr<BYTE>> lBuffers;
// Start up the consumer threads
std::vector<WorkerThreadClass*> vWorkerThreads;
for (int t = 0; t < NUMBER_CONSUMER_THREADS; t++)
{
DataConsumer *pDataConsumer = new DataConsumer;
WorkerThreadClass *pWorkerThread = new WorkerThreadClass();
// Startup the thread
pWorkerThread->theThread = std::thread(&DataConsumer::StartInfiniteLoop, pDataConsumer);
pWorkerThread->dataConsumer = pDataConsumer;
// Add our new worker thread to our list
vWorkerThreads.push_back(pWorkerThread);
} // for()
// We are the main (producer) thread now. Simulate 10,000 data reads. Each read will go into a buffer, and
// a reference to that buffer will be stored both on the main (producer) thread buffer list, and on the buffer list
// of every worker (consumer) thread for processing.
int iBuffersToRead = NUMBER_DATA_READS;
while (iBuffersToRead > 0)
{
// Check the buffer reference list on each worker (consumer) thread, and erase every entry that falls in the "erase" area, between the "begin" and "head" elements. These buffers have already
// been consumed by this thread. Modify these lists ONLY from the main (producer) thread, NOT from the worker threads themselves!
// [BeeeeeeeeeeeeeeeHwwwwwwwwwwwwwwwwwwwwT]
std::for_each(vWorkerThreads.begin(), vWorkerThreads.end(), [&](WorkerThreadClass* thisThread)
{
thisThread->dataConsumer->m_lMyBuffers.erase(thisThread->dataConsumer->m_lMyBuffers.begin(), thisThread->dataConsumer->m_iHead); // clean up unused entries
});
// Have we already allocated our limit of buffers?
if (lBuffers.size() < MAX_NUMBER_BUFFERS)
{
iBuffersToRead--;
std::cout << "Buffer read number: " << NUMBER_DATA_READS - iBuffersToRead << std::endl;
// Create a new buffer
SharedBytePtr pBuf = (SharedBytePtr)new BYTE[BUFFER_SIZE_BYTES];
// Fill the buffer with data here
// ReadSomeDataIntoBuffer(pBuf);....
// Add a reference to this buffer to the main (producer) thread list of all buffers
lBuffers.push_back(pBuf);
// Now add a reference to this buffer to the list of buffers on every worker (consumer) thread. This will increase the shared_ptr ref count beyond "1", we will not release this
// buffer until the ref count is again down to "1", which means that the main (producer) thread owns the only reference to it, and all worker (consumer) threads are finished with it.
std::cout << "Adding buffer to threads, total buffers now: " << lBuffers.size() << std::endl;
std::for_each(vWorkerThreads.begin(), vWorkerThreads.end(), [&](WorkerThreadClass* thisThread)
{
thisThread->dataConsumer->m_lMyBuffers.push_back(pBuf); // Push back new buffer
thisThread->dataConsumer->m_iTail = thisThread->dataConsumer->m_lMyBuffers.end(); // Update tail
thisThread->dataConsumer->m_lMyBuffers.erase(thisThread->dataConsumer->m_lMyBuffers.begin(), thisThread->dataConsumer->m_iHead); // clean up old entries while we're here
});
} // if()
else
{
// We've hit our limit on buffers and cannot create new ones until old ones are freed.
// Check all references on the main (producer) list of buffer references. Any that are "unique()" have only 1 reference, meaning all worker (consumer) threads
// are finished with it. In this case, release our reference and the memory will be automatically freed.
SharedBytePtrList::iterator iNextBuf = lBuffers.begin();
while (iNextBuf != lBuffers.end())
{
if (iNextBuf->unique())
{
// We (main thread) hold the only reference, remove the reference and the memory referenced by the shared_ptr will be freed
*iNextBuf = nullptr;
// Remove the entry from the list
lBuffers.remove(*iNextBuf);
// List is now invalid, reset iterator and loop
iNextBuf = lBuffers.begin();
std::cout << "Released buffer, number buffers now: " << lBuffers.size() << "\n";
continue;
} // if()
else
{
// std::cout << "Buffer still in-use by some worker (consumer) threads, cannot release it yet! Buffer list size: " << lBuffers.size() << "\n";
}
iNextBuf++;
} // while()
} // else
} // for()
// All of our work is finished, time to clean up
// Tell all worker (consumer) threads to exit
std::for_each(vWorkerThreads.begin(), vWorkerThreads.end(), [&](WorkerThreadClass* thisThread) { thisThread->dataConsumer->bExitNow = true; });
// As each thread exits, clean up its object
std::vector<WorkerThreadClass*>::iterator iNext = vWorkerThreads.begin();
while (iNext != vWorkerThreads.end())
{
// Wait for the thread to exit
(*iNext)->theThread.join();
// Clean up the associated object
delete (*iNext)->dataConsumer;
iNext++;
}
// Clean up the final reference to our buffers so their memory will be freed
std::for_each(lBuffers.begin(), lBuffers.end(), [&](std::shared_ptr<BYTE> pBuf) { pBuf = nullptr; });
return 0;
}

POSIX semaphore with related processes running threads

I have an assignment to implement the producer-consumer problem in a convoluted way (maybe to test my understanding). The parent process should set up a shared memory. The unnamed semaphores (for empty count and filled count) should be initialized and a mutex should be initialized. Then two child processes are created, a producer child and a consumer child. Each child process should create a new thread which should do the job.
PS: I have read that the semaphores should be kept in shared memory, as they will be shared by different processes.
Please provide some hints, or suggest changes.
So far, I have done this:
struct shmarea
{
unsigned short int read;
unsigned short int max_size;
char scratch[3][50];
unsigned short int write;
sem_t sem1;// Empty slot semaphore
sem_t sem2;// Filled slot Semaphore
};
void *thread_read(void* args);
void *thread_write(void *args);
pthread_mutex_t work_mutex;
struct shmarea *shma;
int main()
{
int fork_value,i=0,shmid;
printf("Parent process id is %d\n\n",getpid());
int res1,res2;
key_t key;
char *path = "/tmp";
int id = 'S';
key = ftok(path, id);
shmid = shmget(key,getpagesize(),IPC_CREAT|0666);
printf("Parent:Shared Memory id = %d\n",id);
shma = shmat(shmid,0,0);
shma->read = 0;
shma->max_size = 3;
shma->write = 0;
pthread_t a_thread;
pthread_t b_thread;
void *thread_result1,*thread_result2;
res1 = sem_init(&(shma->sem1),1,3);//Initializing empty slot sempahore
res2 = sem_init(&(shma->sem2),1,0);//Initializing filled slot sempahore
res1 = pthread_mutex_init(&work_mutex,NULL);
while(i<2)
{
fork_value = fork();
if(fork_value > 0)
{
i++;
}
if(fork_value == 0)
{
if(i==0)
{
printf("***0***\n");
//sem_t sem1temp = shma->sem1;
char ch;int res;
res= pthread_create(&a_thread,NULL,thread_write,NULL);
}
if(i==1)
{
printf("***1***\n");
//sem_t sem2temp = shma->sem2;
int res;
char ch;
res= pthread_create(&b_thread,NULL,thread_read,NULL);
}
}
}
int wait_V,status;
res1 = pthread_join(a_thread,&thread_result1);
res2 = pthread_join(b_thread,&thread_result2);
}
void *thread_read(void *args)
{
while(1)
{
sem_wait(&(shma->sem2));
pthread_mutex_lock(&work_mutex);
printf("The buf read from consumer:%s\n",shma->scratch[shma->read]);
shma->read = (shma->read+1)%shma->max_size;
pthread_mutex_unlock(&work_mutex);
sem_post(&(shma->sem1));
}
}
void *thread_write(void *args)
{
char buf[50];
while(1)
{
sem_wait(&(shma->sem1));
pthread_mutex_lock(&work_mutex);
read(STDIN_FILENO,buf,sizeof(buf));
strcpy(shma->scratch[shma->write],buf);
shma->write = (shma->write+1)%shma->max_size;
pthread_mutex_unlock(&work_mutex);
sem_post(&(shma->sem2));
}
}
(1) Your biggest problem by far is that you have managed to write a fork bomb. Because you don't exit either child in the fork loop, each child is going to fall through, loop around, and create children of its own until you crash or bring the system down. You want something more like this:
while(i < 2)
{
fork_value = fork();
if(fork_value > 0)
i++;
if(fork_value == 0)
{
if(i==0)
{
printf("0 child is pid %d\n", getpid());
int res;
res = pthread_create(&a_thread,NULL,thread_write,NULL);
res = pthread_join(a_thread,&thread_result1);
exit(0);
}
if(i==1)
{
printf("1 child is pid %d\n", getpid());
int res;
res = pthread_create(&b_thread,NULL,thread_read,NULL);
res = pthread_join(b_thread,&thread_result2);
exit(0);
}
}
}
for (i = 0; i < 2; ++i)
wait(NULL);
Notice the wait on the children, which you neglected.
(2) Always check your return codes. They are like safety belts: a bit of a drag, but so helpful when you crash. (Yes, I didn't take my own advice here, but you should.)
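For the calls you are already using, the checks would look roughly like this (struct trimmed down, key just an example):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <semaphore.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

struct shmarea { sem_t sem1; sem_t sem2; };   /* trimmed to what the checks need */

int main(void)
{
    key_t key = ftok("/tmp", 'S');
    if (key == (key_t)-1) { perror("ftok"); exit(EXIT_FAILURE); }

    int shmid = shmget(key, getpagesize(), IPC_CREAT | 0666);
    if (shmid == -1) { perror("shmget"); exit(EXIT_FAILURE); }

    struct shmarea *shma = shmat(shmid, NULL, 0);
    if (shma == (void *)-1) { perror("shmat"); exit(EXIT_FAILURE); }

    if (sem_init(&shma->sem1, 1, 3) == -1) {  /* pshared = 1: shared between processes */
        perror("sem_init");
        exit(EXIT_FAILURE);
    }
    return 0;
}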
(3) These names are awful.
unsigned short int read;
unsigned short int write;
Stay away from naming variables after system calls. It's confusing and just asking for trouble.
(4) Terminology-wise, processes with a common ancestor, like these, are related. The parent can open shared memory and other resources and pass them on to the children. Unrelated processes would be, for example, multiple instances of a program launched from different terminals. They can share resources, but not in the "inherited" way forked processes do.
It's late and I didn't get around to looking at what you are doing with the threads and such, but this should get you started.

Multi thread Dead Lock - Producer & Customer module using pthread lib

Recently I have been investigating the pthread multi-threading library and working through some examples.
I am trying to write a Producer-Customer module: there is a queue to store the Producer's products, which can be fetched by the Customer.
I set the queue MAX-SIZE to 20. When the queue is full, the Producer thread waits until the Customer thread consumes one item and notifies the Producer that it can produce again. Likewise, when the queue is empty, the Customer waits until the Producer thread produces a new item and notifies it. :-)
When I make the Customer thread consume faster than the Producer produces, it works fine and the log output is really what I expected. But when I make the Producer thread produce faster than the Customer consumes, it eventually seems to cause a deadlock :-(
I don't know the reason; can anyone kindly read my code and give me some tips on how to modify it?
Thanks!
#include "commons.h"
typedef struct tagNode {
struct tagNode *pNext;
char *pContent;
}NodeSt, *PNodeSt;
typedef struct {
size_t mNodeNum;
size_t mNodeIdx;
PNodeSt mRootNode;
}WorkQueue;
#define WORK_QUEUE_MAX 20
static pthread_cond_t g_adder_cond = PTHREAD_COND_INITIALIZER;
static pthread_mutex_t g_adder_mutex = PTHREAD_MUTEX_INITIALIZER;
static WorkQueue g_work_queue = {0};
//------------------------------------------------------------------------
void *customer_thread_runFunc(void *usrdat){
for( ; ; ) {
pthread_mutex_lock(&g_adder_mutex);{
while( g_work_queue.mNodeNum == 0 ) {
pthread_cond_wait(&g_adder_cond, &g_adder_mutex);
}
/********************** CONSUME NEW PRODUCT ***********************/
g_work_queue.mNodeNum --;
if( g_work_queue.mRootNode->pNext != NULL ) {
PNodeSt pTempNode = g_work_queue.mRootNode->pNext;
free( g_work_queue.mRootNode->pContent );
free( g_work_queue.mRootNode );
g_work_queue.mRootNode = pTempNode;
} else {
free( g_work_queue.mRootNode->pContent );
free( g_work_queue.mRootNode );
g_work_queue.mRootNode = NULL;
}
/********************** CONSUME PRODUCT END ***********************/
// Notify Producer Thread
pthread_cond_signal(&g_adder_cond);
}pthread_mutex_unlock(&g_adder_mutex);
// PAUSE FOR 300ms
usleep(300);
}
return NULL;
}
//------------------------------------------------------------------------
void *productor_thread_runFunc( void *usrdat ) {
for( ; ; ) {
pthread_mutex_lock(&g_adder_mutex); {
char tempStr[64];
PNodeSt pNodeSt = g_work_queue.mRootNode;
while( g_work_queue.mNodeNum >= WORK_QUEUE_MAX ) {
pthread_cond_wait(&g_adder_cond, &g_adder_mutex);
}
/********************** PRODUCE NEW PRODUCT ***********************/
g_work_queue.mNodeNum ++;
g_work_queue.mNodeIdx ++;
if( pNodeSt != NULL ) {
for( ; pNodeSt->pNext != NULL; pNodeSt = pNodeSt->pNext );
pNodeSt->pNext = malloc(sizeof(NodeSt));
memset(pNodeSt->pNext, 0, sizeof(NodeSt));
sprintf( tempStr, "production id: %d", g_work_queue.mNodeIdx);
pNodeSt->pNext->pContent = strdup(tempStr);
} else {
g_work_queue.mRootNode = malloc(sizeof(NodeSt));
memset(g_work_queue.mRootNode, 0, sizeof(NodeSt));
sprintf( tempStr, "production id: %d", g_work_queue.mNodeIdx);
g_work_queue.mRootNode->pContent = strdup(tempStr);
}
/********************** PRODUCE PRODUCT END ***********************/
// Notify Customer Thread
pthread_cond_signal(&g_adder_cond);
}pthread_mutex_unlock(&g_adder_mutex);
// PAUSE FOR 150ms, faster than Customer Thread
usleep(150);
}
return NULL;
}
//------------------------------------------------------------------------
int main(void) {
pthread_t pt1, pt3;
pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_create(&pt1, &attr, customer_thread_runFunc, NULL);
pthread_create(&pt3, &attr, productor_thread_runFunc, NULL);
pthread_join(pt1, NULL);
pthread_join(pt3, NULL);
printf("MAIN - main thread finish!\n");
return EXIT_SUCCESS;
}
Your producer is waiting on the same condition variable as your consumer? That is definitely a source of trouble. Think about your code conceptually: what precondition does the producer need before "producing"? As you mentioned, the buffer needs to have space.
I did not look in detail, but you probably need an additional condition variable used by the producer (not the same one as the consumer). The producer waits only if the queue is full. The consumer signals every time it successfully retrieves something from the queue.
EDIT: Reading the pthread library documentation, one mutex can be used with two condition variables.
IDEA OF PSEUDOCODE :)
Mutex mqueue
Condition cprod, ccons

produce()
    mqueue.lock
    while the queue is full
        cprod.wait(mqueue)
    end
    do the production on the queue
    ccons.signal
    mqueue.unlock
end produce

consume()
    mqueue.lock
    while the queue is empty
        ccons.wait(mqueue)
    end
    do the consumption on the queue
    cprod.signal
    mqueue.unlock
end consume
Preferably signal while you hold the lock. Here I don't think the order makes a difference.
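A rough pthread version of that pseudocode, with two condition variables sharing one mutex. The names g_not_full / g_not_empty and the bare counter standing in for your queue are only placeholders:

#include <pthread.h>

#define WORK_QUEUE_MAX 20

static pthread_mutex_t g_mutex     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  g_not_full  = PTHREAD_COND_INITIALIZER;   /* producer waits here */
static pthread_cond_t  g_not_empty = PTHREAD_COND_INITIALIZER;   /* consumer waits here */
static int g_count = 0;                                          /* stand-in for the queue */

void produce_one(void)
{
    pthread_mutex_lock(&g_mutex);
    while (g_count >= WORK_QUEUE_MAX)              /* full: wait for the consumer */
        pthread_cond_wait(&g_not_full, &g_mutex);
    g_count++;                                     /* append the real node here */
    pthread_cond_signal(&g_not_empty);             /* wake a waiting consumer */
    pthread_mutex_unlock(&g_mutex);
}

void consume_one(void)
{
    pthread_mutex_lock(&g_mutex);
    while (g_count == 0)                           /* empty: wait for the producer */
        pthread_cond_wait(&g_not_empty, &g_mutex);
    g_count--;                                     /* remove the real node here */
    pthread_cond_signal(&g_not_full);              /* wake a waiting producer */
    pthread_mutex_unlock(&g_mutex);
}

Keeping a single mutex is fine (as the EDIT notes, one mutex can be used with two condition variables); only the conditions are split by role.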

Producer-Consumer Implementation

I need to implement the producer-consumer problem in my project. N consumers and M producers will be created. A producer will use the publish(v) call to deliver data v to the consumers. A consumer will use the get_data(v) call to get a copy of data v. I really don't know how to implement it. Please help me.
I am going to use C to implement it. I will create N processes for the consumers and M processes for the producers. If a producer publishes a piece of data, other producers cannot publish until all consumers have received it. I will use semaphores and shared memory to exchange data.
I found something which does a similar job, but it uses threads and I need processes instead. How can I change this?
#include <pthread.h>
#include <stdio.h>
#include <semaphore.h>
#define BUFF_SIZE 4
#define FULL 0
#define EMPTY 0
char buffer[BUFF_SIZE];
int nextIn = 0;
int nextOut = 0;
sem_t empty_sem_mutex; //producer semaphore
sem_t full_sem_mutex; //consumer semaphore
void Put(char item)
{
int value;
sem_wait(&empty_sem_mutex); //get the mutex to fill the buffer
buffer[nextIn] = item;
nextIn = (nextIn + 1) % BUFF_SIZE;
printf("Producing %c ...nextIn %d..Ascii=%d\n",item,nextIn,item);
if(nextIn==FULL)
{
sem_post(&full_sem_mutex);
sleep(1);
}
sem_post(&empty_sem_mutex);
}
void * Producer()
{
int i;
for(i = 0; i < 10; i++)
{
Put((char)('A'+ i % 26));
}
}
void Get()
{
int item;
sem_wait(&full_sem_mutex); // gain the mutex to consume from buffer
item = buffer[nextOut];
nextOut = (nextOut + 1) % BUFF_SIZE;
printf("\t...Consuming %c ...nextOut %d..Ascii=%d\n",item,nextOut,item);
if(nextOut==EMPTY) //its empty
{
sleep(1);
}
sem_post(&full_sem_mutex);
}
void * Consumer()
{
int i;
for(i = 0; i < 10; i++)
{
Get();
}
}
int main()
{
pthread_t ptid,ctid;
//initialize the semaphores
sem_init(&empty_sem_mutex,0,1);
sem_init(&full_sem_mutex,0,0);
//creating producer and consumer threads
if(pthread_create(&ptid, NULL,Producer, NULL))
{
printf("\n ERROR creating thread 1");
exit(1);
}
if(pthread_create(&ctid, NULL,Consumer, NULL))
{
printf("\n ERROR creating thread 2");
exit(1);
}
if(pthread_join(ptid, NULL)) /* wait for the producer to finish */
{
printf("\n ERROR joining thread");
exit(1);
}
if(pthread_join(ctid, NULL)) /* wait for consumer to finish */
{
printf("\n ERROR joining thread");
exit(1);
}
sem_destroy(&empty_sem_mutex);
sem_destroy(&full_sem_mutex);
//exit the main thread
pthread_exit(NULL);
return 1;
}
I'd suggest you make a plan and start reading. For example:
Read about how to create and manage threads. Hint: pthread.
Think about how the threads will communicate - usually they use a common data structure. Hint: message queue.
Think about how to protect the data structure so that both threads can read and write safely. Hint: mutexes.
Implement consumer and producer code.
Really, if you want more information you have to work a bit and ask more specific questions. Good luck!
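Since you need processes rather than threads: the usual mechanical change to code like the above is to put the buffer and the semaphores into memory shared by the processes (an anonymous MAP_SHARED mapping is one option), initialize the semaphores with pshared = 1, and fork() instead of calling pthread_create(). This is only a rough sketch of that change for one producer and one consumer; it does not implement your "every consumer gets a copy" requirement, and all the names are placeholders:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <semaphore.h>
#include <sys/mman.h>
#include <sys/wait.h>

#define BUFF_SIZE 4

struct shared {                       /* everything both processes touch */
    sem_t empty;                      /* counts free slots   */
    sem_t full;                       /* counts filled slots */
    sem_t mutex;                      /* protects the indices */
    int   next_in, next_out;
    char  buffer[BUFF_SIZE];
};

int main(void)
{
    struct shared *shm = mmap(NULL, sizeof(*shm), PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shm == MAP_FAILED) { perror("mmap"); exit(EXIT_FAILURE); }

    sem_init(&shm->empty, 1, BUFF_SIZE);   /* pshared = 1: usable across processes */
    sem_init(&shm->full,  1, 0);
    sem_init(&shm->mutex, 1, 1);
    shm->next_in = shm->next_out = 0;

    if (fork() == 0) {                     /* producer process */
        for (int i = 0; i < 10; i++) {
            sem_wait(&shm->empty);
            sem_wait(&shm->mutex);
            shm->buffer[shm->next_in] = (char)('A' + i % 26);
            shm->next_in = (shm->next_in + 1) % BUFF_SIZE;
            sem_post(&shm->mutex);
            sem_post(&shm->full);
        }
        _exit(0);
    }
    if (fork() == 0) {                     /* consumer process */
        for (int i = 0; i < 10; i++) {
            sem_wait(&shm->full);
            sem_wait(&shm->mutex);
            printf("Consumed %c\n", shm->buffer[shm->next_out]);
            shm->next_out = (shm->next_out + 1) % BUFF_SIZE;
            sem_post(&shm->mutex);
            sem_post(&shm->empty);
        }
        _exit(0);
    }
    wait(NULL);
    wait(NULL);
    return 0;
}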

pthread_cond_broadcast problem

Using pthreads on Linux 2.6.30, I am trying to send a single signal which will cause multiple threads to begin execution. The broadcast seems to be received by only one thread. I have tried both pthread_cond_signal and pthread_cond_broadcast and both seem to have the same behavior. For the mutex in pthread_cond_wait, I have tried both a common mutex and separate (local) mutexes, with no apparent difference.
worker_thread(void *p)
{
// setup stuff here
printf("Thread %d ready for action \n", p->thread_no);
pthread_cond_wait(p->cond_var, p->mutex);
printf("Thread %d off to work \n", p->thread_no);
// work stuff
}
dispatch_thread(void *p)
{
// setup stuff
printf("Wakeup, everyone ");
pthread_cond_broadcast(p->cond_var);
printf("everyone should be working \n");
// more stuff
}
main()
{
pthread_cond_init(cond_var);
for (i=0; i!=num_cores; i++) {
pthread_create(worker_thread...);
}
pthread_create(dispatch_thread...);
}
Output:
Thread 0 ready for action
Thread 1 ready for action
Thread 2 ready for action
Thread 3 ready for action
Wakeup, everyone
everyone should be working
Thread 0 off to work
What's a good way to send signals to all the threads?
First off, you should have the mutex locked at the point where you call pthread_cond_wait(). It's generally a good idea to hold the mutex when you call pthread_cond_broadcast(), as well.
Second off, you should loop calling pthread_cond_wait() while the wait condition is true. Spurious wakeups can happen, and you must be able to handle them.
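In sketch form, the wait should look something like this (the `ready` flag is just a placeholder predicate, not something from your code):

#include <pthread.h>

static int ready = 0;     /* the predicate, always read and written under the mutex */

static void wait_until_ready(pthread_mutex_t *mutex, pthread_cond_t *cond)
{
    pthread_mutex_lock(mutex);             /* hold the mutex around the wait */
    while (!ready)                         /* re-check: spurious wakeups can happen */
        pthread_cond_wait(cond, mutex);    /* atomically releases and re-acquires the mutex */
    pthread_mutex_unlock(mutex);
}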
Finally, your actual problem: you are signaling all threads, but some of them aren't waiting yet when the signal is sent. Your main thread and dispatch thread are racing your worker threads: if the main thread can launch the dispatch thread, and the dispatch thread can grab the mutex and broadcast on it before the worker threads can, then those worker threads will never wake up.
You need a synchronization point prior to signaling where you wait to signal till all threads are known to be waiting for the signal. That, or you can keep signaling till you know all threads have been woken up.
In this case, you could use the mutex to protect a count of sleeping threads. Each thread grabs the mutex and increments the count. If the count matches the number of worker threads, then it's the last thread to increment the count, so it signals on another condition variable (sharing the same mutex) to tell the sleeping dispatch thread that all threads are ready. The thread then waits on the original condition, which causes it to release the mutex.
If the dispatch thread wasn't sleeping yet when the last worker thread signals on that condition, it will find that the count already matches the desired count and not bother waiting, but immediately broadcast on the shared condition to wake workers, who are now guaranteed to all be sleeping.
Anyway, here's some working source code that fleshes out your sample code and includes my solution:
#include <stdio.h>
#include <pthread.h>
#include <err.h>
static const int num_cores = 8;
struct sync {
pthread_mutex_t *mutex;
pthread_cond_t *cond_var;
int thread_no;
};
static int sleeping_count = 0;
static pthread_cond_t all_sleeping_cond = PTHREAD_COND_INITIALIZER;
void *
worker_thread(void *p_)
{
struct sync *p = p_;
// setup stuff here
pthread_mutex_lock(p->mutex);
printf("Thread %d ready for action \n", p->thread_no);
sleeping_count += 1;
if (sleeping_count >= num_cores) {
/* Last worker to go to sleep. */
pthread_cond_signal(&all_sleeping_cond);
}
int err = pthread_cond_wait(p->cond_var, p->mutex);
if (err) warnc(err, "pthread_cond_wait");
printf("Thread %d off to work \n", p->thread_no);
pthread_mutex_unlock(p->mutex);
// work stuff
return NULL;
}
void *
dispatch_thread(void *p_)
{
struct sync *p = p_;
// setup stuff
pthread_mutex_lock(p->mutex);
while (sleeping_count < num_cores) {
pthread_cond_wait(&all_sleeping_cond, p->mutex);
}
printf("Wakeup, everyone ");
int err = pthread_cond_broadcast(p->cond_var);
if (err) warnc(err, "pthread_cond_broadcast");
printf("everyone should be working \n");
pthread_mutex_unlock(p->mutex);
// more stuff
return NULL;
}
int
main(void)
{
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond_var = PTHREAD_COND_INITIALIZER;
pthread_t worker[num_cores];
struct sync info[num_cores];
for (int i = 0; i < num_cores; i++) {
struct sync *p = &info[i];
p->mutex = &mutex;
p->cond_var = &cond_var;
p->thread_no = i;
pthread_create(&worker[i], NULL, worker_thread, p);
}
pthread_t dispatcher;
struct sync p = {&mutex, &cond_var, num_cores};
pthread_create(&dispatcher, NULL, dispatch_thread, &p);
pthread_exit(NULL);
/* not reached */
return 0;
}
