Producer-Consumer Implementation - Linux

I need to implement the producer-consumer problem in my project. N consumers and M producers will be created. A producer will use a publish(v) call to deliver data v to the consumers. A consumer will use a get_data(v) call to get a copy of data v. I really don't know how to implement this. Please help me.
I am going to use C. I will create N processes for the consumers and M processes for the producers. If a producer publishes a data item, no other producer can publish until all consumers have received it. I will use semaphores and shared memory to exchange the data.
I found something that does a similar job, but it uses threads and I need processes instead. How can I change this?
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <semaphore.h>
#define BUFF_SIZE 4
#define FULL 0
#define EMPTY 0
char buffer[BUFF_SIZE];
int nextIn = 0;
int nextOut = 0;
sem_t empty_sem_mutex; //producer semaphore
sem_t full_sem_mutex; //consumer semaphore
void Put(char item)
{
int value;
sem_wait(&empty_sem_mutex); //get the mutex to fill the buffer
buffer[nextIn] = item;
nextIn = (nextIn + 1) % BUFF_SIZE;
printf("Producing %c ...nextIn %d..Ascii=%d\n",item,nextIn,item);
if(nextIn==FULL)
{
sem_post(&full_sem_mutex);
sleep(1);
}
sem_post(&empty_sem_mutex);
}
void * Producer(void *arg)
{
int i;
for(i = 0; i < 10; i++)
{
Put((char)('A'+ i % 26));
}
return NULL;
}
void Get()
{
int item;
sem_wait(&full_sem_mutex); // gain the mutex to consume from buffer
item = buffer[nextOut];
nextOut = (nextOut + 1) % BUFF_SIZE;
printf("\t...Consuming %c ...nextOut %d..Ascii=%d\n",item,nextOut,item);
if(nextOut==EMPTY) //its empty
{
sleep(1);
}
sem_post(&full_sem_mutex);
}
void * Consumer(void *arg)
{
int i;
for(i = 0; i < 10; i++)
{
Get();
}
return NULL;
}
int main()
{
pthread_t ptid,ctid;
//initialize the semaphores
sem_init(&empty_sem_mutex,0,1);
sem_init(&full_sem_mutex,0,0);
//creating producer and consumer threads
if(pthread_create(&ptid, NULL,Producer, NULL))
{
printf("\n ERROR creating thread 1");
exit(1);
}
if(pthread_create(&ctid, NULL,Consumer, NULL))
{
printf("\n ERROR creating thread 2");
exit(1);
}
if(pthread_join(ptid, NULL)) /* wait for the producer to finish */
{
printf("\n ERROR joining thread");
exit(1);
}
if(pthread_join(ctid, NULL)) /* wait for consumer to finish */
{
printf("\n ERROR joining thread");
exit(1);
}
sem_destroy(&empty_sem_mutex);
sem_destroy(&full_sem_mutex);
//exit the main thread
pthread_exit(NULL);
return 1;
}

I'd suggest you make a plan and start reading. For example:
Read about how to create and manage threads. Hint: pthread.
Think about how the threads will communicate - usually they use a common data structure. Hint: message queue.
Think about how to protect the data structure so both threads can read and write safely. Hint: mutexes.
Implement consumer and producer code.
Really, if you want more information you have to work a bit and ask more specific questions. Good luck!
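Since the question itself asks for processes rather than threads, here is a minimal sketch of that direction, assuming Linux/POSIX shared memory (shm_open/mmap) and unnamed semaphores created with pshared = 1 so they work across fork(). The name "/pc_demo", the single-slot buffer, and the loop counts are made up for illustration; the real assignment still needs N consumers, M producers, and a copy of every item delivered to each consumer. Compile with gcc -pthread (add -lrt on older glibc).

/* Sketch: one producer process, one consumer process, single-slot buffer.
 * Error handling is deliberately minimal. */
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct shared {
    sem_t empty;   /* producer may publish when > 0 */
    sem_t full;    /* consumer may read when > 0    */
    int   value;   /* the published data            */
};

int main(void)
{
    /* Create and size the shared memory object, then map it. */
    int fd = shm_open("/pc_demo", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(struct shared));
    struct shared *shm = mmap(NULL, sizeof(struct shared),
                              PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* pshared = 1: the semaphores live in shared memory and are usable
     * by every process that maps it. */
    sem_init(&shm->empty, 1, 1);
    sem_init(&shm->full,  1, 0);

    if (fork() == 0) {                 /* child: consumer, get_data() */
        for (int i = 0; i < 5; i++) {
            sem_wait(&shm->full);
            printf("consumed %d\n", shm->value);
            sem_post(&shm->empty);
        }
        _exit(0);
    }

    for (int i = 0; i < 5; i++) {      /* parent: producer, publish(i) */
        sem_wait(&shm->empty);
        shm->value = i;
        sem_post(&shm->full);
    }

    wait(NULL);
    shm_unlink("/pc_demo");
    return 0;
}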

Related

thread sync using mutex and condition variable

I'm trying to implement a multi-threaded job with a producer and a consumer. Basically, what I want to do is: when the consumer finishes the data, it notifies the producer so that the producer provides new data.
The tricky part is that, in my current implementation, the producer and the consumer both notify and wait for each other, and I don't know how to implement this part correctly.
For example, see the code below:
mutex m;
condition_variable cv;
vector<int> Q; // this is the queue the consumer will consume
vector<int> Q_buf; // this is a buffer Q into which producer will fill new data directly
// consumer
void consume() {
while (1) {
if (Q.size() == 0) { // when consumer finishes data
unique_lock<mutex> lk(m);
// how to notify producer to fill up the Q?
...
cv.wait(lk);
}
// for-loop to process the elems in Q
...
}
}
// producer
void produce() {
while (1) {
// for-loop to fill up Q_buf
...
// once Q_buf is fully filled, wait until consumer asks to give it a full Q
unique_lock<mutex> lk(m);
cv.wait(lk);
Q.swap(Q_buf); // replace the empty Q with the full Q_buf
cv.notify_one();
}
}
I'm not sure the above code using mutex and condition_variable is the right way to implement my idea;
please give me some advice!
The code incorrectly assumes that vector<int>::size() and vector<int>::swap() are atomic. They are not.
Also, spurious wakeups must be handled by a while loop (or another cv::wait overload).
Fixes:
mutex m;
condition_variable cv;
vector<int> Q;
// consumer
void consume() {
while(1) {
// Get the new elements.
vector<int> new_elements;
{
unique_lock<mutex> lk(m);
while(Q.empty())
cv.wait(lk);
new_elements.swap(Q);
}
// for-loop to process the elems in new_elements
}
}
// producer
void produce() {
while(1) {
vector<int> new_elements;
// for-loop to fill up new_elements
// publish new_elements
{
unique_lock<mutex> lk(m);
Q.insert(Q.end(), new_elements.begin(), new_elements.end());
cv.notify_one();
}
}
}
Maybe this is close to what you want to achieve. I used two condition variables so that producers and consumers can notify each other, and introduced a variable denoting whose turn it is:
#include <ctime>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
template<typename T>
class ReaderWriter {
private:
std::vector<std::thread> readers;
std::vector<std::thread> writers;
std::condition_variable readerCv, writerCv;
std::queue<T> data;
std::mutex readerMutex, writerMutex;
size_t noReaders, noWriters;
enum class Turn { WRITER_TURN, READER_TURN };
Turn turn;
void reader() {
while (1) {
{
std::unique_lock<std::mutex> lk(readerMutex);
while (turn != Turn::READER_TURN) {
readerCv.wait(lk);
}
std::cout << "Thread : " << std::this_thread::get_id() << " consumed " << data.front() << std::endl;
data.pop();
if (data.empty()) {
turn = Turn::WRITER_TURN;
writerCv.notify_one();
}
}
}
}
void writer() {
while (1) {
{
std::unique_lock<std::mutex> lk(writerMutex);
while (turn != Turn::WRITER_TURN) {
writerCv.wait(lk);
}
srand(time(NULL));
int random_number = std::rand();
data.push(random_number);
std::cout << "Thread : " << std::this_thread::get_id() << " produced " << random_number << std::endl;
turn = Turn::READER_TURN;
}
readerCv.notify_one();
}
}
public:
ReaderWriter(size_t noReadersArg, size_t noWritersArg) : noReaders(noReadersArg), noWriters(noWritersArg), turn(ReaderWriter::Turn::WRITER_TURN) {
}
void run() {
int noReadersArg = noReaders, noWritersArg = noWriters;
while (noReadersArg--) {
readers.emplace_back(&ReaderWriter::reader, this);
}
while (noWritersArg--) {
writers.emplace_back(&ReaderWriter::writer, this);
}
}
~ReaderWriter() {
for (auto& r : readers) {
r.join();
}
for (auto& w : writers) {
w.join();
}
}
};
int main() {
ReaderWriter<int> rw(5, 5);
rw.run();
}
Here's a code snippet. Since the worker threads are already synchronized, the requirement of two buffers is ruled out, so a simple queue is used to simulate the scenario:
#include "conio.h"
#include <iostream>
#include <thread>
#include <mutex>
#include <queue>
#include <atomic>
#include <condition_variable>
using namespace std;
enum state_t{ READ = 0, WRITE = 1 };
mutex mu;
condition_variable cv;
atomic<bool> running;
queue<int> buffer;
atomic<state_t> state;
void generate_test_data()
{
const int times = 5;
static int data = 0;
for (int i = 0; i < times; i++) {
data = (data + 1) % 100;
buffer.push(data);
}
}
void ProducerThread() {
while (running) {
unique_lock<mutex> lock(mu);
cv.wait(lock, []() { return !running || state == WRITE; });
if (!running) return;
generate_test_data(); //producing here
lock.unlock();
//notify consumer to start consuming
state = READ;
cv.notify_one();
}
}
void ConsumerThread() {
while (running) {
unique_lock<mutex> lock(mu);
cv.wait(lock, []() { return !running || state == READ; });
if (!running) return;
while (!buffer.empty()) {
auto data = buffer.front(); //consuming here
buffer.pop();
cout << data << " \n";
}
//notify producer to start producing
if (buffer.empty()) {
state = WRITE;
cv.notify_one();
}
}
}
int main(){
running = true;
thread producer = thread([]() { ProducerThread(); });
thread consumer = thread([]() { ConsumerThread(); });
//simulating gui thread
while (!getch()){
}
running = false;
producer.join();
consumer.join();
}
Not a complete answer, but I think two condition variables could be helpful: one named buffer_empty that the producer thread waits on, and another named buffer_filled that the consumer thread waits on. The number of mutexes, how to loop, and so on, I can't comment on, since I'm not sure about the details myself.
1. Accesses to shared variables should only be done while holding the mutex that protects them.
2. condition_variable::wait should check a condition.
3. The condition should be a shared variable protected by the mutex that you pass to condition_variable::wait.
4. The way to check the condition is to wrap the call to wait in a while loop or to use the 2-argument overload of wait (which is equivalent to the while-loop version).
Note: These rules aren't strictly necessary if you truly understand what the hardware is doing. However, these problems get complicated quickly even with simple data structures, and it will be easier to prove that your algorithm is working correctly if you follow them.
Your Q and Q_buf are shared variables. Due to Rule 1, I would prefer to have them as local variables declared in the function that uses them (consume() and produce(), respectively). There will be 1 shared buffer that will be protected by a mutex. The producer will add to its local buffer. When that buffer is full, it acquires the mutex and pushes the local buffer to the shared buffer. It then waits for the consumer to accept this buffer before producing more data.
The consumer waits for this shared buffer to "arrive", then it acquires the mutex and replaces its empty local buffer with the shared buffer. Then it signals to the producer that the buffer has been accepted so it knows to start producing again.
Semantically, I don't see a reason to use swap over move, since in every case one of the containers is empty anyway. Maybe you want to use swap because you know something about the underlying memory. You can use whichever you want and it will be fast and work the same (at least algorithmically).
This problem can be done with 1 condition variable, but it may be a little easier to think about if you use 2.
Here's what I came up with. Tested on Visual Studio 2017 (15.6.7) and GCC 5.4.0. I don't need to be credited or anything (it's such a simple piece), but legally I have to say that I offer no warranties whatsoever.
#include <thread>
#include <vector>
#include <mutex>
#include <condition_variable>
#include <chrono>
std::vector<int> g_deliveryBuffer;
bool g_quit = false;
std::mutex g_mutex; // protects g_deliveryBuffer and g_quit
std::condition_variable g_producerDeliver;
std::condition_variable g_consumerAccepted;
// consumer
void consume()
{
// local buffer
std::vector<int> consumerBuffer;
while (true)
{
if (consumerBuffer.empty())
{
std::unique_lock<std::mutex> lock(g_mutex);
while (g_deliveryBuffer.empty() && !g_quit) // if we beat the producer, wait for them to push to the deliverybuffer
g_producerDeliver.wait(lock);
if (g_quit)
break;
consumerBuffer = std::move(g_deliveryBuffer); // get the buffer
}
g_consumerAccepted.notify_one(); // notify the producer that the buffer has been accepted
// for-loop to process the elems in Q
// ...
consumerBuffer.clear();
// ...
}
}
// producer
void produce()
{
std::vector<int> producerBuffer;
while (true)
{
// for-loop to fill up Q_buf
// ...
producerBuffer = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
// ...
// once Q_buf is fully filled, wait until consumer asks to give it a full Q
{ // scope is for lock
std::unique_lock<std::mutex> lock(g_mutex);
g_deliveryBuffer = std::move(producerBuffer); // ok to push to deliverybuffer. it is guaranteed to be empty
g_producerDeliver.notify_one();
while (!g_deliveryBuffer.empty() && !g_quit)
g_consumerAccepted.wait(lock); // wait for consumer to signal for more data
if (g_quit)
break;
// We will never reach this point if the buffer is not empty.
}
}
}
int main()
{
// spawn threads
std::thread consumerThread(consume);
std::thread producerThread(produce);
// run for 5 seconds
std::this_thread::sleep_for(std::chrono::seconds(5));
// signal that it's time to quit
{
std::lock_guard<std::mutex> lock(g_mutex);
g_quit = true;
}
// one of the threads may be sleeping
g_consumerAccepted.notify_one();
g_producerDeliver.notify_one();
consumerThread.join();
producerThread.join();
return 0;
}

Try to compare 2 methods to implement bounded blocking queue

The bounded blocking queue is famous, of course. There are mainly two ways to implement it, and I am trying to understand which one is better:
Method 1: use counting semaphores
void *producer(void *arg) {
int i;
for (i = 0; i < loops; i++) {
sem_wait(&empty);
sem_wait(&mutex);
put(i);
sem_post(&mutex);
sem_post(&full);
}
}
void *consumer(void *arg) {
int i;
for (i = 0; i < loops; i++) {
sem_wait(&full);
sem_wait(&mutex);
int tmp = get();
sem_post(&mutex);
sem_post(&empty);
printf("%d\n", tmp);
}
}
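For reference, the counting-semaphore version above assumes declarations and initialization roughly like the following (a sketch; loops, put() and get() come from the original exercise and are not shown):

#include <semaphore.h>

#define MAX 20              /* buffer capacity (assumed) */

sem_t empty;                /* counts free slots               */
sem_t full;                 /* counts filled slots             */
sem_t mutex;                /* binary semaphore for the buffer */

void init_semaphores(void)
{
    sem_init(&empty, 0, MAX);   /* all slots free initially */
    sem_init(&full,  0, 0);     /* nothing to consume yet   */
    sem_init(&mutex, 0, 1);     /* "unlocked"                */
}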
Method 2: classic monitor pattern
class BoundedBuffer {
private:
int buffer[MAX];
int fill, use;
int fullEntries;
pthread_mutex_t monitor; // monitor lock
pthread_cond_t empty;
pthread_cond_t full;
public:
BoundedBuffer() {
use = fill = fullEntries = 0;
}
void produce(int element) {
pthread_mutex_lock(&monitor);
while (fullEntries == MAX)
pthread_cond_wait(&empty, &monitor);
//do something
pthread_cond_signal(&full);
pthread_mutex_unlock(&monitor);
}
int consume() {
pthread_mutex_lock(&monitor);
while (fullEntries == 0)
pthread_cond_wait(&full, &monitor);
//do something
pthread_cond_signal(&empty);
pthread_mutex_unlock(&monitor);
return tmp;
}
};
I understand the 2nd method can solve a lot of other problems. But how do these two methods compare? It looks like they can both fulfill the task.
Is there any link to a detailed comparison?
Appreciate your help.
Thanks.
The big difference between the two methods is that the first one does not use pthread-specific functions (semaphores are not part of pthreads) and, as such, is not guaranteed to work in a multithreaded environment.
In particular, semaphores do not protect memory ordering, so things written in one thread might not be readable in another. Mutexes are more suitable for a multi-threaded message queue.

POSIX semaphore with related processes running threads

I have an assignment to implement the producer-consumer problem in a convoluted way (maybe to test my understanding). The parent process should set up shared memory. The unnamed semaphores (for the empty count and the filled count) should be initialized, and a mutex should be initialized. Then two child processes are created, a producer child and a consumer child. Each child process should create a new thread which should do the job.
PS: I have read that the semaphores should be kept in shared memory, as they are shared by different processes.
Please provide some hints or suggest changes.
So far, I have done this:
struct shmarea
{
unsigned short int read;
unsigned short int max_size;
char scratch[3][50];
unsigned short int write;
sem_t sem1;// Empty slot semaphore
sem_t sem2;// Filled slot Semaphore
};
void *thread_read(void* args);
void *thread_write(void *args);
pthread_mutex_t work_mutex;
struct shmarea *shma;
int main()
{
int fork_value,i=0,shmid;
printf("Parent process id is %d\n\n",getpid());
int res1,res2;
key_t key;
char *path = "/tmp";
int id = 'S';
key = ftok(path, id);
shmid = shmget(key,getpagesize(),IPC_CREAT|0666);
printf("Parent:Shared Memory id = %d\n",id);
shma = shmat(shmid,0,0);
shma->read = 0;
shma->max_size = 3;
shma->write = 0;
pthread_t a_thread;
pthread_t b_thread;
void *thread_result1,*thread_result2;
res1 = sem_init(&(shma->sem1),1,3);//Initializing empty slot sempahore
res2 = sem_init(&(shma->sem2),1,0);//Initializing filled slot sempahore
res1 = pthread_mutex_init(&work_mutex,NULL);
while(i<2)
{
fork_value = fork();
if(fork_value > 0)
{
i++;
}
if(fork_value == 0)
{
if(i==0)
{
printf("***0***\n");
//sem_t sem1temp = shma->sem1;
char ch;int res;
res= pthread_create(&a_thread,NULL,thread_write,NULL);
}
if(i==1)
{
printf("***1***\n");
//sem_t sem2temp = shma->sem2;
int res;
char ch;
res= pthread_create(&b_thread,NULL,thread_read,NULL);
}
}
}
int wait_V,status;
res1 = pthread_join(a_thread,&thread_result1);
res2 = pthread_join(b_thread,&thread_result2);
}
void *thread_read(void *args)
{
while(1)
{
sem_wait(&(shma->sem2));
pthread_mutex_lock(&work_mutex);
printf("The buf read from consumer:%s\n",shma->scratch[shma->read]);
shma->read = (shma->read+1)%shma->max_size;
pthread_mutex_unlock(&work_mutex);
sem_post(&(shma->sem1));
}
}
void *thread_write(void *args)
{
char buf[50];
while(1)
{
sem_wait(&(shma->sem1));
pthread_mutex_lock(&work_mutex);
read(STDIN_FILENO,buf,sizeof(buf));
strcpy(shma->scratch[shma->write],buf);
shma->write = (shma->write+1)%shma->max_size;
pthread_mutex_unlock(&work_mutex);
sem_post(&(shma->sem2));
}
}
(1) Your biggest problem by far is that you have managed to write a fork bomb. Because you don't exit either child in the fork loop, each child falls through, loops around, and creates children of its own until you crash or bring the system down. You want something more like this:
while(i < 2)
{
fork_value = fork();
if(fork_value > 0)
i++;
if(fork_value == 0)
{
if(i==0)
{
printf("0 child is pid %d\n", getpid());
int res;
res = pthread_create(&a_thread,NULL,thread_write,NULL);
res = pthread_join(a_thread,&thread_result1);
exit(0);
}
if(i==1)
{
printf("1 child is pid %d\n", getpid());
int res;
res = pthread_create(&b_thread,NULL,thread_read,NULL);
res = pthread_join(b_thread,&thread_result2);
exit(0);
}
}
}
for (i = 0; i < 2; ++i)
wait(NULL);
Notice the wait on the children, which you neglected.
(2) Always check your return codes. They are like safety belts: a bit of a drag, but so helpful when you crash. (Yes, I didn't take my own advice here, but you should; there's a small sketch below.)
(3) These names are awful.
unsigned short int read;
unsigned short int write;
Stay away from naming variables after system calls. It's confusing and just asking for trouble.
(4) Terminology-wise, processes with a common ancestor, like these, are related. The parent can open shared memory and other resources and pass them on to the children. Unrelated processes would be, for example, multiple instances of a program launched from different terminals. They can share resources, but not in the "inherited" way forked processes do.
It's late and I didn't get around to looking at what you are doing with the threads and such, but this should get you started.
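As a small illustration of point (2), one way to check those return codes (check() is a hypothetical helper, not part of the assignment code; pthread functions return the error number directly rather than setting errno):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Abort with a readable message if a pthread call failed. */
static void check(int err, const char *what)
{
    if (err != 0) {
        fprintf(stderr, "%s failed: %s\n", what, strerror(err));
        exit(EXIT_FAILURE);
    }
}

/* Usage, e.g. inside the child that writes:
 *   check(pthread_create(&a_thread, NULL, thread_write, NULL), "pthread_create");
 *   check(pthread_join(a_thread, &thread_result1), "pthread_join");
 */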

Differences between POSIX threads on OSX and LINUX?

Can anyone shed light on why, when the code below is compiled and run on OSX, the 'bartender' thread skips through sem_wait() in what seems like a random manner, yet when compiled and run on a Linux machine, sem_wait() holds the thread until the corresponding call to sem_post() is made, as would be expected?
I am currently learning not only POSIX threads but concurrency as a whole, so absolutely any comments, tips, and insights are warmly welcomed...
Thanks in advance.
#include <stdio.h>
#include <stdlib.h>
#include <semaphore.h>
#include <fcntl.h>
#include <unistd.h>
#include <pthread.h>
#include <errno.h>
//using namespace std;
#define NSTUDENTS 30
#define MAX_SERVINGS 100
void* student(void* ptr);
void get_serving(int id);
void drink_and_think();
void* bartender(void* ptr);
void refill_barrel();
// This shared variable gives the number of servings currently in the barrel
int servings = 10;
// Define here your semaphores and any other shared data
sem_t *mutex_stu;
sem_t *mutex_bar;
int main() {
static const char *semname1 = "Semaphore1";
static const char *semname2 = "Semaphore2";
pthread_t tid;
mutex_stu = sem_open(semname1, O_CREAT, 0777, 0);
if (mutex_stu == SEM_FAILED)
{
fprintf(stderr, "%s\n", "ERROR creating semaphore semname1");
exit(EXIT_FAILURE);
}
mutex_bar = sem_open(semname2, O_CREAT, 0777, 1);
if (mutex_bar == SEM_FAILED)
{
fprintf(stderr, "%s\n", "ERROR creating semaphore semname2");
exit(EXIT_FAILURE);
}
pthread_create(&tid, NULL, bartender, &tid);
for(int i=0; i < NSTUDENTS; ++i) {
pthread_create(&tid, NULL, student, &tid);
}
pthread_join(tid, NULL);
sem_unlink(semname1);
sem_unlink(semname2);
printf("Exiting the program...\n");
}
//Called by a student process. Do not modify this.
void drink_and_think() {
// Sleep time in milliseconds
int st = rand() % 10;
sleep(st);
}
// Called by a student process. Do not modify this.
void get_serving(int id) {
if (servings > 0) {
servings -= 1;
} else {
servings = 0;
}
printf("ID %d got a serving. %d left\n", id, servings);
}
// Called by the bartender process.
void refill_barrel()
{
servings = 1 + rand() % 10;
printf("Barrel refilled up to -> %d\n", servings);
}
//-- Implement a synchronized version of the student
void* student(void* ptr) {
int id = *(int*)ptr;
printf("Started student %d\n", id);
while(1) {
sem_wait(mutex_stu);
if(servings > 0) {
get_serving(id);
} else {
sem_post(mutex_bar);
continue;
}
sem_post(mutex_stu);
drink_and_think();
}
return NULL;
}
//-- Implement a synchronized version of the bartender
void* bartender(void* ptr) {
int id = *(int*)ptr;
printf("Started bartender %d\n", id);
//sleep(5);
while(1) {
sem_wait(mutex_bar);
if(servings <= 0) {
refill_barrel();
} else {
printf("Bar skipped sem_wait()!\n");
}
sem_post(mutex_stu);
}
return NULL;
}
The first time you run the program, you're creating named semaphores with initial values, but since your threads never exit (they're infinite loops), you never get to the sem_unlink calls to delete those semaphores. If you kill the program (with ctrl-C or any other way), the semaphores will still exist in whatever state they are in. So if you run the program again, the sem_open calls will succeed (because you don't use O_EXCL), but they won't reset the semaphore value or state, so they might be in some odd state.
So you should make sure to call sem_unlink when the program STARTS, before calling sem_open. Better yet, don't use named semaphores at all -- use sem_init to initialize a couple of unnamed semaphores instead.
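A minimal sketch of the first suggestion, reusing the names from the question: unlink any stale semaphore before creating it, and add O_EXCL so a leftover can never be silently reused.

#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>

sem_t *mutex_stu;

int main(void)
{
    static const char *semname1 = "Semaphore1";

    /* Remove any semaphore left over from a previous (killed) run, so the
     * sem_open() below really starts from the requested initial value.
     * An ENOENT failure here just means there was nothing to clean up. */
    sem_unlink(semname1);

    mutex_stu = sem_open(semname1, O_CREAT | O_EXCL, 0777, 0);
    if (mutex_stu == SEM_FAILED) {
        perror("sem_open");
        exit(EXIT_FAILURE);
    }

    /* ... create the threads, and sem_unlink() again on the way out ... */
    return 0;
}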

Multi thread Dead Lock - Producer & Customer module using pthread lib

Recently I've been investigating the pthread multi-threading library and doing some examples.
I tried to write a Producer-Customer module: there's a queue to store the Producer's products, which can be taken by the Customer.
I set the queue MAX-SIZE to 20. When the queue is full, the Producer thread waits until the Customer thread consumes an item and notifies the Producer that it can produce again. Likewise, when the queue is empty, the Customer waits until the Producer thread produces a new item and notifies it. :-)
When I set the Customer thread to consume faster than the Producer produces, it works fine and the log output is what I expected. But when I set the Producer thread to produce faster than the Customer consumes, it eventually seems to cause a deadlock :-(
I don't know the reason; can anyone kindly read my code and give me some tips on how to modify it?
Thanks!
#include "commons.h"
typedef struct tagNode {
struct tagNode *pNext;
char *pContent;
}NodeSt, *PNodeSt;
typedef struct {
size_t mNodeNum;
size_t mNodeIdx;
PNodeSt mRootNode;
}WorkQueue;
#define WORK_QUEUE_MAX 20
static pthread_cond_t g_adder_cond = PTHREAD_COND_INITIALIZER;
static pthread_mutex_t g_adder_mutex = PTHREAD_MUTEX_INITIALIZER;
static WorkQueue g_work_queue = {0};
//------------------------------------------------------------------------
void *customer_thread_runFunc(void *usrdat){
for( ; ; ) {
pthread_mutex_lock(&g_adder_mutex);{
while( g_work_queue.mNodeNum == 0 ) {
pthread_cond_wait(&g_adder_cond, &g_adder_mutex);
}
/********************** CONSUME NEW PRODUCT ***********************/
g_work_queue.mNodeNum --;
if( g_work_queue.mRootNode->pNext != NULL ) {
PNodeSt pTempNode = g_work_queue.mRootNode->pNext;
free( g_work_queue.mRootNode->pContent );
free( g_work_queue.mRootNode );
g_work_queue.mRootNode = pTempNode;
} else {
free( g_work_queue.mRootNode->pContent );
free( g_work_queue.mRootNode );
g_work_queue.mRootNode = NULL;
}
/********************** CONSUME PRODUCT END ***********************/
// Notify Producer Thread
pthread_cond_signal(&g_adder_cond);
}pthread_mutex_unlock(&g_adder_mutex);
// PAUSE FOR 300ms
usleep(300);
}
return NULL;
}
//------------------------------------------------------------------------
void *productor_thread_runFunc( void *usrdat ) {
for( ; ; ) {
pthread_mutex_lock(&g_adder_mutex); {
char tempStr[64];
PNodeSt pNodeSt = g_work_queue.mRootNode;
while( g_work_queue.mNodeNum >= WORK_QUEUE_MAX ) {
pthread_cond_wait(&g_adder_cond, &g_adder_mutex);
}
/********************** PRODUCE NEW PRODUCT ***********************/
g_work_queue.mNodeNum ++;
g_work_queue.mNodeIdx ++;
if( pNodeSt != NULL ) {
for( ; pNodeSt->pNext != NULL; pNodeSt = pNodeSt->pNext );
pNodeSt->pNext = malloc(sizeof(NodeSt));
memset(pNodeSt->pNext, 0, sizeof(NodeSt));
sprintf( tempStr, "production id: %d", g_work_queue.mNodeIdx);
pNodeSt->pNext->pContent = strdup(tempStr);
} else {
g_work_queue.mRootNode = malloc(sizeof(NodeSt));
memset(g_work_queue.mRootNode, 0, sizeof(NodeSt));
sprintf( tempStr, "production id: %d", g_work_queue.mNodeIdx);
g_work_queue.mRootNode->pContent = strdup(tempStr);
}
/********************** PRODUCE PRODUCT END ***********************/
// Notify Customer Thread
pthread_cond_signal(&g_adder_cond);
}pthread_mutex_unlock(&g_adder_mutex);
// PAUSE FOR 150ms, faster than Customer Thread
usleep(150);
}
return NULL;
}
//------------------------------------------------------------------------
int main(void) {
pthread_t pt1, pt3;
pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_create(&pt1, &attr, customer_thread_runFunc, NULL);
pthread_create(&pt3, &attr, productor_thread_runFunc, NULL);
pthread_join(pt1, NULL);
pthread_join(pt3, NULL);
printf("MAIN - main thread finish!\n");
return EXIT_SUCCESS;
}
Your producer is waiting on the same condition variable as your consumer? That is definitely a source of trouble. Think about your code conceptually: what preconditions does the producer need before producing? As you mentioned, the buffer needs to have space.
I did not look in detail, but you probably need an additional condition variable used by the producer (not the same one as the consumer). The producer waits only if the queue is full. The consumer signals every time it successfully retrieves something from the queue.
EDIT: Reading the pthread documentation, one mutex can be used with two condition variables.
IDEA OF PSEUDOCODE :)
Mutex mqueue
Condition cprod, ccons

produce()
    mqueue.lock
    while the queue is full
        cprod.wait(mqueue)
    end
    do the production on queue
    ccons.signal
    mqueue.unlock
end produce

consume()
    mqueue.lock
    while the queue is empty
        ccons.wait(mqueue)
    end
    do the consumption on the queue
    cprod.signal
    mqueue.unlock
end consume
Preferably, signal while you hold the lock. Here I don't think the order makes a difference.
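Translated to the pthread API, that pseudocode might look roughly like this (a sketch only; produce_one()/consume_one() and the g_queue_size counter stand in for the linked-list handling in the original code):

#include <pthread.h>
#include <stddef.h>

#define WORK_QUEUE_MAX 20

static pthread_mutex_t g_mutex     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  g_not_full  = PTHREAD_COND_INITIALIZER;  /* producer waits on this */
static pthread_cond_t  g_not_empty = PTHREAD_COND_INITIALIZER;  /* consumer waits on this */
static size_t          g_queue_size = 0;

void produce_one(void)
{
    pthread_mutex_lock(&g_mutex);
    while (g_queue_size >= WORK_QUEUE_MAX)          /* queue full: wait */
        pthread_cond_wait(&g_not_full, &g_mutex);
    /* ... append a node to the queue here ... */
    g_queue_size++;
    pthread_cond_signal(&g_not_empty);              /* wake a waiting consumer */
    pthread_mutex_unlock(&g_mutex);
}

void consume_one(void)
{
    pthread_mutex_lock(&g_mutex);
    while (g_queue_size == 0)                       /* queue empty: wait */
        pthread_cond_wait(&g_not_empty, &g_mutex);
    /* ... detach and free the head node here ... */
    g_queue_size--;
    pthread_cond_signal(&g_not_full);               /* wake a waiting producer */
    pthread_mutex_unlock(&g_mutex);
}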
