Mutex understanding - multithreading

Can somebody explain to me why this code is bad:
int data;

void* worker(void* arg __attribute__((unused))) {
    pthread_mutex_t m;
    pthread_mutex_init(&m, NULL);
    for (int i = 0; i < N; i++) {
        pthread_mutex_lock(&m);
        data++;
        pthread_mutex_unlock(&m);
    }
    pthread_mutex_destroy(&m);
    return NULL;
}
And this is ok:
int data;
pthread_mutex_t m;

void* worker(void* arg __attribute__((unused))) {
    for (int i = 0; i < N; i++) {
        pthread_mutex_lock(&m);
        data++;
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

// ...
pthread_mutex_init(&m, NULL);
// ...
pthread_mutex_destroy(&m);
// ...
Do I always need to declare mutex variables globally?

The problem with a local mutex is that each thread gets its own, independent copy of the mutex. When a thread locks it before touching the globally accessible data, the data is not actually protected, because every other thread can lock and unlock its own local mutex at the same time. That defeats the whole point of mutual exclusion. The mutex does not have to be a global variable, but it must be a single mutex object shared by every thread that touches the shared data.
I also suggest thinking about exception safety. In this particular example you are only doing data++ between the lock and the unlock, but what if you later add a statement before pthread_mutex_unlock(&m); that can throw an exception? The mutex would never be released. Read about RAII.
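If you are compiling this as C++, a scoped lock is the usual way to get that exception safety. A minimal sketch using the standard library types std::mutex, std::lock_guard and std::thread (not part of the original snippet; 100000 stands in for N):

#include <iostream>
#include <mutex>
#include <thread>

int data = 0;
std::mutex m;  // one mutex object shared by every thread, like the global pthread_mutex_t

void worker() {
    for (int i = 0; i < 100000; i++) {          // 100000 stands in for N
        std::lock_guard<std::mutex> guard(m);   // locks m here
        data++;                                 // even if this line threw an exception,
    }                                           // the guard's destructor unlocks m
}

int main() {
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    std::cout << data << "\n";                  // prints 200000
}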

Related

Deadlock in the mutex, condition variable code?

I'm reading the book Modern Operating Systems by A. S. Tanenbaum, and it gives the example below to explain condition variables. It looks to me like there is a deadlock, and I'm not sure what I'm missing.
Let's assume the consumer thread starts first. Right after the_mutex is locked, the consumer thread blocks waiting on the condition variable condc.
If the producer runs at this point, the_mutex will still be locked, because the consumer never releases it, so the producer will also block.
This looks to me like a textbook deadlock. Did I miss something here? Thanks.
#include <stdio.h>
#include <pthread.h>

#define MAX 10000000000 /* Numbers to produce */

pthread_mutex_t the_mutex;
pthread_cond_t condc, condp;
int buffer = 0;

void* consumer(void *ptr) {
    int i;
    for (i = 1; i <= MAX; i++) {
        pthread_mutex_lock(&the_mutex); /* lock mutex */
        /* thread is blocked waiting for condc */
        while (buffer == 0) pthread_cond_wait(&condc, &the_mutex);
        buffer = 0;
        pthread_cond_signal(&condp);
        pthread_mutex_unlock(&the_mutex);
    }
    pthread_exit(0);
}

void* producer(void *ptr) {
    int i;
    for (i = 1; i <= MAX; i++) {
        pthread_mutex_lock(&the_mutex); /* lock mutex */
        while (buffer != 0) pthread_cond_wait(&condp, &the_mutex);
        buffer = i;
        pthread_cond_signal(&condc);
        pthread_mutex_unlock(&the_mutex);
    }
    pthread_exit(0);
}

int main(int argc, char **argv) {
    pthread_t pro, con;
    // Simplified main function; ignores init and destroy for simplicity
    // Create the threads
    pthread_create(&con, NULL, consumer, NULL);
    pthread_create(&pro, NULL, producer, NULL);
}
When you wait on a condition variable, the associated mutex is released for the duration of the wait (that is why you pass the mutex to pthread_cond_wait).
When pthread_cond_wait returns, the mutex is always locked again.
So there is no deadlock: while the consumer is blocked inside pthread_cond_wait, the_mutex is unlocked, which lets the producer acquire it, set buffer, signal condc, and release it; the consumer then re-acquires the_mutex and returns from the wait.
Keeping this in mind, you can follow the logic of the example.
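A minimal self-contained sketch of that contract (hypothetical flag and names, not the book's example): the second thread can only ever acquire the mutex because the waiting thread released it inside pthread_cond_wait.

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
int ready = 0;

void* setter(void* arg) {
    pthread_mutex_lock(&m);      // succeeds even though main() "holds" m,
                                 // because pthread_cond_wait released it
    ready = 1;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&m);
    return NULL;
}

int main() {
    pthread_t t;
    pthread_mutex_lock(&m);
    pthread_create(&t, NULL, setter, NULL);
    while (ready == 0)
        pthread_cond_wait(&cv, &m);   // atomically unlocks m and blocks;
                                      // re-locks m before returning
    printf("ready = %d\n", ready);
    pthread_mutex_unlock(&m);
    pthread_join(t, NULL);
    return 0;
}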

Try to compare 2 methods to implement bounded blocking queue

The bounded blocking queue is a classic problem, of course. There are mainly two ways to implement it, and I am trying to understand which one is better:
Method 1: use counting semaphore
void *producer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        sem_wait(&empty);
        sem_wait(&mutex);
        put(i);
        sem_post(&mutex);
        sem_post(&full);
    }
}

void *consumer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        sem_wait(&full);
        sem_wait(&mutex);
        int tmp = get();
        sem_post(&mutex);
        sem_post(&empty);
        printf("%d\n", tmp);
    }
}
Method 2: classic monitor pattern
class BoundedBuffer {
private:
    int buffer[MAX];
    int fill, use;
    int fullEntries;
    pthread_mutex_t monitor; // monitor lock
    pthread_cond_t empty;
    pthread_cond_t full;

public:
    BoundedBuffer() {
        use = fill = fullEntries = 0;
    }

    void produce(int element) {
        pthread_mutex_lock(&monitor);
        while (fullEntries == MAX)
            pthread_cond_wait(&empty, &monitor);
        // do something
        pthread_cond_signal(&full);
        pthread_mutex_unlock(&monitor);
    }

    int consume() {
        pthread_mutex_lock(&monitor);
        while (fullEntries == 0)
            pthread_cond_wait(&full, &monitor);
        // do something
        pthread_cond_signal(&empty);
        pthread_mutex_unlock(&monitor);
        return tmp;
    }
};
I understand the second method can solve a lot of other problems as well, but how do these two methods compare? It looks like they can both do the job.
Is there a link to a detailed comparison?
Appreciate your help. Thanks.
The big difference between the two methods is that the first one does not use pthread-specific functions (semaphores are not part of pthreads) and, as such, is not guaranteed to work in a multithreaded environment.
In particular, semaphores do not protect memory ordering, so data written in one thread might not be visible to another. Mutexes are suitable for a multi-thread message queue.
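For reference, Method 1 omits the declarations and initial counts of the three semaphores. In the usual version of this pattern (assumed here, with MAX standing for the buffer capacity) they would be set up roughly like this:

#include <semaphore.h>

#define MAX 10        // assumed buffer capacity

sem_t empty;          // counts free slots
sem_t full;           // counts filled slots
sem_t mutex;          // binary semaphore guarding put()/get()

void init_semaphores(void) {
    sem_init(&empty, 0, MAX); // all slots start empty
    sem_init(&full,  0, 0);   // no slots are filled yet
    sem_init(&mutex, 0, 1);   // one thread at a time in the critical section
}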

Do I need a mutex on a vector of pointers?

Here is a simplified version of my situation:
void AppendToVector(std::vector<int>* my_vector) {
    for (int i = 0; i < 100; i++) {
        my_vector->push_back(i);
    }
}

void CreateVectors(const int num_threads) {
    std::vector<std::vector<int>* > my_vector_of_pointers(10);
    ThreadPool pool(num_threads);
    for (int i = 0; i < 10; i++) {
        my_vector_of_pointers[i] = new std::vector<int>();
        pool.AddTask(AppendToVector,
                     &my_vector_of_pointers[i]);
    }
}
My question is whether I need to put a mutex lock in AppendToVector when running this with multiple threads. My intuition tells me I do not, because there is no possibility of two threads accessing the same data, but I am not fully confident.
Every thread appends to a different std::vector (inside the AppendToVector function), so there is no need for locking in your case. If you change your code so that more than one thread accesses the same vector, then you will need a lock. Don't be confused by the fact that the std::vectors you pass to AppendToVector are themselves elements of the outer vector; what matters is that the threads here manipulate completely different (not shared) memory.
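If you did change the code so that several threads append to the same vector, a lock would be required. A minimal sketch of that case, using std::mutex and std::thread rather than the ThreadPool from the question (whose interface isn't shown):

#include <mutex>
#include <thread>
#include <vector>

std::vector<int> shared_vector;
std::mutex vector_mutex;

void AppendShared() {
    for (int i = 0; i < 100; i++) {
        std::lock_guard<std::mutex> guard(vector_mutex); // serialize push_back
        shared_vector.push_back(i);
    }
}

int main() {
    std::thread t1(AppendShared), t2(AppendShared);
    t1.join();
    t2.join();
    // shared_vector now holds 200 elements
}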

POSIX semaphore with related processes running threads

I have an assignment to implement the producer-consumer problem in a convoluted way (maybe to test my understanding). The parent process should set up a shared memory segment. The unnamed semaphores (for the empty count and the filled count) should be initialized, and a mutex should be initialized. Then two child processes are created, a producer child and a consumer child. Each child process should create a new thread which should do the job.
PS: I have read that the semaphores should be kept in shared memory, since they are shared by different processes.
Please provide some hints or suggest changes.
So far, I have done this:
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <pthread.h>
#include <semaphore.h>
#include <sys/ipc.h>
#include <sys/shm.h>

struct shmarea
{
    unsigned short int read;
    unsigned short int max_size;
    char scratch[3][50];
    unsigned short int write;
    sem_t sem1; // Empty slot semaphore
    sem_t sem2; // Filled slot semaphore
};

void *thread_read(void *args);
void *thread_write(void *args);

pthread_mutex_t work_mutex;
struct shmarea *shma;

int main()
{
    int fork_value, i = 0, shmid;
    printf("Parent process id is %d\n\n", getpid());
    int res1, res2;
    key_t key;
    char *path = "/tmp";
    int id = 'S';
    key = ftok(path, id);
    shmid = shmget(key, getpagesize(), IPC_CREAT | 0666);
    printf("Parent: Shared Memory id = %d\n", id);
    shma = shmat(shmid, 0, 0);
    shma->read = 0;
    shma->max_size = 3;
    shma->write = 0;
    pthread_t a_thread;
    pthread_t b_thread;
    void *thread_result1, *thread_result2;
    res1 = sem_init(&(shma->sem1), 1, 3); // Initializing empty slot semaphore
    res2 = sem_init(&(shma->sem2), 1, 0); // Initializing filled slot semaphore
    res1 = pthread_mutex_init(&work_mutex, NULL);
    while (i < 2)
    {
        fork_value = fork();
        if (fork_value > 0)
        {
            i++;
        }
        if (fork_value == 0)
        {
            if (i == 0)
            {
                printf("***0***\n");
                //sem_t sem1temp = shma->sem1;
                char ch; int res;
                res = pthread_create(&a_thread, NULL, thread_write, NULL);
            }
            if (i == 1)
            {
                printf("***1***\n");
                //sem_t sem2temp = shma->sem2;
                int res;
                char ch;
                res = pthread_create(&b_thread, NULL, thread_read, NULL);
            }
        }
    }
    int wait_V, status;
    res1 = pthread_join(a_thread, &thread_result1);
    res2 = pthread_join(b_thread, &thread_result2);
}

void *thread_read(void *args)
{
    while (1)
    {
        sem_wait(&(shma->sem2));
        pthread_mutex_lock(&work_mutex);
        printf("The buf read from consumer:%s\n", shma->scratch[shma->read]);
        shma->read = (shma->read + 1) % shma->max_size;
        pthread_mutex_unlock(&work_mutex);
        sem_post(&(shma->sem1));
    }
}

void *thread_write(void *args)
{
    char buf[50];
    while (1)
    {
        sem_wait(&(shma->sem1));
        pthread_mutex_lock(&work_mutex);
        read(STDIN_FILENO, buf, sizeof(buf));
        strcpy(shma->scratch[shma->write], buf);
        shma->write = (shma->write + 1) % shma->max_size;
        pthread_mutex_unlock(&work_mutex);
        sem_post(&(shma->sem2));
    }
}
(1) Your biggest problem by far is that you have managed to write a fork bomb. Because you don't exit either child in the fork loop, each child falls through, loops around, and creates children of its own until you crash or bring the system down. You want something more like this:
while (i < 2)
{
    fork_value = fork();
    if (fork_value > 0)
        i++;
    if (fork_value == 0)
    {
        if (i == 0)
        {
            printf("0 child is pid %d\n", getpid());
            int res;
            res = pthread_create(&a_thread, NULL, thread_write, NULL);
            res = pthread_join(a_thread, &thread_result1);
            exit(0);
        }
        if (i == 1)
        {
            printf("1 child is pid %d\n", getpid());
            int res;
            res = pthread_create(&b_thread, NULL, thread_read, NULL);
            res = pthread_join(b_thread, &thread_result2);
            exit(0);
        }
    }
}

for (i = 0; i < 2; ++i)
    wait(NULL);
Notice the wait on the children which you neglected.
(2) Always check your return codes. They are like safety belts: a bit of a drag, but very helpful when you crash. (Yes, I didn't take my own advice here, but you should; there is a short sketch after this answer.)
(3) These names are awful.
unsigned short int read;
unsigned short int write;
Stay away from naming variables after system calls. It's confusing and just asking for trouble.
(4) Terminology-wise, processes with a common ancestor, like these, are related. The parent can open shared memory and other resources and pass them on to the children. Unrelated processes would be, for example, multiple instances of a program launched from different terminals. They can share resources, but not in the "inherited" way forked processes do.
It's late and I didn't get around to looking at what you are doing with the threads and such, but this should get you started.
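To illustrate point (2), here is one common pattern for checking return codes (the check helper is hypothetical, not from the original code); note that pthread calls return an error number rather than setting errno:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>

// Hypothetical helper: report a failed pthread-style call and bail out.
static void check(int rc, const char *what) {
    if (rc != 0) {
        fprintf(stderr, "%s failed: %s\n", what, strerror(rc));
        exit(EXIT_FAILURE);
    }
}

static void *worker(void *arg) { return NULL; }

int main(void) {
    pthread_t t;
    check(pthread_create(&t, NULL, worker, NULL), "pthread_create");
    check(pthread_join(t, NULL), "pthread_join");
    return 0;
}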

Implementing a barrier issue

static int barrier_counter = 0;
pthread_cond_t condition;
pthread_mutex_t local_lock;

int init_barrier(int n) {
    if (n < 0) {
        return -1;
    }
    barrier_counter = n;
    pthread_mutex_init(&local_lock, NULL);
    pthread_cond_init(&condition, NULL);
    return 0;
}

int barrier() {
    pthread_mutex_trylock(&local_lock);
    barrier_counter--;
    printf("inside barrier before the while, n is: %d\n", barrier_counter);
    while (0 < barrier_counter) {
        printf("inside the barrier, n is: %d\n", barrier_counter);
        pthread_cond_wait(&condition, &local_lock);
    }
    printf("before broadcast: %d\n", barrier_counter);
    pthread_cond_broadcast(&condition);
    printf("done the broadcast when n is: %d\n", barrier_counter);
    pthread_mutex_unlock(&local_lock);
    return 0;
}
We tried to implement a barrier, but somehow we never get to the broadcast and the threads finish anyway.
We don't even get a deadlock; a few threads are actually waiting, but not as many as we specified in init_barrier.
pthread_cond_wait() requires a locked mutex. Your call to pthread_mutex_trylock() might fail, in which case you continue without having acquired the mutex.
I suggest you use pthread_mutex_lock() instead.
Also, unless you're looking to re-implement barriers, you should use pthread_barrier_init() and pthread_barrier_wait().
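For completeness, a minimal sketch of the standard barrier API mentioned above (thread count and names chosen just for illustration):

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

pthread_barrier_t barrier;

void* worker(void* arg) {
    long id = (long)arg;
    printf("thread %ld: before the barrier\n", id);
    pthread_barrier_wait(&barrier);   // blocks until NUM_THREADS threads have arrived
    printf("thread %ld: after the barrier\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];
    pthread_barrier_init(&barrier, NULL, NUM_THREADS);
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void*)i);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}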
