Suppose I have two threads, T1 and T2.
Thread T1 calls t1_callback() and thread T2 calls t2_callback().
T some_global_data;
pthread_mutex_t mutex;
void t1_callback()
{
    pthread_mutex_lock(&mutex);
    update_global_data(some_global_data);
    pthread_mutex_unlock(&mutex);
}

void t2_callback()
{
    pthread_mutex_lock(&mutex);
    update_global_data(some_global_data);
    pthread_mutex_unlock(&mutex);
}
Case
t1_callback() is holding the lock between time t1 and t2.
During this interval (t1 - t2), suppose t2_callback() has been called, say, 10 times.
Question
How many times will t2_callback() be executed once t1_callback() releases the mutex?
If a thread calls t2_callback() while another thread is executing t1_callback() and holding the lock, it (the thread running t2_callback()) will be suspended in pthread_mutex_lock() until the lock is released. So it doesn't make sense to talk about one thread calling t2_callback() 10 times while the lock is held.
If 10 different threads all call t2_callback() in that time, they'll all be suspended in pthread_mutex_lock(), and they will each proceed one at a time when the lock is released.
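For illustration, here is a minimal, self-contained sketch of that situation (the thread bodies and the one-second delay are my own additions, not part of the question). In a typical run, the first thread grabs the mutex before any of the others, and the ten "got the lock" lines then appear one at a time after it unlocks. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *t1_callback(void *arg)
{
    pthread_mutex_lock(&mutex);
    sleep(1);                     /* pretend update_global_data() is slow    */
    pthread_mutex_unlock(&mutex);
    return NULL;
}

static void *t2_callback(void *arg)
{
    pthread_mutex_lock(&mutex);   /* blocks here while t1_callback holds it  */
    printf("t2_callback got the lock in thread %lu\n",
           (unsigned long)pthread_self());
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2[10];
    pthread_create(&t1, NULL, t1_callback, NULL);
    for (int i = 0; i < 10; i++)
        pthread_create(&t2[i], NULL, t2_callback, NULL);
    pthread_join(t1, NULL);
    for (int i = 0; i < 10; i++)
        pthread_join(t2[i], NULL);
    return 0;
}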
Related
On this request
ssize_t foo_read(struct file *filp, char *buf, size_t count, loff_t *ppos)
{
    foo_dev_t *foo_dev = filp->private_data;

    if (down_interruptible(&foo_dev->sem))
        return -ERESTARTSYS;
    foo_dev->intr = 0;
    outb(DEV_FOO_READ, DEV_FOO_CONTROL_PORT);
    wait_event_interruptible(foo_dev->wait, (foo_dev->intr == 1));
    if (put_user(foo_dev->data, buf)) {
        up(&foo_dev->sem);          /* release the semaphore on the error path too */
        return -EFAULT;
    }
    up(&foo_dev->sem);
    return 1;
}
With this completion
irqreturn_t foo_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
    foo_dev_t *foo = dev_id;        /* device descriptor passed to request_irq() */

    foo->data = inb(DEV_FOO_DATA_PORT);
    foo->intr = 1;
    wake_up_interruptible(&foo->wait);
    return IRQ_HANDLED;
}
Assuming foo_dev->sem is initially 1, only one thread at a time is allowed to execute the section after down_interruptible(&foo_dev->sem), and it makes sense for threads waiting on that semaphore to be put in a queue. (As I understand it, making foo_dev->sem greater than one would be a problem in this code.)
So if only one thread ever passes, what's the use of the foo_dev->wait queue? Isn't it possible to suspend the current thread, save its pointer in a global *curr, and wake it up when its request completes?
Yes, it is possible to put a single thread to sleep (using set_current_state() and schedule()) and resume it later (using wake_up_process()).
But this requires writing some code to check the wakeup condition and to handle the possible absence of a thread to wake up.
Wait queues provide ready-made functions and macros for waiting on a condition and waking it up later, so the resulting code becomes much shorter: the single macro wait_event_interruptible() handles checking for the event and putting the thread to sleep, and the single macro wake_up_interruptible() handles resuming a possibly absent waiter.
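For comparison, here is a rough sketch of what that manual approach could look like in the foo driver above (my own fragment, with hypothetical names foo_task, foo_wait_for_intr() and foo_wake_reader(); memory barriers and signal handling are omitted). It shows the boilerplate that wait_event_interruptible()/wake_up_interruptible() hide.

#include <linux/sched.h>

static struct task_struct *foo_task;   /* hypothetical global: the single sleeping reader, NULL when absent */

/* In foo_read(), instead of wait_event_interruptible(): */
static void foo_wait_for_intr(foo_dev_t *foo_dev)
{
    foo_task = current;                       /* remember who is going to sleep      */
    set_current_state(TASK_INTERRUPTIBLE);
    while (foo_dev->intr != 1) {              /* re-check to avoid missed wakeups    */
        schedule();
        set_current_state(TASK_INTERRUPTIBLE);
    }
    __set_current_state(TASK_RUNNING);
    foo_task = NULL;
}

/* In foo_interrupt(), instead of wake_up_interruptible(): */
static void foo_wake_reader(void)
{
    if (foo_task)                             /* the waiter may be absent            */
        wake_up_process(foo_task);
}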
I want to ask something basic, but it really bothers me a lot.
I'm currently studying pthread mutexes for system programming, and as far as I know, when 'pthread_mutex_lock' is called, only the current thread is executed, not any others. Can I think of it like this?
And when it comes to 'pthread_mutex_unlock': when this function is called, does the current thread pass the lock permission to others and wait until some other thread calls the unlock function again? Or does every thread, including the current one, execute simultaneously until one of them calls the lock function?
Here's the code I was studying:
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h>
enum { STATE_A, STATE_B } state = STATE_A;
pthread_cond_t condA = PTHREAD_COND_INITIALIZER;
pthread_cond_t condB = PTHREAD_COND_INITIALIZER;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
void *threadA(void *arg)
{
    printf("A start\n");
    int i = 0, loopNum;
    while (i < 3)
    {
        pthread_mutex_lock(&mutex);
        while (state != STATE_A)
        {
            printf("a\n");
            pthread_cond_wait(&condA, &mutex);
        }
        pthread_mutex_unlock(&mutex);
        pthread_cond_signal(&condB);
        for (loopNum = 1; loopNum <= 5; loopNum++)
        {
            printf("Hello %d\n", loopNum);
        }
        pthread_mutex_lock(&mutex);
        state = STATE_B;
        printf("STATE_B\n");
        pthread_cond_signal(&condB);
        pthread_mutex_unlock(&mutex);
        i++;
    }
    return 0;
}

void *threadB(void *arg)
{
    printf("B start\n");
    int n = 0;
    while (n < 3)
    {
        pthread_mutex_lock(&mutex);
        while (state != STATE_B)
        {
            printf("b\n");
            pthread_cond_wait(&condB, &mutex);
        }
        pthread_mutex_unlock(&mutex);
        printf("Goodbye\n");
        pthread_mutex_lock(&mutex);
        state = STATE_A;
        printf("STATE_A\n");
        pthread_cond_signal(&condA);
        pthread_mutex_unlock(&mutex);
        n++;
    }
    return 0;
}

int main(int argc, char *argv[])
{
    pthread_t a, b;
    pthread_create(&a, NULL, threadA, NULL);
    pthread_create(&b, NULL, threadB, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
I modified some parts of the original code, such as adding printf("A start\n") and printf("a\n"), to see what's going on in this code.
And here are some outputs:
Output 1
B start
b
A start
Hello 1
Hello 2
Hello 3
Hello 4
Hello 5
b
STATE_B
a
Goodbye
STATE_A
b
Hello 1
Hello 2
Hello 3
Hello 4
Hello 5
b
STATE_B
a
Goodbye
STATE_A
b
Hello 1
Hello 2
Hello 3
Hello 4
Hello 5
b
STATE_B
Goodbye
STATE_A
Output 2
B start
b
A start
Hello 1
Hello 2
Hello 3
Hello 4
Hello 5
STATE_B
a
Goodbye
STATE_A
b
Hello 1
Hello 2
Hello 3
Hello 4
Hello 5
STATE_B
a
Goodbye
STATE_A
b
Hello 1
Hello 2
Hello 3
Hello 4
Hello 5
STATE_B
Goodbye
STATE_A
So I learned that when threads are created, they run simultaneously. Based on this logic, I added printf("A start\n") and printf("B start\n") at the beginning of each thread function, threadA() and threadB(). But "B start" always comes up first. If they start at the same time, shouldn't they come up in alternating, or at least random, order?
Also, after the first 'Hello' loop, I assumed the 'Goodbye' message should always come before 'a', since I guess the 'pthread_mutex_unlock' in threadA calls threadB and waits until threadB calls the unlock function. I want to know how this code works.
I'm guessing I'm totally wrong and have misunderstood a lot of this, since I'm a newbie in this field. But I want to get an answer. Thank you for reading this :)
when 'pthread_mutex_lock' is called, only the current thread is executed, not any others. Can I think of it like this?
I guess you can think like that, but you'll be thinking incorrectly. pthread_mutex_lock() doesn't cause only the calling thread to execute. Rather, it does one of two things:
If the mutex wasn't already locked, it locks the mutex and returns immediately.
If the mutex was already locked, it puts the calling thread to sleep, to wait until the mutex has become unlocked. Only after pthread_mutex_lock() has successfully acquired the lock, will pthread_mutex_lock() return.
Note that in both cases, the promise that pthread_mutex_lock() makes to the calling thread is this: when pthread_mutex_lock() returns zero/success, the mutex will be locked and the calling thread will be the owner of the lock. (The other possibility is that pthread_mutex_lock() will return a nonzero error code indicating an error condition, but that's uncommon in practice, so I won't dwell on it.)
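As a side note, here is a minimal sketch (mine, not part of the answer) of what checking that return value looks like; POSIX pthread functions return 0 on success or an errno-style error number, and they do not set errno:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void locked_work(void)
{
    int rc = pthread_mutex_lock(&mutex);    /* 0 on success, error number otherwise */
    if (rc != 0) {
        fprintf(stderr, "pthread_mutex_lock: %s\n", strerror(rc));
        return;
    }
    /* ... critical section: this thread now owns the mutex ... */
    pthread_mutex_unlock(&mutex);
}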
when it comes to 'pthread_mutex_unlock', does the current thread pass the lock permission to others and wait until some other thread calls the unlock function again?
The first thing to clarify is that pthread_mutex_unlock() never waits for anything; unlike pthread_mutex_lock(), pthread_mutex_unlock() always returns immediately.
So what does pthread_mutex_unlock() do?
Unlocks the mutex. (Note that the mutex must already have been locked by a previous call to pthread_mutex_lock() in the same thread. If you call pthread_mutex_unlock() on a mutex without having previously acquired that same mutex with pthread_mutex_lock(), your program is buggy and won't work correctly; the error-checking sketch after this list shows how to make that bug detectable.)
Notifies the OS's thread-scheduler (through some mechanism that is deliberately left undocumented, since as a user of the pthreads library you don't need to know or care how it is implemented) that the mutex is now unlocked. Upon receiving that notification, the OS will check to see what other threads (if any) are blocked inside their own call to pthread_mutex_lock(), waiting to acquire this mutex, and if there are any, it will wake up one of those threads so that that thread may acquire the lock and its pthread_mutex_lock() call can then return. All that may happen before or after your thread's call to pthread_mutex_unlock() returns; the exact order of execution is indeterminate and doesn't really matter.
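Here is a small, self-contained sketch (my own addition, not required by anything above) showing how an error-checking mutex makes the "unlock without having locked" bug from the first point detectable at run time; with the default mutex type the same call is simply undefined behaviour.

#include <pthread.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    pthread_mutex_t m;
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&m, &attr);

    int rc = pthread_mutex_unlock(&m);   /* bug: we never locked it; errorcheck mutexes report this */
    printf("unlock without lock: %s\n", rc ? strerror(rc) : "succeeded");

    pthread_mutex_destroy(&m);
    pthread_mutexattr_destroy(&attr);
    return 0;
}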
I guess the 'pthread_mutex_unlock' in threadA calls threadB and waits until threadB calls the unlock function.
pthread_mutex_unlock() does no such thing. In general, threads don't/can't call functions in other threads. For what pthread_mutex_unlock() does do, see my description above.
pthread_mutex_lock() doesn't mean only one thread will execute - it just means that any other thread that also tries to call pthread_mutex_lock() on the same mutex object will be suspended until the first thread releases the lock with pthread_mutex_unlock().
If the other threads aren't trying to lock the same mutex, they can continue running simultaneously.
If multiple threads have tried to lock the same mutex while it is locked by the first thread, then when the mutex is released by the first thread with pthread_mutex_unlock() only one of them will be able to proceed (and then when that thread itself calls pthread_mutex_unlock(), another waiting thread will be able to proceed and so on).
Note that a thread waiting for a mutex to be unlocked will not necessarily start executing immediately upon the mutex being unlocked.
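A minimal sketch (the names are made up, not from the answer) of the point that threads which are not trying to lock the same mutex keep running concurrently: the two workers below lock different mutexes, so neither ever blocks the other; give them the same mutex and they would take turns instead.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mutex_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mutex_b = PTHREAD_MUTEX_INITIALIZER;

static void *worker_a(void *arg)
{
    pthread_mutex_lock(&mutex_a);      /* never contends with worker_b */
    puts("worker_a in its critical section");
    pthread_mutex_unlock(&mutex_a);
    return NULL;
}

static void *worker_b(void *arg)
{
    pthread_mutex_lock(&mutex_b);      /* different mutex: runs concurrently with worker_a */
    puts("worker_b in its critical section");
    pthread_mutex_unlock(&mutex_b);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker_a, NULL);
    pthread_create(&b, NULL, worker_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}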
I am reading through the OSTEP book by Prof. Remzi:
http://pages.cs.wisc.edu/~remzi/OSTEP/
I could only partially understand how the following code results in a wakeup/waiting race condition. (The code is taken from the book's chapter on locks:
http://pages.cs.wisc.edu/~remzi/OSTEP/threads-locks.pdf)
void lock(lock_t *m) {
    while (TestAndSet(&m->guard, 1) == 1)
        ;                              // acquire guard lock by spinning
    if (m->flag == 0) {
        m->flag = 1;                   // lock is acquired
        m->guard = 0;
    } else {
        queue_add(m->q, gettid());
        m->guard = 0;
        park();
    }
}
void unlock(lock_t *m) {
    while (TestAndSet(&m->guard, 1) == 1)
        ;                              // acquire guard lock by spinning
    if (queue_empty(m->q))
        m->flag = 0;                   // let go of lock; no one wants it
    else
        unpark(queue_remove(m->q));    // hold lock (for next thread!)
    m->guard = 0;
}
The park() syscall puts the calling thread to sleep, and unpark(threadID) is used to wake the particular thread designated by threadID.
Now suppose thread1 holds the lock, having set m->flag to 1. If thread2 comes in to acquire the lock, it fails, so the else branch is executed and thread2 is added to the queue. But assume that just before the park() syscall is made, thread2 is scheduled out and thread1 is given the timeslice. If thread1 then releases the lock, the unlock function calls unpark() (the queue is non-empty, since thread2 is in it). But thread2 never called park(); it just got added to the queue.
So the question is
1) What does thread1's unpark() return, just an error saying the threadID was not found? (OS specific?)
2) What happens to the lock flag? It was supposed to be passed between the subsequent threads that called the lock routine, with the lock being freed only when there is no more contention.
The book says thread2 will sleep forever. But my understanding is that any subsequent thread contending for the lock will also sleep forever; say thread3 tries to acquire the lock at a later time, it will sleep too, because the lock is never freed by thread1 during the unlock call.
My understanding is most probably wrong, because the book was very specific in pointing out that thread2 sleeps forever. Or am I just reading too much into the example and my understanding is correct, and there is a deadlock?
I mailed this question to Prof. Remzi and got a reply from him! Just posting the reply here.
Prof.Remzi's reply:
good questions!
I think you basically have it right.
unpark() will return (and perhaps say that the threadID was not sleeping);
in this implementation, the lock is left locked, and thread2 will sleep forever,
and as you say all subsequent threads trying to acquire the lock won't be
able to.
I think your understanding is correct. unpark() will still return (but will not have done anything useful). Since thread2 was not actually sleeping when it was unparked, the lock handed off by thread1 is never released: when thread2 finally calls park(), it sleeps forever. Subsequent threads like thread3, ..., threadN will still be added to the queue and sleep. Also, thread2 has already been removed from the queue, so I would say it is in a kind of 'sleep forever'.
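For completeness, the same OSTEP chapter resolves exactly this race with a third call, setpark(); the sketch below is my reconstruction of that fix, not a verbatim quote from the book. By calling setpark() before releasing the guard, a thread declares that it is about to park, so if another thread's unpark() happens first, the subsequent park() returns immediately instead of sleeping forever.

void lock(lock_t *m) {
    while (TestAndSet(&m->guard, 1) == 1)
        ;                              // acquire guard lock by spinning
    if (m->flag == 0) {
        m->flag = 1;                   // lock is acquired
        m->guard = 0;
    } else {
        queue_add(m->q, gettid());
        setpark();                     // declare the intent to park before dropping the guard
        m->guard = 0;
        park();                        // returns immediately if an unpark() already happened
    }
}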
I have created many threads, all waiting on their own condition. Each thread, when it runs, signals the next condition and goes back into the wait state.
However, I want the currently running thread to signal its next condition after some specified period of time (a very short period). How can I achieve that?
void *threadA(void *t)
{
    while (i < 100)
    {
        pthread_mutex_lock(&mutex1);
        while (state != my_id)
        {
            pthread_cond_wait(&cond[my_id], &mutex1);
        }
        pthread_mutex_unlock(&mutex1);   /* needed here, otherwise the lock below would deadlock */

        // processing + busy wait

        /* Set state to i+1 and wake up thread i+1 */
        pthread_mutex_lock(&mutex1);
        state = (my_id + 1) % NTHREADS;
        // usleep(1);
        // (Here I don't want this sleep. I want this thread to complete its
        //  processing and signal the next thread a bit later.)
        /* nanosleep(&zero, NULL); */
        pthread_cond_signal(&cond[(my_id + 1) % NTHREADS]);   /* send signal to thread (i+1) to wake it */
        pthread_mutex_unlock(&mutex1);
        i++;
    }
    return NULL;
}
Signalling a condition does nothing if there is nothing waiting on the condition. So, if pthread 'x' signals condition 'cx' and then waits on it, it will wait for a very long time... unless some other thread also signals 'cx' !
I'm not really sure I understand what you mean by the thread signalling its "next condition", but it occurs to me that there is not much difference between the signaller waiting before it signals a waiting thread and the waiting thread sleeping after it is signalled.
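To make that concrete, here is one possible shape for the delayed hand-off (my own sketch, not part of the answer; the 500 microseconds is arbitrary, and nanosleep() is declared in <time.h>): sleep outside the critical section after the processing is done, then take the mutex, update state and signal. Because the waiter tests state in a while loop around pthread_cond_wait(), the signal is not lost even if the waiter was not yet blocked when it was sent; sleeping in the woken thread after it returns from the wait would have the same overall effect.

/* ... end of this thread's processing ... */

struct timespec delay = { .tv_sec = 0, .tv_nsec = 500 * 1000 };   /* 500 us, arbitrary */
nanosleep(&delay, NULL);                  /* delay the hand-off outside the lock */

pthread_mutex_lock(&mutex1);
state = (my_id + 1) % NTHREADS;           /* hand the turn to thread i+1 */
pthread_cond_signal(&cond[(my_id + 1) % NTHREADS]);
pthread_mutex_unlock(&mutex1);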
From here: https://computing.llnl.gov/tutorials/pthreads/#ConVarSignal
Note that the pthread_cond_wait routine will automatically and atomically unlock mutex while it waits.
The following sub-code is from the same link (formatting by me):
pthread_mutex_lock(&count_mutex);
while (count < COUNT_LIMIT)
{
    pthread_cond_wait(&count_threshold_cv, &count_mutex);
    printf("watch_count(): thread %ld Condition signal received.\n", my_id);
    count += 125;
    printf("watch_count(): thread %ld count now = %d.\n", my_id, count);
}
pthread_mutex_unlock(&count_mutex);
Question:
When it says that pthread_cond_wait will automatically unlock the mutex while it waits, why do we have to explicitly call pthread_mutex_unlock at the end of the code above?
What's the point that I am missing?
When pthread_cond_wait unblocks, it is holding the lock again. Say, for example, you go around the loop twice; you get the following sequence of locks and unlocks on the mutex:
lock
# Around loop twice:
wait (unlock)
awaken (holding lock)
wait (unlock)
awaken (holding lock)
# loop done, still holding lock
unlock
If you don't have that last unlock, you'll end up with a deadlock the next time someone else wants to acquire the lock.
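Put back into code, the quoted example boils down to this shape (just a condensed restatement of the code above, nothing new): the mutex is released inside pthread_cond_wait() while the thread sleeps, re-acquired before the call returns, and therefore still held when the loop exits, which is exactly what the final explicit unlock releases.

pthread_mutex_lock(&count_mutex);
while (count < COUNT_LIMIT)
    pthread_cond_wait(&count_threshold_cv, &count_mutex);   /* sleeps unlocked, returns holding the lock */
/* the loop has exited and the mutex is still held here */
pthread_mutex_unlock(&count_mutex);                         /* the explicit unlock the question asks about */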