Which thread gets scheduled after the first thread has exited? - linux

#include <pthread.h>

pthread_mutex_t lock;
void *fun(void *arg);

int main(void)
{
    /* ..... */
    pthread_mutex_init(&lock, NULL);

    pthread_t t1, t2, t3;
    pthread_create(&t1, NULL, fun, NULL);
    pthread_create(&t2, NULL, fun, NULL);
    pthread_create(&t3, NULL, fun, NULL);
    /* ... */
}

void *fun(void *arg)
{
    pthread_mutex_lock(&lock);
    /* ........... */
    pthread_mutex_unlock(&lock);
    return NULL;
}
In the code above, I create 3 threads that call the same function fun. Executing fun takes longer than creating the threads, so all 3 threads exist early on: the 1st thread is already executing after taking the lock, while the 2nd and 3rd threads are waiting on it. My question is: once the lock is released, which thread will be scheduled? Is it the 2nd thread and then the 3rd, or does it depend on the scheduler? Does the scheduler maintain some kind of queue for the waiting threads and schedule them in FIFO order?

No, it does not work like a FIFO. A default pthread mutex makes no ordering guarantee: one of the waiting threads will be woken up, and which one is up to the implementation and the scheduler.
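A quick way to see this for yourself is the small sketch below (my own illustration, not part of the original answer): three threads contend for the same mutex and print the order in which they acquire it. On Linux the order typically varies between runs.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *fun(void *arg)
{
    pthread_mutex_lock(&lock);
    printf("thread %ld acquired the lock\n", (long)arg);
    usleep(100000);               /* hold the lock long enough for the others to queue up */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, fun, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Compile with -pthread; the order printed after the first unlock is whatever the scheduler picked, not necessarily the creation order.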

Related

Thread deletion design

I have a multi-threaded program with the following design:
One thread is the main thread and the others are slave threads. The main thread keeps track of all the slave thread IDs. In one scenario of the application (graceful shutdown), I want to delete the slave threads from the main thread.
The slave threads may be running at that point, i.e. either sleeping or doing some action that I cannot stop. So I want to delete the threads from the main thread using the thread IDs I stored internally.
Additional info:
While deleting, I should not wait for a thread's current action to complete, because it may take a long time (the thread reads from a database and acts on the result); during a graceful shutdown I cannot afford to wait for that.
If I forcefully delete a thread, how can that cause resource leaks?
Is the above design OK, or is there a flaw, or any way we can improve it?
Thanks!
It's not okay. It's bad practice to forcefully kill a thread from another thread, because you are very likely to leak resources (memory, handles, locks held at the moment it is killed). The better way is to use an event or a flag to tell the worker threads to stop, and then wait until they exit gracefully.
The overall flow of the program would look like this (a minimal sketch follows the list):
The parent thread creates an event (say hEventParent). It then creates the child threads and passes hEventParent as a parameter. The parent thread keeps the hThread handles of the child threads.
Child threads do their work but periodically check hEventParent.
When the program needs to exit, the parent thread sets hEventParent. It then waits on the hThread handles (WaitForMultipleObjects accepts thread handles too).
Each child thread sees the event, executes its cleanup routine and exits.
When all the child threads have exited, the parent can exit.
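Here is a minimal Win32 sketch of that flow (my own illustration of the idea above; names such as WorkerProc are made up):

#include <windows.h>

static HANDLE hEventParent;   /* manual-reset "please stop" event */

DWORD WINAPI WorkerProc(LPVOID param)
{
    (void)param;  /* unused in this sketch */
    /* Do one unit of work, then check whether the parent asked us to stop. */
    while (WaitForSingleObject(hEventParent, 0) == WAIT_TIMEOUT) {
        /* ... one unit of work ... */
        Sleep(100);
    }
    /* ... cleanup routine ... */
    return 0;
}

int main(void)
{
    HANDLE hThread[3];

    hEventParent = CreateEvent(NULL, TRUE, FALSE, NULL);   /* manual reset, initially unsignalled */
    for (int i = 0; i < 3; i++)
        hThread[i] = CreateThread(NULL, 0, WorkerProc, NULL, 0, NULL);

    /* ... application runs ... */

    SetEvent(hEventParent);                                 /* ask all workers to stop */
    WaitForMultipleObjects(3, hThread, TRUE, INFINITE);     /* wait for all of them to exit */

    for (int i = 0; i < 3; i++)
        CloseHandle(hThread[i]);
    CloseHandle(hEventParent);
    return 0;
}

The "one unit of work" and Sleep(100) are placeholders; the important point is that each worker re-checks the event between units of work instead of being killed in the middle of one.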
The most common approach consists of the main thread sending a termination signal to all the threads and then waiting for the threads to end.
Typically each worker thread has a loop inside which the work is done. You can add a boolean variable that indicates whether the thread needs to end. For example:
terminate = false;
while (!terminate) {
    // work here
}
If you want your worker threads to go to sleep when they have no work, then it gets a bit more complicated. In this case you could make the threads wait on semaphores. Each semaphore will be signaled when there is work to do, and that will awaken the thread. You will also signal the semaphore when the request to terminate is issued. Example worker thread:
terminate = false;
while (!terminate) {
    // work here
    wait(semaphore); // go to sleep
}
When the main thread wants to exit, it sets terminate to true for all the threads and then signals the thread semaphores to awaken the threads and give them a chance to see the termination request. After that it joins all the threads, and only after all the threads are finished does it exit.
Note that in C/C++ the terminate flag must be safe to access from several threads; volatile alone is not enough for that, so use an atomic type (std::atomic<bool> in C++11 and later, atomic_bool in C11).
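A compact C++ sketch of this pattern (my own illustration, assuming C++20 for std::counting_semaphore; the loop structure mirrors the pseudocode above):

#include <atomic>
#include <cstddef>
#include <semaphore>
#include <thread>
#include <vector>

std::atomic<bool> terminate_requested{false};
std::counting_semaphore<> work_available{0};   // released once per unit of work, and once per worker on shutdown

void worker()
{
    while (!terminate_requested.load()) {
        // ... do one unit of work ...
        work_available.acquire();              // sleep until there is more work or a termination request
    }
    // ... cleanup ...
}

int main()
{
    std::vector<std::thread> workers;
    for (int i = 0; i < 3; ++i)
        workers.emplace_back(worker);

    // ... application runs, calling work_available.release() whenever work arrives ...

    terminate_requested.store(true);           // ask every worker to stop
    for (std::size_t i = 0; i < workers.size(); ++i)
        work_available.release();              // wake any worker sleeping on the semaphore
    for (auto &t : workers)
        t.join();                              // only exit after all workers are done
}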

Mechanics of Condition.Signal()

If I had threads as below
void thread() {
    while (...) {
        lock.acquire();
        if (condition not true)
        {
            Cond.wait();
        }
        // blah blah
        Cond.Signal();
        lock.release();
    }
}
Well, I guess my main question is whether the signalling thread continues running for a while after Cond.Signal() or immediately gives up the CPU. In some cases I would like it not to release the lock before the woken-up thread finishes execution, and in other cases it may be beneficial to release the lock immediately after signalling, without waiting for the woken thread to finish.
I understand that if there are any threads waiting on the condition then they get woken up on Cond.Signal(). But what does "woken up" mean - put on the ready queue, or does the scheduler make sure that it runs immediately?
And what about the signalling thread - does it go to sleep on the same condition upon signalling? So then some other thread has to wake it up to make it release the lock?
This is in large part dependent on your environment (OS, library, language...) and how the synchronisation primitives are implemented. Since you haven't specified any I'll just give a general answer.
When putting a thread to sleep, most environment will choose to remove it from the scheduler's ready queue and the thread will give up its remaining CPU time. When woken up, the thread is simply placed back into the ready queue and will resume execution the next time the scheduler selects it from the queue.
It's also possible that the thread will do some active waiting (spinning) instead of being removed from the scheduler's ready queue. In this case, the thread will resume execution right away. Note that since a thread can still run out of its CPU time slice while spinning, it might have to wait to be rescheduled before waking up. This is a useful strategy if your critical sections are very small and you don't want to pay the scheduling overhead.
A hybrid approach would be to do a small amount of active waiting before removing the thread from the scheduler's ready queue.
As for the signalling thread, unless your environment explicitly specifies otherwise (I can't think of any reason it would, but you never know), I wouldn't expect a call to signal() to block in a way that requires another thread to wake it up. signal() might have to synchronize itself with other threads calling signal(), but those are implementation details and you shouldn't have to do anything about them.
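To make the "woken up, but not necessarily run immediately" point concrete, here is a small C++ sketch (my own illustration using std::condition_variable, not part of the original answer):

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool ready = false;

void waiter()
{
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return ready; });   // woken by notify_one(), but must re-acquire the mutex before returning
    std::cout << "waiter resumed\n";
}

void signaller()
{
    std::lock_guard<std::mutex> lk(m);
    ready = true;
    cv.notify_one();                     // makes the waiter runnable; it does not run immediately
    std::cout << "signaller still running after notify_one()\n";
}                                        // only once the mutex is released here can the waiter proceed

int main()
{
    std::thread t1(waiter), t2(signaller);
    t1.join();
    t2.join();
}

The signaller's message is always printed first, because the waiter cannot return from wait() until the signaller has released the mutex; notify_one() itself never blocks.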

do semaphores satisfies bounded waiting

Does a semaphore satisfy bounded waiting, or is it just for providing mutual exclusion?
Answer
Theoretically it may break the bounded-waiting condition, as you'll see below. In practice, it depends heavily on which scheduling algorithm is used.
The classic implementation of the wait() and signal() primitives is:
// primitive
wait(semaphore* S)
{
    S->value--;
    if (S->value < 0)
    {
        add this process to S->list;
        block();
    }
}

// primitive
signal(semaphore* S)
{
    S->value++;
    if (S->value <= 0)
    {
        remove a process P from S->list;
        wakeup(P);
    }
}
When a process calls wait() and finds the semaphore unavailable (S->value drops below 0), it puts itself on the waiting list. If more than one process is blocked on the same semaphore, they are all put on this list (or they are somehow linked together, as you can imagine). When another process leaves the critical section and calls signal(), one process in the waiting list is chosen to wake up, ready to compete for the CPU again. However, it's the scheduler that decides which process to pick from the waiting list. If the waiting list is handled in a LIFO (last in, first out) manner, for instance, it's possible that some processes are starved.
Example
T1: thread 1 calls wait(), enters critical section
T2: thread 2 calls wait(), blocked in waiting list
T3: thread 3 calls wait(), blocked in waiting list
T4: thread 1 leaves critical section, calls signal()
T5: scheduler wakes up thread 3
T6: thread 3 enters critical section
T7: thread 4 calls wait(), blocked in waiting list
T8: thread 3 leaves critical section, calls signal()
T9: scheduler wakes up thread 4
..
As you can see, although the semaphore is implemented and used correctly, thread 2 has an unbounded waiting time and may even starve, because new processes keep entering the critical section ahead of it.
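If bounded waiting is required, the waiting list has to be served in FIFO order. Below is a hedged C++ sketch of such a "fair" semaphore built on a mutex and a condition variable with a ticket counter (my own illustration; the class and member names are made up):

#include <condition_variable>
#include <cstdint>
#include <mutex>

class FairSemaphore {
public:
    explicit FairSemaphore(int count) : value_(count) {}

    void wait() {
        std::unique_lock<std::mutex> lk(m_);
        const std::uint64_t my_ticket = next_ticket_++;   // FIFO position of this caller
        cv_.wait(lk, [&] { return my_ticket == serving_ && value_ > 0; });
        --value_;
        ++serving_;                                       // let the next ticket holder be considered
        cv_.notify_all();
    }

    void signal() {
        std::lock_guard<std::mutex> lk(m_);
        ++value_;
        cv_.notify_all();   // wake all waiters; only the head-of-line ticket with a free unit proceeds
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    int value_;
    std::uint64_t next_ticket_ = 0;  // ticket handed to the next arriving waiter
    std::uint64_t serving_     = 0;  // ticket currently allowed to pass
};

Because waiters pass strictly in ticket order, a thread can be overtaken only by threads that arrived before it, which is exactly the bounded-waiting guarantee the plain implementation above does not give. (notify_all() is wasteful with many waiters, but it keeps the sketch simple.)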

synchronising threads with mutexes

In Qt, I have a method that contains a mutex lock and unlock. The problem is that when the mutex is unlocked, it sometimes takes long before the other thread gets the lock. In other words, it seems the same thread can get the lock back (the method is called in a loop) even though another thread is waiting for it. What can I do about this? One thread is a QThread and the other is the main thread.
You can have the thread that just unlocked the mutex relinquish the processor. On POSIX systems you do that by calling sched_yield() (pthread_yield() is a non-standard alias on Linux), and on Windows by calling Sleep(0).
That said, there is no guarantee that the thread waiting on the lock will be scheduled before your thread wakes up again.
Releasing a lock does not guarantee that a waiting thread will get it next; mutexes are not fair, and the releasing thread, which is already running, can often re-acquire the lock before the waiter is even scheduled.
Still, check that you are actually releasing the lock when you think you do. Check that the waiting thread is actually waiting (and not spinning in a loop with trylock tests and sleeps; I actually did that once and was very puzzled at first :)).
If the waiting thread really never gets time to even reach the locking code, try QThread::yieldCurrentThread(). This stops the current thread and gives the scheduler a chance to hand execution to somebody else. It might cause unnecessary switching, depending on how tight your loop is.
If you want to make sure that one thread has priority over the others, an option is to use a QReadWriteLock. It's adapted to the typical scenario where n threads read a value in an infinite loop while a single thread updates it, which I think is the scenario you described.
QReadWriteLock offers two ways to lock: lockForRead() and lockForWrite(). The threads that depend on the value use lockForRead(), while the thread updating the value (typically via the GUI) uses lockForWrite(); a waiting writer takes priority over new readers, so you won't need to sleep or yield or anything.
Example code
Let's say you have a QReadWriteLock lock; somewhere.
"Reader" thread
forever {
    lock.lockForRead();
    if (condition) {
        do_stuff();
    }
    lock.unlock();
}
"Writer" thread
// external input (e.g. the user) changes the condition
lock.lockForWrite(); // blocks until the current read lock is released
update_condition();
lock.unlock();

Tell if 'elapsed' event thread is still running?

Given a System.Timers.Timer, is there a way from the main thread to tell if the worker thread running the elapsed event code is still running?
In other words, how can one make sure the code running in the worker thread is not currently running before stopping the timer or the main app/service thread the timer is running in?
Is this a matter of ditching Timer for threading timer using state, or is it just time to use threads directly?
Look up ManualResetEvent, as it is made to do specifically what you're asking for.
Your threads create a new reset event, and add it to an accessible queue that your main thread can use to see if any threads are still running.
// main thread owns this
private List<ManualResetEvent> _resetEvents;
...
// main thread does this to wait for executing threads to finish
WaitHandle.WaitAll(_resetEvents.ToArray(), 2000, false)
...
// worker threads do this to signal the thread is done
myResetEvent.Set();
I can give you more sample code if you want, but I basically just copied it from the couple of articles I read when I had to do this a year ago or so.
Forgot to mention: you can't add this functionality to the default threads you get when your timer fires. So you should make your timer handler very lean and do nothing more than prepare and start a new worker item.
...
ThreadPool.QueueUserWorkItem(new WaitCallback(MyWorkerDelegate),
myCustomObjectThatContainsAResetEvent);
For an out-of-the-box solution, there is no way. The main reason is that the thread running the TimerCallback function is in all likelihood still alive even after the code in the callback has completed. The TimerCallback is executed by a thread from the ThreadPool; when the task completes, the thread does not die but goes back into the pool to wait for its next work item.
To get this to work, you are going to have to use some manner of thread-safe signalling to detect that the operation has completed.
Timer Documentation
