I'm trying to answer a question a professor posed to us.
Threads usually have the states Running, Ready, and Blocked. Suppose we wanted to add a Suspended state to maximize processor utilization by admitting a larger number of threads, requiring more memory than is available in the process' address space. Does the above make sense? If it does, explain why and what benefit(s) we obtain. If it does not, explain why not.
The suspended state seems pretty stupid to me because synchronization would just be a terrible experience. In any case where you might want to suspend, going into a blocked state is probably a 10x better idea because of this. And on top of that, isn't the processor already utilized as well as it can be, since when one thread gets blocked, another gets scheduled? By putting in a suspend state that you explicitly go into, you are pretty much manually controlling the scheduling. I'm really confused as to what benefits it would provide. Any ideas?
I completely agree with you. Synchronization is not really possible with a suspended-thread model, unless you limit it to synchronizing a thread's start point: you create the thread in the suspended state and allow it to continue only when the parent process raises a flag. Apart from that, synchronization is not possible with suspended threads.
I also think blocking is better than hanging a thread up in a suspended queue. The processor is already fully utilized, and it is not really beneficial to put a thread in the suspended state unless you are using it for some special purpose. Debuggers use the suspended thread state so that they can alter/break/trace a thread's state, which shows exactly where the suspended state is useful.
You are right: you are somewhat manually controlling the thread scheduling process, and that makes it a terrible idea.
Related
Threads and parallel programming are really confusing the heck outta me. In this book, at page 9, the problem stated is that though a thread might be scheduled and put in the ready state, it does not necessarily mean that it has acquired a lock.
Briefly put, a thread (say t1) waiting on a lock is notified via a condition_variable and put in the ready state, but not executed. Just before it can run anything, another thread (say t2) is scheduled and executed. This means that the condition under which t1 assumed it was woken up no longer holds.
Does this imply that merely scheduling a thread or putting it in the ready state does not mean that it has acquired a lock? If this is the case, must I always put the precondition in a while loop? Is this another possible meaning of a spurious wakeup? Also, what other cases like this must I be aware of?
I was always under the assumption that if a thread is woken up from a wait (which is not a spurious wakeup), it immediately acquires the lock (wakeup = lock acquired, under this circumstance), as the kernel keeps track of this.
This question is in close relation to my other question posted here.
Thanks.
Where can I ask these noob questions, in sort of an interactive format with follow-up questions? These seem too dumb for stackoverflow.
must I always put the condition in a while loop?
It's good practice to do so. Even if you know that on some particular hardware platform and OS it's impossible for wait() to return unless the condition is true, it could behave differently after the OS has been updated, or if your code gets moved to a different platform, or after some change is made to your code.
If you ever work developing "enterprise" software, then changes like that can and will happen. Might as well start learning good habits that can help to avert future disasters.
I was always under the assumption that if a thread is woken up from a wait (which is not a spurious wakeup), it immediately acquires the lock
You can safely assume that wait() will not, under any circumstances, ever return until the mutex has been re-locked. The whole wait()/notify() paradigm depends on it behaving in that way.
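Since the question is about C++'s std::condition_variable, here is a minimal sketch of the while-loop pattern (the queue and variable names are mine, not from the book). The point is that the predicate is re-checked every time wait() returns, and that wait() only ever returns with the mutex locked:

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    std::mutex m;
    std::condition_variable cv;
    std::queue<int> items;

    int consume() {
        std::unique_lock<std::mutex> lock(m);  // mutex held here
        while (items.empty()) {                // re-check after every wakeup
            cv.wait(lock);                     // atomically unlocks, sleeps,
        }                                      // and re-locks before returning
        // The mutex is locked and the condition is known to hold.
        int value = items.front();
        items.pop();
        return value;
    }

    void produce(int value) {
        {
            std::lock_guard<std::mutex> lock(m);
            items.push(value);
        }
        cv.notify_one();  // the woken thread still has to re-acquire m
    }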
I've learned that a process has running, ready, blocked, and suspended states. Threads also have these states except for suspended because it lives in the process's address space.
A process blocks most of the time when it is doing blocking I/O or waiting for an event.
I can easily picture a process getting blocked if it's single-threaded or if it follows a one-to-many model, but how does it work if the process is multi-threaded?
For example:
I have a process with two threads in a system that follows a one-to-one model. One handles the gui and the other handles the blocking i/o. I know the process remains responsive because the other thread handles the i/o.
So is there any chance the process gets blocked, or should I just rule that out in this case?
I'm just getting into this stuff, so forgive me if I haven't understood some of the important details yet.
Let's say you have a work queue where the UI thread schedules work to be done and the I/O thread looks there for work to do. The work queue itself is data that is read and modified from both threads, therefore you must synchronize access somehow or race conditions will result.
The naive approach is to synchronize access to the queue using a lock (aka critical section). If the I/O thread acquires the lock and then blocks, the UI thread will only remain responsive until it decides it needs to schedule work and tries to acquire the lock. A better approach is to use a lock-free queue, about which much has been written and which you can easily search for.
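To make the pitfall concrete, here is a rough C++ sketch of such a lock-protected queue (the names and the use of std::mutex/std::function are my assumptions, not from the question). The important detail is that the blocking work runs outside the critical section, so the UI thread only ever waits for a quick push or pop:

    #include <functional>
    #include <mutex>
    #include <optional>
    #include <queue>

    std::mutex queue_mutex;
    std::queue<std::function<void()>> work_queue;

    // UI thread: schedules work; only stalls while the lock is held.
    void schedule(std::function<void()> job) {
        std::lock_guard<std::mutex> lock(queue_mutex);
        work_queue.push(std::move(job));
    }

    // I/O thread: pop under the lock, but run the (possibly blocking)
    // job *after* releasing it, so the UI thread can keep scheduling.
    void io_thread_step() {
        std::optional<std::function<void()>> job;
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            if (!work_queue.empty()) {
                job = std::move(work_queue.front());
                work_queue.pop();
            }
        }
        if (job) (*job)();  // blocking I/O happens outside the lock
    }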
But to answer your question: yes, it is still much easier than you might think to cause the UI to stutter or hang even when using multiple threads. There are various libraries that make it easier or harder to solve this problem, so depending on your OS and language of choice, there may be something better than raw OS primitives. Win32 (from what I remember) doesn't make it very easy at all despite having all sorts of synchronization primitives. Pthreads and Boost never seemed very straightforward to me either. Apple's GCD makes it semantically much easier to express what you want (in my opinion), though there are still pitfalls to be aware of (such as scheduling too many blocking operations on a single work queue to be done in parallel and causing the processor to thrash when they all wake up at the same time).
My advice is to just dive in and write lots of multithreaded code. It can be tough to debug but you will learn a lot and eventually it becomes second nature.
How do you tell the thread scheduler in Linux to not interrupt your thread for any reason? I am programming in user mode. Does simply locking a mutex accomplish this? I want to prevent other threads in my process from being scheduled when a certain function is executing. They would block and I would be wasting CPU cycles on context switches. I want any thread executing the function to be able to finish executing without interruption, even if the thread's timeslice is exceeded.
How do you tell the thread scheduler in linux to not interrupt your thread for any reason?
It can't really be done; you need a real-time system for that. The closest thing you'll get with Linux is to set the scheduling policy to a realtime scheduler, e.g. SCHED_FIFO, and also set the PTHREAD_EXPLICIT_SCHED attribute. Even then, though, irq handlers and other kernel work will still interrupt your thread and run.
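For reference, a minimal pthreads sketch of that setup follows (compile with -pthread); the priority value of 50 is just an example, error handling is kept minimal, and on most systems it needs elevated privileges (e.g. CAP_SYS_NICE or an RLIMIT_RTPRIO allowance) to succeed:

    #include <pthread.h>
    #include <sched.h>
    #include <cstdio>

    void* worker(void*) {
        // time-critical work here
        return nullptr;
    }

    int main() {
        pthread_attr_t attr;
        sched_param param{};
        param.sched_priority = 50;  // example value; SCHED_FIFO uses 1..99

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);  // don't inherit
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);               // realtime FIFO
        pthread_attr_setschedparam(&attr, &param);

        pthread_t tid;
        if (pthread_create(&tid, &attr, worker, nullptr) != 0) {
            std::fprintf(stderr, "pthread_create failed (often EPERM without privileges)\n");
            return 1;
        }
        pthread_join(tid, nullptr);
        return 0;
    }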
However, if you only care about the threads in your own process not being able to do anything, then yes, having them block on a mutex your running thread holds is sufficient.
The hard part is to coordinate all the other threads to grab that mutex whenever your thread needs to do its thing.
You should architect your software so you're not dependent on the scheduler doing the "right" thing from your app's point of view. The scheduler is complicated. It will do what it thinks is best.
Context switches are cheap. You say
I would be wasting cpu cycles with context switches.
but you should not look at it that way. Use the multi-threaded machinery of mutexes and blocked / waiting processes. The machinery is there for you to use...
You can't. If you could, what would prevent your thread from never releasing the processor and starving other threads?
The best you can do is set your thread's priority so that the scheduler will prefer it over lower-priority threads.
Why not simply let the competing threads block? Then the scheduler will have nothing left to schedule but your live thread. Why complicate the design by second-guessing the scheduler?
Look into real-time scheduling under Linux. I've never done it, but if you indeed do NEED this, it is as close as you can get in user application code.
What you seem to be scared of isn't really that big of a deal, though. You can't stop the kernel from interrupting your program for real interrupts or if a higher-priority task wants to run, but with regular scheduling the kernel uses its own computed priority value, which pretty much handles most of what you are worried about. If thread A is holding resource X exclusively (X could be a lock) and thread B is waiting on resource X to become available, then A's effective priority will be at least as high as B's priority. The scheduler also takes into account whether a process is using lots of CPU or spending lots of time sleeping when computing the priority. Of course, the nice value goes in there too.
Is it possible to detect a hung thread? This thread is not part of any thread pool; it's just a system thread. Since the thread is hung, it may not process any events.
Thanks,
In theory, it is impossible. If you are on Windows and suspect that the thread might be deadlocked, I guess you could use GetThreadContext a few times and check if it is always the same, but I don't know how reliable it will be.
Not in theory, but in practice it may be possible, depending on your workload. For example, if it is supposed to respond to events, you could post a thread message (on Windows) and see if it responds. You could set an event or flag that would cause it to do something; you then have to wait for a "reasonable" amount of time to see if it has responded. The question then arises what you would do with the "hung" thread, even if it has really hung and isn't just taking a long time to respond. A thread cannot generally be killed safely, and you cannot generally interrupt an arbitrary thread. It is safe enough to log a message to that effect, but who will care? Probably the best thing to do is to note it and figure out the bug that is causing it to hang.
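To make the "set a flag and wait a reasonable amount of time" idea concrete, here is a C++ sketch; the names ping_requested/ping_answered and the 2-second timeout are illustrative assumptions, not anything from the question:

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    std::atomic<bool> ping_requested{false};
    std::atomic<bool> ping_answered{false};

    // The monitored thread calls this somewhere inside its normal event loop.
    void poll_ping() {
        if (ping_requested.exchange(false)) {
            ping_answered.store(true);
        }
    }

    // The watchdog: ask for a response, wait a "reasonable" time, then decide.
    bool looks_hung() {
        ping_answered.store(false);
        ping_requested.store(true);
        std::this_thread::sleep_for(std::chrono::seconds(2));
        return !ping_answered.load();
    }

    int main() {
        std::thread worker([] {
            for (;;) {  // normal event loop
                poll_ping();
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
            }
        });
        std::puts(looks_hung() ? "no response: log it, don't kill the thread"
                               : "thread responded");
        worker.detach();  // sketch only; real code would shut down cleanly
        return 0;
    }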
Depending on the workload and the kinds of processing done and other details, it may be possible to detect a hung thread. In some cases, modern VMs can detect a lock deadlock where two threads are hung waiting for the other to release a lock. (But don't rely on this, because it isn't always possible, only sometimes.)
We need a lot more information before we can give a specific answer to your question.
Now, this might be a very newbie question, but I don't really have experience with multithreaded programming and I haven't fully understood how threads work compared to processes.
When a process on my machine hangs, say it's waiting for some IO that never comes or something similar, I can kill and restart it because other processes aren't affected and can, for example, still operate my terminal. This is very obvious, of course.
I'm not sure whether it is the same with threads inside a process: if one hangs, are the others unaffected? In other words, can I run a "watchdog" thread which supervises the other threads and, for example, kills and recreates hanging threads? For example, if I have a thread pool that I don't want to be drained by occasional hangups.
Threads are independent, but there's a difference between a process and a thread, and that is that in the case of processes, the operating system does more than just "kill" it. It also cleans up after it.
If you start killing threads that seem to be hung, most likely you'll leave resources locked and the like, things that the operating system would close for you if you did the same to a process.
So for instance, if you open a file for writing, and start producing data and write it to the file, and this thread now hangs, for whatever reason, killing the thread will leave the file still open, and most likely locked, up until you close the entire program.
So the real answer to your question is: No, you can not kill threads the hard way.
If you simply ask a thread to close, that's different, because then the thread is still in control and can clean up and close its resources before terminating; but calling an API function like "KillThread" or similar is bad.
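Here is a minimal sketch of "asking a thread to close" with an atomic stop flag, so the thread itself closes its resources; the file name and timings are illustrative:

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    std::atomic<bool> stop_requested{false};

    void worker() {
        std::FILE* f = std::fopen("out.txt", "w");  // resource owned by the thread
        if (!f) return;
        while (!stop_requested.load()) {
            std::fputs("working...\n", f);
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
        std::fclose(f);  // the thread cleans up after itself
    }

    int main() {
        std::thread t(worker);
        std::this_thread::sleep_for(std::chrono::seconds(1));
        stop_requested.store(true);  // "please close" -- not a kill
        t.join();                    // wait for it to finish cleanly
        return 0;
    }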
If a thread hangs, the others will continue executing. However, if the hung thread has locked a semaphore, critical section or other kind of synchronization object, and another thread attempts to lock the same synchronization object, you now have a deadlock with two dead threads.
It is possible to monitor other threads from a thread. Depending on your platform, there are applicable APIs; I refer you to those, as you haven't stated what OS you are writing for.
You didn't mention the platform, but as far as I'm concerned, the NT kernel schedules threads, not processes, and treats them independently in that manner. This is not true on all platforms (some platforms, like Windows 3.1, do not use preemptive multithreading, and if one thread goes into an infinite loop, everything is affected).
The simple answer is yes.
Typically, though, code in a thread will handle this possibility itself. Most commonly, APIs that perform operations that may hang will have timeout features of their own.
Alternatively, a thread will wait not just on the operation that might hang but also on a timer. If the timer signals first, it's assumed the operation has hung.
Since a watchdog thread would need some cooperation from code in the other threads to be useful in this scenario, having the threads themselves set timeouts makes more sense than a watchdog.
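As a sketch of the "wait on the operation and a timer" idea, std::condition_variable::wait_for expresses it directly; the names and the 500 ms value are illustrative, not from the question:

    #include <chrono>
    #include <condition_variable>
    #include <mutex>

    std::mutex m;
    std::condition_variable cv;
    bool operation_done = false;

    // Called by the thread performing the possibly-hanging operation.
    void mark_done() {
        { std::lock_guard<std::mutex> lock(m); operation_done = true; }
        cv.notify_one();
    }

    // Called by the waiting thread; returns false if we timed out and
    // should assume the operation has hung.
    bool wait_with_timeout() {
        std::unique_lock<std::mutex> lock(m);
        return cv.wait_for(lock, std::chrono::milliseconds(500),
                           [] { return operation_done; });
    }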
Threads get scheduled independently of each other, so you could indeed stop and restart hanging threads. However, threads do not run in separate address spaces, so a misbehaving thread can still overwrite memory or take locks needed by other threads in the same process.
There's a pretty good overview of some of the pitfalls of killing and suspending threads in the Java documentation explaining why the methods that do it are deprecated. Basically, if you expect to be able to kill a thread, you have to be very, very careful to make it work without some sort of corruption. If a thread is hung it's probably because of a bug...in which case killing it will probably result in corruption.
http://java.sun.com/j2se/1.4.2/docs/guide/misc/threadPrimitiveDeprecation.html
If you need to be able to kill things, use processes.