How is multitasking performed in operating systems? - multithreading

How is process-based multitasking achieved by using multi-threading in each process?
For example, consider an operating system running two background processes, each of which supports multi-threading internally. How does time slicing happen between and inside these processes, and how does time slicing happen between the threads?

Look at publications by this man: http://en.wikipedia.org/wiki/Andrew_S._Tanenbaum
Or just feed your query into Google. There are many ways to skin the multi-tasking/multi-threading cat.
Come back when you have at least tried to find your own answers and ask some more specific questions.

One possible implementation is that the OS just schedules threads. When it switches to a thread, it obviously switches in the address space of the process the thread belongs to, but from a scheduling viewpoint the process is pretty much ignored (e.g., Windows works this way).

Related

Context switch: what decides when?

I'm looking for some background explanations about context switch in modern personal computers with mainstream architecture (say x64).
While context switch is mainly done by the hardware, I wonder what in the computer decides on task scheduling and context switching when running multiple threads and/or multiple processes. Is it the CPU itself, the operating system, the compiler/virtual machine...?
I'd like to have an idea of what strategies are used to decide when to switch. For example, if I start a hundred threads doing independent dummy additions in endless loops, when will the context switches happen?
This is a complex subject that I can't do justice to in a simple response here, but let me hit some high points. I'm also going to assume modern OSes like Windows or the various Unix derivatives and ignore embedded real-time systems.
The context switch is not performed in hardware. It is critical to understand this. It is performed in software by an OS subsystem known as the scheduler. The scheduler is a glorified interrupt handler that fires many times a second and decides which thread will execute next. The algorithms for doing so are numerous and the subject of many a PhD thesis. A good overview I found quickly is here: http://www.studytonight.com/operating-system/cpu-scheduling
Good Operating Systems books will go over this in detail. There are too many to note so pick your poison.
One last point: to grasp at a complete level how scheduling is performed, it really helps to understand how virtual addressing schemes work, as that is truly what differentiates processes from threads. Threads are what matter to the scheduler, but processes encapsulate the threads and the virtual memory space.
I'm not sure this helps but I was at least able to correct one misconception and point you at a simple article on OS thread scheduling.

What's the point of multi-threading on a single core?

I've been playing with the Linux kernel recently and diving back into the days of OS courses from college.
Just like back then, I'm playing around with threads and the like. All this time I had been assuming that threads were automatically running concurrently on multiple cores but I've recently discovered that you actually have to explicitly code for handling multiple cores.
So what's the point of multi-threading on a single core? The only example I can think of is from college when writing a client/server program but that seems like a weak point.
All this time I had been assuming that threads were automatically
running concurrently on multiple cores but I've recently discovered
that you actually have to explicitly code for handling multiple cores.
The above is incorrect for any widely used, modern OS. All of Linux's schedulers, for example, will automatically schedule threads on different cores and even automatically move threads from one core to another when necessary to maximize core usage. There are some APIs that allow you to modify the schedulers' behavior, but these APIs are generally used to disable automatic thread-to-core scheduling, not to enable it.
So what's the point of multi-threading on a single core?
Imagine you have a GUI program whose purpose is to execute an expensive computation (for example, render a 3D image or a Mandelbrot set) and then display the result. Let's say this computation takes 30 seconds to complete on this particular CPU. If you implement that program the obvious way, and use only a single thread, then the user's GUI controls will be unresponsive for 30 seconds while the calculation is executing -- the user will be unable to do anything with your program, and possibly unable to do anything with his computer at all. Since users expect GUI controls to be responsive at all times, that would be a poor user experience.
If you implement that program with two threads (one GUI thread and one rendering thread), on the other hand, the user will be able to click buttons, resize the window, quit the program, choose menu items, etc, even while the computation is executing, because the OS is able to wake up the GUI thread and allow it to handle mouse/keyboard events when necessary.
Of course, it is possible to write this program with a single thread and keep its GUI responsive, by writing your single thread to do just a few milliseconds worth of computation, then check to see if there are GUI events available to process, handling them, then going back to do a bit more computation, etc. But if you code your app this way, you are essentially writing your own (very primitive) thread scheduler inside your app anyway, so why reinvent the wheel?
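To make the two-thread version concrete, here is a minimal, hypothetical Java sketch (not from the original answer): a worker thread stands in for the 30-second render while the main thread stays free to keep "handling events". The busy loop is just a placeholder for the expensive computation.

```java
// Sketch: run a long computation on a worker thread so the "event loop"
// thread stays free to handle input. The busy loop simulates the render.
public class ResponsiveUi {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            double sum = 0;
            for (long i = 0; i < 2_000_000_000L; i++) {
                sum += Math.sqrt(i);          // stand-in for the expensive render
            }
            System.out.println("Computation finished: " + sum);
        });
        worker.start();

        // Meanwhile the main thread keeps "handling events" once per second.
        while (worker.isAlive()) {
            System.out.println("GUI thread: still responsive, handling events...");
            Thread.sleep(1000);
        }
        worker.join();
    }
}
```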
The first versions of MacOS were designed to run on a single core and had no real concept of multithreading. This forced every application developer to do this kind of scheduling by hand: even if their app did not have any extended computations, they had to explicitly indicate when they were done using the CPU, e.g. by calling WaitNextEvent. This lack of multithreading made early (pre-MacOS-X) versions of MacOS famously unreliable at multitasking, since just one poorly written application could bring the whole computer to a grinding halt.
First, a program not only computes but also waits for input/output, so it can be considered as executing partly on an I/O processor. In that sense even a single-core machine is a multi-processor machine, and employing multi-threading is justified.
Second, a task can be divided into several threads for the sake of modularity.
Multithreading is not only for taking advantage of multiple cores.
You need multiple processes for multitasking. For a similar reason you are allowed to have multiple threads, which are lightweight compared with processes.
You probably don't want to spawn processes all the time for things like blocking I/O. That would be overkill.
And there are fibers, which are even more lightweight. So we have processes, threads, and fibers for different levels of need.
Well, when you talk about multithreading on a single core, there are things you need to consider. For example, is the thread API you are using user level or kernel level? From your question I believe you are using user-level threads.
Now, user-level threads, depending on the host OS or the API itself, may map to a single kernel thread or to multiple kernel threads. Several mappings are possible: 1-1, many-1, or many-many.
Now, even if there is a single core, your OS can still provide several kernel-level threads, which behave like multiple processes from the CPU's point of view. In that case the OS gives you time slicing (and multiprogramming) across the kernel threads, leading to very fast context switches, and via the user-level API your code will appear to be multithreaded.
Also note that even though your processor is a single core, depending on the make it may be hyper-threaded and have deep pipelines, allowing kernel threads to run concurrently with very low overhead.
For references, check the Intel/AMD architecture documentation and how various OSes provide kernel threads.

Can two threads within the same process run simultaneously on two processors? Why / Why not?

I don't understand. Isn't this the whole idea of multi-threading?
Edit: Question modified from "Why two threads within the same process cannot run simultaneously on two processors?".
In the article you link to, it lists this as a limitation of user-level threads (that are implemented by an application itself, without being backed by OS-level threads).
That's correct, but it does not apply to "real" threads. The OS is free to schedule them across multiple processors.
Now that most operating systems have robust support for multithreading, I believe that those user-level threads are a thing of the past.
So, yes, the whole point of multi-threading is to be able to run code in parallel on as many CPUs as you want to assign to it. "User-level threads" were a workaround for platforms without proper native thread support, and they were limited in the way you describe (no multiple CPUs for a single application process).
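As a hedged illustration of that point, the following Java sketch starts two ordinary (kernel-backed) threads inside one process; on a multi-core machine the OS is free to run them on separate cores, so the elapsed time is roughly that of one loop rather than two. The loop sizes are arbitrary.

```java
// Sketch: two threads in one process doing independent CPU-bound work,
// which the OS may schedule on separate cores simultaneously.
public class TwoCores {
    static void burn() {
        double x = 0;
        for (long i = 1; i < 500_000_000L; i++) x += 1.0 / i;
        System.out.println("done: " + x);
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread a = new Thread(TwoCores::burn);
        Thread b = new Thread(TwoCores::burn);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.printf("elapsed: %d ms%n", (System.nanoTime() - start) / 1_000_000);
    }
}
```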

Why should I use a thread vs. using a process?

Separating different parts of a program into different processes seems (to me) to make a more elegant program than just threading everything. In what scenario would it make sense to make things run on a thread vs. separating the program into different processes? When should I use a thread?
Edit
Anything on how (or if) they act differently with single-core and multi-core would also be helpful.
You'd prefer multiple threads over multiple processes for two reasons:
Inter-thread communication (sharing data etc.) is significantly simpler to program than inter-process communication.
Context switches between threads are faster than between processes. That is, it's quicker for the OS to stop one thread and start running another than do the same with two processes.
Example:
Applications with GUIs typically use one thread for the GUI and others for background computation. The spellchecker in MS Office, for example, is a separate thread from the one running the Office user interface. In such applications, using multiple processes instead would result in slower performance and code that's tough to write and maintain.
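As a rough sketch of that spellchecker idea (assumed details, not the actual MS Office design): the "UI" thread hands typed words to a background checker thread through a shared in-memory queue, with no serialization, pipes, or sockets involved, which is exactly why inter-thread communication is simpler than IPC.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: a background spellchecker thread shares a queue
// with the "UI" thread; the shared memory makes communication trivial.
public class SpellcheckSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> typedWords = new LinkedBlockingQueue<>();
        List<String> dictionary = List.of("hello", "world");

        Thread checker = new Thread(() -> {
            try {
                while (true) {
                    String word = typedWords.take();   // blocks until a word arrives
                    if (word.isEmpty()) break;          // empty string = shutdown signal
                    if (!dictionary.contains(word)) {
                        System.out.println("Possible typo: " + word);
                    }
                }
            } catch (InterruptedException ignored) { }
        });
        checker.start();

        // The "UI" thread just drops words into the shared queue.
        for (String w : new String[] {"hello", "wrold", ""}) typedWords.put(w);
        checker.join();
    }
}
```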
Well, apart from the advantages of using a thread over a process, like:
Advantages:
- Much quicker to create a thread than a process.
- Much quicker to switch between threads than to switch between processes.
- Threads share data easily.
Consider a few disadvantages too:
- No security between threads.
- One thread can stomp on another thread's data.
- If one thread blocks, all threads in the task block.
As to the important part of your question, "When should I use a thread?":
Well, you should consider the fact that threads should not alter the semantics of a program. They simply change the timing of operations. As a result, they are almost always used as an elegant solution to performance-related problems. Here are some examples of situations where you might use threads:
- Doing lengthy processing: when a Windows application is calculating, it cannot process any more messages. As a result, the display cannot be updated.
- Doing background processing: some tasks may not be time critical, but need to execute continuously.
- Doing I/O work: I/O to disk or to the network can have unpredictable delays. Threads allow you to ensure that I/O latency does not delay unrelated parts of your application (a minimal sketch follows this list).
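Here is a minimal sketch of that last bullet in Java. The blocking read is simulated with sleep() so the example runs anywhere; in a real program it would be a socket or disk read with unpredictable latency.

```java
// Sketch: run blocking I/O on its own thread so its latency does not
// delay unrelated work on the main thread.
public class IoThreadSketch {
    public static void main(String[] args) throws InterruptedException {
        Thread ioThread = new Thread(() -> {
            try {
                Thread.sleep(3000);                 // stands in for a slow network/disk read
                System.out.println("I/O finished");
            } catch (InterruptedException ignored) { }
        });
        ioThread.start();

        // Unrelated parts of the application are not delayed by the I/O latency.
        for (int i = 0; i < 3; i++) {
            System.out.println("main thread doing other work: " + i);
            Thread.sleep(500);
        }
        ioThread.join();
    }
}
```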
I assume you already know you need a thread or a process, so I'd say the main reason to pick one over the other would be data sharing.
Use of a process means you also need Inter Process Communication (IPC) to get data in and out of the process. This is a good thing if the process is to be isolated though.
You sure don't sound like a newbie. It's an excellent observation that processes are, in many ways, more elegant. Threads are basically an optimization to avoid too many transitions or too much communication between memory spaces.
Superficially, using threads may also seem to make your program easier to read and write, because you can share variables and memory between the threads freely. In practice, doing that requires very careful attention to avoid race conditions and deadlocks.
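For example, this small Java snippet (purely illustrative, not from the answer) shows the kind of race condition that free sharing invites: two threads increment a plain counter and an atomic one; the plain counter usually loses updates. Run it a few times and watch the unsynchronized total vary.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustration of a data race on a shared variable versus an atomic counter.
public class RaceSketch {
    static int unsafeCount = 0;
    static final AtomicInteger safeCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                unsafeCount++;               // read-modify-write, not atomic: race condition
                safeCount.incrementAndGet(); // atomic: always correct
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("unsafe: " + unsafeCount + "  safe: " + safeCount.get());
    }
}
```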
There are operating-system kernels (most notably L4) that try very hard to improve the efficiency of inter-process communication. For such systems one could probably make a convincing argument that threads are pointless.
I would like to answer this in a different way. "It depends on your application's working scenario and performance SLA" would be my answer.
For instance, threads may share the same address space, so communication between threads may be faster and easier, but it is also possible that under certain conditions the threads deadlock, and then what do you think would happen to your process?
Even if you are a programming whiz and have used all the fancy thread-synchronization mechanisms to prevent deadlocks, things can still go wrong unless a deterministic model is followed. That may be possible with hard real-time systems running on real-time OSes, where you have a certain degree of control over thread priorities and can expect the OS to respect them, but it is generally not the case with general-purpose OSes like Windows.
From a design perspective, too, you might want to isolate your functionality into independent, self-contained modules that may not really need to share the same address space or memory, or even talk to each other. This is a case where processes make sense.
Take the case of Google Chrome where multiple processes are spawned as opposed to most browsers which use a multi-threaded model.
Each tab in Chrome can be talking to a different server and rendering a different website. Imagine what would happen if one website stopped responding: if a thread stalled because of it, the entire browser would either slow down or come to a stop.
So Google decided to spawn multiple processes and that is why even if one tab freezes you can still continue using other tabs of your Chrome browser.
I agree with most of the answers above, but speaking from a design perspective I would rather go for a thread when I want a set of logically related operations to be carried out in parallel. For example, if you run a word processor, there will be one thread running in the foreground as the editor and another thread running in the background auto-saving the document at regular intervals; no one would design a separate process to do that auto-saving task.
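A hedged sketch of that auto-save idea in Java (the 5-second interval and the StringBuilder "document" are illustrative choices, not a real editor design):

```java
// Sketch: a daemon thread periodically "saves" the shared document buffer
// while the main (editor) thread keeps accepting edits.
public class AutoSaveSketch {
    public static void main(String[] args) throws InterruptedException {
        StringBuilder document = new StringBuilder();

        Thread autoSaver = new Thread(() -> {
            try {
                while (true) {
                    Thread.sleep(5000);                 // save every 5 seconds
                    synchronized (document) {
                        System.out.println("[autosave] " + document.length() + " chars saved");
                    }
                }
            } catch (InterruptedException ignored) { }
        });
        autoSaver.setDaemon(true);                      // dies with the editor thread
        autoSaver.start();

        // The "editor" thread keeps appending user input.
        for (int i = 0; i < 4; i++) {
            synchronized (document) {
                document.append("more text ");
            }
            Thread.sleep(3000);
        }
    }
}
```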
In addition to the other answers, maintaining and deploying a single process is a lot simpler than having a few executables.
One would use multiple processes/executables to provide a well-defined interface/decoupling so that one part or the other can be reused or reimplemented more easily than keeping all the functionality in one process.
I came across this post and found the discussion interesting, but I felt one point is missing or only made indirectly.
Creating a new process is costly because of all of the data structures that must be allocated and initialized. The process is then subdivided into different threads of control to achieve multithreading inside the process.
Whether to use a thread or a process to achieve the goal depends on your program's usage requirements and resource utilization.
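To see the creation-cost difference mentioned above, here is a rough, hedged Java comparison. The child command ("true") assumes a Unix-like system, and the absolute numbers are not meaningful, only the order-of-magnitude gap between the two.

```java
// Rough comparison of thread creation vs. process creation cost.
public class CreationCost {
    public static void main(String[] args) throws Exception {
        long t0 = System.nanoTime();
        Thread t = new Thread(() -> { });
        t.start();
        t.join();
        System.out.printf("thread  create+join: %d us%n", (System.nanoTime() - t0) / 1_000);

        long p0 = System.nanoTime();
        Process p = new ProcessBuilder("true").start();  // spawns a whole new process
        p.waitFor();
        System.out.printf("process create+wait: %d us%n", (System.nanoTime() - p0) / 1_000);
    }
}
```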

Is there an advantage of the operating system understanding the characteristics of how a thread may be used?

Is there an advantage of the operating system understanding the characteristics of how a thread may be used? For example, what if there were a way in Java when creating a new thread to indicate that it would be used for intensive CPU calculations vs will block for I/O. Wouldn't thread scheduling improve if this were a capability?
I'm not sure what you're actually expecting the OS to do with the information that a thread is I/O-bound or compute-bound. The things which actually make the most difference to how threads get scheduled (i.e. thread priority and thread CPU affinity) are already exposed by APIs (and support for NUMA aspects is starting to appear in mainstream OS APIs too).
If by a "compute thread" you mean it's something doing background processing and less important than a GUI thread (from the point of view of maintaining app responsiveness) probably the most useful thing you can do is lower the priority of the compute threads a little.
That's what OS processes do. The OS has sophisticated scheduling for the processes. The OS tracks I/O use and CPU use and dynamically adjusts priorities so that CPU-intensive processing doesn't interfere with I/O.
If you want those features, use a proper OS process.
Is that even necessary? Threads blocking on I/O will cause CPU-intensive threads to run. The operating system decides how to schedule threads. AFAIK there's no way to give any hints with Java.
Yes, it is very important to understand them, especially if you are one of those architects who like opening a lot of threads, especially on Windows.
Jeff Richter over at Wintellect has a library called PowerThreading. It is very useful if you are developing applications on .NET, but since you are talking about Java, it is still better to understand OS threads, kernel models, and how interrupts work.
