Context switch: what decides when?

I'm looking for some background explanations about context switch in modern personal computers with mainstream architecture (say x64).
While context switching is mainly done by the hardware, I wonder what in the computer decides on task scheduling and context switches when running multiple threads and/or multiple processes. Is it the CPU itself, the operating system, the compiler/virtual machine...?
I'd like to have an idea of what strategies are used to decide when to switch. For example, if I start a hundred threads doing independent dummy additions in endless loops, when will the context switches happen?

This is a complex subject that I can't do justice to in a simple response here, but let me hit some high points. I'm also going to assume a modern OS like Windows or one of the various Unix derivatives and ignore embedded real-time systems.
The context switch is not performed in hardware. It is critical to understand this. It is performed in software by an OS subsystem known as the scheduler. The scheduler is driven largely by timer interrupts that fire many times per second (typically every few milliseconds); at those points, and at other events such as a thread blocking on I/O, the kernel decides which thread will execute next. The algorithms for doing so are numerous and the subject of many a PhD thesis. A good overview I found quickly is here: http://www.studytonight.com/operating-system/cpu-scheduling
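To make the questioner's hundred-threads scenario concrete, here is a minimal sketch (assuming Linux or another POSIX system with the pthread library; the names and thresholds are made up for illustration). None of the threads ever yields, yet all of them make progress, because the timer interrupt preempts whichever thread is running when its time slice expires and the scheduler picks another:

```c
/* Sketch: many CPU-bound threads on a preemptive scheduler (POSIX threads).
 * Build with: gcc -O2 -pthread busy_threads.c -o busy_threads
 * Each thread spins on dummy additions; the program never asks for a
 * context switch -- the kernel's scheduler forces them. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NTHREADS 100

static void *busy_loop(void *arg)
{
    long id = (long)arg;
    volatile unsigned long counter = 0;   /* volatile so the loop isn't optimized away */
    for (;;) {
        counter += 1;                     /* the "dummy addition" */
        if ((counter & 0xFFFFFFF) == 0)
            printf("thread %ld still running, counter=%lu\n", id, counter);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    for (long i = 0; i < NTHREADS; i++) {
        if (pthread_create(&tid[i], NULL, busy_loop, (void *)i) != 0) {
            perror("pthread_create");
            exit(EXIT_FAILURE);
        }
    }
    /* The threads never exit; observe with "top -H" that all of them
     * receive CPU time even on a machine with far fewer cores. */
    for (long i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```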
Good Operating Systems books will go over this in detail. There are too many to note so pick your poison.
One last point: to fully grasp how scheduling is performed, it really helps to understand how virtual addressing schemes work, as that is truly what differentiates processes from threads. Threads are what matter to the scheduler, but processes encapsulate threads and the virtual memory space.
I'm not sure this helps but I was at least able to correct one misconception and point you at a simple article on OS thread scheduling.

Related

What's the point of multi-threading on a single core?

I've been playing with the Linux kernel recently and diving back into the days of OS courses from college.
Just like back then, I'm playing around with threads and the like. All this time I had been assuming that threads were automatically running concurrently on multiple cores but I've recently discovered that you actually have to explicitly code for handling multiple cores.
So what's the point of multi-threading on a single core? The only example I can think of is from college when writing a client/server program but that seems like a weak point.
All this time I had been assuming that threads were automatically running concurrently on multiple cores but I've recently discovered that you actually have to explicitly code for handling multiple cores.
The above is incorrect for any widely used, modern OS. All of Linux's schedulers, for example, will automatically schedule threads on different cores and even automatically move threads from one core to another when necessary to maximize core usage. There are some APIs that allow you to modify the schedulers' behavior, but these APIs are generally used to disable automatic thread-to-core scheduling, not to enable it.
So what's the point of multi-threading on a single core?
Imagine you have a GUI program whose purpose is to execute an expensive computation (for example, render a 3D image or a Mandelbrot set) and then display the result. Let's say this computation takes 30 seconds to complete on this particular CPU. If you implement that program the obvious way, and use only a single thread, then the user's GUI controls will be unresponsive for 30 seconds while the calculation is executing -- the user will be unable to do anything with your program, and possibly unable to do anything with his computer at all. Since users expect GUI controls to be responsive at all times, that would be a poor user experience.
If you implement that program with two threads (one GUI thread and one rendering thread), on the other hand, the user will be able to click buttons, resize the window, quit the program, choose menu items, etc, even while the computation is executing, because the OS is able to wake up the GUI thread and allow it to handle mouse/keyboard events when necessary.
Of course, it is possible to write this program with a single thread and keep its GUI responsive, by having that single thread do just a few milliseconds' worth of computation, then check whether there are GUI events available to process, handle them, and then go back to doing a bit more computation, and so on. But if you code your app this way, you are essentially writing your own (very primitive) thread scheduler inside your app anyway, so why reinvent the wheel?
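Here is a rough sketch of the two-thread structure, assuming POSIX threads; the GUI event loop is stood in for by a simple stdin loop, and the `render_worker` name and its workload are invented for illustration (a real toolkit would supply its own event loop, and real code would synchronize the `done` flag properly instead of using a bare `volatile`):

```c
/* Sketch (POSIX threads): a worker thread does a long computation while the
 * main thread stays responsive to user input. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static volatile int done = 0;   /* simplification: real code would use an atomic or mutex */
static double result;

static void *render_worker(void *arg)
{
    (void)arg;
    double sum = 0.0;
    for (long i = 1; i < 500000000L; i++)   /* stand-in for a long render */
        sum += 1.0 / (double)i;
    result = sum;
    done = 1;
    return NULL;
}

int main(void)
{
    pthread_t worker;
    pthread_create(&worker, NULL, render_worker, NULL);

    char line[128];
    printf("Type commands while the computation runs ('quit' to stop waiting):\n");
    while (fgets(line, sizeof line, stdin) != NULL) {   /* the "event loop" */
        if (strncmp(line, "quit", 4) == 0)
            break;
        printf("still responsive; computation %s\n", done ? "finished" : "in progress");
    }

    pthread_join(worker, NULL);   /* wait for the worker before printing the result */
    printf("result = %f\n", result);
    return 0;
}
```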
The first versions of MacOS were designed to run on a single CPU and had no real support for multithreading. This forced every application developer to cooperate manually: even if their app did not have any extended computations, they had to explicitly indicate when they were done using the CPU, e.g. by calling WaitNextEvent. This cooperative model made early (pre-MacOS-X) versions of MacOS famously unreliable at multitasking, since just one poorly written application could bring the whole computer to a grinding halt.
First, a program does not only compute; it also waits for input/output, and that I/O is effectively carried out by separate hardware. So even a single-core machine is, in a sense, a multi-processor machine, and employing multithreading is justified.
Second, a task can be divided into several threads for the sake of modularity.
Multithreading is not only for taking advantage of multiple cores.
You need multiple processes for multitasking. For a similar reason you are allowed to have multiple threads, which are lightweight compared with processes.
You probably don't want to spawn processes all the time for things like blocking I/O. That may be overkill.
And there are fibers, which are even more lightweight. So we have processes, threads, and fibers for different levels of need.
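For a taste of how lightweight a fiber can be, here is a sketch using the POSIX ucontext API (makecontext/swapcontext, obsolescent but still available on Linux/glibc); the two contexts hand control back and forth entirely in user space, with no kernel scheduling involved. The names here are just for illustration:

```c
/* Sketch: a single "fiber" cooperatively yielding to and resuming from main. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, fiber_ctx;
static char fiber_stack[64 * 1024];        /* the fiber's private stack */

static void fiber_func(void)
{
    puts("fiber: hello");
    swapcontext(&fiber_ctx, &main_ctx);    /* yield back to main */
    puts("fiber: resumed, finishing");
    /* returning ends the fiber; control passes to uc_link (main) */
}

int main(void)
{
    getcontext(&fiber_ctx);
    fiber_ctx.uc_stack.ss_sp = fiber_stack;
    fiber_ctx.uc_stack.ss_size = sizeof fiber_stack;
    fiber_ctx.uc_link = &main_ctx;         /* where to go when the fiber returns */
    makecontext(&fiber_ctx, fiber_func, 0);

    puts("main: switching to fiber");
    swapcontext(&main_ctx, &fiber_ctx);
    puts("main: back in main, resuming fiber");
    swapcontext(&main_ctx, &fiber_ctx);
    puts("main: done");
    return 0;
}
```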
Well, when you say multithreading on a single core, there are things you need to consider. For example, is the thread API that you are using user-level or kernel-level? From your question, I believe you are most probably using user-level threads.
Now, user-level threads, depending on the host OS or the API itself, may map to a single kernel thread or to multiple ones. Many mappings are possible, such as 1:1, many:1, or many:many.
Now, even if there is a single core, your OS can still provide several kernel-level threads, which look like separate schedulable tasks to the CPU. In that case, the OS time-slices (and multi-programs) those kernel threads, giving fast context switches, and through the user-level API your code appears to be multithreaded.
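A small Linux-specific sketch of that time-slicing: pin the whole process to one CPU with sched_setaffinity, then run two busy kernel threads; both still make progress because the scheduler slices the single core between them. The thread names and print thresholds are just for illustration:

```c
/* Sketch (Linux): restrict the process to CPU 0, then run two busy threads.
 * Build with: gcc -O2 -pthread single_core.c -o single_core */
#define _GNU_SOURCE
#include <sched.h>
#include <pthread.h>
#include <stdio.h>

static void *worker(void *name)
{
    volatile unsigned long n = 0;
    for (;;) {
        n++;
        if ((n & 0x3FFFFFF) == 0)
            printf("%s: n=%lu\n", (const char *)name, n);
    }
    return NULL;
}

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                          /* only CPU 0 allowed */
    if (sched_setaffinity(0, sizeof set, &set) != 0)
        perror("sched_setaffinity");

    pthread_t a, b;
    pthread_create(&a, NULL, worker, "thread A");
    pthread_create(&b, NULL, worker, "thread B");
    pthread_join(a, NULL);                     /* never returns; both threads interleave forever */
    pthread_join(b, NULL);
    return 0;
}
```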
Also note that even though your processor is a single core, depending on the make it can be hyper-threaded and have deep pipelines, allowing kernel threads to run concurrently with very low overhead.
For references: check the Intel/AMD architecture documentation and how various OSes provide kernel threads.

How is multitasking performed in operating systems?

How is process-based multitasking achieved by using multi-threading in each process?
For example, consider an operating system running two background processes, each of which internally uses multiple threads. Now, how does time slicing happen between and inside these processes, and how does time slicing happen between threads?
Look at publications by this man: http://en.wikipedia.org/wiki/Andrew_S._Tanenbaum
Or just feed your query into Google. There are many ways to skin the multi-tasking/multi-threading cat.
Come back when you have at least tried to find your own answers and ask some more specific questions.
One possible implementation is that the OS just schedules threads. When it switches to a thread, it obviously switches in the address space of the process the thread belongs to, but from a scheduling viewpoint the process is pretty much ignored (e.g., Windows works this way).

Parallel software?

What is the meaning of "parallel software" and what are the differences between "parallel software" and "regular software"?
What are its advantages and disadvantages?
Does writing "parallel software" require specific hardware or a specific programming language?
Does "parallel software" require specific hardware or a particular programming language?
Yes and Yes.
The first one is trivially easy. Most modern CPUs (say, anything newer than the Motorola 6800) have hardware features that make it possible to work on more than one thing at a time, though not necessarily on both at the same instant. For instance, when a timer interrupt goes off, a CPU can save what it's doing and then start doing something else. Those tasks run concurrently.
Even without that, you could just get two machines with some sort of connection to each other (like a simple serial connection via a Null modem adapter) and they can both work on the same task in parallel.
Most new (not just modern but recent) CPUs have parallel computing resources built in. These multi-core CPUs can actually be working on two or more tasks at the same time, one task per core, and have special features that make it a bit more efficient for those tasks to cooperate.
The second one, requiring special software tools such as a parallel-enabled language, is in some ways the hardest part of parallel computing. If you're the only person in the kitchen, it's pretty easy to cook a meal by following each recipe from start to finish, one after the next, until all dishes are cooked. If you want to speed that up by adding more cooks, you have to be a bit more careful not to step on each other's toes.
The simplest way this is handled is by using a threading library that offers some tools so that multiple tasks can arrange not to clobber each other. This is not as easy as flagging a program as parallel and letting the system take care of the rest; rather, you have to write each task to coordinate with every other task at every place where there is a possibility of them interfering.
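As a minimal sketch of that kind of coordination, assuming POSIX threads: two workers increment a shared counter under a mutex, so their updates cannot clobber each other (remove the lock/unlock calls and the final total will usually come out short):

```c
/* Sketch (POSIX threads): two threads update shared data under a mutex. */
#include <pthread.h>
#include <stdio.h>

static long shared_total = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *add_work(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* only one thread in here at a time */
        shared_total += 1;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, add_work, NULL);
    pthread_create(&t2, NULL, add_work, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("total = %ld (expected 2000000)\n", shared_total);
    return 0;
}
```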
http://en.wikipedia.org/wiki/Thread_(computer_science)
In computer science, a thread of execution results from a fork of a computer program into two or more concurrently running tasks. The implementation of threads and processes differs from one operating system to another, but in most cases, a thread is contained inside a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources.
Most modern programming languages support multithreading in one way or another (even JavaScript in its newest versions). :-)
Advantages and Disadvantages can depend on the task. If you have a lot of processing that you need to do, then multithreading can help you break that up into smaller units of work that each CPU can work on independently at the same time. However, multithreaded code will generally be more complex to write and maintain than single threaded code.
You can still write/run multithreaded code on a machine that has only one processor. Although there will only be one processor to execute the tasks, the operating system gives the appearance that they happen simultaneously by rapidly switching context and executing a few instructions for each thread at a time.
Some specialized hardware you may be familiar with which does parallel tasks is the GPU which can be found on most new computers. In this video, the Mythbusters demonstrate the difference between drawing on a single-threaded CPU, and a multi-threaded GPU:
http://www.youtube.com/watch?v=XtGf0HaW7x4&feature=player_embedded
Parallel software can natively take advantage of multiple cores/CPUs on a computer, or sometimes of multiple computers. Examples include graphics rendering software and circuit design software.
Not so sure about disadvantages, other than that multi-processor-aware software tends to be a CPU hog.

Developing Kernels to support Multiple CPUs

I am looking to get into operating system kernel development and figured my contribution would be to extend the SANOS operating system in order to support multiple-core machines. I have been reading books on operating systems (Tanenbaum) as well as studying how BSD and Linux have tackled this challenge, but am still stuck on several concepts.
Does SANOS need to have more sophisticated scheduling algorithms when it runs on multiple CPUs or will what is currently in place work fine?
I know that it is a good idea for threads to have affinity to a core that they were started on, but is this handled via scheduling or by changing the implementation of how threads are created?
What would need to be considered such that SANOS could run on a machine with hundreds of cores? From what I can tell, BSD and Linux at best only support a maximum of a dozen cores.
Your reading material is good. So no problems there. Also take a peek at the downloadable CS lectures on operating system design from Stanford.
The scheduling algorithm may need to be more sophisticated. This depends on the types of applications running and how greedy they are. Do they yield themselves, or are they forced to? That kind of thing. This is more a question of what your processes want, or expect. An RTOS will have more complex scheduling than a desktop.
Threads should have an affinity to one core, because two threads in one process can execute in parallel on different cores, but not truly at the same instant on the same core. Putting them on different cores allows them to really run in parallel, and caching can also be optimized around core affinity. This is really a mix of your thread implementation and your scheduler. The scheduler may want to ensure that a process's threads are started at the same time on different cores, rather than ad hoc, to reduce the amount of time threads spend waiting on each other. If your thread library is user-space, maybe it assigns the core, or maybe it lets the scheduler decide based on capacity or on which threads have recently exited.
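A sketch of explicit per-thread affinity, assuming Linux and the non-portable GNU extension pthread_attr_setaffinity_np (the create_pinned helper and worker names are made up for illustration):

```c
/* Sketch (Linux, non-portable): start each worker thread already pinned to
 * its own core, so its cached data stays warm on that core.
 * Build with: gcc -pthread affinity.c -o affinity */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *work(void *arg)
{
    /* Report which core the thread landed on; cache-sensitive work would go here. */
    printf("worker %ld running on CPU %d\n", (long)arg, sched_getcpu());
    return NULL;
}

/* Hypothetical helper: create a thread whose affinity mask contains only 'core'. */
static int create_pinned(pthread_t *t, int core, long id)
{
    cpu_set_t set;
    pthread_attr_t attr;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof set, &set);
    int rc = pthread_create(t, &attr, work, (void *)id);
    pthread_attr_destroy(&attr);
    return rc;
}

int main(void)
{
    pthread_t t0, t1;
    create_pinned(&t0, 0, 0);   /* pin first worker to core 0 */
    create_pinned(&t1, 1, 1);   /* pin second worker to core 1 */
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}
```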
Scalability is often a kernel limit (which can be arbitrary). In Linux, if I recall correctly, the limits come from the static sizing of the arrays that hold per-CPU information structs in the scheduler; hence they are a fixed size, which can be changed by recompiling the kernel. Most good scheduling algorithms will support a very large number of cores. As your core or processor count gets higher, you need to be careful that you don't fragment a process's execution too much. If a program has two threads, try to schedule them in close time proximity, because causation may exist (through shared data) between them.
You also need to decide how your threads are implemented, and how a process is represented (be it heavy or lightweight) in the kernel. Are threads kernel managed? user-space managed? These things all have an impact on scheduler design. Look at how POSIX threads are implemented in various operating systems. There is just so much for you to think about :)
In short, there are not really any clear-cut answers as to where the logic does, or should, reside. It all comes down to design, application expectations, time constraints (on the programs), and so on.
Hope this helps, I am not an expert here however.

Is there an advantage of the operating system understanding the characteristics of how a thread may be used?

Is there an advantage of the operating system understanding the characteristics of how a thread may be used? For example, what if there were a way in Java when creating a new thread to indicate that it would be used for intensive CPU calculations vs will block for I/O. Wouldn't thread scheduling improve if this were a capability?
I'm not sure what you're actually expecting the OS to do with the information that a thread is I/O-bound or compute-bound. The things which actually make the most difference to how threads get scheduled (i.e. thread priority and thread CPU affinity) are already exposed by APIs (and support for NUMA aspects is starting to appear in mainstream OS APIs too).
If by a "compute thread" you mean something doing background processing that is less important than the GUI thread (from the point of view of maintaining app responsiveness), probably the most useful thing you can do is lower the priority of the compute threads a little.
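For example, on Linux a background compute thread can demote itself. Here is a sketch assuming POSIX threads and the Linux-specific SCHED_IDLE policy (SCHED_BATCH is a milder alternative); the workload is a stand-in, and the main thread plays the role of the GUI thread:

```c
/* Sketch (Linux-specific): mark a background compute thread as lower priority
 * than the normal-priority main/GUI thread by switching it to SCHED_IDLE,
 * which only runs when nothing else wants the CPU. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *compute(void *arg)
{
    (void)arg;
    /* Demote this thread; SCHED_IDLE requires sched_priority == 0. */
    struct sched_param param = { .sched_priority = 0 };
    int rc = pthread_setschedparam(pthread_self(), SCHED_IDLE, &param);
    if (rc != 0)
        fprintf(stderr, "pthread_setschedparam failed: %d\n", rc);

    volatile double x = 0.0;
    for (long i = 1; i < 200000000L; i++)   /* stand-in for the expensive calculation */
        x += 1.0 / (double)i;
    printf("compute thread finished\n");
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, compute, NULL);
    /* The main ("GUI") thread keeps the default SCHED_OTHER policy and so
     * wins the CPU whenever it has work to do. */
    pthread_join(t, NULL);
    return 0;
}
```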
That's what OS processes do. The OS has sophisticated scheduling for the processes. The OS tracks I/O use and CPU use and dynamically adjusts priorities so that CPU-intensive processing doesn't interfere with I/O.
If you want those features, use a proper OS process.
Is that even necessary? Threads blocking on I/O will cause CPU-intensive threads to run. The operating system decides how to schedule threads. AFAIK there's no way to give any hints with Java.
Yes, it is very important to understand them, especially if you are one of those architects who like opening lots of threads, especially on Windows.
Jeff Richter over at Wintellect has a library called PowerThreading. It is very useful if you are developing applications on .NET, but since you are talking about Java, it is still better to understand OS threads, kernel threading models, and how interrupts work.
