Context Switch: Thread vs Process - multithreading

From what I understand, scheduling is based on processes, not threads. Let's say I'm running two programs with the same logic, but one with multi-processing (10 processes) and the other with multi-threading (10 threads). Since scheduling is based on processes, wouldn't the multi-processing program dominate 10/11 of the CPU time, while the multi-threaded program gets only 1/11 of the CPU time, with its 10 threads sharing that tiny slice?
What am I missing?
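For concreteness, here is a rough sketch (POSIX fork/pthreads; the busy-work loop and the command-line switch are made up for illustration) of the two programs being compared. As the answers collected below explain, on Linux the scheduler ends up seeing 10 runnable tasks in both cases, so neither program dominates the other.

    /* Sketch only: run 10 processes if any argument is given, otherwise 10 threads.
       On Linux each pthread is its own schedulable task, so the scheduler sees
       10 runnable tasks either way. */
    #include <pthread.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static void *busy(void *arg) {
        volatile unsigned long n = 0;
        for (unsigned long i = 0; i < 1000000000UL; i++) n += i;  /* CPU-bound work */
        return NULL;
    }

    int main(int argc, char **argv) {
        if (argc > 1) {                          /* e.g. "./demo procs" -> 10 processes */
            for (int i = 0; i < 10; i++)
                if (fork() == 0) { busy(NULL); _exit(0); }
            while (wait(NULL) > 0) ;             /* reap the children */
        } else {                                 /* "./demo" -> 10 threads */
            pthread_t t[10];
            for (int i = 0; i < 10; i++) pthread_create(&t[i], NULL, busy, NULL);
            for (int i = 0; i < 10; i++) pthread_join(t[i], NULL);
        }
        return 0;
    }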

Related

How "threads" get CPU and time slice?

Kindly help me understand the following 'thread' concepts:
If concurrently running threads are part of a running process, how is the time slice divided between multiple threads of the same process?
Also, since no new Process Control Block is created, how do they get their share of CPU allocation? Is it that the dispatcher lets the TCB access the CPU?
That's the operating system scheduler's job. The OS has a pool of active threads and implements a scheduling algorithm to make sure each thread is given an amount of CPU time to run. For example, Linux uses the Completely Fair Scheduler (CFS).
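As a rough illustration of that answer (Linux-specific; the thread count is arbitrary): each pthread shows up as its own kernel task with its own thread ID, and those tasks are what the scheduler actually hands CPU time to, independently of the parent process.

    /* Sketch: every thread reports the same PID but a distinct TID. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static void *report(void *arg) {
        /* Same PID for every thread, but a distinct kernel task ID per thread. */
        printf("pid=%ld tid=%ld\n", (long)getpid(), (long)syscall(SYS_gettid));
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, report, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        return 0;
    }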

User-level threads for threading

The Tanenbaum OS book mentions the following:
"in user level threads, if a thread starts running, no other thread in that process will ever run unless the first thread voluntarily gives up the CPU".
That means the threads are going to run one after the other (sequentially), not in parallel. So what is the advantage of user-level threads?
There are two approaches to multitasking in a single-process, multiple-thread environment:
1. A single thread executes in the process's time slice, and that thread takes care of scheduling the other threads.
2. The OS makes the scheduling decisions for the process's threads and might run them in parallel on different cores.
You are talking about approach 1. Yes, it offers none of the parallelism of multi-threading, but it lets many threads/programs run one after another and gives you "multitasking" (virtually).
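A minimal sketch of approach 1, using POSIX ucontext as a stand-in for a user-level thread library (the step counts and stack sizes are arbitrary): the two "threads" run strictly one after the other and only switch when they voluntarily yield, exactly as the Tanenbaum quote describes.

    #include <ucontext.h>
    #include <stdio.h>

    static ucontext_t main_ctx, t1_ctx, t2_ctx;
    static char stack1[64 * 1024], stack2[64 * 1024];

    static void thread1(void) {
        printf("thread1: step 1\n");
        swapcontext(&t1_ctx, &t2_ctx);   /* voluntary yield to thread2 */
        printf("thread1: step 2\n");
    }                                    /* returning resumes uc_link (main) */

    static void thread2(void) {
        printf("thread2: step 1\n");
        swapcontext(&t2_ctx, &t1_ctx);   /* voluntary yield back to thread1 */
        printf("thread2: step 2\n");
    }

    int main(void) {
        getcontext(&t1_ctx);
        t1_ctx.uc_stack.ss_sp = stack1;
        t1_ctx.uc_stack.ss_size = sizeof stack1;
        t1_ctx.uc_link = &main_ctx;
        makecontext(&t1_ctx, thread1, 0);

        getcontext(&t2_ctx);
        t2_ctx.uc_stack.ss_sp = stack2;
        t2_ctx.uc_stack.ss_size = sizeof stack2;
        t2_ctx.uc_link = &main_ctx;
        makecontext(&t2_ctx, thread2, 0);

        swapcontext(&main_ctx, &t1_ctx); /* run until thread1 finishes */
        swapcontext(&main_ctx, &t2_ctx); /* let thread2 finish its last step */
        return 0;
    }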

How does the Linux scheduler schedule processes on multi-core processors?

Multi-core processors exploit thread-level parallelism, meaning that multiple threads run in parallel. Suppose a process has only one thread; do the other cores then remain idle while this process executes? On a Linux system, the scheduler considers both processes and threads to be tasks and does not differentiate between them when scheduling. So does this mean that different cores execute different threads of different processes in parallel?
When a context switch happens, does it happen only on one core or on all the cores of the CPU?
You are right: processes and threads are the same from the Linux scheduler's point of view. These tasks are queued according to the scheduler's rules and wait for their turn.
There are scheduling rules such as priority or CPU affinity (to prevent a thread from migrating to another core, which preserves its cache data).
A context switch may happen on a core every fixed amount of time (a time slice) because the CPU automatically runs some kernel code periodically to permit preemption. Depending on the scheduler's rules, a task can run for many time slices. A context switch can also occur when a thread calls a function that makes it unrunnable (e.g. waiting for I/O).
In some cases, if not all, there is one scheduling routine per core which does all of that.
There is also a similar question on superuser
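As a rough illustration of the CPU-affinity rule mentioned above (Linux/glibc-specific, using pthread_attr_setaffinity_np; the choice of core 0 is arbitrary), a thread can be pinned to one core so the scheduler will not migrate it elsewhere:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *work(void *arg) {
        printf("running on CPU %d\n", sched_getcpu());  /* should print 0 */
        return NULL;
    }

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                                /* allow only core 0 */

        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof(set), &set);

        pthread_t t;
        pthread_create(&t, &attr, work, NULL);           /* thread starts pinned */
        pthread_join(t, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }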

Does a process run threads in a sequential order?

The question is about multithreading. Say I have 3 threads: the main one, child1, and child2. Does the process executing these threads run them in an order where it works on one thread for a short amount of time, then works on another, and so on, constantly switching, or do the threads run without ever being stopped by the process? Somewhere I read that a thread gets stopped before finishing, then another thread is worked on and stopped, then it's back to thread1 and so forth. But that wouldn't make sense if threads get stopped, since the point of multithreading is that they are all concurrent and all run at the same time. How does the processor do that?
This is in .NET/C#.
The scenario you describe is the way the OS ran threads in the old days before multi-core.
The OS scheduled threads sequentially based on their priorities, but now... I suppose you have at least 2 cores where 2 threads can run concurrently, and the 3rd thread will be scheduled and will interrupt one of the others.
The scenario you're describing is correct, except that one thread will normally be running at any given time per processor core.
Simplified: if 3 threads are active on 4 cores, they will all always be allowed to run, since there's always an available core for them; whereas if 3 threads are active on 2 cores, only two can run at any time, so they will have to take turns.
Operating systems schedule threads to execute on the available CPU cores (either real or virtual). In the past, most computers had single core CPUs, and thus only one thread could be executed at a time. Modern CPUs are typically 2, 4, or 8 core systems. Some of these cores are virtual, like Intel's hyperthreading CPUs which have twice as many virtual cores as physical cores.
However, there are almost always more threads than CPU cores available, so the OS will prioritize all of the threads on the system in order to run them as efficiently as possible. The threads created by your process may or may not truly run in parallel over any given time span, but you should assume that they will.
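To make that concrete, here is a small sketch (POSIX/Linux; the loop sizes are arbitrary) that starts 3 CPU-bound threads and reports the online core count: with 4 cores all 3 can run at once, with 2 cores they must take turns. The question is about .NET/C#, but the behaviour being described here is the OS-level one.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *spin(void *arg) {
        long id = (long)arg;
        for (int i = 0; i < 3; i++) {
            volatile unsigned long n = 0;
            for (unsigned long j = 0; j < 200000000UL; j++) n += j;  /* busy work */
            printf("thread %ld is on CPU %d\n", id, sched_getcpu());
        }
        return NULL;
    }

    int main(void) {
        printf("online cores: %ld\n", sysconf(_SC_NPROCESSORS_ONLN));
        pthread_t t[3];
        for (long i = 0; i < 3; i++) pthread_create(&t[i], NULL, spin, (void *)i);
        for (int i = 0; i < 3; i++) pthread_join(t[i], NULL);
        return 0;
    }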

Threads inside a Process

Processes get CPU time as managed by the OS process scheduler.
Since threads run in parallel within a single process, does this mean that a process's CPU time is further distributed (sliced) among its threads?
Or can the scheduler distribute CPU time directly among threads, bypassing the parent process?
I suspect the answer varies with the OS. On Windows, the process is not merely bypassed, but completely ignored -- all the scheduler deals with is threads. Processes are relevant only to the degree that all non-kernel threads do have to belong to some process, and every process has to contain at least one thread.
The threads are run/scheduled by the operating system and therefore they get their own CPU time. The process CPU time is just the sum of the CPU times of all the threads in the process.
If you want your process to schedule the tasks itself, you should use fibers (on Windows). These are a kind of thread, but they are not scheduled by the OS; the process has to handle the scheduling of fibers itself.
For Windows see http://msdn.microsoft.com/en-us/library/ms681917%28VS.85%29.aspx
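A minimal sketch of the fiber idea (Win32 C API; the message string is made up): the OS scheduler never sees the fiber, and the program itself decides when to switch to it and back.

    #include <windows.h>
    #include <stdio.h>

    static LPVOID main_fiber;

    static VOID CALLBACK FiberProc(LPVOID param) {
        printf("fiber says: %s\n", (const char *)param);
        SwitchToFiber(main_fiber);                    /* hand control back explicitly */
    }

    int main(void) {
        main_fiber = ConvertThreadToFiber(NULL);      /* current thread becomes a fiber */
        LPVOID f = CreateFiber(0, FiberProc, (LPVOID)"hello from a fiber");

        SwitchToFiber(f);                             /* the process, not the OS, schedules this */
        printf("back in the main fiber\n");

        DeleteFiber(f);
        return 0;
    }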
