Does a process run threads in a sequential order?

The question is about multithreading. Say I have 3 threads: the main one, a child1, and a child2. Does the process executing these threads work on one thread for a short amount of time, then switch to another, and keep switching back and forth, or do the threads run without ever being stopped? Somewhere I read that a thread gets stopped before it finishes, then another thread is worked on and stopped, then it's back to thread1, and so on. But that wouldn't make sense if threads get stopped, since I thought the point of multithreading was that they are all concurrent and all run at the same time. How does the processor do that?
This is in .Net/C#.

The scenario you describe is the way the OS ran threads in the old days, before multi-core CPUs: the OS scheduled threads sequentially based on their priorities. But nowadays you presumably have at least 2 cores, where 2 threads can run concurrently, and the 3rd thread will be scheduled by interrupting one of the other two.

The scenario you're describing is correct, except that normally one thread will be running at any given time per processor core.
Simplified: if 3 threads are active on 4 cores, all of them are always allowed to run, since there is always an available core to run them on; if 3 threads are active on 2 cores, only two can run at any one time, so they have to take turns.

Operating systems schedule threads to execute on the available CPU cores (either real or virtual). In the past, most computers had single core CPUs, and thus only one thread could be executed at a time. Modern CPUs are typically 2, 4, or 8 core systems. Some of these cores are virtual, like Intel's hyperthreading CPUs which have twice as many virtual cores as physical cores.
However, there are almost always more threads than CPU cores available, so the OS will prioritize all of the threads on the system in order to run them as efficiently as possible. The threads created by your process may or may not truly run in parallel over any given time span, but you should assume that they will.
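For instance, here is a minimal C# sketch of the three-thread scenario from the original question (the Work method and the sleep interval are purely illustrative); the interleaving you see in the output is decided by the OS scheduler and depends on how many cores are free:

    using System;
    using System.Threading;

    class Program
    {
        static void Work(string name)
        {
            for (int i = 0; i < 3; i++)
            {
                Console.WriteLine($"{name}: step {i}");
                Thread.Sleep(10); // give the scheduler a chance to switch
            }
        }

        static void Main()
        {
            var child1 = new Thread(() => Work("child1"));
            var child2 = new Thread(() => Work("child2"));
            child1.Start();
            child2.Start();

            Work("main");  // the main thread keeps doing its own work

            child1.Join(); // wait for both children to finish
            child2.Join();
        }
    }

On a machine with at least 3 free cores, all three loops can genuinely run at the same time; with fewer cores, the OS time-slices them, which is exactly the stop-and-resume behavior described in the question.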

Scheduling and Synchronization in Multicore CPU and in Single core CPU

From what I understand from the top answers of this post (
https://stackoverflow.com/questions/16116952/can-multithreading-be-implemented-on-a-single-processor-system#:~:text=Yes%2C%20you%20can%20have%20multiple,one%20thing%20at%20a%20time.),
if I am only running one multithreaded program that creates 4 threads on a multicore CPU system with 4 cores, there is no need for scheduling, as all 4 threads of my program will run on individual cores (or microprocessors). But there may be a need for synchronization, since all 4 threads access the memory of the program (or process), which is stored in the same address space in main memory.
On the other hand,
on a single-core CPU computer, if I run the same program that creates 4 threads, I will need both synchronization and scheduling, since all threads must utilize the same core (or microprocessor).
Please correct my understanding if it is wrong.
there is no need for scheduling, as all 4 threads of my program will run on individual cores
This is not true in practice. The OS scheduler operates in both cases. Unless you pin threads to cores, threads can migrate from one core to another (see the affinity sketch at the end of this answer). In fact, even if you pin them, there are generally a few other threads on the machine that can be ready to run (e.g. the ssh daemon, tty sessions, graphical programs, kernel threads, etc.), so the OS still has to schedule them. There will be context switches, though far fewer than on a single processor.
there may be a need for synchronization, since all 4 threads access the memory of the program (or process), which is stored in the same address space in main memory
This is true. Note that threads can also work on different memory areas (so that there is no need for synchronization, except when they are joined). Note also that "main memory" includes the CPU caches here.
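For example, here is a minimal C# sketch of the kind of synchronization meant here (the counter and the thread count are illustrative): four threads increment a shared counter, and the lock makes the result deterministic:

    using System;
    using System.Threading;

    class Program
    {
        static readonly object Gate = new object();
        static long _count;

        static void Main()
        {
            var threads = new Thread[4];
            for (int i = 0; i < threads.Length; i++)
            {
                threads[i] = new Thread(() =>
                {
                    for (int j = 0; j < 100_000; j++)
                    {
                        lock (Gate) // only one thread may update _count at a time
                        {
                            _count++;
                        }
                    }
                });
                threads[i].Start();
            }
            foreach (var t in threads) t.Join();
            Console.WriteLine(_count); // always 400000 with the lock; often less without it
        }
    }

Without the lock, the unsynchronized increments race with each other and the final count regularly comes out below 400000.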
on a single-core CPU computer, if I run the same program that creates 4 threads, I will need both synchronization and scheduling, since all threads must utilize the same core (or microprocessor)
Overall, yes. That being said, the term "scheduling" is unclear. There are multiple kinds of scheduling: preemptive vs. cooperative. Here, as a programmer, you do not need to do anything special, since the scheduling is done by the OS. Thus, it is a bit unexpected to say that you "need" scheduling. The OS will schedule the threads on the same core using preemption (by allocating separate time slices to each thread on that core).
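To experiment with the pinning mentioned earlier, .NET exposes process-wide affinity (per-thread pinning needs OS-specific APIs); a hedged sketch, with the spin loop purely illustrative:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class Program
    {
        static void Spin(string name)
        {
            double x = 0;
            for (int i = 0; i < 50_000_000; i++) x += Math.Sqrt(i);
            Console.WriteLine($"{name} done ({x:F0})");
        }

        static void Main()
        {
            // Pin the whole process to logical core 0 (the value is a bitmask,
            // one bit per logical core). Works on Windows and Linux; on macOS
            // it throws PlatformNotSupportedException.
            Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)1;

            // Even pinned to one core, both threads finish: the OS preemptively
            // time-slices them on that single core.
            var t1 = new Thread(() => Spin("t1"));
            var t2 = new Thread(() => Spin("t2"));
            t1.Start(); t2.Start();
            t1.Join(); t2.Join();
        }
    }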

How does multithreading utilize multiple cores?

So recently I've learned some basic knowledge about multithreading. What I've understood is that a thread is a lightweight process that runs under a process by sharing its memory, while one process runs under one CPU core.
Yet from this perspective I couldn't understand the claim that threads utilize multiple cores and make the whole program execute more effectively. From what I know, threads created by one process should run only under that specific process, which means they should run under only that one CPU core. If we want to utilize multiple cores, we should actually use multiple processes to run in parallel. Most of what I've found only states the conclusion, i.e. that multithreading utilizes multiple cores, but none of it answers my question. Did I get something wrong? Thanks!
Your confusion lies here:
[...] while one process runs under one CPU core.
[...] threads created by one process should run only under that specific process, which means they should run under only that one CPU core.
This is not true. I think what the various explanations you have read meant is that any process has at least one thread (where a "thread" is a sequence of instructions run by a CPU core).
If you have a multithreaded program, the process will have several threads (sequences of instructions run by a CPU core) that can run concurrently on different CPU cores.
There are many processes executing on your computer at any given time. The Operating System (OS) is the program that allocates the hardware resources (CPU cores) to all these processes and decides which process can use which cores for what amount of time before another process gets to use the CPU. Whether or not a process gets to use multiple cores is not entirely up to the process. More confusing still, multithreaded programs can use more threads than there are cores on the computer's CPU. In that case you can be certain that all your threads do not run in parallel.
One more thing:
[...] threads utilize multiple cores and make the whole program execute more effectively
I am going to sound very pedantic, but it is more complicated than that. It depends on what you mean by "effective": are we talking about total computation time, energy consumption, ...?
A sequential (1-thread) program may be very effective in terms of power consumption while taking a very long time to compute. If you are able to use multiple threads, you may be able to reduce that computation time, but it will probably incur new costs (synchronization between threads, additional protection mechanisms against concurrent accesses, ...).
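As an illustration of the computation-time side of that trade-off, here is a hedged C# sketch (the square-root workload and its size are made up) that splits a CPU-bound loop across one thread per core and times both versions:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class Program
    {
        // CPU-bound work: sum square roots over a range.
        static double Work(long from, long to)
        {
            double sum = 0;
            for (long i = from; i < to; i++) sum += Math.Sqrt(i);
            return sum;
        }

        static void Main()
        {
            const long N = 200_000_000;
            int cores = Environment.ProcessorCount;

            var sw = Stopwatch.StartNew();
            Work(0, N);
            Console.WriteLine($"1 thread:  {sw.ElapsedMilliseconds} ms");

            sw.Restart();
            var threads = new Thread[cores];
            long chunk = N / cores;
            for (int c = 0; c < cores; c++)
            {
                long from = c * chunk, to = (c + 1) * chunk;
                threads[c] = new Thread(() => Work(from, to));
                threads[c].Start();
            }
            foreach (var t in threads) t.Join();
            Console.WriteLine($"{cores} threads: {sw.ElapsedMilliseconds} ms");
        }
    }

On an otherwise idle 4-core machine the threaded version typically finishes close to 4 times faster, minus the cost of creating and joining the threads.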
Also, multithreading cannot help with certain tasks that fall outside the CPU realm. For example, unless you have some very specific hardware support, reading a file from the hard drive with 2 or more concurrent threads cannot be parallelized efficiently.

Process vs thread with example

I read articles on processes vs threads, but I am still not clear on the difference.
Suppose a process is using the CPU, doing some big calculation that takes 10 minutes. How will another process run at the same time, in parallel? On a single-core vs. a dual-core processor?
Same thing for threads: how will another thread run in parallel when the CPU is engaged with another thread?
How is context switching different for threads and for processes? I mean, both processes and threads use the same RAM, so what's the difference?
From my vague memory of Operating Systems I can offer you a little bit of help. First you have to know the difference between concurrent and simultaneous. They are not the same thing: simultaneous means both things occur at the same time, while concurrent means they appear to be running simultaneously, but in reality they're switching so fast you can't tell.
Processes and threads can be considered similar, but a big difference is that a process is much larger than a thread. For that reason, switching between processes is expensive: there is too much information in a process that has to be saved and reloaded each time the CPU decides to switch processes.
A thread, on the other hand, is smaller, and so it is better suited for switching. A process may have multiple threads that run concurrently, meaning not at the exact same time, but running together and switching between one another. The context switch here is cheaper because a thread doesn't have as much information to store and reload.
If you only have a single core, then you can only do concurrent execution, for the most part. Once you have multiple cores, you can have threads run on both cores and thus have simultaneous execution. It is up to the operating system to schedule when threads run, when processes run, when to switch, how to switch them, and so on. The operating system gives you the illusion that work is being done simultaneously when that is not always the case.
If you have more confusion feel free to comment.
A process is a thing very much tied to the Operating System (OS). A thread, in the simplest terms, is an executing program; one or more threads run in the context of a process. The Java Virtual Machine (JVM) is a process in your OS, and inside the JVM you can have multiple threads running concurrently.
The processor is a resource of your machine, like the memory. Your OS lets your processes share the available resources, in our simple case processors and memory.
When you develop in Java, all the processors in your machine are available resources.
When you design your solution, you can even have multiple Java processes (i.e. multiple JVMs), each running one or more threads. But this mostly depends on your problem.
The real difference between a process and a thread is that both have an executing program, but threads share the same memory. This lets your threads theoretically work on the same data, but you pay the price in complexity of concurrency and synchronisation.
Each CPU core only runs one thread at a time. However, the OS can stop and save a thread and load and run another very quickly (in as little as 0.0001 seconds). This gives the illusion that many threads are running at once, even though only one is running per core.
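A hedged C# sketch of that illusion (the spin loop is illustrative): with twice as many CPU-bound threads as logical cores, every thread still finishes, but the wall-clock time roughly doubles because the threads must take turns on the cores:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class Program
    {
        static void Spin()
        {
            double x = 0;
            for (int i = 0; i < 100_000_000; i++) x += Math.Sqrt(i);
        }

        static long Run(int count)
        {
            var sw = Stopwatch.StartNew();
            var threads = new Thread[count];
            for (int i = 0; i < count; i++)
            {
                threads[i] = new Thread(Spin);
                threads[i].Start();
            }
            foreach (var t in threads) t.Join();
            return sw.ElapsedMilliseconds;
        }

        static void Main()
        {
            int cores = Environment.ProcessorCount;
            Console.WriteLine($"{cores} threads: {Run(cores)} ms");
            // Twice as many CPU-bound threads as cores: they must share,
            // so the elapsed time roughly doubles even though all of them
            // appear to run "at once".
            Console.WriteLine($"{2 * cores} threads: {Run(2 * cores)} ms");
        }
    }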

Who schedules threads?

I have a question about scheduling threads. On the one hand, I learned that threads are scheduled and treated like processes in Linux, meaning they get scheduled like any other process using the conventional methods (for example, the Completely Fair Scheduler in Linux).
On the other hand, I also know that the CPU might switch between threads using methods like switch-on-event or fine-grained multithreading; for example, on a cache-miss event the CPU switches threads. But what if the scheduler doesn't want to switch the thread? How do they agree on one action?
I'm really confused between the two: who schedules a thread, the OS or the CPU?
Thanks a lot :)
The answer is both.
What happens is really fairly simple: on a CPU that supports multiple threads per core (e.g., an Intel CPU with Hyperthreading), the CPU appears to the OS as having some number of virtual cores. For example, an Intel i7 has 4 physical cores but looks to the OS like 8 cores.
The OS schedules 8 threads onto those 8 (virtual) cores. When it's time to do a task switch, the OS's scheduler looks through the threads and finds the 8 that are the most eligible to run (taking into account things like thread priority, time since they last ran, etc.).
The CPU has only 4 real cores, but those cores support executing multiple instructions simultaneously (and out of order, in the absence of dependencies). Incoming instructions get decoded and thrown into a "pool". Each clock cycle, the CPU tries to find some instructions in that pool that don't depend on the results of some previous instruction.
With multiple threads per core, what happens is basically that each actual core has two input streams to put into the "pool" of instructions it might be able to execute in a given clock cycle. Each clock cycle it still looks for instructions from that pool that don't depend on the results of previous instructions that haven't finished executing yet. If it finds some, it puts them into the execution units and executes them. The only major change is that each instruction now needs some sort of tag attached to designate which "virtual core" will be used to store results into--that is, each of the two threads has (for example) its own set of registers, and instructions from each thread have to write to the registers for that virtual core.
It is possible, however, for a CPU to support some degree of thread priority so that (for example) if the pool of available instructions includes some instructions from both input threads (or all N input threads, if there are more than two) it will prefer to choose instructions from one thread over instructions from another thread in any given clock cycle. This can be absolute, so it runs thread A as fast as possible, and thread B only with cycles A can't use, or it can be a "milder" preference, such as attempting to maintain a 2:1 ratio of instructions executed (or, of course, essentially any other ratio preferred).
Of course, there are other ways of setting up priorities as well (such as partitioning execution resources), but the general idea remains the same.
An OS that's aware of shared cores like this can also modify its scheduling to suit, such as scheduling only one thread on a pair of cores if that thread has higher priority.
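The hardware-level preference described above has a rough OS-level analogue that you can experiment with from C#: Thread.Priority hints the OS scheduler (this sketch is illustrative, and on an idle multi-core machine both threads may simply run on separate cores at full speed):

    using System;
    using System.Threading;

    class Program
    {
        static long _high, _low;
        static volatile bool _stop;

        static void Main()
        {
            var high = new Thread(() => { while (!_stop) _high++; })
                       { Priority = ThreadPriority.Highest };
            var low = new Thread(() => { while (!_stop) _low++; })
                      { Priority = ThreadPriority.Lowest };
            high.Start(); low.Start();

            Thread.Sleep(1000); // let them compete for a second
            _stop = true;
            high.Join(); low.Join();

            // On a busy (or single) core the high-priority thread gets far
            // more cycles; with idle cores both may progress equally.
            Console.WriteLine($"high: {_high:N0}, low: {_low:N0}");
        }
    }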
The OS handles scheduling and dispatching of ready threads (those that require CPU) onto cores, managing CPU execution in a similar fashion to the way it manages other resources. A cache miss is no reason to swap out a thread. A page fault, where the desired page is not loaded into RAM at all, may cause a thread to be blocked until the page gets loaded from disk; the memory-management hardware does that by generating a hardware interrupt to an OS driver that handles the page fault.

Threads vs Cores

Say I have a processor whose spec sheet says # cores = 4 and # threads = 4, without Hyper-threading support.
Does that mean I can run 4 simultaneous programs/processes (since a core is capable of running only one thread)?
Or does that mean I can run 4 x 4 = 16 programs/processes simultaneously?
From my digging, if there is no Hyper-threading, there will be only 1 thread (process) per core. Correct me if I am wrong.
A thread differs from a process. A process can have many threads. A thread is a sequence of commands that have a certain order, and a logical core can execute one such sequence of commands. The operating system distributes all the threads across the available logical cores, and if there are more threads than cores, the waiting threads are held in a queue and each core switches from one to another very fast.
It will look like all the threads run simultaneously, when actually the OS distributes CPU time among them.
Having multiple cores gives the advantage that fewer concurrent threads will be placed on any single core, and less switching between threads = greater speed.
Hyper-threading creates 2 logical cores on 1 physical core, and makes switching between threads much faster.
That's basically correct, with the obvious qualifier that most operating systems let you execute far more tasks simultaneously than there are cores or threads, which they accomplish by interleaving the execution of instructions.
A system with hyperthreading generally has twice as many hardware threads as physical cores.
The term thread is generally used to describe an operating-system concept: something that has the potential to execute independently of other threads. Whether it actually does so depends on whether it is stuck waiting for some event (disk or screen I/O, a message queue), and on whether there are enough physical CPUs (hyperthreaded or not) to allow it to run in the face of other, non-waiting threads.
Hyperthreading is a CPU vendor term that means a single core that can multiplex its attention between two computations. The easy way to think about a hyperthreaded core is as if you had two real CPUs, both slightly slower than what the manufacturer says the core can actually do.
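You can query the logical count from code; for example, in C#, Environment.ProcessorCount reports logical processors, so a 4-core CPU with hyperthreading typically reports 8:

    using System;

    class Program
    {
        static void Main()
        {
            // Logical processors = physical cores x hardware threads per core.
            Console.WriteLine($"Logical processors: {Environment.ProcessorCount}");
        }
    }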
Basically this is up to the OS. A thread is a high-level construct holding an instruction pointer, and the OS places a thread's execution on a suitable logical processor. So with 4 cores you can basically execute 4 instruction streams in parallel, whereas a thread simply contains information about which instructions to execute and where those instructions sit in memory.
An application normally uses a single process during execution, and the OS switches between processes to give all processes "equal" processor time. When an application deploys multiple threads, the process allocates more than one slot for execution but shares memory between the threads.
Normally you distinguish between concurrent and parallel execution. Parallel execution is when you actually, physically execute instructions on more than one logical processor, while concurrent execution is the frequent switching of a single logical processor, giving the appearance of parallel execution.
