Is multithreading time sharing or parallel processing?
Please explain what happens when the CPU is a multi-core processor.
I have a performance issue in my application. I have a critical thread running on a specific CPU core (its affinity is set to that core). It always performs the same task in a loop, but from time to time the loop takes longer than it should.
This might be due to the code of the application itself or to some external factor (like other processes running on the same CPU core).
I would like to be able to create some kind of timeline of what runs on this core (which process, which thread, and ideally which function).
What do you recommend I use to do this? (I am working on Linux.)
Thanks in advance,
Serb.
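One way to narrow this down before reaching for system-wide tracing is to timestamp each pass of the loop and log the iterations that overrun. A minimal sketch, where the 1 ms budget and the loop body are placeholders:

    #include <stdio.h>
    #include <time.h>

    /* Flag the loop iterations that take longer than they should.
     * The 1 ms budget is just an example threshold. */
    static const long BUDGET_NS = 1000000;

    int main(void) {
        struct timespec start, end;
        for (long i = 0; ; i++) {
            clock_gettime(CLOCK_MONOTONIC, &start);

            /* ... the critical task goes here ... */

            clock_gettime(CLOCK_MONOTONIC, &end);
            long ns = (end.tv_sec - start.tv_sec) * 1000000000L
                    + (end.tv_nsec - start.tv_nsec);
            if (ns > BUDGET_NS)
                fprintf(stderr, "iteration %ld overran: %ld ns\n", i, ns);
        }
        return 0;
    }

For the timeline itself (which process or thread preempted the loop), recording scheduler events, for example with perf sched or ftrace's sched_switch tracepoint, shows what ran on that core and when.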
From my understanding, multithreading means that under one process, multiple threads, each containing instructions, registers, a stack, etc.:
1. run concurrently on a single-core CPU;
2. run in parallel on a multi-core CPU (for example, 10 threads on a 10-core CPU).
And multiprocessing, I thought, means different processes running in parallel on a multi-core CPU.
Today, after reading an article, I started wondering whether I am wrong or the article is wrong.
https://medium.com/better-programming/is-node-js-really-single-threaded-7ea59bcc8d64
Multiprocessing is the use of two or more CPUs (processors) within a single computer system. Now, as there are multiple processors available, multiple processes can be executed at a time.
Isn't that the same as a multithreaded process running on a multi-core CPU?
What did I miss? Or maybe I don't fully understand multiprocessing.
Multiprocessing means running multiple processes according to the operating system's scheduling algorithm. Modern operating systems use some variation of time sharing to run user processes in a pseudo-parallel mode. When multiple CPUs are present, the OS can take advantage of them and run some processes truly in parallel.
Processes, in contrast to threads, are independent of each other with respect to memory and other process context. They can talk to each other using Inter-Process Communication (IPC) mechanisms. Shared resources can be allocated for processes, and accessing them requires process-level synchronization.
Threads, on the other hand, share the same memory and other process context. They can access the same memory locations and need to be synchronized using thread-synchronization techniques such as mutexes and condition variables.
Both threads and processes are scheduled by the operating system in a similar manner. So the quote you provided is not completely correct: you do not need multiple CPUs for multiprocessing, but you do need them for several processes to truly run at the same time. At most as many processes as there are cores can run simultaneously; the rest share the CPUs in a time-sharing manner.
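To make the thread side concrete, here is a minimal pthreads sketch (the names shared_counter and worker are just for illustration): two threads increment the same variable, and a mutex serializes their access to the memory they share.

    #include <pthread.h>
    #include <stdio.h>

    /* Both threads see the same variable because threads share
     * their process's memory; the mutex serializes access to it. */
    static long shared_counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);
            shared_counter++;              /* critical section */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Always 200000: the mutex prevents lost updates. */
        printf("shared_counter = %ld\n", shared_counter);
        return 0;
    }

Two cooperating processes would instead need an explicit IPC mechanism (a pipe, a socket, or a shared-memory segment) to exchange the same counter.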
I am trying to understand threading in Node.js and how it works.
Currently, what I understand:
Cluster:
- Built on top of child_process, but with incoming TCP connections distributed between the workers.
- Best for distributing/balancing incoming HTTP requests, while bad for CPU-intensive tasks.
- Works by taking advantage of the available CPU cores, by cloning the Node.js web server instance onto other cores.
child_process:
- Also makes use of the different available cores, but it is costly, since forking a child process allocates a whole new memory space for it.
- Forked processes can communicate with the parent process through events, and vice versa, but there is no direct communication between forked processes.
Worker threads:
- Same as child_process, but the spawned workers can communicate with each other using a SharedArrayBuffer.
1) Why are worker threads better than child processes, and when should we use each of them?
2) What would happen if we have 4 cores and cluster/fork the Node.js web server 4 times (one process per core), and then also use worker threads (when there are no more available cores)?
You mentioned under worker threads that they are the same in nature as child processes. In reality they are not.
A process has its own memory space; threads, on the other hand, use the shared memory space of their process.
A thread is part of a process, and a process can start multiple threads, which means that multiple threads started under a process share the memory space allocated for that process.
I guess the point above answers your first question: why the thread model is preferred over the process model.
For the second question: let's say the processor can handle a load of 4 threads at a time, but we have 16 threads; then all of them will start sharing the CPU time.
On a 4-core CPU, 4 processes with a limited number of threads can utilize it well, but when the thread count is high, all threads start sharing the CPU time. (When I say all threads will start sharing CPU time, I am not considering the priority and niceness of the processes, nor the other processes running on the same machine.)
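The memory-space difference is easy to demonstrate in any language that has both threads and processes; here is a minimal C sketch (the variable name value is arbitrary). A thread's write is visible afterwards because it shares the parent's address space, while a forked child's write only touches the child's own copy.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int value = 0;   /* lives in the process's memory */

    static void *thread_fn(void *arg) {
        value = 42;         /* same address space: main() sees this */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, thread_fn, NULL);
        pthread_join(t, NULL);
        printf("after thread: value = %d\n", value);  /* prints 42 */

        value = 0;
        pid_t pid = fork();
        if (pid == 0) {     /* child process */
            value = 42;     /* modifies only the child's copy */
            exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("after fork:  value = %d\n", value);   /* still 0 */
        return 0;
    }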
My quick search about time-slicing and CPU load sharing turned up:
https://en.wikipedia.org/wiki/Time-sharing
https://www.tutorialspoint.com/operating_system/os_process_scheduling_qa2.htm
The second article also explains how switching between processes can slow down overall performance.
Worker threads are similar in nature to threads in any other programming language.
You can have a look at this thread for an overall understanding of the difference between a thread and a process:
What is the difference between a process and a thread?
I'm a kernel noob, schedulers included. I understand that there is an I/O scheduler and a task scheduler, and according to this post the I/O scheduler uses normal tasks that are handled by the task scheduler in the end.
So if I run a user-space thread that was assigned to an isolated core (using isolcpus) and it performs some I/O operation, will the task created by the I/O scheduler get executed on the isolated core?
Since CFS seems to favor user interaction, does this mean that CPU-intensive threads might get less CPU time in the long run? Can isolating cores help mitigate this issue?
Can isolating cores decrease the scheduling latency (the time it takes for a thread that was marked as runnable to get executed) for the threads that are pinned to the isolated cores?
So if I run a user-space thread that was assigned to an isolated core (using isolcpus) and it performs some I/O operation, will the task created by the I/O scheduler get executed on the isolated core?

What isolcpus does is take that particular core out of the kernel's list of CPUs on which it can schedule tasks. Once you isolate a CPU this way, the kernel will never schedule any task on that core on its own, no matter whether the core is idle or is being used by some other process or thread.
Since CFS seems to favor user interaction, does this mean that CPU-intensive threads might get less CPU time in the long run? Can isolating cores help mitigate this issue?

Isolating CPUs serves a different purpose altogether, in my opinion. Basically, if your application has both fast threads (threads that make no system calls and are latency-sensitive) and slow threads (threads that make system calls), you want dedicated CPU cores for the fast threads so that they are not interrupted by the kernel's scheduling and can run to completion without any noise. On the other hand, slow threads, or threads that are not really latency-sensitive and just do supporting logic for your application, do not need dedicated cores. We do this all the time in our organization; a sketch of the setup follows.
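A minimal sketch, assuming the kernel was booted with isolcpus=3 (the core index is just an example): the fast thread has to be pinned explicitly, because after isolation the scheduler will no longer place anything on that core on its own.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    /* Pin the calling thread to one core, e.g. a core isolated
     * at boot with isolcpus=3 (the index is an example). */
    static int pin_to_core(int core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    static void *fast_thread(void *arg) {
        int rc = pin_to_core(3);
        if (rc != 0)
            fprintf(stderr, "pinning failed: %d\n", rc);
        /* ... latency-sensitive loop runs here, undisturbed ... */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, fast_thread, NULL);
        pthread_join(t, NULL);
        return 0;
    }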
Can isolating cores decrease the scheduling latency (the time it takes for a thread that was marked as runnable to get executed) for the threads that are pinned to the isolated cores?

Since you are taking CPUs out of the kernel's list, this will surely impact the other threads and processes. But then again, you want to put extra thought and attention into what your latency-sensitive code really is, and separate it from your non-latency-sensitive code.
Hope it helps.
Say I'm developing firmware for a smart thermostat in someone's home. The current implementation is a multithreaded solution running on a single-core processor (let's say a Cortex-M, since that's what I'm familiar with), and I'm using some off-the-shelf RTOS.
If I take that project and port it over to a dual-core or multi-core processor, how does that work? Do I just tell the RTOS which threads should run on each core, and the RTOS manages it all from there? Is there a certain amount of refactoring that needs to be done on each thread so that it works more efficiently in a multi-core environment? Or does the RTOS just take whatever thread is in the READY state and run it on a core with free time available?
Generally speaking, the fact that you're running on a multi-core machine shouldn't matter: it's up to the OS to schedule threads onto the available cores. Of course, your RTOS needs to support the multi-core platform!
There's a gotcha: if your code doesn't handle concurrency properly, and especially if it doesn't use memory barriers properly, you might run into bugs that were hidden by the fact that everything ran serially on one core. Once you toss a second core into the mix, any such bugs tend to surface, and usually they surface first during an important demo or after release. So design your code so that it is concurrency-bug-free by construction.
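As an illustration of the kind of bug that stays hidden on one core, here is a minimal C11 sketch (the names payload and ready are made up): a producer publishes data behind a flag. With plain variables, the two stores may be reordered on a multi-core machine, so the consumer can see the flag before the data; release/acquire atomics insert the needed memory barriers.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static int payload;               /* the data being published */
    static atomic_bool ready = false; /* guards the payload       */

    static void *producer(void *arg) {
        payload = 42;
        /* Release store: the payload write cannot be reordered
         * past this flag update, even across cores. */
        atomic_store_explicit(&ready, true, memory_order_release);
        return NULL;
    }

    static void *consumer(void *arg) {
        /* Acquire load: once the flag is observed, the payload
         * written before it is guaranteed to be visible. */
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;                              /* spin */
        printf("payload = %d\n", payload); /* always prints 42 */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }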