I have a misunderstanding of the difference between single-threaded and multi-threaded programming, so I would like an answer to the following question to make everything clear.
Suppose that there are 9 independent tasks and I want to accomplish them with a single-threaded program and a multi-threaded program. Basically it will be something like this:
Single-thread:
- Execute task 1
- Execute task 2
- Execute task 3
- Execute task 4
- Execute task 5
- Execute task 6
- Execute task 7
- Execute task 8
- Execute task 9
Multi-threaded:
Thread1:
- Execute task 1
- Execute task 2
- Execute task 3
Thread2:
- Execute task 4
- Execute task 5
- Execute task 6
Thread3:
- Execute task 7
- Execute task 8
- Execute task 9
As I understand it, only ONE thread will be executed at a time (i.e., it gets the CPU), and once its quantum is finished, the thread scheduler will give CPU time to another thread.
So, which program will finish earlier? Is it the multi-threaded program (as logic suggests)? Or is it the single-threaded program (since multi-threading involves a lot of context switching, which takes some time)? And why? I need a good explanation, please :)
It depends.
How many CPUs do you have? How much I/O is involved in your tasks?
If you have only 1 CPU, and the tasks involve no blocking I/O, then the single-threaded program will finish in the same time as or faster than the multi-threaded one, since there is overhead in switching threads.
If you have 1 CPU, but the tasks involve a lot of blocking I/O, you might see a speedup by using threading, assuming work can be done when I/O is in progress.
If you have multiple CPUs, then you should see a speedup with the multi-threaded implementation over the single-threaded one, since more than one thread can execute in parallel. Unless, of course, the tasks are I/O dominated, in which case the limiting factor is your device speed, not CPU power.
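To make the I/O-bound case concrete, here is a minimal Python sketch (the sleep-based task, the delay, and the 3-thread split are made-up stand-ins, not code from the question): nine tasks that each just block on "I/O" are run first sequentially and then on three threads. With blocking I/O like this, the threaded run finishes roughly three times sooner; if the tasks were pure computation on a single core, you would not see that gain.

```python
import threading
import time

def task(n):
    # Stand-in for a task dominated by blocking I/O (e.g. a network call).
    time.sleep(0.2)

# Single-threaded: execute the 9 tasks one after another.
start = time.perf_counter()
for i in range(9):
    task(i)
print("single-threaded:", time.perf_counter() - start)

# Multi-threaded: 3 threads with 3 tasks each, as laid out in the question.
start = time.perf_counter()
threads = [threading.Thread(target=lambda ts=ts: [task(t) for t in ts])
           for ts in ([0, 1, 2], [3, 4, 5], [6, 7, 8])]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("multi-threaded:", time.perf_counter() - start)
```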
As I understand, only ONE thread will be executed at a time
That would be the case if the CPU only had one core. Modern CPUs have multiple cores, and can run multiple threads in parallel.
The program running three threads would then run almost three times as fast. Even though the tasks are independent, there are still some resources in the computer that have to be shared between the threads, such as memory access.
Well, this isn't entirely language-agnostic. Some interpreted programming languages don't support real threads. That is, threads of execution can be defined by the program, but the interpreter is single-threaded, so all execution happens on one core of the CPU.
For compiled languages, and for languages that support true multi-threading, a single CPU can have many cores; in fact, most desktop computers now have 2 or 4 cores. So a multi-threaded program executing truly independent tasks can finish 2 to 4 times faster, depending on the number of available cores in the CPU.
Assumption Set:
Single core with no hyperthreading;
tasks are CPU bound;
Each task takes 3 quanta of time;
Each scheduler allocation is limited to 1 quantum of time;
FIFO, non-preemptive scheduler;
All threads hit the scheduler at the same time;
All context switches require the same amount of time;
Processes are delineated as follows:
Test 1: Single Process, single thread (contains all 9 tasks)
Test 2: Single Process, three threads (contain 3 tasks each)
Test 3: Three Processes, each single threaded (contain 3 tasks each)
Test 4: Three Processes, each with three threads (contain one task each)
With the above assumptions, they all finish at the same time. This is because an identical amount of CPU time is scheduled, the context switches are identical, there is no interrupt handling, and nothing is waiting for I/O.
For more depth into the nature of this, please find this book.
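To spell out the arithmetic behind that conclusion (a quick sketch; the context-switch cost below is a made-up constant, since the assumptions only require it to be identical everywhere): every configuration performs 9 tasks x 3 quanta = 27 one-quantum allocations, and with one switch between consecutive allocations the switch count is also the same, so the totals for Tests 1-4 come out equal regardless of how the tasks are grouped into threads or processes.

```python
# Back-of-the-envelope model of the assumption set above.
TASKS = 9
QUANTA_PER_TASK = 3
SWITCH_COST = 0.1   # hypothetical, in quanta; assumed identical for every switch

allocations = TASKS * QUANTA_PER_TASK   # 27 one-quantum slices, however they are grouped
switches = allocations - 1              # one switch between each pair of consecutive slices
total = allocations + switches * SWITCH_COST

print(total)  # the same number for Tests 1-4, since nothing here depends on the grouping
```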
The main difference between single-threaded and multi-threaded execution in Java is that in a single-threaded application one thread executes all the tasks of a process, while in a multi-threaded application multiple threads execute the tasks of a process.
A process is a program in execution. Process creation is a resource-consuming task. Therefore, it is possible to divide a process into multiple units called threads. A thread is a lightweight process. It is possible to divide a single process into multiple threads and assign tasks to them. When there is one thread in a process, it is called a single-threaded application. When there are multiple threads in a process, it is called a multi-threaded application.
Ruby vs. Python vs. Node.js: in a web app that performs a lot of non-blocking I/O (REST calls, DB queries), the concurrency model has a big impact, and Node.js, with its non-blocking, event-driven model, is the winner by a big margin.
Related
I have a computer with 1 CPU, 4 cores, and 2 hardware threads per core. So at most 8 threads can run efficiently at the same time.
When I write a program in C and create threads using the pthread_create function, how many threads is it recommended to create: 7 or 8? Do I have to subtract the main thread and thus create 7, or should the main thread not be counted, so I can efficiently create 8? I know that in theory you can create many more, like thousands, but I want to plan efficiently, according to my computer's architecture.
Which thread started which is not much relevant. A program's initial thread is a thread: while it is scheduled on an execution unit, no other thread can use that execution unit. You cannot have more threads executing concurrently than you have execution units, and if you have more than that eligible to run at any given time then you will pay the cost of extra context switches without receiving any offsetting gain from additional concurrency.
To a first approximation, then, yes, you must count the initial thread. But read the above carefully. The relevant metric is not how many threads exist at any given time, but rather how many are contending for execution resources. Threads that are currently blocked (on I/O, on acquiring a mutex, on pthread_join(), etc.) do not contend for execution resources.
More precisely, then, it depends on your threads' behavior. For example, if the initial thread follows the pattern of launching a bunch of other threads and then joining them all without itself performing any other work, then no, you do not count that thread, because it does not contend for CPU to any significant degree while the other threads are doing so.
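As a sketch of that last pattern (written in Python rather than C purely for illustration; the worker function and the one-worker-per-hardware-thread choice are assumptions, not part of the question): the initial thread spends essentially all of its time blocked in join(), so it does not contend for an execution unit and you can afford to start one worker per hardware thread.

```python
import os
import threading

def worker(i):
    # Placeholder for the real work each thread would do.
    print(f"worker {i} running")

# One worker per hardware thread reported by the OS (8 on the machine described above).
n = os.cpu_count() or 1
threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]

for t in threads:
    t.start()

# The initial thread now blocks here; while blocked, it does not compete for a core.
for t in threads:
    t.join()
```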
https://code.tutsplus.com/articles/introduction-to-parallel-and-concurrent-programming-in-python--cms-28612
From this link I have been studying, and I have a few questions:
Q1: How are the thread pool (concurrent) and plain threading different here? Why do we see the performance improvement? The threading-with-queue version has 4 threads, each of which runs cooperatively during idle time and picks an item from the queue once it gets a website response. As I see it, the thread pool is doing much the same thing: completing its work and waiting for the manager to assign a task, which is very similar to picking a new item from the queue. I'm not sure how this is different and why I see the performance improvement. It seems I'm interpreting the pooling wrongly here. Could you explain?
Q2: Using multiprocessing, the time taken is greater. If I have a multiprocessor that can handle multiple processes at a time, then all 4 of my processes should be handled at once; that is when real parallelization happens. I also have a question here: in such a case, since the 4 processes are running the same function, doesn't the GIL try to stop them from executing the same piece of code? Suppose all of them share a common variable that gets updated, like the number of websites checked. How does the GIL work in these cases of multiprocessing?
Also, are the same processes used again and again, or are they killed and created anew each time after their job? I think the same processes are reused. I also think the performance problem comes from process creation, which is costly compared to the lightweight threads used in the concurrent-threading phase. So could you explain in more detail how the GIL works here and how the processes run: do they run cooperatively (each process waiting for its turn, like threads within a process do), or do these processes use the multiple processors to run truly in parallel? My other question is: if I have an 8-core machine, I think I can run 8 threads of the same process simultaneously, in parallel. On that 8-core machine, can I run 2 processes with 4 threads each? Can I run 8 processes on 8 cores? I think cores are only for the threads of a single process, which would mean I can't run 8 processes on 8 cores, but I can run as many processes as I have CPUs in my multiprocessor system; am I right? So can I run 2 processes with 4 threads each on my 8-core machine with 2 processors, each having 4 cores?
Python has a rich set of libraries for multitasking with processes and threads. However, there is overlap between the libraries, and the choice depends on how abstractly you view the computational tasks. For example, the concurrent.futures library views threads as asynchronous tasks, while the threading library deals with them as high-level threads. Further, the _thread module implements a low-level interface for threading, exposing all the synchronization mechanisms.
The GIL (Global Interpreter Lock) is just a synchronization primitive, specifically a mutex that prevents multiple threads of the same process from executing Python bytecode at the same time (so that certain objects remain consistent under concurrent operations). This is exactly why Python threads excel with I/O operations in terms of speed, as opposed to compute-intensive tasks (the GIL is released during certain blocking calls and inside computationally intensive libraries such as numpy). Note that only the CPython and PyPy implementations of Python are constrained by the GIL mechanism.
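A quick way to observe that constraint (a minimal sketch; the counting function and loop size are arbitrary): a pure-Python, CPU-bound function run on two CPython threads takes about as long as running it twice in a row, because the GIL lets only one thread execute bytecode at a time.

```python
import threading
import time

def count(n=5_000_000):
    # Pure-Python, CPU-bound work: the GIL is held for essentially the whole loop.
    while n:
        n -= 1

start = time.perf_counter()
count()
count()
print("sequential:", time.perf_counter() - start)

start = time.perf_counter()
t1 = threading.Thread(target=count)
t2 = threading.Thread(target=count)
t1.start(); t2.start()
t1.join(); t2.join()
print("two threads:", time.perf_counter() - start)  # roughly the same, or worse, on CPython
```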
Now, let's see those questions...
How thread pool (Concurrent) and threading are different here? Why do we see the performance improvement?
Coming to the comparison between Threading and concurrent.futures.ThreadPoolExecutor (aka threading_squirrel vs future_squirrel), I've executed both programs with the same test case. There are two factors that contribute to this "performance improvement":
Network HEAD requests: Remember that network operations need not complete in the same time period every time you execute them... due to the very nature of packet transfer delays...
Order of thread execution: In the website you've linked, the author creates all threads initially, sets up the queue full of website links, and then starts all of them in a list-comprehension loop. In ThreadPoolExecutor from concurrent.futures, each time a task is submitted, a thread is assigned to it if the predefined maximum number of threads/workers has not been reached. I've changed the code to mirror this technique. It seems to give a speedup, as the first thread begins work early on and doesn't need to wait for the queue to be filled up...
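For reference, here is a minimal sketch of the ThreadPoolExecutor pattern being described (the URL list, worker count, and timeout are placeholders, not the tutorial's code): each submitted task is handed to a worker thread as soon as one is free, so the first HEAD requests go out before the whole work list has been dealt with.

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

urls = ["https://example.com", "https://example.org"]  # placeholder site list

def check(url):
    # Issue a HEAD request and report the HTTP status code.
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return url, resp.status

with ThreadPoolExecutor(max_workers=4) as pool:
    for url, status in pool.map(check, urls):
        print(url, status)
```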
How does GIL work in these cases of multiprocessing?
Remember that the GIL comes into effect only for the threads of a process, not between processes. The GIL locks the whole interpreter while one thread is executing bytecode, so the other threads have to wait for their turn. This is the reason multiprocessing uses processes instead of threads: each process has its own interpreter and consequently its own GIL.
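To illustrate the shared-counter scenario from Q2 (a sketch with invented names; not the tutorial's code): because each worker process has its own interpreter and its own GIL, a plain Python variable is simply not shared between them. Something like a count of websites checked has to go through an explicit multiprocessing primitive such as Value, which carries its own lock.

```python
from multiprocessing import Pool, Value

counter = None  # per-process reference to the shared counter, set by the initializer

def init(shared):
    global counter
    counter = shared

def check_site(url):
    # Each worker runs in its own process, with its own interpreter and its own GIL.
    with counter.get_lock():      # explicit lock on the shared value, not the GIL
        counter.value += 1
    return url

if __name__ == "__main__":
    shared = Value("i", 0)        # an integer living in shared memory
    sites = ["site-%d" % i for i in range(8)]
    with Pool(processes=4, initializer=init, initargs=(shared,)) as pool:
        pool.map(check_site, sites)
    print("sites checked:", shared.value)  # 8
```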
Are the same processes used again and again or they get killed and created every time after their job?
The concept of pooling is to reduce the overhead of creating and destroying workers (be they threads or processes) during the computation. However, the processes are kind of "brand new" in the sense that the library effectively asks the OS to perform a fork on a UNIX-based OS or a spawn on an NT-based OS...
Also, are the processes running co-operatively?
Maybe. They have to run in cooperation if they use shared memory... (they need not be running together). There is definitely going to be a context switch if there are more processes than the OS can allocate to its processors' cores. They can run in parallel if there are no shared-memory updates to make.
If I have the 8 core machine can I run 2 processes with 4 threads each? Can I run 8 processes on 8 cores?
Sure (subject to the GIL, in Python). Each process can be allocated to a processing unit for execution. A processing unit can be a physical or a virtual core of a CPU. As long as the OS scheduler supports it, it's possible. Any reasonable split of processes and threads is possible. If all of them can be allocated at once, that's the best situation; otherwise you will encounter context switches... (which are more expensive when it comes to processes).
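As a sketch of the "2 processes with 4 threads each" layout (the worker functions are invented; the counts are just the numbers from the question): each process starts its own threads, and the OS is free to place the resulting 8 threads on the 8 cores, although within each CPython process the GIL still serializes Python bytecode.

```python
from multiprocessing import Process
import threading

def thread_work(pid, tid):
    # Placeholder for the per-thread work.
    print(f"process {pid}, thread {tid}")

def process_work(pid):
    # Each process starts 4 threads of its own.
    threads = [threading.Thread(target=thread_work, args=(pid, t)) for t in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    procs = [Process(target=process_work, args=(p,)) for p in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```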
Hope I've answered all those questions!
Here are a few resources:
MultiCore CPUs, Multithreading and context switching?
Why does multiprocessing use only a single core after I import numpy?
Bonus celery-squirrel resource
Definitions:
Process (task): is a program in execution, e.g. Notepad.
Thread: a thread is a single sequence of instructions. A process consists of one or more threads (but only one can execute at a time).
According to the lecture, a single-core processor can run a single process (task) at a time. Only one thread can execute at a time, but the operating system achieves multithreading using time slicing (thread context switches). This thread switching happens frequently enough that the user perceives the threads as running at the same time (but they aren't running in parallel!), and it occurs inside one process. A process context switch is similar to a thread context switch, with the difference that it takes place between processes (for example between Media Player and Notepad) instead of between threads.
I'm not sure if this example is valid: take two processes, e.g. Notepad and Media Player, on a single-core processor. One can play music and write in Notepad at the same time, although the two processes aren't running in parallel (process context switching, or multitasking). Inside one process, e.g. Media Player, one can listen to music and create playlists at the same time, although the two threads aren't running in parallel (thread context switching, or multithreading).
1st question: Is my information above correct?
2nd question: Would the execution of threads on a multi-core processor look the same as on a single core, with the difference that threads of different processes can run in parallel? Is multithreading here the act of running multiple threads simultaneously on different processors, or the act of switching between threads on one core? The same question also applies to multitasking.
How would the process context switch and the thread context switch take place in this case?
3rd question: The professor used the term "single-threaded processor". Is this term another name for a single-core processor?
or
Several threads belonging to the same process can be executed on several CPU cores simultaneously. Time slicing still happens on multi-core systems: say you have a process with 20 threads running on a quad-core machine; the OS still has to schedule 21 threads (counting the initial one) to run on only 4 cores.
A single-threaded process runs on only one core at a time. But that doesn't mean it'll run on the same core until it exits. The OS might give it a time slice to run on core 1 now, pause it, and give it another time slice on core 2 later.
Note: I read a lot of books and googled enough before I decided to ask here.
Yes, you seem to have a good understanding of this topic (not sure whether it is really that interesting, though). However, you seem to overthink it. I suggest a simpler way of understanding the way it works on modern systems (it is the real wild west when you start to look back, with ideas such as lightweight processes, but I will not talk about that).
The process is a shell. Its only purpose in life is to provide an environment for threads. Only the threads are really executed; the process itself is never executed. A single process can host multiple threads within it, and when it hosts only one thread, one can say the process is executed, but that is simply a manner of speaking. A CPU can only execute a thread, not a process.
Your professor, as professors often do, makes misleading statements. There is no such thing as a single-threaded processor. There are single-core and multi-core processors, and those processors can be joined together to provide a multi-processor environment. From the application developer's perspective, a single CPU with 4 cores does not differ from 4 single-core CPUs. There are differences, of course, but usually not for the application developer.
Multitasking is a layman's term. It can mean whatever one wants it to mean, and is better avoided in non-specific contexts.
I hope I have clarified your confusion.
The answer to your questions is the following:
Q1: On a single-core processor, two tasks can't run in parallel in the sense of executing two (processor) instructions at the same time; the only possible form of multithreading is time slicing, realized by the task scheduler of the OS, so in that case you are approximately right. I would complete your view of the subject with the fact that nowadays almost no applications are single-threaded. I don't know whether Notepad uses multiple threads, but I'm pretty sure Media Player is multithreaded, and the task scheduler schedules time slices between threads, not processes. (Fun fact: even a single-threaded .NET application already runs 4-5 threads.)
Q2: The task scheduler on any system tries to spread the load across the available cores, so time slices will most likely work as you showed above, but if a process starts an additional thread, it will be executed on the core with the least load. Multiple cores also mean that multiple (processor) instructions can and will be executed at the same time.
Q3: In practice, "multithreaded processor" and "multi-core processor" mean something very similar, but not the same thing. For example, Intel Core i3/i5/i7 CPUs are equipped with an internal pseudo task scheduler (Hyper-Threading), which doubles the number of virtual cores by scheduling the execution of two threads on the same core; my i5 system, for example, has 2 cores but 4 hardware threads.
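A quick way to see that from Python (a tiny sketch; the value in the comment is what a 2-core, Hyper-Threaded i5 like the one mentioned above would report): os.cpu_count() returns the number of logical cores, i.e. hardware threads, not physical cores. Getting the physical-core count needs extra help, for example the third-party psutil library.

```python
import os

# Logical cores (hardware threads) that the OS scheduler can use.
print(os.cpu_count())  # e.g. 4 on a 2-core CPU with Hyper-Threading
```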
Most of your concepts seem valid, though expressed in non-standard terms.
Here is an explanation of what threads and processes are, and then of multithreading.
"A process is a running instance of a program" is true.
Before there were threads, resources were distributed only among processes.
Now processes have threads, so resources are distributed to threads, but the isolation stays at the process level, meaning two processes still need IPC to communicate with each other. You can think of threads as lightweight processes that can be scheduled by the operating system. Multithreading is an extension of multitasking, so if there is one core and two processes, one with two threads and one with four threads, the contention for the core is between 6 threads, not 2 processes.
For thread switches and process switches, see "thread context switch vs. process context switch".
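A small sketch of that isolation difference (the counter and the values in the comments are invented): two threads in the same process update the very same variable, while a child process only touches its own copy, so the parent has to use an IPC mechanism such as a Queue to get the result back.

```python
import threading
from multiprocessing import Process, Queue

counter = 0

def bump():
    global counter
    counter += 1      # threads share the process's memory, so this updates the one variable

def bump_in_child(q):
    global counter
    counter += 1      # this touches the child process's own copy only
    q.put(counter)    # IPC is needed to report the result back to the parent

if __name__ == "__main__":
    # Two threads in the same process: both increments land on the same variable.
    t1 = threading.Thread(target=bump)
    t2 = threading.Thread(target=bump)
    t1.start(); t2.start()
    t1.join(); t2.join()
    print(counter)    # 2

    # A separate process: the parent's counter stays untouched.
    q = Queue()
    p = Process(target=bump_in_child, args=(q,))
    p.start(); p.join()
    print(q.get())    # what the child saw (depends on the start method: fork vs. spawn)
    print(counter)    # still 2 in the parent: processes are isolated and need IPC
```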
For example, let us assume that in my operating system a context switch to another process occurs after 100 μs of execution time. Furthermore, my computer has only one processor, with only one thread of execution possible.
If I have Process A, which contains only one thread of execution, and Process B, which has four threads of execution, will this mean that the thread in Process A runs for 100 μs and Process B also runs for 100 μs but splits the execution time between each of its threads before context switching?
Process A: ran for 100 μs
Thread 1 in Process A execution time: 100 μs
Process B: ran for 100 μs
Thread 1 in Process B execution time: ~25 μs
Thread 2 in Process B execution time: ~25 μs
Thread 3 in Process B execution time: ~25 μs
Thread 4 in Process B execution time: ~25 μs
Would the above be correct?
Moreover, would this be different if I had a quad-core processor? If I had a quad-core processor, would this potentially mean each thread could run for 100 μs each, spread across the cores?
It all really depends on what you are doing within the process and what each thread is processing. If the work can benefit from being split across threads, for example making calls to a web service (since a web service can accept multiple calls at once and execute them separately), then no: the single thread will take longer to finish than the 4 threads, simply because it executes the calls sequentially instead of simultaneously.
On the other hand, if you are executing a process / code that does not benefit from thread splitting, then the time to finish all 4 processing threads will be the same on a single core.
However, in most cases, splitting the processing into threads should take less time than executing it on a single thread, if you do it right.
The matter of cores doesn't factor in here unless you are attempting to run more threads than one core can handle, in which case the OS will run the extra threads on a separate core.
This link explains the situation with cores and Hyper-Threading in a bit more detail...
http://www.howtogeek.com/194756/cpu-basics-multiple-cpus-cores-and-hyper-threading-explained/
Thread switches always happen on the same interval regardless of process ownership. So if it's 100 μs, then it's always 100 μs, unless of course the thread itself surrenders execution. When that thread is going to run again is where things get complicated.
The question is about multithreading. Say I have 3 threads: the main one, child1, and child2. Does the process executing these threads work on one thread for a short amount of time, then work on another, and so on, constantly switching, or do the threads run without ever being stopped? Somewhere I read that a thread gets stopped before it finishes, then another thread is worked on and stopped, then it goes back to thread1 and so forth. But that wouldn't make any sense if threads are stopped, since the point of multithreading is that they are all concurrent and all run at the same time; how does the processor do that?
This is in .Net/C#.
The scenario you describe is the way the OS ran threads in the old days, before multi-core CPUs: the OS scheduled threads sequentially based on their priorities. But now I suppose you have at least 2 cores, on which 2 threads can run concurrently, and the 3rd thread will be scheduled and will interrupt one of the others.
The scenario you're describing is correct, except that normally one thread will be running at any time per processor core.
Simplified: if 3 threads are active on 4 cores, they will always all be allowed to run, since there's always an available core to run them, while if 3 threads are active on 2 cores, only two can run at any time, so they will have to take turns.
Operating systems schedule threads to execute on the available CPU cores (either real or virtual). In the past, most computers had single core CPUs, and thus only one thread could be executed at a time. Modern CPUs are typically 2, 4, or 8 core systems. Some of these cores are virtual, like Intel's hyperthreading CPUs which have twice as many virtual cores as physical cores.
However, there are almost always more threads than CPU cores available, so the OS will prioritize all of the threads on the system in order to run them as efficiently as possible. The threads created by your process may or may not truly run in parallel over any given time span, but you should assume that they will.