Is a schedulable unit of CPU time slice process or thread? - multithreading

I want to clarify whether "a schedulable unit of CPU time slice" is "process" or "thread" (kernel managed thread). What I mean by "schedulable unit of CPU time slice" is the unit to which the CPU scheduler of an operating system allocates a CPU time slice.
According to "Short-term scheduling" on Wikipedia, "process" is used to refer to the schedulable unit.
"This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process"
Also, according to "Time slice" in wikepedia,
"The scheduler is run once every time slice to choose the next process to run."
Also, according to "Thread" in wikepedia,
"a process is a unit of resources, while a thread is a unit of scheduling and execution"
According to "Processes and Threads" in microsoft docs,
"A thread is the basic unit to which the operating system allocates processor time."
According to "Is thread scheduling done by the CPU, kernel, or both?" in quora,
"The CPU (hardware) just carries out instructions. The CPU itself has no concept of threads or scheduling, although there may be features in the CPU that support them.
"The operating system kernel (a set of instructions, aka software) executes on the CPU (hardware). A scheduling algorithm in the kernel of the operating system chooses which thread to execute next, and directs the CPU to begin executing the next instruction in that chosen thread".

Clarification: my understanding of "a schedulable unit of CPU time slice" is "a unit that can be scheduled during a given CPU time slice" (since if the "schedulable unit" were itself a length of time, the question would not make much sense to me).
Based on this, to put it shortly, "a schedulable unit of CPU time slice" for a given logical core can be seen as a software thread (more specifically its execution context, composed of registers and process information).
Operating system schedulers operate on tasks. Tasks can be threads, processes, or other more unusual structures (e.g. dataflows).
Modern mainstream operating systems mainly schedule threads on processing units (typically hardware threads, also called logical cores). You can get more information about how the Windows scheduler works in the Microsoft documentation. The documentation explicitly states:
A thread is the entity within a process that can be scheduled for execution
On Linux, the default scheduler, CFS, operates on tasks (i.e. the task_struct data structure). A task can be a thread, a group of threads, or a process. This was done to make the scheduler more generic, and also because this scheduler was designed long ago, when processors had only one core and people focused on processes rather than threads. The multi-core era since caused applications to use a lot of threads in order to use the available cores. As a result, nowadays it is generally threads that are actually scheduled, AFAIK. This is explained in the famous research paper The Linux Scheduler: a Decade of Wasted Cores (which also explains a bit about how CFS chooses the target processor).
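To see this from user space, here is a minimal sketch (assuming Linux with glibc and pthreads): each thread of a single process is its own kernel task with its own thread ID (TID), while all of them share the same PID.

/* Minimal sketch (Linux + glibc + pthreads): every pthread shows up to the
   kernel as its own schedulable task with its own TID, all sharing one PID. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *show_ids(void *arg)
{
    (void)arg;
    /* use syscall() directly since gettid() lacks a wrapper on older glibc */
    printf("pid=%ld tid=%ld\n", (long)getpid(), (long)syscall(SYS_gettid));
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, show_ids, NULL);
    pthread_create(&t2, NULL, show_ids, NULL);
    show_ids(NULL);               /* the main thread is a task too */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}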
Note that the term "process" can sometime refer to a thread since threads are sometime called "lightweight processes" and basic processes are sometime called "heavy processes". Processes can even be a generic term for both heavy and lightweight processes (ie. threads and actual processes). This is a very confusing terminology and a misuse of language (like the term "processors" sometimes used for cores). In practice, this is often not a problem in a specific context since threads and processes may be used interchangeably though (in such a case, people should use a generic term like "tasks").
As for "a schedulable unit of CPU time slice" this is a bit more complex. A simple and naive answer is: a thread (it is definitively not processes alone). That being said, a thread is a software-defined concept (like processes). It is basically a stack, few registers, and a parent process (with possibly some meta-information and a TLS space). CPUs does not operate directly on such data structure. CPU does not have a concept of thread stack for example (it is just a section of the virtual process memory like any other). They just need an execution context which is composed of registers and a process configuration (in protected mode). For sake of simplicity, we can say that they execute threads. Mainstream modern x86 processors are very complex, and each core is often able to run multiple threads at the same time. This is called simultaneous multithreading (aka. Hyper-Threading for Intel processors). x86 physical cores are typically composed of two logical threads (ie. hardware threads) that can each execute a software threads.

I think your misunderstanding is actually a misunderstanding of what the English words mean in this context.
A time slice is a period of time. Maybe it is a fraction of a second. Maybe a few seconds.
Threads and processes are effectively tasks that the computer is going to perform. (I am simplifying here. The notion of a task has multiple meanings, even in the IT context. And on a modern OS, a process is actually a collection of threads that share the same virtual memory address space.)
The CPU¹ or processor is the hardware that will run a (native) thread. A typical computer will have multiple CPUs. However, each CPU in a computer can only run one thread at a time.
The operating system therefore needs to schedule each of the threads it knows about to run on a specific CPU. The part of the operating system that does this is called the scheduler.
If there are more threads to run than CPUs to run them, the scheduler will typically schedule a thread to a CPU for a fixed period of time; i.e. a time slice. When the thread's time slice has elapsed, the scheduler will suspend it and put it back into the queue, and then schedule a different thread to run on the CPU.
The metaphor is that we are "slicing up" the available compute time on the CPUs and sharing the slices between the threads that need it.
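To make the slicing visible, here is a small sketch (plain POSIX threads; the thread count of 8 is an arbitrary illustration value): it deliberately starts more CPU-bound threads than most machines have cores, yet every thread still makes progress because the scheduler keeps handing out slices.

/* Sketch: oversubscribe the cores with busy threads and watch each one still
   advance -- the scheduler is slicing CPU time among them. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 8                      /* deliberately more than most core counts */

static volatile unsigned long counters[NTHREADS];

static void *spin(void *arg)
{
    int i = (int)(long)arg;
    for (;;)                            /* pure CPU work, never blocks */
        counters[i]++;
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, spin, (void *)(long)i);

    sleep(2);                           /* let the scheduler share slices for a while */
    for (int i = 0; i < NTHREADS; i++)
        printf("thread %d advanced to %lu\n", i, counters[i]);
    return 0;                           /* process exit tears down the spinning threads */
}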
¹ - There is some disagreement over what "CPU" actually means. I am taking the view that it refers to what is commonly called a "core". Intel confusingly introduced the marketing term² "hyperthread", which refers to a feature in which the physical hardware of a "core" can run as two independent instruction executors. However, in hyperthread mode, the OS scheduler will typically treat the hyperthreads as if they are distinct cores, so this is not pertinent to your question.
² - The actual concept behind hyperthreads goes back to the 1960s, before Intel even existed as a company; see https://en.wikipedia.org/wiki/Barrel_processor.
So the answer to your question:
I want to clarify whether "a schedulable unit of CPU time slice" is "process" or "thread" (kernel managed thread).
A time slice is a schedulable unit of CPU time.
A time slice is neither a process nor a thread. Indeed, that doesn't even make sense, because "process" and "thread" are not units of time.
I concur with @Solomon Slow's comment. Wikipedia is not authoritative. But the real problem is that different pages are written and edited by different people over time, and they often use IT terminology inconsistently.
My advice would be to find and read one good textbook on (modern) operating system design and architecture. A well-written textbook should be self-consistent in its use of terminology.

A time slice is a unit of time, for example 10 ms in a traditional Linux kernel built with HZ=100 (100 timer interrupts per second). After a task has been running for that long on a CPU core, the kernel regains control on that CPU and calls schedule() to decide what task this CPU core should run next; the task it was already running, or a different task.
The scheduler can also run earlier if an external interrupt comes in, especially near the end of the current timeslice: if there's a higher-priority task that's now waiting for a CPU, e.g. after waiting for I/O or after a sleep() system call ended, it makes sense for the OS to schedule it onto this core, instead of finishing the time-slice of whatever CPU-bound task was interrupted.
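As a small illustration of a task giving up the CPU before its slice is used up (a Linux/POSIX-flavored sketch, not tied to any particular kernel version): a thread that blocks in nanosleep() causes schedule() to run immediately, and when the sleep ends it becomes runnable again and may preempt a CPU-bound task.

/* Sketch: blocking voluntarily hands the core back to the scheduler long before
   the timeslice expires; the kernel runs schedule() and picks another task. */
#define _POSIX_C_SOURCE 199309L         /* for nanosleep() */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts = { .tv_sec = 0, .tv_nsec = 50 * 1000 * 1000 };  /* 50 ms */
    for (int i = 0; i < 5; i++) {
        printf("woke up, doing a little work (%d)\n", i);
        nanosleep(&ts, NULL);           /* block: give up the CPU until the timer fires */
    }
    return 0;
}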
"Task" is a useful word for the things the scheduler has to pick from, without implying either separate processes or threads within a process, and also leaving room for kernel tasks like a Linux interrupt handler "bottom half" that aren't threads or processes.
Every thread of a process needs to get scheduled separately to execute on a CPU core (if it's not blocked).
The articles you found about scheduling processes are using the simplifying assumption that each process is single-threaded.
Or they're assuming a 1:n threading model, where the OS is only aware of one task for the whole process and multithreading is done in user space: "green threads" instead of native threads. Most mainstream OSes these days use a 1:1 threading model, where every C++ or Java thread is a separately schedulable task visible to the OS, although n:m models are possible where you have multiple OS-scheduled tasks, but not as many as you have high-level-language threads.
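A minimal sketch of what the 1:1 model means in practice (assuming Linux with glibc, which provides sched_getcpu()): two busy pthreads of one process are two separate kernel tasks, so the OS is free to place them on different cores.

/* Sketch: under a 1:1 model each pthread is its own kernel task, so two busy
   threads of one process can end up running on different cores. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *report(void *name)
{
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)   /* burn some CPU so we get placed */
        x++;
    printf("%s ran on cpu %d\n", (const char *)name, sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, report, "thread A");
    pthread_create(&b, NULL, report, "thread B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}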

Related

How Tasks are scheduled in a multi-core processor

I'm confused about how tasks are scheduled on a multi-core processor. Actually, different sources have different opinions. Importantly, there isn't enough documentation about the task scheduling mechanism on a multi-core processor. Therefore, I decided to ask you a question.
I depicted a process that contains a process kernel thread and two user-level threads, and provided pseudo-code for the processing logic.
The question is: how will this process be executed on a multi-core processing unit that contains 2 physical cores and 4 logical processors (each core has 2), assuming there are no waiting processes and the CPU is assigned to the process completely?
I guess it works like below:
Note: PKT_C1_LP1 means process kernel thread is assigned to core 1 and logical processor 1
|--PKT_C1_LP1--1s--| |--T1_C1_LP1--1s--| |--TSK1_C1_LP1--1s--|
|--T2_C1_LP2--2s-----------| |--TSK2_C1_LP2--1s--|
----------- timeline ----------->
Update
Seems like the answer(s) to your question(s) will depend a lot on what OS and scheduler your system is running.
Because there aren't any waiting processes and there are enough resources, I believe that almost all scheduling algorithms in any OS will show insignificant differences. However, for simplicity, let's say it is:
non-preemptive FCFS scheduling
Here's a timing diagram of the code that each thread needs to execute. This imagines a maximal case where each task immediately spawns a new thread. The green sections are infinitesimally short pieces of code (think "not-to-scale") but are basically just scheduling operations. And the red sections are similarly short process EXIT and thread END scheduling operations. (I've omitted penalties associated with thread creation. And notice that worker threads do not END; they just go idle and stay in a thread pool.)
Basic Timing Diagram
Now the first thing you'll notice is that, because of the way tasks work, the second task can be executed on the same thread that scheduled it, because no more tasks are scheduled and the thread is only going to await that task. This has nothing to do with thread scheduling and everything to do with how tasks efficiently manage their pool of worker threads. This is application-level code, not OS-level code, that accomplishes this. The diagram below requires one fewer thread thanks to tasks.
Timing Diagram with smarter tasks
Now we can look at what the scheduler needs to do. We are still dealing with only logical processors. (The details of which core will execute which thread are complicated, so let's leave that out for the moment.) Here we see that we can naively execute each of these threads on its own processor.
Greedy usage of processors
It will likely be more efficient to execute the worker thread on one of the previous processors. They are idle when worker thread 1 needs to execute, so it makes more sense to reuse one of the previously allocated processors. Here task 1 code in worker thread 1 is shown executing on processor 2 (could also have been assigned to processor 1 because it is also free, but stay tuned for the next diagram and you'll see why I put it on processor 2).
Schedule thread to reuse a processor
And finally, we can construct the last version that takes us to the most efficient scheduling. This hinges on optimizing the case where you create a thread and then immediately join it. Different operating systems try to optimize this case so that the newly created thread can run on the same processor. It means that creating the thread doesn't immediately schedule the new thread on a free processor and burn the cost of a context switch back to the thread that scheduled it. Instead, the new thread is scheduled when we block in our Join operation, or when the next clock interrupt occurs. If we can quickly get to our Join call before an interrupt triggers the scheduler (we're talking < 10 ms on a typical operating system for such things to be triggered by the clock chip) then the scheduling will happen more efficiently like this (below), where thread 2 can be scheduled to run on the same processor without a context switch. (Interestingly, Linux and Windows optimize this case differently.)
Final timing diagram
You'll notice (above) that this can now all execute on only two logical processors.
Whether it is more efficient to run these on separate cores or different logical processors of the same core is a nuance of the operating system again that depends highly on virtual memory usage and also the hardware specs of the processor and its caches. Different operating systems will do different things here, too. And the details matter greatly. Non-uniform memory architecture would affect the decision too.
In the real world, the operating system may use heuristics to determine the best priority and placement for threads and processes. The real world answer is so much different and more nuanced than this "computer science" answer I've given and depends on the specific details.
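For reference, the create-then-immediately-join pattern discussed above looks roughly like this in plain POSIX threads (the function names are just illustrative); whether the worker runs right away on a free processor or only once the creator blocks in the join is exactly the OS-specific choice described here.

/* Sketch of create-then-join: the OS decides whether the worker starts
   immediately on another processor or when the creator blocks in pthread_join. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    puts("worker thread: running task 1 code");
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);   /* a new schedulable task */
    puts("main thread: running task 2 code"); /* may overlap with the worker */
    pthread_join(t, NULL);                    /* block until the worker finishes */
    return 0;
}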
Additional Reading/Viewing:
Windows and Linux: A Tale of Two Kernels - Tech-Ed 2004 (Older but excellent info)
Processes, Threads, and Jobs in the Windows Operating System
Scheduling: Introduction; and Multiprocessor Scheduling (Advanced)
Capacity Aware Scheduling

Maximum number of threads and multithreading

I'm tripping up on the multithreading concept.
For example, my processor has 2 cores (and with hyper-threading) 2 threads per core, totaling 4 threads. So does this mean my CPU can execute four separate instructions simultaneously? Is each thread capable of being multi-threaded?
So does this mean my CPU can execute four separate instructions simultaneously? Is each thread capable of being multi-threaded?
In short to both, yes.
A CPU can only execute a single instruction per phase in a clock cycle. Due to certain factors like pipelining, a CPU might be able to pass multiple instructions through the different phases in a single clock cycle, and the clock frequency might be extremely fast, but it's still only 1 instruction at a time.
As an example, NOP is an x86 assembly instruction which the CPU interprets as "no operation this cycle". That's 1 instruction out of the hundreds or thousands (and more) that are executed by something even as simple as:
int main(void)
{
    while (1) { /* eat CPU */ }
    return 0;
}
A CPU thread of execution is one in which a series of instructions (a thread of instructions) is being executed. It does not matter what "application" the instructions are coming from; a CPU does not know about high-level concepts (like applications), as that's a function of the OS.
So if you have a computer with 2 (or 4/8/128/etc.) CPUs that share the same memory (cache/RAM), then you can have 2 (or more) CPUs that can run 2 (or more) instructions at (literally) the exact same time. Keep in mind that these are machine instructions that are running at the same time (i.e. the physical side of the software).
An OS-level thread is something a bit different. While the CPU handles the physical side of the execution, the OS handles the logical side. The above code breaks down into more than 1 instruction and, when executed, may actually get run on more than 1 CPU over its lifetime (in a multi-CPU aware environment), even though it's a single "thread" (at the OS level); the OS schedules when to run the next instructions and on what CPU (based on the OS's thread scheduling policy, which differs amongst the various OSes). So the above code will eat up 100% CPU usage for a given "time slice" on whatever CPU it's running on.
This "slicing" of "time" (also known as preemptive computing) is why an OS can run multiple applications "at the same time", it's not literally1 at the same time since a CPU can only handle 1 instruction at a time, but to a human (who can barely comprehend the length of 1 second), it appears "at the same time".
1) except in the case with a multi-CPU setup, then it might be literally the same time.
When an application is run, the kernel (the OS) actually spawns a separate thread (a kernel thread) to run the application on. Additionally, the application can request another external thread (i.e. spawning another process or forking), or create an internal thread by calling the OS's (or programming language's) API, which in turn calls lower-level kernel routines that spawn and maintain the context switching of the spawned thread. Additionally, any created thread is also capable of calling the same APIs to spawn other separate threads (thus a thread is capable of being "multi-threaded").
Multi-threading (in the sense of applications and operating systems) is not necessarily portable, so while you might learn Java or C# and use their APIs (i.e. Thread.Start or Runnable), utilizing the actual OS APIs as provided (i.e. CreateThread or pthread_create and the slew of other concurrency functions) opens a different door for design decisions (i.e. "does platform X support thread library Y"); just something to keep in mind as you explore the different APIs.
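As a small sketch of the idea that "a thread can be multi-threaded" (using pthread_create, one of the OS-level APIs mentioned above): a thread created by main simply creates another thread of its own.

/* Sketch: a thread created with pthread_create can itself call pthread_create. */
#include <pthread.h>
#include <stdio.h>

static void *grandchild(void *arg)
{
    (void)arg;
    puts("grandchild thread running");
    return NULL;
}

static void *child(void *arg)
{
    (void)arg;
    pthread_t g;
    pthread_create(&g, NULL, grandchild, NULL);   /* a thread spawning a thread */
    pthread_join(g, NULL);
    return NULL;
}

int main(void)
{
    pthread_t c;
    pthread_create(&c, NULL, child, NULL);
    pthread_join(c, NULL);
    return 0;
}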
I hope that can help add some clarity.
I actually researched this very topic in my Operating Systems class.
When using threads, a good rule of thumb for increased performance for CPU-bound processes is to use an equal number of threads as cores, except in the case of a hyper-threaded system, in which case one should use twice as many threads. The other rule of thumb is for I/O-bound processes: quadruple the number of threads per core (and on a hyper-threaded system, quadruple the number of threads per logical core).
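A tiny sketch of turning those rules of thumb into numbers (on Linux and most Unix-like systems; note that sysconf(_SC_NPROCESSORS_ONLN) already counts each hyper-thread as a separate logical processor, so the hyper-threading doubling is implicit):

/* Sketch: derive suggested thread counts from the number of logical processors.
   The 1x / 4x multipliers are the rules of thumb from the answer above, not an OS API. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long cpus = sysconf(_SC_NPROCESSORS_ONLN);   /* logical processors the OS can schedule on */
    printf("CPU-bound suggestion: %ld threads\n", cpus);
    printf("I/O-bound suggestion: %ld threads\n", cpus * 4);
    return 0;
}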

Node.js single thread VS Traditional webserver thread pool

I am a newbie to node.js. I am currently reading the book called 'Beginning Node.js' by Basarat Ali Syed.
Here is an excerpt from it which states the disadvantage of thread pool of traditional web servers:
Most web servers used this thread pool method a few years back and many continue to use it today. However, this method is not without drawbacks. Again there is wasting of RAM between threads. Also the OS needs to context switch between threads (even when they are idle), and this results in wasted CPU resources.
I don't quite understand why there is context switching between threads inside a thread pool. As far as I understand, one thread will last for the duration of a task. And once the task is completed, the thread will be free to receive the next task.
So my Q1: Why does it need context switching? When will the context switch between threads happen?
My Q2: Why doesn't node.js use multiple threads to handle events in the event queue? Isn't that more efficient, and wouldn't it reduce the queuing time of events?
Context switching is what happens when the OS needs to run more threads than there are CPU cores. Say for example you have 10 threads, and they are all busy (meaning none of them have finished their tasks). But your CPU is only a dual-core CPU (assume no hyperthreading for simplicity). So, how can all 10 threads run? It's not possible!!
The answer is context switch. The OS, when presented with lots of processes and threads to execute, will allocate a certain amount of time for each thread to run. After this time the OS will switch to another thread so that all threads will get some time to use the CPU.
The term "context switch" refers to the fact that when the OS needs to give the CPU to another thread/process it needs to copy all the values in registers temporarily to that thread's memory otherwise the other process/thread will mess up the calculation of the switched thread when it resumes. The OS will also need to re-point the virtual memory tables so that two processes will not mess up each other's memory. How expensive this operation is depends on the CPU architecture. Some architectures like the Sparc are optimized for context switching. Hyperthreading is a feature that implements context switching in hardware so it's faster (but then again, you only get one extra context per CPU with Hyperthreading as implemented on Intel/AMD64 architecture).
Not using multiple threads completely avoids context switching. Especially if your program is the only program running. So on a single core CPU, a nonblocking, single-threaded program can often beat a multithreaded program.
However, it's rare to find a single core CPU these days. The ideal number of threads you'd want to run is equal to the number of cores you have. Doing so would also avoid context switching. But even so, getting a complex multithreaded program to run fast is not easy. It's easier to get a nonblocking singlethreaded program to run fast. And in most web applications a multithreaded program wouldn't have any advantage over a nonblocking singlethreaded program because they're both I/O bound.
A nonblocking single-threaded program is basically implementing thread-like behavior in userspace using events. This is sometimes called "green threads" in languages whose syntax makes event-oriented programming look like multithreaded programming.
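Stripped to its core, the single-threaded nonblocking idea looks something like this sketch (plain C with poll(); here it only watches stdin, but a web server would watch many sockets in the same one loop):

/* Sketch: one thread, one poll() loop, many possible descriptors -- no context
   switches between worker threads because there are no worker threads. */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct pollfd fds[1] = { { .fd = STDIN_FILENO, .events = POLLIN } };

    for (;;) {
        int ready = poll(fds, 1, 1000);            /* wait up to 1 second for events */
        if (ready > 0 && (fds[0].revents & POLLIN)) {
            char buf[256];
            ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
            if (n <= 0)
                break;                             /* EOF or error: leave the loop */
            write(STDOUT_FILENO, buf, (size_t)n);  /* "handle" the event */
        } else if (ready == 0) {
            puts("tick: no events, still just one thread");
        }
    }
    return 0;
}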

erlang threading and OS threads correlation [duplicate]

Erlang is known for being able to support MANY lightweight processes; it can do this because these are not processes in the traditional sense, or even threads like pthreads, but threads entirely in user space.
This is well and good (fantastic actually). But how then are Erlang threads executed in parallel in a multicore/multiprocessor environment? Surely they have to somehow be mapped to kernel threads in order to be executed on separate cores?
Assuming that that's the case, how is this done? Are many lightweight processes mapped to a single kernel thread?
Or is there another way around this problem?
Answer depends on the VM which is used:
1) non-SMP: There is one scheduler (OS thread), which executes all Erlang processes, taken from the pool of runnable processes (i.e. those that are not blocked by e.g. receive).
2) SMP: There are K schedulers (OS threads, K is usually the number of CPU cores), which execute Erlang processes from a shared process queue. It is a simple FIFO queue (with locks to allow simultaneous access from multiple OS threads).
3) SMP in R13B and newer: There will be K schedulers (as before) which execute Erlang processes from multiple process queues. Each scheduler has its own queue, so process migration logic from one scheduler to another will be added. This solution will improve performance by avoiding excessive locking on a shared process queue.
For more information see this document prepared by Kenneth Lundin, Ericsson AB, for Erlang User Conference, Stockholm, November 13, 2008.
I want to amend the previous answers.
Erlang, or rather the Erlang runtime system (erts), defaults the number of schedulers (OS threads) and the number of run queues to the number of processing elements on your platform, that is, processor cores or hardware threads. You can change these settings at runtime using:
erlang:system_flag(schedulers_online, NP) -> PrevNP
Erlang processes do not have any affinity to any scheduler yet. The logic balancing the processes between the schedulers follows two rules: 1) a starving scheduler will steal work from another scheduler, and 2) migration paths are set up to push processes from schedulers with lots of processes to schedulers with less work. This is done to ensure fairness in reduction count (execution time) for each process.
Schedulers, however, can be locked to specific processing elements. This is not done by default. To let erts do the scheduler->core affinity, use:
erlang:system_flag(scheduler_bind_type, default_bind) -> PrevBind
Several other bind types can be found in the documentation. Using affinity can greatly improve performance in heavy-load situations, especially in high lock-contention situations. Also, the Linux kernel cannot handle hyperthreads well, to say the least. If you have hyperthreads on your platform, you should really use this feature in Erlang.
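Outside of Erlang, the OS-level mechanism this binding ultimately relies on looks roughly like the following Linux-specific sketch (sched_setaffinity pins the calling thread to a chosen logical CPU; erts presumably uses the platform's equivalent facilities internally for its scheduler threads):

/* Linux-specific sketch: pin the calling thread to logical CPU 0. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                                 /* allow only logical CPU 0 */
    if (sched_setaffinity(0, sizeof set, &set) != 0)  /* 0 = the calling thread */
    {
        perror("sched_setaffinity");
        return 1;
    }
    printf("now pinned to cpu %d\n", sched_getcpu());
    return 0;
}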
I'm purely guessing here, but I'd imagine that there's a small number of threads which pick processes from a common process pool for execution. Once a process hits a blocking operation, the thread executing it puts it aside and picks another. When a process being executed causes another process to become unblocked, that newly unblocked process gets placed into the pool. I suppose a thread might also stop execution of a process at certain points, even when it's not blocked, in order to serve other processes.
I would like to add some input to what was described in the accepted answer.
The Erlang Scheduler is an essential part of the Erlang Runtime System and provides its own abstraction and implementation of the concept of lightweight processes atop OS threads.
Each Scheduler runs within a single OS thread. Normally, there are as many schedulers as there are CPU cores on the hardware (this is configurable, though, and naturally does not bring much value when the number of schedulers exceeds the number of hardware cores). The system might also be configured so that a scheduler will not jump between OS threads.
Now, when an Erlang process is created, it is entirely the responsibility of the ERTS and the Scheduler to manage its life cycle and resource consumption, as well as its memory footprint, etc.
One of the core implementation details is that each process has a time budget of 2000 reductions available when the Scheduler picks up that process from the run queue. Every kind of progress in the system (even I/O) is guaranteed to have a reduction budget. That is what actually makes ERTS a system with preemptive multitasking.
I would recommend a great blog post on that topic by Jesper Louis Andersen http://jlouisramblings.blogspot.com/2013/01/how-erlang-does-scheduling.html
As the short answer: Erlang processes are not OS threads and do not map to them directly. Erlang Schedulers are what run on the OS threads and provide a smart implementation of the more finely grained Erlang processes, hiding those details from the programmer.

Threads & Processes Vs MultiThreading & Multi-Core/MultiProcessor : How they are mapped?

I was very confused but the following thread cleared my doubts:
Multiprocessing, Multithreading,HyperThreading, Multi-core
But it addresses the queries from the hardware point of view. I want to know how these hardware features are mapped to software?
One thing that is obvious is that there is no difference between MultiProcessor (= multi-CPU) and MultiCore other than that in multicore all CPUs reside on one chip (die), whereas in multiprocessor all CPUs are on their own chips and connected together.
So, multicore/multiprocessor systems are capable of executing multiple processes (Firefox, media player, Google Talk) at the "same time" (unlike context switching these processes on a single-processor system), right?
If that is correct, I'm clear so far. But the confusion arises when multithreading comes into the picture.
MultiThreading "is for" parallel processing, right?
What are the elements involved in multithreading inside the CPU? A diagram? For me to exploit the power of parallel processing of two independent tasks, what should the requirements of the CPU be?
When people say context switching of threads, I don't really get it, because if it's context switching of threads then it's not parallel processing; the threads must be executed "strictly simultaneously", right?
My notion of multithreading is that:
Considering a system with a single CPU: when the process is context-switched to Firefox, (suppose) each tab of Firefox is a thread and all the threads are executing strictly at the same time. Not like one thread has executed for some time and then another thread takes over until the context-switch time arrives.
What happens if I run multithreaded software on a processor which can't handle threads? I mean, how does the CPU handle such software?
If everything is good so far, now the question is: HOW MANY THREADS? It must be limited by hardware, I guess? If the hardware can support only 2 threads and I start 10 threads in my process, how would the CPU handle it? Pros/cons? From a software engineering point of view, while developing software that will be used by users on a wide variety of systems, how would I decide whether I should go for multithreading? And if so, how many threads?
First, try to understand the concepts of 'process' and 'thread'. A thread is a basic unit of execution: a thread is scheduled by the operating system and executed by the CPU. A process is a sort of container that holds multiple threads.
Yes, either multi-processing or multi-threading is for parallel processing. More precisely, to exploit thread-level parallelism.
Okay, multi-threading could mean hardware multi-threading (one example is HyperThreading). But I assume that you just mean multithreading in software. In this sense, the CPU should support context switching.
Context switching is needed to implement multi-tasking even in a physically single core by time division.
Say there are two physical cores and four very busy threads. In this case, two threads are just waiting until they get the chance to use the CPU. Read some articles related to preemptive OS scheduling.
The number of threads that can physically run concurrently is identical to the number of logical processors. What you are asking about is a general thread scheduling problem in the OS literature, such as round-robin.
I strongly suggest you study the basics of operating systems first, then move on to multithreading issues. It seems like you're still unclear on key concepts such as context switching and scheduling. It will take a couple of months, but if you really want to be an expert in computer software, then you should know such very basic concepts. Please take whatever OS books and lecture slides you can find.
Threads running on the same core are not technically parallel. They only appear to be executed in parallel, as the CPU switches between them very fast (for us, humans). This switch is what is called context switch.
Now, threads executing on different cores are executed in parallel.
Most modern CPUs have a number of cores; however, most modern OSes (Windows, Linux and friends) usually execute a much larger number of threads, which still causes context switches.
Even if no user program is executing, the OS itself still performs context switches for maintenance work.
This should answer 1-3.
About 4: basically, every processor can work with threads; it is much more a characteristic of the operating system. A thread is basically memory (optional), a stack and registers; once those are replaced, you are in another thread.
5: the number of threads is pretty high and is limited by the OS. Usually it is higher than a regular programmer can successfully handle :)
The number of threads is dictated by your program:
is it IO bound?
can the task be divided into a number of smaller tasks?
how small is the task? The task can be too small to make it worthwhile to spawn threads at all.
synchronization: if extensive synchronization is required, the penalty might be too heavy and you should reduce the number of threads.
Multiple threads are separate 'chains' of commands within one process. From the CPU's point of view, threads are more or less like processes. Each thread has its own set of registers and its own stack.
The reason why you can have more threads than CPUs is that most threads don't need the CPU all the time. A thread can be waiting for user input, downloading something from the web or writing to disk. While it is doing that, it does not need the CPU, so the CPU is free to execute other threads.
In your example, each tab of Firefox can probably even have several threads. Or they can share some threads. You need one for downloading, one for rendering, one for the message loop (user input), and perhaps one to run JavaScript. You cannot easily combine them because while you download you still need to react to the user's input. However, the download thread is sleeping most of the time, and even when it's downloading it needs the CPU only occasionally, and the message-loop thread only wakes up when you press a button.
If you go to the task manager you'll see that despite all these threads your CPU usage is still quite low.
Of course if all your threads do some number-crunching tasks, then you shouldn't create too many of them as you get no performance benefit (though there may be architectural benefits!).
However, if they are mainly I/O bound then create as many threads as your architecture dictates. It's hard to give advice without knowing your particular task.
Broadly speaking, yeah, but "parallel" can mean different things.
It depends what tasks you want to run in parallel.
Not necessarily. Some (indeed most) threads spend a lot of time doing nothing. Might as well switch away from them to a thread that wants to do something.
The OS handles thread switching. It will delegate to different cores if it wants to. If there's only one core it'll divide time between the different threads and processes.
The number of threads is limited by software and hardware. Threads consume processor and memory in varying degrees depending on what they're doing. The thread management software may impose its own limits as well.
The key thing to remember is the separation between logical/virtual parallelism and real/hardware parallelism. With your average OS, a system call is performed to spawn a new thread. What actually happens (whether it is mapped to a different core, a different hardware thread on the same core, or queued into the pool of software threads) is up to the OS.
Parallel processing uses all of these methods, not just multi-threading.
Generally speaking, if you want to have real parallel processing, you need to perform it in hardware. Take the example of the Niagara: it has up to 8 cores, each capable of executing 4 threads in hardware.
Context switching is needed when there are more threads than can be executed in parallel in hardware. Even then, when executed in series (switching from one thread to the next), they are considered concurrent because there is no guarantee on the order of switching. So, it may go T0, T1, T2, T1, T3, T0, T2 and so on. For all intents and purposes, the threads are parallel.
Time slicing.
That would be up to the OS.
Multithreading is the execution of more than one thread at a time. It can happen both on single-core processors and on multicore processor systems. For single-processor systems, context switching achieves it. Note that context switching in this computational environment refers to time slicing by the operating system, so do not get confused. The operating system is the one that controls the execution of programs. It allows one program to execute on the CPU at a time. But the frequency at which threads are switched in and out of the CPU determines the transparency of the parallelism exhibited by the system.
In a multicore environment, multithreading occurs when each core executes a thread. Though, in multicore again, context switching can occur within the individual cores.
I think the answers so far are pretty much to the point and give you a good basic context. In essence, say you have a quad-core processor, but each core is capable of executing 2 simultaneous threads.
Note that there is only a slight (or no) increase in speed if you are running 2 simultaneous threads on 1 core versus running the 1st thread and then the 2nd thread sequentially. However, each physical core adds speed to your general workflow.
Now, say you have a process running on your OS that has multiple threads (i.e. it needs to run multiple things in "parallel") and has some kind of stack of tasks in a queue (or some other system with priority rules). Then the software sends tasks to a queue and your processor attempts to execute them as fast as it can. Now you have 2 cases:
If the software supports multiprocessing, then tasks will be sent to any available processor (one that is not doing anything, or that has simply finished doing some other job and the job sent from your software is 1st in the queue).
If your software does not support multiprocessing, then all of your jobs will be done in a similar manner, but only by one of your cores.
I suggest reading the Wikipedia page on threads. The very first picture there already gives you a nice insight. :)
