What is the difference between kernel threads and user threads? Is it that kernel threads are scheduled and executed in kernel mode? What techniques are used for creating kernel threads?
Is it that a user thread is scheduled and executed in user mode? Is it that the kernel does not participate in executing/scheduling user threads? When an interrupt occurs while a user thread is executing, who handles it?
Whenever a thread is created, a TCB is created for it. Now, in the case of user-level threads:
Is it that this TCB is created in the user's address space?
In the case of switching between two user-level threads, who handles the context switch?
There is a concept of multithreading models :
Many to one
One to one
Many to Many.
What are these models? How are these models practically used?
I have read a few articles on this topic but am still confused and want to clear up the concepts.
Thanks in advance,
Tazim
Wikipedia has answers to most if not all of these questions.
http://en.wikipedia.org/wiki/Thread_(computer_science)
http://en.wikipedia.org/wiki/Thread_(computer_science)#Processes.2C_kernel_threads.2C_user_threads.2C_and_fibers
What is the difference between kernel threads and user threads?
Kernel threads are privileged and can access things off-limits to user mode threads. Take a look at "Ring (Computer Security)" on Wikipedia. On Windows, user mode corresponds to Ring 3, while kernel mode corresponds to Ring 0.
What are techniques used for creating kernel threads?
This is extremely dependent upon the operating system.
Now, in the case of user-level threads, is it that this TCB is created in the user's address space?
The TCB records information about a thread that the kernel uses in running that thread, right? So if it were allocated in user space, the user mode thread could modify or corrupt it, which doesn't seem like a very good idea. So, don't you suppose it's created in kernel space?
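To make that concrete, here is a sketch of the kind of fields a TCB typically holds. The layout and names are invented for illustration; no real kernel defines it exactly this way:

struct tcb {
    int           tid;              /* thread identifier */
    int           state;            /* READY, RUNNING, BLOCKED, ... */
    void         *stack_pointer;    /* saved stack pointer */
    void         *program_counter;  /* saved instruction pointer */
    unsigned long regs[16];         /* saved general-purpose registers */
    int           priority;         /* scheduling priority */
    struct tcb   *next;             /* link in the scheduler's queue */
};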
What are these models? How are these models practically used?
Wikipedia seems really clear about that.
Kernel thread means a thread that the kernel is responsible for scheduling. This means, among other things, that the kernel can run different threads on different CPUs/cores at the same time.
How to use them varies a lot with programming languages and threading APIs, but as a simple illustration (here using POSIX threads; the original new_thread stood in for whatever API you have):

#include <pthread.h>

void *task_a(void *arg);
void *task_b(void *arg);

int main() {
    pthread_t a, b;
    pthread_create(&a, NULL, task_a, NULL);  /* kernel schedules this thread, */
    pthread_create(&b, NULL, task_b, NULL);  /* possibly on another core */
    // possibly do something else in the main thread
    // wait for the threads to complete their work
    pthread_join(a, NULL);
    pthread_join(b, NULL);
}
In every implementation I am familiar with, the kernel may pause them at any time ("pre-emptive" scheduling).
User threads, or "User scheduled threads", make the program itself responsible for switching between them. There are many ways of doing this and correspondingly there is a variety of names for them.
On one end you have "green threads": these basically try to do the same thing kernel threads do, so you keep all the complications of programming with real threads.
On the opposite end, you have "fibers", which are required to yield before any other fiber gets run. This means:
The fibers run sequentially. There are no parallel performance gains to be had.
The interactions between fibers are very well defined. Other code runs only at the exact points where you yield, so nothing else will be changing variables while you're working on them.
Most of the low-level complexities programmers struggle with in multithreading, such as cache coherency (judging by the MT questions on this site, most people don't get that), are not a factor.
As the simplest example of fibers I can think of:
while(tasks_not_done) {
do_part_of_a();
do_part_of_b();
}
where each call does some work, then returns when that part is done. Note that these run sequentially in the same "hardware thread", meaning you do not get a performance increase from parallelism. On the other hand, interactions between them are very well defined, so you don't have race conditions. The actual working of each function can vary; they could also be "user thread objects" from some vector/array.
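As a hedged sketch of that last idea, reusing the names from the snippet above (fiber_fn, run_fibers, and the array are all invented for illustration):

typedef void (*fiber_fn)(void);

/* The fibers live in an array; the loop gives each one a turn,
   and each returns after doing a slice of its work. */
fiber_fn fibers[] = { do_part_of_a, do_part_of_b };

void run_fibers(void) {
    while (tasks_not_done)
        for (int i = 0; i < 2; i++)
            fibers[i]();
}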
Essentially, user threads run in the context of a user with the corresponding privilege level; for example, user threads most certainly won't have access to kernel-level memory, data structures, or routines. Kernel threads, by contrast, run in the context of the OS kernel, which gives them the privilege to execute code that accesses low-level kernel routines, memory, and data structures.
When a thread does something that may cause it to become blocked locally, for example, waiting for another thread in its process to complete some work, it calls a run-time system procedure. This procedure checks to see if the thread must be put into blocked state. If so, it stores the thread's registers in the thread table, looks in the table for a ready thread to run, and reloads the machine registers with the new thread's saved values. As soon as the stack pointer and program counter have been switched, the new thread comes to life again automatically. If the machine happens to have an instruction to store all the registers and another one to load them all, the entire thread switch can be done in just a handful of instructions. Doing thread switching like this is at least an order of magnitude, maybe more, faster than trapping to the kernel and is a strong argument in favor of user-level threads packages.
Source: Modern Operating Systems (Andrew S. Tanenbaum | Herbert Bos)
The above argument is made in favor of user-level threads. The user-level thread implementation is depicted as the kernel managing all the processes, where each individual process can have its own run-time (made available by a library package) that manages all the threads in that process.
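For a rough illustration of the kind of switch the quote describes, here using POSIX ucontext as a stand-in (the two contexts are assumed to have been set up beforehand with getcontext/makecontext, and a real threads package would do the switch in a few lines of assembly rather than through swapcontext):

#include <ucontext.h>

static ucontext_t ctx[2];   /* saved register sets for two user threads */
static int current = 0;

/* Save this thread's registers, load the other thread's registers.
   The scheduling decision happens entirely in user space; no trap
   to the kernel is made to decide who runs next. */
void yield_to_other(void) {
    int prev = current;
    current = 1 - current;
    swapcontext(&ctx[prev], &ctx[current]);
}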
Of course, merely calling a function in the run-time rather than trapping to the kernel involves a few fewer instructions to execute, but why is the difference so huge?
For example, if threads are implemented in kernel space, every time a thread has to be created the program is required to make a system call. Yes. But the call only involves adding an entry to the thread table with certain attributes (which is also the case for user-space threads). When a thread switch has to happen, the kernel can simply do what the run-time (in user space) would do. The only real difference I can see here is that the kernel is involved in all of this. How can the performance difference be so significant?
Threads implemented as a library package in user space perform significantly better. Why?
They're not.
The fact is that most task switches are caused by threads blocking (having to wait for IO from disk or network, or from user, or for time to pass, or for some kind of semaphore/mutex shared with a different process, or some kind of pipe/message/packet from a different process) or caused by threads unblocking (because whatever they were waiting for happened); and most reasons to block and unblock involve the kernel in some way (e.g. device drivers, networking stack, ...); so doing task switches in the kernel when you're already in the kernel is faster (because it avoids the overhead of switching to user-space and back for no sane reason).
Where user-space task switching "works" is when the kernel isn't involved at all. This mostly only happens when someone failed to do threads properly (e.g. they've got thousands of threads and coarse-grained locking and are constantly switching between threads due to lock contention, instead of something sensible like a "worker thread pool"). It also only works when all threads have the same priority - you don't want a situation where very important threads belonging to one process don't get CPU time because very unimportant threads belonging to a different process are hogging the CPU (but that's exactly what happens with user-space threading, because one process has no idea about threads belonging to a different process).
Mostly, user-space threading is a silly broken mess. It's not faster or "significantly better"; it's worse.
When a thread does something that may cause it to become blocked locally, for example, waiting for another thread in its process to complete some work, it calls a run-time system procedure. This procedure checks to see if the thread must be put into blocked state. If so, it stores the thread's registers in the thread table, looks in the table for a ready thread to run, and reloads the machine registers with the new thread's saved values. As soon as the stack pointer and program counter have been switched, the new thread comes to life again automatically. If the machine happens to have an instruction to store all the registers and another one to load them all, the entire thread switch can be done in just a handful of instructions. Doing thread switching like this is at least an order of magnitude, maybe more, faster than trapping to the kernel and is a strong argument in favor of user-level threads packages.
This is talking about a situation where the CPU itself does the actual task switch (and either the kernel or a user-space library tells the CPU when to do a task switch to what). This has some relatively interesting history behind it...
In the 1980s Intel designed a CPU ("iAPX" - see https://en.wikipedia.org/wiki/Intel_iAPX_432 ) for "secure object oriented programming"; where each object has its own isolated memory segments and its own privilege level, and can transfer control directly to other objects. The general idea being that you'd have a single-tasking system consisting of global objects using cooperating flow control. This failed for multiple reasons, partly because all the protection checks ruined performance, and partly because the majority of software at the time was designed for "multi-process preemptive time sharing, with procedural programming".
When Intel designed protected mode (80286, 80386) they still had hopes for a "single-tasking system consisting of global objects using cooperating flow control". They included hardware task/object switching, a local descriptor table (so each task/object can have its own isolated segments), call gates (so tasks/objects can transfer control to each other directly), and modified a few control flow instructions (call far and jmp far) to support the new control flow. Of course this failed for the same reason iAPX failed; and (as far as I know) nobody has ever used these things for the "global objects using cooperative flow control" they were originally designed for. Some people (e.g. very early Linux) did try to use the hardware task switching for more traditional "multi-process preemptive time sharing, with procedural programming" systems; but found that it was slow because the hardware task switch did too many protection checks that could be avoided by software task switching, saved/reloaded too much state that could also be avoided in software, and didn't do any of the other stuff needed for a task switch (e.g. keeping statistics of CPU time used, saving/restoring debug registers, etc).
Now... Andrew S. Tanenbaum is a micro-kernel advocate. His ideal system consists of isolated pieces in user-space (processes, services, drivers, ...) communicating via synchronous messaging. In practice (ignoring superficial differences in terminology) this "isolated pieces in user-space communicating via synchronous messaging" is almost entirely identical to Intel's twice-failed "global objects using cooperative flow control".
Mostly, in theory (if you ignore all the practical problems, like the CPU not saving all of the state, and wanting to do extra work on task switches like tracking statistics), for the specific type of OS that Andrew S. Tanenbaum prefers (micro-kernel with synchronous message passing, without any thread priorities), it's plausible that the paragraph quoted above is more than just wishful thinking.
I think answering this takes a lot of OS and parallel/distributed computing knowledge (and I am not sure about the answer, but I will try my best).
So if you think about it, the library package can perform better than an implementation written in the kernel itself. With the package approach, the switch requested by this code is handled at once and all the execution is done; when it's written in the kernel, various other interrupts can come first. Also, accessing threads again and again is harsh on the kernel, since every access means an interrupt. I hope this gives a better view.
It's not correct to say that user-space threads are better than kernel-space threads, since each has its own pros and cons.
In the case of user-space threads, as the application is responsible for managing the threads, it's easier to implement them, and such threads don't rely much on the OS. However, you are not able to use the advantages of multiprocessing.
By contrast, kernel-space threads are handled by the OS, so you need to implement them according to the OS that you use, and that is a more complicated task. However, you have more control over your threads.
For a more comprehensive tutorial, take a look here.
I am reading the sections about user-space threads in the book "Modern Operating Systems". It states that:
Another, and probably the most devastating argument against user-level threads, is that programmers generally want threads precisely in applications where the threads block often, as, for example, in a multithreaded Web server. These threads are constantly making system calls. Once a trap has occurred to the kernel to carry out the system call, it is hardly any more work for the kernel to switch threads if the old one has blocked, and having the kernel do this eliminates the need for constantly making select system calls that check to see if read system calls are safe. For applications that are essentially entirely CPU bound and rarely block, what is the point of having threads at all? No one would seriously propose computing the first n prime numbers or playing chess using threads because there is nothing to be gained by doing it that way.
I am particularly confused about the bold text.
1. Since these are user-space threads, how can the kernel "switch threads"?
2. "Having the kernel do this": what does "this" mean here?
I thought the behavior would be like this:
1. A select call is made, and it finds that the following system call is a blocking one.
2. Then the user-space thread scheduler performs a thread switch and executes another thread.
For some reason, colleges insist on using operating systems textbooks that are confusing and at times nonsensical.
First, what is being described here is ENTIRELY system specific. On SOME operating systems, a synchronous system call will block all threads. This is not true in ALL operating systems.
Second, user threads are the poor man's way of doing threads. In ye olde days, user threads came into being because there was no operating system support. There are some who promote user threads as being more "efficient" than kernel threads (in theory a library can switch threads faster than the kernel), but this is total BS in practice. User threads are completely obsolete, and systems that force developers to use them for threading are OBSOLETE. Even older systems like VMS have kernel threads.
In a modern OS course, "user threads" should be a sidebar or historical footnote.
In essence, your book is trying to make a debate where none exists. It's like post-WWII U.S. Army assessments comparing the Sherman tank to the Panther. They talk about things like the Sherman having more comfortable seats to try to make the two sound comparable when, in reality, the Sherman was obsolete and not even in the same class as the Panther.
1. Since these are user-space threads, how can the kernel "switch threads"? 2. "Having the kernel do this": what does "this" mean here?
What they appear to be suggesting is that the thread will block the process when it makes a system call. When that occurs, the operating system will make a context switch. In this case the operating system is making a "thread switch" to another process anyway. The [correct] conclusion they are trying to lead you to is that this switch takes away the alleged reduced overhead that user threads are supposed to have.
I thought the behavior would be like this: 1. A select call is made, and it finds that the following system call is a blocking one. 2. Then the user-space thread scheduler performs a thread switch and executes another thread.
Let me take the case of a user thread implementation that is not totally blocked by blocking system calls.
The library sets a timer for thread switching.
The thread start or resumes executing.
The thread calls a blocking system service (e.g., select).
The operating system switches the process out as part of the system service processing.
The timer goes off.
The process becomes current again and the OS invokes the timer handler in the library.
The library schedules another thread to execute.
The problem you face is that a blocking system service usually has, as part of its processing, code to trigger a context switch. Because the system knows nothing about threads (otherwise it would be using kernel threads), a thread calling such a blocking service is going to pass through that code.
Even though the process may have threads that are executable, the operating system has no way to cause them to be executed, because it has no knowledge of them; they are managed by a library inside the process.
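As a minimal sketch of steps 1, 6, and 7 above (the timer and the library's switch), assuming POSIX signals and ucontext; note that calling swapcontext from a signal handler is not strictly portable, and a real library would be far more careful:

#include <signal.h>
#include <sys/time.h>
#include <ucontext.h>

#define NTHREADS 4
static ucontext_t contexts[NTHREADS];  /* set up elsewhere with makecontext */
static int current = 0;

/* The interval timer fired: pick the next user thread round-robin
   and switch to it. */
static void on_timer(int sig) {
    (void)sig;
    int prev = current;
    current = (current + 1) % NTHREADS;
    swapcontext(&contexts[prev], &contexts[current]);
}

static void start_timer(void) {
    struct sigaction sa;
    sa.sa_handler = on_timer;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval tv;
    tv.it_interval.tv_sec = 0;
    tv.it_interval.tv_usec = 10000;    /* fire every 10 ms */
    tv.it_value = tv.it_interval;
    setitimer(ITIMER_REAL, &tv, NULL);
}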
Many-to-One Model
One-to-One Model
Many-to-Many Model
What are the advantages and disadvantages of each model?
Can you give an example?
EDIT:
One thing about the Many-to-One Model is confusing me.
I'm quoting the book:
"Thread management is done by the thread library in user space, so it
is efficient; but the entire process will block if a thread makes a
blocking system call. Also, because only one thread can access the
kernel at a time, multiple threads are unable to run in parallel on
multiprocessors"
Does it mean that all processes in the kernel will be blocked, due to the fact that the switching is done by the application, not by the OS scheduler (since in this model we manage threads in user mode)?
Or will only the threads belonging to the same process as the thread that made the blocking system call be blocked?
Thanks in advance!
We have to assign user-level threads to kernel-level threads; based on this, the mapping can be:
One to one (one user-level thread mapped to one kernel-level thread)
Many to one (many user-level threads mapped to one kernel-level thread)
Many to many (many user-level threads mapped to many kernel-level threads)
The number of kernel-level threads is generally set lower than the number of user-level threads, since kernel-level threads are much more expensive to manage: their management involves kernel intervention.
For this reason, the fourth possible mapping, one to many (one user-level thread mapped to multiple kernel-level threads), does not make sense.
"Thread management is done by the thread library in user space, so it is efficient; but the entire process will block if a thread makes a blocking system call. Also, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors"
This example may help in understanding that line: suppose a process has three user-level threads multiplexed onto a single kernel-level thread. If one of them makes a blocking read call, the kernel blocks the one kernel-level thread it knows about, so the other two user-level threads cannot run even though they are ready.
Does it mean that all processes in the kernel will be blocked, due to the fact that the switching is done by the application, not by the OS scheduler (since in this model we manage threads in user mode)? Or will only the threads belonging to the same process as the thread that made the blocking system call be blocked?
One process's threads are independent of another process's threads. So only the threads belonging to the same process as the thread that made the blocking system call will be blocked.
I hope this does make sense to you...
I can see your problem. You have a horrible book.
You're asking about a couple of related issues. First of all, there are two general ways to implement threads.
1) Threads are implemented in a library using timers. In systems that schedule processes for execution, this is the only way to do threads. It was the ONLY way to do threads in the olden days. This system is usually called "user threads." User threads are multiplexed within a process; the process does the scheduling of its own threads.
The mythical advantage of "user threads" over "kernel threads" (below) is that they are more efficient. This is what your quoted passage is referring to. The statement "the entire process will block if a thread makes a blocking system call" is only true on some [unix] systems.
2) Threads are implemented in the operating system. A process consists of an address space and one or more threads. The operating system kernel schedules THREADS for execution rather than PROCESSES. These are kernel threads.
Note that even if the system supports kernel threads, it is possible for a process to use user threads. The two are not mutually exclusive. However, a system that does not natively support kernel threads can only use user threads.
That's the simple way to explain the different threading models.
-=-=-=-=-=-=-=-=-=-=-=-
The one-to-one, many-to-one, and many-to-many models are a needless confusion for students. Now we have to get into overlapping terminology.
Let's change the terminology around. For #1, instead of calling the schedulable unit of execution a "process", we call it a "kernel thread." There can only be one kernel thread per process in this model. The threads in the process are then "user threads." Any number of user threads execute within/are mapped to a kernel thread. This is the many-to-one model. User threads = many-to-one.
If we have the operating system create the thread (a kernel thread), let's theoretically call what is being executed a "user thread." Each user thread maps to/executes in one and only one kernel thread. This is the one-to-one model.
The many-to-one model is the same as what is normally called the "user threading model."
The terminology is starting to get nonsensical because there is only one thread but we are calling it a user thread mapped to a kernel thread.
The one-to-one model is what is normally called the kernel threading model.
Lastly, we get to the many-to-many model. It is theoretical BS. In theory, there could be many user threads mapped to many kernel threads; in other words, a single user thread could execute within different kernel threads. I have never heard of a system implementing threads this way, and I cannot imagine any practicable advantage of such a system.
-=-=-=-=-=-=-=-=-=-=-=-=-
As to your last question: in some operating systems, blocking system calls also block the timers used to implement user threads (a/k/a many-to-one). If one thread makes a blocking call, it blocks all the other threads in the PROCESS from executing until the blocking call completes.
This blocking does not occur in all systems (something an OS textbook should point out).
I was looking at the differences between user-level threads and kernel-level threads, which I basically understood.
What's not clear to me is the point of implementing user-level threads at all.
If the kernel is unaware of the existence of multiple threads within a single process, then which benefits could I experience?
I have read a couple of articles that stated user-level implementation of threads is advisable only if such threads do not perform blocking operations (which would cause the entire process to block).
This being said, what's the difference between a sequential execution of all the threads and a "parallel" execution of them, considering they cannot take advantage of multiple processors and independent scheduling?
An answer to a previously asked question (similar to mine) was something like:
No modern operating system actually maps n user-level threads to 1
kernel-level thread.
But for some reason, many people on the Internet state that user-level threads can never take advantage of multiple processors.
Could you help me understand this, please?
I strongly recommend Modern Operating Systems 4th Edition by Andrew S. Tanenbaum (starring in shows such as the debate about Linux; also participating: Linus Torvalds). Costs a whole lot of bucks but it's definitely worth it if you really want to know stuff. For eager students and desperate enthusiasts it's great.
Your questions answered
[...] what's not clear to me is the point of implementing User-level threads
at all.
Read my post. It is comprehensive, I daresay.
If the kernel is unaware of the existence of multiple threads within a
single process, then which benefits could I experience?
Read the section "Disadvantages" below.
I have read a couple of articles that stated that user-level
implementation of threads is advisable only if such threads do not
perform blocking operations (which would cause the entire process to
block).
Read the subsection "No coordination with system calls" in "Disadvantages."
All citations are from the book I recommended in the top of this answer, Chapter 2.2.4, "Implementing Threads in User Space."
Advantages
Enables threads on systems without threads
The first advantage is that user-level threads are a way to work with threads on a system without threads.
The first, and most obvious, advantage is that
a user-level threads package can be implemented on an operating system that does not support threads. All operating systems used to
fall into this category, and even now some still do.
No kernel interaction required
A further benefit is the light overhead when switching threads, as opposed to switching to kernel mode, doing stuff, and switching back. The lighter thread switching is described like this in the book:
When a thread does something that may cause it to become blocked
locally, for example, waiting for another thread in its process to
complete some work, it calls a run-time system procedure. This
procedure checks to see if the thread must be put into blocked state.
If so, it stores the thread's registers (i.e., its own) [...] and
reloads the machine registers with the new thread’s saved values. As soon as the stack
pointer and program counter have been switched, the new thread comes
to life again automatically. If the machine happens to have an
instruction to store all the registers and another one to load them
all, the entire thread switch can be done in just a handful of
instructions. Doing thread switching like this is at least an order of
magnitude—maybe more—faster than trapping to the kernel and is a
strong argument in favor of user-level threads packages.
This efficiency is also nice because it spares us from incredibly heavy context switches and all that stuff.
Individually adjusted scheduling algorithms
Also, since there is no central scheduling algorithm, every process can have its own scheduling algorithm and is way more flexible in its variety of choices. In addition, the "private" scheduling algorithm is way more flexible concerning the information it gets from the threads. The amount of information can be adjusted manually and per-process, so it's very finely grained. This is because, again, there is no central scheduling algorithm needing to fit the needs of every process; a central one has to be very general and must deliver adequate performance in every case. User-level threads allow an extremely specialized scheduling algorithm.
This is only restricted by the disadvantage "No automatic switching to the scheduler."
They [user-level threads] allow each process to have its own
customized scheduling algorithm. For some applications, for example,
those with a garbage-collector thread, not having to worry about a
thread being stopped at an inconvenient moment is a plus. They also
scale better, since kernel threads invariably require some table space
and stack space in the kernel, which can be a problem if there are a
very large number of threads.
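As a hedged illustration of such a per-process policy, here is a toy priority scheduler; all names are invented, and a real run-time would integrate this with its context-switching code:

/* Hypothetical run queue with per-thread priorities; the process's
   own run-time picks the next thread, using any policy it likes. */
struct uthread {
    int priority;   /* higher runs first */
    int runnable;   /* 1 if ready to execute */
    /* saved context would live here */
};

static struct uthread threads[8];

static int pick_next(void) {
    int best = -1;
    for (int i = 0; i < 8; i++)
        if (threads[i].runnable &&
            (best < 0 || threads[i].priority > threads[best].priority))
            best = i;
    return best;    /* -1 means nothing is runnable */
}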
Disadvantages
No coordination with system calls
The user-level scheduling algorithm has no idea if some thread has called a blocking read system call. OTOH, a kernel-level scheduling algorithm would've known because it can be notified by the system call; both belong to the kernel code base.
Suppose that a thread reads from the keyboard before any keys have
been hit. Letting the thread actually make the system call is
unacceptable, since this will stop all the threads. One of the main
goals of having threads in the first place was to allow each one to
use blocking calls, but to prevent one blocked thread from affecting
the others. With blocking system calls, it is hard to see how this
goal can be achieved readily.
He goes on to say that system calls could be made non-blocking, but that would be very inconvenient and compatibility with existing OSes would be drastically hurt.
Mr. Tanenbaum also says that the library wrappers around the system calls (as found in glibc, for example) could be modified to predict when a system call blocks, using select, but he remarks that this is inelegant.
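A minimal sketch of such a wrapper, assuming a hypothetical user-thread run-time that exposes a uthread_yield() call (the name is invented; glibc provides no such thing):

#include <sys/select.h>
#include <sys/types.h>
#include <unistd.h>

void uthread_yield(void);   /* hypothetical: let another user thread run */

/* Wrapper around read(): if the fd isn't readable yet, yield to other
   user threads instead of blocking the whole process in the kernel. */
ssize_t wrapped_read(int fd, void *buf, size_t n) {
    for (;;) {
        fd_set set;
        FD_ZERO(&set);
        FD_SET(fd, &set);
        struct timeval zero = {0, 0};   /* poll, don't block */
        if (select(fd + 1, &set, NULL, NULL, &zero) > 0)
            return read(fd, buf, n);    /* data is ready; won't block */
        uthread_yield();
    }
}

A real run-time would not busy-poll like this; it would collect the waiting file descriptors and block in a single select only when no thread is runnable.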
Building upon that, he says that threads do block often, that frequent blocking requires many system calls, and that many system calls are bad. And without blocking, threads become less useful:
For applications that are essentially entirely CPU bound and rarely
block, what is the point of having threads at all? No one would
seriously propose computing the first n prime numbers or playing chess
using threads because there is nothing to be gained by doing it that
way.
Page faults block per-process if unaware of threads
The OS has no notion of threads. Therefore, if a page fault occurs, the whole process will be blocked, effectively blocking all user-level threads.
Somewhat analogous to the problem of blocking system calls is the
problem of page faults. [...] If the program calls or jumps to an
instruction that is not in memory, a page fault occurs and the
operating system will go and get the missing instruction (and its
neighbors) from disk. [...] The process is blocked while the necessary
instruction is being located and read in. If a thread causes a page
fault, the kernel, unaware of even the existence of threads, naturally
blocks the entire process until the disk I/O is complete, even though
other threads might be runnable.
I think this can be generalized to all interrupts.
No automatic switching to the scheduler
Since there is no per-process clock interrupt, a thread keeps the CPU forever unless some OS-dependent mechanism (such as a context switch) occurs or it voluntarily releases the CPU.
This prevents usual scheduling algorithms from working, including the Round-Robin algorithm.
[...] if a thread starts running, no other thread in that process
will ever run unless the first thread voluntarily gives up the CPU.
Within a single process, there are no clock interrupts, making it
impossible to schedule processes round-robin fashion (taking turns).
Unless a thread enters the run-time system of its own free will, the scheduler will never get a chance.
He says that a possible solution would be
[...] to have the run-time system request a clock signal (interrupt) once a
second to give it control, but this, too, is crude and messy to
program.
I would even go further and say that such a "request" would require a system call to happen, whose drawback is already explained in "No coordination with system calls." Without a system call, the program would need free access to the timer, which is a security hole and unacceptable in modern OSes.
What's not clear to me is the point of implementing user-level threads at all.
User-level threads largely came into the mainstream due to Ada and its requirement for threads (tasks in Ada terminology). At the time, there were few multiprocessor systems and most multiprocessors were of the master/slave variety. Kernel threads simply did not exist. User threads had to be created to implement languages like Ada.
If the kernel is unaware of the existence of multiple threads within a single process, then which benefits could I experience?
If you have kernel threads, multiple threads within a single process can run simultaneously. With user threads, the threads always execute interleaved.
Using threads can simplify some types of programming.
I have read a couple of articles that stated user-level implementation of threads is advisable only if such threads do not perform blocking operations (which would cause the entire process to block).
That is true on Unix, though maybe not on all Unix implementations. User threads on many operating systems function perfectly fine with blocking I/O.
This being said, what's the difference between a sequential execution of all the threads and a "parallel" execution of them, considering they cannot take advantage of multiple processors and independent scheduling?
With user threads, there is never parallel execution. With kernel threads, there can be parallel execution IF there are multiple processors. On a single-processor system, there is not much advantage to using kernel threads over user threads (contra: note the blocking I/O issue on Unix and user threads).
But for some reason, many people on the Internet state that user-level threads can never take advantage of multiple processors.
With user threads, the process manages its own "threads" by interleaving execution within itself. The process can only have a thread run on the processor that the process is running on.
If the operating system provides system services to schedule code to run on a different processor, user threads could run on multiple processors.
I conclude by saying that for practical purposes there are no advantages to user threads over kernel threads. There are those who will assert that there are performance advantages, but for there to be such an advantage it would be system dependent.
I am learning about operating systems, and I am confused about the real relationship between kernel-level threads and user-level threads. The staff just said they are mapped. I just wonder how they are mapped, and what that is for.
Thank you.
All code at some point executes on a kernel-level thread. A user-level thread can be thought of as an abstraction: user threads work as if they were kernel threads, but it is up to the language or platform implementing those user threads to define how they work.
They might be mapped on a 1:1 basis to a kernel thread, but there might also be a number of user threads sharing the same kernel thread (and in this case it is the platform/language that provides the user threads that takes care of switching between different user threads during the processor time given to the single kernel thread running them).