Thread context switch Vs. process context switch - multithreading

Could anyone tell me exactly what is done in each situation? What is the main cost of each?

The main distinction between a thread switch and a process switch is that during a thread switch, the virtual memory space remains the same, while it does not during a process switch.
Both types involve handing control over to the operating system kernel to perform the context switch. The process of switching in and out of the OS kernel along with the cost of switching out the registers is the largest fixed cost of performing a context switch.
A more fuzzy cost is that a context switch messes with the processor's caching mechanisms. Basically, when you context switch, all of the memory addresses that the processor "remembers" in its cache effectively become useless. The one big distinction here is that when you change virtual memory spaces, the processor's Translation Lookaside Buffer (TLB) or equivalent gets flushed, making memory accesses much more expensive for a while. This does not happen during a thread switch.

Process context switching involves switching the memory address space. This includes memory addresses, mappings, page tables, and kernel resources—a relatively expensive operation. On some architectures, it even means flushing various processor caches that aren't sharable across address spaces. For example, x86 has to flush the TLB and some ARM processors have to flush the entirety of the L1 cache!
Thread switching is context switching from one thread to another in the same process (switching from thread to thread across processes is just process switching). Switching processor state (such as the program counter and register contents) is generally very efficient.

First of all, the operating system brings the outgoing thread into kernel mode if it is not already there, because a thread switch can be performed only between threads that run in kernel mode. Then the scheduler is invoked to decide which thread to switch to. After that decision is made, the kernel saves the part of the thread context that is located in the CPU (the CPU registers) into a dedicated place in memory (frequently on top of the kernel stack of the outgoing thread). Then the kernel switches from the kernel stack of the outgoing thread to the kernel stack of the incoming thread. After that, the kernel loads the previously stored context of the incoming thread from memory into the CPU registers. And finally it returns control back to user mode, but in the user mode of the new thread.
In the case when the OS has determined that the incoming thread runs in another process, the kernel performs one additional step: it sets the new active virtual address space.
The main cost in both scenarios is related to cache pollution. In most cases, the working set used by the outgoing thread will differ significantly from the working set used by the incoming thread. As a result, the incoming thread will start its life with an avalanche of cache misses, flushing old and useless data from the caches and loading new data from memory. The same is true for the TLB (Translation Lookaside Buffer, which is on the CPU). In the case of a reset of the virtual address space (the threads run in different processes) the penalty is even worse, because a reset of the virtual address space leads to a flush of the entire TLB, even if the new thread actually needs to load only a few new entries. As a result, the new thread will start its time quantum with lots of TLB misses and frequent page walking. The direct cost of a thread switch is also not negligible (from ~250 up to ~1500-2000 cycles) and depends on the CPU complexity, the states of both threads, and the sets of registers they actually use.
P.S.: Good post about context switch overhead: http://blog.tsunanet.net/2010/11/how-long-does-it-take-to-make-context.html
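If you want a rough feel for the numbers on your own machine, the classic trick (the same idea as in the post above) is to ping-pong a byte between two processes over a pair of pipes, which forces a context switch on every hop. This is only a sketch: the result includes pipe/syscall overhead, so treat it as an upper bound:

    /* ctxswitch.c - crude context switch timing via pipe ping-pong.
       Compile: cc -O2 ctxswitch.c -o ctxswitch */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define ROUNDS 100000

    int main(void)
    {
        int ab[2], ba[2];            /* parent->child and child->parent pipes */
        char c = 'x';
        if (pipe(ab) < 0 || pipe(ba) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {           /* child: echo every byte back */
            for (int i = 0; i < ROUNDS; i++) {
                read(ab[0], &c, 1);
                write(ba[1], &c, 1);
            }
            _exit(0);
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ROUNDS; i++) {   /* parent: send, wait for echo */
            write(ab[1], &c, 1);
            read(ba[0], &c, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("~%.0f ns per switch (incl. pipe overhead)\n",
               ns / ROUNDS / 2);     /* each round trip is ~2 switches */
        return 0;
    }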

process switching: a transition between two memory-resident processes in a multiprogramming environment;
context switching: a change of context from an executing program to an interrupt service routine (ISR).

In a thread context switch, the virtual memory space remains the same, while it does not in the case of a process context switch. Also, a process context switch is costlier than a thread context switch.

I think the main difference is whether switch_mm() is called, which handles the memory descriptors of the old and new task. In the case of threads, the virtual memory address space is unchanged (threads share virtual memory), so very little has to be done, and the switch is therefore less costly.
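Conceptually the check looks something like this sketch (stand-in types and names for illustration, not the actual kernel code):

    /* Sketch: the page-table root is only reloaded when the memory
       descriptor actually changes, i.e. when crossing processes. */
    struct mm_struct { unsigned long pgd; };        /* page-table root */
    struct task { struct mm_struct *mm; };

    static void switch_mm_sketch(struct mm_struct *prev_mm,
                                 struct mm_struct *next_mm)
    {
        (void)prev_mm; (void)next_mm;
        /* a real kernel reloads the page-table base register here
           (CR3 on x86), implicitly flushing non-global TLB entries */
    }

    static void context_switch_sketch(struct task *prev, struct task *next)
    {
        if (prev->mm != next->mm)                   /* different process? */
            switch_mm_sketch(prev->mm, next->mm);   /* expensive path */
        /* threads of the same process share ->mm, so only registers
           and stacks need to be swapped */
    }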

Though a thread context switch needs to change the execution context (registers, stack pointer, program counter), it doesn't need to change the address space as a process context switch does. There's an additional cost when you switch address space: more memory accesses (paging, segmentation, etc.) and you have to flush the TLB when entering or exiting a new process...

In short, a thread context switch does not bring in a brand-new set of memory and a new PID; it uses the same ones as the parent, since the thread is running within the same process. A process switch, on the other hand, switches to a different process, and thus to different memory and a different PID.
There is a loooooot more to it. They have written books on it.
As for cost, a process context switch >>>> a thread switch, as you have to reset the address space, stack pointers, counters, etc.

Assuming the CPU the OS runs on has some high-latency devices attached, it makes sense to run another thread of the same process's address space while the high-latency device responds.
But if the high-latency device responds faster than the time needed to set up the page tables plus the virtual-to-physical translations for a NEW process, then it is questionable whether a switch is essential at all.
Also, a HOT cache (the data needed for running the process/thread is reachable in less time) is the better choice.

Related

Process Management in Linux Kernel

I have been studying the subsystems of the Linux kernel. There, it is written that the Linux kernel is responsible for context switching (letting another process use the CPU). Here are the steps the kernel goes through to do context switching:
1. The CPU (the actual hardware) interrupts the current process based on an internal timer, switches into kernel mode, and hands control back to the kernel.
2. The kernel records the current state of the CPU and memory, which will be essential to resuming the process that was just interrupted.
3. The kernel performs any tasks that might have come up during the preceding time slice (such as collecting data from input and output, or I/O, operations).
4. The kernel is now ready to let another process run. The kernel analyzes the list of processes that are ready to run and chooses one.
5. The kernel prepares the memory for this new process, and then prepares the CPU.
6. The kernel tells the CPU how long the time slice for the new process will last.
7. The kernel switches the CPU into user mode and hands control of the CPU to the process.
My problem is that I can't understand the 3rd step above. Can someone please shed some light on that sentence? Thanks!
The CPU (the actual hardware) interrupts the current process based on an internal timer, switches into kernel mode, and hands control back to the kernel.
Most task switches are caused by tasks blocking (because they have to wait for a mutex, disk IO, user IO, end-user action, etc).
It's better (more accurate) to say that "something" (IRQ, system call) causes a switch to the kernel's code before the kernel decides it wants to do a task switch, and that this "something" is not part of the task switch itself.
The kernel records the current state of the CPU and memory, which will be essential to resuming the process that was just interrupted.
Sort of. Because "something" (IRQ, system call) causes a switch to the kernel's code before the kernel decides it wants to do a task switch, all task switches only ever happen between kernel code (for one task) and kernel code (for another task). Because task switches only ever switch from kernel code to kernel code, the task switch itself doesn't need to care about user-space memory (which isn't so important for kernel code) or kernel memory (which is global/shared by all CPUs and all virtual address spaces). More: because some registers are "callee preserved" (by C calling conventions) and some are "constant as far as the kernel is concerned" (e.g. segment registers), the task switch code doesn't need to care about various parts of the CPU state either.
The kernel performs any tasks that might have come up during the preceding time slice (such as collecting data from input and output, or I/O, operations).
Also not part of the task switch (more "things that happen before or after the kernel decides to do a task switch").
The kernel is now ready to let another process run. The kernel analyzes the list of processes that are ready to run and chooses one.
Sort of; but it's not as simple as a list, and sometimes (e.g. a high-priority real-time thread unblocks and preempts a less important task) the kernel knows which task it needs to switch to without doing anything extra.
The kernel prepares the memory for this new process, and then prepares the CPU.
For memory, the kernel mostly just loads a new "reference to the new task's virtual address space" (e.g. a single mov cr3, ... instruction on 80x86). For CPU state, it's the reverse of "2. The kernel records the current state of the CPU ..." above (loading what was previously saved, where some CPU state isn't loaded and isn't saved).
The kernel tells the CPU how long the time slice for the new process will last.
Yes.
The kernel switches the CPU into user mode and hands control of the CPU to the process.
Not really. It's better (more accurate) to say after the kernel has finished doing a task switch, the kernel code for the new task does whatever it wants (and eventually may return to user-space); and that "things that happen after task switch finished" are not part of the task switch.

Context switch between kernel threads vs user threads

Copy pasted from this link:
Thread switching does not require Kernel mode privileges.
User level threads are fast to create and manage.
Kernel threads are generally slower to create and manage than the user threads.
Transfer of control from one thread to another within the same process requires a mode switch to the Kernel.
I never came across these points while reading standard operating systems reference books. Though these points sound logical, I wanted to know how they play out in Linux. To be precise:
Can someone give the detailed steps involved in context switching between user threads and kernel threads, so that I can find where the steps differ between the two?
Can someone explain the difference with an actual context switch example or code? Maybe the system calls involved (in the case of context switching between kernel threads) and the thread library calls involved (in the case of context switching between user threads).
Can someone link me to the Linux source code (say, on GitHub) that handles context switching?
I also wonder why a context switch between kernel threads requires changing to kernel mode. Aren't we already in kernel mode for the first thread?
Can someone give the detailed steps involved in context switching between user threads and kernel threads, so that I can find where the steps differ between the two?
Let's imagine a thread needs to read data from a file, but the file isn't cached in memory and disk drives are slow so the thread has to wait; and for simplicity let's also assume that the kernel is monolithic.
For kernel threading:
thread calls a "read()" function in a library or something; which must cause at least a switch to kernel code (because it's going to involve device drivers).
the kernel adds the IO request to the disk driver's "queue of possibly many pending requests"; realizes the thread will need to wait until the request completes, sets the thread to "blocked waiting for IO", and switches to a different thread (which may belong to a completely different process, depending on global thread priorities). The kernel returns to the user-space of whatever thread it switched to.
later; the disk hardware causes an IRQ which causes a switch back to the IRQ handler in kernel code. The disk driver finishes up the work it had to do for the (currently blocked) thread and unblocks that thread. At this point the kernel might decide to switch to the "now unblocked" thread; and the kernel returns to the user-space of the "now unblocked" thread.
For user threading:
thread calls a "read()" function in a library or something; which must cause at least a switch to kernel code (because it's going to involve device drivers).
the kernel adds the IO request to the disk driver's "queue of possibly many pending requests"; realizes the thread will need to wait until the request completes but can't take care of that because some fool decided to make everything worse by doing thread switching in user space, so the kernel returns to user-space with "IO request has been queued" status.
after the pointless extra overhead of switching back to user-space; the user-space scheduler does the thread switch that the kernel could have done. At this point the user-space scheduler will either tell the kernel it has nothing to do (and you'll have more pointless extra overhead switching back to the kernel); or the user-space scheduler will do a thread switch to another thread in the same process (which may be the wrong thread, because a thread in a different process is higher priority).
later; the disk hardware causes an IRQ which causes a switch back to the IRQ handler in kernel code. The disk driver finishes up the work it had to do for the (currently blocked) thread; but the kernel isn't able to do the thread switch to unblock the thread because some fool decided to make everything worse by doing thread switching in user space. Now we've got a problem - how does the kernel inform the user-space scheduler that the IO has finished? To solve this (without any "user-space scheduler running zero threads constantly polls the kernel" insanity) you have to have some kind of "kernel puts a notification of IO completion on some kind of queue and (if the process was idle) wakes the process up", which (on its own) will be more expensive than just doing the thread switch in the kernel. Of course if the process wasn't idle then code in user-space is going to have to poll its notification queue to find out if/when the "notification of IO completion" arrives, and that's going to increase latency and overhead. In any case, after lots of stupid, pointless, and avoidable overhead, the user-space scheduler can do the thread switch.
Can someone explain the difference with an actual context switch example or code? Maybe the system calls involved (in the case of context switching between kernel threads) and the thread library calls involved (in the case of context switching between user threads).
The actual low-level context switch code typically begins with something like:
save whichever registers are "callee preserved" according to the calling conventions on the stack
save the current stack top in some kind of "thread info structure" belonging to the old thread
load a new stack top from some kind of "thread info structure" belonging to the new thread
pop whichever registers are "callee preserved" according to the calling conventions
return
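Putting those five steps together, a minimal hypothetical version for x86-64 (System V calling conventions) might look like the sketch below. This illustrates the pattern above; it is not the Linux implementation:

    /* switch_stacks.c - toy cooperative stack switch for x86-64 (SysV ABI).
       Compile as part of a larger program: cc -c switch_stacks.c */
    struct thread_info { void *stack_top; };   /* saved stack pointer */

    /* switch_stacks(old, new): save callee-preserved registers, swap
       stacks, and return on the new thread's stack. */
    void switch_stacks(struct thread_info *old, struct thread_info *new);

    __asm__(
        ".globl switch_stacks\n"
        "switch_stacks:\n"
        /* 1. push callee-preserved registers onto the old stack */
        "  pushq %rbp\n  pushq %rbx\n  pushq %r12\n"
        "  pushq %r13\n  pushq %r14\n  pushq %r15\n"
        /* 2. save the old stack top (%rdi = old) */
        "  movq %rsp, (%rdi)\n"
        /* 3. load the new stack top (%rsi = new) */
        "  movq (%rsi), %rsp\n"
        /* 4. pop the new thread's callee-preserved registers */
        "  popq %r15\n  popq %r14\n  popq %r13\n"
        "  popq %r12\n  popq %rbx\n  popq %rbp\n"
        /* 5. return onto the new thread's saved return address */
        "  retq\n"
    );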
However:
usually (for modern CPUs) there's a relatively large amount of "SIMD register state" (e.g. for 80x86 with support for AVX-512 I think it's over 4 KiB of stuff). CPU manufacturers often have mechanisms to avoid saving parts of that state if it wasn't changed, and to (optionally) postpone the loading of (pieces of) that state until it's actually used (and avoid it completely if it's not actually used). All of that requires kernel support.
if it's used for task switches and not just thread switches, you might need some kind of "if virtual address space needs to change { change virtual address space }" on top of that
normally you want to keep track of statistics, like how much CPU time a thread has used. This requires some kind of "thread_info.time_used += now() - time_at_last_thread_switch;", which gets difficult/ugly when "process switching" is separated from "thread switching".
normally there's other state (e.g. pointer to thread local storage, special registers for performance monitoring and/or debugging, ...) that may need to be saved/loaded during thread switches. Often this state is not directly accessible in user code.
normally you also want to set a timer to expire when the thread has used too much time; either because you're doing some kind of "time multiplexing" (e.g. a round-robin scheduler) or because it's a cooperative scheduler where you need some kind of "terminate this task after 5 seconds of not responding in case it goes into an infinite loop" safeguard.
this is just the low level task/thread switching in isolation. There is almost always higher level code to select a task to switch to, handle "thread used too much CPU time", etc.
Can someone link me to the Linux source code (say, on GitHub) that handles context switching?
Someone probably can't. It's not one line; it's many lines of assembly for each different architecture, plus extra higher-level code (for timers, support routines, the "select a task to switch to" code, for exception handlers to support "lazy SIMD state load", ...); which probably all adds up to something like 10 thousand lines of code spread across 50 files.
I also wonder why a context switch between kernel threads requires changing to kernel mode. Aren't we already in kernel mode for the first thread?
Yes; often you're already in kernel code when you find out that a thread switch is needed.
Rarely/sometimes the kernel isn't involved (mostly only due to communication between threads belonging to the same process - e.g. 2 or more threads in the same process trying to acquire the same mutex/semaphore at the same time, or threads sending data to each other and waiting for data from each other to arrive). In some cases (which are almost always massive design failures - e.g. extreme lock contention problems, failure to use "worker thread pools" to limit the number of threads needed, etc.) it's possible for this to be the dominant cause of thread switches, and therefore possible that doing thread switches in user space can be beneficial (e.g. as a work-around for the massive design failures).
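To make the "kernel isn't involved" case concrete, here is a toy user-space switch using the obsolescent-but-still-available POSIX <ucontext.h> API; real user-space thread libraries are built differently, but the mechanism is the same idea:

    /* uctx.c - two user-space contexts handing control back and forth.
       Compile: cc uctx.c -o uctx */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, worker_ctx;

    static void worker(void)
    {
        puts("worker: running");
        swapcontext(&worker_ctx, &main_ctx);  /* pure user-space switch */
        puts("worker: resumed, finishing");
    }                                         /* returning goes to uc_link */

    int main(void)
    {
        static char stack[64 * 1024];         /* worker's private stack */

        getcontext(&worker_ctx);
        worker_ctx.uc_stack.ss_sp   = stack;
        worker_ctx.uc_stack.ss_size = sizeof stack;
        worker_ctx.uc_link          = &main_ctx;
        makecontext(&worker_ctx, worker, 0);

        swapcontext(&main_ctx, &worker_ctx);  /* save main, run worker */
        puts("main: back from worker");
        swapcontext(&main_ctx, &worker_ctx);  /* resume worker where it yielded */
        puts("main: worker done");
        return 0;
    }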
Don't limit yourself to Linux or even UNIX; they are neither the first nor the last word on systems or programming models. The synchronous execution model dates back to the early days of computing, and it is not particularly well suited to larger-scale concurrent and reactive programming.
Golang, for example, employs a great many lightweight user threads -- goroutines -- and multiplexes them on a smaller set of heavyweight kernel threads to produce a more compelling concurrency paradigm. Some other programming systems take similar approaches.

Difference between Kernel, Kernel-Thread and User-Thread

I'm not sure if I totally understand the above-mentioned differences, so I'd like to explain them on my own, and you can interrupt me as soon as I get something wrong:
"A kernel is the initial piece of code which creates kernel-threads. Kernel threads are processes managed by the kernel. user-threads are part of a process. If you have a single-threaded process, than the whole process itself would be a user-thread. User-Threads make system-calls and this system-calls are served by a specific kernel-Thread which belongs to the calling user-threads. So for ervery user-thread which make a system call, a kernel-thread is created and after the kernel-thread has done its job, it gives control back to the user-thread and then the kernel-threas is destroyed."
Would this be ok?
Thank you!
Many greetings from Germany!
I don't think that's a very good mental model for kernel vs user. I think it's useful to look at the implementation of these abstractions in order to fully understand them:
What is a Kernel?
A kernel is basically just a piece of memory. It was privileged enough to be loaded before anything else, thereby allowing it to set the CPU's interrupt vectors.
Interrupts control everything, including I/O, timers, and virtual memory. That means that the kernel gets to decide how all that is handled.
A library is also just a piece of memory, and you can very well look at the kernel as the "system call library", among other things. But because the kernel represents the hardware, that piece of memory is shared among everyone.
Kernel Mode vs User Mode
Kernel mode is the CPU's "natural" mode, with no restrictions (on x86 CPUs - "ring 0"). User mode (on x86 CPUs - "ring 3") is when the CPU is instructed to trigger an interrupt whenever certain instructions are used or whenever some memory locations are accessed. This allows the kernel to have the CPU execute specific kernel code when the user tries to access kernel memory or memory representing I/O ports or hardware memory such as the GPU's frame buffer.
Processes and Threads
A process is also just a piece of memory, consisting of its own heap and the memory used by libraries, among which is the kernel.
A thread (= a unit of scheduling) is just a stack with an ID that the kernel knows of and tracks. That's the call stack that the CPU uses when the thread is running. User threads have 2 stacks: one for user mode and one for kernel mode - but they still have the same ID.
Because the kernel controls timers, it sets up a timer to go off e.g. every 1 ms. When the timer triggers ("timer interrupt"), the CPU runs the callback that the kernel set up for that interrupt, where the kernel can see that the current thread has been running for a while and decide to unschedule it and schedule another thread instead.
Virtual Memory Context
By "virtual memory context" I mean all the memory that can be accessed by the CPU. This includes all the memory of the process - including the user-mode heap and memory of libraries, user-mode call stacks of all process threads, kernel-mode stack of all threads in the system, the kernel's heap memory, I/O ports, and hardware memory.
When an interrupt or a system call occurs, the virtual memory context doesn't change; only a CPU flag is flipped (i.e. from ring 3 to ring 0) and the CPU is now back in its "natural" kernel mode where it can freely access kernel memory, I/O ports, and hardware memory.
When a new process is created, what actually happens is that a new thread is created, and assigned a new virtual memory context. Therefore, every process starts as single-threaded. That thread can later ask the kernel via a system call to create more threads (= stacks) which share its virtual memory context (= process), or ask the kernel to create more threads, each with a new virtual memory context (= new processes).
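A small C illustration of that last point: a new thread sees the creator's memory, while a fork'd child only modifies its own copy (nothing exotic here, just pthread_create() and fork()):

    /* shared.c - a thread shares the address space, a child process doesn't.
       Compile: cc shared.c -o shared -pthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int counter = 0;

    static void *thread_fn(void *arg)
    {
        (void)arg;
        counter = 1;              /* same virtual memory context as main */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, thread_fn, NULL);
        pthread_join(t, NULL);
        printf("after thread: counter = %d\n", counter);  /* prints 1 */

        if (fork() == 0) {        /* child: new virtual memory context */
            counter = 42;         /* modifies the child's copy only */
            _exit(0);
        }
        wait(NULL);
        printf("after fork:   counter = %d\n", counter);  /* still 1 */
        return 0;
    }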
Kernel Threads
Like any other library, the kernel can have its own background threads for optimization purposes. When such a need arises (which can happen in the memory context of any process when servicing a system call), the kernel will create new threads and give them a special memory context, which is a context that only contains the kernel's memory, with no access to memory of any process.
You're mixing up a few somewhat different concepts.
To follow from what you wrote, there is a Kernel, which is a piece of code that handles all internal operations of the Operating System. It does create kernel threads, but the Kernel threads are nothing special. They are just threads which run in "Kernel-Mode" and are not associated with any "User-Mode" process.
Now we have a concept which is lacking from your explanation and is the key to understanding it better. Kernel-Mode (sometimes called system mode), along with User-Mode, make up the CPU modes available to the OS.
Kernel-Mode is a kind of trusted execution mode, which allows the code to access any memory and execute any instruction. It handles I/O and system interrupts.
User-Mode is a limited mode, which does not allow the executing code to access any memory address except those associated with the User-Mode process.
Also, User-Mode code cannot access I/O or many OS-related functions (such as handle or process creation). For these operations, User-Mode code should call into Kernel-Mode, via a system call (as you have correctly mentioned).
A system call is a special CPU instruction which switches the CPU mode to Kernel-Mode and starts executing special code provided by the OS which dispatches the different system calls. So the work is NOT scheduled for a Kernel-Mode thread; instead, the OS (kernel/trusted) code is executed in the context of the same User-Mode thread. The only thing that happens is that the CPU mode changes to Kernel-Mode.
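You can observe this from user space on Linux: a raw system call and the libc wrapper both trap into kernel mode and come straight back on the same thread, with no kernel thread created or destroyed along the way (Linux-specific sketch):

    /* sys.c - compile: cc sys.c -o sys */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        long direct = syscall(SYS_getpid);  /* raw syscall path */
        pid_t wrapped = getpid();           /* libc wrapper, same thing */
        printf("direct=%ld wrapper=%d\n", direct, (int)wrapped);
        return 0;
    }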
As for completing jobs in a kernel thread: although in some cases some operations (e.g. I/O) might be scheduled for a separate kernel thread to complete, kernel threads are not created and destroyed in the process of a system call.
Backed by:
10+ years of driver development experience
Also:
http://www.linfo.org/kernel_mode.html
https://learn.microsoft.com/en-us/windows-hardware/drivers/gettingstarted/user-mode-and-kernel-mode

Is a process kept in main memory while it is in the blocked/suspended state?

When a process P1 is in a blocked or suspended state, will the memory management system swap it out of main memory for room for an active process?
And if the process is due to come back, where are the program's procedure call stack, the contents of the program counter (PC), and the contents of the program status word (PSW) stored? Does the OS keep it all in secondary memory, or is part of the suspended/blocked process P1 kept in main memory?
So I'm guessing when a process is swapped out of memory and put in a suspended state, all of its resident pages are moved out. When the process is resumed, all of the pages that were previously in main memory are returned to main memory
Think in terms of pages, not processes.
Even an active process may have many pages evicted out of physical memory and into swap if the system is under memory pressure.
So, sure, a suspended process may have effectively all of its pages swapped out entirely.
But it is unlikely to have all pages swapped in simply because the process woke up. Doing so would be a waste of CPU, I/O and memory. Instead, pages will be brought back as needed (general case -- some pagers may bring back sets of pages heuristically).
If a process is active, then it won't be swapped out, so the dynamic state of the lowest call stack (all the register noise, red zone on stack, etc... ) isn't in play when the swap happens.
I.e. for a process to be swapped out the threads need to be blocked on something, typically a call into the kernel or into a system library that is blocking. Registers will be out of play, etc... Thus, the execution state that needs to be swapped out is pretty straightforward as the call return state will be preserved in the thread state itself (as the thread is blocked).
In fact, things like the PC and the PSW are preserved more as a part of the context switching subsystem than paging. I.e. on a typical system, you'll likely have several hundred, maybe thousands, of threads running at once across the N physical cores of the CPU. The concurrency support of the architecture is where you'll find how that state is maintained.

Memory addressing in assembly / multitasking

I understand how programs in machine code can load values from memory in to registers, perform jumps, or store values in registers to memory, but I don't understand how this works for multiple processes. A process is allocated memory on the fly, so must it use relative addressing? Is this done automatically (meaning there are assembly instructions that perform relative jumps, etc.), or does the program have to "manually" add the correct offset to every memory position it addresses.
I have another question regarding multitasking that is somewhat related. How does the OS, which isn't running, stop a thread and move on to the next. Is this done with timed interrupts? If so, then how can the values in registers be preserved for a thread. Are they saved to memory before control is given to a different thread? Or, rather than timed interrupts, does the thread simply choose a good time to give up control. In the case of timed interrupts, what happens if a thread is given processor time and it doesn't need it. Does it have to waste it, can it call the interrupt manually, or does it alert the OS that it doesn't need much time?
Edit: Or are executables edited before being run to compensate for the correct offsets?
That's not how it works. All modern operating systems virtualize the available memory, giving every process the illusion that it has 2 gigabytes of memory (or more) and doesn't have to share it with anybody. The key component in a machine that does this is the MMU, nowadays built into the processor itself. Another core feature of this virtualization is that it isolates processes. One misbehaving process cannot bring another one down with it.
Yes, a clock tick interrupt is used to interrupt the currently running code. Processor state is simply saved on the stack. The operating system scheduler then checks if any other thread is ready to run and has a high enough priority to get first in line. Some extra code ensures that everybody gets a fair share. Then it's just a matter of setting the MMU to resume execution on the other thread. If no thread is ready to run, then the CPU gets physically halted with the HALT instruction, to be woken again by the next clock interrupt.
This is the ten-thousand-foot view; it is well covered in any book about operating system design.
A process is allocated memory on the fly, so must it use relative addressing?
No, it can use relative or absolute addressing depending on what it is trying to address.
At least historically, the various addressing modes were more about local versus remote memory. Relative addressing was for memory addresses close to the current address, while absolute was more expensive but could address anything. With modern virtual memory systems, these distinctions may no longer be necessary.
A process is allocated memory on the fly, so must it use relative addressing? Is this done automatically (meaning there are assembly instructions that perform relative jumps, etc.), or does the program have to "manually" add the correct offset to every memory position it addresses.
I'm not sure about this one. This is normally taken care of by the compiler. Again, modern virtual memory systems make this complexity unnecessary.
Are they saved to memory before control is given to a different thread?
Yes. Typically all of the state (registers, etc.) is stored in a process control block (PCB), a new context is loaded, the registers and other context are loaded from the new PCB, and execution begins in the new context. The PCB can be stored on the stack or in kernel memory, or it can utilize processor-specific operations to optimize this process.
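As a rough illustration (field names here are hypothetical, not taken from any particular kernel), a PCB tends to look something like:

    /* Sketch of a process control block; illustrative fields only. */
    struct pcb {
        int           pid;                        /* process identifier */
        enum { READY, RUNNING, BLOCKED } state;   /* scheduling state */
        unsigned long pc;                         /* saved program counter */
        unsigned long sp;                         /* saved stack pointer */
        unsigned long regs[16];                   /* saved general registers */
        unsigned long page_table_root;            /* address-space mapping */
        struct pcb   *next;                       /* run/wait queue link */
    };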
Or, rather than timed interrupts, does the thread simply choose a good time to give up control.
The thread can yield control -- put itself back at the end of the run queue. It can also wait for some IO or sleep. Thread libraries then put the thread in wait queues and switch to another context. When the IO is ready or the sleep expires, the thread is put back into the run queue. The same happens with mutex locks. It waits for the lock in a wait queue. Once the lock is available, the thread is put back into the run queue.
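For example (POSIX/Linux), both of the following are voluntary ways to give up the CPU: sched_yield() puts the caller at the back of the run queue, while nanosleep() parks it on a wait queue until the timer expires:

    /* yield.c - compile: cc yield.c -o yield */
    #include <sched.h>
    #include <time.h>

    int main(void)
    {
        sched_yield();                          /* back of the run queue */
        struct timespec ts = { 0, 1000000 };    /* 1 ms */
        nanosleep(&ts, NULL);                   /* wait queue until timer fires */
        return 0;
    }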
In the case of timed interrupts, what happens if a thread is given processor time and it doesn't need it. Does it have to waste it, can it call the interrupt manually, or does it alert the OS that it doesn't need much time?
Either the thread can run (perform CPU instructions) or it is waiting -- either on IO or a sleep. It can ask to yield but typically it is doing so by [again] sleeping or waiting on IO.
I probably walked into this question quite late, but then, it may be of use to some other programmers. First - the theory.
The modern-day operating system virtualizes memory, and to do so it maintains, within its system memory area, a series of page pointers. Each page is of a fixed size (usually 4K), and when any program asks for memory, it is allocated memory addresses that are virtualized through those page pointers. This approximates the behaviour of the "segment" registers in the prior generation of processors.
Now when the scheduler decides to get another process running, it may or may not keep the previous process in memory. If it keeps it in memory, then all the scheduler does is save the entire register snapshot (now including the YMM registers - this was a complex issue earlier, as there was no single instruction that saved the entire context: read up on XSAVE), and this has a fixed format (available in the Intel SW manual). This is stored in the memory space of the scheduler itself, along with information on the memory pages that were being used.
If, however, the scheduler needs to "dump" the current process context that is about to go to sleep to the hard disk - this situation usually arises when the process that is waking up needs an extraordinary amount of memory - then the scheduler writes the memory pages into disk blocks (the pagefile - a reserved area of the disk - also the source of the "old grandmother wisdom" that the pagefile must be equal in size to real memory) and preserves the memory page pointer addresses as offsets into the pagefile. When the process wakes up, the scheduler reads the offset addresses from the pagefile, allocates real memory, populates the memory page pointers, and then loads the contents from the disk blocks.
Now, to answer your specific questions:
1. Do you need to use only relative addressing, or can you use absolute?
Ans. You may use either - whatever you perceive as absolute is also relative, since the memory page pointer relativizes that address invisibly. There is no truly absolute memory address anywhere (including the I/O device memories) except in the kernel of the operating system itself. To test this, you may disassemble any .EXE program to see that the entry point is always CALL 0010, which clearly implies that each thread gets a different "0010" to start execution.
2. How do threads get time slices, and what if one surrenders its unused slice?
Ans. Threads usually get a slice - modern systems have 20 ms as the usual standard, though this is sometimes changed in special-purpose builds for servers that do not have many hardware interrupts to deal with - in order of their position on the process queue. A thread usually surrenders its slice by calling the function sleep(), which is a formal (and very nice) way to surrender the remaining part of the time slice. Most libraries implementing asynchronous reads, or interrupt actions, call sleep() internally, but in many instances top-level programs also call sleep() - e.g. to create a time gap. An invocation of sleep() will certainly change the process context - the CPU is not actually given the liberty to sleep using NOP.
The other method is to wait for an IO to complete, and this is handled differently. A program asking for an IO operation will cede its time slice, and the process scheduler flags the thread as being in a "WAITING FOR IO" state - and the thread will not be given a time slice until its intended IO is completed or timed out. This feature helps programmers, as they do not have to explicitly write a sleep_until_IO() kind of interface.
Trust this sets you going further in your explorations.
