Where is the Process Control Block saved?

My question is: where is the PCB of process A saved when a context switch happens (i.e. when the processor takes up process B)? Somebody told me that it is saved in kernel memory, but I didn't really understand that. Is it saved in RAM, or is it saved in the processor cache?

Different operating systems are built in different ways. With that said, in general, when a context switch happens, the state of the process being switched out is saved, and either a new context is set up for a new process being started, or the previously saved context of an already-running process is loaded. The context is saved in RAM; any other storage would be much too slow to be practical.
The processor cache isn't addressable memory in any system I'm aware of, so "storing saved context in processor cache" isn't something the operating system can do directly.
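To make that concrete, here is a minimal sketch of what such a saved context might look like, assuming a toy kernel; the structure and field names are illustrative only and do not correspond to any real operating system (Linux, for instance, keeps this state in its own task_struct and related structures). The point is simply that it is an ordinary data structure living in kernel RAM:

    /* Illustrative only: a toy "PCB" holding saved CPU state.
     * Real kernels keep far richer structures, but the principle is the
     * same: this lives in ordinary kernel memory, i.e. in RAM. */
    #include <stdint.h>

    struct toy_pcb {
        uint64_t pid;             /* process identifier                       */
        uint64_t regs[16];        /* general-purpose registers at switch time */
        uint64_t program_counter; /* where to resume execution                */
        uint64_t stack_pointer;   /* top of the process's kernel stack        */
        uint64_t page_table_base; /* e.g. the value reloaded into CR3 on x86  */
        int      state;           /* running / runnable / blocked             */
    };

    /* On a context switch, the kernel would fill one of these for the
     * outgoing process and load another for the incoming one. */
    struct toy_pcb pcb_table[64]; /* statically reserved in kernel memory */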

Related

How process and thread related to virtual memory

I'm new to Linux and computer architecture, and I have some questions on how processes and threads relate to virtual memory and physical memory (RAM). Below are my questions.
Q1 - When two processes (process A and process B) are running concurrently, if process A is running now, does process B's state (register values, heap objects, etc.) have to be pushed out and stored on disk (virtual memory)? And when the next context switch happens, is process B "recovered" from disk into RAM while process A's state is pushed to disk? Is my understanding correct?
Q2 - If my understanding in Q1 is correct, why not just save all processes in RAM? Normally we have large RAM, like 16 GB or 32 GB, so how about storing every process's state in RAM, and only when there are too many processes and RAM is about to run out, storing further processes' states on disk?
Q3 - How about threads? If there are multiple threads (e.g. thread A and thread B), when thread A is running, is thread B's state pushed out and stored on disk too?
is my understanding correct?
No, it's wrong. Waiting or blocked processes don't get swapped to disk. They wait in memory. Virtual memory is not on disk.
Also, on a system with two processors, two processes run concurrently, so both process A and process B can be running at the same time.
why not just save all processes in RAM?
This is exactly what happens. Every process's memory simply waits in RAM until the scheduler switches to that process.
Side note: if there is no RAM available, the system has swap available, and a process has been idle for some defined time, then it may get swapped out to disk, i.e. the process's memory may get moved to disk. But this doesn't happen immediately; it happens only after a long time and in certain situations.
is thread B's state pushed out and stored on disk too?
No.
Virtual memory is not about the physical location of memory. It's the other way round: virtual memory is a kind of abstraction that allows the system to change the physical location (if any) of a process's memory. The simplest explanation I can give: imagine a special CPU register that is added to each address upon dereferencing. A user-space program does *(int*)4, but it doesn't get the value behind the 4th byte of RAM; the special CPU register's value is added to the pointer value upon dereferencing. The register's value is configured by the system and can be different in different programs. So you can have exactly the same pointer values in two programs, yet they point to different locations. Of course, this is an over-over-simplification.
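As a rough illustration of that base-register idea (not of how real MMUs work, which use page tables), here is a small, self-contained sketch in which a per-process base value is added to every "virtual" address before memory is touched; all names and sizes here are made up for the example:

    /* Toy model of the base-register idea described above: the "hardware"
     * adds a per-process base to every virtual address before touching RAM.
     * Purely illustrative; real MMUs use page tables, not a single base. */
    #include <stdio.h>
    #include <stdint.h>

    static uint8_t physical_ram[1024];        /* pretend this is all of RAM */

    struct cpu { uintptr_t base_register; };  /* set by the OS per process  */

    /* Every "dereference" goes through this translation step. */
    static uint8_t load_byte(struct cpu *c, uintptr_t virtual_addr) {
        return physical_ram[c->base_register + virtual_addr];
    }

    int main(void) {
        physical_ram[100 + 4] = 0xAA;         /* data owned by "process A"  */
        physical_ram[500 + 4] = 0xBB;         /* data owned by "process B"  */

        struct cpu proc_a = { .base_register = 100 };
        struct cpu proc_b = { .base_register = 500 };

        /* Same virtual address 4, different physical locations and values. */
        printf("A sees 0x%02X, B sees 0x%02X\n",
               load_byte(&proc_a, 4), load_byte(&proc_b, 4));
        return 0;
    }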

How is user address memory organized?

I always read that, at any given time, the processor can only run one process at a time. So one and only one process is in state running.
However, we can have a number of runnable processes. These are all of these processes who are waiting for the scheduler to schedule their execution.
At any given time, do all these runnable processes exist in user address space? Or is only the currently running process in user address space, with the others brought back into RAM from disk only once they are scheduled? In that case, does the kernel keep the task descriptors of all runnable processes in its list even if they are on disk? I guess you can tell I am confused.
If the CPU supports virtual memory addressing, each process has its own unique view of memory. Two different processes that try to read from the same memory address will map to different locations in physical memory, unless the memory maps say otherwise (shared memory; DLL files, for instance, are mapped read-only like this).
If the CPU does not support virtual memory but only memory protection, the memory of the other processes is protected away, so that the running process can only access its own memory.
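A quick way to see that "unique view" in the virtual-memory case, assuming a Linux/Unix system with fork() (the variable name is just for the demo): parent and child print the same pointer value, yet observe different contents, because after the child's write the same virtual address refers to different physical pages in the two processes:

    /* Sketch of "same virtual address, different contents": after fork(),
     * parent and child print the same pointer value but see different data,
     * because each process has its own view of memory. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int shared_looking = 1;   /* not actually shared after fork() */

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            /* Copy-on-write gives the child its own physical page here. */
            shared_looking = 42;
            printf("child : addr=%p value=%d\n",
                   (void *)&shared_looking, shared_looking);
            return 0;
        }
        waitpid(pid, NULL, 0);
        printf("parent: addr=%p value=%d\n",
               (void *)&shared_looking, shared_looking);
        return 0;
    }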

Given the process ID, can I know whether it has accessed the CPU's cache memory recently?

I know the process ID of process X. After my process was preempted, when it was scheduled again, can I determine whether process X was scheduled in between?
Can I know whether process X updated the CPU cache, given its process ID?
Is there assembly code or an API to do this in Linux? Can anyone suggest coding examples or any technique?
It is not a "process" that accesses the CPU cache; it is any execution of machine instructions on a CPU core.
In particular, when a core is running in kernel mode it is by definition not running in a process, and it is obviously still using the CPU cache (since every memory access goes through the cache).
So your question does not make sense if you are speaking of the CPU cache.
The file system cache (sometimes called the page cache) is managed by the kernel, and you can't really attribute it to some specific process (e.g. two processes reading the same file would use the same cached data). It is related to the mere action of accessing a file's data (by whatever process does that). See e.g. linuxatemyram.
You might perhaps get some system-wide measure of the CPU cache or file system cache, probably through proc(5) (see also oprofile).
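For the scheduling half of the question (not the cache half), one thing proc(5) does expose is a pair of per-process context-switch counters in /proc/<pid>/status. Here is a rough sketch, assuming a Linux system; comparing these counters before and after your own time slice tells you whether process X was scheduled in between, but it says nothing about the CPU cache:

    /* Sketch: print the per-process context-switch counters that Linux
     * exposes in /proc/<pid>/status (voluntary_ctxt_switches and
     * nonvoluntary_ctxt_switches, see proc(5)). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void print_ctxt_switches(long pid) {
        char path[64], line[256];
        snprintf(path, sizeof path, "/proc/%ld/status", pid);

        FILE *f = fopen(path, "r");
        if (!f) { perror("fopen"); return; }

        while (fgets(line, sizeof line, f)) {
            if (strstr(line, "ctxt_switches"))   /* voluntary + nonvoluntary */
                fputs(line, stdout);
        }
        fclose(f);
    }

    int main(int argc, char **argv) {
        long pid = (argc > 1) ? atol(argv[1]) : 1;  /* default: PID 1 */
        print_ctxt_switches(pid);
        return 0;
    }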
Can I know whether process X updated the CPU cache, given its process ID?
If you are talking about the CPU cache, then no. Both the data cache and the instruction cache are transparent to the system. There is no way to find out whether they were updated by a program X, but they will certainly be used to speed up execution.

Not able to understand fork() description

I was studying virtual memory management from Galvin, and I am unable to understand this statement:
In addition to separating logical memory from physical memory, virtual
memory allows files and memory to be shared by two or more processes
through page sharing. This leads to the following benefits:
Virtual memory can allow pages to be shared during process creation with the fork() system call, thus speeding up process creation.
How can pages be shared with fork()? Please clarify.
I believe the text is referring to the copy-on-write optimisation done for fork().
Basically a fork() clones a process, duplicating its entire memory.
This can take a long time, especially for processes that use a lot of memory. Moreover, it's very common for a fork() to be immediately followed by an exec(), rendering the previous copy pointless.
Instead of doing all of that work for every fork() modern Unixes create the new process, but don't copy all of the memory. They just point the virtual memory pages for both the original process and the new one to the same physical pages.
This greatly reduces the cost of a fork(), in terms of reduced copies and reduced memory usage.
The downside is that whenever either the fork()ed process or the original process write to a page the write raises an exception (because the physical pages are marked read-only) and the page is copied after all.
Fortunately it turns out this doesn't happen all that often.
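Here is a minimal sketch of the fork()-then-exec() pattern this answer describes, assuming Linux/Unix semantics; the large allocation is only there to make the point that copy-on-write spares the kernel from duplicating it before the child throws the whole address space away with exec():

    /* Minimal fork()-then-exec() sketch: the child touches almost none of
     * the parent's memory before replacing itself with a new program, so
     * eagerly copying the parent's pages would be wasted work. */
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void) {
        /* Pretend the parent owns a large heap allocation; with
         * copy-on-write, fork() does not duplicate these pages. */
        char *big = malloc(100 * 1024 * 1024);
        if (!big) return 1;
        big[0] = 'x';                       /* make at least one page resident */

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: shares the parent's pages read-only until it writes;
             * exec() immediately discards the whole address space anyway. */
            execlp("echo", "echo", "hello from the child", (char *)NULL);
            _exit(127);                     /* only reached if exec fails */
        }
        waitpid(pid, NULL, 0);              /* parent waits for the child */
        free(big);
        return 0;
    }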
Pages may be shared by fork() when two processes, at the same or different virtual address pages, share the same physical memory frame; they then have the same frame number entry in their page tables.

Thread context switch Vs. process context switch

Could anyone tell me what exactly is done in each situation? What is the main cost of each?
The main distinction between a thread switch and a process switch is that during a thread switch, the virtual memory space remains the same, while it does not during a process switch.
Both types involve handing control over to the operating system kernel to perform the context switch. The process of switching in and out of the OS kernel along with the cost of switching out the registers is the largest fixed cost of performing a context switch.
A fuzzier cost is that a context switch messes with the processor's caching mechanisms. Basically, when you context switch, all of the memory addresses that the processor "remembers" in its cache effectively become useless. The one big distinction here is that when you change virtual memory spaces, the processor's Translation Lookaside Buffer (TLB) or equivalent gets flushed, making memory accesses much more expensive for a while. This does not happen during a thread switch.
Process context switching involves switching the memory address space. This includes memory addresses, mappings, page tables, and kernel resources—a relatively expensive operation. On some architectures, it even means flushing various processor caches that aren't sharable across address spaces. For example, x86 has to flush the TLB and some ARM processors have to flush the entirety of the L1 cache!
Thread switching is context switching from one thread to another in the same process (switching from a thread in one process to a thread in another is just process switching). Switching processor state (such as the program counter and register contents) is generally very efficient.
First of all, the operating system brings the outgoing thread into kernel mode if it is not already there, because a thread switch can only be performed between threads that run in kernel mode. Then the scheduler is invoked to decide which thread to switch to. After the decision is made, the kernel saves the part of the thread context that is located in the CPU (the CPU registers) into a dedicated place in memory (frequently on top of the kernel stack of the outgoing thread). Then the kernel switches from the kernel stack of the outgoing thread to the kernel stack of the incoming thread. After that, the kernel loads the previously stored context of the incoming thread from memory into the CPU registers. Finally it returns control to user mode, but to the user mode of the new thread.
In the case where the OS determines that the incoming thread runs in another process, the kernel performs one additional step: it sets a new active virtual address space.
The main cost in both scenarios is related to cache pollution. In most cases, the working set used by the outgoing thread will differ significantly from the working set used by the incoming thread. As a result, the incoming thread will start its life with an avalanche of cache misses, flushing old and useless data from the caches and loading new data from memory. The same is true for the TLB (Translation Lookaside Buffer, which is on the CPU). In the case of a reset of the virtual address space (when the threads run in different processes) the penalty is even worse, because resetting the virtual address space flushes the entire TLB, even if the new thread actually needs to load only a few new entries. As a result, the new thread will start its time quantum with lots of TLB misses and frequent page walking. The direct cost of a thread switch is also not negligible (from ~250 up to ~1500-2000 cycles) and depends on the CPU complexity, the states of both threads, and the sets of registers they actually use.
P.S.: Good post about context switch overhead: http://blog.tsunanet.net/2010/11/how-long-does-it-take-to-make-context.html
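If you want to get a feel for these numbers on your own machine, the linked post bounces a byte between two processes over a pair of pipes; here is a rough, self-contained sketch of that idea (all names and the round count are just for the demo). The measured time includes pipe and syscall overhead, so treat it as an upper bound, and pin both processes to one CPU (e.g. with taskset) if you want to force real context switches rather than cross-core wakeups:

    /* Rough pipe ping-pong sketch: each round trip forces the two
     * processes to alternate, so the per-round time approximates the cost
     * of a pair of context switches plus pipe overhead. */
    #include <stdio.h>
    #include <unistd.h>
    #include <time.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    #define ROUNDS 100000

    int main(void) {
        int p1[2], p2[2];
        char b = 0;
        if (pipe(p1) || pipe(p2)) return 1;

        pid_t pid = fork();
        if (pid == 0) {                        /* child: echo every byte back */
            for (int i = 0; i < ROUNDS; i++) {
                if (read(p1[0], &b, 1) != 1) _exit(1);
                if (write(p2[1], &b, 1) != 1) _exit(1);
            }
            _exit(0);
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ROUNDS; i++) {     /* parent: send, then wait */
            if (write(p1[1], &b, 1) != 1 || read(p2[0], &b, 1) != 1) return 1;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        waitpid(pid, NULL, 0);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("~%.0f ns per round trip (two switches + pipe overhead)\n",
               ns / ROUNDS);
        return 0;
    }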
Process switching: a transition between two memory-resident processes in a multiprogramming environment.
Context switching: a change of context from an executing program to an interrupt service routine (ISR).
In a thread context switch, the virtual memory space remains the same, while it does not in a process context switch. Also, a process context switch is costlier than a thread context switch.
I think the main difference is in the call to switch_mm(), which handles the memory descriptors of the old and new task. In the case of threads, the virtual memory address space is unchanged (threads share the virtual memory), so very little has to be done, and it is therefore less costly.
Though thread context switching needs to change the execution context (registers, stack pointers, program counters), it doesn't need to change the address space as a process context switch does. There is an additional cost when you switch address spaces: more memory accesses (paging, segmentation, etc.), and you have to flush the TLB when entering or exiting a new process.
In short, a thread context switch does not involve a brand new address space or PID; the thread uses the same ones as its parent process since it runs within that process. A process switch, by contrast, moves to a process that has its own memory and PID.
There is a loooooot more to it. They have written books on it.
As for cost, a process context switch costs far more than a thread switch, as you have to reset all of the stack pointers, counters, etc.
Assuming that the CPU the OS runs on has some high-latency devices attached, it makes sense to run another thread of the same process's address space while the high-latency device responds.
But if the high-latency device responds faster than the time needed to set up the page tables and virtual-to-physical translations for a NEW process, then it is questionable whether a switch is essential at all.
Also, a hot cache (where the data needed for running the process/thread is reachable in less time) is the better choice.

Resources