In Linux, in a KVM environment, when a process in a VM takes a lock on some resource and is preempted, other processes in the VM that need the locked resource spend time spinning on the spinlock. The process only releases the resource once it is allotted the processor again.
I would like to stop the scheduler from preempting a process until it releases the resource, which would reduce the CPU time wasted on the spinlock.
How can this be achieved? That is:
How do I find out whether a process in a VM holds a lock on some resource?
Then, how do I tell the scheduler not to preempt that process until the resource is released?
Correct me if I am wrong anywhere.
Thanks in advance.
Use the spin_lock_irqsave() call. It disables interrupts and preemption and takes a spinlock atomically.
See http://www.kernel.org/doc/Documentation/spinlocks.txt for use cases.
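A minimal sketch of what that looks like in kernel code, assuming a hypothetical lock and critical section:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(my_lock);    /* hypothetical lock protecting some resource */

    static void touch_resource(void)
    {
            unsigned long flags;

            /* Takes the lock and disables interrupts and preemption on this
             * CPU atomically, so the lock holder cannot be preempted here. */
            spin_lock_irqsave(&my_lock, flags);

            /* ... critical section: work on the shared resource ... */

            /* Releases the lock and restores the saved interrupt state. */
            spin_unlock_irqrestore(&my_lock, flags);
    }

Note that this applies to kernel code; for it to help inside a VM, the locking code in the guest kernel would have to use it.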
Could the scheduler automatically migrate a thread whose affinity has been set through sched_setaffinity to a very busy CPU over to a free CPU?
Does sched_setaffinity implement a "hard affinity" or a "soft affinity"?
What I call "soft affinity" means telling the scheduler that the thread prefers to run on a particular CPU, but can be migrated to another CPU if necessary.
What I call "hard affinity" means telling the scheduler that the thread must run on a particular CPU, no matter how long the thread has to wait for that CPU's resources.
I clearly remember that there are APIs which provide both "soft affinity" and "hard affinity" under Windows. Is there any API that provides "soft affinity" under Linux?
No. If a process has affinity to one CPU only, it will only run on that CPU no matter what. In other words, this fits your definition of "hard affinity".
This feature can of course be a double-edged sword if used incorrectly: setting the affinity of a task to a single CPU is a great benefit if the CPU is dedicated to that task only, but degrades performance if the CPU is shared and somehow comes under heavy load.
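A minimal sketch of setting this "hard affinity" from user space with sched_setaffinity (the CPU number here is an arbitrary example):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(2, &set);   /* allow CPU 2 only; an arbitrary choice */

            /* pid 0 means the calling thread; once this succeeds, the
             * thread runs only on CPU 2, no matter how busy it gets. */
            if (sched_setaffinity(0, sizeof(set), &set) == -1) {
                    perror("sched_setaffinity");
                    return 1;
            }
            return 0;
    }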
I'm a kernel noob, including when it comes to schedulers. I understand that there is an I/O scheduler and a task scheduler, and according to this post the I/O scheduler uses normal tasks that are handled by the task scheduler in the end.
So if I run a user-space thread that was assigned to an isolated core (using isolcpus) and it does some I/O operation, will the task created by the I/O scheduler get executed on the isolated core?
Since CFS seems to favor user interaction, does this mean that CPU-intensive threads might get less CPU time in the long run? Can isolating cores help mitigate this issue?
Can isolating cores decrease the scheduling latency (the time it takes for a thread that was marked as runnable to get executed) for the threads that are pinned to the isolated cores?
So if I run a user-space thread that was assigned to an isolated core (using isolcpus) and it does some I/O operation, will the task created by the I/O scheduler get executed on the isolated core?
What isolcpus does is take that particular core out of the kernel's list of CPUs on which it can schedule tasks. So once you isolate a CPU from the kernel's list of CPUs, the kernel will never schedule any task on that core, no matter whether the core is idle or is being used by some other process/thread; only tasks explicitly pinned to it will run there.
Since CFS seems to favor user interaction, does this mean that CPU-intensive threads might get less CPU time in the long run? Can isolating cores help mitigate this issue?
Isolating CPUs has a different use altogether, in my opinion. Basically, if your application has both fast threads (threads with no system calls, which are latency sensitive) and slow threads (threads with system calls), you would want dedicated CPU cores for your fast threads so that they are not interrupted by the kernel's scheduling and can run to completion without any noise. Fast threads are usually latency sensitive. On the other hand, slow threads, or threads which are not really latency sensitive and do supporting logic for your application, need not have dedicated CPU cores. As mentioned earlier, isolating CPUs serves this different purpose; we do this all the time in our organization.
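A minimal sketch of that pattern, assuming the kernel was booted with isolcpus=2 and with a made-up fast-thread function:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    /* Hypothetical latency-sensitive loop: no system calls inside. */
    static void *fast_thread(void *arg)
    {
            for (;;) {
                    /* ... poll, compute, respond ... */
            }
            return NULL;
    }

    int main(void)
    {
            pthread_t t;
            cpu_set_t set;

            pthread_create(&t, NULL, fast_thread, NULL);

            /* Pin the fast thread to CPU 2; isolcpus=2 removed that core
             * from general scheduling, so nothing else is placed there. */
            CPU_ZERO(&set);
            CPU_SET(2, &set);
            pthread_setaffinity_np(t, sizeof(set), &set);

            pthread_join(t, NULL);   /* never returns in this sketch */
            return 0;
    }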
Can isolating cores decrease the scheduling latency (the time it takes for a thread that was marked as runnable to get executed) for the threads that are pinned to the isolated cores?
Since you are taking CPUs out of the kernel's list of CPUs, this will surely impact other threads and processes. But then again, you would want to give extra thought and attention to what your latency-sensitive code really is, and you would want to separate it from your non-latency-sensitive code.
Hope it helps.
I recently stumbled upon the question above but I am not sure if I understand what it is asking.
How would one avoid the use of scheduling policies?
I would think that there isn't any other way...
Scheduling policy has nothing to do with resource allocation! Processes are scheduled, and are allocated resources as needed.
From the "Resource allocation (computer)" description on Wikipedia:
When the user opens any program this will be counted as a process, and therefore requires the computer to allocate certain resources for it to be able to run. Such resources could include access to a section of the computer's memory, data in a device interface buffer, one or more files, or the required amount of processing power.
I don't know how you got confused between them. All processes will, at one time or another, get scheduled, unless the scheduler is an unfair one.
EDIT :
How would one avoid the use of scheduling policies?
If there is more than one user process to be executed, then one has to apply a scheduling policy so that the processes get executed in some order. There has to be a queue to hold all the processes. See the different case of BareMetal OS below.
Then there is BareMetal OS, which is a single-address-space OS.
Multitasking on BareMetal is unusual for operating systems in this day
and age. BareMetal uses an internal work queue that all CPU cores
poll. A task added to the work queue will be processed by any
available CPU core in the system and will execute until completion,
which results in no context switch overhead.
So BareMetal OS doesn't use any scheduling policy; it is based on the cores polling the work queue.
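A rough user-space analogy of that run-to-completion work-queue model, with pthreads standing in for CPU cores (the queue layout and task type are made up for illustration; this is not BareMetal's actual code):

    #include <pthread.h>
    #include <stdio.h>

    #define NCORES 4
    #define NTASKS 16

    typedef void (*task_fn)(int);

    static task_fn queue[NTASKS];   /* fixed work queue of tasks */
    static int next_task;           /* index of the next unclaimed task */
    static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

    static void work(int id) { printf("task %d done\n", id); }

    /* Each "core" polls the queue and runs every task it claims to
     * completion: no preemption, no context-switch overhead. */
    static void *core(void *arg)
    {
            for (;;) {
                    int id;

                    pthread_mutex_lock(&qlock);
                    id = next_task < NTASKS ? next_task++ : -1;
                    pthread_mutex_unlock(&qlock);

                    if (id < 0)
                            return NULL;   /* queue drained */
                    queue[id](id);         /* run to completion */
            }
    }

    int main(void)
    {
            pthread_t cores[NCORES];

            for (int i = 0; i < NTASKS; i++)
                    queue[i] = work;
            for (int i = 0; i < NCORES; i++)
                    pthread_create(&cores[i], NULL, core, NULL);
            for (int i = 0; i < NCORES; i++)
                    pthread_join(cores[i], NULL);
            return 0;
    }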
Like in process management and memory management.
Are the scheduler and memory manager implemented as kernel threads that run on the CPU the moment they are needed? If not, how does the kernel treat them?
Are they like processes, tasks, or some line of code that gets executed when needed?
Some are, some aren't. The terms "process management" and "memory management" are kind of broad and cover a fair bit of kernel code.
For memory management, a call to mmap() will just require changing some data structures and can be done by the current thread, but if pages need to be swapped out, that is done by kswapd, which is a kernel thread.
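As a sketch of the first case: an anonymous mmap() returns after the kernel has merely updated the caller's address-space bookkeeping, and even the physical pages show up later, on first touch, still in the context of the current thread:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 1 << 20;   /* 1 MiB */

            /* The kernel only records the new mapping in this process's
             * address space; no kernel thread is involved. */
            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            /* The first write faults the pages in, again in the context
             * of the current thread. */
            memset(p, 0, len);

            munmap(p, len);
            return 0;
    }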
You might consider the scheduler a special case: since the scheduler is responsible for scheduling all threads, it itself is not a thread and does not execute on any thread (otherwise it would need to schedule itself... but how would it schedule itself, if it had to schedule itself first in order to do that?). You might think of the scheduler as running directly on each processor core when necessary.
I understand the mechanism of mapping user threads onto kernel threads at the thread level; now I'd like to understand the mechanism at the process level.
A user thread can access the resources of its parent process. When the user thread is mapped onto a kernel thread, what happens to the user process's resources? And more:
We're talking about "kernel threads", and threads of the same process share the resources of that process. But the kernel threads have to work on different resources (the specific resources of the user process corresponding to the user thread they're mapping). So does each kernel thread belong to a different "kernel process" that inherits the resources of the user process?
Sorry for my bad English, I hope you can understand.
From what I understand,
A thread is created at the kernel level; then, for user mode, it does a mode switch and the thread runs in user mode. Now it can access its resources in user mode.
When a thread is running in kernel mode, it can still access its user-mode resources.
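A small sketch of the first point on Linux, where pthread_create asks the kernel to create the thread and the new thread then runs in user mode with full access to the process's resources, such as this shared variable:

    #include <pthread.h>
    #include <stdio.h>

    static int shared;   /* a process resource, visible to every thread */

    static void *worker(void *arg)
    {
            shared = 42;   /* runs in user mode, touches process memory */
            return NULL;
    }

    int main(void)
    {
            pthread_t t;

            /* The kernel creates the thread on our behalf; it then
             * executes worker() in user mode. */
            pthread_create(&t, NULL, worker, NULL);
            pthread_join(t, NULL);
            printf("shared = %d\n", shared);   /* prints 42 */
            return 0;
    }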
You should check out these videos, which explain how a thread is created and what the difference between user-mode and kernel-mode threads is.
http://academicearth.org/courses/operating-systems-and-system-programming
Then there are also threads that run only in kernel mode and cannot be accessed by a user-mode process.
I hope this helps.