I am playing around with kernel modules and experimenting with Linux fs objects and APIs. Sometimes I forget to release a lock on an object, which makes some tasks freeze and drives up CPU load because the spinlock is held for a long time. Is there an easy way to just tell the kernel to release the lock, without rebooting the machine every time?
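For reference, the usual discipline in module code is to pair every lock with an unlock on every exit path and to keep spinlock-protected sections as short as possible. A minimal sketch of that pattern (the lock name and the function are made up for illustration, not taken from any real driver):

```c
#include <linux/spinlock.h>

/* Hypothetical lock protecting some shared state. */
static DEFINE_SPINLOCK(my_lock);

static void update_shared_state(void)
{
    unsigned long flags;

    spin_lock_irqsave(&my_lock, flags);
    /* ... touch the shared state; keep this short and never sleep here ... */
    spin_unlock_irqrestore(&my_lock, flags);   /* every exit path must unlock */
}
```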
I understand that at startup time the kernel needs to load device drivers to initialize things such as the CPU clock. But at that point the kernel has not been completely initialized yet. So can we use a mutex at this time (given that device objects use a mutex as their protection mechanism)? When does the mutex become available to use?
For this, you need a quick look at the Linux kernel initialisation process.
The kernel is kicked off by a single process, running on a single core.
It detects the number of available CPUs and some other hardware details, and configures the scheduler. It then starts the scheduler.
Any driver loading happens only after this point.
In fact, drivers are loaded well after the scheduler has been started up.
Some great insights into the topic of Linux initialisation:
Linux inside.
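To connect this back to the question about mutexes: a mutex can put the caller to sleep, which only makes sense once the scheduler is running, so ordinary driver code that runs after this point can use one freely. A minimal sketch under that assumption (the names here are hypothetical):

```c
#include <linux/mutex.h>

/* Hypothetical lock protecting a device's state. */
static DEFINE_MUTEX(dev_lock);

static int my_driver_setup(void)
{
    /*
     * By the time normal driver/probe code runs, the scheduler is
     * already up, so taking a sleeping lock such as a mutex is safe.
     */
    mutex_lock(&dev_lock);
    /* ... configure the device ... */
    mutex_unlock(&dev_lock);
    return 0;
}
```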
My question relates to embedded Linux.
I just observed a strange reboot on my embedded project, which is very easy to reproduce.
When a certain condition is triggered, the system appears to "freeze", as if it has hit an infinite loop or a deadlock. After several seconds, the system quietly reboots. Not even a core dump!
I don't have much of a clue about the cause. In general, can a lock or an infinite loop really trigger a Linux reboot? Or is there anything else that can freeze the system and cause a reboot without a core dump being produced?
It is common on embedded systems to have a hardware watchdog: a timer implemented in hardware that resets the processor if it is allowed to expire.
Typically some software monitoring task continuously verifies the integrity of the system and restarts the hardware watchdog timer. If the monitoring task fails to run and the watchdog timer expires, the watchdog triggers a processor reset directly.
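To make that concrete, here is a minimal sketch of such a monitoring task in user space, assuming the standard Linux /dev/watchdog interface and an arbitrary 10-second kick interval (both are assumptions about the particular board):

```c
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* The kick interval must be shorter than the hardware timeout. */
    int fd = open("/dev/watchdog", O_WRONLY);
    if (fd < 0)
        return 1;

    for (;;) {
        /* Any write "kicks" the watchdog and restarts the hardware timer. */
        (void)write(fd, "k", 1);
        sleep(10);
    }
}
```

If this task is ever blocked, killed, or starved for longer than the hardware timeout, the board resets on its own, with no panic message and no core dump, which matches the behaviour described in the question.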
Your question is a bit hard to understand, but yes, an "infinite loop" (the proper term) in any application on any platform (including Linux) can crash a system. This happens because an infinite loop can keep consuming memory and resources until there is nothing left. You mentioned you are doing embedded development (which can mean many different things), but that usually means you are developing low-level applications built into Linux itself; these are more prone to crashing an OS than your average programming venture.
I have read that the Linux kernel is multithreaded and that multiple threads can run concurrently on each core. In an SMP (symmetric multiprocessing) environment, where a single OS manages all the processors/cores, how is multithreading implemented?
Are kernel threads spawned, each dedicated to managing one core? If so, when are these kernel threads created? Is it during boot-up in kern_init(), after bootstrapping is complete and immediately after the application processors are enabled by the bootstrap processor?
So does each core have its own scheduler (implemented by that core's kernel thread) that manages tasks from a common pool shared by all kernel threads?
How does (direct) messaging between kernel threads residing on different cores happen when they need to signal events that another kernel thread might be interested in?
I also wondered whether one particular core runs a single kernel scheduler that, on every system timer interrupt, acquires a big kernel lock and decides/schedules what to run on each core.
I would appreciate any clarity on the implementation details. Thanks in advance for your help.
Early in kernel startup, an idle thread is started for each core. It is set to the lowest possible priority and generally does nothing but put the CPU into a low-power state and wait for an interrupt. When actual work needs to get done, it's done either by threads other than these idle threads or by hardware interrupts, which interrupt either the idle thread or some other thread.
The scheduler is typically invoked either by a timer interrupt or by a thread transitioning from running to a state in which it's no longer ready to run. Kernel calls that transition a thread to a state in which it's no longer ready to run typically invoke the scheduler to let the core perform some other task.
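As a small illustration of the second case (a thread giving up the CPU voluntarily), here is a sketch of a trivial, hypothetical kernel-module thread that sleeps between work items; the sleep takes it out of the runnable state and invokes the scheduler, so the core can run something else or drop back into its idle thread:

```c
#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/module.h>

static struct task_struct *worker;

static int worker_fn(void *data)
{
    while (!kthread_should_stop()) {
        /* ... do a slice of work ... */

        /* Sleeping leaves the runnable state and invokes the scheduler. */
        msleep(100);
    }
    return 0;
}

static int __init demo_init(void)
{
    worker = kthread_run(worker_fn, NULL, "demo_worker");
    return PTR_ERR_OR_ZERO(worker);
}

static void __exit demo_exit(void)
{
    kthread_stop(worker);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```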
Wikipedia says:
A kernel thread is the "lightest" unit of kernel scheduling. At least one kernel thread exists within each process.
I've learned that a process is a container that houses memory space, file handles, device handles, system resources, etc., and that the thread is what actually gets scheduled by the kernel.
So in a single-threaded application, is that one thread (the main thread, I believe) a kernel thread?
I assume you are talking about this article:
http://en.wikipedia.org/wiki/Kernel_thread
According to that article, in a single-threaded application, since you have only one thread by definition, it has to be a kernel thread; otherwise it would not get scheduled and would not run.
If you had more than one thread in your application, then it would depend on how user-mode multithreading is implemented (kernel threads, fibers, etc.).
It's important to note, however, that it would be a kernel thread running in user mode when executing the application code (unless you make a system call). Any attempt to execute a privileged instruction while running in user mode causes a fault that will eventually lead to the process being terminated.
So "kernel thread" here is not to be confused with supervisor/privileged mode and kernel code.
You can execute kernel code, but you have to go through a system call gate first.
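A minimal user-space sketch of going through that gate: the syscall() wrapper traps into the kernel, the kernel runs getpid on the caller's behalf in privileged mode, and control returns to user mode with the result (using the raw syscall number is just for illustration; the getpid() wrapper does the same thing):

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    /* Trap into the kernel through the system call interface. */
    long pid = syscall(SYS_getpid);

    printf("pid returned by the kernel: %ld\n", pid);
    return 0;
}
```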
No. In modern operating systems, applications and the kernel run at different processor protection levels (often called rings). For example, Intel CPUs have four protection levels. Kernel code runs at Ring 0 (kernel mode) and is able to execute the most privileged processor instructions, whereas application code runs at Ring 3 (user mode) and is not allowed to execute certain operations. See http://en.wikipedia.org/wiki/Ring_(computer_security)
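To see the restriction in action, here is a small sketch assuming an x86 Linux machine: hlt is a Ring 0 only instruction, so executing it from Ring 3 raises a general-protection fault, which the kernel delivers to the process as SIGSEGV:

```c
#include <stdio.h>

int main(void)
{
    printf("about to execute a Ring 0 instruction from Ring 3...\n");

    /* Privileged instruction: faults in user mode, killing the process. */
    __asm__ volatile("hlt");

    printf("never reached\n");
    return 0;
}
```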
Let's say there are two processors on a machine. Thread A is running on P1 and Thread B is running on P2.
Thread A calls Sleep(10000);
Is it possible that when Thread A starts executing again, it runs on P2?
If yes, who decides this transition? If no, why not?
Does the processor store data about which threads it has been running, or does the OS bind each thread to a processor for its whole lifetime?
It is possible. This would be determined by the operating system process scheduler and may also be dependent on the application that is running. No information about previously running threads is kept by the processor, aside from whatever is in the cache.
This depends on many things and behaves differently depending on the particular operating system. See also: Processor Affinity and Scheduling Algorithms. Under Windows you can pin a particular process to a processor core via Task Manager.
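On Linux, the rough equivalent of pinning via Task Manager is taskset(1) or, programmatically, sched_setaffinity(); a minimal sketch (pinning the calling process to CPU 1 is an arbitrary choice for illustration):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(1, &set);           /* allow only CPU 1 */

    /* pid 0 means "the calling thread". */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("now restricted to CPU 1\n");
    return 0;
}
```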
Yes, it is possible, though ultimately a thread inherits its CPU (or CPU core) from its process (executable). Which CPU or CPU core a process runs on for its current quantum (time slice) is decided by the scheduler:
http://en.wikipedia.org/wiki/Scheduling_(computing)
-Oisin
The OS decides which processor to run the thread on, and that may easily change during the lifetime of the thread, especially if there is a context switch (caused by the sleep). If the system is loaded, it's entirely possible that both threads will run on the same processor (or core), just at different times. Or, if there isn't any load on the system, both threads may continue to run on separate processors.
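One way to observe this, sketched under the assumption of a Linux system (sched_getcpu() is glibc-specific; on Windows you would use GetCurrentProcessorNumber() instead): print the current CPU before and after the sleep. The two values may or may not differ, depending on load and on the scheduler's decisions:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("before sleep: running on CPU %d\n", sched_getcpu());

    sleep(10);   /* roughly the Sleep(10000) from the question */

    printf("after sleep:  running on CPU %d\n", sched_getcpu());
    return 0;
}
```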