Kernel threads VS CPU threads - multithreading

Is it safe to say CPU core threads and Kernel threads are the same?
I know kernel threads are created by the operating system in kernel mode. This question arises from the fact that both CPU core threads and kernel threads run in privileged environments, so I thought maybe they are the same thing.
Is there a case where I can have more Kernel threads than CPU threads?
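A rough sketch of that case (assuming Linux, where each pthread is backed by its own kernel-schedulable thread) is to simply create more kernel threads than the machine has CPU threads:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NTHREADS 64  /* arbitrary count, typically larger than the CPU count */

    static void *worker(void *arg)
    {
        (void)arg;
        sleep(1);  /* each thread just idles briefly */
        return NULL;
    }

    int main(void)
    {
        pthread_t tids[NTHREADS];
        long cpus = sysconf(_SC_NPROCESSORS_ONLN);

        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&tids[i], NULL, worker, NULL);

        printf("%d threads created on %ld online CPU threads\n", NTHREADS, cpus);

        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tids[i], NULL);
        return 0;
    }

Compiled with gcc -pthread, this runs 64 schedulable threads even on a machine with only a few CPU threads; the kernel simply time-slices them.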

Related

How to avoid the impact of kernel threads and timers on the process?

The operating system runs multiple processes at the same time, and events such as peripheral interrupts, clock interrupts, and process scheduling occur while they run.
The Linux operating system can use "isolcpus" to isolate CPUs and ensure that the isolated CPUs are not used by the general scheduler. But kernel threads and timers are not restricted by it. How can we limit the impact of kernel threads and timers on the process? Modifying the kernel is acceptable.
Thank you.
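For context, a minimal sketch (assuming Linux and glibc) of the usual companion step: pinning a process onto a CPU that was isolated with isolcpus. The choice of CPU 3 is just an illustrative assumption:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Assume CPU 3 was removed from general scheduling with isolcpus=3
           on the kernel command line; this is an illustrative choice. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(3, &set);

        /* Bind this process (pid 0 = calling process) to the isolated CPU. */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return EXIT_FAILURE;
        }

        printf("Pinned to CPU %d\n", sched_getcpu());
        /* ... latency-sensitive work runs here ... */
        return EXIT_SUCCESS;
    }

This only keeps ordinary tasks off the isolated CPU; per-CPU kernel threads and timers, as the question notes, need further measures.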

The distribution of processing threads on the processor cores

The operating system distributes the processing of threads across the processor cores on its own. Suppose a program has two threads. Initially, neither thread is loaded with work and both are handled by one core. Later, they are loaded with work. Will the operating system transfer the processing of one thread to another processor core?
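A small sketch (assuming Linux with glibc, which provides sched_getcpu) that lets you watch where the two threads run; the scheduler is free to migrate them between reports:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Each thread periodically reports the CPU it is currently executing on. */
    static void *report(void *name)
    {
        for (int i = 0; i < 5; i++) {
            printf("%s running on CPU %d\n", (const char *)name, sched_getcpu());
            sleep(1);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, report, "thread A");
        pthread_create(&b, NULL, report, "thread B");
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }

Under load, the scheduler may move either thread to a different core between iterations, which is exactly the migration the question asks about.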

How does multithreaded kernel work?

I have read that the Linux kernel is multithreaded and that there can be multiple threads running concurrently on each core. In an SMP (symmetric multiprocessing) environment where a single OS manages all the processors/cores, how is multithreading implemented?
Is it that kernel threads are spawned, each dedicated to managing a core? If so, when are these kernel threads created? Is it during boot-up at kern_init(), after bootstrapping is complete and immediately after the application processors are enabled by the bootstrap processor?
So does each core have its own scheduler (implemented by the core's kernel thread) that manages tasks from a common pool shared by all kernel threads?
How does (direct) messaging between kernel threads residing on different cores happen when they need to notify another kernel thread of events it might be interested in?
I also wondered whether one particular core runs a single kernel scheduler that, on every system timer interrupt, acquires a big kernel lock and decides what to schedule on each core.
I would appreciate any clarity on the implementation details. Thanks in advance for your help.
Early in kernel startup, a thread is started for each core. It is set to the lowest possible priority and generally does nothing but put the CPU into a low-power state and wait for an interrupt. When actual work needs to get done, it's either done by threads other than these idle threads or by hardware interrupts, which interrupt either this thread or some other thread.
The scheduler is typically invoked either by a timer interrupt or by a thread transitioning from running to a state in which it's no longer ready to run. Kernel calls that transition a thread to a state in which it's no longer ready to run typically invoke the scheduler to let the core perform some other task.
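As a conceptual sketch of that idea (not the actual Linux code), the per-core idle thread amounts to a loop like the one below; need_resched(), cpu_halt(), and schedule() are placeholder names standing in for whatever primitives the real kernel provides:

    /* Conceptual per-core idle thread; need_resched(), cpu_halt(), and
       schedule() are placeholders for the kernel's real primitives. */
    static void idle_thread(void)
    {
        for (;;) {
            while (!need_resched())
                cpu_halt();      /* drop into a low-power state until an interrupt */
            schedule();          /* an interrupt made other work runnable; switch to it */
        }
    }

When a timer or device interrupt marks some other task runnable, the idle thread falls out of its halt loop and hands the core to the scheduler.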

Hardware thread vs soft threads?

I have read that in a multi-core processor each core contains 2 hardware threads; for example, in a dual-core processor 4 hardware threads are running. Now if I create 2 threads in Java, are those threads going to map to 2 hardware threads, or are those 2 Java threads executed by a single hardware thread of a particular core?
That depends on a lot of things; however, the 2 hardware threads per core you are referring to come from Intel's Hyper-Threading technology. It lets each core hold two thread contexts and execute them simultaneously, sharing the core's execution resources.
Which threads run where is OS-implementation dependent and is mostly decided by your OS's thread scheduler algorithm.
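As a small illustration (assuming a POSIX system), you can query how many hardware threads the OS exposes as schedulable CPUs; this is the pool the thread scheduler maps your 2 Java threads onto:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Number of logical CPUs currently online: cores x hardware threads
           per core (e.g. 4 on a dual-core CPU with Hyper-Threading). */
        long hw_threads = sysconf(_SC_NPROCESSORS_ONLN);
        printf("Online hardware threads: %ld\n", hw_threads);
        return 0;
    }
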

Threads and CPU Affinity

Let's say there are two processors on a machine. Thread A is running on P1 and Thread B is running on P2.
Thread A calls Sleep(10000);
Is it possible that when Thread A starts executing again, it runs on P2?
If yes, who decides this transition? If no, why not?
Does the processor store some record of which threads it is running, or does the OS bind each thread to a processor for its full lifetime?
It is possible. This would be determined by the operating system process scheduler and may also be dependent on the application that is running. No information about previously running threads is kept by the processor, aside from whatever is in the cache.
This depends on many things and behaves differently depending on the particular operating system. See also: Processor Affinity and Scheduling Algorithms. Under Windows you can pin a particular process to a processor core via the Task Manager.
Yes, it is possible, though ultimately a thread inherits its CPU (or CPU core) from the process (executable). In operating systems, which CPU or CPU core a process runs on for its current quantum (time slice) is decided by the Scheduler:
http://en.wikipedia.org/wiki/Scheduling_(computing)
-Oisin
The OS decides which processor to run the thread on, and it may easily change during the lifetime of that thread, especially if there is a context switch (caused by the sleep). It's completely possible if the system is loaded that both threads will be running on the same processor (or core), just at different times. Or if there isn't any load on the system, both threads may continue to run on separate processors.
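If you do not want the scheduler to move a thread between processors, you can pin it explicitly. A minimal sketch, assuming Linux, where pthread_setaffinity_np is available as a GNU extension:

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        /* Restrict the calling thread to CPU 1 only; the scheduler will not
           move it to another core while this affinity mask is in effect. */
        CPU_ZERO(&set);
        CPU_SET(1, &set);

        int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (err != 0) {
            fprintf(stderr, "pthread_setaffinity_np: error %d\n", err);
            return 1;
        }

        printf("now pinned to CPU %d\n", sched_getcpu());
        return 0;
    }

Compile with gcc -pthread. Without such a pin, the answers above apply: after the sleep, the scheduler may resume the thread on whichever processor is convenient.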

Resources