How does the JVM handle thread scheduling? - multithreading

May I know how the JVM handles thread scheduling? How does the JVM handle unused threads, i.e. threads that are not in the running state? Are too many unused threads a burden to the JVM?

This is rather an open-ended question but I'll give you a quick synopsis.
Java has always had the concept of threads in the language. Sometimes these map directly to OS threads but, where an OS did not support them well (as was the case on some early platforms), the JVM could implement its own threading layer, known as green threads. Today you can ignore green threads, so the lowest-level scheduling of JVM threads is handled by the OS.
However, all Java threads have a priority associated with them which allows you to signal to the JVM and OS which threads you consider to be more important than others in the context of scheduling. This is a property of the Thread class and is accessed with getPriority/setPriority. There are rules about how this value can be changed according to the security policy and the maximum thread priority. Changing thread priorities to implement specific application behaviour is not recommended due to the different ways platforms implement thread scheduling.
Java and the JVM make no guarantees about when threads will be scheduled. When there is competition for processing resources, threads with higher priority are generally executed in preference to threads with lower priority. Importantly, this is not a guarantee that the highest priority thread will always be running. Thread priorities should not be used (or relied on) to implement mutual exclusion.
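To make the priority API concrete, here is a minimal sketch (the class name is invented for illustration). Note that, per the caveat above, the interleaving of the output depends entirely on the platform scheduler:

    // Minimal sketch of the Thread priority API. The actual scheduling
    // effect of these values is platform-dependent and not guaranteed.
    public class PriorityDemo {
        public static void main(String[] args) {
            Runnable task = () -> {
                for (int i = 0; i < 3; i++) {
                    System.out.println(Thread.currentThread().getName()
                            + " (priority " + Thread.currentThread().getPriority() + ")");
                }
            };
            Thread low = new Thread(task, "low");
            Thread high = new Thread(task, "high");
            low.setPriority(Thread.MIN_PRIORITY);   // 1
            high.setPriority(Thread.MAX_PRIORITY);  // 10
            low.start();
            high.start();
            // The high-priority thread is NOT guaranteed to print first.
        }
    }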
Threads that have terminated and are no longer referenced will be garbage collected just like any other objects. Because threads are heavyweight objects that take time to create and remove, an application that intends to use lots of short-lived threads will typically implement some sort of thread pooling. Too many threads will place a burden on the JVM, more so than other objects, because of the link to the underlying OS threads.
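As a sketch of the pooling idea (class name invented for illustration), using the standard java.util.concurrent executors:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PoolDemo {
        public static void main(String[] args) {
            // Reuse 4 threads for 1000 short tasks instead of creating
            // 1000 heavyweight Thread objects.
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 1000; i++) {
                final int id = i;
                pool.submit(() -> System.out.println("task " + id + " on "
                        + Thread.currentThread().getName()));
            }
            pool.shutdown();   // accept no new tasks, finish the queued ones
        }
    }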
For more detailed information on Java threads and concurrency, I recommend reading Brian Goetz's excellent book, "Java Concurrency in Practice".

Related

What is the difference between threads and lightweight threads?

I don't quite understand the difference between threads and lightweight threads. From an API perspective both types of threads are identical, so where exactly does the difference come in? Is it at the implementation level, where a lightweight thread is managed by a higher-level runtime than the OS thread scheduler, or is it something else? Also, is there a set of heuristics that people use to decide which type of thread to use in specific scenarios?
In this context, lightweight threads could mean threads implemented by a library. For example, threads can be simulated by switching between lightweight threads at an event-handling layer; these lightweight threads are queued up and processed by a single OS thread. The advantage is that, since context switching is handled in the library, a switch can occur when the processing of some data is complete, so that data does not need to be loaded back into the CPU's cache the next time the lightweight thread becomes active.
Lightweight threads could also refer to cooperative threads (or fibers). These are threads where you have to explicitly yield to give other lightweight threads a chance, which has the same advantage: the context switch can occur at a place where you know you have finished processing some data, so you know it will not be needed again.
Alternatively, lightweight threads could mean normal OS threads, with the non-lightweight threads being processes. A process contains at least one thread and also has its own memory and other resources. Processes are more expensive than threads because you cannot share data between processes easily, and creating a process can be a more expensive operation for the OS.
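To make the first two flavours concrete, here is a toy sketch in Java (all names are invented for illustration): "lightweight threads" as small task steps that are queued up and driven by a single OS thread, with a switch happening only when a step completes:

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Toy illustration of library-level lightweight threads: each "thread"
    // runs one step at a time and re-enqueues itself until done. A single
    // OS thread drives all of them, so a "context switch" is just picking
    // the next task off the queue.
    public class GreenDemo {
        interface Step { boolean runStep(); } // returns false when finished

        public static void main(String[] args) {
            Queue<Step> runnable = new ArrayDeque<>();
            for (int t = 0; t < 3; t++) {
                final int id = t;
                final int[] counter = {0};
                runnable.add(() -> {
                    System.out.println("lightweight thread " + id
                            + " step " + counter[0]);
                    return ++counter[0] < 2; // yield after each step, stop after 2
                });
            }
            // The "scheduler": one OS thread switching cooperatively.
            while (!runnable.isEmpty()) {
                Step s = runnable.poll();
                if (s.runStep()) runnable.add(s);
            }
        }
    }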

erlang threading and OS threads correlation [duplicate]

Erlang is known for being able to support MANY lightweight processes; it can do this because these are not processes in the traditional sense, or even threads as in pthreads, but threads entirely in user space.
This is well and good (fantastic actually). But how then are Erlang threads executed in parallel in a multicore/multiprocessor environment? Surely they have to somehow be mapped to kernel threads in order to be executed on separate cores?
Assuming that that's the case, how is this done? Are many lightweight processes mapped to a single kernel thread?
Or is there another way around this problem?
The answer depends on the VM which is used:
1) non-SMP: There is one scheduler (an OS thread), which executes all Erlang processes, taken from the pool of runnable processes (i.e. those that are not blocked by e.g. receive).
2) SMP: There are K schedulers (OS threads, where K is usually the number of CPU cores), which execute Erlang processes from a shared process queue. It is a simple FIFO queue (with locks to allow simultaneous access from multiple OS threads).
3) SMP in R13B and newer: There will be K schedulers (as before) which execute Erlang processes from multiple process queues. Each scheduler has its own queue, so logic for migrating processes from one scheduler to another will be added. This solution improves performance by avoiding excessive locking on a shared process queue.
For more information see this document prepared by Kenneth Lundin, Ericsson AB, for Erlang User Conference, Stockholm, November 13, 2008.
I want to amend the previous answers.
Erlang, or rather the Erlang runtime system (erts), defaults the number of schedulers (OS threads) and the number of run queues to the number of processing elements on your platform, that is, processor cores or hardware threads. You can change these settings at runtime using:
erlang:system_flag(schedulers_online, NP) -> PrevNP
Erlang processes do not have any affinity to particular schedulers yet. The logic balancing the processes between the schedulers follows two rules. 1) A starving scheduler will steal work from another scheduler. 2) Migration paths are set up to push processes from schedulers with lots of processes to schedulers with less work. This is done to ensure fairness in reduction count (execution time) for each process.
Schedulers can, however, be locked to specific processing elements. This is not done by default. To let erts handle scheduler-to-core affinity, use:
erlang:system_flag(scheduler_bind_type, default_bind) -> PrevBind
Several other bind types can be found in the documentation. Using affinity can greatly improve performance in heavy-load situations, especially when there is high lock contention. Also, the Linux kernel does not handle hyperthreads well, to say the least. If you have hyperthreads on your platform, you should really use this feature in Erlang.
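For comparison only (this is Java, not how erts is implemented internally): the JVM's ForkJoinPool uses the same per-worker-queue-plus-work-stealing idea as the balancing rules described above. A minimal sketch:

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    // Each ForkJoinPool worker keeps its own deque of tasks; an idle
    // worker steals from a busy worker's deque, much like rule 1 above.
    public class StealDemo {
        static class Sum extends RecursiveTask<Long> {
            final long from, to;
            Sum(long from, long to) { this.from = from; this.to = to; }
            protected Long compute() {
                if (to - from <= 1_000) {
                    long s = 0;
                    for (long i = from; i < to; i++) s += i;
                    return s;
                }
                long mid = (from + to) / 2;
                Sum left = new Sum(from, mid);
                left.fork(); // pushed onto this worker's queue, stealable
                return new Sum(mid, to).compute() + left.join();
            }
        }

        public static void main(String[] args) {
            System.out.println(new ForkJoinPool().invoke(new Sum(0, 10_000_000)));
        }
    }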
I'm purely guessing here, but I'd imagine that there's a small number of threads, which pick processes from a common process pool for execution. Once a process hits a blocking operation, the thread executing it puts it aside and picks another. When a process being executed causes another process to become unblocked, that newly unblocked process gets placed into the pool. I suppose a thread might also stop execution of a process at certain points, even when it is not blocked, to serve other processes.
I would like to add some input to what was described in the accepted answer.
The Erlang scheduler is an essential part of the Erlang Runtime System and provides its own abstraction and implementation of lightweight processes atop OS threads.
Each scheduler runs within a single OS thread. Normally there are as many schedulers as there are CPU cores on the hardware (this is configurable, though, and naturally does not bring much value when the number of schedulers exceeds the number of hardware cores). The system can also be configured so that a scheduler will not jump between OS threads.
Now, when an Erlang process is created, it is entirely the responsibility of ERTS and the scheduler to manage its life cycle, resource consumption, memory footprint, and so on.
One of the core implementation details is that each process has a time budget of 2000 reductions available when the scheduler picks that process up from the run queue. Every operation in the system (even I/O) is charged against a reduction budget. That is what actually makes ERTS a system with preemptive multitasking.
I would recommend a great blog post on this topic by Jesper Louis Andersen: http://jlouisramblings.blogspot.com/2013/01/how-erlang-does-scheduling.html
The short answer: Erlang processes are not OS threads and do not map to them directly. Erlang schedulers are what run on the OS threads, and they provide a smart implementation of the more finely grained Erlang processes, hiding those details from the programmer.

Is having many threads in a JVM application expensive?

I'm currently learning about actors in Scala. The book recommends using the react method instead of receive, because it allows the system to use less threads.
I have read why creating a thread is expensive. But what are the reasons that, once you have the threads (which should hold for the actor system in Scala after the initialization), having them around is expensive?
Is it mainly the memory consumption? Or are there any other reasons?
Using many threads may be more expensive than you would expect because:
each thread consumes memory outside of the heap for its stack, which places a restriction on how many threads can be created at all for a JVM (see the sketch after this list);
switching from one thread to another consumes some CPU time, so if you have an activity that can be performed in a single thread, you will save CPU cycles;
the JVM scheduler has more work to do if there are more threads, and the same applies to the underlying OS scheduler;
lastly, it makes little sense to use more threads than you have CPU cores for CPU-bound tasks, and it makes little sense to use more I/O threads than you have I/O activities (e.g., network clients).
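To illustrate the first point, here is a sketch (class name invented): each Java thread reserves its own stack outside the heap, sized globally with the -Xss JVM flag or hinted per thread via the four-argument Thread constructor:

    public class StackDemo {
        public static void main(String[] args) throws InterruptedException {
            // Each thread gets its own stack, allocated outside the Java heap.
            // The default size is platform-dependent and can be changed
            // globally with -Xss, or hinted per thread:
            Thread t = new Thread(null, () -> System.out.println("hello"),
                    "small-stack", 64 * 1024); // stackSize is only a hint to the VM
            t.start();
            t.join();
            // 10,000 threads at ~1 MB of stack each would reserve ~10 GB of
            // address space outside the heap, regardless of heap settings.
        }
    }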
Besides the memory overhead of having a thread around (which may or may not be small), having more threads around will also usually mean that the scheduler has more elements to consider when the time comes for it to pick which thread gets the CPU next.
Some operating systems / JVMs may also have constraints on the number of threads that can exist concurrently.
In the end, it's an accumulation of small overheads that can add up to a lot. And none of this is actually specific to Java.
Having threads around is not "expensive". Of course, it kind of depends on how many we're talking about here; I'd suspect billions of threads would be a problem. Generally speaking, having a lot of threads is considered expensive because you can do more parallel work, so CPU usage goes up, memory usage goes up, and so on. But if they are correctly managed (pooled, for example, to protect the system resources) then it's OK. The JVM does not necessarily use native threads, so a Java thread is not necessarily mapped to an OS native thread (look at green threads, for example, or lightweight threads). In my opinion, there's no implicit cost to threads in the JVM. The cost comes from poor thread management and overuse of resources by carelessly assigning them work.

Preemptive threads vs non-preemptive threads

Can someone please explain the difference between the preemptive threading model and the non-preemptive threading model?
As per my understanding:
Non-preemptive threading model: once a thread is started, it cannot be stopped, and control cannot be transferred to other threads until the thread has completed its task.
Preemptive threading model: the runtime is allowed to step in and hand control from one thread to another at any time. Higher priority threads are given precedence over lower priority threads.
Can someone please:
Explain if the understanding is correct.
Explain the advantages and disadvantages of both models.
An example of when to use what will be really helpful.
If I create a thread in Linux (System V or pthreads) without specifying any options (are there any?), is the threading model used preemptive by default?
No, your understanding isn't entirely correct. Non-preemptive (aka cooperative) threads typically yield control manually to let other threads run before they finish (though it is up to each thread to call yield(), or whatever, to make that happen).
Preemptive threading is simpler to use. Cooperative threads have less overhead.
Normally, use preemptive. If you find your design has a lot of thread-switching overhead, cooperative threads would be a possible optimization. In many (most?) situations, though, this will be a fairly large investment with minimal payoff.
Yes, by default you'd get preemptive threading, though if you look around for the CThreads package, it supports cooperative threading. Few enough people (now) want cooperative threads that I'm not sure it's been updated within the last decade though...
Non-preemptive threads are also called cooperative threads. An example of these is POE (Perl). Another example is classic Mac OS (before OS X). Cooperative threads have exclusive use of the CPU until they give it up. The scheduler then picks another thread to run.
Preemptive threads can voluntarily give up the CPU just like cooperative ones, but when they don't, it will be taken from them, and the scheduler will start another thread. POSIX & SysV threads fall in this category.
Big advantages of cooperative threads are greater efficiency (on single-core machines, at least) and easier handling of concurrency: it only exists when you yield control, so locking isn't required.
Big advantages of preemptive threads are better fault tolerance: a single thread failing to yield doesn't stop all other threads from executing. Also normally works better on multi-core machines, since multiple threads execute at once. Finally, you don't have to worry about making sure you're constantly yielding. That can be really annoying inside, e.g., a heavy number crunching loop.
You can mix them, of course. A single preemptive thread can have many cooperative threads running inside it.
Using a non-preemptive model does not mean that a process avoids context switching while it is waiting for I/O. The dispatcher will choose another process according to the scheduling model. We have to trust the process.
Non-preemptive:
less context switching, and thus less overhead, which can be noticeable in the non-preemptive model
easier to handle, since it can be managed using a single-core processor
Preemptive:
Advantages:
priorities give us more control over the running processes
better concurrency is a bonus
system calls can be handled without blocking the entire system
Disadvantages:
more complex algorithms are required for synchronization, and critical-section handling is inevitable
the overhead that comes with context switching
In cooperative (non-preemptive) models, once a thread is given control it continues to run until it explicitly yields control or it blocks.
In a preemptive model, the virtual machine is allowed to step in and hand control from one thread to another at any time. Both models have their advantages and disadvantages.
Java threads are generally preemptive between priorities. A higher priority thread takes precedence over a lower priority thread. If a higher priority thread goes to sleep or blocks, then a lower priority thread can run (assuming one is available and ready to run).
However, as soon as the higher priority thread wakes up or unblocks, it will interrupt the lower priority thread and run until it finishes, blocks again, or is preempted by an even higher priority thread.
The Java Language Specification occasionally allows VMs to run lower priority threads instead of a runnable higher priority thread, but in practice this is unusual.
However, nothing in the Java Language Specification specifies what is supposed to happen with equal priority threads. On some systems these threads will be time-sliced and the runtime will allot a certain amount of time to a thread. When that time is up, the runtime preempts the running thread and switches to the next thread with the same priority.
On other systems, a running thread will not be preempted in favor of a thread with the same priority. It will continue to run until it blocks, explicitly yields control, or is preempted by a higher priority thread.
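For completeness, the explicit hand-off mentioned above looks like this in Java (class name invented); note that Thread.yield() is only a hint, which schedulers are free to ignore:

    public class YieldDemo {
        public static void main(String[] args) {
            Runnable polite = () -> {
                for (int i = 0; i < 5; i++) {
                    System.out.println(Thread.currentThread().getName() + ": " + i);
                    Thread.yield(); // hint: let another same-priority thread run;
                                    // the scheduler may ignore this entirely
                }
            };
            new Thread(polite, "a").start();
            new Thread(polite, "b").start();
        }
    }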
As for the advantages, both derobert and pooria have highlighted them quite clearly.

Is there any comprehensive overview somewhere that discusses all the different types of threads?

Is there any comprehensive overview somewhere that discusses all the different types of threads and what their relationship is with the OS and the scheduler? I've heard so much contradicting information about whether you want certain types of threads, whether thread pooling is a performance gain or a performance hit, or that threads are heavyweight so you should use these other kinds of threads that don't map directly to real threads (but then how is that different from thread pooling?). I'm paralyzed. How does anyone make sense of it? Assume the use of a language that actually interacts with threads directly (I'm aware of concurrent languages, implicit parallelism, etc. as an alternative to needing to know this stuff, but I'm curious about this at the moment).
Here is my brief summary, please comment and edit at will:
There are no hyperthreads, unless you're talking about Intel's hyper-threading, in which case it's just virtual cores.
"Green" usually means "not OS-level" (scheduled/handled by a VM, which may or may not map them onto multiple OS-level threads or processes)
pthreads are an API (POSIX Threads)
Kernel threads vs user threads is an implementation level (user threads are implemented in userland, so the kernel is not aware of them and neither is its scheduler), "threads" alone is generally an alias for "kernel threads"
Fibers are system-level coroutines. They're threads, except cooperatively multitasked rather than preemptively.
Well, like with most things, it's common not to care unless threading is identified as a bottleneck. That is, just use the threading functionality that your platform provides in the usual manner and don't worry about the details, at least in the beginning.
But since you evidently want to know more: Usually, the operating system has a concept of a thread as a unit of execution, which is what the OS scheduler handles. Now, switching between OS-level threads requires a context switch, which can be expensive and can become a performance bottleneck. So instead of mapping programming-language threads directly to OS threads, some threading implementations do everything in user space, so that there is only one OS-level thread that is responsible for all the user-level threads in the application. This is more efficient both performance- and resource-wise, but it has the problem that if you actually have several physical processors, you cannot use more than one of them with user-level threads. So there's one more strategy of allocating threads: have multiple OS-level threads, the number of which relates to the number of physical processors you have, and have each of these be responsible for several user-level threads. These three strategies are often called 1:1 (user threads map 1-to-1 to OS threads), N:1 (all user threads map to 1 OS thread), and M:N (M user threads map to N OS threads).
Thread pooling is a slightly different thing. The idea behind thread pooling is to separate the execution resources from the actual execution, so that you have a number of threads (your resources) available in the thread pool, and when you need some task to be executed, you just pick one thread from the pool and hand the task over to it. So thread pooling is a way to design a multi-threaded application. Another way to design would be to identify the different tasks that will need to be performed (e.g., reading from a network, drawing the UI to the screen), and create a dedicated thread for these tasks. This is mostly orthogonal to whether the threads are user- or OS-level concepts.
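A small sketch of the two designs side by side (all task names are invented for illustration):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class DesignDemo {
        public static void main(String[] args) {
            // Design 1: a shared pool; any idle worker picks up the next task.
            ExecutorService pool = Executors.newFixedThreadPool(4);
            pool.submit(() -> System.out.println("parse request"));
            pool.submit(() -> System.out.println("write log entry"));
            pool.shutdown();

            // Design 2: dedicated threads, one per long-lived responsibility.
            new Thread(() -> System.out.println("network reader loop"), "net-reader").start();
            new Thread(() -> System.out.println("ui renderer loop"), "ui-renderer").start();
        }
    }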
Threads are the main building block of a process in the Windows Win32 architecture. You can ignore green threads, fibers, green fibers, and pthreads (POSIX). Hyper threads don't exist; it is "hyper-threading", which is a CPU architecture feature. You cannot code for it, so you can ignore it.
This leaves us with threads. Indeed, only threads. A kernel thread is a thread of the kernel, which lives in the upper 2 GB (sometimes upper 1 GB) of the virtual memory address space of a machine. You cannot touch it, so you can ignore it most of the time (unless you are writing kernel-mode ring-0 code).
Only user threads are the ones you should be concerned about. They come in two flavors: the main thread and auxiliary threads. Each process has at least one main thread; it is created for you when you create a process (the CreateProcess API call). Auxiliary threads can do tasks that take long and would otherwise interrupt the user experience. In C#/.NET you can use the BackgroundWorker class to easily create and manage threads.
Threads have several properties. This may have led to all the "kinds" of threads. But worker threads are probably the only ones you should be worried about when you start dealing with threads.
I learned a lot reading these slides.
I came across this after looking at Unicorn.