I had a look at the Lua book and learned that multi-threading in Lua is cooperative. What I couldn't find is any information about thread priorities. I guess that threads with the same priority run until completion (since multi-threading is cooperative) or until they yield. But what about a thread with a higher priority than another one?
Is it able to interrupt the lower-priority one, or will it only run next, once the lower-priority thread has run to completion?
There are no native threads (preemptive multitasking) in Lua; there is, however, cooperative multitasking, as you said.
The difference between preemptive and cooperative multitasking is that in preemptive multitasking the "threads" are not necessarily allowed to run to completion; they can be preempted by other threads. This is done by the scheduler, which runs at regular intervals, switching one thread for another. This is where priorities come in: if a thread with a higher priority wants to run, it can preempt an already running thread with a lower priority, and the scheduler will choose that thread (depending on the scheduling strategy) the next time it runs.
In cooperative multitasking there does not have to be a scheduler (though for practical reasons it's usually a good idea to have one). There are, however, co-processes. A co-process is like a thread, except it cannot be preempted. It can either run to completion or yield to another co-process and allow that one to run.
So, back to your question: if you want priorities with cooperative multitasking, you need to write a scheduler that decides which co-process to run given its priority, and you need to write your co-processes so that they give up the processor once in a while and return control to the scheduler.
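A minimal sketch of such a priority scheduler, using Python generators as a stand-in for Lua coroutines (in Lua you would build the same structure with coroutine.create/resume/yield); the priority numbers and task names are made up for illustration:

```python
import heapq

def make_task(name, steps):
    """A cooperative task: does a slice of work, then yields control."""
    def task():
        for i in range(steps):
            # ... do one slice of work, then hand control back ...
            yield f"{name} step {i}"
    return task()

def run(tasks):
    """tasks: list of (priority, task) pairs; a lower number means a higher
    priority. The scheduler always resumes the runnable task with the best
    priority. No task can interrupt another: a switch happens only when the
    currently running task yields."""
    heap = [(prio, i, t) for i, (prio, t) in enumerate(tasks)]
    heapq.heapify(heap)
    order = []
    while heap:
        prio, i, t = heapq.heappop(heap)
        try:
            order.append(next(t))                # resume until the task yields
            heapq.heappush(heap, (prio, i, t))   # still runnable: requeue it
        except StopIteration:
            pass                                 # task ran to completion
    return order

order = run([(2, make_task("low", 2)), (1, make_task("high", 2))])
print(order)  # the high-priority task finishes first, then the low one
```

Note that the high-priority task only wins because the low-priority one yields after every step; if it never yielded, nothing could stop it.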
Edit
To clarify, there is a slight difference between non-preemptive multitasking and cooperative multitasking. Non-preemptive multitasking is a bit broader, as it allows both static scheduling and cooperative multitasking.
Static scheduling means that a scheduler can schedule periodic tasks, which then run when the current task yields, possibly chosen by priority.
Cooperative multitasking is also a type of non-preemptive multitasking. Here, however, tasks are scheduled only by the tasks themselves: control is explicitly yielded from one task to another, though which task it yields to can be based on a priority.
In Lua, threads cannot run in parallel (i.e. on multiple cores) within one Lua state. There is no simultaneous execution, since it's cooperative multitasking: only when one thread suspends execution (yields) can another thread resume. At no point can two Lua threads execute concurrently within one Lua state.
What you're talking about is preemption - a scheduler interrupting one thread to let another one execute.
Related
According to Wikipedia, coroutines are based on cooperative multitasking, which makes them less resource-hungry than threads. No context switch, no blocking, no expensive system calls, no critical sections and so on.
In other words, all those coroutine benefits seem to come from disallowing multithreading in the first place. This makes coroutines single-threaded by nature: concurrency is achieved, but no true parallelism.
Is it true? Is it possible to implement coroutines by using multiple threads instead?
Coroutines allow multitasking without multithreading, but they don't disallow multithreading.
In languages that support both, a coroutine that is put to sleep can be re-awakened in a different thread.
The usual arrangement for CPU-bound tasks is to have a thread pool with about twice as many threads as you have CPU cores. This thread pool is then used to execute maybe thousands of coroutines simultaneously. The threads share a queue of coroutines ready to execute, and whenever a thread's current coroutine blocks, it just gets another one to work on from the queue.
In this situation you have enough busy threads to keep your CPU busy, and you still have thread context switches, but not enough of them to waste significant resources. The number of coroutine context switches is thousands of times higher.
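A toy sketch of that arrangement, using Python generators as the coroutines and a plain worker pool sharing a queue (a real M:N runtime is far more sophisticated, but the shape is the same):

```python
import queue
import threading

jobs = queue.Queue()
results = []
results_lock = threading.Lock()

def coro(n):
    """A tiny generator 'coroutine': it yields wherever it would block."""
    yield                        # e.g. waiting on I/O; the worker picks up other work
    with results_lock:
        results.append(n * n)

def worker():
    while True:
        g = jobs.get()
        if g is None:            # shutdown sentinel
            jobs.task_done()
            return
        try:
            next(g)              # resume until the coroutine yields...
            jobs.put(g)          # ...then requeue it for any free worker
        except StopIteration:
            pass                 # coroutine ran to completion
        jobs.task_done()

# Many coroutines multiplexed over a handful of threads.
for n in range(100):
    jobs.put(coro(n))
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
jobs.join()                      # wait until every coroutine has finished
for _ in threads:
    jobs.put(None)
for t in threads:
    t.join()

print(sorted(results) == [n * n for n in range(100)])
```

Each coroutine may be resumed by a different thread than the one that started it, which is exactly the "re-awakened in a different thread" behavior described above.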
Multiple coroutines can be mapped to a single OS thread. But a single OS thread can only utilize 1 CPU. So you need multiple OS threads to utilize multiple CPUs.
So if a coroutine scheduler needs to utilize multiple CPUs (very likely), it needs to make use of multiple OS threads.
Have a look at the Go scheduler, and search for "M:N scheduler".
From the Tanenbaum OS book it is mentioned the following:
"in user level threads, if a thread starts running, no other thread in that process will ever run unless the first thread voluntarily gives up the CPU".
That means the threads are going to run one after the other (sequentially), not in parallel. So what is the advantage of user-level threads?
There are two approaches to multitasking in a single-process, multiple-thread environment.
1. A single thread executes within the process's time slice, and that thread takes care of scheduling the other threads.
2. The OS makes the scheduling decisions for the process's threads and might run them in parallel on different cores.
You are talking about approach 1. Yes, it has no parallelism advantage from multi-threading, but it lets many threads/programs run one after another and gives you (virtual) "multitasking".
In my understanding, pre-emptive multitasking is when a time slice (e.g. 1 millisecond) makes the scheduler (of the OS) hand one thread to the CPU for that span of time, then switch to another thread, execute it for 1 millisecond, then switch back to the first thread, and so on (assuming, for simplicity, that there are only two threads).
Reference: https://www.youtube.com/watch?v=hsERPf9k54U
In contrast to pre-emptive multitasking there is the concept of priorities: the OS assigns applications priorities as numbers, e.g. 1 to 39, on whatever basis; that is not the concern for now.
And the advantage of this is that if one application hangs, the time slicer simply goes back to the other thread (let's say this thread belongs to a different application, and the first application has hung) and continues to work normally. Then you can close the hung app.
Reference: https://www.youtube.com/watch?v=hsERPf9k54U
Now, I don't think this is particularly an advantage of this kind of multitasking. It should be the same thing in preemptive multitasking, shouldn't it?
Thank you in advance.
Preemption, multitasking, and priority (scheduling) are different aspects of OS concepts.
Preemptive, in the context of process scheduling, is a strategy in which the OS can preempt (take back) the resources allocated to a process whenever it (the OS) needs to. In contrast, with a non-preemptive scheduling strategy, the OS cannot take the resources back until the process finishes using them and releases them.
A priority scheduling algorithm can be implemented with preemptive or non-preemptive strategy.
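A small simulation to illustrate the difference (the task names, arrival times, and durations are made up for this example); lower priority numbers win, and time advances in unit ticks:

```python
def schedule(tasks, preemptive):
    """tasks: list of (name, arrival, priority, duration); a lower priority
    number wins. Returns task names in order of completion."""
    remaining = {name: dur for name, _, _, dur in tasks}
    finished = []
    t = 0
    while remaining:
        # Tasks that have arrived and are not yet finished.
        ready = [(prio, arr, name) for name, arr, prio, dur in tasks
                 if name in remaining and arr <= t]
        if not ready:
            t += 1
            continue
        prio, arr, name = min(ready)        # highest-priority ready task
        if preemptive:
            remaining[name] -= 1            # run one tick, then reschedule:
            t += 1                          # a new arrival may preempt it
        else:
            t += remaining[name]            # run to completion, no preemption
            remaining[name] = 0
        if remaining[name] == 0:
            del remaining[name]
            finished.append(name)
    return finished

tasks = [("low", 0, 2, 4), ("high", 1, 1, 2)]
print(schedule(tasks, preemptive=False))  # ['low', 'high']
print(schedule(tasks, preemptive=True))   # ['high', 'low']
```

The priority rule is identical in both runs; only the preemption strategy differs, which is why the high-priority task finishes first only in the preemptive run.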
Can someone please explain the difference between preemptive Threading model and Non Preemptive threading model?
As per my understanding:
Non-preemptive threading model: once a thread is started, it cannot be stopped and control cannot be transferred to other threads until the thread has completed its task.
Preemptive threading model: the runtime is allowed to step in and hand control from one thread to another at any time. Higher-priority threads are given precedence over lower-priority threads.
Can someone please:
Explain if the understanding is correct.
Explain the advantages and disadvantages of both models.
An example of when to use what will be really helpful.
If I create a thread in Linux (System V or Pthreads) without specifying any options (are there any?), is the threading model used by default the preemptive one?
No, your understanding isn't entirely correct. Non-preemptive (a.k.a. cooperative) threads typically yield control manually to let other threads run before they finish, though it is up to each thread to call yield() (or whatever) to make that happen.
Preemptive threading is simpler to use. Cooperative threads have less overhead.
Normally use preemptive. If you find your design has a lot of thread-switching overhead, cooperative threads would be a possible optimization. In many (most?) situations, this will be a fairly large investment with minimal payoff though.
Yes, by default you'd get preemptive threading, though if you look around for the CThreads package, it supports cooperative threading. Few enough people want cooperative threads now that I'm not sure it's been updated within the last decade, though...
Non-preemptive threads are also called cooperative threads. An example of these is POE (Perl). Another example is classic Mac OS (before OS X). Cooperative threads have exclusive use of the CPU until they give it up. The scheduler then picks another thread to run.
Preemptive threads can voluntarily give up the CPU just like cooperative ones, but when they don't, it will be taken from them, and the scheduler will start another thread. POSIX & SysV threads fall in this category.
Big advantages of cooperative threads are greater efficiency (on single-core machines, at least) and easier handling of concurrency: it only exists when you yield control, so locking isn't required.
Big advantages of preemptive threads are better fault tolerance: a single thread failing to yield doesn't stop all other threads from executing. They also normally work better on multi-core machines, since multiple threads execute at once. Finally, you don't have to worry about making sure you're constantly yielding. That can be really annoying inside, e.g., a heavy number-crunching loop.
You can mix them, of course. A single preemptive thread can have many cooperative threads running inside it.
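The locking point above is worth seeing concretely. Here is the classic shared-counter case under preemptive threads, sketched in Python; without the lock, the read-modify-write sequence could be interleaved at any point, which is exactly what a cooperative thread that never yields mid-update does not have to worry about:

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:          # protects the read-modify-write from interleaving
            counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000, deterministic only because of the lock
```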
Using a non-preemptive model does not mean there is no context switching: when the running process waits for I/O, the dispatcher chooses another process according to the scheduling model. We have to trust each process to yield.
non-preemptive:
Advantages:
- Less context switching, so less overhead, which can be significant in a non-preemptive model
- Easier to handle, since it can be managed even on a single-core processor
preemptive:
Advantages:
- Priorities give us more control over the running processes
- Better concurrency is a bonus
- System calls can be handled without blocking the entire system
Disadvantages:
- More complex algorithms for synchronization are required, and critical-section handling is inevitable
- The overhead that comes with it
In cooperative (non-preemptive) models, once a thread is given control it continues to run until it explicitly yields control or it blocks.
In a preemptive model, the virtual machine is allowed to step in and hand control from one thread to another at any time. Both models have their advantages and disadvantages.
Java threads are generally preemptive between priorities. A higher priority thread takes precedence over a lower priority thread. If a higher priority thread goes to sleep or blocks, then a lower priority thread can run (assuming one is available and ready to run).
However, as soon as the higher priority thread wakes up or unblocks, it will interrupt the lower priority thread and run until it finishes, blocks again, or is preempted by an even higher priority thread.
The Java Language Specification occasionally allows VMs to run lower-priority threads instead of a runnable higher-priority thread, but in practice this is unusual.
However, nothing in the Java Language Specification specifies what is supposed to happen with equal priority threads. On some systems these threads will be time-sliced and the runtime will allot a certain amount of time to a thread. When that time is up, the runtime preempts the running thread and switches to the next thread with the same priority.
On other systems, a running thread will not be preempted in favor of a thread with the same priority. It will continue to run until it blocks, explicitly yields control, or is preempted by a higher priority thread.
As for the advantages, both derobert and pooria have highlighted them quite clearly.
How do you tell the thread scheduler in Linux not to interrupt your thread for any reason? I am programming in user mode. Does simply locking a mutex accomplish this? I want to prevent other threads in my process from being scheduled while a certain function is executing. They would block, and I would be wasting CPU cycles on context switches. I want any thread executing the function to be able to finish without interruption, even if the thread's timeslice is exceeded.
How do you tell the thread scheduler in linux to not interrupt your thread for any reason?
Can't really be done; you need a real-time system for that. The closest you'll get with Linux is to set the scheduling policy to a real-time scheduler, e.g. SCHED_FIFO, and also set the PTHREAD_EXPLICIT_SCHED attribute. See e.g. here. Even then, though, IRQ handlers and other kernel work will still interrupt your thread and run.
However, if you only care about the threads in your own process not being able to do anything, then yes, having them block on a mutex your running thread holds is sufficient.
The hard part is to coordinate all the other threads to grab that mutex whenever your thread needs to do its thing.
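A minimal sketch of that coordination in Python (the sleep durations are arbitrary and only ensure the critical thread grabs the mutex first); note that this only keeps the cooperating threads out of the protected section, while the kernel remains free to preempt the holder:

```python
import threading
import time

big_lock = threading.Lock()
log = []

def critical_work():
    with big_lock:              # while held, cooperating threads sleep on the lock
        log.append("critical start")
        time.sleep(0.1)         # stand-in for the work that must not be raced
        log.append("critical end")

def other_work():
    time.sleep(0.01)            # let critical_work grab the lock first
    with big_lock:              # blocks (sleeps in the kernel, no busy-waiting)
        log.append("other ran")

t1 = threading.Thread(target=critical_work)
t2 = threading.Thread(target=other_work)
t1.start(); t2.start()
t1.join(); t2.join()
print(log)
```

The blocked thread costs essentially nothing while it waits, which is the point made below: the mutex machinery is there precisely so you don't have to fight the scheduler.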
You should architect your software so that you're not dependent on the scheduler doing the "right" thing from your app's point of view. The scheduler is complicated; it will do what it thinks is best.
Context switches are cheap. You say
I would be wasting cpu cycles with context switches.
but you should not look at it that way. Use the multi-threaded machinery of mutexes and blocked / waiting processes. The machinery is there for you to use...
You can't. If you could, what would prevent your thread from never releasing the CPU and starving the other threads?
The best you can do is set your threads priority so that the scheduler will prefer it over lower priority threads.
Why not simply let the competing threads block, then the scheduler will have nothing left to schedule but your living thread? Why complicate the design second guessing the scheduler?
Look into real-time scheduling under Linux. I've never done it, but if you really do NEED this, it is as close as you can get in user application code.
What you seem to be scared of isn't really that big of a deal, though. You can't stop the kernel from interrupting your program for real interrupts or when a higher-priority task wants to run, but with regular scheduling the kernel uses its own computed priority value, which pretty much handles most of what you are worried about. If thread A is holding resource X exclusively (X could be a lock) and thread B is waiting on resource X to become available, then A's effective priority will be at least as high as B's. The kernel also takes into account whether a process is using up lots of CPU or spending lots of time sleeping when computing the priority. And of course the nice value goes in there too.