Can a runnable be preempted during its execution on AUTOSAR or OSEK?

In AUTOSAR, runnables are mapped to tasks, and tasks can be preempted based on their priority.
Where is the point of preemption? Does preemption only happen between runnables?
Is a runnable executed atomically, or can it be preempted during its execution?

The point of preemption is to ensure other tasks run at their scheduled intervals.
Since runnables run within the context of a task, and tasks can be preempted, this means that runnables themselves can be preempted. Another consequence is that runnables can only be preempted by runnables in other tasks (or interrupts). So if you have runnables A and B running in the same task, then A will never be preempted by B and vice versa - A and B are atomic with respect to one another.
Autosar further supports exclusive areas, a mechanism that is essentially a critical section. Multiple runnables can use the same exclusive area, and if one runnable enters an exclusive area with Rte_Enter then no other runnable can enter the same exclusive area until the first one leaves it with Rte_Exit.
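To make that concrete, here is a minimal sketch of two runnables protecting shared data with an exclusive area. All names (the Rte_SensorSwc.h header, the EA_SensorData area, ReadSensorRaw, Rte_Write_ReportPort_Value) are hypothetical; the actual Rte_Enter/Rte_Exit prototypes are generated by the RTE from the software component's configuration, so treat this as an illustration rather than working code.

    /* Sketch only: the names below are hypothetical; the Rte_Enter_/Rte_Exit_
     * functions are generated by the RTE for the configured exclusive area. */
    #include "Rte_SensorSwc.h"            /* hypothetical generated header    */

    static uint16 g_filteredValue;        /* data shared by the two runnables */

    extern uint16 ReadSensorRaw(void);    /* hypothetical low-level helper    */

    /* 10 ms runnable: updates the shared value. */
    void Runnable_Acquire_10ms(void)
    {
        uint16 raw = ReadSensorRaw();

        Rte_Enter_EA_SensorData();        /* enter the exclusive area         */
        g_filteredValue = (uint16)((g_filteredValue + raw) / 2u);
        Rte_Exit_EA_SensorData();         /* leave it as soon as possible     */
    }

    /* 100 ms runnable, possibly mapped to another (lower-priority) task. */
    void Runnable_Report_100ms(void)
    {
        uint16 copy;

        Rte_Enter_EA_SensorData();        /* no other runnable can enter now  */
        copy = g_filteredValue;           /* read cannot be torn by Acquire   */
        Rte_Exit_EA_SensorData();

        (void)Rte_Write_ReportPort_Value(copy);   /* hypothetical sender port */
    }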

In addition to the above answer, I would like to add the concept called cooperative runnable placement in AUTOSAR. Under this concept, runnables with data-access constraints are grouped as "cooperative runnables": runnables in this group will never preempt each other, but they can be preempted by runnables that do not belong to the group.
Also, runnables are executed in the context of a task; if the task is preemptable, then the runnables belonging to that task are also preemptable.

Related

Once a thread is yielded, will the operating system switch execution to an idle process?

I am new to operating systems, so I could not understand this claim:
Once a thread is yielded, the operating system will switch execution to an idle process to change the priority of the current calling thread.
Is that correct? If yes, how does it work? If not, what happens instead?
When a thread yields, the operating system might use that core to run any ready-to-run thread (either from the same process or some other process) that it believes should run. It may also switch immediately back to the yielding thread even if there are other ready-to-run threads because doing otherwise might require expensive inter-core synchronization.
The "how" is basically as follows:
The OS enters kernel mode and calls the scheduler to see whether there is another ready-to-run thread.
If there is, a context switch takes place: the old thread's context is stored, and the user context to restore becomes that of the new thread.
The kernel switches back to user space, restoring the user context of the thread it wishes to run.
Some OSes have separate pools of ready-to-run threads for each core to avoid the scheduler having "one big lock" that slows down context switches due to inter-core synchronization. Such an OS might not actually yield if all ready-to-run threads are "owned" by other cores, or it might decide that this situation justifies inter-core synchronization and check the other cores to "steal" a ready-to-run thread (or "trade" threads).
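As a small illustration of the "might switch, might not" behaviour described above, here is a Linux-specific sketch. It assumes pthreads and the GNU affinity extension; the core number and loop counts are arbitrary. Both threads are pinned to one core, one calls sched_yield() every iteration, and whether the yield actually hands the core to the other thread is entirely the scheduler's decision.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static volatile unsigned long polite_count, greedy_count;

    static void *polite(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            polite_count++;
            sched_yield();            /* offer the core to another ready thread */
        }
        return NULL;
    }

    static void *greedy(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            greedy_count++;           /* never yields voluntarily */
        return NULL;
    }

    int main(void)
    {
        cpu_set_t core0;
        pthread_attr_t attr;
        pthread_t t1, t2;

        CPU_ZERO(&core0);
        CPU_SET(0, &core0);           /* force both threads onto the same core */
        pthread_attr_init(&attr);
        pthread_attr_setaffinity_np(&attr, sizeof core0, &core0);

        pthread_create(&t1, &attr, polite, NULL);
        pthread_create(&t2, &attr, greedy, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("polite=%lu greedy=%lu\n", polite_count, greedy_count);
        return 0;
    }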

Is there preemption with user-level threads

With user-level threads, can a low-priority thread be preempted to allow a high-priority thread to run?
My reasoning for this question from Modern Operating Systems was:
A user level thread is handled by the user level process. The user process can kick its threads on or off the CPU during its allotted time slice (quantum). However, the kernel cannot see the user level thread: it just knows that a particular user process is running in its allotted time slice.
When a thread enters its critical section, it requires a shared resource handled by the system. Thus, a system call is made.
However, when a thread makes a system call, all the other threads in the parent process are blocked. This means that a sibling thread cannot preempt the blocking thread.
Therefore, although preemption can happen to a user level thread, priority inversion cannot.
Edit: After learning a bit more, I found out that preemption of user-level threads depends on the thread model (i.e. the mapping of user-level threads to kernel-level threads) and its implementation. Will update once further info is acquired.
Based on my reading of Modern Operating Systems by Tanenbaum, I realised that the solution says so because preemption requires timer interrupts (the other way is yielding, which would never happen during critical-section execution), and there are no timer interrupts for user-level threads (at least they are not discussed in the book). This doesn't mean that preemption cannot be implemented for user-level threads, as discussed here.
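To illustrate that last point, here is the classic (and admittedly not strictly portable) sketch of bolting preemption onto user-level threads: a periodic timer signal plays the role of the timer interrupt, and its handler swaps ucontext_t contexts behind the running thread's back. The thread count, the time slice, and the trick of calling swapcontext() from a signal handler are all illustrative assumptions, not a production design; the program runs until killed.

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>
    #include <ucontext.h>
    #include <unistd.h>

    #define NTHREADS   2
    #define STACK_SIZE (64 * 1024)

    static ucontext_t ctx[NTHREADS];
    static int current;

    /* Round-robin "scheduler", driven by the timer signal. */
    static void preempt(int sig)
    {
        (void)sig;
        int prev = current;
        current = (current + 1) % NTHREADS;
        swapcontext(&ctx[prev], &ctx[current]);   /* switch threads by force */
    }

    /* A user-level thread that never yields voluntarily. */
    static void worker(int id)
    {
        char msg[] = "thread ? got the CPU\n";
        msg[7] = (char)('0' + id);
        for (unsigned long i = 0; ; i++)
            if (i % 200000000UL == 0)
                write(STDOUT_FILENO, msg, sizeof msg - 1);
    }

    int main(void)
    {
        for (int i = 0; i < NTHREADS; i++) {
            getcontext(&ctx[i]);
            ctx[i].uc_stack.ss_sp = malloc(STACK_SIZE);
            ctx[i].uc_stack.ss_size = STACK_SIZE;
            ctx[i].uc_link = NULL;
            makecontext(&ctx[i], (void (*)(void))worker, 1, i);
        }

        /* The "timer interrupt": without it, worker() would run forever and
         * the other user-level thread would never get the CPU. */
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = preempt;
        sigaction(SIGVTALRM, &sa, NULL);

        struct itimerval slice = { {0, 10000}, {0, 10000} };   /* 10 ms slices */
        setitimer(ITIMER_VIRTUAL, &slice, NULL);

        ucontext_t main_ctx;
        swapcontext(&main_ctx, &ctx[0]);          /* enter thread 0; no return */
        return 0;
    }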

What is the difference between user level threads and coroutines?

User-level threading involves N user-level threads that run on a single kernel thread. What are the details of user-level threading, and how does it differ from coroutines?
Wikipedia has a quite in-depth summary on the subject: Thread (computing).
With green threads, there's a VM executing instructions, and it typically decides whether to switch threads in between two instructions.
With coroutines the two functions yield to each other at specified points, possibly passing values along, and typically requiring special language support. E.g. a producer yielding to a consumer, passing along an item.
The idea behind user-level threads is to have multiple different logical threads running in the same program but to have the user program handle the mapping from logical threads to kernel threads (which actually get scheduled) rather than having the OS handle the entire mapping. This can improve performance by letting the user program handle scheduling. Conceptually, user threads are one implementation of preemptive multitasking, where multiple jobs are run to completion in parallel by having the threads periodically stopped while other threads run.
Coroutines, on the other hand, are a generalization of standard function call and return ("subroutines") where functions pass control back and forth to one another, communicating values as they switch between routines. The switching back and forth between coroutines is under the control of the coroutines themselves; control only passes from one coroutine to another if one of the coroutines explicitly yields a value to another. This is an example of cooperative multitasking, where multiple jobs are completed in parallel by having the individual steps in the task manually coordinate who gets to run and when.
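Here is a tiny sketch of that producer/consumer hand-off, written with C's ucontext API standing in for language-level coroutine support (the names and the three-item run are made up). Control only moves when one routine explicitly swaps to the other, and the "value passed along" is just a shared variable.

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t producer_ctx, consumer_ctx;
    static int item;                          /* the value handed over */

    static void producer(void)
    {
        for (int i = 1; i <= 3; i++) {
            item = i;                                       /* produce...   */
            swapcontext(&producer_ctx, &consumer_ctx);      /* ...and yield */
        }
    }

    static void consumer(void)
    {
        for (int i = 1; i <= 3; i++) {
            printf("consumed %d\n", item);
            swapcontext(&consumer_ctx, &producer_ctx);  /* ask for the next */
        }
    }

    int main(void)
    {
        static char pstack[64 * 1024], cstack[64 * 1024];

        getcontext(&producer_ctx);
        producer_ctx.uc_stack.ss_sp = pstack;
        producer_ctx.uc_stack.ss_size = sizeof pstack;
        producer_ctx.uc_link = &consumer_ctx;   /* when done, let consumer finish */
        makecontext(&producer_ctx, producer, 0);

        getcontext(&consumer_ctx);
        consumer_ctx.uc_stack.ss_sp = cstack;
        consumer_ctx.uc_stack.ss_size = sizeof cstack;
        consumer_ctx.uc_link = NULL;
        makecontext(&consumer_ctx, consumer, 0);

        ucontext_t main_ctx;
        swapcontext(&main_ctx, &producer_ctx);  /* start the producer */
        return 0;
    }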
Hope this helps!

Multiple tasks waiting on same semaphore

Two tasks with different priorities are waiting on the same semaphore. Once the semaphore gets released, does the task with higher priority get scheduled, or is it random? I am using the SCHED_RR scheduling policy.
Generally speaking, I know of no rule about which waiting task gets woken up first when a semaphore is released, so it is up to the scheduler. The priority of the tasks is probably only relevant to the scheduler for its normal scheduling decisions, not for the synchronization done through semaphores.
If you are using SCHED_RR, the scheduler runs the tasks with the highest priority, and it runs such tasks first. If there is a SCHED_RR task in the TASK_RUNNING state, it will run.
On a uniprocessor system, if a SCHED_RR task is in TASK_RUNNING, only that task will be executing. But on a multi-core system, a task with lower priority could be scheduled on another processor.
In my opinion, the task with higher priority and SCHED_RR is scheduled first, but there is no guarantee that this task gets the semaphore first, because the processor might be doing more important work, such as handling interrupts.
Again, this is only my opinion, and I'm fairly new to the Linux kernel. It would be great if somebody more experienced could confirm it.
Edit:
The scheduler is not what matters for the semaphore: the semaphore just wakes up one task regardless of its priority.
So your task can obtain the lock first if it is the first one to try (which is hard to arrange and not safe to rely on), or you could manage the semaphore's wait queue yourself.
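For experimentation, here is a rough sketch that puts two SCHED_RR waiters with different priorities on one POSIX semaphore and posts it once to see which waiter actually wins. The priorities and sleeps are arbitrary, setting real-time priorities needs CAP_SYS_NICE (or root), and, as noted above, POSIX does not guarantee the wake-up order.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t sem;

    static void *waiter(void *arg)
    {
        sem_wait(&sem);                        /* both threads block here */
        printf("priority-%d thread got the semaphore\n", *(int *)arg);
        return NULL;
    }

    static int start_rr_thread(pthread_t *t, int *prio)
    {
        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = *prio };

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_RR);
        pthread_attr_setschedparam(&attr, &sp);
        return pthread_create(t, &attr, waiter, prio);
    }

    int main(void)
    {
        pthread_t lo, hi;
        int lo_prio = 10, hi_prio = 50;

        sem_init(&sem, 0, 0);                  /* start locked: both waiters block */
        if (start_rr_thread(&lo, &lo_prio) || start_rr_thread(&hi, &hi_prio)) {
            fprintf(stderr, "need rtprio privileges (try as root)\n");
            return 1;
        }

        sleep(1);                              /* let both threads reach sem_wait() */
        sem_post(&sem);                        /* release exactly one waiter */
        sleep(1);                              /* observe which one printed */

        sem_post(&sem);                        /* let the other one finish too */
        pthread_join(lo, NULL);
        pthread_join(hi, NULL);
        return 0;
    }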

Thread priorities in Lua

I had a look at the Lua book and learned that multi-threading in Lua is cooperative. What I couldn't find is information about thread priorities. I guess that threads with the same priority run until completion or until a yield is done, since multi-threading is cooperative. What about a thread that has a higher priority than another one?
Is it able to interrupt the one with lower priority, or will it run next once the lower-priority thread has run to completion?
There are no native threads (preemptive multitasking) in Lua; there is, however, cooperative multitasking, as you said.
The difference between preemptive and cooperative multitasking is that in preemptive multitasking the "threads" are not necessarily allowed to run until completion, but can be preempted by other threads. This is done by the scheduler, which runs at regular intervals, switching one thread for another. This is where priorities come in. If a thread with higher priority wants to run, it can preempt an already running thread with lower priority, and the scheduler will choose that thread (depending on the scheduling strategy) the next time the scheduler runs.
In cooperative multitasking there does not have to be a scheduler (though for practical reasons it's usually a good idea to have one). There are, however, co-processes. A co-process is like a thread, except it cannot be preempted. It can either run to completion, or yield to another co-process and allow that to run.
So, back to your question: if you want priorities with cooperative multitasking, you need to write a scheduler that decides which co-process to run based on its priority, and you need to write your co-processes so that they give up processing once in a while and hand control back to the scheduler.
Edit
To clarify, there is a slight difference between non-preemptive multitasking and cooperative multitasking. Non-preemptive multitasking is a bit broader, as it allows both static scheduling and cooperative multitasking.
Static scheduling means that a scheduler can schedule periodic tasks, which can then run when a task yields, maybe with a higher priority.
Cooperative multitasking is also a type of non-preemptive multitasking. However, here tasks are only scheduled by the tasks themselves, and control is explicitly yielded from one task to another; which task it yields to can be based on a priority.
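Here is a language-neutral sketch of that "write your own scheduler" idea (plain C rather than Lua, with made-up task names, priorities and step counts). Each task runs one step and then returns, i.e. yields, and the scheduler picks the highest-priority task that still has work every time it regains control; the higher-priority task never interrupts a running step, it is simply chosen next whenever the current task yields.

    #include <stdio.h>

    typedef struct {
        const char *name;
        int priority;                 /* larger number = more important   */
        int steps_left;               /* work remaining; 0 means finished */
    } Task;

    /* One "step" is the work a task does before voluntarily yielding. */
    static void run_one_step(Task *t)
    {
        printf("running %s (prio %d), %d steps left\n",
               t->name, t->priority, t->steps_left);
        t->steps_left--;              /* returning = handing control back */
    }

    /* The scheduler: pick the highest-priority task that still has work. */
    static Task *pick_next(Task *tasks, int n)
    {
        Task *best = NULL;
        for (int i = 0; i < n; i++)
            if (tasks[i].steps_left > 0 &&
                (best == NULL || tasks[i].priority > best->priority))
                best = &tasks[i];
        return best;
    }

    int main(void)
    {
        Task tasks[] = {
            { "background", 1, 3 },
            { "urgent",     5, 2 },
        };
        Task *next;

        while ((next = pick_next(tasks, 2)) != NULL)
            run_one_step(next);
        return 0;
    }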
In Lua, threads cannot run in parallel (i.e. on multiple cores) within one Lua state. There's no concurrency, since it's cooperative multitasking. Only when one thread suspends execution (yields) can another thread resume. At no point can two Lua threads execute concurrently within one Lua state.
What you're talking about is preemption - a scheduler interrupting one thread to let another one execute.
