AUTOSAR Wdg and ISO26262

[Background]
AUTOSAR Wdg requires refreshing the hardware Wdg in ISR context (SWS_Wdg_00166); the stated purpose is only "minimum timing jitter" and "minimum latencies" for the window Wdg, plus compatibility with the old WdgM.
But my understanding is that the purpose of a window Wdg is to detect system clock jitter (for example, in the CPU PLL), which is required by ISO 26262 Annex D (clock jitter).
The AUTOSAR Wdg strategy removes the concept of the "Wdg window" from the upper layer and encapsulates it in a hardware timer: as long as the WdgM is alive and Wdg_SetTriggerCondition is invoked within the Wdg timeout period, the Wdg driver refreshes the HW Wdg in the hardware-timer ISR, so at the WdgM level it looks the same as the previous toggle Wdg.
[Question]
If we use the AUTOSAR standard to develop functional-safety software, how should the above Wdg requirement be handled?
If it is obeyed, ISO 26262 is not satisfied.
If it is ignored, the AUTOSAR standard is not satisfied.
Can anyone give me some suggestions?
or
Is there any way to submit this to AUTOSAR?

I am not entirely sure if I understand your question completely, but I think you're misunderstanding the purpose of the windowed watchdog.
WDG, configured via WDGM, ensures simple is-alive monitoring of your ECU. With some of the configuration options present (such as watchdog checkpoints), it can achieve simple program flow monitoring. The windowing of the watchdog is there just to make sure that you not only kick the watchdog but also observe some timing requirements. To take the simplest case, if you kick the watchdog only from one task and that is supposed to run every 5 ms, windowing can guarantee that the system will detect a failure if the task runs every 1 ms, or every 15 ms.
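To make the windowing concrete, here is a minimal sketch (with made-up numbers matching the 5 ms example above) of the check a windowed watchdog effectively performs: a kick is accepted only if it arrives inside the configured open/close window, so a task running far too fast or far too slow is detected either way. The names and bounds are illustrative, not any vendor's implementation.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical window for a nominal 5 ms kick: accepted only if it arrives
 * between 4 ms and 6 ms after the previous one. A kick every 1 ms (too fast)
 * or every 15 ms (too slow) falls outside the window and counts as a failure. */
#define WINDOW_OPEN_MS   4u
#define WINDOW_CLOSE_MS  6u

static uint32_t last_kick_ms;

bool watchdog_kick(uint32_t now_ms)
{
    uint32_t delta = now_ms - last_kick_ms;
    bool inside_window = (delta >= WINDOW_OPEN_MS) && (delta <= WINDOW_CLOSE_MS);

    last_kick_ms = now_ms;
    return inside_window;   /* false -> treat as a monitoring violation / reset */
}
```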
Autosar by itself is not necessarily sufficient to make software that is safe in the ISO 26262 meaning. You need to know what ASIL you're targeting, and then to design the system to achieve that level. Typically, you'd rely not just on ISO 26262 but also on a safety manual provided by the manufacturer of your hardware. That is likely to specify additional requirements you have to implement, completely independent of Autosar.

First of all, make sure that your WdgM is designed and developed for your required ASIL level (your BSW vendor will provide you with this information).
If your system is ASIL-B, then your WdgM has to fulfill ASIL-B requirements.
The problem that you are mentioning (the WdgM triggers the Wdg cyclically, independent of the trigger timing from the SW-Cs) comes from the fact that the WdgM has to consider several SW-Cs and maybe also sequence monitoring, etc.
Mentioning the watchdog window should make it obvious that the WdgM cannot trigger the (external) watchdog every time any of the SW-Cs triggers the WdgM.
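As a rough illustration only (hypothetical names and structures, not the actual AUTOSAR WdgM API), the aggregation could look like this: each supervised entity has an expected alive-counter range per supervision cycle, and only the overall result decides whether the trigger condition is set for the Wdg driver.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified alive supervision: every supervised entity (SW-C) must report
 * between min_expected and max_expected alive indications per supervision
 * cycle. The WdgM never forwards individual SW-C indications to the (window)
 * watchdog; it only sets the trigger condition when all entities are OK. */
#define NUM_ENTITIES 3u

typedef struct {
    uint16_t alive_count;
    uint16_t min_expected;
    uint16_t max_expected;
} alive_supervision_t;

static alive_supervision_t entities[NUM_ENTITIES] = {
    { 0u, 1u, 1u }, { 0u, 4u, 6u }, { 0u, 1u, 2u },
};

void report_alive(uint8_t entity_id)          /* called by each SW-C */
{
    entities[entity_id].alive_count++;
}

bool supervision_cycle(void)                  /* called once per WdgM main cycle */
{
    bool all_ok = true;

    for (uint8_t i = 0u; i < NUM_ENTITIES; i++) {
        if (entities[i].alive_count < entities[i].min_expected ||
            entities[i].alive_count > entities[i].max_expected)
            all_ok = false;
        entities[i].alive_count = 0u;
    }
    return all_ok;   /* true -> e.g. call Wdg_SetTriggerCondition(timeout) */
}
```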

I can hardly follow your [Background] part, but if you look at BSW requirement SRS_Wdg_12019, it says
SRS_Wdg_12019: The watchdog driver shall provide a watchdog trigger routine.
and that is satisfied by SWS_Wdg_00166 amongst others. SWS_Wdg_00166 says
SWS_Wdg_00166: The routine servicing an internal watchdog shall be implemented as an interrupt routine driven by a hardware timer.
Further reading reveals:
As already stated by SWS_Wdg_00162 and SWS_Wdg_00166, the time base for triggering the watchdog shall be provided by means of hardware. This ensures minimum timing jitter.
These two requirements SWS_Wdg_00162 and SWS_Wdg_00166 also imply that servicing of the watchdog hardware is done directly from a timer ISR. This ensures minimum latencies.
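To make the quoted requirements concrete, here is a minimal sketch of that pattern. Wdg_SetTriggerCondition is the API name already mentioned in the question, but the body, TIMER_PERIOD_MS and Hw_ServiceWatchdog are simplified placeholders under my own assumptions, not a vendor implementation.

```c
#include <stdint.h>

#define TIMER_PERIOD_MS 1u      /* assumption: the trigger timer ticks every 1 ms */

static void Hw_ServiceWatchdog(void) { /* platform-specific register write */ }

static volatile uint32_t trigger_budget_ticks;

/* Called by the WdgM while supervision is healthy: grants a refresh budget. */
void Wdg_SetTriggerCondition(uint16_t timeout_ms)
{
    trigger_budget_ticks = timeout_ms / TIMER_PERIOD_MS;
}

/* Hardware-timer ISR: services the watchdog only while the budget lasts,
 * so the actual refresh has minimal jitter and latency regardless of task load. */
void Wdg_TimerIsr(void)
{
    if (trigger_budget_ticks > 0u) {
        trigger_budget_ticks--;
        Hw_ServiceWatchdog();
    }
    /* else: stop refreshing and let the HW watchdog expire and reset the ECU */
}
```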
ISO 26262 compliance cannot be achieved with AUTOSAR features like Wdg alone, but you will need a window watchdog for sure.
I think you should urgently seek AUTOSAR and ISO 26262 training.

Related

How do I test and / or benchmark traditional Linux Kernel vs Linux Kernel with RT Preempt patch?

I am working on a project to contrast and observe the performance gain with the PREEMPT-RT patch for Linux.
What kind of C programs should I run on the two different kernels to gain a good understanding of the benefits that the PREEMPT-RT patch offers?
I am looking for suggestions on the programs.
To compare/demonstrate the scheduling characteristics specifically, perhaps implement a system where:
An interrupt is generated via a digital input IN
The interrupt handler passes the input event via a semaphore to a high-priority user process.
The user process, on receipt of the semaphore, creates a (say) 10 ms pulse on a digital output OUT.
Then:
Drive IN with a series of pulses from a signal generator
Attach an oscilloscope to IN and OUT.
Trigger the scope on the active (interrupt generating) edge of IN
Measure the time and variance between the interrupt-edge on IN and the start of the pulse on OUT.
Trigger the scope on the rising edge of the pulse on OUT.
Measure the length and variance of the pulse width.
Most modern scopes have a "persistence" feature where the trace is not cleared between sweeps. That is useful for measuring the variance.
If you lack a scope or a signal/function generator, you could use a switch, plus software timestamps in the ISR and in the user process to log event times. But you would need to ensure in the user task that no preemption occurs between capturing the time and setting the OUT state (by using a critical section), and you will likely need to debounce the switch. That, in this case, would simply be a matter of not setting the semaphore if the last event timestamp was less than, say, 20 ms ago.
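A minimal sketch of the user-space side under stated assumptions: /dev/in_event is a hypothetical device whose read() unblocks when the driver's ISR posts the semaphore described above, and set_out() stands in for whatever GPIO write mechanism the board provides. The process would additionally be given a real-time priority (SCHED_FIFO) as discussed further down.

```c
#include <fcntl.h>
#include <time.h>
#include <unistd.h>

/* Placeholder for the platform-specific way of driving the OUT pin. */
static void set_out(int level) { (void)level; }

int main(void)
{
    /* Hypothetical event source: read() blocks until the IN edge arrives. */
    int fd = open("/dev/in_event", O_RDONLY);
    char buf;
    struct timespec pulse = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 }; /* 10 ms */

    if (fd < 0)
        return 1;

    for (;;) {
        if (read(fd, &buf, 1) != 1)     /* blocks until the ISR signals the event */
            break;
        set_out(1);                     /* IN-to-OUT delay = measured latency */
        clock_nanosleep(CLOCK_MONOTONIC, 0, &pulse, NULL);
        set_out(0);                     /* pulse width = scheduling precision */
    }
    close(fd);
    return 0;
}
```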
If PREEMPT-RT is doing its job, the tests should exhibit lower latency, greater precision and less variance than with the default scheduler regardless of the load of other (lower priority) processes running. If that still does not meet your requirements you may need a real RTOS.
If this characteristic is not what your application requires, then you may not need or benefit from PREEMPT-RT, and inappropriate allocation of process priorities or poor task design may even cause your application to fail to meet its requirements. To make PREEMPT-RT work you have to know what you are doing; it does not magically make your system "real-time"; rather, it facilitates the implementation of real-time systems.

How to choose priority for threads?

Considering one core, when multiple requests arrive at a server at the same timestamp and all have the same priority, which request will be allotted a thread first?
Example: the CPU has a single core and 2 threads. Now 4 people have made requests (processes) A, B, C, D to a server, and the server needs to assign threads from the message queue in order to process those requests. Which 2 processes would be given those 2 threads first?
Assume they all arrived at the same timestamp and have equal priority.
TUSHAR, there is a bit of a language gap occurring here. Considering you chose "kernel" and didn't seem to think it was something to do with algebra, I am going to translate your question:
In a single-CPU system, when multiple interrupts are asserted simultaneously, and all have the same priority, which handler would be serviced first?
The first bit of info is that most interrupt controllers are little more than a priority encoder with some extra glue. As such, they have no notion of same priority, but that is less important than you might think.
Real-time operating systems, in particular, seek to disassociate their implementation from the hardware, and may even dynamically adjust interrupt priorities to suit the current workload. The key here is that the OS spends a minimal time at the mercy of the interrupt controller, and chooses what to do based upon its state. As the system designer, you can choose what happens.
Time Sharing Operating Systems also have some control over this; but typically less as they strive for maximum throughput rather than predictable response. As such, they might do anything from first-in-first-served, random-served, or even random-starved.
So the answer to your question depends upon your environment. For the most part, if you have a very simple environment (eg. an executive like vxWorks or freeRTOS), expect it to follow the dictates of the interrupt controller. If you have a more sophisticated device OS (eg. INTEGRITY or QNX) it is up to your configuration. If you have Linux/winDOS, there are likely 320 control knobs that all result in burning the toast.

What makes a kernel/OS real-time?

I was reading this article, but my question is at a generic level; I was thinking along the following lines:
Can a kernel be called real time just because it has a real time scheduler? Or in other words, say I have a Linux kernel, and if I change the default scheduler from O(1) or CFS to a real-time scheduler, will it become an RTOS?
Does it require any support from the hardware? Generally I have seen embedded devices running an RTOS (e.g. VxWorks, QNX); do these have any special provisions/hardware to support them? I know an RTOS process's running time is deterministic, but then one can use longjmp/setjmp to get the output in a determined time.
I'd really appreciate some input/insight on it, if I am wrong about something, please correct me.
After doing some research, talking to people (Jamie Hanrahan, Juha Aaltonen in the LinkedIn group Device Driver Experts) and of course the input from Jim Garrison, this is what I can conclude:
In Jamie Hanrahan's words:
What makes a kernel real time?
The sine qua non of a real time OS -
The ability to guarantee a maximum latency between an external interrupt and the start of the interrupt handler.
Note that the maximum latency need not be particularly short (e.g. microseconds), you could have a real time OS that guaranteed an absolute maximum latency of 137 milliseconds.
A real time scheduler is one that offers completely predictable (to the developer) behavior of thread scheduling - "which thread runs next".
This is generally separate from the issue of a guaranteed maximum latency to responding to an interrupt (since interrupt handlers are not necessarily scheduled like ordinary threads) but it is often necessary to implement a real-time application. Schedulers in real-time OSs generally implement a large number of priority levels. And they almost always implement priority inheritance, to avoid priority inversion situations.
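For reference, a minimal POSIX sketch of how priority inheritance is typically requested (assuming the platform supports PTHREAD_PRIO_INHERIT): a mutex created with this protocol temporarily boosts a low-priority holder to the priority of its highest-priority waiter, which is what prevents the classic priority-inversion scenario.

```c
#include <pthread.h>

static pthread_mutex_t shared_lock;

/* Create a mutex that uses the priority-inheritance protocol. */
int init_shared_lock(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    return pthread_mutex_init(&shared_lock, &attr);
}
```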
So if it is good to have a guaranteed latency for an interrupt and predictability of thread scheduling, why not make every OS real time?
Because an OS suited for general purpose use (servers and/or desktops) needs to have characteristics that are generally at odds with real-time latency guarantees.
For example, a real-time scheduler should have completely predictable behavior. That means, among other things, that whatever priorities have been assigned to the various tasks by the developer should be left alone by the OS. This might mean that some low-priority tasks end up being starved for long periods of time. But the RT OS has to shrug and say "that's what the dev wanted." Note that to get the correct behavior, the RT system developer has to worry a lot about things like task priorities and CPU affinities.
A general-purpose OS is just the opposite. You want to be able to just throw apps and services on it, almost always things written by many different vendors (instead of being one tightly integrated system as in most R-T systems), and get good performance. Perhaps not the absolute best possible performance, but good.
Note that "good performance" is not just measured in interrupt latency. In particular, you want CPU and other resource allocations that are often described as "fair", without the user or admin or even the app developers having to worry much if at all about things like thread priorities and CPU affinities and NUMA nodes. One job might be more important than another, but in a general-purpose OS, that doesn't mean that the second job should get no resources at all.
So the general purpose OS will usually implement time-slicing among threads of equal priority, and it may adjust the priorities of threads according to their past behavior (e.g. a CPU hog might have its priority reduced; an I/O bound thread might have its priority increased, so it can keep the I/O devices working; a CPU-starved thread might have its priority boosted so it can get a little bit of CPU time now and then).
Can a kernel be called real time just because it has a real time scheduler?
No, an RT scheduler is a necessary component of an RT OS, but you also need predictable behavior in other parts of the OS.
Does it require any support from the hardware?
In general, the simpler the hardware the more predictable its behavior is. So PCI-E is less predictable than PCI, and PCI is less predictable than ISA, etc. There are specific I/O buses that were designed for (among other things) easy predictability of e.g. interrupt latency, but a lot of R-T requirements can be met these days with commodity hardware.
The defining property of real-time is that processes have guaranteed maximum response times. This alone is often not sufficient for the application, and it is even less important than determinism. This is especially hard to achieve with modern feature-rich OSs. Consider:
If I want to command some hardware or a machine at precise points in time, I need to be able to generate command signals at those specific moments, often with far sub-millisecond accuracy. Generally, if you compile, say, C code that runs a loop that waits for "half a millisecond" and does something, the wait time is not exactly half a millisecond; it is a little bit more, since the way common OSs handle this is to put the process aside at least until the requested time has passed, after which the scheduler might (at some point) pick it up again.
What is seriously problematic is not that the time is not exactly half a millisecond, but that it cannot be known in advance how much more it is. This inaccuracy is neither constant nor deterministic.
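A small sketch that makes this visible, assuming a POSIX system: request wake-ups exactly 500 microseconds apart with clock_nanosleep() on an absolute deadline and print how late each wake-up actually is. On a non-real-time configuration the overshoot varies from run to run; with a real-time scheduler and priority it should be much tighter and more predictable.

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec target, now;

    clock_gettime(CLOCK_MONOTONIC, &target);
    for (int i = 0; i < 1000; i++) {
        /* Next deadline: 0.5 ms after the previous one (absolute time). */
        target.tv_nsec += 500000;
        if (target.tv_nsec >= 1000000000L) {
            target.tv_nsec -= 1000000000L;
            target.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &target, NULL);

        /* Measure how far past the deadline we actually woke up. */
        clock_gettime(CLOCK_MONOTONIC, &now);
        long late_ns = (now.tv_sec - target.tv_sec) * 1000000000L
                     + (now.tv_nsec - target.tv_nsec);
        printf("wakeup %4d was %ld ns late\n", i, late_ns);
    }
    return 0;
}
```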
This has surprising consequences when doing physical automation. For example it is impossible to command a stepper motor accurately with any typical OS without using dedicated hardware through kernel interfaces and telling them how long time steps you really want. Because of this, a single AVR module can command several motors accurately, but a Raspberry Pi (that absolutely stomps the AVR in terms of clockspeed) cannot manage more than 2 with any typical OS.

In linux, how to make sure a sequence of code is executed without any interruption

I have a routine that toggles a GPIO pin high/low, with delays between the highs and lows (using udelay), and then samples the GPIO state for some period. I need to make sure this part of the code is executed without being preempted by the scheduler or by any possible interrupts. I am running the code on a dual-core ARM system, so it is SMP. Is spin_lock_irqsave() safe enough for this purpose? I get the feeling my code is still somehow being interrupted occasionally, but I have no proof yet.
Thanks a lot.
If you want to disable preemption, use preempt_disable() and preempt_enable().
If you want to disable interrupts, use local_irq_disable() and local_irq_enable()
spin_lock_irqsave will normally do both of these, though some "real-time" enhancements sometimes allow spinlocks to schedule, so it is always best to say what you mean.
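A kernel-side sketch of what that looks like, with gpio_do_sequence() standing in for the toggle/udelay/sample routine from the question: on a non-RT kernel, spin_lock_irqsave() disables local interrupts (which also implies no preemption on that CPU) and the lock keeps the other core out of the same section.

```c
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(gpio_seq_lock);

/* Placeholder for the toggle/udelay/sample sequence described above. */
static void gpio_do_sequence(void) { /* ... */ }

static void run_gpio_sequence_atomically(void)
{
    unsigned long flags;

    spin_lock_irqsave(&gpio_seq_lock, flags);   /* local IRQs off + lock held */
    gpio_do_sequence();
    spin_unlock_irqrestore(&gpio_seq_lock, flags);
}
```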

How "Real-Time" is Linux 2.6?

I am looking at moving my product from an RTOS to embedded Linux. I don't have many real-time requirements, and the few RT requirements I have are on the order of 10s of milliseconds.
Can someone point me to a reference that will tell me how Real-Time the current version of Linux is?
Are there any other gotchas in moving from a commercial RTOS to Linux?
You can get most of your answers from the Real Time Linux wiki and FAQ
What are real-time capabilities of the stock 2.6 linux kernel?
Traditionally, the Linux kernel will allow one process to preempt another only under certain circumstances:
When the CPU is running user-mode code
When kernel code returns from a system call or an interrupt back to user space
When kernel code blocks on a mutex, or explicitly yields control to another process
If kernel code is executing when some event takes place that requires a high-priority thread to start executing, the high-priority thread cannot preempt the running kernel code until the kernel code explicitly yields control. In the worst case, the latency could potentially be hundreds of milliseconds or more.
The Linux 2.6 configuration option CONFIG_PREEMPT_VOLUNTARY introduces checks at the most common causes of long latencies, so that the kernel can voluntarily yield control to a higher priority task waiting to execute. This can be helpful, but while it reduces the occurrences of long latencies (hundreds of milliseconds to potentially seconds or more), it does not eliminate them. However, unlike CONFIG_PREEMPT (discussed below), CONFIG_PREEMPT_VOLUNTARY has a much lower impact on the overall throughput of the system. (As always, there is a classical tradeoff between throughput, the overall efficiency of the system, and latency. With the faster CPUs of modern-day systems, it often makes sense to trade off throughput for lower latencies, but server-class systems that do not need minimum latency guarantees may very well choose to use either CONFIG_PREEMPT_VOLUNTARY or to stick with the traditional non-preemptible kernel design.)
The 2.6 Linux kernel has an additional configuration option, CONFIG_PREEMPT, which causes all kernel code outside of spinlock-protected regions and interrupt handlers to be eligible for non-voluntary preemption by higher priority kernel threads. With this option, worst case latency drops to (around) single digit milliseconds, although some device drivers can have interrupt handlers that will introduce latency much worse than that. If a real-time Linux application requires latencies smaller than single-digit milliseconds, use of the CONFIG_PREEMPT_RT patch is highly recommended.
They also have a list of "gotchas", as you called them, in the FAQ:
What are important things to keep in mind while writing realtime applications?
Taking care of the following during the initial startup phase:
Call mlockall() as soon as possible from main().
Create all threads at startup time of the application, and touch each page of the entire stack of each thread. Never start threads dynamically during RT show time; this will ruin RT behavior.
Never use system calls that are known to generate page faults, such as fopen(). (Opening of files does the mmap() system call, which generates a page fault.)
If you use 'compile time global variables' and/or 'compile time global arrays', then use mlockall() to prevent page faults when accessing them.
More information: HOWTO: Build an RT-application
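A minimal sketch of those startup-phase rules (lock memory early, create threads up front, pre-fault each thread's stack); the stack size and names are illustrative only, not taken from the HOWTO.

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define PREFAULT_STACK_SIZE (64 * 1024)   /* assumption: enough for this thread */

/* Touch the stack so its pages are resident before real-time work starts. */
static void prefault_stack(void)
{
    unsigned char dummy[PREFAULT_STACK_SIZE];
    memset(dummy, 0, sizeof(dummy));
}

static void *rt_thread(void *arg)
{
    (void)arg;
    prefault_stack();
    /* ... real-time loop, no file opens or dynamic allocation here ... */
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* Lock current and future pages so they can never be paged out. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        return EXIT_FAILURE;

    /* Create all threads up front, never during "RT show time". */
    if (pthread_create(&tid, NULL, rt_thread, NULL) != 0)
        return EXIT_FAILURE;

    pthread_join(tid, NULL);
    return EXIT_SUCCESS;
}
```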
They also have a large publications page you might want to check out.
Have you had a look at Xenomai? It will let you run "hard real time" processes above Linux, while still allowing you to access the regular Linux APIs for all the non-real-time needs.
There are two fundamentally different approaches to achieve real-time capabilities with Linux.
Patch the existing kernel with things like the rt-preempt patches. This will eventually lead to a fully preemptive kernel
Dual kernel approach (like xenomai, RTLinux, RTAI,...)
There are lots of gotchas in moving from an RTOS to Linux.
Maybe you don't really need real-time?
I'm talking about real-time Linux in my training sessions:
https://rlbl.me/elisa
https://rlbl.me/elisa-en-pdf
https://rlbl.me/intely
https://rlbl.me/intely-en-pdf
https://rlbl.me/entirety-en-all-pdf
The answer is probably "good enough".
If you're running an embedded system, you probably have control of all or most of the software on the box.
Stock Linux 2.6 has several features suitable for low-latency tasks - chiefly these are:
Scheduling policies
Memory locking
Assuming you're using a single-core machine, if you have just one task which has set its scheduling policy to SCHED_FIFO or SCHED_RR (it doesn't matter which if you have just one task), AND locked all its memory in with mlockall(), then it WILL get scheduled as soon as it is ready to run.
Then the only thing you'd have to worry about would be some non-preemptible part of the kernel taking longer than your acceptable latency to complete, which is unlikely to happen in an embedded system unless something bad happens, such as extreme memory pressure, or your drivers are dodgy.
I guess "try it and see" is a good answer, but that's probably rather complicated in your case (and might involve writing device drivers etc).
Look at the doc for sched_setscheduler for some good info.
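A minimal sketch of the setup described above, assuming the process is allowed to raise its own scheduling class (root or CAP_SYS_NICE) and that priority 80 is just an example value:

```c
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 };   /* example RT priority */

    /* Lock memory first so page faults cannot add latency later. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    /* Switch the calling process to the SCHED_FIFO real-time policy. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    /* ... low-latency work runs here, preempting all non-RT tasks ... */
    return 0;
}
```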
