Watchdog for a single process [Linux] [duplicate]

This question already has answers here:
How to use Linux software watchdog?
(9 answers)
Closed 6 years ago.
I need to make sure that a chosen process is not hung. I thought I'd program this process to write to some /proc file that would be periodically monitored by some other process/module. If there is no change in the file for some time, the application would be considered hung, just like a watchdog on a microcontroller.
However, I don't know whether this is the best approach. Since I'm not really deep into Linux internals, I thought it better to ask which way is easiest before starting to learn about writing modules, the /proc filesystem, etc. Ha!
I've found some information on Monit (https://mmonit.com/monit/). Maybe this would be better?
What would you recommend to be the best way to implement the "watchdog" functionality here?
Thanks a lot!
Paweł

An OS-independent solution is to create a watchdog thread that runs periodically and supports one or more software watchdogs, each implemented simply as a status bit or byte. The process in question is responsible for patting the watchdog (clearing the status). The watchdog thread is a loop that checks the status: if it has been cleared, the thread sets it again; if it has not been cleared, the thread raises an alarm. You can adjust the timing so that the status is not checked on every pass through the loop.
This solution is quite flexible. You can also tie it into the hardware watchdog, patting the hardware watchdog only if all software watchdogs have been patted.
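A minimal sketch of this scheme in C, with a single software watchdog implemented as one status flag (the names and the one-second check period are illustrative, not from any particular project):

// Minimal software-watchdog sketch of the scheme described above:
// one status flag that the monitored code must "pat" (clear) and a
// watchdog thread that alarms if the flag was not cleared in time.
// All names and periods here are illustrative assumptions.
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static atomic_int wd_status; /* 0 = patted, 1 = waiting for a pat */

/* Called periodically by the monitored process/thread. */
void watchdog_pat(void)
{
    atomic_store(&wd_status, 0);
}

static void *watchdog_thread(void *arg)
{
    (void)arg;
    for (;;) {
        sleep(1); /* check period: must exceed the expected pat rate */
        /* set the flag; if it was already set, nobody patted us */
        if (atomic_exchange(&wd_status, 1) == 1) {
            fprintf(stderr, "watchdog: process appears hung!\n");
            abort(); /* or notify, restart, pat the hardware watchdog... */
        }
    }
    return NULL;
}

int main(void)
{
    pthread_t wd;
    pthread_create(&wd, NULL, watchdog_thread, NULL);

    for (;;) {              /* the monitored work loop */
        usleep(100 * 1000); /* placeholder for one unit of real work */
        watchdog_pat();     /* prove we are still alive */
    }
}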

Related

Can a Linux process/thread terminate without passing through do_exit()?

To verify the behavior of a third-party, binary-distributed piece of software I'd like to use, I'm implementing a kernel module whose objective is to keep track of each child this software produces and terminates.
The target binary is produced from Go, and it is heavily multithreaded.
The kernel module I wrote installs hooks on the kernel functions _do_fork() and do_exit() to keep track of each process/thread this binary produces and terminates.
The LKM works, more or less.
Under some conditions, however, I see a scenario I'm not able to explain.
It seems like a process/thread could terminate without passing through do_exit().
The evidence I collected with printk() calls shows the process creation but never the termination.
I'm aware that printk() can be slow, and I'm also aware that messages can be lost in such situations.
To prevent message loss due to the slow console (a 115200-baud serial tty in this setup), I switched to a quicker console and collected the messages using netconsole.
The described setup seems to confirm that a process can terminate without passing through the do_exit() function.
But because I wasn't sure my messages couldn't still be lost in the printk() infrastructure, I repeated the same test replacing printk() with trace_printk(), which should be a leaner alternative.
Still the same result: occasionally I see processes that do not pass through do_exit(), and when I verify whether the PID is still running, I have to face the fact that it is not.
Also note that I placed my hook at the very first instruction of do_exit(), to ensure the function's flow does not terminate inside a called function.
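(For reference, since the module's code isn't shown: one common way to install such a hook is with kprobes. The sketch below is an assumption about the mechanism, not the poster's actual module.)

// Minimal kprobe-based hook on do_exit(): logs the PID, TGID, and
// command name of every task entering do_exit(). A sketch of one way
// to implement the tracking described above.
#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/sched.h>

static int do_exit_pre(struct kprobe *p, struct pt_regs *regs)
{
    /* current is the task entering do_exit() */
    pr_info("do_exit: pid=%d tgid=%d comm=%s\n",
            current->pid, current->tgid, current->comm);
    return 0;
}

static struct kprobe kp = {
    .symbol_name = "do_exit",
    .pre_handler = do_exit_pre,
};

static int __init hook_init(void)
{
    return register_kprobe(&kp);
}

static void __exit hook_exit(void)
{
    unregister_kprobe(&kp);
}

module_init(hook_init);
module_exit(hook_exit);
MODULE_LICENSE("GPL");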
My question is then the following:
Can a Linux process terminate without its flow passing through the do_exit() function?
If so, can someone give me a hint as to what this scenario could be?
After a long debug session, I'm finally able to answer my own question.
That's not all; I'm also able to explain why I saw the strange behavior I described in my scenario.
Let's start from the beginning: while monitoring a heavily multithreaded application, I observed rare cases where a PID suddenly stopped existing without my ever observing its flow pass through the Linux kernel's do_exit() function.
Hence my original question:
Can a Linux process terminate without passing through the do_exit() function?
To the best of my current knowledge, which I would by now consider reasonably extensive, a Linux process cannot end its execution without passing through the do_exit() function.
But this answer contradicted my observations, and the problem that led me to this question was still there.
Someone here suggested that the strange behavior I watched was due to my observations being somehow wrong, implying that my method, and therefore my conclusions, were inaccurate.
My observations were correct, though: the process I watched terminated without ever passing through do_exit().
To explain this phenomenon, I want to put another question on the table, one that I think internet searchers may find useful:
Can two processes share the same PID?
If you'd asked me this a month ago, I'd surely have answered: "definitely not, two processes cannot share the same PID."
Linux is more complex, though.
There's a situation in which, in a Linux system, two different processes can share the same PID!
https://elixir.bootlin.com/linux/v4.19.20/source/fs/exec.c#L1141
Surprisingly, this behavior does not harm anyone; when this happens, one of these two processes is a zombie.
Update (to correct an error):
The circumstances behind this duplicate PID are more intricate than I described above. If a threaded process forks before invoking an execve() (the fork also copies the threads), the process must flush the previous exec context. When execve() is then used to execute a new text, the kernel must first call the flush_old_exec() function, which in turn calls the de_thread() function for each thread in the process other than the task leader. As a result, all of the process's threads except the task leader are eliminated. Each such thread's PID is changed to that of the leader, and if the thread is not terminated immediately, for example because it needs to wait for an operation to complete, it keeps using that PID.
End of update.
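A small userspace experiment can make this PID takeover visible (a hedged sketch: /bin/echo is just an arbitrary target binary, and the TID is obtained via syscall for portability):

// Demo of the mechanism above: a worker thread calls execve(); after
// de_thread() the new program runs with the original leader's PID,
// not the worker's TID. All names here are illustrative.
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

static void *worker(void *arg)
{
    (void)arg;
    printf("worker: pid=%d tid=%ld calling execve()\n",
           getpid(), (long)syscall(SYS_gettid));
    /* After this, the process keeps the leader's PID (= getpid()),
       and the worker's own TID ceases to exist. */
    execl("/bin/echo", "echo", "exec'ed: check my PID with ps", (char *)NULL);
    return NULL;
}

int main(void)
{
    pthread_t t;
    printf("leader: pid=%d tid=%ld\n",
           getpid(), (long)syscall(SYS_gettid));
    pthread_create(&t, NULL, worker, NULL);
    pause(); /* never returns: the exec replaces the whole process */
    return 0;
}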
That is what I was watching: the PID I was monitoring did not pass through do_exit() because, by the time the corresponding thread terminated, it no longer had the PID it started with; it had its leader's.
For people who know the Linux kernel's mechanics very well, this is nothing to be surprised about; the behavior is intentional and hasn't changed since 2.6.17.
The current 5.10.3 still works this way.
Hoping this is useful to internet searchers, I'd also like to add that it answers the following questions:
Question: Can a Linux process/thread terminate without passing through do_exit()? Answer: No. do_exit() is the only way a process can end its execution, whether the termination is intentional or not.
Question: Can two processes share the same PID? Answer: Normally, no. There are rare cases in which two schedulable entities carry the same PID.
Question: Does the Linux kernel have scenarios where a process changes its PID? Answer: Yes, there is at least one scenario in which a process changes its PID.
Can a Linux process terminate without its flow passing through the do_exit() function?
Probably not, but you should study the source code of the Linux kernel to be sure. Ask on KernelNewbies. Kernel threads and udev- or systemd-related things (or perhaps modprobe or the older hotplug) are probable exceptions. When your /sbin/init of PID 1 terminates (which should not happen), strange things would happen.
The LKM works, more or less.
What does that mean? How can a kernel module half-work?
And in real life, it does sometimes happen that your Linux kernel panics or crashes (and that could happen with your LKM, if it has not been peer-reviewed by the Linux kernel community). In such a case, there is no longer any notion of processes, since processes are an abstraction provided by a living Linux kernel.
See also dmesg(1), strace(1), proc(5), syscalls(2), ptrace(2), clone(2), fork(2), execve(2), waitpid(2), elf(5), credentials(7), pthreads(7)
Look also inside the source code of your libc, e.g. GNU libc or musl-libc
Of course, see Linux From Scratch and Advanced Linux Programming
And verifying if the PID is currently running,
This can be done in userland with /proc/, or by using kill(2) with signal 0 (and maybe also pidfd_send_signal(2)...).
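For example, a minimal sketch of the kill(2)-with-signal-0 check (the program name and output strings are illustrative):

// Check from userland whether a PID currently exists, using kill(2)
// with signal 0: no signal is delivered, but existence and permission
// are still checked.
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 2;
    }
    pid_t pid = (pid_t)atol(argv[1]);

    if (kill(pid, 0) == 0)
        printf("pid %d exists\n", pid);
    else if (errno == ESRCH)
        printf("pid %d does not exist\n", pid);
    else if (errno == EPERM)
        printf("pid %d exists, but we may not signal it\n", pid);
    return 0;
}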
PS. I still don't understand why you need to write a kernel module or change the kernel code. My intuition would be to avoid doing that when possible.

Profiling methods for highly time sensitive applications

I am working in an embedded Linux environment, debugging a highly timing-sensitive issue related to the pairing/binding of Zigbee devices.
Our architecture is such that data is read from the Zigbee front-end module via the SPI interface and then passed from kernel space to user space for processing. The processed data and response are then passed back to kernel space and clocked out over the SPI interface again.
The Zigbee 802.15.4 timing requirements specify that we must respond within 19.5 ms, and we frequently have situations where we respond just outside this window, which results in a failure and packet loss on the network.
The Linux kernel is not running with preemption enabled, and it may not be possible to enable preemption either.
My suspicion is that, since the kernel is not preemptible, there is another task/process using the ioctl() interface that holds off the Zigbee application just long enough for the 19.5 ms window to be exceeded.
I have tried the following tools:
oprofile - not much help here, since it profiles the entire system, and the application is not actually very busy during this time because it moves such small amounts of data
strace - too much overhead. I don't have much experience using it, though, so maybe the output can be refined. The overhead affects performance so much that the application does not function at all
Are there any other lightweight methods of profiling a system like this?
Is there any way to catch when an ioctl call is pending on another task/thread? (assuming this is the root cause of the issue)
Good question.
Here's an idea. Don't think of it as profiling.
Think of catching it in the act.
I would investigate creating a watchdog timer to go off after the 19.5 ms interval.
Whenever you are successful, reset the timer.
That way, it will only go off when there's a failure.
At that point, I would try to take a stack sample of the process, or possibly another process that might be blocking it.
That's an adaptation of this technique.
It will take some work, but I'd be surprised if there's any tool that will tell you exactly what's going on, short of an in-circuit emulator.
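A sketch of what that watchdog timer might look like in user space, assuming POSIX setitimer(2) and the glibc backtrace functions are available; the 19.5 ms figure comes from the question, everything else is illustrative:

// "Catch it in the act": arm a one-shot timer for the 19.5 ms
// deadline, re-arm it on every success, and dump a stack trace if it
// ever fires. backtrace() is not strictly async-signal-safe, but it
// is usually acceptable as a debug aid.
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static void deadline_missed(int sig)
{
    (void)sig;
    void *frames[32];
    int n = backtrace(frames, 32);
    backtrace_symbols_fd(frames, n, STDERR_FILENO);
    abort(); /* leave a core dump so the system state can be inspected */
}

static void arm_deadline(void)
{
    struct itimerval it;
    memset(&it, 0, sizeof it);
    it.it_value.tv_usec = 19500; /* 19.5 ms, one-shot */
    setitimer(ITIMER_REAL, &it, NULL);
}

int main(void)
{
    signal(SIGALRM, deadline_missed);
    for (;;) {
        arm_deadline();
        usleep(1000); /* placeholder for one Zigbee transaction */
        /* on success, the next arm_deadline() re-arms the timer */
    }
}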
LTTng is the tool you are looking for. Like OProfile, it traces the entire system, but you will be able to see exactly what is going on with each process and kernel thread, in a timeline fashion. You will be able to view the interaction of the threads and the scheduler around the point of interest, that is, when you miss your Zigbee deadline. You may have to get clever and use some method of triggering (or, more likely, stopping) the LTTng trace once you've detected the missed packet, or you might get lucky and catch it right away just using the command-line tools to start and stop tracing.
You may have to do some work to get there; for example, you'll have to invest some time and energy in 1) enabling your kernel to run LTTng if it doesn't support it already, and 2) learning how to use it. It is a powerful tool, useful for a variety of profiling and analysis tasks. Most commercial embedded Linux vendors have complete end-to-end LTTng products and configurations, if you have that option. If not, you should be able to find plenty of useful help and examples online. LTTng has been around for a very long time! Happy hunting!

Why does the init process create different processes? Why can't it create different threads? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
As we all know, the init process is the first process to be executed, and it results in the creation of the further necessary processes. Now, the question is: what would be the primary concern that led the init process to create processes rather than threads?
Sounds like you are talking about how most Unix systems start up. Well, let's forget about init for a moment and ask, why would any process create another process instead of creating another thread?
The whole point of having an operating system at all is to allow different "programs" to co-exist and share the hardware without clobbering one another. Threads, by definition, exist in the same address space. There is no way to protect one thread's data structures from being clobbered by another thread in the same process except to write the code in such a way that the threads cooperate with one another. Threads within a process are always part of the same "program".
The Unix init process spawns other processes because many of the services in a Unix system are provided by different programs. And why is that? Partly it's for historical reasons. Nobody had ever heard of "threads" when Unix was new. But also, it's a convenient way to organize components that are not related to one another. The person who writes the ssh daemon does not have to worry about whether it plays nicely with the cron daemon or not. Since they are separate programs, the operating system automatically hides them/protects them from each other the same way it hides/protects user programs.
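A small experiment makes the isolation point concrete: a forked child's write to a global is invisible to the parent, while a thread's write is immediately visible (a sketch; the variable names are illustrative):

// A forked child gets its own copy of the address space, so its write
// to `shared` is never seen by the parent; a thread shares the
// address space, so its write is visible immediately.
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared = 0;

static void *thread_fn(void *arg)
{
    (void)arg;
    shared = 2; /* same address space: main() sees this */
    return NULL;
}

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {      /* child: writes its own copy of `shared` */
        shared = 1;
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork+child write: shared=%d\n", shared); /* prints 0 */

    pthread_t t;
    pthread_create(&t, NULL, thread_fn, NULL);
    pthread_join(t, NULL);
    printf("after thread write:     shared=%d\n", shared); /* prints 2 */
    return 0;
}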
The main problem is that threads in the same address space are not protected from other threads. If, however, such protection exists (as in Java), then using threads instead of processes makes sense.
I know of at least one operating system where all system activities were performed by threads in a single system process: the one for the Elbrus 1 and 2 computers (modern Elbrus machines run Linux). This was possible because of a tagged memory architecture inherited from the Burroughs machines. Probably the Burroughs machines worked that way too.

About the process control block in an OS

I recently reviewed OS concepts.
Regarding the process control block: is there just a single global one per OS, or is there one PCB for each process?
Also, does this PCB only exist in RAM?
[I assume my question targets Linux or Unix.]
Thanks,
Answering one question at a time:
Is there one PCB per process? Yes. Broadly speaking, process control blocks are supposed to contain a process's information (scheduling, memory, time accounting, and more). This information is used in various task-related activities.
The PCB in Linux is implemented as a structure known as task_struct (please check the code at http://lxr.linux.no/linux+v3.12.6/include/linux/sched.h#L1023).
You can read more about tasks and their internals at http://linuxgazette.net/133/saha.html
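To make the "one PCB per process" point concrete: in kernel code, the current macro points at the task_struct of whichever task is executing right now. A minimal module sketch (assuming a standard kernel build environment):

// `current` is the running task's own task_struct -- its PCB -- so
// loading and unloading this module from different shells logs
// different PIDs and command names.
#include <linux/module.h>
#include <linux/sched.h>

static int __init pcb_demo_init(void)
{
    pr_info("pcb demo: loaded by pid=%d comm=%s flags=%u\n",
            current->pid, current->comm, current->flags);
    return 0;
}

static void __exit pcb_demo_exit(void)
{
    pr_info("pcb demo: unloaded by pid=%d comm=%s\n",
            current->pid, current->comm);
}

module_init(pcb_demo_init);
module_exit(pcb_demo_exit);
MODULE_LICENSE("GPL");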
It's basically more complicated than "in memory or on disk". As far as I know, it is architecture-dependent. Please check the other answers here: Where is task_struct stored?
I think this answers your question directly.

Linux process stuck on wait for event even when event occurs

I am seeing very strange system behavior in a few places, which can be described in short: there is a process, either in user or kernel space, which waits for an event, and although the event occurs, the process does not wake up.
I will describe this below, but since the problem appears in many different places (at least 4), I am starting to look for a system-wide problem rather than a local one: something like a preemption flag (already checked, and not the problem) that would make the difference.
The system is Linux running on a Freescale i.MX6, which is brand new and still in the beta phase. The same code works well on many other Linux systems.
The system runs 2 separate processes. One shows video using GStreamer, playing from a file through a new image processor that has never been used before. If this process runs alone, the system can run overnight.
The other process works with a digital tuner connected over USB. It only gets the device version in a loop; again, when running alone it can run overnight.
If these 2 processes run on the system at the same time, one gets stuck within a few minutes. If we change the test parameters (like the timing of the periodic get-version), the other process gets stuck instead.
The processes always get stuck waiting for an event (either in wait_event_interruptible() in a kernel driver, or in user space in pthread_cond_wait()). The event itself occurs, and I have logs showing that, but the process does not wake up.
Trying to kill such a process turns it into a zombie. I managed to find one place with a very specific timing problem, where a condition check was misplaced and could cause this kind of hang if the process was switched out at just the right point. Fixing it solved one problem, and I then hit another with the same characteristics. In any case, the bug I found could not explain why this happens so often; it could explain a theoretical hang that occurs once in a very long while, but not this fast.
Anyway, something in the system causes this to show up very quickly, even if the underlying bug is real. Again, this code (except for the display driver, which is new) works on other systems, and even on this system when each process runs alone. The processes are unrelated and do not interact with one another; the only thing they have in common is the machine they run on.
It probably has something to do with system resources (memory use is 100M out of 1G, CPU usage is 5%), scheduler behavior, or something in the system configuration. Does anyone have ideas about what could cause this kind of problem?
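(As an aside for readers hitting the same class of bug: the "misplaced condition check" mentioned above usually comes down to not re-testing the predicate under the mutex. A minimal sketch of the lost-wakeup-proof pattern, with illustrative names:)

// The classic lost-wakeup-proof pattern for pthread_cond_wait():
// test the predicate under the mutex, in a loop. Checking it outside
// the lock, or only once, is exactly the kind of misplaced condition
// check described above.
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool event_arrived = false;

void *waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    /* a while loop, never an if: guards against spurious wakeups and
       against the event firing before we started waiting */
    while (!event_arrived)
        pthread_cond_wait(&cond, &lock);
    event_arrived = false; /* consume the event */
    pthread_mutex_unlock(&lock);
    return NULL;
}

void signal_event(void)
{
    pthread_mutex_lock(&lock);
    event_arrived = true; /* set the predicate under the same mutex */
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}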
If it's a brand new port of Linux, then it may be that you actually have a real kernel bug - or a hardware bug if it's new hardware.
However, you need really good evidence, so use strace, ftrace, and perhaps even some instrumentation of the relevant kernel code to show this to someone who can actually fix the problem. I'm guessing, from the way you are asking this question, that you are not a regular kernel hacker.
Sorry if this isn't really the answer you were looking for.
