Interrupt command line execution in Fortran without killing main program - linux

I have a Fortran program that runs a series of identical calculations on a number of different input data. After doing these calculations the code then always writes a GNUplot script that does some diagnostic plotting (nothing too difficult) and runs it using execute_command_line in Linux.
This usually works well, but after some time I think there must be a memory leak of some kind that accumulates, because the GNUplot plotting becomes slower and slower; at some point it virtually stalls.
My question is therefore: Is it possible to interrupt the call to execute_command_line using the keyboard without killing the main Fortran program? Needless to say, CTRL-C kills everything, which is not what I want: I want the main program to continue.
I have been playing with the optional flag wait=.true. but this does not help.
Also, I know that the memory leak has to be fixed (or whatever the cause is), but for now I would like to first see the diagnostic output.

The only solution I have been able to come up with is kind of a workaround:
Modify the shell script so that it
runs the Fortran program in the background: ./mpirun prog_name options &
gets the PID of this process: proc_PID=$!
waits for the process: wait $proc_PID
traps an interrupt signal: trap handler SIGINT
lets the handler send a SIGUSR1 signal: function handler() { kill -SIGUSR1 $proc_PID; }
Modify the Fortran code so that it catches the SIGUSR1 signal and does what you want with it, for example by having a look here.
By running the MPI process in the background you avoid killing mpirun with SIGINT, which cannot be trapped; instead you send a SIGUSR1, which is properly propagated to the MPI processes, where it can be handled directly. A sketch of such a wrapper script is shown below.
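A rough sketch of such a wrapper, assuming the program is launched exactly as in the list above (program name and options are placeholders), could look like this:
#!/bin/bash
# Run the MPI program in the background and remember its PID.
./mpirun prog_name options &
proc_PID=$!

# On CTRL-C, forward SIGUSR1 to the program instead of letting SIGINT kill it.
handler() { kill -SIGUSR1 "$proc_PID"; }
trap handler SIGINT

# wait returns early when the trapped SIGINT arrives, so loop until the program really exits.
while kill -0 "$proc_PID" 2>/dev/null; do
    wait "$proc_PID"
done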
As a side note, however, I realized that this will not solve my problem, as my problem was related to an external call to gnuplot using execute_command_line. Since I had a cumulative memory leak, at some point this call started taking forever because memory resources became scarce. So the only thing I could have done is manually kill the gnuplot process.
Better, of course, was fixing the memory leak, which I did.

Related

Can a Linux process/thread terminate without pass through do_exit()?

To verify the behavior of a third party binary distributed software I'd like to use, I'm implementing a kernel module whose objective is to keep track of each child this software produces and terminates.
The target binary is a Golang produced one, and it is heavily multi thread.
The kernel module I wrote installs hooks on the kernel functions _do_fork() and do_exit() to keep track of each process/thread this binary produces and terminates.
The LKM works, more or less.
During some conditions, however, I have a scenario I'm not able to explain.
It seems like a process/thread could terminate without passing through do_exit().
The evidence I collected by putting printk() shows the process creation but does not indicate the process termination.
I'm aware that printk() can be slow, and I'm also aware that messages can be lost in such situations.
Trying to prevent message loss due to slow console (for this particular application, serial tty 115200 is used), I tried to implement a quicker console, and messages have been collected using netconsole.
The described setup seems to confirm that a process can terminate without passing through the do_exit() function.
But because I wasn't sure my messages couldn't be lost in the printk() infrastructure, I decided to repeat the same test, replacing printk() with ftrace_printk(), which should be a leaner alternative to printk().
Still the same result: occasionally I see processes not passing through do_exit(), and when I verify whether the PID is currently running, I have to face the fact that it is not.
Also note that I put my hook in the do_exit() kernel function as the first instruction to ensure the function flow does not terminate inside a called function.
My question is then the following:
Can a Linux process terminate without its flow passing through the do_exit() function?
If so, can someone give me a hint of what this scenario can be?
After a long debug session, I'm finally able to answer my own question.
That's not all; I'm also able to explain why I saw the strange behavior I described in my scenario.
Let's start from the beginning: while monitoring a heavily multithreaded application, I observed rare cases where a PID suddenly stopped existing without its flow ever passing through the Linux kernel's do_exit() function.
Hence my original question:
Can a Linux process terminate without passing through the do_exit() function?
As far as my current knowledge goes, which I would by now consider reasonably extensive, a Linux process cannot end its execution without passing through the do_exit() function.
But this answer is in contrast with my observations, and the problem leading me to this question is still there.
Someone here suggested that the strange behavior I watched was because my observations were somehow wrong, implying that my method was inaccurate and so were my conclusions.
My observations were correct, and the process I watched didn't pass through the do_exit() but terminated.
To explain this phenomenon, I want to put on the table another question that I think internet searchers may find somewhat useful:
Can two processes share the same PID?
If you'd asked me this a month ago, I'd surely have answered: "definitely not, two processes cannot share the same PID."
Linux is more complex, though.
There's a situation in which, in a Linux system, two different processes can share the same PID!
https://elixir.bootlin.com/linux/v4.19.20/source/fs/exec.c#L1141
Surprisingly, this behavior does not harm anyone; when this happens, one of these two processes is a zombie.
updated to correct an error
The circumstances of this duplicate PID are more intricate than those described previously. The process must flush the previous exec context when a threaded process forks before invoking an execve (the fork also copies the threads). If the intention is to use the execve() function to execute a new text, the kernel must first call the flush_old_exec() function, which in turn calls the de_thread() function for each thread in the process other than the task leader. As a result, all of the process's threads except the task leader are eliminated. Each thread's PID is changed to that of the leader, and if it is not immediately terminated, for example because it needs to wait for an operation to complete, it keeps using that PID.
end of the update
That was what I was watching: the PID I was monitoring did not pass through do_exit() because, by the time the corresponding thread terminated, it no longer had the PID it had when it started; it had its leader's.
For people who know the Linux kernel's mechanics very well, this is nothing to be surprised about; this behavior is intended and hasn't changed since 2.6.17.
The current 5.10.3 still behaves this way.
Hoping this is useful to internet searchers, I'd also like to add that it answers the following:
Question: Can a Linux process/thread terminate without passing through do_exit()? Answer: NO, do_exit() is the only way a process has to end its execution, whether intentional or unintentional.
Question: Can two processes share the same PID? Answer: Normally they don't, but there are rare cases in which two schedulable entities have the same PID.
Question: Does the Linux kernel have scenarios where a process changes its PID? Answer: Yes, there is at least one scenario in which a process changes its PID.
Can a Linux process terminate without its flow passing through the do_exit() function?
Probably not, but you should study the source code of the Linux kernel to be sure. Ask on KernelNewbies. Kernel threads and udev- or systemd-related things (or perhaps modprobe or the older hotplug) are probable exceptions. If your /sbin/init of PID 1 terminated (which should not happen), strange things would follow.
The LKM works, more or less.
What does that mean? How can a kernel module half-work?
And in real life, it does sometimes happen that your Linux kernel panics or crashes (and it could happen with your LKM, if it has not been peer-reviewed by the Linux kernel community). In such a case, there is no longer any notion of processes, since they are an abstraction provided by a living Linux kernel.
See also dmesg(1), strace(1), proc(5), syscalls(2), ptrace(2), clone(2), fork(2), execve(2), waitpid(2), elf(5), credentials(7), pthreads(7)
Look also inside the source code of your libc, e.g. GNU libc or musl-libc
Of course, see Linux From Scratch and Advanced Linux Programming
And verifying if the PID is currently running,
This can be done in user land with /proc/, or using kill(2) with a 0 signal (and maybe also pidfd_send_signal(2)...); see the sketch below.
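For instance, from a shell (the PID below is a placeholder), either of these checks whether the process still exists; kill -0 performs the existence/permission check without actually delivering a signal:
pid=12345   # hypothetical PID to check
if kill -0 "$pid" 2>/dev/null; then
    echo "process $pid exists"
fi
[ -d "/proc/$pid" ] && echo "process $pid has an entry under /proc"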
PS. I still don't understand why you need to write a kernel module or change the kernel code. My intuition would be to avoid doing that when possible.

Elegant way of killing a Linux program

I am running a Linux program that uses a lot of memory. If I terminate it manually using Ctrl-C, it will do the necessary memory clean-up. Now I'm trying to terminate the program using a script. What is an elegant way to do so? I'm hoping to do something similar to Ctrl-C so it can do the memory clean-up. Will using the "kill -9" command do this?
What do you mean by memory clean-up?
Keep in mind that memory will be freed anyway, regardless of the killing signal.
The default kill signal, SIGTERM (15), gives the application a chance to do some additional work, but that has to be implemented with a signal handler.
Signal handling in c++
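The handler itself lives inside the application (in C/C++ it would typically be installed with sigaction()); purely as an illustration of the idea, here is a minimal shell analogue that traps SIGTERM and runs a placeholder cleanup:
#!/bin/bash
# Placeholder cleanup; a real program would free resources, close files, etc.
cleanup() { echo "cleaning up before exit"; }
trap 'cleanup; exit 0' TERM

# stand-in for the main work
while true; do sleep 1; done
Sending kill <pid> (SIGTERM) then triggers the handler, whereas kill -9 <pid> (SIGKILL) cannot be caught and skips the cleanup; the kernel reclaims the memory either way.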

Can the operating system restart a process that is stuck in infinite loop?

The other day, when doing testing on a Linux server, we observed that under some conditions one process could die and then be started again. After checking the code, we found it was caused by an infinite loop.
This aroused my curiosity: how did the process die and then get started again? Is it the OS that detects the abnormal process and gets it restarted? If so, how does that work?
Let's assume you won't be able to fix your code... And let's ignore all crazy options like attaching gdb via script or so.
You can either check CPU usage (most accidental infinite loops that I've written used 100% of the CPU for hours :) ), or (the more likely option) use strace to check what the software is doing right now and implement your own signature tracing (if the same 20 calls repeat 20 times, assume an infinite loop, and so on).
For example:
#!/bin/bash
strace -p`cat your_app.pid` | ./your_signature_evaluator
# Or
strace -p12345 | ./your_signature_evaluator
As for automatic system recognition... It seems normal that a program crashes after calling things in a loop uncontrollably (for example malloc() until you deplete memory, opening files...), but I've never seen the system (kernel) restarting the app (correct me in a comment if I'm wrong). I think you either:
have conditions (signal handling, whatever) inside the program that help it recover
are running a watchdog (check every 20 seconds that <pid> is running and, if not, start a new instance; a sketch follows after this list)
are running a distribution that provides service/program configuration with restart-if-stopped
But I really doubt that Linux would be so nice to your application on its own.
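A minimal sketch of such a watchdog, assuming the program is started as ./prog (a placeholder name), could be:
#!/bin/bash
# Naive watchdog: restart ./prog whenever its PID disappears.
./prog & pid=$!
while true; do
    sleep 20
    if ! kill -0 "$pid" 2>/dev/null; then
        ./prog & pid=$!
    fi
done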
If it could, the person who wrote that kernel would have solved the halting problem.
PS: Vytor - Web servers are in an infinite loop and do not use 100% CPU.

Can I coredump a process that is blocking on disk activity (preferably without killing it)?

I want to dump the core of a running process that is, according to /proc/<pid>/status, currently blocking on disk activity. Actually, it is busy doing work on the GPU (should be 4 hours of work, but it has taken significantly longer now). I would like to know how much of the process's work has been done, so it'd be good to be able to dump the process's memory. However, as far as I know, "blocking on disk activity" means that it's not possible to interrupt the process in any way, and coredumping a process e.g. using gdb requires interrupting and temporarily stopping the process in order to attach via ptrace, right?
I know that I could just read /proc/<pid>/{maps,mem} as root to get the (maybe inconsistent) memory state, but I don't know any way to get hold of the process's userspace CPU register values... they stay the same while the process is inside the kernel, right?
You can probably run gcore on your program. It's basically a wrapper around GDB that attaches, uses the gcore command, and detaches again.
This might interrupt your IO (as if it received a signal, which it will), but your program can likely restart it if written correctly (and this may occur in any case, due to default handling).
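A typical invocation (PID and output prefix below are placeholders) looks like this:
# gcore ships with gdb; -o sets the output file name prefix
gcore -o /tmp/myprog 12345
# this writes the core to /tmp/myprog.12345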

Program stalls during long runs

Fixed:
Well, this seems a bit silly. It turns out top was not displaying correctly and the programs actually continue to run. Perhaps the CPU time became too large to display? Either way, the program seems to be working fine and this whole question was moot.
Thanks (and sorry for the silly question).
Original Q:
I am running a simulation on a computer running Ubuntu server 10.04.3. Short runs (<24 hours) run fine, but long runs eventually stall. By stall, I mean that the program no longer gets any CPU time, but it still holds all information in memory. In order to run these simulations, I SSH and nohup the program and pipe any output to a file.
Miscellaneous information:
The system is definitely not running out of RAM. The program does not need to read or write to the hard drive until completion; the computation is done completely in memory. The program is not killed, as it still has a PID after it stalls. I am using OpenMP, but have increased the max number of processes, and the max time is unlimited. I am finding the largest eigenvalues of a matrix using the ARPACK Fortran library.
Any thoughts on what is causing this behavior or how to resume my currently stalled program?
Thanks
I assume this is an OpenMP program from your tags, though you never actually state this. Is ARPACK threadsafe?
It sounds like you are hitting a deadlock (more common in MPI programs than OpenMP, but it's definitely possible). The first thing to do is to compile with debugging flags on, then the next time you find this problem, attach with a debugger and find out what the various threads are doing. For gdb, for instance, some instructions for switching between threads are shown here.
Next time your program "stalls", attach GDB to it and do thread apply all where.
If all your threads are blocked waiting for some mutex, you have a deadlock.
If they are waiting for something else (e.g. read), then you need to figure out what prevents the operation from completing.
Generally on UNIX you don't need to rebuild with debug flags on to get a meaningful stack trace. You wouldn't get file/line numbers, but they may not be necessary to diagnose the problem.
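As a non-interactive variant of the same idea, gdb can be attached, made to dump every thread's stack, and detached in one shot (the PID below is a placeholder):
# attach to the process, print a backtrace for every thread, then detach
gdb -p 12345 -batch -ex "thread apply all where"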
A possible way of understanding what a running program (that is, a process) is doing is to attach a debugger to it with gdb program *pid* (which works well only when the program has been compiled with debugging enabled with -g), or to use strace on it, with strace -p *pid*. The strace command is a utility (technically, a specialized debugger built above the ptrace system call interface) which shows you all the system calls made by a program or a process.
There is also a variant, called ltrace, that intercepts calls to functions in dynamic libraries.
To get a feel for it, try for instance strace ls
Of course, strace won't help you much if the running program is not doing any system calls.
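ltrace is invoked the same way (the PID below is a placeholder):
ltrace -p 12345    # attach to a running process and show its dynamic library calls
ltrace ls          # or trace the library calls made by a fresh command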
Regards.
Basile Starynkevitch

Resources