spawn daemon process with make - linux

I'm fiddling with libfuse and I find the rules make mount, which executes the userspace FUSE daemon, and make umount, which unmounts the directory, quite useful. Unfortunately, if I start the daemon in the make mount rule, it gets killed as soon as make exits (when the rule completes).
Is it possible to spawn a daemon from a make rule such that the daemon persists after make exits?

Make is the wrong tool for the job here. It shouldn't be used as a supervisor for other processes and anything it starts should end when it does.
That being said, you can easily unhitch processes so that kill signals are not propagated when processes terminate. Running your fuse daemon prefixed by nohup … should stop the signals from reaching the child process and it will go on its merry way.
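For example, a minimal sketch of such rules (the daemon binary ./hello_fuse and mountpoint ./mnt are placeholders; recipe lines must start with a tab):
.PHONY: mount umount

mount:
	nohup ./hello_fuse ./mnt >/dev/null 2>&1 &

umount:
	fusermount -u ./mnt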

Related

How do I make sure a process launched by Docker entrypoint can be killed?

I'm building a Docker image that launches a long-running Java process. I want to make sure that it can be killed together with the container (e.g. by using Ctrl+C) yet still perform cleanup.
If I'm using exec java -jar in my entrypoint, it works as expected.
If I'm simply executing java -jar, the process cannot be killed.
However, exec replaces the shell, so nothing after it runs and the container exits as soon as the java process finishes, even on success; that is a problem if this command is not the last one in the entrypoint. For example, if some file conversion or cleanup follows, it will not get executed:
exec java -jar "./lib/Saxon-HE-${SAXON_VER}.jar" -s:"$json_xml" -xsl:"$STYLESHEET" base-uri="$base"
rm "$json_xml"
I think the explanation is that with exec the process (java in this case) becomes PID 1 and receives the kill signals, while without exec it gets some other PID, does not receive the signals, and therefore can't be killed.
So my question is two-fold:
is there a workaround that allows the process to be killed without exiting the container on success as exec does?
how do I make sure the cleanup after exec (rm in this case) gets executed even if the process is killed/exits?
You could create an entrypoint bash script that traps the CTRL-C signal, kills (or gracefully stops?) the java process and does your cleanup afterwards.
Example (not tested):
#!/bin/bash
# trap ctrl-c (SIGINT) and call ctrl_c()
trap ctrl_c INT
function ctrl_c() {
    echo "Trapped CTRL-C"
    # Stop the java process so the cleanup after it can still run
    kill "$java_pid"
}
# Run java in the background and wait for it; with java in the foreground
# the trap would not fire until java had already exited
java -jar "./lib/Saxon-HE-${SAXON_VER}.jar" -s:"$json_xml" -xsl:"$STYLESHEET" base-uri="$base" &
java_pid=$!
wait "$java_pid"
Add it to your docker image and make it your entrypoint
FROM java:8
ADD docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
I use tini. Here is the reference link
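If you would rather not bake tini into the image yourself, newer Docker versions can inject it for you; --init runs Docker's bundled, tini-based init as PID 1 (the image and jar names below are placeholders):
docker run --init --rm myimage java -jar app.jar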
You can also write a small wrapper program that manages the java child process and the cleanup, for example in Java, Go, or Rust. These languages have proper process-control tools, so you can catch CTRL-C and other events, stop the child process, and do the cleanup. It might even take less time than searching for existing tools, which tend to have limited behavior anyway. It may also be worth open-sourcing such a wrapper for problems like this.

Does adding '&' make it run as a daemon?

I am aware that adding an '&' at the end makes it run in the background, but does it also mean that it runs as a daemon?
Like:
celery -A project worker -l info &
celery -A project worker -l info --detach
I am sure that the first one runs in the background; however, the second, as stated in the documentation, runs in the background as a daemon.
I would love to know the main difference between the commands above.
They are different!
"&" version is background , but not run as daemon, daemon process will detach with terminal.
in C language ,daemon can write in code :
if (fork() > 0) _exit(0);      /* parent exits, the child carries on            */
setsid();                      /* new session: no controlling terminal          */
close(0); close(1); close(2);  /* drop the inherited terminal file descriptors  */
open("/dev/null", O_RDWR);     /* reopen fd 0 on /dev/null...                   */
dup(0); dup(0);                /* ...and duplicate it to fd 1 and 2             */
if (fork() > 0) _exit(0);      /* second fork: can never reacquire a terminal   */
This ensures that the process is no longer in the same process group as the terminal and thus won't be killed together with it. The I/O redirection makes output not appear on the terminal (see: https://unix.stackexchange.com/questions/56495/whats-the-difference-between-running-a-program-as-a-daemon-and-forking-it-into).
A daemon puts itself in its own session, is not attached to a terminal, has no file descriptor inherited from the parent open to anything, has no parent caring for it (other than init), and has its current directory set to / so as not to prevent a umount... while the "&" version does none of this.
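You can see the difference from a shell (a rough sketch; setsid here is the util-linux command, not the system call):
sleep 500 &                                         # plain background job
setsid --fork sleep 600 </dev/null >/dev/null 2>&1  # new session, no controlling terminal
ps -o pid,ppid,sid,tty,cmd -C sleep
# the '&' job keeps your shell as its parent and your TTY;
# the setsid one has its own session ID, TTY '?', and gets reparented to PID 1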
Yes, the process will be run as a daemon, or background process; they both do the same thing.
You can verify this by looking at the opt parser in the source code (if you really want to verify this):
. cmdoption:: --detach
Detach and run in the background as a daemon.
https://github.com/celery/celery/blob/d59518f5fb68957b2d179aa572af6f58cd02de40/celery/bin/beat.py#L12
https://github.com/celery/celery/blob/d59518f5fb68957b2d179aa572af6f58cd02de40/celery/platforms.py#L365
Ultimately, the code below is what detaches it in the DaemonContext. Notice the fork and exit calls:
def _detach(self):
    if os.fork() == 0:      # first child
        os.setsid()         # create new session
        if os.fork() > 0:   # pragma: no cover
            # second child
            os._exit(0)
    else:
        os._exit(0)
    return self
Not really. The process started with & runs in the background, but is attached to the shell that started it, and the process output goes to the terminal.
Meaning, if the shell dies or is killed (or the terminal is closed), that process will be sent a HUP signal and will die as well (if it doesn't catch it, or if its output goes to the terminal).
The command nohup detaches a process (command) from the shell and redirects its I/O, and prevents it from dying when the parent process (shell) dies.
Example:
You can see that by opening two terminals. In one run
sleep 500 &
in the other one run ps -ef to see the list of processes, and near the bottom you'll see something like
me    1234  1201  ...  sleep 500
       ^     ^
       |     +-- parent process id (the shell)
       +-- process id
Close the terminal in which sleep is sleeping in the background, then run ps -ef again; the sleep process is gone.
A daemon job is usually started by the system via upstart or init (its owner may then be changed to a regular user).
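For comparison, a rough sketch of the nohup variant of the same experiment:
nohup sleep 500 >/dev/null 2>&1 &
disown                      # optional: also drop the job from the shell's job table
# close this terminal, then from another one:
ps -ef | grep 'sleep 500'   # still there, now reparented to init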

What happens to other processes when a Docker container's PID1 exits?

Consider the following, which runs sleep 60 in the background and then exits:
$ cat run.sh
sleep 60&
ps
echo Goodbye!!!
$ docker run --rm -v $(pwd)/run.sh:/run.sh ubuntu:16.04 bash /run.sh
  PID TTY          TIME CMD
    1 ?        00:00:00 bash
    5 ?        00:00:00 sleep
    6 ?        00:00:00 ps
Goodbye!!!
This will start a Docker container, with bash as PID1. It then fork/execs a sleep process, and then bash exits. When the Docker container dies, the sleep process somehow dies too.
My question is: what is the mechanism by which the sleep process is killed? I tried trapping SIGTERM in a child process, and that appears to not get tripped. My presumption is that something (either Docker or the Linux kernel) is sending SIGKILL when shutting down the cgroup the container is using, but I've found no documentation anywhere clarifying this.
EDIT The closest I've come to an explanation is the following quote from baseimage-docker:
If your init process is your app, then it'll probably only shut down itself, not all the other processes in the container. The kernel will then forcefully kill those other processes, not giving them a chance to gracefully shut down, potentially resulting in file corruption, stale temporary files, etc. You really want to shut down all your processes gracefully.
So at least according to this, the implication is that when the container exits, the kernel will send a SIGKILL to all remaining processes. But I'd still like clarity on how it decides to do that (i.e., is it a feature of cgroups?), and ideally a more authoritative source would be nice.
OK, I seem to have come up with some more solid evidence that this is, in fact, the Linux kernel doing the terminating. In the clone(2) man page, there's this useful section:
CLONE_NEWPID (since Linux 2.6.24)
The first process created in a new namespace (i.e., the process
created using the CLONE_NEWPID flag) has the PID 1, and is the
"init" process for the namespace. Children that are orphaned
within the namespace will be reparented to this process rather than
init(8). Unlike the traditional init process, the "init" process of a
PID namespace can terminate, and if it does, all of the processes in
the namespace are terminated.
Unfortunately this is still vague on how exactly the processes in the namespace are terminated, but perhaps that's because, unlike a normal process exit, no entry is left in the process table. Whatever the case is, it seems clear that:
The kernel itself is killing the other processes
They are not killed in a way that allows them any chance to do cleanup, making it (almost?) identical to a SIGKILL
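You can watch the same behaviour without Docker by creating a PID namespace directly (a sketch using unshare from util-linux; needs root):
sudo unshare --pid --fork --mount-proc bash -c 'sleep 60 & ps'
# bash is PID 1 inside the new namespace; as soon as it exits,
# the kernel terminates the orphaned sleep:
ps -ef | grep 'sleep 60'    # no such process left on the host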

Simulating a process stuck in a blocking system call

I'm trying to test a behaviour which is hard to reproduce in a controlled environment.
Use case:
Linux system; usually Redhat EL 5 or 6 (we're just starting with RHEL 7 and systemd, so it's currently out of scope).
There're situations where I need to restart a service. The script we use for stopping the service usually works quite well; it sends a SIGTERM to the process, which is designed to handle it; if the process doesn't handle the SIGTERM within a timeout (usually a couple of minutes) the script sends a SIGKILL, then waits a couple minutes more.
The problem is: in some (rare) situations, the process doesn't exit after a SIGKILL; this usually happens when it's badly stuck on a system call, possibly because of a kernel-level issue (corrupt filesystem, or not-working NFS filesystem, or something equally bad requiring manual intervention).
A bug arose when the script didn't realize that the "old" process hadn't actually exited and started a new process while the old one was still running; we're fixing this with a stronger locking system (so that at least the new process doesn't start if the old one is running), but I find it difficult to test the whole thing because I haven't found a way to simulate a hard-stuck process.
So, the question is:
How can I manually simulate a process that doesn't exit when sending a SIGKILL to it, even as a privileged user?
If your process is stuck doing I/O, you can simulate the situation this way:
lvcreate -n lvtest -L 2G vgtest        # create a 2 GB test logical volume
mkfs.ext3 -m0 /dev/vgtest/lvtest       # put a filesystem on it
mount /dev/vgtest/lvtest /mnt
# suspend the device, then start a write that will block in uninterruptible sleep
dmsetup suspend /dev/vgtest/lvtest && dd if=/dev/zero of=/mnt/file.img bs=1M count=2048 &
This way the dd process will be stuck waiting for I/O and will ignore every signal. (As far as I know, on recent kernels signals are no longer ignored when processes are waiting for I/O on an NFS filesystem.)
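When you are done testing, resuming the device lets the stuck dd complete so you can tear the setup down (same hypothetical volume names as above):
dmsetup resume /dev/vgtest/lvtest   # I/O completes and dd leaves the D state
umount /mnt
lvremove -f /dev/vgtest/lvtest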
Well... how about just not sending the SIGKILL? Your environment would then behave as if it had been sent, but the process didn't quit.
Once a process is in the "D" state (TASK_UNINTERRUPTIBLE), it is in a kernel code path where execution cannot be interrupted while the task is being processed, which means sending any signal to the process is not useful; the signals will be ignored.
This can be caused by a device driver getting too many interrupts from the hardware, too many incoming network packets, data from NIC firmware, or by being blocked on an HDD performing I/O. Normally this happens very quickly and threads remain in this state for only a very short span of time.
Therefore what you need to do is look at the syslog and sar reports from the time when the process was stuck in the D state. If you find stack traces in the log, try searching bugzilla.kernel.org for similar issues or seek support from your Linux vendor.
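If you want those stack traces on demand instead of waiting for the kernel's hung-task messages, you can request them (a sketch; needs root and the magic SysRq facility enabled):
echo w > /proc/sysrq-trigger   # dump all tasks in uninterruptible (D) state with their kernel stacks
dmesg | tail -n 60             # the traces appear in the kernel log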
I would code it the opposite way. Have your server process write its pid into e.g. /var/run/yourserver.pid (this is common practice). Have the starting script read that file and test whether the process still exists, e.g. with kill -0, or with:
yourserver_pid=$(cat /var/run/yourserver.pid)
if [ -f "/proc/$yourserver_pid/exe" ]; then
    echo "old yourserver (pid $yourserver_pid) still running" >&2; exit 1
fi
You could improve that by using readlink /proc/$yourserver_pid/exe and comparing the result to /usr/bin/yourserver.
BTW, having a process still alive a few seconds after a SIGKILL is a serious situation (the common case when it could happen is if the process is stuck in a D state, waiting for some NFS server), and you probably should detect and syslog it (e.g. with logger in your script).
I also would try to first send SIGTERM, wait a few seconds, send SIGQUIT, wait a few seconds, and at last send SIGKILL, and only a few seconds later test that the server process has gone.
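A minimal sketch of that escalation, reusing $yourserver_pid from the check above:
for sig in TERM QUIT KILL; do
    kill -s $sig "$yourserver_pid" 2>/dev/null || break   # stop once the process is gone
    sleep 5
done
if kill -0 "$yourserver_pid" 2>/dev/null; then
    logger -p daemon.err "yourserver (pid $yourserver_pid) survived SIGKILL, probably stuck in D state"
fi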
A bug arose when the script didn't realize that the "old" process hadn't actually exited and started a new process while the old was still running;
This is a bug at the OS/kernel level, not in your service script. The situation is rare and hard to simulate because the OS is supposed to kill the process when a SIGKILL is delivered. So I guess your goal is to make your script work well even under a buggy kernel. Is that correct?
You can attach gdb to the process; SIGKILL won't remove such a process from the process list, but it will flag it as a zombie, which might still be acceptable for your purpose.
void#tahr:~$ ping 8.8.8.8 > /tmp/ping.log &
[1] 3770
void#tahr:~$ ps 3770
  PID TTY      STAT   TIME COMMAND
 3770 pts/13   S      0:00 ping 8.8.8.8
void#tahr:~$ sudo gdb -p 3770
...
(gdb)
Other terminal
void#tahr:~$ ps 3770
  PID TTY      STAT   TIME COMMAND
 3770 pts/13   t      0:00 ping 8.8.8.8
sudo kill -9 3770
...
void#tahr:~$ ps 3770
  PID TTY      STAT   TIME COMMAND
 3770 pts/13   Z      0:00 [ping] <defunct>
First terminal again
(gdb) quit

How to kill this immortal nginx worker?

I have started nginx, and when I stop it as root with
/etc/init.d/nginx stop
after that I type
ps aux | grep nginx
and get a response like tcp LISTEN 2124 nginx WORKER
kill -9 2124 # tried with kill -QUIT 2124, kill -KILL 2124
and afterwards I type again
ps aux | grep nginx
and get a response like tcp LISTEN 2125 nginx WORKER
and so on.
How do I kill this immortal Chuck Norris worker?
After kill -9 there's nothing more to do to the process - it's dead (or doomed to die). The reason it sticks around is that either (a) its parent process hasn't waited for it yet, so the kernel holds the process table entry with its status until the parent does so, or (b) the process is stuck on a system call into the kernel that is not finishing (which usually means a buggy driver and/or hardware).
In the first case, getting the parent to wait for the child, or terminating the parent, should work. Most programs don't have a clear way to make them "wait for a child", so that may not be an option.
In the second case, the most likely solution is to reboot. There may be tools that could clear such a condition, but that's not common. Depending on just what that kernel code is doing, it may be possible to get it to unblock by other means - but that requires knowledge of that processing. For example, if the process is blocked on a kernel lock that some other process is somehow holding indefinitely, terminating that other process could alleviate the problem.
Note that the ps command can distinguish these two cases: zombies show up in the 'Z' state (they may also show the text "defunct"), while a process stuck in the kernel shows up in the 'D' state. See the ps man page for more info: http://linux.die.net/man/1/ps.
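A quick way to tell the two cases apart (a sketch; substitute the worker's PID):
ps -o pid,stat,wchan:25,cmd -p 2124
# STAT 'Z' (defunct) = dead but not yet reaped by its parent;
# STAT 'D' = stuck inside the kernel, and WCHAN shows roughly where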
I had the same issue.
In my case GitLab was responsible for bringing up the nginx workers.
When I completely removed GitLab from my server, I was able to kill the nginx workers.
ps -aux | grep "nginx"
Search for the workers and check the first column to see who is bringing them up.
Kill or uninstall the responsible process and kill the workers again; they will stop spawning ;D
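A quick way to see who that is (a sketch):
ps -o pid,ppid,user,args -C nginx   # the PPID column is whatever keeps spawning the workers
ps -o user,args -p <PPID>           # replace <PPID> with that value to identify the supervisor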
I was having a similar issue.
Check if you are using any auto-healer like Monit or Supervisor which restarts the workers whenever you try to stop them. If yes, disable it.
My workers were being spawned due to changes I forgot I had made with update-rc.d on Ubuntu.
So I installed sysv-rc-conf, which gives a clean interface to control which processes start on reboot; you can disable them from there, and I assure you: no Chuck Norris resurrection :D
