Does Linux kill background processes if we close the terminal from which they were started?

I have an embedded system on which I telnet and then run an application in the background:
./app_name &
Now, if I close my terminal, telnet in from another terminal, and check, I can see that this process is still running.
To check this, I wrote a small program:
#include <stdio.h>

int main(void)
{
    while (1)
        ;   /* spin forever */
}
I ran this program on my local Linux PC in the background and closed the terminal.
Now, when I checked for this process from another terminal, I found that it had been killed.
My question is:
Why is the behavior different for the same kind of process?
What does it depend on?
Does it depend on the version of Linux?

Who should kill jobs?
Normally, foreground and background jobs are killed by SIGHUP, sent by the kernel or the shell in different circumstances.
When does kernel send SIGHUP?
The kernel sends SIGHUP to the controlling process:
for a real (hardware) terminal: when a disconnect is detected in the terminal driver, e.g. on hang-up on a modem line;
for a pseudoterminal (pty): when the last descriptor referencing the master side of the pty is closed, e.g. when you close the terminal window.
The kernel sends SIGHUP to other process groups:
to the foreground process group, when the controlling process terminates;
to an orphaned process group, when it becomes orphaned and has stopped members.
The controlling process is the session leader that established the connection to the controlling terminal.
Typically, the controlling process is your shell. So, to sum up:
the kernel sends SIGHUP to the shell when a real terminal or pseudoterminal is disconnected/closed;
the kernel sends SIGHUP to the foreground process group when the shell terminates;
the kernel sends SIGHUP to an orphaned process group if it contains stopped processes.
Note that the kernel does not send SIGHUP to a background process group that contains no stopped processes.
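You can see these relationships from a shell by printing the process-group and session IDs (a sketch; exact column support varies between ps implementations):
$ ./app_name &
$ ps -o pid,ppid,pgid,sid,tpgid,tty,comm
The shell's PID equals the SID, marking it as the session leader (the controlling process); app_name sits in its own background process group, and TPGID shows which process group currently holds the terminal's foreground.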
When does bash send SIGHUP?
Bash sends SIGHUP to all jobs (foreground and background):
when it receives SIGHUP and is an interactive shell (and job-control support is enabled at compile time);
when it exits, is an interactive login shell, and the huponexit option is set (and job-control support is enabled at compile time).
See the SIGNALS section of the bash man page for more details.
Notes:
bash does not send SIGHUP to jobs removed from the job list using disown;
processes started using nohup ignore SIGHUP.
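For example, both escape hatches from the notes above (a sketch; %1 assumes the job number is 1, and the PID shown is illustrative):
$ ./app_name &
[1] 1234
$ disown %1            # bash forgets the job: no SIGHUP for it at exit
or, instead:
$ nohup ./app_name &   # the program starts with SIGHUP ignored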
What about other shells?
Usually, shells propagate SIGHUP. Generating SIGHUP at normal exit is less common.
Telnet or SSH
Under telnet or SSH, the following should happen when the connection is closed (e.g. when you close the telnet window on your PC):
the client is killed;
the server detects that the client connection is closed;
the server closes the master side of the pty;
the kernel detects that the master pty is closed and sends SIGHUP to bash;
bash receives SIGHUP, sends SIGHUP to all jobs, and terminates;
each job receives SIGHUP and terminates.
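One way to watch this chain in action is to log SIGHUP receipt from a background job and then close the telnet/SSH window (a sketch; /tmp/hup.log is an arbitrary path, and per the race described below the message may occasionally be missing):
$ ( trap 'echo "got SIGHUP" >> /tmp/hup.log; exit' HUP; while :; do sleep 1; done ) &
Close the window, reconnect, and check /tmp/hup.log from the new session.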
Problem
I can reproduce your issue using bash and telnetd from BusyBox, or with the Dropbear SSH server: sometimes, a background job doesn't receive SIGHUP (and doesn't terminate) when the client connection is closed.
It seems that a race condition occurs when the server (telnetd or dropbear) closes the master side of the pty:
normally, bash receives SIGHUP, immediately kills the background jobs (as expected), and terminates;
but sometimes, bash detects EOF on the slave side of the pty before handling SIGHUP.
When bash detects EOF, by default it terminates immediately without sending SIGHUP. And the background job remains running!
Solution
It is possible to configure bash to send SIGHUP on normal exit (including EOF) too:
Ensure that bash is started as a login shell. huponexit works only for login shells, AFAIK.
A login shell is enabled by the -l option or a leading hyphen in argv[0]. You can configure telnetd to run /bin/bash -l, or better /bin/login, which invokes /bin/sh in login-shell mode.
E.g.:
telnetd -l /bin/login
Enable huponexit option.
E.g.:
shopt -s huponexit
Type this in each bash session, or add it to .bashrc or /etc/profile.
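You can verify both conditions from inside the session (shopt -q reports only via its exit status, and login_shell is a read-only shopt):
$ shopt -q login_shell && echo "login shell"
$ shopt huponexit
huponexit       on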
Why does the race occur?
bash unblocks signals only when it's safe to do so, and blocks them when some code section can't be safely interrupted by a signal handler.
Such critical sections invoke interruption points from time to time; if a signal is received while a critical section is executing, its handler is delayed until the next interruption point happens or the critical section is exited.
You can start digging from quit.h in the source code.
Thus, it seems that in our case bash sometimes receives SIGHUP while it's in a critical section. The SIGHUP handler's execution is delayed, and bash reads the EOF and terminates before exiting the critical section or reaching the next interruption point.
Reference
"Job Control" section in official Glibc manual.
Chapter 34 "Process Groups, Sessions, and Job Control" of "The Linux Programming Interface" book.

When you close the terminal, the shell sends SIGHUP to all background processes, and that kills them. This can be suppressed in several ways, most notably:
nohup
When you run a program with nohup, it starts the program with SIGHUP ignored and redirects its output.
$ nohup app &
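If stdout is a terminal, nohup appends the program's output to nohup.out in the current directory (or $HOME/nohup.out if that isn't writable), printing a message like the following (exact wording varies by implementation), which you can then follow with tail -f nohup.out:
nohup: ignoring input and appending output to 'nohup.out'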
disown
disown tells the shell not to send SIGHUP to that job:
$ app &
$ disown
Is it dependent on the version of Linux?
It depends on your shell. The above applies at least to bash.

AFAIK, in both cases the process should be killed. To avoid this, you have to use nohup, like the following:
$ nohup ./my_app &
This way your process will continue executing. The telnet case is probably due to a bug similar to this one:
https://bugzilla.redhat.com/show_bug.cgi?id=89653

To completely understand what's happening, you need to get into Unix internals a little bit.
When you run a command like this:
./app_name &
app_name is placed into a background process group.
When you exit bash normally, it triggers the SIGHUP hang-up signal for all of its jobs.
To keep your app running after you exit bash, you need to make it immune to the hangup signal with the nohup utility.
nohup - run a command immune to hangups, with output to a non-tty
And finally, this is how you need to do it:
nohup app_name > /dev/null 2>&1 &

In modern Linux (that is, Linux with systemd) there is an additional reason this might happen which you should be aware of: "linger".
systemd kills processes left running from a login shell, even if the process is properly daemonized and protected from HUP. This is the default behavior in modern configurations of systemd, governed by logind's KillUserProcesses setting.
If you run
loginctl enable-linger $USER
you can disable this behavior, allowing background processes to keep running. The mechanisms covered by the other answers still apply, however, and you should also protect your process against them.
enable-linger is permanent until it is disabled again (with loginctl disable-linger). You can check it with
ls /var/lib/systemd/linger
This directory contains one file per username for each user with linger enabled. Any user listed there can leave background processes running at logout.
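The kill-at-logout behavior itself is controlled by the KillUserProcesses option of systemd-logind; a quick way to inspect both it and a user's linger state (a sketch; output abbreviated):
$ loginctl show-user $USER --property=Linger
Linger=yes
$ grep KillUserProcesses /etc/systemd/logind.conf
#KillUserProcesses=no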

Related

Running two node servers with one bash script and receive console logs

I have a bash script:
node web/dist/web/src/app.js & node api/dist/api/src/app.js &
$SHELL
It successfully starts both my node servers. However:
I do not receive any output (from console.log etc.) in my terminal window;
If I cancel with Ctrl+C, the processes do not exit, so I annoyingly have to do a manual taskkill /F /PID afterwards.
Is there any way around this?
The reason you can't stop your background jobs with Ctrl+C is that signals (SIGINT in this case) are received only by the foreground process.
When your foreground process (the non-interactive main script) exits, its child processes become orphans, which are immediately adopted by the init process. To kill them, you need their PIDs. (When you run a background process in an interactive shell, it will receive SIGHUP, and probably exit, when the shell exits.)
The solution in your case is to make your script wait for its children, using the shell built-in wait command. wait ensures your script receives the SIGINT, which you can then handle (with trap) and respond to by killing the background jobs (with kill 0):
#!/bin/bash
trap 'kill 0' EXIT
node app1.js &
node app2.js &
wait
By setting a trap on EXIT (a special pseudo-signal in bash), you ensure the background processes terminate whenever your main script exits, whether by Ctrl+C/SIGINT or by another catchable signal like SIGTERM or SIGHUP (SIGKILL cannot be trapped, so the trap will not run in that case). The kill 0 command kills all processes in the current process group.
Regarding the output: on Linux, background processes inherit standard output/error from the shell (if not redirected) and continue to write to your TTY/terminal. If that's not working on Windows, I'm not sure why.
However, even if your background processes somehow lose their way to your TTY, you can, as a workaround, append to a log file:
node app1.js >>/path/to/file.log 2>&1 &
node app2.js >>/path/to/file.log 2>&1 &
and then tail -f that log file, either in this, or some other terminal:
tail -f /path/to/file.log

How to handle shell script if the terminal closes abruptly or terminal lost network connection?

I have a big bash script running, but if for some reason the terminal running the script closes, or the SSH connection is lost due to network issues, or the user willingly presses Ctrl+C, how do I capture those scenarios? I want to log a message saying the script exited for one of the above reasons.
To make the script survive terminal closure and network disconnection, use nohup. From man nohup:
NAME
nohup - run a command immune to hangups, with output to a non-tty
Ctrl+C won't be available to the user anymore, but the script can still be killed with kill. Use trap to catch the signals you want to log.
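For example, a sketch that logs why the script exited (the log path and messages are placeholders; note that a signal ignored when the shell starts, e.g. SIGHUP under nohup, cannot be trapped):
#!/bin/bash
LOG=/tmp/myscript.log
trap 'echo "$(date): exited on SIGHUP (terminal closed / connection lost)" >> "$LOG"; exit 1' HUP
trap 'echo "$(date): exited on SIGINT (Ctrl+C)" >> "$LOG"; exit 1' INT
trap 'echo "$(date): exited on SIGTERM (kill)" >> "$LOG"; exit 1' TERM
# ... the rest of the big script ...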
You can also use screen or tmux.

Will the script on remote server keep running after ssh timeout?

I'm running a script on a remote server over SSH. The task is downloading images to the remote server. I'm wondering: will the script keep running after I log out of the SSH session? Why? Could anyone explain in detail?
If you want the script to keep running after logout, you need to detach it from the terminal and run it in the background:
nohup ./script.sh &
If you close the terminal in which you launched a process, the process receives SIGHUP and, unless it handles the signal, is terminated. HUP means "hang up", as in a phone call.
The nohup command can be used to start a process with SIGHUP ignored, so such signals never affect it. An alternative is the bash builtin disown, which achieves basically the same thing:
./script.sh &
disown %1
Note that %1 refers to job 1. If you are running multiple processes in the background, you need to specify the correct job ID.
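To find the right job ID, list the shell's jobs first (the PID shown is illustrative):
$ ./script.sh &
$ jobs -l
[1]+ 12345 Running    ./script.sh &
$ disown %1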

How can I keep my Linux program running after I exit ssh of my non-root user?

I've searched, googled, sat in IRC for a week, and even talked to a friend who is devoutly aligned with Linux, but I haven't yet received a solid answer.
I have written a shell script that runs as soon as I log into my non-root user and basically just does ./myprogram &. When I exit SSH, my program times out and I am unable to connect to it until I log back in. How can I keep my program running after I exit SSH as my non-root user?
I am curious whether this has to be done at the program level. My apologies if this does not belong here; I am not sure where it goes, to be perfectly honest.
Besides using nohup, you can run your program in a terminal multiplexer like screen or tmux. With them you can reattach to sessions, which is quite helpful, for example, when you need to run terminal-based interactive programs or long-running scripts over an unstable SSH connection.
byobu is a nice enhancement of screen.
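A typical screen workflow looks like this (a sketch; mysession is an arbitrary name):
$ screen -S mysession    # start a named session
$ ./myprogram            # run the program inside it
Detach with Ctrl+A d, log out, and later:
$ screen -r mysession    # reattach after logging back in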
Try nohup: http://linux.die.net/man/1/nohup
Likely your program receives a SIGHUP signal when you exit your ssh session.
There are two signals that can cause your program to die after your SSH session ends: SIGHUP and SIGPIPE.
SIGHUP will be sent to your program because the parent process (the ssh-spawned shell) has died. You can get around this either by using the program nohup (i.e. nohup ./myprogram &) or by using the shell builtin disown (./myprogram & disown).
SIGPIPE will be sent to your program if it tries to write to stdout or stderr after the SSH session has been disconnected. To get around this, redirect them to a file or to /dev/null, e.g. nohup ./myprogram > /dev/null 2>&1 &
You might also want to use the batch (or at) command, in addition to the other answers (nohup, screen, ...). And ssh has a -f option which might interest you.
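For example, at runs a command fully detached from your session (a sketch; requires the atd daemon):
$ echo './myprogram > /dev/null 2>&1' | at now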

How to know from a bash script if the user abruptly closes ssh session

I have a bash script that acts as the default shell for a user logging in through SSH.
It provides a menu with several options, one of which is sending a file using netcat.
The netcat on the embedded Linux I'm using lacks the -w option, so if the user closes the SSH connection without ever sending the file, the netcat command waits forever.
I need to know if the user abruptly closes the connection, so the script can kill the netcat command and exit gracefully.
Things I've tried so far:
Trapping SIGHUP: it is not issued. The only signal I could find being issued is SIGCONT, and I don't think that's reliable or portable.
Playing with the -t option of the read command to detect a closed stdin: this would work if not for a silly bug in the embedded read command (it only times out on the first invocation).
Edit:
I'll try to answer the questions in the comments and explain the situation further.
The code I have is:
nc -l -p 7576 > /dev/null 2>> $LOGFILE < $TMP_DIR/$BACKUP_FILE &
wait
I'm ignoring SIGINT and SIGTSTP, but I've tried trapping all the signals, and the only one received is SIGCONT.
Reading the bash man page, I found that SIGHUP should be sent to both the script and netcat, and that SIGCONT is sent to stopped jobs to ensure they receive the SIGHUP.
I guess the wait makes the script count as stopped, so it receives the SIGCONT, but at the same time the wait somehow eats up the SIGHUP.
So I tried changing the wait to a sleep, and then both SIGHUP and SIGCONT are received.
The question is: why does the wait block the SIGHUP?
Edit 2: Solved
I solved it by polling for a closed stdin with the read builtin, using its -t option. To work around the bug in the embedded read builtin, I spawn it in a new bash (bash -c "read -t 3 dummy").
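A sketch of that polling approach (variable names follow the snippet above; read returns 0 for a read line, a value above 128 on timeout, and another non-zero value on EOF):
nc -l -p 7576 > /dev/null 2>> "$LOGFILE" < "$TMP_DIR/$BACKUP_FILE" &
NC_PID=$!
while kill -0 "$NC_PID" 2>/dev/null; do   # loop while netcat is alive
    bash -c 'read -t 3 dummy'             # fresh bash dodges the embedded read bug
    status=$?
    if [ "$status" -ne 0 ] && [ "$status" -le 128 ]; then
        kill "$NC_PID" 2>/dev/null        # stdin closed: connection lost
        break
    fi
done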
Does the parent PID change? If so, you could look up the parent in the process list and make sure the process name is correct.
I have written similar applications. It would be helpful to see more of your shell code. I think there may be a way to structure your overall program differently that would address this issue.
