I'm trying to start a test server via SSH, but it always dies once I disconnect from SSH.
Is there a way to start a process (run the server) so it doesn't die when my SSH session ends?
As an alternative to nohup, you could run your remote application inside a terminal multiplexer, such as GNU screen or tmux.
Using these tools makes it easy to reconnect to a session from another host, which means that you can kick off a long build or download before you leave work and check on its status when you get home. For instance, I find this particularly useful when doing development work on servers that are very remote (in a different country) with unreliable connectivity between me and them; if the connection drops, I can simply reconnect and carry on without losing any state.
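For example, with tmux a typical session might look like this (this assumes tmux is installed on the server; the session name testserver and the ./test_server command are hypothetical):
tmux new -s testserver       # create a named session and attach to it
./test_server                # start the server inside the session
# press Ctrl-b then d to detach; the session (and the server) keeps running
tmux attach -t testserver    # later, after reconnecting via ssh, reattach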
If you're SSHing to a Linux distro that has systemd, you can use systemd-run to launch a process in the background (in systemd's terms, "a transient service"). For example, assuming you want to ping something in the background:
systemd-run --unit=pinger ping 10.8.178.3
The benefit you'll get with systemd over just running a process with nohup is that systemd will track the process and its children, keep logs, remember the exit code and allow you to cleanly kill the process and all its children. Examples:
See the status and the last lines of output:
systemctl status pinger
Stream the output:
journalctl -xfu pinger
Kill:
systemctl kill pinger
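If you aren't root, the same approach should work with a per-user transient unit (a sketch; this assumes your distro runs a systemd user session for your login):
systemd-run --user --unit=pinger ping 10.8.178.3   # transient unit under your user manager
systemctl --user status pinger                     # status and last lines of output
journalctl --user -xfu pinger                      # stream the output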
Yes; you can use the nohup command to swallow the HUP ("hangup") signal that is sent to your program when you hang up your SSH session.
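For example, assuming your server is started by a ./server binary (a hypothetical name), the invocation could look like this:
nohup ./server > server.log 2>&1 &   # survives the terminal hangup; output goes to server.log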
Alternatively, if you're writing the server yourself, you can code it to register a handler for the HUP signal, and swallow it inside the program (rather than using an external nohup program that does the same).
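If the server happened to be written as a shell script, a minimal sketch of that idea could look like this (the handler and log file name are illustrative; trap registers it for SIGHUP):
#!/bin/bash
# Register a handler that swallows SIGHUP instead of letting it kill the script:
trap 'echo "caught SIGHUP, ignoring" >> server.log' HUP
while true; do
    # ... serve requests here (hypothetical work) ...
    sleep 1
done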
In addition to the other replies, you could start your test server through batch (or at), but as Brian answered, you should call daemon.
You could also pass the -f option to ssh.
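A sketch of both (the server name, host, and script names are hypothetical): batch reads commands from stdin and runs them detached from your terminal, while -f asks ssh to go into the background just before command execution:
echo "./test_server" | batch          # queue the job to run detached (when system load permits)
ssh -f user@host ./start_server.sh    # ssh backgrounds itself after authentication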
As an alternative to nohup, screen, et al., you could revise your server to invoke daemon to detach it from the terminal. This is the idiomatic way to write services for Linux.
See also daemon(3).
I have an embedded system on which I telnet in and then run an application in the background:
./app_name &
Now if I close my terminal, telnet in again from another terminal, and check, I can see that this process is still running.
To check this, I wrote a small program:
#include <stdio.h>

int main(void)
{
    while (1)
        ;   /* spin forever */
}
I ran this program on my local Linux PC in the background and closed the terminal.
Now, when I checked for this process from another terminal, I found that the process had been killed.
My question is:
Why the different behavior for the same type of process?
What does it depend on?
Is it dependent on the version of Linux?
Who should kill jobs?
Normally, foreground and background jobs are killed by SIGHUP sent by the kernel or the shell in different circumstances.
When does the kernel send SIGHUP?
The kernel sends SIGHUP to the controlling process:
for a real (hardware) terminal: when a disconnect is detected in the terminal driver, e.g. on hang-up on a modem line;
for a pseudoterminal (pty): when the last descriptor referencing the master side of the pty is closed, e.g. when you close the terminal window.
The kernel sends SIGHUP to other process groups:
to the foreground process group, when the controlling process terminates;
to an orphaned process group, when it becomes orphaned and has stopped members.
The controlling process is the session leader that established the connection to the controlling terminal.
Typically, the controlling process is your shell. So, to sum up:
the kernel sends SIGHUP to the shell when the real terminal or pseudoterminal is disconnected/closed;
the kernel sends SIGHUP to the foreground process group when the shell terminates;
the kernel sends SIGHUP to an orphaned process group if it contains stopped processes.
Note that the kernel does not send SIGHUP to a background process group if it contains no stopped processes.
When does bash send SIGHUP?
Bash sends SIGHUP to all jobs (foreground and background):
when it receives SIGHUP, and it is an interactive shell (and job control support is enabled at compile-time);
when it exits, it is an interactive login shell, and the huponexit option is set (and job control support is enabled at compile-time).
See more details here.
Notes:
bash does not send SIGHUP to jobs removed from the job list using disown;
processes started using nohup ignore SIGHUP.
More details here.
What about other shells?
Usually, shells propagate SIGHUP. Generating SIGHUP at normal exit is less common.
Telnet or SSH
Under telnet or SSH, the following should happen when the connection is closed (e.g. when you close the telnet window on your PC):
the client is killed;
the server detects that the client connection is closed;
the server closes the master side of the pty;
the kernel detects that the master pty is closed and sends SIGHUP to bash;
bash receives SIGHUP, sends SIGHUP to all jobs, and terminates;
each job receives SIGHUP and terminates.
Problem
I can reproduce your issue using bash and telnetd from busybox or the dropbear SSH server: sometimes, a background job doesn't receive SIGHUP (and doesn't terminate) when the client connection is closed.
It seems that a race condition occurs when the server (telnetd or dropbear) closes the master side of the pty:
normally, bash receives SIGHUP, immediately kills the background jobs (as expected), and terminates;
but sometimes, bash detects EOF on the slave side of the pty before handling SIGHUP.
When bash detects EOF, it by default terminates immediately without sending SIGHUP. And the background job remains running!
Solution
It is possible to configure bash to send SIGHUP on normal exit (including EOF) too:
Ensure that bash is started as a login shell. The huponexit option works only for login shells, AFAIK.
A login shell is enabled by the -l option or a leading hyphen in argv[0]. You can configure telnetd to run /bin/bash -l, or better, /bin/login, which invokes /bin/sh in login shell mode.
E.g.:
telnetd -l /bin/login
Enable the huponexit option.
E.g.:
shopt -s huponexit
Type this in the bash session every time, or add it to .bashrc or /etc/profile.
Why does the race occur?
bash unblocks signals only when it's safe, and blocks them when some code section can't be safely interrupted by a signal handler.
Such critical sections invoke interruption points from time to time, and if a signal is received while a critical section is executing, its handler is delayed until the next interruption point happens or the critical section is exited.
You can start digging from quit.h in the source code.
Thus, it seems that in our case bash sometimes receives SIGHUP while it's in a critical section. The SIGHUP handler's execution is delayed, and bash reads EOF and terminates before exiting the critical section or reaching the next interruption point.
Reference
"Job Control" section in official Glibc manual.
Chapter 34 "Process Groups, Sessions, and Job Control" of "The Linux Programming Interface" book.
When you close the terminal, the shell sends SIGHUP to all background processes, and that kills them. This can be suppressed in several ways, most notably:
nohup
When you run a program with nohup, it makes the program ignore SIGHUP and redirects the program's output (to nohup.out by default).
$ nohup app &
disown
disown tells the shell not to send SIGHUP:
$ app &
$ disown
Is it dependent on the version of Linux?
It is dependent on your shell. The above applies at least to bash.
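To see how your bash is currently configured, you can query the option with the shopt builtin (just a check, not a fix):
shopt huponexit   # prints "huponexit off" unless it has been enabled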
AFAIK, in both cases the process should be killed. To avoid this, you have to use nohup, like the following:
$ nohup ./my_app &
This way your process will continue executing. The telnet part is probably due to a bug similar to this one:
https://bugzilla.redhat.com/show_bug.cgi?id=89653
In order to completely understand what's happening, you need to get into Unix internals a little bit.
When you are running a command like this
./app_name &
The app_name process is sent to a background process group. You can read about Unix process groups here.
When bash exits normally, it triggers the SIGHUP hangup signal for all of its jobs. Some information on Unix job control is here.
In order to keep your app running when you exit bash, you need to make your app immune to the hangup signal with the nohup utility.
nohup - run a command immune to hangups, with output to a non-tty
And finally, this is how you need to do it:
nohup app_name > /dev/null 2>&1 &
In modern Linux (that is, Linux with systemd) there is an additional reason this might happen which you should be aware of: "linger".
systemd kills processes left running from a login shell, even if the process is properly daemonized and protected from HUP. This is the default behavior in modern configurations of systemd.
If you run
loginctl enable-linger $USER
you can disable this behavior, allowing background processes to keep running. The mechanisms covered by the other answers still apply, however, and you should also protect your process against them.
enable-linger is permanent until it is disabled again. You can check it with:
ls /var/lib/systemd/linger
This directory may contain files, one per username, for users who have linger enabled. Any user listed in the directory can leave background processes running at logout.
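To turn the behavior back off for a user, the corresponding command is:
loginctl disable-linger $USER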
I'm running a script on a remote server over SSH. The task is downloading images to the remote server. I'm wondering: will the script keep running after I log out of the SSH session? Why? Could anyone explain in detail?
If you want the script to keep running after logout, you need to detach it from the terminal and run it in the background:
nohup ./script.sh &
If you close the terminal in which you launched a process, the process will receive SIGHUP, and unless it handles that signal, it will be terminated. HUP means "hang up", as in a phone call.
The nohup command can be used to start a process and prevent SIGHUP signals from being sent to it. An alternative is the bash builtin disown, which does basically the same:
./script.sh &
disown %1
Note that %1 refers to job number 1. If you are running multiple processes in the background, you need to specify the correct job ID.
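If you're unsure of the job ID, the jobs builtin lists them; a quick sketch (the second script name is hypothetical):
./script.sh &    # becomes job [1]
./other.sh &     # becomes job [2]
jobs             # list background jobs with their IDs
disown %2        # detach only the second job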
I'm connecting to my Ubuntu server using SSH. I start an encoding program using a command. However, it seems that the program dies when my SSH session closes (which happened because I started it from a laptop that went to sleep). Is there a way to avoid this? (Of course, preventing my laptop from sleeping is not a permanent solution.)
Run your command with nohup, or use screen.
nohup is better when your program generates some logging output, because the output is forwarded to a file which you can check later; with screen, on the other hand, you can detach the SSH session and restore your workspace when you log in again. For encoding I'd use nohup: it is easier, and you'll probably run the job in the background anyway, so you don't really need detaching.
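For example, for an encoding run you might capture the log and watch it later (encode.sh and encode.log are hypothetical names):
nohup ./encode.sh > encode.log 2>&1 &   # keeps running after logout
tail -f encode.log                      # check progress whenever you log back in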
Screen is the best for you.
screen -S some_name
Then run it. Detach it with Ctrl+a d.
Next time, attach it with:
screen -rd some_name
You can have multiple screens running. To show the list of them:
screen -ls
Install "screen" on your ubuntu server, that way you can start any program in your session, disconnect the output of the program from your current session and exit.
Later when you connect again to your server, you can restore the program which will continue running and see its progress.
I've searched, googled, sat in IRC for a week, and even talked to a friend who is devoutly aligned with Linux, but I haven't yet received a solid answer.
I have written a shell script that runs as soon as I log into my non-root user and basically just does "./myprogram &" without the quotation marks. When I exit SSH, my program times out and I am unable to connect to it until I log back in. How can I keep my program running after I exit the SSH session of my non-root user?
I am curious whether this has to be done at the program level. My apologies if this does not belong here; I am not sure where it goes, to be perfectly honest.
Besides using nohup, you can run your program in a terminal multiplexer like screen or tmux. With them, you can reattach to sessions, which is for example quite helpful if you need to run terminal-based interactive programs or long-running scripts over an unstable SSH connection.
byobu is a nice enhancement of screen.
Try nohup: http://linux.die.net/man/1/nohup
Likely your program receives a SIGHUP signal when you exit your ssh session.
There are two signals that can cause your program to die after your SSH session ends: SIGHUP and SIGPIPE.
SIGHUP will be sent to your program because the parent process (ssh) has died. You can get around this either by using the program nohup (i.e. nohup ./myprogram &) or by using the shell builtin disown (./myprogram & disown).
SIGPIPE will be sent to your program if it tries to write to stdout or stderr after the ssh session has been disconnected. To get around this, redirect them to a file or /dev/null, i.e. nohup ./myprogram >/dev/null 2>/dev/null &
You might also want to use the batch (or at) command, in addition to the other answers (nohup, screen, ...). And ssh has a -f option which might interest you.
Somehow this isn't yielded by a Google search.
I'm scripting a server in node.js. I start the server by executing its script with the node program:
node myserver.js
But the server staying up is dependent on my SSH session. How can I make this (and all such processes) persistent? init.d?
Use the nohup command:
From http://en.wikipedia.org/wiki/Nohup
nohup is a POSIX command to ignore the HUP (hangup) signal, enabling the command to keep running after the user who issues the command has logged out. The HUP (hangup) signal is by convention the way a terminal warns dependent processes of logout.
Try this:
nohup node myserver.js &
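Note that when stdout is a terminal, nohup appends the command's output to a file named nohup.out in the current directory; to choose your own log file instead (server.log is an arbitrary name):
nohup node myserver.js > server.log 2>&1 &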
Have you tried GNU screen? Using it, a process can continue to run when you end your ssh session. nohup is a standard *nix command that will allow you to do the same, albeit in a more limited way.
Use screen. Type screen from the terminal and then launch your process. If you disconnect, you can SSH back in and reattach to the screen session by typing screen -ls (to see active screens) and screen -r (to reconnect).
The program needs to run in a daemonized mode. Here's a good post for doing this in Ubuntu.
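On a systemd-based server, one lightweight option is to reuse the systemd-run approach shown earlier (a sketch: the unit name myserver is arbitrary, and /path/to/myserver.js stands in for your script's absolute path, which is safest since transient units don't inherit your shell's working directory):
systemd-run --unit=myserver node /path/to/myserver.js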
nohup is good to run the job under. If the job is already running, you can try disown -h (at least in bash).
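A sketch of that flow, assuming the program was started without nohup and is job 1:
./myprogram &    # already running in the background
disown -h %1     # keep it in the job table but don't forward SIGHUP to it on exit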