How to make a program continue to run after log out from ssh? [duplicate] - linux

Possible Duplicate:
Prevent a background process from being stopped after closing SSH client
I have a program that takes a lot of time to finish.
It is running as root over ssh.
I want it to continue running after I log out. Is this possible, and how would I achieve it?

Assuming that you have a program running in the foreground, press ctrl-Z, then:
[1]+ Stopped myprogram
$ disown -h %1
$ bg 1
[1]+ myprogram &
$ logout
If there is only one job, then you don't need to specify the job number. Just use disown -h and bg.
Explanation of the above steps:
You press ctrl-Z. The system suspends the running program, displays a job number and a "Stopped" message and returns you to a bash prompt.
You type the disown -h %1 command (here, I've used a 1, but you'd use the job number that was displayed in the Stopped message) which marks the job so it ignores the SIGHUP signal (it will not be stopped by logging out).
Next, type the bg command with the same job number; this resumes the program in the background, and a message confirming that is displayed.
You can now log out and the program will continue running.

You should try using nohup and running it in the background:
nohup sleep 3600 &

I would try the program screen.

Start in the background:
./long_running_process options &
And disown the job before you log out:
disown

You want nohup. See http://nixcraft.com/linux-software/313-ssh-nohup-connection.html

You could use screen, detach and reattach
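A minimal screen session sketch (the session name and the command are placeholders):
$ screen -S mysession          # start a named session
$ ./long_running_process       # run your program inside it
(press ctrl-a then d to detach, then log out as usual)
$ screen -r mysession          # reattach later from a new ssh login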

Related

How do I stop a script running in the background in linux?

Let's say I have a silly script:
#!/bin/bash
while true; do
    touch ~/test_file
    sleep 3
done
And I start the script in the background and leave the terminal:
chmod u+x silly_script.sh
./silly_script.sh &
exit
Is there a way for me to identify and stop that script now? The way I see it, every command is started in its own process, and I might be able to catch and kill a single command like the sleep 3, but not the execution of the entire script. Am I mistaken? I expected a process to appear with the script's name, but it does not. If I start the script with source silly_script.sh, I can't find a process by the name of source. Do I need to identify the instance of bash that is executing the script? How would I do that?
EDIT: There have been a few creative solutions, but so far they require the PID of the script to be stored right away, or the bash session not to be closed with ^D or exit. I understand that this way of running scripts should probably be avoided, but I find it hard to believe that any low-privilege user could, even by accident, start an annoying script in the background that, for instance, fills the drive with garbage files or repeatedly starts new instances of some software, and that even the admin has no option other than restarting the server, because a simple script can hide its identifier without even trying.
With the help of the fine people here I was able to derive the answer I needed:
It is true that the script runs every command in its own process, so killing the sleep 3 command, for instance, won't stop the script itself. But through a command like that sleep 3 you can find the bash instance running the script by looking for its parent process:
Run ps axf to show all processes in tree form. You will then find this section:
18660 ? S 0:00 /bin/bash
18696 ? S 0:00 \_ sleep 3
Now you have found the bash instance that is running the script and can stop it: kill 18660
(Of course your PID will be different from mine)
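If you also want to take down the sleep that is currently running, you can signal the whole process group instead of just the bash instance. A sketch reusing the example PID above; a background job normally becomes its own process group leader, so the group id usually equals the script's PID:
$ ps -o pgid= -p 18660         # look up the process group id
18660
$ kill -- -18660               # the negative number signals the entire group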
The jobs command will show you all running background jobs.
You can kill background jobs by id using kill, e.g.:
$ sleep 9999 &
[1] 58730
$ jobs
[1]+ Running sleep 9999 &
$ kill %1
[1]+ Terminated sleep 9999
$ jobs
$
58730 is the PID of the backgrounded task, and 1 is its job id. In this case kill 58730 and kill %1 would have the same effect.
See the JOB CONTROL section of man bash for more info.
When you exit, the backgrounded job will get a hangup signal (SIGHUP) and die (assuming that's how it handles the signal; in your simple example it is), unless you disown it first.
That kill will propagate to the sleep process, which may well ignore it and continue sleeping. If this is the case you'll still see it in ps -e output, but with a parent pid of 1, indicating its original parent no longer exists.
You can use ps -o ppid= <pid> to find the parent of a process, or pstree -ap to visualise the job hierarchy and find the parent visually.
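For example, a short sketch of walking from the sleep process up to the bash instance running the script (the PIDs are just the example values from above):
$ pgrep sleep
18696
$ ps -o ppid= -p 18696         # its parent is the bash running the script
18660
$ kill 18660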

Difference between ctrl-z and "&" in linux

My understanding is that when you run any command (say sleep 10) within a given shell (say bash), what happens under the hood is that a fork system call is made, and sleep 10 now runs as a child process whose parent is the bash shell from which I started it.
Now, if I want to send sleep to the background, I would either run sleep 10 & or run sleep 10 and press ctrl+z so the process is sent to the background. pstree shows that with either option, sleep remains a child process of the bash shell.
Now my question is, when doing this through SSH, I noted the following:
If I do sleep 999 & and sleep 888 followed by ctrl+z, and then close the ssh session, only sleep 999 & survives.
Why is this? I was actually expecting one of these:
both processes get terminated because the parent process is gone
both processes get re-parented to init.
or run sleep 10 and press ctrl+z so the process is sent to the background
No, not really. Didn't you see that big message that says "[1]+ Stopped sleep 10"? ctrl+z stops the process and returns you to the shell. You can then type fg to continue the process in the foreground, or bg to continue it in the background. Research "bash job control" and see the bash manual's Job Control Basics section.
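A typical session looks roughly like this (job numbers and message wording may differ slightly):
$ sleep 10
^Z
[1]+  Stopped                 sleep 10
$ bg
[1]+ sleep 10 &
$ fg
sleep 10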
Why is this?
When the shell exits, its stopped jobs are sent SIGTERM and then SIGCONT, so they resume just long enough to terminate; that is why the ctrl+z'd sleep 888 dies. A plain background job is not signalled on a normal logout (unless huponexit is set), so sleep 999 & survives.
Bash manual is available online: https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html .

Difference between nohup and ctrl+z command in Linux

I want to run jobs in the background so that I can logout from terminal once any job is started. I know two ways
1) run the job, then press ctrl+z, then enter bg
mysqldump -uroot -p dbname > dbname.sql
2) run the job under nohup, then press ctrl+z, then enter bg
nohup mysqldump -u root -p dbname > dbname.sql 2>&1
I want to know the difference between above two commands and which one is best in which scenario.
Running your process as a job constrains your job to the current session. So if you run
$ start_running_job &
$ exit
The job will stop when you exit.
Pressing Control-Z (and then resuming the job with bg) leaves you in the same situation as the lines above.
Running the job under nohup makes it ignore the hangup signal, so it survives the end of the current session. So if you run
$ nohup start_running_job &
$ exit
The job will continue running.
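After logging back in you can check that it is still there, e.g. (start_running_job is the placeholder name from above):
$ pgrep -af start_running_job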
I agree with the answer provided above.
nohup basically detaches your command/script from your current login session so it keeps running on the server after you log out, while putting just & runs the script in the background but leaves it tied to your current login session. In either case you can use fg to bring the script/command back to the foreground while the session is still open.
Which one is right for you depends on how you access your server and how long your script will take to complete. If the script takes a long time and your login session sits idle, the connection may be closed and the script killed if you only used &. Most of the time, just use nohup and don't worry about it.
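Putting this together for the mysqldump case, a hedged sketch: it adds the trailing & that option 2 is missing, assumes credentials come from ~/.my.cnf (a [client] section) so there is no interactive -p password prompt to fight with the background job, and sends stderr to a separate file so warnings don't end up inside the dump:
$ nohup mysqldump -u root dbname > dbname.sql 2> dump.err &
$ logout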

linux - running process background

I want to run a process on a remote linux server and keep that process alive after closing the PuTTY terminal,
what is the correct command?
You have two options:
Use GNU screen, which will allow you to run the command and detach it from your terminal, and later re-attach it to a different session. I use it for long-running processes whose output I want to be able to monitor at any time. Screen is a truly powerful tool and I would highly recommend spending some time to learn it.
Run the command as nohup some-command &, which will run the command in the background, detach it from the console, and redirect its output into nohup.out. It will swallow SIGHUPs that are sent to the process. (When you close the terminal or log out, SIGHUP is sent to all processes that were started by the login shell, and the default action the kernel will take is to kill the process off. This is why appending & to put the process in the background is not enough for it to survive a logout.)
don't use that nohup junk, i hate seeing that on servers; screen is a wasting pile of bits and rot -- use tmux.
if you want to background a process, double fork like every other daemon since the beginning of time:
# ((exec sleep 30)&)
# grep PPid /proc/`pgrep sleep`/status
PPid: 1
# jobs
# disown
bash: disown: current: no such job
enjoy.
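Since the answer above recommends tmux without showing it, a minimal tmux sketch of the same detach/reattach workflow (the session name is a placeholder):
$ tmux new -s work             # start a named session
$ ./long_running_process       # run your program inside it
(press ctrl-b then d to detach, then log out)
$ tmux attach -t work          # reattach from a later ssh login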
A modern and easy-to-use approach that can manage multiple processes and has a nice terminal UI is the hapless utility.
Install with pip install hapless (or python3 -m pip install hapless) and just run
$ hap run my-command # e.g. hap run python my_long_running_script.py
$ hap status # check all the launched processes
See docs for more info.
A command launched enclosed in parentheses
(command &)
will survive the death of the originating shell.

How to run a script infinitely in the background on Linux?

I have a PHP script with an infinite loop. I need this script to run forever. So, I run
php /path/to/script.php > /dev/null &
And it works in the background in my current user's security context. But when I close the terminal window (log off), CentOS Linux of course kills my program.
I see two possible approaches: run it in the background as a different user, or make it a daemon. I need help with either approach.
Thanks a lot!
nohup is your friend.
nohup command &
I think the general solution to that is nohup:
nohup is a POSIX command to ignore the HUP (hangup) signal, enabling the command to keep running after the user who issued it has logged out. The HUP (hangup) signal is by convention the way a terminal warns dependent processes of logout.
nohup is most often used to run commands in the background as daemons. Output that would normally go to the terminal goes to a file called nohup.out if it has not already been redirected. This command is very helpful when there is a need to run numerous batch jobs which are inter-dependent.
You could:
Install screen and run the command from there. screen is a persistent terminal session that you can leave running.
Write an init/upstart (whatever you use) script so it loads on boot
Use the pear lib system_daemon
Use cron if batch work fits the scenario better (just remember to check for running instances before you launch another, if concurrency is an issue); see the sketch after this list
Edit: or as everybody else and their brother has just said, nohup
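For the cron option, a minimal sketch: an @reboot crontab entry (added with crontab -e) that uses flock so a second instance is never started while one is already running; the lock path and script path are just examples:
@reboot /usr/bin/flock -n /tmp/script.lock php /path/to/script.php > /dev/null 2>&1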
Use the command
nohup your_command &
For example
nohup phantomjs highcharts-convert.js -host 127.0.0.1 -port 3003 &
here "phantomjs highcharts-convert.js -host 127.0.0.1 -port 3003" was my command
