Difference between nohup and Ctrl+Z in Linux

I want to run jobs in the background so that I can log out of the terminal once a job is started. I know two ways:
1) run the job, then press Ctrl+Z, then enter bg
mysqldump -uroot -p dbname > dbname.sql
2) use nohup to run the job, then press Ctrl+Z, then enter bg
nohup mysqldump -u root -p dbname > dbname.sql 2>&1
I want to know the difference between the above two commands and which one is best in which scenario.

Running your process as a job constrains your job to the current session. So if you run
$ start_running_job &
$ exit
The job will stop when you exit.
Pressing Ctrl+Z and then typing bg has the same effect as the lines above: the job is backgrounded but remains tied to the session.
Running the job under nohup detaches it so that it survives the end of the current session. So if you run
$ nohup start_running_job &
$ exit
The job will continue running.

I agree with the answer above.
nohup runs your command/script on the server detached from your current login session, while & alone just puts the script in the background and keeps it tied to your current login session. In either case, while the session is still open, you can use fg to bring the script/command back to the foreground.
Which one is right for you depends on how you are accessing the server and how long the script will take to complete. If the script takes a long time and your login session sits idle, the session may be closed and the script will be killed if you only used &. So most of the time, just use nohup and don't worry about it.
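For illustration (not from either answer above), a minimal sketch of both approaches for the mysqldump example; the password prompt from -p is assumed to be answered interactively before the job is backgrounded:
$ mysqldump -u root -p dbname > dbname.sql        # approach 1: plain foreground job
  (enter the password, then press Ctrl+Z to suspend it)
$ bg                                              # resume in the background, still tied to this session
$ disown -h                                       # optional: don't send it SIGHUP when you log out
$ nohup mysqldump -u root -p dbname > dbname.sql 2>&1    # approach 2: survives logout; background it the same way after the password prompt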

Related

Bash script: how to run a command remotely and then exit the remote terminal

I'm trying to execute the command:
ssh nvidia@ubuntu-ip-address "/opt/ads2/arm-linux64/bin/ads2 svcd&"
This works so far, except that it hangs in the remote terminal when "/opt/ads2/arm-linux64/bin/ads2 svcd&" is executed, unless I enter Ctrl+C. So I'm looking for a command that, after executing the remote command, exits from the remote terminal and continues executing the local bash script.
Thanks in advance.
When you run a command in the background on a terminal, regardless of whether it is local or remote, if you attempt to log out most systems will warn you that you have running jobs. One further attempt to log out and your jobs get killed as you exit.
In order to avoid this you need to detach your running jobs from terminal.
If the job is already running you can
disown -h <jobspec as reported by jobs>
If you want to run something in background and then exit leaving it running you can use nohup
nohup command &
This certainly works on classic init systems; I'm not sure it works exactly like this on systems that use systemd.
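As a sketch of that (the job name and number are made up for illustration):
$ long_running_job &
$ jobs
[1]+  Running                 long_running_job &
$ disown -h %1      # job will no longer receive SIGHUP when you log out
$ exit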

How can I place a job in a Linux terminal into the background after entering a password?

I use this command in a Linux terminal to connect to a server and use it as a proxy:
ssh -N -D 7070 root@ip_address
It asks for the password, connects, and everything is OK, but how can I put this process in the background?
I used Ctrl+Z but that stops the process rather than putting it in the background...
Ctrl+Z is doing exactly what it should, which is to stop (suspend) the process. If you then want to put it in the background, the shell command for doing that is bg:
$ ssh -N -D 7070 -l user 192.168.1.51
user#192.168.1.51's password:
^Z
[1]+ Stopped ssh -N -D 7070 -l user 192.168.1.51
$ bg
[1]+ ssh -N -D 7070 -l user 192.168.1.51 &
That way you can enter the password interactively, and only once that is complete, stop it and put it into the background.
Try adding an ampersand to the end of your command:
ssh -N -D 7070 root@ip_address &
Explanation:
This trailing ampersand directs the shell to run the command in the background, that is, it is forked and run in a separate sub-shell, as a job, asynchronously. The shell will immediately return the return status of 0 for true and continue as normal, either processing further commands in a script or returning the cursor focus back to the user in a Linux terminal.
The shell will print out the forked process’s job number and process ID (PID) like so:
$ ./myscript.py &
[1] 1337
The stdout of the forked process will still be attached to the parent, so any output will still appear in your terminal.
After a process is forked using a single trailing ampersand &, its process ID (PID) is stored in a special variable $!. This can be used later to refer to the process:
$ echo $!
1337
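A small sketch (script name assumed) of using $! later, for example to wait for the forked process or to kill it:
$ ./myscript.py &
[1] 1337
$ pid=$!
$ wait "$pid"       # block until it finishes...
$ kill "$pid"       # ...or terminate it instead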
Once a process is forked, it can be seen in the jobs list:
$ jobs
[1]+ Running ./myscript.py &
And it can be brought back to the command line before it finishes with the foreground command:
fg
The foreground command takes an optional argument of the job number, if you have forked multiple processes.
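For example, with a hypothetical job number 2:
$ fg %2     # bring job 2 back to the foreground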
A single ampersand & can also delimit a list of commands to be run asynchronously.
./script.py & ./script2.py & ./script3.py &
In this example, all 3 python scripts are run at the same time, in separate sub-shells. Their stdout will still be attached to the parent shell, so if running this from a Linux terminal, you will still see the outputs.
This can also be used as a quick hack to take advantage of multiple cores with shell scripts, but be warned, it is a hack!
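A sketch of that pattern, with wait added so the parent shell blocks until all three (hypothetical) scripts have finished:
$ ./script.py & ./script2.py & ./script3.py &
$ wait      # returns once every background job of this shell has exited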
To detach a process completely from the shell, you may want to pipe the stdout and stderr to a file or to /dev/null. A nice way of doing this is with the nohup command.
source for above explanation: http://bashitout.com/2013/05/18/Ampersands-on-the-command-line.html
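A minimal sketch of that full detachment (script name assumed); either discard the output or keep it in a log file:
$ nohup ./myscript.py > /dev/null 2>&1 &
$ nohup ./myscript.py > myscript.log 2>&1 &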
You can add the -f option to make the ssh command run in the background.
So the answer is: ssh -f -D port username@hostname -N.

Disabling Hanging Script

When launching a bash script in Linux, the script completes successfully, yet the terminal hangs. I must always press Ctrl+C to end the program. I am able to type in the terminal and press Enter, but the script does not respond.
I cannot change the script files, but can I launch the script so that it does not wait for the user? Any troubleshooting tips to disable this behaviour?
You can execute the script with & at the end; this gives control back to the shell (it executes the script as a background process).
./script.sh &
If you want to stop the script, you need to get its process id and then kill it. To get the process id, either execute ps aux | grep script where script is your script name, or execute echo $! right after you launched the script. When you have the process id, you can kill the process with kill 1234 where 1234 is the process id.
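A short sketch of that, assuming the script is called script.sh:
$ ./script.sh &
[1] 1234
$ echo $!
1234
$ kill 1234         # or: kill $!  right after launching it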
If the execution time of the script can be estimated, you can kill it automatically after a certain amount of time:
bash -c '(sleep 5m; kill $$ 2> /dev/null) & exec script' &
In this command, sleep 5m is the delay after which the process will be killed, and script is the name of your script (or command).
For example if the script's execution time is 30 seconds on average, then you can set the timeout to a minute or two to give it some extra time in case the execution is slower than usual. Note that this command doesn't guarantee that the script finished its execution, so use it with care.

Use SSH to start a background process on a remote server, and exit session

I am using SSH to start a background process on a remote server. This is what I have at the moment:
ssh remote_user@server.com "nohup process &"
This works, in that the process does start. But the SSH session itself does not end until I hit Ctrl+C.
When I hit Ctrl+C, the remote process continues to run in the background.
I would like to place the ssh command in a script that I can run locally, so I would like the ssh session to exit automatically once the remote process has started.
Is there a way to make this happen?
The "-f" option to ssh tells ssh to run the remote command in the background and to return immediately. E.g.,
ssh -f user#host "echo foo; sleep 5; echo bar"
If you type the above, you will get your shell prompt back immediately; you will then see "foo" printed. Five seconds later you will see "bar" printed. In the meantime, you could have been using the shell.
When using nohup, make sure you also redirect stdin, stdout and stderr:
ssh user#server 'DISPLAY=:0 nohup xeyes < /dev/null > std.out 2> std.err &'
In this way you will be completely detached from the remote process. Be careful with ssh -f user@host..., since that only puts the ssh process in the background on the calling side. You can verify this by running ps aux | grep ssh on the calling machine; it will show that the ssh call is still active, just put in the background.
In my example above I use DISPLAY=:0 since xeyes is an X11 program and I want it started on the remote machine.
You could use screen to run your process in a screen session, detach from it (Ctrl-a d or Ctrl-a :detach), and exit your current session without problems. Then you can reconnect over SSH and attach to the screen again to continue with your task or check if it is finished, as sketched below.
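A sketch of that manual workflow (the session name is arbitrary):
$ ssh remote_user@server.com
$ screen -S mytask              # start a named screen session
$ ./long_running_process        # run the task inside it
  (press Ctrl-a d to detach, then exit the SSH session)
$ ssh remote_user@server.com
$ screen -r mytask              # reattach later to check on it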
Or you can send the command to an already running screen. Your local script should look like this:
ssh remote_user@server.com
screen -dmS new_screen sh
screen -S new_screen -p 0 -X stuff $'nohup process \n'
exit
For more info see this tutorial
Well this question is almost 10 years old, but I recently had to launch a very long script (taking several hours to complete) on a remote server and I found a way using the crontab.
If you can edit your user's crontab on the remote server, connect to the server with ssh, edit the crontab, and add an entry that will start your script the next minute. Let's say it's 15:03. Add this line:
4 15 * * * /path/to/your/script.sh
Save your crontab and wait a minute for the script to be launched. Then edit your crontab again to remove this entry.
You can then safely exit ssh, even shut down your computer while the script is running.
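As an illustration (not part of the original answer), the same idea can be scripted non-interactively, assuming GNU date and the hypothetical path above:
( crontab -l 2>/dev/null; echo "$(date -d '+1 minute' '+%M %H') * * * /path/to/your/script.sh" ) | crontab -
crontab -l          # verify the one-shot entry was added
crontab -e          # remove it again once the script has started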

How to make a program continue to run after log out from ssh? [duplicate]

Possible Duplicate:
Prevent a background process from being stopped after closing SSH client
I have a program that takes a lot of time to finish.
It is running as root over ssh.
I want it to continue to run after I log out. Is this possible, and how would I achieve this?
Assuming that you have a program running in the foreground, press ctrl-Z, then:
[1]+ Stopped myprogram
$ disown -h %1
$ bg 1
[1]+ myprogram &
$ logout
If there is only one job, then you don't need to specify the job number. Just use disown -h and bg.
Explanation of the above steps:
You press ctrl-Z. The system suspends the running program, displays a job number and a "Stopped" message and returns you to a bash prompt.
You type the disown -h %1 command (here, I've used a 1, but you'd use the job number that was displayed in the Stopped message) which marks the job so it ignores the SIGHUP signal (it will not be stopped by logging out).
Next, type the bg command using the same job number; this resumes the running of the program in the background and a message is displayed confirming that.
You can now log out and it will continue running.
You should try using nohup and running it in the background:
nohup sleep 3600 &
I would try the program screen.
Start in the background:
./long_running_process options &
And disown the job before you log out:
disown
You want nohup. See http://nixcraft.com/linux-software/313-ssh-nohup-connection.html
You could use screen, detach and reattach
