I'm stuck on an exercise. I'm writing a bash script that needs to start a bash script on a remote machine and then immediately continue with its own execution. I have tried using this:
ssh user@host 'nohup bash -s > /dev/null 2>&1 &' < local_script.sh
This, however, seems to do nothing. I have searched the web and can't find an answer. Any help would be highly appreciated.
I think you need to leave the ssh session open for this to work properly. If the ssh session closes, the remote process loses its stdin, so bash won't be able to finish reading local_script.sh before ssh closes.
ssh user@host 'bash -s > /dev/null 2>&1' < local_script.sh &
Anything else requires you to send the script first, then kick it off.
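If you do go the send-it-first route, a minimal sketch might look like this (the /tmp path is just an example location):
# copy the script over, then start it detached so ssh returns immediately
scp local_script.sh user@host:/tmp/local_script.sh
ssh user@host 'nohup bash /tmp/local_script.sh > /dev/null 2>&1 &'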
I'm executing a bash script in Node.js like this:
const child_process = require('child_process');
const fs = require('fs');

const script = child_process.spawn('local_script.sh');
const stdout = fs.createWriteStream('stdout');
const stderr = fs.createWriteStream('stderr');
script.stdout.pipe(stdout);
script.stderr.pipe(stderr);
script.on('close', function(code) {
console.log('Script exited with code', code);
});
My local_script.sh uploads a script to my remote server and executes it:
#!/bin/bash
FILE=/root/remote_script.sh
HOST=123.456.78.9
scp remote_script.sh root@${HOST}:${FILE}
ssh root@${HOST} bash ${FILE}
Finally, my remote_script.sh is supposed to open an SSH tunnel (and perform some other actions that are not relevant for this question):
#!/bin/bash
REDIS_HOST=318.353.31.3
ssh -f -n root@${REDIS_HOST} -L 6379:127.0.0.1:6379 -N &
The problem is that even though I'm opening the SSH tunnel in the background, it seems my remote_script.sh never exits, because the Node.js close event is never called. If I don't open the SSH tunnel, it exits and emits the event as expected.
How can I make sure the script exits cleanly after opening the SSH tunnel? Note that I want the tunnel to persist after the script finishes.
I haven't tested this, but my guess is that the backgrounded ssh session (remote -> REDIS) is keeping the remote tty alive, and thus preventing the local -> remote session from closing. Try changing remote_script.sh to this:
#!/bin/bash
redis_host=318.353.31.3
nohup ssh -f -n "root@${redis_host}" -L 6379:127.0.0.1:6379 -N >/dev/null 2>&1 &
BTW, note that I switched the variable name to lowercase; there are a number of all-caps variables with special meanings (including HOST), and re-using any of them can have weird effects, so lower- or mixed-case variables are preferred for script use. Also, I double-quoted the variable reference, which won't matter in this case but is a good general habit for those cases where it does matter.
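As a tiny illustration of why the quoting habit matters, consider a value containing a space (the filename here is made up):
file="my report.txt"
ls -l $file      # word-splits into two arguments: "my" and "report.txt"
ls -l "$file"    # passes the single name through intact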
The way I managed to solve it is by using ssh -a root@${HOST} bash ${FILE} in my local_script.sh. Note the -a flag, which disables forwarding of my ssh agent.
The important clue was that when I ran remote_script.sh on my remote machine directly, it would do everything as expected, including a clean exit, but when I tried to log out, that's where it would hang.
Apparently ssh doesn't want to terminate while there are still active connections. When I typed ~#, which shows active ssh connections, it did indeed show my forwarded ssh agent connection.
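For reference, local_script.sh with that change applied looks like this (same placeholder host and path as above):
#!/bin/bash
FILE=/root/remote_script.sh
HOST=123.456.78.9
scp remote_script.sh root@${HOST}:${FILE}
# -a disables agent forwarding, so no forwarded-agent connection is left
# holding the session open after remote_script.sh finishes
ssh -a root@${HOST} bash ${FILE}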
I can run a local script on a remote server using the -s option, like so:
# run a local script without nohup...
ssh $SSH_USER#$SSH_HOST "bash -s" < myLocalScript.sh;
And I can run a remote script using nohup, like so:
# run a script on server with nohup...
ssh $SSH_USER#$SSH_HOST "nohup bash myRemoteScript.sh > results.out 2>&1 &"
But can I run my local script with nohup on the remote server? I expect the script to take many hours to complete, so I need something like nohup. I know I can copy the script to the server and execute it, but then I have to make sure I delete it once the script is complete; I would rather not have to do that if possible.
I've tried the following but it's not working:
# run a local script without nohup...
ssh $SSH_USER#$SSH_HOST "nohup bash -s > results.out 2>&1 &" < myLocalScript.sh;
You shouldn't have to do anything special: once you kick off a script on another machine, it should finish running even if you terminate the connection.
For example:
ssh $SSH_USER#$SSH_HOST "bash -s > results.out 2>&1" < myLocalScript.sh &
# Please wait a few seconds for the connection to be established
kill $! # Optional: Kill the last process
If you want to test it, try a simple script like this
# myLocalScript.sh
echo 'File created - sleeping'
sleep 30
echo 'Finally done!'
The results.out file should immediately be created on the other machine with "File created - sleeping" in it. You can actually kill the local ssh process with kill <your_pid>, and it will still keep running on the other machine, and after 30 seconds, print "Finally done!" into the file, and exit.
I place this ssh call in the following shell script on our Linux box, named "tstz", and then call it with the Linux "at" command in order to schedule it for later execution.
tstz script:
#! /bin/ksh
/usr/bin/ssh -tt <remote windows server> pmcmds ${fl} ${wf} < /dev/null >/tmp/test1.log 2>&1
at command syntax:
at -f tstz now + 1 minute
The ssh call executes remote command as expected, but the ssh connection closes immediately before the remote command has completed. I need the connection to stay open until the remote command has completed and then return control to the tstz script with an exit status.
This is the error I get in the /tmp/test1.log:
tcgetattr: Inappropriate ioctl for device
^[[2JConnection to dc01nj2dwifdv02.nj.core.him closed.^M
NOTE: When using the "at" command to schedule tstz, if I don't use -tt, the ssh command will not execute the remote command "pmcmds ${fl} ${wf}". I believe this is because a terminal is required. I can, however, run tstz from the Linux command prompt in the foreground without the -tt on the ssh command line and it runs as expected.
Any help would be greatly appreciated. Thanks!
As I understand it, you need to specify a command to execute on the REMOTE machine after successfully connecting to the server, not on the LOCAL machine.
I use the following command:
ssh -i "key.pem" ec2-user#ec2-XX-XX-XX-XX.us-west-2.compute.amazonaws.com -t 'command; bash -l -c "sudo su"'
where you should replace "sudo su" with your own command; in your case, I guess, "pmcmds DFD_ETIME wf_TESTa".
So try issuing, perhaps:
/usr/bin/ssh -tt <remote windows server> 'command; bash -l -c "pmcmds DFD_ETIME wf_TESTa"'
P.S. I have discovered an interesting service on Google called "explainshell", which helped me understand that the "command;" part is crucial inside the quotes.
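Applied to the tstz script, that might look something like this (untested sketch; note the outer double quotes so the local ${fl} and ${wf} variables still expand before the command is sent):
#! /bin/ksh
/usr/bin/ssh -tt <remote windows server> "command; bash -l -c \"pmcmds ${fl} ${wf}\"" < /dev/null > /tmp/test1.log 2>&1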
I am using SSH to start a background process on a remote server. This is what I have at the moment:
ssh remote_user#server.com "nohup process &"
This works, in that the process does start. But the SSH session itself does not end until I hit Ctrl-C.
When I hit Ctrl-C, the remote process continues to run in the background.
I would like to place the ssh command in a script that I can run locally, so I would like the ssh session to exit automatically once the remote process has started.
Is there a way to make this happen?
The "-f" option to ssh tells ssh to run the remote command in the background and to return immediately. E.g.,
ssh -f user@host "echo foo; sleep 5; echo bar"
If you type the above, you will get your shell prompt back immediately; you will then see "foo" printed. Five seconds later you will see "bar" printed. In the meantime, you could have been using the shell.
When using nohup, make sure you also redirect stdin, stdout and stderr:
ssh user@server 'DISPLAY=:0 nohup xeyes < /dev/null > std.out 2> std.err &'
In this way you will be completely detached from the remote process. Be careful with using ssh -f user@host ... since that will only put the ssh process in the background on the calling side. You can verify this by running ps aux | grep ssh on the calling machine; it will show you that the ssh call is still active, just put in the background.
In my example above I use DISPLAY=:0 since xeyes is an X11 program and I want it started on the remote machine.
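A quick way to see the difference on the calling machine (sleep 300 just stands in for a long-running program):
# fully detached: the ssh call returns right away and nothing lingers locally
ssh user@server 'nohup sleep 300 < /dev/null > /dev/null 2>&1 &'
# with -f, an ssh process stays alive in the background on the calling side
ssh -f user@server 'sleep 300'
ps aux | grep '[s]sh'    # the second form still shows a local ssh process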
You could use screen to run your process, detach from the screen session (Ctrl-a d, or Ctrl-a :detach), and exit your current session without a problem. Then you can reconnect over SSH and reattach to that screen to continue with your task or check whether it has finished.
Or you can send the command to an already running screen. Your local script should look like this:
ssh remote_user@server.com
screen -dmS new_screen sh
screen -S new_screen -p 0 -X stuff $'nohup process \n'
exit
For more info see this tutorial
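If you want the whole thing in a single non-interactive call, a rough sketch (the session name and command are placeholders; screen must be installed on the server):
# start the process inside a detached screen session on the remote host;
# ssh returns right away, and you can later reattach with: screen -r new_screen
ssh remote_user@server.com "screen -dmS new_screen ./long_running_process"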
Well this question is almost 10 years old, but I recently had to launch a very long script (taking several hours to complete) on a remote server and I found a way using the crontab.
If you can edit your user's crontab on the remote server, connect to the server with ssh, edit the crontab, and add an entry that will start your script the next minute. Let's say it's 15:03; add this line:
4 15 * * * /path/to/your/script.sh
Save your crontab and wait a minute for the script to be launched. Then edit your crontab again to remove this entry.
You can then safely exit ssh, even shut down your computer while the script is running.
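If you'd rather not edit the crontab by hand, the same entry can be added and removed non-interactively; a rough sketch (the time and path are placeholders):
# append a one-shot entry to the current crontab...
( crontab -l 2>/dev/null; echo '4 15 * * * /path/to/your/script.sh' ) | crontab -
# ...and once the script has been launched, remove it again
crontab -l | grep -v '/path/to/your/script.sh' | crontab -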
My server deployment script triggers a long-running process through SSH, like so:
ssh host 'install.sh'
Since my internet connection at home is not the best, I can sometimes be disconnected while the install.sh is running. (This is easily simulated by closing the terminal window.) I would really like for the install.sh script to keep running in those cases, so that I don't end up with interrupted apt-get processes and similar nuisances.
The reason why install.sh gets killed seems to be that stdout and stderr are closed when the SSH session is yanked, so writing to them fails. (It's not an issue of SIGHUP, by the way -- using nohup makes no difference.) If I put touch ~/1 && echo this fails && touch ~/2 into install.sh, only ~/1 is created.
So running ssh host 'install.sh &> install.out' solves the problem, but then I lose any "live" progress and error output.
So my question is: What's an easy/idiomatic way to run a process through SSH so that it doesn't crash if SSH dies, but so that I can still see the output as it runs?
Solutions I have tried:
When I run things manually, I use screen for cases like this, but I don't think it will be of much help here because I need to run install.sh automatically from a shell script. Screen seems to be made for interactive use (it complains "Must be connected to a terminal.").
Using install.sh 2>&1 | tee install.out didn't help either (silly of me to think it might).
You can redirect stdout/stderr into install.out and then tail -f it. The following snippet actually works:
touch install.out && # so tail does not bark (race condition)
(install.sh < /dev/null &> install.out &
tail --pid "$!" -F install.out)
But surely there must be a less awkward way to do the same thing?
Try using screen:
screen ./install.sh
If your ssh session gets interrupted, you can simply reattach to the session via another ssh connection:
screen -x
You can provide a terminal to your ssh session using the -t switch:
ssh -t server screen ./install.sh
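To find the session again later, the usual screen commands apply:
screen -ls    # list sessions on the server
screen -r     # reattach to a detached session
screen -x     # attach even if it is still attached elsewhere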
If the only issue is not getting stderr, then
install.sh 2>&1 | tee install.out
should do it. You didn't say exactly why the tee wasn't acceptable. You may also need the other nohup/stdin tweaks.
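For what it's worth, a sketch of those tweaks rolled into a single ssh call, reusing the tail --pid idea from the question (file names are placeholders):
# install.sh is detached from the ssh channel, so it survives a disconnect;
# tail streams the log live and exits once install.sh finishes
ssh host 'touch install.out; nohup ./install.sh < /dev/null > install.out 2>&1 & tail --pid "$!" -F install.out'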