My server deployment script triggers a long-running process through SSH, like so:
ssh host 'install.sh'
Since my internet connection at home is not the best, I can sometimes be disconnected while install.sh is running. (This is easily simulated by closing the terminal window.) I would really like install.sh to keep running in those cases, so that I don't end up with interrupted apt-get processes and similar nuisances.
The reason why install.sh gets killed seems to be that stdout and stderr are closed when the SSH session is yanked, so writing to them fails. (It's not an issue of SIGHUP, by the way -- using nohup makes no difference.) If I put touch ~/1 && echo this fails && touch ~/2 into install.sh, only ~/1 is created.
So running ssh host 'install.sh &> install.out' solves the problem, but then I lose any "live" progress and error output.
So my question is: What's an easy/idiomatic way to run a process through SSH so that it doesn't crash if SSH dies, but so that I can still see the output as it runs?
Solutions I have tried:
When I run things manually, I use screen for cases like this, but I don't think it will be of much help here because I need to run install.sh automatically from a shell script. Screen seems to be made for interactive use (it complains "Must be connected to a terminal.").
Using install.sh 2>&1 | tee install.out didn't help either (silly of me to think it might).
You can redirect stdout/stderr into install.out and then tail -f it. The following snippet actually works:
touch install.out && # so tail does not bark (race condition)
(install.sh < /dev/null &> install.out &
tail --pid "$!" -F install.out)
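Run through SSH, the whole thing looks roughly like this (assuming the remote login shell is bash, for the &> redirection):
ssh host 'touch install.out && (install.sh < /dev/null &> install.out & tail --pid "$!" -F install.out)'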
But surely there must be a less awkward way to do the same thing?
Try using screen:
screen ./install.sh
If your ssh session gets interrupted, you can simply reattach to the session via another ssh connection:
screen -x
You can provide a terminal to your ssh session using the -t switch:
ssh -t server screen ./install.sh
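If this needs to run from a deployment script rather than interactively, naming the session makes reattaching predictable; something along these lines should work with GNU screen (the session name install is just an example):
ssh -t server screen -S install ./install.sh
# after a disconnect, reattach from a fresh connection:
ssh -t server screen -x install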
install.sh 2>&1 | tee install.out
This should be enough if the only issue is not getting stderr. You didn't say exactly why the tee approach wasn't acceptable; you may also need the other nohup/stdin tweaks.
Related
I'd like to run a Linux console command from a terminal while preventing it from accessing the TTY by itself (which will, for example, often happen when the command tries to request a password from the user - this should simply fail). The closest I have come to a solution is this wrapper:
temp=`mktemp -d`
echo "$@" > $temp/run.sh
mkfifo $temp/out $temp/err
setsid sh -c "sh $temp/run.sh > $temp/out 2> $temp/err" &
cat $temp/err 1>&2 &
cat $temp/out
rm -f $temp/out $temp/err $temp/run.sh
rmdir $temp
This runs the command as expected without TTY access, but passing the stdout/stderr output through the FIFO pipes does not work for some reason. I end up with no output at all even though the process wrote to stdout or stderr.
Any ideas?
Well, thank you all for having a look. It turns out that the script above was already a working approach; it just contained a typo which caused it to fail. I have corrected it in the question so it may serve as future reference.
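A hypothetical invocation of the corrected wrapper (saved here as notty.sh; both the file name and the sudo example are purely illustrative) would be:
sh notty.sh echo hello     # output still arrives on stdout via the FIFO
sh notty.sh sudo -k id     # cannot open the TTY to ask for a password, so it fails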
I'm executing a bash script in Node.js like this:
const script = child_process.spawn('local_script.sh');
const stdout = fs.createWriteStream('stdout');
const stderr = fs.createWriteStream('stderr');
script.stdout.pipe(stdout);
script.stderr.pipe(stderr);
script.on('close', function(code) {
console.log('Script exited with code', code);
});
My local_script.sh uploads a script to my remote server and executes it:
#!/bin/bash
FILE=/root/remote_script.sh
HOST=123.456.78.9
scp remote_script.sh root@${HOST}:${FILE}
ssh root@${HOST} bash ${FILE}
Finally, my remote_script.sh is supposed to open an SSH tunnel (and perform some other actions that are not relevant for this question):
#!/bin/bash
REDIS_HOST=318.353.31.3
ssh -f -n root@${REDIS_HOST} -L 6379:127.0.0.1:6379 -N &
The problem is that even though I'm opening the SSH tunnel in the background, it seems my remote_script.sh never exits, because the Node.js close event is never called. If I don't open the SSH tunnel, it exits and emits the event as expected.
How can I make sure the script exits cleanly after opening the SSH tunnel? Note that I want the tunnel to persist after the script finishes.
I haven't tested this, but my guess is that the backgrounded ssh session (remote -> REDIS) is keeping the remote tty alive, and thus preventing the local -> remote session from closing. Try changing remote_script.sh to this:
#!/bin/bash
redis_host=318.353.31.3
nohup ssh -f -n "root@${redis_host}" -L 6379:127.0.0.1:6379 -N >/dev/null 2>&1 &
BTW, note that I switched the variable name to lowercase; there are a number of all-caps variables with special meanings (including HOST), and re-using any of them can have weird effects, so lower- or mixed-case variables are preferred for script use. Also, I double-quoted the variable reference, which won't matter in this case but is a good general habit for those cases where it does matter.
The way I managed to solve it was by using ssh -a root@${HOST} bash ${FILE} in my local_script.sh. Note the -a flag, which disables forwarding of my ssh agent.
The important clue was that when I ran remote_script.sh on my remote machine directly, it would do everything as expected, including a clean exit, but when I then tried to log out, that's where it would hang.
Apparently ssh doesn't want to terminate while there are still active connections. When I typed ~#, which shows active ssh connections, it did indeed show my forwarding ssh agent.
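Putting it together, the adjusted local_script.sh would look something like this (same placeholder host and path as above):
#!/bin/bash
FILE=/root/remote_script.sh
HOST=123.456.78.9

scp remote_script.sh root@${HOST}:${FILE}
# -a disables agent forwarding, so no forwarded-agent connection is left
# to keep the session (and the Node.js 'close' event) hanging
ssh -a root@${HOST} bash ${FILE}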
I send a tee command from host 1 to host 2:
ssh user@host2 '/path/run |& tee myFile.txt'
I use tee so that the output of the binary also gets written to myFile.txt.
The problem I have then is after a bit of time, I want to regain control of my local host without having a lot of printout. So I do CTRL+C. This lets the process on host2 continue to run, which is what I want, but it stops the tee process itself, so the file is not populated.
I tried replacing |& tee myFile.txt' with 2>&1 myFile.txt' & but that did not help either.
How can I ensure that the file continues to be populated on host2, while regaining control to my session on host1?
If you want to record the results in a file (i.e. use I/O redirection inside the nohup), you need to enclose the whole pipeline in the nohup. nohup itself does not understand shell syntax such as pipes or redirections, since its argument is just COMMAND [ARGS], so wrapping the pipeline in sh -c is a good way to do it:
ssh user@host2 'nohup sh -c "/path/run 2>&1 | tee myFile.txt" &'
but note that nohup will disconnect the terminal from the command and it might fail. It may be more useful to redirect the output directly into the file:
ssh user@host2 'nohup sh -c "/path/run > myFile.txt 2>&1" &'
Inspiration from the SO answer.
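With either variant, the output can then be checked later (or followed live) from a new session, for example:
ssh user@host2 'tail -f myFile.txt'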
Use nohup, screen or tmux to background the process.
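For example, a tmux equivalent of the screen approach could look like this (assuming tmux is installed on host2; the session name run is arbitrary):
ssh user@host2 'tmux new-session -d -s run "/path/run 2>&1 | tee myFile.txt"'
# reattach later to watch the output live:
ssh -t user@host2 tmux attach -t run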
I'm stuck with an exercise. I'm writing a bash script that needs to start another bash script on a remote machine and then immediately continue its own execution. I have tried using this:
ssh user@host 'nohup bash -s > /dev/null 2>&1 &' < local_script.sh
This, however, seems to do nothing. I have searched the web and can't find an answer. Any help will be highly appreciated.
I think you need to leave the ssh session open for this to work properly. If the ssh session closes, the remote process loses its stdin, so bash won't be able to read local_script.sh before ssh closes.
ssh user@host 'bash -s > /dev/null 2>&1' < local_script.sh &
Anything else requires you to send the script first, then kick it off.
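For instance, a copy-first variant might look like this (the /tmp destination is just an example):
scp local_script.sh user@host:/tmp/local_script.sh
ssh user@host 'nohup bash /tmp/local_script.sh > /dev/null 2>&1 &'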
I'm running a game server on a remote server where I use a detached screen instance to leave it running.
I'm now creating a script that can be used to shut down the server, back up all the vital files and start it up again; however, I'm having a few difficulties dealing with the screen session.
I assumed that I could just switch into the detached screen session from the script (after the server had already been shut down) by calling screen -r.
But that doesn't seem to work because if I run the script from outside screen it just launches the server in that session.
screen -r
cd ~/servers/StarMade/
sh StarMade-dedicated-server-linux.sh
screen -d
This is what I thought would do the trick, but it doesn't. Maybe somebody can help me out here. I'm not a bash expert; in fact, this is probably my first bash script that doesn't include "Hello World". Thanks.
Your script, as in your example, will get executed by your shell, not the one inside the screen session. You need to tell the running screen to read a file and execute it - that's what the -X option is for.
Try
tempfile=$(mktemp)
cat > $tempfile <<EOF
cd ~/servers/StarMade/
sh StarMade-dedicated-server-linux.sh
EOF
screen -X readbuf $tempfile
screen -X paste .
rm -f $tempfile
You can leave screen running in a 2nd terminal session to see what happens.
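A related variant, if the detached session has a name (here starmade, purely as an example), is to inject the commands directly with screen's stuff command:
screen -S starmade -X stuff $'cd ~/servers/StarMade/\nsh StarMade-dedicated-server-linux.sh\n'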