SSH tunnel prevents my spawned script from exiting - node.js

I'm executing a bash script in Node.js like this:
const child_process = require('child_process');
const fs = require('fs');

const script = child_process.spawn('local_script.sh');
const stdout = fs.createWriteStream('stdout');
const stderr = fs.createWriteStream('stderr');
script.stdout.pipe(stdout);
script.stderr.pipe(stderr);
script.on('close', function(code) {
  console.log('Script exited with code', code);
});
My local_script.sh uploads a script to my remote server and executes it:
#!/bin/bash
FILE=/root/remote_script.sh
HOST=123.456.78.9
scp remote_script.sh root@${HOST}:${FILE}
ssh root@${HOST} bash ${FILE}
Finally, my remote_script.sh is supposed to open an SSH tunnel (and perform some other actions that are not relevant for this question):
#!/bin/bash
REDIS_HOST=318.353.31.3
ssh -f -n root@${REDIS_HOST} -L 6379:127.0.0.1:6379 -N &
The problem is that even though I'm opening the SSH tunnel in the background, it seems my remote_script.sh never exits, because the Node.js close event is never called. If I don't open the SSH tunnel, it exits and emits the event as expected.
How can I make sure the script exits cleanly after opening the SSH tunnel? Note that I want the tunnel to persist after the script finishes.

I haven't tested this, but my guess is that the backgrounded ssh session (remote -> REDIS) is keeping the remote tty alive, and thus preventing the local -> remote session from closing. Try changing remote_script.sh to this:
#!/bin/bash
redis_host=318.353.31.3
nohup ssh -f -n "root@${redis_host}" -L 6379:127.0.0.1:6379 -N >/dev/null 2>&1 &
BTW, note that I switched the variable name to lowercase; there are a number of all-caps variables with special meanings (including HOST), and re-using any of them can have weird effects, so lower- or mixed-case variables are preferred for script use. Also, I double-quoted the variable reference, which won't matter in this case but is a good general habit for those cases where it does matter.

The way I managed to solve it is by using ssh -a root@${HOST} bash ${FILE} in my local_script.sh. Note the -a flag, which disables my forwarding ssh agent.
The important clue was that when I ran remote_script.sh on my remote machine directly, it would do everything as expected including a clean exit, but when I would try to logout, that's where it would hang.
Apparently ssh doesn't want to terminate while there are still active connections. When I typed ~#, which shows active ssh connections, it did indeed show my forwarding ssh agent.
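For completeness, the adjusted local_script.sh then looks like this (same placeholders as above):
#!/bin/bash
FILE=/root/remote_script.sh
HOST=123.456.78.9
scp remote_script.sh root@${HOST}:${FILE}
# -a disables agent forwarding, so no forwarded-agent connection is left
# open to keep ssh from exiting once remote_script.sh returns
ssh -a root@${HOST} bash ${FILE}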

Related

Linux script for probing ssh connection in a loop and start log command after connect

I have a host machine that gets rebooted or reconnected quite a few times.
I want to have a script running on my dev machine that continuously tries to log into that machine and if successful runs a specific command (tailing the log data).
Edit: To clarify, the connection needs to stay open. The log command keeps tailing until I stop it manually.
What I have so far
#!/bin/bash
IP=192.168.178.1
if (( $# >= 1 )); then
    IP=$1
fi
LOOP=1
trap 'echo "stopping"; LOOP=0' INT
while (( LOOP == 1 ))
do
    if ping -c1 "$IP"
    then
        echo "Host $IP reached"
        sshpass -p 'password' ssh -o ConnectTimeout=10 -q user@"$IP" '<command would go here>'
    else
        echo "Host $IP unreachable"
    fi
    sleep 1
done
The LOOP flag is not really used. The script is ended via CTRL-C.
Now this works if I do NOT add a command to be executed after the ssh and instead start the log output manually. On a disconnect the script keeps probing the connection and logs back in once the host is available again.
Also when I disconnect from the host (CTRL-D) the script will log right back into the host if CTRL-C is not pressed fast enough.
When I add a command to be executed after ssh, the loop is broken: pressing CTRL-C does not only stop the log but also disconnects and ends the script on the dev machine.
I guess I have to spawn another shell somewhere or something like that?
1) I want the script to keep probing, log in and run a command completely automatically and fall back to probing when the connection breaks.
2) I want to be able to stop the log on the host (CTRL-C) and thereby fall back to a logged in ssh connection to use it manually.
How do I fix this?
Maybe the best approach to "fixing" this would be fixing the requirements.
The problematic part is number 2).
The problem stems from how SIGINT works.
When triggered, it is sent to the foreground process group of your terminal. Usually this is the shell and any process started from there. With more modern shells (you seem to use bash), the shell manages process groups such that programs started in the background are disconnected (by being assigned a different process group).
In your case the ssh is started in the foreground (from a script executed in the foreground), so it will receive the interrupt, forward it to the remote side, and terminate as soon as the remote end has terminated. Since by that time the script's shell has run its signal handler (installed via trap), it will exit the loop and terminate itself.
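You can see this with a minimal script (a sketch; run it interactively and press CTRL-C while it sleeps):
#!/bin/bash
trap 'echo "script caught INT"' INT
# sleep runs in the foreground, so CTRL-C delivers SIGINT to it and to the script
sleep 30
echo "sleep was interrupted, the trap ran, and the script continues"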
So, as you can see, you have overloaded CTRL-C to mean two things:
terminate the monitoring script
terminate the remote command and continue with whatever is specified for the remote side.
You might get closer to what you want if you drop the first effect (or at least make it more explicit). Then the next step is to call a script on the remote side that does not terminate itself, only the tail command. In that case you will likely need the -t switch on ssh to get a terminal allocated, so that normal shell operation is possible afterwards.
This will not allow you to terminate the remote side with just CTRL-C; you will always have to exit the remote shell that runs afterwards.
The essence of such a remote script might look like:
tail command
shell
of course you would need to add whatever parts are necessary for your shell or coding style.
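Concretely, such a remote script might look like this sketch (the log path is an assumption; run it via ssh -t so a terminal is allocated):
#!/bin/bash
# install a do-nothing handler (not an empty trap) so the script survives CTRL-C
# while the child tail still gets the default SIGINT and stops
trap ':' INT
tail -f /home/user/log.txt   # assumed log file; CTRL-C stops only this
exec bash -i                 # then fall back to an interactive shell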
An alternative approach would be to keep the current remote command (which gets terminated) and add another ssh call, for the case of being interrupted, that spawns the shell for interactive use. But in that case, too, CTRL-C will not be available for terminating the monitoring altogether.
To achieve this you might try changing the active interrupt handler in your monitoring script to trigger termination as soon as the remote side returns. However, this will cause a race condition between the user being able to recognize that the remote command terminated (and control has returned to the local script) and the proper interrupt handler being in place. You might be able to sufficiently lower that risk by first activating the new trap handler, then echoing the fact, and maybe adding a sleep to allow the user to react.
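As a sketch, the body of the monitoring loop might then become something like this (remote_script.sh is the hypothetical script sketched above; the race described still applies):
trap ':' INT                          # benign while the remote command runs
sshpass -p 'password' ssh -t -o ConnectTimeout=10 user@"$IP" ./remote_script.sh
trap 'echo "stopping"; LOOP=0' INT    # re-arm: CTRL-C now ends the monitor
echo "remote side returned; press CTRL-C now to stop monitoring"
sleep 2                               # window for the user to react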
Not really sure what you are saying.
Also, you should disable PasswordAuthentication in /etc/ssh/sshd_config and log in by adding the public key of your home computer to ~/.ssh/authorized_keys:
#! /bin/sh
while true
do
    RESPONSE=$(ssh -i /home/user/.ssh/id_host user@"$IP" 'tail /home/user/log.txt')
    echo "$RESPONSE"
    sleep 10
done

How to execute bash script on a remote machine asynchronously

I'm stuck with an exercise. I'm writing a bash script where I need to start the execution of a bash script on a remote machine and then instantly continue the local script's execution. I have tried using this:
ssh user@host 'nohup bash -s > /dev/null 2>&1&' < local_script.sh
This however seems to be doing nothing. I have searched the web and can't find answer. All help will be highly appreciated.
I think you need to leave the ssh session open for this to work properly. If the ssh session closes, the remote process loses its stdin, so bash won't be able to read local_script.sh before ssh closes.
ssh user@host 'bash -s > /dev/null 2>&1' < local_script.sh &
Anything else requires you to send the script first, then kick it off.
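That two-step variant might look like this (the remote path is an assumption):
scp local_script.sh user@host:/tmp/local_script.sh
ssh user@host 'nohup bash /tmp/local_script.sh > /dev/null 2>&1 &'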

pipe timely commands to ssh

I am trying to pipe commands to an open SSH session. The commands will be generated by a script that analyzes the results and sends the next commands accordingly.
I do not want to put all the commands in a script on the remote host and just run that script, because I am also interested in the status of the SSH process: sending the commands locally allows me to "test" whether the SSH connection is alive, and to get the appropriate return code from the SSH process.
I tried using something along these lines:
$ mkfifo /tmp/commands
$ ssh -t remote </tmp/commands
And from another term:
$ echo "command" >> /tmp/commands
Problem: SSH tells me that no pseudo-tty will be opened for stdin, and closes the connection as soon as "command" terminates.
I tried another approach:
$ ssh -t remote <<EOF
$(echo "command"; while true; do sleep 10; echo "command"; done)
EOF
But then, nothing is flushed to ssh until EOF is reached (in my case, never).
Do any of you have a solution ?
Stop closing /tmp/commands before you're done with it. When you close the pipe, ssh stops reading from it.
exec 7> /tmp/commands # open once
echo foo >&7 # write multiple times
echo bar >&7
exec 7>&- # close once
You can additionally use ssh -tt to force ssh to open a tty on the remote.
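Putting the pieces together in a single script might look like this sketch:
mkfifo /tmp/commands
ssh -tt remote < /tmp/commands &   # the reader; blocks until a writer opens the pipe
exec 7> /tmp/commands              # open the write end once and keep it open
echo "command" >&7                 # send commands as your script generates them
echo "another command" >&7
exec 7>&-                          # close once; ssh then sees EOF and exits
wait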

Use SSH to start a background process on a remote server, and exit session

I am using SSH to start a background process on a remote server. This is what I have at the moment:
ssh remote_user@server.com "nohup process &"
This works, in that the process does start. But the SSH session itself does not end until I hit Ctrl-C.
When I hit Ctrl-C, the remote process continues to run in the background.
I would like to place the ssh command in a script that I can run locally, so I would like the ssh session to exit automatically once the remote process has started.
Is there a way to make this happen?
The "-f" option to ssh tells ssh to run the remote command in the background and to return immediately. E.g.,
ssh -f user@host "echo foo; sleep 5; echo bar"
If you type the above, you will get your shell prompt back immediately; you will then see "foo" output, and five seconds later "bar". In the meantime, you could have been using the shell.
When using nohup, make sure you also redirect stdin, stdout and stderr:
ssh user@server 'DISPLAY=:0 nohup xeyes < /dev/null > std.out 2> std.err &'
In this way you will be completely detached from the remote process. Be careful with using ssh -f user@host... since that will only put the ssh process in the background on the calling side. You can verify this by running ps aux | grep ssh on the calling machine, which will show you that the ssh call is still active, but just put in the background.
In my example above I use DISPLAY=:0 since xeyes is an X11 program and I want it started on the remote machine.
You could use screen to run your process, detach from the screen with Ctrl-a :detach, and exit your current session without problems. Then you can reconnect via SSH and attach to this screen again to continue with your task or check whether it has finished.
Or you can send the command to an already running screen. Your local script should look like this:
ssh remote_user@server.com
screen -dmS new_screen sh
screen -S new_screen -p 0 -X stuff $'nohup process \n'
exit
Well, this question is almost 10 years old, but I recently had to launch a very long script (taking several hours to complete) on a remote server, and I found a way using the crontab.
If you can edit your user's crontab on the remote server: connect to the server with ssh, edit the crontab, and add an entry that will start your script the next minute. Let's say it's 15:03. Add this line:
4 15 * * * /path/to/your/script.sh
Save your crontab and wait a minute for the script to be launched. Then edit your crontab again to remove this entry.
You can then safely exit ssh, even shut down your computer while the script is running.
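If you would rather script those steps than edit the crontab by hand, a sketch (using the same entry and path as above):
# append the entry non-interactively, then remove it once the minute has passed
ssh remote_user@server.com '(crontab -l; echo "4 15 * * * /path/to/your/script.sh") | crontab -'
sleep 120
ssh remote_user@server.com 'crontab -l | grep -v "/path/to/your/script.sh" | crontab -'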

SSH: guarding stdout against disconnect

My server deployment script triggers a long-running process through SSH, like so:
ssh host 'install.sh'
Since my internet connection at home is not the best, I can sometimes be disconnected while the install.sh is running. (This is easily simulated by closing the terminal window.) I would really like for the install.sh script to keep running in those cases, so that I don't end up with interrupted apt-get processes and similar nuisances.
The reason why install.sh gets killed seems to be that stdout and stderr are closed when the SSH session is yanked, so writing to them fails. (It's not an issue of SIGHUP, by the way -- using nohup makes no difference.) If I put touch ~/1 && echo this fails && touch ~/2 into install.sh, only ~/1 is created.
So running ssh host 'install.sh &> install.out' solves the problem, but then I lose any "live" progress and error output.
So my question is: What's an easy/idiomatic way to run a process through SSH so that it doesn't crash if SSH dies, but so that I can still see the output as it runs?
Solutions I have tried:
When I run things manually, I use screen for cases like this, but I don't think it will be of much help here because I need to run install.sh automatically from a shell script. Screen seems to be made for interactive use (it complains "Must be connected to a terminal.").
Using install.sh 2>&1 | tee install.out didn't help either (silly of me to think it might).
You can redirect stdout/stderr into install.out and then tail -f it. The following snippet actually works:
touch install.out && # so tail does not bark (race condition)
(install.sh < /dev/null &> install.out &
tail --pid "$!" -F install.out)
But surely there must be a less awkward way to do the same thing?
Try using screen:
screen ./install.sh
If your ssh session gets interrupted, you can simply reattach to the session via another ssh connection:
screen -x
You can provide a terminal to your ssh session using the -t switch:
ssh -t server screen ./install.sh
install.sh 2>&1 | tee install.out
if the only issue is not getting stderr. You didn't say exactly why the tee wasn't acceptable. You may need the other nohup/stdin tweaks.
