How to kill port forwarding once you have finished using it - linux

In order to sync my home and work file systems I need to go via an intermediary computer and use port forwarding. Let us call the home computer A, the intermediate one B and the work computer C. From the command line I do this
ssh -N -f -L 2025:C:22 me_B@B && unison foo ssh://me_C@localhost:2025/foo
I would like to put this one-liner into a bash script. How can I make it quit gracefully at the end and not leave any port forwarding still set up?

# start the tunnel in the background; without -f, $! is the tunnel's PID
ssh -N -L 2025:C:22 me_B@B &
pid=$!   # PID of the backgrounded ssh
# set up to kill ssh when this script finishes, however it exits
function finish {
    kill "$pid"
}
trap finish EXIT
# give the forwarding a moment to come up, then sync if ssh is still running
sleep 2
kill -0 "$pid" && unison foo ssh://me_C@localhost:2025/foo
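A quick sanity check that the cleanup really happens (sync.sh is just whatever you name the script; the port matches the example above):
bash sync.sh
ss -ltn | grep 2025     # after the script exits, this should print nothing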

Related

Stopping the Ping process in bash script?

I created a bash script to ping my local network to see which hosts are up, and I have a problem stopping the ping process with Ctrl+C once it has started.
Suspending it is the only thing I managed to do, and even the kill command doesn't work with the PID of the ping.
submask=100
for i in ${submask -le 110}
do
ping -n 2 192.168.1.$submask
((submask++))
done
Ctrl+C exits the current ping, but then the next ping starts. So you can use trap.
#!/bin/bash
exit_()
{
    exit
}
# catch Ctrl+C once, before the loop starts
trap exit_ INT
submask=100
while [ $submask -le 110 ]
do
    ping -c 2 192.168.1.$submask
    ((submask++))
done
I suggest limiting the number of packets sent by ping with the -c option.
I also corrected the bash syntax, guessing at what you intended to do.
Finally, it is faster to run all the ping processes in parallel with the & operator.
Try:
for submask in {100..110}
do
    ping -c 1 192.168.1.$submask &
done
wait
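If the goal is a list of hosts that answered, a small variation works too; this is only a sketch, and the subnet, range and the -W 1 timeout are assumptions:
#!/bin/bash
for i in {100..110}
do
    ( ping -c 1 -W 1 192.168.1.$i > /dev/null 2>&1 && echo "192.168.1.$i is up" ) &
done
wait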

How to connect input/output to SSH session

What is a good way to be able to directly send to STDIN and receive from STDOUT of a process? I'm specifically interested in SSH, as I want to do the following:
[ssh into a remote server]
[run remote commands]
[run local commands]
[run remote commands]
etc...
For example, let's say I have a local script "localScript" that will output the next command I want to run remotely, depending on the output of "remoteScript". I could do something like:
output=$(ssh myServer "./remoteScript")
nextCommand=$(./localScript $output)
ssh myServer "$nextCommand"
But it would be nice to do this without closing/reopening the SSH connection at every step.
You can redirect SSH input and output to FIFO-s and then use these for two-way communication.
For example local.sh:
#!/bin/sh
SSH_SERVER="myServer"
# Redirect SSH input and output to temporary named pipes (FIFOs)
SSH_IN=$(mktemp -u)
SSH_OUT=$(mktemp -u)
mkfifo "$SSH_IN" "$SSH_OUT"
ssh "$SSH_SERVER" "./remote.sh" < "$SSH_IN" > "$SSH_OUT" &
# Open the FIFO-s and clean up the files
exec 3>"$SSH_IN"
exec 4<"$SSH_OUT"
rm -f "$SSH_IN" "$SSH_OUT"
# Read and write
counter=0
echo "PING${counter}" >&3
cat <&4 | while read line; do
echo "Remote responded: $line"
sleep 1
counter=$((counter+1))
echo "PING${counter}" >&3
done
And simple remote.sh:
#!/bin/sh
while read line; do
echo "$line PONG"
done
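Assuming remote.sh is already in the home directory on myServer and marked executable, running local.sh should produce an endless ping/pong exchange, something like:
$ ./local.sh
Remote responded: PING0 PONG
Remote responded: PING1 PONG
...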
The method you are using works, but I don't think you can reuse the same connection every time. You can, however, do this using screen, tmux or nohup, but that would greatly increase the complexity of your script, because you would then have to emulate keypresses/shortcuts. I'm not even sure it can be done directly in bash. If you want to emulate keypresses, you will have to run the script in a new X terminal and use xdotool to emulate the keypresses.
Another method is to delegate the whole script to the SSH server by just running the script on the remote server itself:
ssh root@MachineB 'bash -s' < local_script.sh
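If the local script needs arguments, bash -s can still pass them along as positional parameters; a sketch, where the host name and arguments are placeholders:
ssh root@MachineB 'bash -s -- arg1 arg2' < local_script.sh   # $1=arg1, $2=arg2 inside the script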

bash: what to do when stdout does not exist

In a very simplified scenario, I have a script that looks like this:
mv test _test
sleep 10
echo $1
mv _test test
and if I execute it with:
ssh localhost "test.sh foo"
the test file will have an underscore in the name as long as the script is running, and when the script is finished, it will send foo back. The script SHOULD keep running even if you terminate the ssh command by pressing Ctrl+C or if you lose the connection to the server, but it doesn't (the file is not renamed back to "test"). So, I tried the following:
nohup ssh localhost "test.sh foo"
and it makes ssh immune to Ctrl+C, but a flaky connection to the server still causes trouble. After some debugging, it turns out that the script WILL actually reach the end IF THERE IS NO ECHO IN IT. And when you think about it, it makes sense: when the connection is dropped, there is no longer a stdout (the ssh socket) to echo to, so the echo fails, silently.
I can, of course, echo to a file and then get the file, but I would prefer something smarter, along the lines of test tty && echo $1 (but tty invoked like this always returns false). Any suggestions are greatly appreciated.
The following command does what you want:
ssh -t user@host 'nohup ~/test.sh foo > nohup.out 2>&1 & p1=$!; tail -f ~/nohup.out & wait $p1'
... test.sh is located in the user's home directory
Explanation:
1.) "ssh -t user#host " ... pretty clear ... starts remote session
2.) "nohup ~/test.sh foo > nohup.out 2>&1" ... starts the test.sh script with nohup in background
3.) "p1=$!;" ... stores the child pid of the previous command in p1
4.) "tail -f ~/nohup.out &" ... tail nohup.out in background to see the output of test.sh
5.) "wait $p1" ... waits for proccess test.sh (which pid is stored in p1) to finish
The above command works even if you interrupt it with Ctrl+C.
You can use
ssh -t localhost "test.sh foo"
to force a tty allocation.
As st0ne suggested, tail fails but does not cause the script to terminate, unlike cat and echo. So there is no need for nohup, redirecting stdout to a temporary file, etc. Just plain and simple:
mv test _test
sleep 10
echo $1 | tail
mv _test test
and execute it with:
ssh localhost "test.sh foo"

pipe timely commands to ssh

I am trying to pipe commands to an open SSH session. The commands will be generated by a script that analyzes the results and sends the next commands accordingly.
I do not want to put all the commands in a script on the remote host and just run that script, because I am also interested in the status of the SSH process: sending the commands locally lets me "test" whether the SSH connection is alive, and get the appropriate return code from the ssh process.
I tried using something along these lines:
$ mkfifo /tmp/commands
$ ssh -t remote </tmp/commands
And from another term:
$ echo "command" >> /tmp/commands
Problem: SSH tells me that no pseudo-tty will be opened for stdin, and closes the connection as soon as "command" terminates.
I tried another approach:
$ ssh -t remote <<EOF
$(echo "command"; while true; do sleep 10; echo "command"; done)
EOF
But then, nothing is flushed to ssh until EOF is reached (in my case, never).
Do any of you have a solution?
Stop closing /tmp/commands before you're done with it. When the last writer closes the pipe, ssh sees end-of-file on its stdin and exits.
exec 7> /tmp/commands   # open once
echo foo >&7            # write multiple times
echo bar >&7
exec 7>&-               # close once
You can additionally use ssh -tt to force ssh to open a tty on the remote.
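Putting the pieces together, a minimal sketch; the host name remote and the example commands are placeholders:
mkfifo /tmp/commands
ssh -tt remote < /tmp/commands &   # the reader keeps the session open until EOF
ssh_pid=$!
exec 7> /tmp/commands              # open the fifo once for writing
echo "uptime" >&7
sleep 10
echo "df -h" >&7
exec 7>&-                          # closing the descriptor ends the remote session
wait $ssh_pid                      # ssh's exit code tells you whether the connection survived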

How to know the PID of a process run by remote ssh

If I run a process at the remote site by using ssh as follows:
nohup ssh remote sleep 100 &
Is there a way to know the PID of sleep at remote?
Trying echo $! just returns the PID of the local ssh.
And grepping won't work, since there are multiple sleep processes on the remote host.
You could pass a string to ssh, so try
nohup ssh remote 'sleep 100 & echo $!'
Try the following
nohup ssh remote 'sleep 100 > out 2> err < /dev/null & echo $!'
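To actually use that PID later, you can capture it locally and feed it back to another ssh call; a sketch, where the host name remote and the sleep command are placeholders:
rpid=$(ssh remote 'sleep 100 > /dev/null 2> /dev/null < /dev/null & echo $!')
echo "remote sleep runs as PID $rpid"
ssh remote "kill $rpid"       # later: stop exactly that process on the remote host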

Resources