Bash script - How to run ssh after another one is connected - linux

I don't have powerful hardware, so I can't start multiple ssh tunnels at the same time without driving the CPU load way too high. My goal is to start each ssh tunnel only after the previous one is connected, and to reconnect if one of them gets disconnected. Basically it's like this:
while true; do
    if (1st ssh isn't connected); then
        connect the first ssh
    elif (1st ssh is finally connected); then
        run the second ssh
    elif (2nd ssh is finally connected); then
        run the 3rd ssh
    fi
    sleep 1
done
The problem is that the number of ssh tunnels keeps changing: sometimes a user wants to run 3 tunnels and sometimes 5. The script is invoked like this:
mytunnel.sh -a [number of tunnels they want to run]
I'm thinking of a for loop, but I just can't figure out how to write this logic inside one. Please help me.

Here is a for loop you can use:
#!/usr/local/bin/bash
LOOP=$1
for (( c=1; c<=$LOOP; c++ ))
do
    echo "$c "
done
Replace echo with your commands and LOOP with whatever command-line arg you'll be using. This example reads command-line arg 1 (i.e. $1).
Example execution:
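For instance, assuming the script above is saved as loop.sh (the filename is just for illustration):

$ bash loop.sh 3
1
2
3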

Tricky. Unfortunately I don't think ssh returns anything when it connects a tunnel, nor does it exit immediately when the connection is broken.
Instead what you probably want to do is make a port monitor that periodically checks that the port is accepting connections and spawns a new ssh tunnel (possibly killing the old ssh process) if it isn't.
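A rough sketch of that approach, assuming each tunnel forwards a distinct local port and that the user, host and port numbers below are placeholders you would replace:

#!/bin/bash
# Usage: mytunnel.sh <number of tunnels>   (argument handling is an assumption)
COUNT=$1
REMOTE=user@example.com              # placeholder remote host

for (( i=1; i<=COUNT; i++ )); do
    PORT=$(( 8000 + i ))             # placeholder local ports 8001, 8002, ...
    # Is anything listening on this tunnel's local port?
    if ! nc -z localhost "$PORT" 2>/dev/null; then
        # No: (re)start this tunnel in the background, then wait until the
        # port answers before moving on to the next tunnel.
        ssh -f -N -L "$PORT:localhost:$PORT" "$REMOTE"
        until nc -z localhost "$PORT" 2>/dev/null; do sleep 1; done
    fi
done

Wrapped in a while true; do ...; sleep 5; done loop (or run from cron), this both brings the tunnels up one after another and re-creates any tunnel whose port stops answering.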

Related

How to use the "watch" command with SSH

I have a script that monitors a specific server, giving me the disk usage, CPU usage, etc. I am using 2 Ubuntu VMs: I run the script on the server over SSH (ssh user@ip < script.sh from the first VM), and I want it to show values in real time, so I tried 2 approaches I found on here:
1. while loop with clear
The first approach is using a while loop with "clear" to make the script run multiple times, giving new values every time and clearing the previous output like so:
while true
do
    clear;
    # bunch of code
done
The problem here is that it doesn't clear the terminal; it just keeps printing the new results one after another.
2. watch
The second approach uses watch:
watch -n 1 Script.sh
This works fine on the local machine (to monitor the current machine where the script is), but I can't find a way to make it run via SSH. Something like
ssh user@ip 'watch -n 1 script.sh'
works in principle, but requires that the script be present on the server, which I want to avoid. Is there any way to run watch for the remote execution (via SSH) of a script that is present on the local machine?
For your second approach (using watch), what you can do instead is to run watch locally (from within the first VM) with an SSH command and piped-in script like this:
watch -n 1 'ssh user@ip < script.sh'
The drawback of this is that it will reconnect in each watch iteration (i.e., once a second), which some server configurations might not allow. See here for how to let SSH re-use the same connection for serial ssh runs.
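Connection re-use is what SSH's ControlMaster feature provides; a sketch of the relevant ~/.ssh/config entry (the host alias, address and timeout are illustrative):

Host monitored-server
    HostName 192.0.2.10              # placeholder address
    User user
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 300               # keep the master connection open for 5 minutes

With something like this in place, each watch iteration reuses the already open master connection instead of doing a full SSH handshake every second.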
But if what you want to do is to monitor servers, what I really recommend is to use a monitoring system like Telegraf.

Linux script for probing ssh connection in a loop and start log command after connect

I have a host machine that gets rebooted or reconnected quite a few times.
I want to have a script running on my dev machine that continuously tries to log into that machine and if successful runs a specific command (tailing the log data).
Edit: To clarify, the connection needs to stay open. The log command keeps tailing until I stop it manually.
What I have so far
#!/bin/bash
IP=192.168.178.1
if (( $# >= 1 ))
then
    IP=$1
fi
LOOP=1
trap 'echo "stopping"; LOOP=0' INT
while (( LOOP == 1 ))
do
    if ping -c1 $IP
    then
        echo "Host $IP reached"
        sshpass -p 'password' ssh -o ConnectTimeout=10 -q user@$IP '<command would go here>'
    else
        echo "Host $IP unreachable"
    fi
    sleep 1
done
The LOOP flag is not really used. The script is ended via CTRL-C.
Now this works if I do NOT add a command to be executed after the ssh and instead start the log output manually. On a disconnect the script keeps probing the connection and logs back in once the host is available again.
Also when I disconnect from the host (CTRL-D) the script will log right back into the host if CTRL-C is not pressed fast enough.
When I add a command to be executed after ssh, the loop is broken: pressing CTRL-C does not only stop the log, it also disconnects and ends the script on the dev machine.
I guess I have to spawn another shell somewhere or something like that?
1) I want the script to keep probing, log in and run a command completely automatically and fall back to probing when the connection breaks.
2) I want to be able to stop the log on the host (CTRL-C) and thereby fall back to a logged in ssh connection to use it manually.
How do I fix this?
Maybe the best approach to "fixing" this is to fix the requirements, because the problematic part is requirement 2).
The problem comes from how SIGINT works.
When triggered, it is sent to the foreground process group of your terminal. Usually that is the shell and any process started from it in the foreground; modern shells (you seem to use bash) manage process groups so that programs started in the background are detached (they are assigned a different process group).
In your case the ssh is started in the foreground (from a script executed in the foreground), so it receives the interrupt, forwards it to the remote side and terminates as soon as the remote end has terminated. Since by that time the script's shell has also processed its own signal handler (the one specified by trap), it exits the loop and terminates as well.
So, as you can see, you have overloaded CTRL-C to mean two things:
terminate the monitoring script
terminate the remote command and continue with whatever is specified for the remote side.
You might get closer to what you want if you drop the first effect (or at least make it more explicit). A first step would then be to call a script on the remote side that terminates only the tail command, not itself. In that case you will likely need the -t switch on ssh to get a terminal allocated, so that normal shell operation is possible afterwards.
This will not allow you to terminate the remote side with just CTRL-C; you will always have to exit the remote shell that is started.
The essence of such a remote script might look like:
    tail command
    shell
Of course you would need to add whatever parts are necessary for your shell or coding style.
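A minimal sketch of that idea, with the host, user and log file path as placeholders: the -t switch allocates a pseudo-terminal, so CTRL-C reaches the remote tail instead of the local loop, and once tail dies you are left in an interactive remote shell.

# host, user and log file are examples only
ssh -t user@192.168.178.1 'tail -f /var/log/syslog; bash'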
An alternative approach would be to keep the current behaviour (the remote command gets terminated) and to add another ssh call, made after the first one is interrupted, that spawns a shell for interactive use. In that case, however, CTRL-C would no longer be available for terminating the monitoring altogether.
To get that back, you might try changing the active interrupt handler within your monitoring script so that it triggers termination as soon as the remote side returns. However, this creates a race between the user recognizing that the remote command has terminated (and that control has returned to the local script) and the proper interrupt handler being put in place. You might be able to lower that risk sufficiently by first activating the new trap handler, then echoing the fact, and maybe adding a sleep to give the user time to react.
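A rough sketch of that ordering, to be placed inside the probing loop right after the first ssh call returns; the handler, message, delay and host are purely illustrative:

trap 'echo "stopping"; exit 0' INT   # arm the new handler first...
echo "remote command ended; press CTRL-C within 3s to stop monitoring"
sleep 3                              # ...then give the user time to react
ssh -t user@$IP                      # no CTRL-C: open the interactive session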
Not really sure what you are saying.
Also, you should disable PasswordAuthentication in /etc/ssh/sshd_config and log in by adding the public key of your home computer to ~/.ssh/authorized_keys on the host.
#!/bin/sh
while true
do
    RESPONSE=$(ssh -i /home/user/.ssh/id_host user@$IP 'tail /home/user/log.txt')
    echo "$RESPONSE"
    sleep 10
done

Bash poweroff script hangs system

My intention is to cycle through my list of IPs and, if the ping succeeds first, power the system off. However the systems seem to hang: after running this script I can't ping them anymore, but they aren't powered off either. If I run ssh 192.168.1.ip "sudo poweroff" from a terminal I don't encounter this issue. Any advice?
for (( ip=40, cnt=0; ip<=max; ip++, cnt++ ))
do
    if ping -c 1 192.168.1.$ip &> /dev/null
    then
        printf "\n${array[$cnt]}: Ping Successful"
        ssh 192.168.1.$ip "sudo poweroff"
        printf "\n${array[$cnt]}: Power Down Executed\n"
        sleep 1
    else
        printf "\n${array[$cnt]}: Ping Failed\n"
    fi
done
After running a single ssh 192.168.1.40 "sudo poweroff" the system is properly powered off. When running my script I am unable to ping the systems, yet I can visually see that the fans and LEDs are still on. I think I should use a KVM to take a closer look, since ssh doesn't allow a connection after this script is run. Still, at first glance I don't understand how running ssh 192.168.1.40 "sudo poweroff" by hand and running it through my script really makes a difference. Anyway, I'll try to add more information tomorrow.
ssh 192.168.1.$ip "(sleep 5; sudo poweroff)&" to put the process in the background on the remote host and sleep for 5 seconds before powering off to give time for the script to complete and exit the remote host before it goes down... – David C. Rankin
This Resolved my Issue.
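For reference, a sketch of how that suggestion fits into the loop above; only the ssh line changes, everything else stays the same:

if ping -c 1 192.168.1.$ip &> /dev/null
then
    printf "\n${array[$cnt]}: Ping Successful"
    # background the poweroff on the remote host and delay it, so the ssh
    # session can exit cleanly before the machine actually goes down
    ssh 192.168.1.$ip "(sleep 5; sudo poweroff)&"
    printf "\n${array[$cnt]}: Power Down Executed\n"
    sleep 1
else
    printf "\n${array[$cnt]}: Ping Failed\n"
fi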

Run a script if no SSH connection

I'm trying to get a script to run at startup, but do nothing if I've connected to my Raspberry Pi via SSH.
So far I've got the crontab to automatically run the script checkssh.sh via @reboot sleep 30 && sudo bash ./checkssh.sh, and checkssh.sh contains this:
#!/bin/bash
if [ -n "$SSH_CLIENT" ] || [ -n "$SSH_TTY" ]; then
    echo "SSH CONNECTED"
else
    ./autobackup.sh
fi
Running checkssh.sh from an SSH terminal returns 'SSH CONNECTED' which is expected, and letting it run automatically from the crontab at reboot when SSH isn't connected works correctly. However, when it runs at boot and I connect via SSH as soon as it's available, it still runs the script. I'm not sure where this is going wrong.
I need it to run automatically and, if there's no SSH connection, run autobackup.sh, but if there is an SSH connection, not run anything. The device I use for the SSH connection may vary and the network used may also vary, so a script that relies on specific IPs isn't ideal.
Thanks for any help :)
Those environment variables (SSH_CLIENT and SSH_TTY) are only set in the environment of an SSH session. You cannot check them from another process and expect them to fulfill your goals here.
Instead, run the program finger. This is the standard way to see who is logged in.
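A minimal sketch of checkssh.sh along those lines; grepping for pts/ is an assumption (SSH logins show up on pseudo-terminals), and who can be used the same way if finger is not installed:

#!/bin/bash
# Is anyone logged in on a pseudo-terminal (i.e. via SSH)?
if finger | grep -q 'pts/'; then
    echo "SSH CONNECTED"
else
    /home/pi/autobackup.sh    # full path is an assumption
fi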
Probably you need to add some delay before running your script to allow for the SSH service to come up. If the cron service comes up before sshd does, you will have a failure. Try:
@reboot sleep 60 && bash ./checkssh.sh
Also, I would substitute the '.' with the full script path. In one scenario I had to add as many as 120 seconds to get the @reboot crontab to work right, but ssh should not need as much. I guess you can trim the 60 seconds according to your needs after you get it working.
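For example, assuming the script lives in /home/pi (the path is just an illustration), the crontab line becomes:

@reboot sleep 60 && bash /home/pi/checkssh.sh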

SSH Persistent Connection Timeout

I set up an ssh tunnel using a bash script, and the tunnel is configured as a shared, persistent connection.
At the end of the script I invoke a close command against the tunnel and delete the .ssh/config file, so that neither the tunnel nor subsequent ssh sessions started manually by a user stay open.
The question is this: what is the best way to make sure the tunnel is closed if someone presses CTRL+C or the script crashes before it reaches the close command and deletes the config file? I was going to add a timeout to the control master, but I cannot work out from the ssh_config man page which option I need.
Try to use trap:
#!/bin/bash
on_sigint(){
    echo "this function is called on ctrl+c"
}
trap "on_sigint" SIGINT SIGTERM
echo start
# ... do what you want ...
echo stop
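In this case the handler could invoke the tunnel's close command and remove the config file; a sketch, assuming the control socket lives at ~/.ssh/cm-socket and the tunnel uses a Host alias called tunnel (both names are placeholders):

#!/bin/bash
cleanup(){
    # close the shared master connection and remove the generated config,
    # whether we exit normally, via CTRL+C or via a crash that still lets
    # the EXIT trap run
    ssh -O exit -S ~/.ssh/cm-socket tunnel 2>/dev/null
    rm -f ~/.ssh/config
}
trap cleanup EXIT INT TERM

# ... set up the tunnel and do the actual work here ...

Alternatively, ControlPersist in the SSH config accepts a time value (for example ControlPersist 600), after which an idle master connection closes itself even if the script never gets to clean up.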
