How to use the "watch" command with SSH - linux

I have a script that monitors a specific server, giving me the disk usage, CPU usage, etc. I am using 2 Ubuntu VMs: I run the script on the server over SSH (ssh user@ip < script.sh from the first VM), and I want it to show values in real time, so I tried 2 approaches I found on here:
1. while loop with clear
The first approach is using a while loop with "clear" to make the script run multiple times, giving new values every time and clearing the previous output like so:
while true
do
    clear
    # bunch of code
done
The problem here is that it doesn't clear the terminal; it just keeps printing the new results one after another.
2. watch
The second approach uses watch:
watch -n 1 ./script.sh
This works fine on the local machine (to monitor the machine the script is on), but I can't find a way to make it run via SSH. Something like
ssh user@ip 'watch -n 1 script.sh'
works in principle, but requires that the script be present on the server, which I want to avoid. Is there any way to run watch for the remote execution (via SSH) of a script that is present on the local machine?

For your second approach (using watch), what you can do instead is to run watch locally (from within the first VM) with an SSH command and piped-in script like this:
watch -n 1 'ssh user@ip < script.sh'
The drawback of this is that it will reconnect in each watch iteration (i.e., once a second), which some server configurations might not allow. See here for how to let SSH re-use the same connection for serial ssh runs.
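One way to set up that connection re-use is a ControlMaster entry in ~/.ssh/config (the host alias and socket path below are placeholders):

```
Host monitored
    HostName ip
    User user
    ControlMaster auto
    ControlPath ~/.ssh/ctl-%r@%h:%p
    ControlPersist 10m
```

With this in place, watch -n 1 'ssh monitored < script.sh' opens one master connection on the first iteration, and every later iteration reuses it instead of re-authenticating.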
But if what you want to do is to monitor servers, what I really recommend is to use a monitoring system like 'telegraf'.

Related

Linux script for probing ssh connection in a loop and start log command after connect

I have a host machine that gets rebooted or reconnected quite a few times.
I want to have a script running on my dev machine that continuously tries to log into that machine and if successful runs a specific command (tailing the log data).
Edit: To clarify, the connection needs to stay open. The log command keeps tailing until I stop it manually.
What I have so far
#!/bin/bash
IP=192.168.178.1
if (( $# >= 1 ))
then
    IP=$1
fi
LOOP=1
trap 'echo "stopping"; LOOP=0' INT
while (( LOOP == 1 ))
do
    if ping -c1 "$IP"
    then
        echo "Host $IP reached"
        sshpass -p 'password' ssh -o ConnectTimeout=10 -q user@$IP '<command would go here>'
    else
        echo "Host $IP unreachable"
    fi
    sleep 1
done
The LOOP flag is not really used. The script is ended via CTRL-C.
Now this works if I do NOT add a command to be executed after the ssh and instead start the log output manually. On a disconnect the script keeps probing the connection and logs back in once the host is available again.
Also when I disconnect from the host (CTRL-D) the script will log right back into the host if CTRL-C is not pressed fast enough.
When I add a command to be executed after ssh, the loop is broken: pressing CTRL-C not only stops the log but also disconnects and ends the script on the dev machine.
I guess I have to spawn another shell somewhere or something like that?
1) I want the script to keep probing, log in and run a command completely automatically and fall back to probing when the connection breaks.
2) I want to be able to stop the log on the host (CTRL-C) and thereby fall back to a logged in ssh connection to use it manually.
How do I fix this?
Maybe the best approach to "fixing" this would be fixing the requirements.
The problematic part is number "2)".
The problem comes from how SIGINT works.
When triggered, it is sent to the foreground process group of your terminal. Usually that is the shell and any process started from it. More modern shells (you seem to use bash) manage process groups so that programs started in the background are disconnected (by being assigned a different process group).
In your case the ssh is started in the foreground (from a script executed in the foreground), so it receives the interrupt, forwards it to the remote side, and terminates as soon as the remote end terminates. Since by that time the script's shell has run its own signal handler (set by trap), it exits the loop and terminates as well.
So, as you can see, you have overloaded CTRL-C to mean two things:
terminate the monitoring script
terminate the remote command and continue with whatever is specified for the remote side.
You might get closer to what you want if you drop the first effect (or at least make it more explicit). The next step is then to call a script on the remote side that does not terminate itself, only the tail command. In that case you will likely need the -t switch on ssh to get a terminal allocated, so that normal shell operation is possible afterwards.
This will not allow terminating the remote side with just CTRL-C; you will always need to exit the remote shell that is left running.
The essence of such a remote script might look like:
tail -f /path/to/logfile
exec "$SHELL"
of course you would need to add whatever parts will be necessary for your shell or coding style.
An alternative approach would be to keep the current behaviour (the remote command being terminated) and add another ssh call, for the interrupted case, that spawns the shell for interactive use. But then CTRL-C will likewise not be available for terminating the monitoring altogether.
To achieve this you might try changing the active interrupt handler within your monitoring script so that it triggers termination as soon as the remote side returns. However, this causes a race condition between the user recognizing that the remote command terminated (and that control has returned to the local script) and the proper interrupt handler being in place. You might lower that risk sufficiently by first activating the new trap handler, then echoing the fact, and maybe adding a sleep to give the user time to react.
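The handler-swapping idea can be sketched without the ssh part (all names here are illustrative, and a short sleep stands in for the remote command):

```shell
#!/bin/bash
# Sketch: arm a different INT handler only in the window after the
# foreground command returns, so CTRL-C there stops the monitoring loop
# instead of only killing the remote command.
STOP=0

run_remote() {
    # stand-in for: ssh -t user@host 'tail -f /path/to/log'
    sleep 0.1
}

iterations=0
while (( STOP == 0 && iterations < 3 )); do
    run_remote
    trap 'STOP=1' INT            # arm: CTRL-C now requests loop exit
    echo "remote command ended; CTRL-C in this window stops monitoring"
    sleep 0.2                    # give the user a moment to react
    trap - INT                   # disarm: back to default behaviour
    iterations=$(( iterations + 1 ))
done
echo "loop iterations: $iterations"
```

In the real script, run_remote would be the ssh call and the sleep window would be long enough to actually press CTRL-C.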
Not really sure what you are saying.
Also, you should disable PasswordAuthentication in /etc/ssh/sshd_config and log in by adding the public key of your home computer to ~/.ssh/authorized_keys
#!/bin/sh
while true
do
    RESPONSE=$(ssh -i /home/user/.ssh/id_host user@$IP 'tail /home/user/log.txt')
    echo "$RESPONSE"
    sleep 10
done

Run a script if no SSH connection

I'm trying to get a script to run at startup, but does nothing if I've connected to my Raspberry Pi via SSH.
So far I've got the crontab to automatically run the script checkssh.sh via @reboot sleep 30 && sudo bash ./checkssh.sh, and checkssh.sh contains this:
#!/bin/bash
if [ -n "$SSH_CLIENT" ] || [ -n "$SSH_TTY" ]; then
    echo "SSH CONNECTED"
else
    ./autobackup.sh
fi
Running checkssh.sh from an SSH terminal returns 'SSH CONNECTED' which is expected, and letting it run automatically from the crontab at reboot when SSH isn't connected works correctly. However, when it runs at boot and I connect via SSH as soon as it's available, it still runs the script. I'm not sure where this is going wrong.
I need it to run automatically and if there's no SSH connection run autobackup.sh , but if there is an SSH connection, not to run anything. The device I use for the SSH connection may vary & the network used may also vary, so a script that relies on specific IP's isn't ideal.
Thanks for any help :)
Those environment variables (SSH_CLIENT and SSH_TTY) are only set inside the environment of an SSH session. You cannot check them from another process (such as your cron job) and expect them to tell you whether anyone is connected.
Instead, run the program finger (or who). This is the standard way to see who is logged in.
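A sketch of checkssh.sh along those lines, assuming that any session on a pseudo-terminal (pts/N) counts as an SSH login — both finger and who will list those:

```shell
#!/bin/bash
# Decide based on who's output instead of the SSH_* variables, which
# only exist inside the SSH session itself. SSH logins show up on
# pseudo-terminals (pts/0, pts/1, ...); a local console is tty1 etc.
if who | grep -q 'pts/'; then
    echo "SSH CONNECTED"
else
    echo "no SSH session, safe to back up"
    # ./autobackup.sh
fi
```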
Probably you need to add some delay before running your script, to allow the SSH service to come up. If the cron service comes up before sshd does, you will have a failure. Try:
@reboot sleep 60 && bash ./checkssh.sh
Also I would substitute the '.' with the full script path. In one scenario I had to add as many as 120 seconds to get the @reboot crontab to work right, but ssh should not need that much. You can trim the 60 seconds according to your needs once it is working.

Run a command on multiple Linux servers

One of my tasks at work is to check the health/status of multiple Linux servers every day. I'm thinking of a way to automate this task (without having to log in to each server every day). I'm a newbie system admin, by the way. Initially, my idea was to set up a cron job that would run scripts and email the output. Unfortunately, it's not possible to send mail from the servers at the moment.
I was thinking of running the commands in parallel, but I don't know how. For example, how can I see the output of df -h without logging in to the servers one by one?
You can pass a command to ssh to open a session, run the command and then close the session (the -t flag allocates a terminal, in case the remote command needs one). But to get this fully automated you should automate the login process to every server, so that you don't need to type the password for each one.
So to run df -h on a remote server and then close the session you would run ssh -t root@server.com "df -h". Then you can process that output however you want.
One way of automating this could be to write a bash script that runs this command for every server and process the output to check the health of the server.
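A minimal sketch of such a script (the host list is a placeholder, and it assumes key-based login is already set up so no password prompt appears):

```shell
#!/bin/bash
# Run the same health-check command on every server in the list and
# label each block of output with the host it came from.
SERVERS="web1.example.com web2.example.com db1.example.com"

for host in $SERVERS; do
    echo "=== $host ==="
    ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$host" 'df -h; uptime' \
        || echo "could not reach $host"
done
```

BatchMode=yes makes ssh fail immediately instead of prompting for a password, so an unreachable or misconfigured server cannot hang the loop.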
For further information about the -t flag, or how to automate the ssh login process, see:
https://www.novell.com/coolsolutions/tip/16747.html
https://serverfault.com/questions/241588/how-to-automate-ssh-login-with-password
You can use ssh tunnels, or just plain ssh, for this purpose. With an ssh tunnel you can redirect the output to your machine; alternatively, you can run ssh with the remote commands from your machine and get the output on your machine too.
Please check the following pages for further reading:
http://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html
https://www.google.hu/amp/s/www.cyberciti.biz/faq/unix-linux-execute-command-using-ssh/amp/
If you want to avoid manual login, use ssh keys.
Create a file /etc/sxx/hosts
populate like so:
[grp_ips]
1.1.1.1
2.2.2.2
3.3.3.3
Share your ssh key with all the machines.
Install sxx from package:
https://github.com/ericcurtin/sxx/releases
Then run command like so:
sxx username@grp_ips "whatever bash command"

Bash script - How to run ssh after another one is connected

I don't have powerful hardware, so I can't run multiple ssh tunnels at the same time or the CPU load goes way too high. My goal is to run an ssh tunnel after another one is connected, and to reconnect if one of my tunnels gets disconnected. Basically it's like this:
while true; do
    if (1st ssh isn't connected); then
        connect the first ssh
    elif (1st ssh is finally connected); then
        run the second ssh
    elif (2nd ssh is finally connected); then
        run the 3rd ssh
    fi
    sleep 1
done
The problem is that the number of ssh tunnels keeps changing: sometimes a user wants to run 3 tunnels and sometimes 5. The script is run like this:
mytunnel.sh -a [number of tunnels they wanna run]
I'm thinking of a for loop, but I just can't figure out how to write it. Please help me.
Here is a for loop you can use:
#!/usr/local/bin/bash
LOOP=$1
for (( c=1; c<=LOOP; c++ ))
do
    echo "$c "
done
Replace echo with your commands and LOOP with whatever command-line arg you'll be using. This example reads command-line arg 1 (i.e. $1).
Example execution (with 3 as the argument):
$ bash script.sh 3
1
2
3
Tricky. Unfortunately I don't think ssh returns anything when it connects a tunnel, nor does it exit immediately when the connection is broken.
Instead what you probably want to do is make a port monitor that periodically checks that the port is accepting connections and spawns a new ssh tunnel (possibly killing the old ssh process) if it isn't.
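A sketch of that port-monitor idea, using bash's built-in /dev/tcp so no extra tools are needed (the ports and the commented-out tunnel command are placeholders):

```shell
#!/bin/bash
# port_open succeeds if something is listening on the given local port.
port_open() {
    (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

# Check every tunnel port once; report (and respawn) the dead ones.
monitor_once() {
    for port in "$@"; do
        if ! port_open "$port"; then
            echo "tunnel on port $port is down, restarting"
            # ssh -f -N -L "$port:localhost:$port" user@remote
        fi
    done
}

# In the real script this would run in a loop, e.g.:
#   while true; do monitor_once 8081 8082 8083; sleep 5; done
monitor_once 8081
```

This also satisfies the "one tunnel after another" requirement naturally: a tunnel whose port is not yet open just gets (re)spawned on the next pass.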

Unix : Run a script on a different machine at a particular time

I am working on a script which gets the script name, the time to run that script, and the login/host name from a configuration file.
I don't have cron, at, or crontab permission.
Is there any other way to run a script at a time given in a configurable file, from another script running on a different host?
In Detail:
It is like script_A reads a configuration file from which it gets three inputs: script_B, a time to run it (ddmmyyyy h24:mm:ss), and login1@machine1. script_B has to be run at the given time on the given host.
None of the connected machines have cron, crontab, or at permissions.
I am using Solaris.
Can we have something like this in unix: script_A creates a script_C which wraps script_B with a check on the time parameter. This script_C is copied to the remote machine and keeps running there in the background until the given time is reached. Once the time has come, it executes script_B (located on the remote host, per the config file) and exits.
Thanks.
If you want to execute command foo at epoch time xxxxxxxxxx on host, you could do:
$ delay=$((xxxxxxxxxx - $( date +%s ))); test $delay -gt 0 &&
sleep $delay && ssh host foo
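The same idea can start from a human-readable timestamp out of the config file, assuming GNU date is available (stock Solaris date has no -d, so there the timestamp would have to be converted some other way; the target time and the commented-out command are placeholders):

```shell
#!/bin/bash
# Convert a configured wall-clock time to an epoch, then sleep until it.
target="2030-01-01 12:00:00"
epoch=$(date -d "$target" +%s)
delay=$(( epoch - $(date +%s) ))
if (( delay > 0 )); then
    echo "waiting $delay seconds"
    # sleep "$delay" && ssh login1@machine1 script_B
else
    echo "target time already passed"
fi
```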
The simplest method is to compile cron from source and deploy it on the target machine. Every time your code gets any kind of control over the machine, check if your cron daemon is running (classic PID file) and start it if necessary.
A warning, though:
This is a technical solution to a social problem; your mileage will be low.
