RSH: Running out of ports - Linux

I have an issue where I am running out of ports when using RSH to start a script remotely.
I have a script that I need to run that has been pushed out to every server.
I have a list of servers (hostfilelist).
Basically, I have a simple loop that runs the script on all of them in parallel:
for host in `cat hostfilelist`; do
    rsh $host ksh script.ksh &
done
The problem is that there are about 2,000 servers and I am hitting a limit of 512 (assuming the port range for RSH is 512-1023, based on documents I have read).
How can I get around this?

With your code you would not only run into the "secure" port limitation of rsh, but you may also hit a file descriptor limit (check with ulimit -n); each network connection consumes a file descriptor as well.
What your code does is go through hostfilelist and, for each host, run an rsh command that is put into the background (on the source server) with the ampersand. Each of these connections is kept open in the background until the script on the remote host finishes.
You are much better off in this situation putting the execution of the script into the background on each remote host, so that your rsh command returns immediately after starting the remote job and thus frees up the network connection (and port) again. To do so, rewrite the second line of your loop as
rsh $host "ksh script.ksh &"
However, you may still run into issues with port reuse (see the TIME_WAIT state in netstat output) if things happen too fast.
And I'd strongly recommend letting go of rsh and using ssh instead.
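Putting those pieces together, a minimal sketch of the rewritten loop could look like this (assuming key-based ssh authentication is already set up for every host; the nohup and output redirection keep the ssh session from waiting on the backgrounded remote job):
#!/bin/ksh
# Sketch: ssh instead of rsh, script backgrounded on each remote host so the
# local connection (and port) is released as soon as the job has been started.
while read host; do
    ssh -n "$host" "nohup ksh script.ksh >/dev/null 2>&1 &"
done < hostfilelist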

Related

How to use the "watch" command with SSH

I have a script that monitors a specific server, giving me the disk usage, CPU usage, etc. I am using 2 Ubuntu VMs: I run the script on the server using SSH (ssh user@ip < script.sh from the first VM), and I want it to show values in real time, so I tried 2 approaches I found on here:
1. while loop with clear
The first approach is using a while loop with "clear" to make the script run multiple times, giving new values every time and clearing the previous output like so:
while true
do
    clear;
    # bunch of code
done
The problem here is that it doesn't clear the terminal, it just keeps printing the new results one after another.
2. watch
The second approach uses watch:
watch -n 1 Script.sh
This works fine on the local machine (to monitor the current machine where the script is), but I can't find a way to make it run via SSH. Something like
ssh user@ip 'watch -n 1 script.sh'
works in principle, but requires that the script be present on the server, which I want to avoid. Is there any way to run watch for the remote execution (via SSH) of a script that is present on the local machine?
For your second approach (using watch), what you can do instead is to run watch locally (from within the first VM) with an SSH command and piped-in script like this:
watch -n 1 'ssh user@ip < script.sh'
The drawback of this is that it will reconnect in each watch iteration (i.e., once a second), which some server configurations might not allow. See here for how to let SSH re-use the same connection for serial ssh runs.
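For reference, that connection re-use is done with SSH multiplexing. A sketch of a ~/.ssh/config entry (the host alias, address and timeout are example values, not from the question):
Host monitored-vm
    HostName 192.168.1.10
    User user
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 60
With that in place, watch -n 1 'ssh monitored-vm < script.sh' re-uses the already open master connection instead of doing a full reconnect every second.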
But if what you want to do is monitor servers, what I really recommend is using a monitoring system like Telegraf.

Linux script for probing ssh connection in a loop and start log command after connect

I have a host machine that gets rebooted or reconnected quite a few times.
I want to have a script running on my dev machine that continuously tries to log into that machine and if successful runs a specific command (tailing the log data).
Edit: To clarify, the connection needs to stay open. The log command keeps tailing until I stop it manually.
What I have so far
#!/bin/bash
IP=192.168.178.1
if (("$#" >= 1))
then
IP=$1
fi
LOOP=1
trap 'echo "stopping"; LOOP=0' INT
while (( $LOOP==1 ))
do
if ping -c1 $IP
then
echo "Host $IP reached"
sshpass -p 'password' ssh -o ConnectTimeout=10 -q user#$IP '<command would go here>'
else
echo "Host $IP unreachable"
fi
sleep 1
done
The LOOP flag is not really used. The script is ended via CTRL-C.
Now this works if I do NOT add a command to be executed after the ssh and instead start the log output manually. On a disconnect the script keeps probing the connection and logs back in once the host is available again.
Also when I disconnect from the host (CTRL-D) the script will log right back into the host if CTRL-C is not pressed fast enough.
When I add a command to be executed to the ssh call, the loop is broken: pressing CTRL-C not only stops the log but also disconnects and ends the script on the dev machine.
I guess I have to spawn another shell somewhere or something like that?
1) I want the script to keep probing, log in and run a command completely automatically and fall back to probing when the connection breaks.
2) I want to be able to stop the log on the host (CTRL-C) and thereby fall back to a logged in ssh connection to use it manually.
How do I fix this?
Maybe the best approach to "fixing" this would be to fix the requirements.
The problematic part is number "2)".
The problem stems from how SIGINT works.
When triggered, it is sent to the foreground process group of your terminal. Usually this is the shell and any process started from there. With more modern shells (you seem to use bash), the shell manages process groups such that programs started in the background are disconnected (by having been assigned to a different process group).
In your case the ssh is started in the foreground (from a script executed in the foreground), so it will receive the interrupt, forward it to the remote side and terminate as soon as the remote end has terminated. Since by that time the script's shell has also processed its own signal handler (specified by trap), it is going to exit the loop and terminate itself.
So, as you can see, you have overloaded CTRL-C to mean two things:
terminate the monitoring script
terminate the remote command and continue with whatever is specified for the remote side.
You might get closer to what you want if you drop the first effect (or at least make it more explicit). The next step would then be to call a script on the remote side that does not terminate itself when just the tail command is interrupted. In that case you will likely need the -t switch on ssh to get a terminal allocated, allowing normal shell operation later.
This will not allow terminating the remote side with just CTRL-C; you will always need to exit the remote shell that is going to be run.
The essence of such a remote script might look like:
tail command
shell
Of course you would need to add whatever parts are necessary for your shell or coding style.
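A minimal sketch of what that could look like (the script name and log path are placeholders, not taken from the question):
#!/bin/bash
# remote-monitor.sh (hypothetical name), started via: ssh -t user@$IP './remote-monitor.sh'
# CTRL-C only interrupts the tail running in the foreground here;
# afterwards control drops into an interactive shell for manual use.
tail -f /var/log/mylog.txt
exec bash -i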
An alternative approach would be to keep the current remote command being terminated and add another ssh call, for the case of being interrupted, that spawns the shell for interactive use. But in that case, too, CTRL-C will not be available for terminating the monitoring altogether.
To achieve this you might try changing the active interrupt handler in your monitoring script so that it triggers termination as soon as the remote side returns. However, this will cause a race condition between the user recognizing that the remote command has terminated (and that control has returned to the local script) and the proper interrupt handler being in place. You might be able to lower that risk sufficiently by first activating the new trap handler, then echoing the fact, and maybe adding a sleep to allow the user to react.
Not really sure what you are saying.
Also, you should disable PasswordAuthentication in /etc/ssh/sshd_config and log in by adding the public key of your home computer to ~/.ssh/authorized_keys.
#!/bin/sh
while true
do
    RESPONSE=`ssh -i /home/user/.ssh/id_host user@$IP 'tail /home/user/log.txt'`
    echo "$RESPONSE"
    sleep 10
done

Bash script - How to run ssh after another one is connected

I don't have powerful hardware, so I can't run multiple ssh tunnels at the same time or the CPU load will go way too high. My goal is to start an ssh tunnel after the previous one is connected, and to reconnect if one of my tunnels gets disconnected. So basically it's like this:
while true; do
    if (1st ssh isn't connected); then
        connect the first ssh
    elif (1st ssh is finally connected); then
        run the second ssh
    elif (2nd ssh is finally connected); then
        run the 3rd ssh
    fi
    sleep 1
done
The problem is that the number of ssh tunnels keeps changing; sometimes a user wants to run 3 ssh tunnels and sometimes 5. Running the script looks like this:
mytunnel.sh -a [number of tunnels they wanna run]
I'm thinking of a for loop but I just can't figure out how to write this inside a for loop. Please help me.
Here is a for loop you can use:
#!/usr/local/bin/bash
LOOP=$1
for (( c=1; c<=$LOOP; c++ ))
do
echo "$c "
done
Replace echo with your commands and LOOP with whatever command-line arg you'll be using. This example reads command-line arg 1 (i.e. $1).
Example execution:
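For example, assuming the script is saved as mytunnel.sh and made executable:
$ ./mytunnel.sh 3
1
2
3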
Tricky. Unfortunately I don't think ssh returns anything when it connects a tunnel, nor does it exit immediately when the connection is broken.
Instead what you probably want to do is make a port monitor that periodically checks that the port is accepting connections and spawns a new ssh tunnel (possibly killing the old ssh process) if it isn't.
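A rough sketch of such a port monitor (the local port, tunnel specification and remote host are placeholders, and it assumes a netcat with the -z option is installed):
#!/bin/bash
# Restart the tunnel whenever its local end stops accepting connections.
LOCAL_PORT=8080
while true; do
    if ! nc -z localhost "$LOCAL_PORT" 2>/dev/null; then
        pkill -f "ssh -N -L $LOCAL_PORT:" 2>/dev/null    # drop a stale ssh, if any
        ssh -N -L "$LOCAL_PORT:localhost:80" user@remotehost &
    fi
    sleep 5
done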

About the internals of nohup ssh

I mindlessly use nohup ssh for issuing a remote ssh command without worrying about accidental hangups. Now I'm starting to think about it and it is not entirely clear.
What I'm wondering is why just doing "ssh remote sleep 100 &" stops the job after a few seconds. For instance,
$ ssh remote sleep 100 &
[1] 13358
$
[1]+ Stopped ssh remote sleep 100
By what reason is this job stopped? Could you explain the internals of this job control?
If you want the remote command to keep working until it's finished (and not depend on the ssh connection with the remote host), you could use screen (or tmux).
connect using ssh to the remote host
once connected: run screen to start a screen session (a kind of "virtual terminal" that will keep running until you close it, instead of depending on your own connection to it)
you can then detach from screen (ctrl-a d) and re-attach to it later (from another machine, etc): just ssh again, "screen -ls" to list screens, and "screen -r" to re-attach to one. Read about screen on the net.
The reason your job is stopped is not linked to the command, but to the internals of job handling. Some good information can be found at http://www.linusakesson.net/programming/tty/ (search for "background" if you don't read the whole thing. But read the whole thing ^^). In a nutshell: reading from the TTY from a background job will cause a SIGTTIN that suspends the entire process group (and writing to it can cause a SIGTTOU if the terminal's tostop option is set). Maybe your ssh asked for a password, or otherwise tried to read from the terminal when connecting?
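As a quick illustration, keeping the backgrounded ssh away from the terminal usually avoids the stop (a sketch, not a replacement for screen or nohup):
ssh -n remote sleep 100 &    # stdin comes from /dev/null, so ssh never reads the TTY
ssh -f remote sleep 100      # or let ssh background itself after authentication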
The advantages of screen over running the command on the remote host using nohup are numerous. The main one is that if you try to re-connect to a nohup'ed program (e.g. vi) it can't (easily) be done, especially if it is a full-screen program. But when you re-attach to a screen session, you see the (virtual) terminal as if you never left it (i.e. it is updated if the command wrote things to the screen, and it still has its rows/columns, etc).
Several people can also work on the same terminal (or some can view it while one works in it).
Etc.
The command
ssh remote sleep 100 &
only runs ssh in the background. Once ssh is started on the local machine, control returns to the local shell, regardless of what is running (via sshd) on the remote end.

Execute script on remote host - output given in local host

I am trying to execute two scripts which are available as .sh files on a remote host, with 755 permissions.
I try calling them from the client host as below:
REMOTE_HOST="host1"
BOUNCE_SCRIPT="
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/stopScript.sh ${ENV};
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/startScript.sh ${ENV};
"
ssh ${REMOTE_HOST} "${BOUNCE_SCRIPT}"
Above lines are in a script on local host.
While running the script on the local host, the first command on the remote host, i.e. stopScript.sh, gets executed correctly. It kills the running process which it was intended to kill without any error.
However, the output of the second script, i.e. startScript.sh, gets printed to the local host window, but the process it is intended to start does not start on the remote host.
Can anyone please let me know:
Is this the correct way of executing scripts on a remote host?
Should I see the output of a script running on the remote host locally as well, i.e. in the window of the local host?
Thanks
You could try the -n flag for ssh:
ssh -n $REMOTE_HOST "$BOUNCE_SCRIPT" >> $LOG
The man page has further information (http://unixhelp.ed.ac.uk/CGI/man-cgi?ssh+1). The following is a snippet:
-n    Redirects stdin from /dev/null (actually, prevents reading from stdin).
Prefacing your startScript.sh line with nohup may help. Oftentimes, remotely executed commands will die when your ssh session ends; nohup allows your process to live after the session has ended. It would be helpful to know if your process is starting at all or if it starts and then dies.
I think cyber-monk is right: you should launch the processes with nohup to create a new independent process. Check whether your stop script is killing the right process (the new one included).
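Combining both suggestions, the bounce script from the question could be adjusted roughly like this (the log file location is just an example):
BOUNCE_SCRIPT="
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/stopScript.sh ${ENV};
nohup /code/sys/${ENV}/comp/1/${ENV}/scripts/unix/startScript.sh ${ENV} > /tmp/startScript.log 2>&1 &
"
ssh -n ${REMOTE_HOST} "${BOUNCE_SCRIPT}"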
