Linux script to monitor remote port and launch script if not successful

RHEL 7.1 is the OS this will be used on.
I have two servers which are identical (A and B). Server B needs to monitor a port on Server A and if it's down for 30 seconds, launch a script. I read netcat was replaced with ncat on RHEL 7 so this is what I have so far:
#!/bin/bash
Server=10.0.0.1
Port=123
ncat $Server $Port &> /dev/null; echo $?
If the port is up, the output is 0. If the port is down, the output is 1. I'm just not sure how to do the next part, which would be "if down for 30 seconds, then launch x script".
Any help would be appreciated. Thanks in advance.

If you really want to script this rather than using a dedicated tool like Pacemaker as @CharlesDuffy suggested, then you could do something like this:
Run an infinite loop
Check the port
If up, save the timestamp
Otherwise check the difference from the last saved timestamp
If more time has passed than the threshold, run the script
Sleep a bit
For example:
#!/bin/bash
server=10.0.0.1
port=123
seconds=30
seen=$(date +%s)
while :; do
    now=$(date +%s)
    if ncat "$server" "$port" &> /dev/null; then
        seen=$now
    else
        if ((now - seen > seconds)); then
            run-script && exit
        fi
    fi
    sleep 1
done
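If you want the monitor to keep running after you log out, one option (not part of the answer above) is to start it in the background with nohup; the script name and log path here are only examples:
nohup /usr/local/bin/port-monitor.sh >> /var/log/port-monitor.log 2>&1 &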

An alternative that only checks every 30 seconds and launches the script after two consecutive failed checks:
#!/bin/bash
Server=10.0.0.1
Port=123
port_was_down=0
while true; do
    sleep 30
    if ! ncat "$Server" "$Port" &> /dev/null; then
        if [[ $port_was_down == "1" ]]; then
            run-script
            exit
        else
            port_was_down=1
        fi
    else
        port_was_down=0
    fi
done

What about using nmap?
Something like:
TIMEOUT=30s
HOST=10.0.0.1
PORT=123
if nmap --max-rtt-timeout $TIMEOUT --min-rtt-timeout $TIMEOUT -p $PORT $HOST | grep "^$PORT.*open"; then
    echo 'OPEN'
else
    echo 'CLOSED'
fi

Related

Linux shell (sh) CLI test if ping successful

How do I wire a linux shell (sh) script to test with ping whether a host is reachable?
I guess there could be a solution that uses grep, but maybe ping provides that option by itself?
I am more interested in whitelisting a successful ping that actually reached the host than in checking for errors; I don't care about the reason ping failed to reach a host.
I would also like to limit the number of ping attempts and the maximum time to reach the host, so the script does not wait too long for ping to reach a host.
dt=$(date +%d)
cksize=50
echo "Start $(date)"
while IFS= read -r sn
do
    echo "*************************************************"
    echo "Begin checking NODES client: $sn"
    if ping -c 1 "$sn" -i 5 > /dev/null
    then
        echo "$sn node up"
    else
        echo "$sn node down"
    fi
done < server_list
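If the goal is just to cap the attempt count and the overall wait, Linux iputils ping can do that by itself: -c limits the number of echo requests and -w sets an overall deadline in seconds. A minimal sketch along those lines (the counts are only examples, and $sn is the host variable from the loop above):
if ping -c 3 -w 5 "$sn" > /dev/null 2>&1; then
    echo "$sn node up"
else
    echo "$sn node down"
fi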
You can do it like this; it will check all the hosts in parallel:
parallel -j0 --timeout 15 'ping -c 5 -i 0.2 {} >/dev/null 2>&1 && echo {} up || echo {} down' ::: freenetproject.org debian.org no-such.domain {1..254}.2.3.4
#!/bin/bash
for server in 'google.com' 'github.com' 'fakeserver.com'
do
    # ping -o exits after the first reply; this is the BSD/macOS flag (on Linux, ping -c 1 is the rough equivalent)
    { ping -o "$server" &>/dev/null && echo "$server is UP" || echo "$server is DOWN" ; } &
done
wait
Regards!

Detecting when Mongod's port is open inside a script

I'm trying to write a bash script that starts a mongod process, waits for it to start (i.e. have its default port open) and then pipes some commands into it through the mongo shell. I'd like some way to wait for the mongod process to be completely up that's more deterministic than just sleep 5.
This is the script so far:
set_up_authorization() {
    echo "Setting up access control"
    /path/to/mongo < configure_access_controls.js
}

wait_for_mongod_to_start() {
    RETRIES=1000
    CONNECTED="false"
    echo "Waiting for mongod to start"
    while [[ $RETRIES -ge 0 && $CONNECTED == "false" ]] ; do
        RESPONSE=$(exec 6<>/dev/tcp/127.0.0.1/27017 || echo "1")
        if [[ $RESPONSE == "" ]] ; then  # which should happen if the exec is successful
            CONNECTED="true"
        fi
        RETRIES=$((RETRIES - 1))
    done
    if [[ $RETRIES -eq 0 ]] ; then
        echo "Max retries reached waiting for mongod to start. Exiting."
        exit 1
    fi
    echo "Mongod started"
}

./start_mongod_instance.sh
wait_for_mongod_to_start
set_up_authorization
While this script works, it produces a ton of output on the terminal while the exec is failing:
./initialize_cluster.sh: connect: Connection refused
./initialize_cluster.sh: line xx: /dev/tcp/127.0.0.1/27017: Connection refused
...which repeats for all ~900 failed attempts.
Neither of the following seems to get rid of the terminal logging either:
exec 6<>/dev/tcp/127.0.0.1/27017 >/dev/null
OR
exec 6<>/dev/tcp/127.0.0.1/27017 2>/dev/null
I've also tried using the following:
ps -aux | grep "mongod" | wc -l
but the process having a pid that ps lists isn't equivalent to its port being open or to it accepting connections.
Any ideas on either front would be appreciated - a more elegant way to wait for the process to start completely or a way to get rid of the excessive logging to the terminal.
Note: I don't have access to nmap or nc to check the port (this is on a client's machine).
exec is a bit special: its redirections apply to the current shell, not just to a single command. That means you need to redirect the current shell's stderr before running the port check:
host="localhost"
port="9000"
exec 2>/dev/null # redirect error here
while ! exec 3<>"/dev/tcp/${host}/${port}" ; do
    echo "Waiting ..."
    sleep 1
done
Furthermore you might have noticed that I check the exit status of exec rather than some output to decide whether the port is open or not.
If you want to reset it afterwards:
host="localhost"
port="9000"
# Save fd 2 in fd 4 and redirect fd 2 to /dev/null
# (use a descriptor other than 3, since the loop below reuses fd 3 for the connection)
exec 4>&2 2>/dev/null
while ! exec 3<>"/dev/tcp/${host}/${port}" ; do
    echo "Waiting ..."
    sleep 1
done
# Restore fd 2 from fd 4
exec 2>&4 4>&-
echo "EE oops!" >&2
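If you would rather not touch the script's global stderr at all, a variation (my own sketch, not part of the answer above) is to run the check in a subshell so both the file descriptor and the redirection stay local to it:
host="localhost"
port="9000"
# The connection is closed again when the subshell exits, which is fine
# when you only want to know whether the port accepts connections.
while ! (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
    echo "Waiting ..."
    sleep 1
done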

Linux Script to check if process is running and restart if not

I have this script, which looks for the filebeat process and restarts it if it is not running. Cron runs the script every 5 minutes. Most of the time this works fine, except that sometimes it creates multiple filebeat processes. Can someone please point out what the issue in my script is?
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
service=filebeat
servicex=/usr/share/filebeat/bin/filebeat
pid=`pgrep -x "filebeat"`
if [ $pid > /dev/null ]
then
    echo "$(date) $service is running!!!"
else
    echo "$(date) starting $service"
    cd /home/hpov/beats/filebeat
    ./filebeat -c filebeat.yml &
fi
#!/bin/bash
pidof script.x86 >/dev/null
if [[ $? -ne 0 ]] ; then
    echo "Restarting script: $(date)" >> /var/log/script.txt
    /etc/script/script.x86 &
fi
Super easy :D
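A further thought, not from the answer above: if the duplicate filebeat processes come from overlapping cron runs (the next invocation fires while the previous one is still starting filebeat), you can serialise the script with flock(1). A minimal sketch; the lock-file path is only an example:
#!/bin/bash
# Hold an exclusive lock for the lifetime of the script so overlapping
# cron invocations exit instead of starting a second filebeat.
exec 200>/var/lock/filebeat-check.lock
flock -n 200 || exit 0

if ! pgrep -x filebeat >/dev/null; then
    echo "$(date) starting filebeat"
    cd /home/hpov/beats/filebeat || exit 1
    ./filebeat -c filebeat.yml &
fi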

Bash: Loop until command exit status equals 0

I have netcat installed on my local machine and a service running on port 25565. Using the command:
nc 127.0.0.1 25565 < /dev/null; echo $?
Netcat checks whether the port is open and returns 0 if it is open, and 1 if it is closed.
I am trying to write a bash script to loop endlessly and execute the above command every second until the output from the command equals 0 (the port opens).
My current script just keeps endlessly looping "...", even after the port opens (the 1 becomes a 0).
until [ "nc 127.0.0.1 25565 < /dev/null; echo $?" = "0" ]; do
    echo "..."
    sleep 1
done
echo "The command output changed!"
What am I doing wrong here?
Keep it Simple
until nc -z 127.0.0.1 25565
do
    echo ...
    sleep 1
done
Just let the shell deal with the exit status implicitly
The shell can deal with the exit status (recorded in $?) in two ways: explicitly and implicitly.
Explicit: status=$?, which allows for further processing.
Implicit:
For every statement, in your mind, add the word "succeeds" to the command, and then add
if, until or while constructs around them, until the phrase makes sense.
until nc succeeds; do ...; done
The -z option will stop nc from reading stdin, so there's no need for the < /dev/null redirect.
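For comparison, the explicit form of the same wait loop could look like this, capturing the status in a variable first:
while true; do
    nc -z 127.0.0.1 25565
    status=$?              # capture the exit status for further processing
    if [ "$status" -eq 0 ]; then
        break
    fi
    echo ...
    sleep 1
done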
You could try something like
while true; do
    nc 127.0.0.1 25565 < /dev/null
    if [ $? -eq 0 ]; then
        break
    fi
    sleep 1
done
echo "The command output changed!"

How to set up an automatic (re)start of a background ssh tunnel

I am a beginner Linux user, and also quite a newbie with ssh and tunnels.
Anyway, my goal is to keep an ssh tunnel open in the background.
To do that, I wrote the following script, which I then added to crontab (it runs automatically every 5 minutes on workdays, from 8am to 9pm).
I read in another Stack Overflow thread that one should use autossh, which ensures the ssh connection stays up through a recurrent check. So I did....
#!/bin/bash
LOGFILE="/root/Tunnel/logBatchRestart.log"
NOW="$(date +%d/%m/%Y' - '%H:%M)" # date & time of log
if ! ps ax | grep ssh | grep tunnelToto &> /dev/null
then
    echo "[$NOW] ssh tunnel not running : restarting it" >> $LOGFILE
    autossh -f -N -L pppp:tunnelToto:nnnnn nom-prenom@193.xxx.yyy.zzz -p qqqq
    if ! ps ax | grep ssh | grep toto &> /dev/null
    then
        echo "[$NOW] failed starting tunnel" >> $LOGFILE
    else
        echo "[$NOW] restart successful" >> $LOGFILE
    fi
fi
My problem is that sometimes the tunnel stops working, although everything looks ok (ps ax | grep ssh shows the two expected tasks: the main autossh task and the ssh tunnel itself). I actually know about the problem because the tunnel is used by third-party software that raises an error as soon as the tunnel stops responding.
So I am wondering how I should improve my script so that it can check the tunnel and restart it if it happens to be dead. I saw some ideas out there, but they all concluded with the "autossh" hint... which I already use. Thus, I am out of ideas... If any of you have some, I'd gladly have a look at them!
Thanks for taking an interest in my question, and for any suggestions!
Instead of checking the ssh process with ps, you can use the following trick:
Create a script that does the following and add it to your crontab via crontab -e:
#!/bin/bash
REMOTEUSER=username
REMOTEHOST=remotehost
SSH_REMOTEPORT=22
SSH_LOCALPORT=10022
TUNNEL_REMOTEPORT=8080
TUNNEL_LOCALPORT=8080

createTunnel() {
    /usr/bin/ssh -f -N -L$SSH_LOCALPORT:$REMOTEHOST:$SSH_REMOTEPORT -L$TUNNEL_LOCALPORT:$REMOTEHOST:$TUNNEL_REMOTEPORT $REMOTEUSER@$REMOTEHOST
    rc=$?
    if [[ $rc -eq 0 ]]; then
        echo Tunnel to $REMOTEHOST created successfully
    else
        echo An error occurred creating a tunnel to $REMOTEHOST RC was $rc
    fi
}

## Run the 'ls' command remotely. If it returns non-zero, then create a new connection
/usr/bin/ssh -p $SSH_LOCALPORT $REMOTEUSER@localhost ls >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
    echo Creating new tunnel connection
    createTunnel
fi
In fact, this script will open two local ports:
port 10022, forwarded to remote port 22, which is used to check whether the tunnel is still alive
port 8080, forwarded to remote port 8080, which is the port you actually want to use
Please check and send me further questions via comments.
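For the crontab part, an entry along these lines (the path and schedule are only examples) runs the check every five minutes:
*/5 * * * * /usr/local/bin/check_tunnel.sh >> /var/log/check_tunnel.log 2>&1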
(I am adding this as an answer since there is not enough room for it in a comment.)
Ok, I managed to get the script to launch the ssh tunnel (I had to specify my hostname instead of localhost for it to be triggered):
#!/bin/bash
LOGFILE="/root/Tunnel/logBatchRedemarrage.log"
NOW="$(date +%d/%m/%Y' - '%H:%M)" # date and time of the log
REMOTEUSER=username
REMOTEHOST=remoteHost
SSH_REMOTEPORT=22
SSH_LOCALPORT=10022
TUNNEL_REMOTEPORT=12081
TUNNEL_SPECIFIC_REMOTE_PORT=22223
TUNNEL_LOCALPORT=8082

createTunnel() {
    /usr/bin/ssh -f -N -L$SSH_LOCALPORT:$REMOTEHOST:$SSH_REMOTEPORT -L$TUNNEL_LOCALPORT:$REMOTEHOST:$TUNNEL_REMOTEPORT $REMOTEUSER@193.abc.def.ghi -p $TUNNEL_SPECIFIC_REMOTE_PORT
    rc=$?
    if [[ $rc -eq 0 ]]; then
        echo [$NOW] Tunnel to $REMOTEHOST created successfully >> $LOGFILE
    else
        echo [$NOW] An error occurred creating a tunnel to $REMOTEHOST RC was $rc >> $LOGFILE
    fi
}

## Run the 'ls' command remotely. If it returns non-zero, then create a new connection
/usr/bin/ssh -p $SSH_LOCALPORT $REMOTEUSER@193.abc.def.ghi ls >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
    echo [$NOW] Creating new tunnel connection >> $LOGFILE
    createTunnel
fi
However, I get the message below straight away when the tunnel is already running and cron tries to launch the script again... it sounds like it cannot listen on the ports. Also, since I need some time to gather proof, I can't say yet whether it will successfully restart the tunnel if it goes down.
Here's the output from the second start of the script:
bind: Address already in use
channel_setup_fwd_listener: cannot listen to port: 10022
bind: Address already in use
channel_setup_fwd_listener: cannot listen to port: 8082
Could not request local forwarding.
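One way to handle that (my own suggestion, not from the thread): if the liveness check fails but an old ssh still holds the local ports, kill the stale forwarder before recreating the tunnel, for example:
## If the check failed, assume the old tunnel is dead: kill any ssh still
## holding our -L forward, then recreate it.
pkill -f "ssh .*-L$SSH_LOCALPORT:" && sleep 1
createTunnel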
