crontab sends net rpc shutdown but nothing happens - cron

I've created two scripts to automate turning the computers in my classroom on and off.
Both scripts read a list of MAC addresses and use them to 1) wake the computers up (wake command) and 2) shut them down (net rpc shutdown).
The server runs FreeBSD and the workstations are Windows 7.
The scripts are identical except for the "wake" vs. "net rpc shutdown" part, and both work perfectly when run manually from a shell.
Under crontab only the WOL script actually does anything; the shutdown script runs as well, but nothing happens to the computers.
The cron logs just show that the script was processed, but the computers don't react.
Shutdown script:
#!/usr/local/bin/bash
#[ $# -eq 0 ] && { echo "Usage: $0 filename"; exit 1; } # arguments set?
list="/usr/local/etc/isc-dhcp_192.168.4"
erand="/root/skriptid/WOL/erand.txt"
# check that the list and the exceptions file exist
[ ! -f "$list" ]  && { echo "EXIT! List missing at: $list"; exit 1; }
[ ! -f "$erand" ] && { echo "EXIT! Exceptions missing at: $erand"; exit 1; }
# continue
while read -r line
do
    IFS=' ' read -a array <<< "$line"
    mac=$(echo "${array[5]}" | sed 's/;//')
    ip=$(echo "${array[7]}" | sed 's/;//')
    host=${array[1]}
    if grep -Fq "$ip" "$erand"
    then
        echo -e "Skipping: HOST: $host IP: $ip MAC: $mac \n----" # exceptions
        sleep 1
    else
        echo -e "Shutting down: HOST: $host IP: $ip MAC: $mac \n----"
        net rpc SHUTDOWN -t 15 -f -C "Computer is shutting down. Server-side command." -W luunja -U $host\\xx%xx -S "$ip"
        sleep 1
    fi
done < "$list"
Cron job:
40 19 * * * root /root/skriptid/WOL/shutdown.sh
Both scripts are run as root and have 666 permissions.
Is there a way to see the cron job's output besides the log?
Is there a known conflict between cron and net rpc?
Any help would be appreciated!
Thanks.

You need to specify the interpreter that should execute the script.
From
40 19 * * * root /root/skriptid/WOL/shutdown.sh
to
40 19 * * * root /bin/sh /root/skriptid/WOL/shutdown.sh
or whatever path which sh returns.
(I think I answer this question once a day.)
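A follow-up note (my addition, not part of the answer above): because the script relies on bash features such as read -a and <<<, pointing at the bash interpreter from its shebang is safer than /bin/sh, and redirecting the job's output to a file also answers the question about seeing more than the cron log. For example (the log path is just an illustration):
40 19 * * * root /usr/local/bin/bash /root/skriptid/WOL/shutdown.sh >> /var/log/shutdown_wol.log 2>&1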

Add paths to your crontab
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/sbin:/usr/local/bin

You have to enable file sharing on the Windows computers for your script to work.
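If the workstations' firewall is blocking the RPC traffic, something along these lines on each Windows 7 client may also be needed (a hedged sketch, not from the original answer; adjust to your firewall profiles):
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes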

Related

Linux shell (sh) CLI test if ping successful

How do I write a Linux shell (sh) script that uses ping to test whether a host is reachable?
I guess there could be a solution that uses grep, but maybe ping provides that option by itself?
I am more interested in whitelisting a successful ping that reached the host than in checking for any particular error. I don't care about the reason ping fails to reach a host.
I would like to limit the number of ping attempts and the maximum time to reach the host, so the script does not wait too long while ping tries to reach a host.
dt=$(date +%d)
cksize=50
echo "Start $(date)"
while IFS= read -r sn
do
    echo "*************************************************"
    echo "Begin checking NODES client: $sn"
    if ping -c 1 "$sn" -i 5 > /dev/null
    then
        echo "$sn node up"
    else
        echo "$sn node down"
    fi
done < server_list
You can do it like this; it will check all hosts in parallel:
parallel -j0 --timeout 15 'ping -c 5 -i 0.2 {} >/dev/null 2>&1 && echo {} up || echo {} down' ::: freenetproject.org debian.org no-such.domain {1..254}.2.3.4
#!/bin/bash
for server in 'google.com' 'github.com' 'fakeserver.com'
do
    { ping -o "$server" &>/dev/null && echo "$server is UP" || echo "$server is DOWN" ; } &
done
wait
Regards!
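A portability note (my addition, not part of the answer above): ping -o, which exits after the first successful reply, is a BSD/macOS option. On Linux (iputils ping) a roughly equivalent sketch limits the probe count and per-host wait instead:
#!/bin/bash
for server in 'google.com' 'github.com' 'fakeserver.com'
do
    # -c 1: send a single probe, -W 2: wait at most 2 seconds for the reply
    { ping -c 1 -W 2 "$server" &>/dev/null && echo "$server is UP" || echo "$server is DOWN" ; } &
done
wait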

Detecting when Mongod's port is open inside a script

I'm trying to write a bash script that starts a mongod process, waits for it to start (i.e. have its default port open) and then pipes some commands into it through the mongo shell. I'd like some way to wait for the mongod process to be completely up that's more deterministic than just sleep 5.
This is the script so far:
set_up_authorization() {
    echo "Setting up access control"
    /path/to/mongo < configure_access_controls.js
}

wait_for_mongod_to_start() {
    RETRIES=1000
    CONNECTED="false"
    echo "Waiting for mongod to start"
    while [[ $RETRIES -ge 0 && $CONNECTED == "false" ]] ; do
        RESPONSE=$(exec 6<>/dev/tcp/127.0.0.1/27017 || echo "1")
        if [[ $RESPONSE == "" ]] ; then # which should happen if the exec is successful
            CONNECTED="true"
        fi
        RETRIES=$((RETRIES - 1))
    done
    if [[ $RETRIES -eq 0 ]] ; then
        echo "Max retries reached waiting for mongod to start. Exiting."
        exit 1
    fi
    echo "Mongod started"
}

./start_mongod_instance.sh
wait_for_mongod_to_start
set_up_authorization
While this script works, it produces a ton of output on the terminal while the exec is failing:
./initialize_cluster.sh: connect: Connection refused
./initialize_cluster.sh: line xx: /dev/tcp/127.0.0.1/27017: Connection refused
...which repeats for all ~900 failed attempts.
Neither of the following seems to get rid of the terminal logging either:
exec 6<>/dev/tcp/127.0.0.1/27017 >/dev/null
OR
exec 6<>/dev/tcp/127.0.0.1/27017 2>/dev/null
I've also tried using the following:
ps -aux | grep "mongod" | wc -l
but the process having a pid that ps lists isn't equivalent to its port being open or its accepting connections.
Any ideas on either front would be appreciated - a more elegant way to wait for the process to start completely or a way to get rid of the excessive logging to the terminal.
Note: I don't have access to nmap or nc to check the port (this is on a client's machine).
exec is a bit special: it affects the file descriptors of the current shell. That means you need to redirect stderr of the current shell before running the port check:
host="localhost"
port="9000"
exec 2>/dev/null # redirect error here
while ! exec 3<>"/dev/tcp/${host}/${port}" ; do
    echo "Waiting ..."
    sleep 1
done
Furthermore you might have noticed that I check the exit status of exec rather than some output to decide whether the port is open or not.
If you want to reset it afterwards:
host="localhost"
port="9000"
# Copy fd 2 into fd 3 and redirect fd 2 to /dev/null
exec 3<&2 2>/dev/null
while ! exec 3<>"/dev/tcp/${host}/${port}" ; do
echo "Waiting ..."
sleep 1
done
# Copy back fd 3 into fd 2
exec 2<&3
echo "EE oops!" >&2

Linux Script to check a process is up or not

I have a server application which holds 1500 process IDs. I need a shell script that checks every hour whether the process is up; if not, it should start the process using "dsmc -u xxxx -p **", and if it could not be started, send a mail to my Gmail (xxx@gmail.com). Here is the code; how can I send a mail to my Gmail account?
if ! pidof -s vsftpd > /dev/null; then
    echo 'ftp is stopped'
    sudo /etc/init.d/vsftpd restart
    echo "The FTP server is Down" | mail -s "Ftp Server is Down" abcd@example.com
fi
I don't get any mail in my Gmail account.
Reference: http://rtcamp.com/wordpress-nginx/tutorials/linux/ubuntu-postfix-gmail-smtp/
To write your script you will probably want to investigate the following commands and their associated man pages:
ps
pgrep
ptree
mail
The script part is easy; troubleshooting the mail, however, is not. I would recommend running the script in screen (screen -d -m /bin/bash "test.sh"), detaching the screen (Ctrl-a d), then stopping the process (/etc/init.d/vsftpd stop), waiting a minute, and reattaching the screen (screen -r). That will show you the error from mail so you can troubleshoot it.
The following script will monitor your service for you.
#!/bin/bash
process="vsftpd"
while true ; do
    until ! pgrep "$process" > /dev/null ; do
        sleep 1 # The number of seconds to wait until the next check
    done
    # If the process is not found, do the following
    /etc/init.d/$process start > /dev/null # Run as root because sudo requires a password
    if [ $? -eq 0 ]; then
        echo "The FTP server was restarted" | mail -s "Ftp Server $process was restarted" abcd@example.com
    else
        echo "The FTP server could not restart" | mail -s "Ftp Server $process is down" abcd@example.com
        sleep 1
        exit 0
    fi
done
Hope this helps and good luck.
We can do the check with ps -ef.
Try:
# cat vsftpd.sh
#!/bin/bash
# The [v] in the pattern keeps grep from matching its own process
/bin/ps -ef | grep "[v]sftpd" > /dev/null 2>&1
if [ $? -ne 0 ]
then
    /etc/init.d/vsftpd restart > /dev/null 2>&1
    echo "FTP service was down and has been restarted" | /bin/mail -s "FTP service is RESTARTED now" abcd@example.com
else
    sleep 0
fi
cron:
* * * * * /bin/sh /path/to/vsftpd.sh > /dev/null 2>&1
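If you need the hourly check the question asks for, a minimal sketch (my addition; adjust the service name and mail address) can also live entirely in the crontab, using pgrep so the check does not match itself:
0 * * * * /usr/bin/pgrep -x vsftpd > /dev/null || { /etc/init.d/vsftpd restart; echo "vsftpd was down and has been restarted" | mail -s "vsftpd restarted" abcd@example.com; }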

How to set up an automatic (re)start of a background ssh tunnel

I am a beginner Linux user, and also quite a newbie with ssh and tunnels.
Anyway, my goal is to keep an ssh tunnel open in the background.
To do that, I wrote the following script, which I then added to crontab (it runs automatically every 5 minutes on workdays, from 8am to 9pm).
I read in another Stack Overflow thread that one should use autossh, which keeps the ssh connection alive through a recurring check. So I did.
#!/bin/bash
LOGFILE="/root/Tunnel/logBatchRestart.log"
NOW="$(date +%d/%m/%Y' - '%H:%M)" # date & time of log
if ! ps ax | grep ssh | grep tunnelToto &> /dev/null
then
    echo "[$NOW] ssh tunnel not running : restarting it" >> $LOGFILE
    autossh -f -N -L pppp:tunnelToto:nnnnn nom-prenom@193.xxx.yyy.zzz -p qqqq
    if ! ps ax | grep ssh | grep toto &> /dev/null
    then
        echo "[$NOW] failed starting tunnel" >> $LOGFILE
    else
        echo "[$NOW] restart successful" >> $LOGFILE
    fi
fi
My problem is that sometimes the tunnel stops working even though everything looks OK (ps ax | grep ssh shows the two expected processes: the autossh main task and the ssh tunnel itself). I only know about the problem because the tunnel is used by third-party software that raises an error as soon as the tunnel stops responding.
So I am wondering how to improve my script so that it can actually check the tunnel and restart it if it turns out to be dead. I saw some ideas in other threads, but they all concluded with the "autossh" hint, which I already use. So I am out of ideas; if any of you have one, I'd gladly have a look at it!
Thanks for taking an interest in my question, and for any suggestions!
Instead of checking the ssh process with ps, you can use the following trick:
Create a script that does the following and add it to your crontab via crontab -e.
#!/bin/bash
REMOTEUSER=username
REMOTEHOST=remotehost
SSH_REMOTEPORT=22
SSH_LOCALPORT=10022
TUNNEL_REMOTEPORT=8080
TUNNEL_LOCALPORT=8080

createTunnel() {
    /usr/bin/ssh -f -N -L$SSH_LOCALPORT:$REMOTEHOST:$SSH_REMOTEPORT -L$TUNNEL_LOCALPORT:$REMOTEHOST:$TUNNEL_REMOTEPORT $REMOTEUSER@$REMOTEHOST
    rc=$?
    if [[ $rc -eq 0 ]]; then
        echo Tunnel to $REMOTEHOST created successfully
    else
        echo An error occurred creating a tunnel to $REMOTEHOST, RC was $rc
    fi
}

## Run the 'ls' command remotely. If it returns non-zero, then create a new connection
/usr/bin/ssh -p $SSH_LOCALPORT $REMOTEUSER@localhost ls >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
    echo Creating new tunnel connection
    createTunnel
fi
In fact, this script opens two ports:
port 22, which is used to check whether the tunnel is still alive
port 8080, which is the port you actually want to use
Please check it and send me further questions via the comments.
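For reference, a crontab entry matching the schedule described in the question (every 5 minutes, on weekdays, between 8am and 9pm) might look like this; the script path is just an example:
*/5 8-20 * * 1-5 /root/Tunnel/check_tunnel.sh >> /root/Tunnel/check_tunnel.log 2>&1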
(I add this as an answer since there is not enough room for it in a comment.)
OK, I managed to get the script to launch the ssh tunnel (I had to specify my hostname instead of localhost for it to be triggered):
#!/bin/bash
LOGFILE="/root/Tunnel/logBatchRedemarrage.log"
NOW="$(date +%d/%m/%Y' - '%H:%M)" # date and time for the log
REMOTEUSER=username
REMOTEHOST=remoteHost
SSH_REMOTEPORT=22
SSH_LOCALPORT=10022
TUNNEL_REMOTEPORT=12081
TUNNEL_SPECIFIC_REMOTE_PORT=22223
TUNNEL_LOCALPORT=8082

createTunnel() {
    /usr/bin/ssh -f -N -L$SSH_LOCALPORT:$REMOTEHOST:$SSH_REMOTEPORT -L$TUNNEL_LOCALPORT:$REMOTEHOST:$TUNNEL_REMOTEPORT $REMOTEUSER@193.abc.def.ghi -p $TUNNEL_SPECIFIC_REMOTE_PORT
    if [[ $? -eq 0 ]]; then
        echo [$NOW] Tunnel to $REMOTEHOST created successfully >> $LOGFILE
    else
        echo [$NOW] An error occurred creating a tunnel to $REMOTEHOST RC was $? >> $LOGFILE
    fi
}

## Run the 'ls' command remotely. If it returns non-zero, then create a new connection
/usr/bin/ssh -p $SSH_LOCALPORT $REMOTEUSER@193.abc.def.ghi ls >/dev/null 2>&1
if [[ $? -ne 0 ]]; then
    echo [$NOW] Creating new tunnel connection >> $LOGFILE
    createTunnel
fi
However, I immediately get the message below when the tunnel is already running and cron tries to launch the script again; it sounds like it cannot listen on the ports. Also, since I need some time to gather proof, I can't say yet whether it will successfully restart when the tunnel is down.
Here's the output from the second start of the script:
bind: Address already in use
channel_setup_fwd_listener: cannot listen to port: 10022
bind: Address already in use
channel_setup_fwd_listener: cannot listen to port: 8082
Could not request local forwarding.
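One way to avoid those bind errors (my own sketch, not from the thread): before calling createTunnel, check whether something already listens on the local port and skip creation if so. Assuming a Linux box with the ss utility available:
# Sketch: only create the tunnel when nothing is bound to the local port yet
if ! /usr/bin/ssh -p $SSH_LOCALPORT $REMOTEUSER@193.abc.def.ghi ls >/dev/null 2>&1; then
    if ss -ltn | grep -q ":$SSH_LOCALPORT "; then
        echo "[$NOW] port $SSH_LOCALPORT is already bound but the tunnel check failed" >> $LOGFILE
    else
        echo "[$NOW] Creating new tunnel connection" >> $LOGFILE
        createTunnel
    fi
fi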

Continue to grep for traceroute result with bash

Every night I go through the same process of checking the failover systems for our T1s. I essentially go through the following process:
Start the failover process.
traceroute $server;
Once I see it's failed over, I verify that connections work by SSHing into a server.
ssh $server;
Then once I see it works, I take it off of failover.
So what I want to do is to continually run a traceroute until I get a certain result, then run a SSH command.
Put your list of success patterns in a file (omit the variable parts of each line and use a ^ to anchor the start of the line), like this:
patterns.list:
^ 7 4.68.63.165
^ 8 4.68.17.133
^ 9 4.79.168.210
^10 216.239.48.108
^11 66.249.94.46
^12 72.14.204.99
Then a simple while loop:
while ! traceroute -n ${TARGET} | grep -f patterns.list
do
    sleep 5 # 5 second delay between traceroutes, for niceness.
done
ssh ${DESTINATION}
Use traceroute -n to generate the output so you don't get an IP address one time and a resolved name the next, which would result in a false positive.
I think you would be better off using the ping command rather than traceroute to verify the server's accessibility.
It is easy to check the return status of the ping command without using grep at all:
if ping -c 4 -n -q 10.10.10.10 >/dev/null 2>&1; then
    echo "Server is ok"
else
    echo "Server is down"
fi
If you want to do it continually in a loop, try this:
function check_ssh {
    # do your ssh stuff here
    echo "performing ssh test"
}

while : ; do
    if ping -c 4 -n -q 10.10.10.10 >/dev/null 2>&1; then
        echo "Server is ok"
        check_ssh
    else
        echo "Server is down"
    fi
    sleep 60
done
