I have an issue where I need to compare the date in a log file with the system date. I am trying to create a script that will send an email when the server restarts. The problem is that when the server restarts it creates a new log. The script I have for emailing so far is as follows:
LOG1=$(grep -B 15 "Server failed so attempting to restart" /home/testing/Server.out)
echo "$LOG1" > attachment.txt
grep -q "Server failed so attempting to restart" /home/testing/Server.out &&
mailx -s "Alert - Server has shutdown and is attempting a restart" email@domain.com < attachment.txt
LOG2=$(grep -B 15 "Server failed so attempting to restart" /home/testing/Server.log)
echo "$LOG2" > attachment2.txt
grep -q "Server failed so attempting to restart" /home/testing/Server.log &&
mailx -s "Alert - Server has shutdown and is attempting a restart" email@domain.com < attachment2.txt
Now this works and sends the email. However, if I set this up as a cron job it will run (and send an email) every single time until the log file is deleted. I need a way to say: only run the script if "Server failed so attempting to restart" was found in the last 15 minutes.
Could anyone advise on a way to do this?
Thanks for your help
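One way to do this (a sketch, not from the original thread) is to let cron do the scheduling and only alert when the log file itself was modified within the last 15 minutes, using find -mmin. This assumes a restart always touches the log and that the cron interval is 15 minutes:

#!/bin/bash
# Sketch: only alert if the log was touched in the last 15 minutes,
# so old matches in a stale log do not re-trigger the mail.
for LOGFILE in /home/testing/Server.out /home/testing/Server.log; do
    # find prints the file name only if its mtime is less than 15 minutes ago
    if [ -n "$(find "$LOGFILE" -mmin -15 2>/dev/null)" ] &&
       grep -q "Server failed so attempting to restart" "$LOGFILE"; then
        grep -B 15 "Server failed so attempting to restart" "$LOGFILE" > attachment.txt
        mailx -s "Alert - Server has shutdown and is attempting a restart" email@domain.com < attachment.txt
    fi
done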
Dear Community Members,
I'm facing an issue where the redis service behind my website makes the site show a 503 error at regular intervals. Previously it happened every 2-3 days, so I made a cron job to delete the redis dump file and restart the service every night. But I'm still facing the issue; sometimes it comes once in a week and sometimes twice in a day.
So I was wondering if there is a shell script which can check for a 503 error on my website and restart the services. I already have a script that checks whether the httpd service is active and restarts it if it goes down.
#!/bin/sh
RESTART="systemctl start httpd"
SERVICE="httpd"
LOGFILE="/opt/httpd/autostart-apache2.log"

# Check for the word "inactive" in the output of systemctl status
if systemctl status httpd | grep -q inactive
then
    echo "starting apache at $(date)" >> "$LOGFILE"
    $RESTART >> "$LOGFILE"
else
    echo "apache is running at $(date)"
fi
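For the 503 check itself, a minimal sketch in the same style (the URL and the redis service name are assumptions; adjust them to your setup): probe the site with curl and restart the services only when the response code is 503.

#!/bin/sh
URL="https://www.example.com/"       # assumed health-check URL
LOGFILE="/opt/httpd/autostart-apache2.log"

# -s silences progress output, -o discards the body, -w prints only the HTTP code
code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$URL")
if [ "$code" = "503" ]; then
    echo "got 503 at $(date), restarting services" >> "$LOGFILE"
    systemctl restart redis >> "$LOGFILE" 2>&1
    systemctl restart httpd >> "$LOGFILE" 2>&1
fi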
I have a Jenkins pipeline job which goes through all our Jenkins servers and checks connectivity (it runs every few minutes).
ksh file:
#!/bin/ksh
JENKINS_URL=$1
curl --connect-timeout 10 "$JENKINS_URL" >/dev/null
status=$?
if [ "$status" -eq 7 ]; then
    SUBJECT="Connection refused or cannot connect to URL $JENKINS_URL"
    echo "$SUBJECT" | /usr/sbin/sendmail -t XXXX@gmail.com
else
    echo "successfully connected $JENKINS_URL"
fi
exit 0
I would like to add another piece of code which records every time the server was down (including the name of the server and a timestamp) into a file, and, when the server is up again, sends an email notifying about it and records that in the file as well.
I don't want to get extra alerts: only one alert (to file and mail) when it goes down, and one when it's up again. Any idea how to implement this?
The detailed answer was given by the unix.stackexchange community:
https://unix.stackexchange.com/questions/562594/how-to-set-and-record-alerts-for-jenkin-server-down-and-up
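The gist of that approach is to remember the last known state in a small state file and alert only on a state transition. A minimal sketch of that idea (the state file path, log file path, and mail invocation are assumptions):

#!/bin/ksh
JENKINS_URL=$1
STATEFILE="/tmp/jenkins_monitor.state"     # assumed location; use one file per monitored URL
LOGFILE="/tmp/jenkins_monitor.log"         # assumed location

curl --connect-timeout 10 "$JENKINS_URL" >/dev/null 2>&1
status=$?
last=$(cat "$STATEFILE" 2>/dev/null)

if [ "$status" -ne 0 ] && [ "$last" != "down" ]; then
    # transition up -> down: record it and alert exactly once
    echo "down" > "$STATEFILE"
    echo "$(date) $JENKINS_URL DOWN" >> "$LOGFILE"
    echo "Server $JENKINS_URL is down" | /usr/sbin/sendmail -t XXXX@gmail.com
elif [ "$status" -eq 0 ] && [ "$last" = "down" ]; then
    # transition down -> up: record it and alert exactly once
    echo "up" > "$STATEFILE"
    echo "$(date) $JENKINS_URL UP" >> "$LOGFILE"
    echo "Server $JENKINS_URL is up again" | /usr/sbin/sendmail -t XXXX@gmail.com
fi
exit 0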
I have a script on a Linux box in my environment that loops through a series of services that should always be running and, if one isn't running, sends me an email.
Now, it seems to work fine except for two issues. I'd really appreciate some help and insight; most of my background is in Python and PowerShell.
Whenever the script detects a service that's down, it exits instead of looping through the rest, and it appends the services it didn't check to the body of the email, despite me not specifying an email body in the mail command.
Every so often it throws a false error on "hostservices", and I have no idea how to even go about figuring out why.
The script is a cron job running every 10 minutes. The full text of the script and the list of services are below, as well as the email that arrives when the script finds a service that's down.
Script
#!/bin/bash
while read services; do
    # Run command to get service status, store result as a string variable
    service_status=$(service $services status)
    # Check if the service is NOT running
    if [[ "$service_status" != *"is running..." ]]; then
        mail -s "Service $services is down on [SERVER]" [EMAIL ADDRESS]
    elif [[ $service_status == *"is running..." ]]; then
        :
    else
        mail -s "ERROR IN SCRIPT, unable to get $services status on [SERVER]" [EMAIL ADDRESS]
    fi
done < /home/services.txt
services.txt
hostcontect
hostservices
ecs-ec
ecs-ep
imq
tomcat
httpd
Email Alert for Down Service
SUBJECT: "Service hostservices is down on [SERVER]"
BODY:
ecs-ec
ecs-ep
imq
tomcat
httpd
mail reads the body of the email from standard input. In your case services.txt is redirected to the loop's stdin, so mail consumes the remaining lines as the message body; that is why the unchecked services appear in the email and why the loop stops early. Tell mail to read the body from elsewhere, e.g.
mail -s ... < /dev/null
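Applied to the loop above, a sketch of the fix: give mail its own stdin (here by piping in a one-line body, which also stops the message from being empty), so it no longer swallows the rest of services.txt.

#!/bin/bash
while read -r services; do
    service_status=$(service "$services" status)
    if [[ "$service_status" != *"is running..." ]]; then
        # Piping a body gives mail its own stdin, so it no longer
        # consumes the remaining lines of services.txt.
        echo "$services is not running" |
            mail -s "Service $services is down on [SERVER]" [EMAIL ADDRESS]
    fi
done < /home/services.txt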
I run an automated backup shell script. It works great, but for some reason the FTP server blocks me for a few minutes. I would like to add a retry-and-wait feature. Below is a sample of my code.
echo "Moving to external server"
cd /root/backup/
/usr/bin/ftp -n -i $FTP_SERVER <<END_SCRIPT
user $FTP_USERNAME $FTP_PASSWORD
mput $FILE
bye
END_SCRIPT
After a failed login I get the message below:
Authentication failed. Blocked.
Login failed.
Incorrect sequence of commands: PASS required after USER
I need to capture such output and make the code sleep for a few minutes before trying again. Ideas?
If it's possible for you to install additional programs onto the system of interest, I encourage you to take a look at lftp.
With lftp it is possible to set parameters like the time between reconnects etc. manually.
To achieve your aim with lftp you would invoke the following:
lftp -u user,password ${FTP_SERVER} <<END
set ftp:retry-530 "Authentication failed"
set net:reconnect-interval-base 60
set net:reconnect-interval-multiplier 10
set net:max-retries 10
<some more custom commands>
END
If the pattern given after ftp:retry-530 matches the 530 reply of the server, lftp retries the login: it waits net:reconnect-interval-base (here 60) seconds before the first reconnect, multiplies the interval by net:reconnect-interval-multiplier (here 10) after each further failure, and gives up after net:max-retries attempts.
The "Authentication failed" message is probably going to stderr instead of stdout, so you will need to capture the stderr output first:
while true
do
    # here 'script' stands for whatever command runs your ftp transfer
    if ( script 2>&1 | grep -q 'Authentication failed' )
    then
        echo "authentication failed, sleeping for a while before trying again"
        sleep 60
    else
        # everything worked, break out of the while loop
        break
    fi
done
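Putting that together with the heredoc from the question, a sketch of the full retry loop (the sleep time is an assumption, and the placeholder 'script' is replaced by the actual ftp invocation):

cd /root/backup/ || exit 1
while true; do
    # capture both stdout and stderr of the ftp session
    output=$(/usr/bin/ftp -n -i "$FTP_SERVER" 2>&1 <<END_SCRIPT
user $FTP_USERNAME $FTP_PASSWORD
mput $FILE
bye
END_SCRIPT
)
    if echo "$output" | grep -q 'Authentication failed'; then
        echo "authentication failed, sleeping before trying again"
        sleep 300    # assumption: five minutes is enough for the block to lift
    else
        break        # upload went through
    fi
done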
Sometimes when I execute a bash script with the curl command to upload some files to my FTP server, it returns an error like:
56 response reading failed
and I have to find the offending line and re-run it manually before it is OK.
I'm wondering if it could be re-run automatically when the error occurs.
My script is like this:
#there are some files(A,B,C,D,E) in my to_upload directory,
# which I'm trying to upload to my ftp server with curl command
for files in `ls` ;
do curl -T $files ftp.myserver.com --user ID:pw ;
done
But sometimes A, B, and C would be uploaded successfully and only D would be left with an "error 56", so I have to re-run the curl command manually. Besides, as Will Bickford said, I'd prefer that no confirmation be required, because I'm always asleep at the time the script is running. :)
Here's a bash snippet I use to perform exponential back-off:
# Retries a command a configurable number of times with backoff.
#
# The retry count is given by ATTEMPTS (default 5), the initial backoff
# timeout is given by TIMEOUT in seconds (default 1).
#
# Successive backoffs double the timeout.
function with_backoff {
  local max_attempts=${ATTEMPTS-5}
  local timeout=${TIMEOUT-1}
  local attempt=1
  local exitCode=0

  # <= so that ATTEMPTS is the total number of tries
  while (( attempt <= max_attempts ))
  do
    if "$@"
    then
      return 0
    else
      exitCode=$?
    fi

    echo "Failure! Retrying in $timeout.." 1>&2
    sleep "$timeout"
    attempt=$(( attempt + 1 ))
    timeout=$(( timeout * 2 ))
  done

  if [[ $exitCode != 0 ]]
  then
    echo "You've failed me for the last time! ($@)" 1>&2
  fi

  return $exitCode
}
Then use it in conjunction with any command that properly sets a failing exit code:
with_backoff curl 'http://monkeyfeathers.example.com/'
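Since the function reads ATTEMPTS and TIMEOUT from the environment, the retry budget can be tuned per call, e.g.:

ATTEMPTS=3 TIMEOUT=5 with_backoff curl 'http://monkeyfeathers.example.com/'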
Perhaps this will help. It will try the command, and if it fails, it will tell you and pause, giving you a chance to fix run-my-script.sh.
COMMAND=./run-my-script.sh
until $COMMAND; do
    read -p "command failed, fix and hit enter to try again."
done
I have faced a similar problem, where I needed to contact servers using curl that were in the process of starting up and hadn't finished yet, or services that were temporarily unavailable for whatever reason. The scripting was getting out of hand, so I made a dedicated retry tool that retries a command until it succeeds:
#there are some files(A,B,C,D,E) in my to_upload directory,
# which I'm trying to upload to my ftp server with curl command
for files in `ls` ;
do retry curl -f -T $files ftp.myserver.com --user ID:pw ;
done
The curl command has the -f option, which makes curl return a non-zero exit code (22) when the transfer fails, so a failed upload is visible to retry.
The retry tool will by default run the curl command over and over, forever, until the command returns status zero, backing off for 10 seconds between retries. In addition, retry reads from stdin once and only once, writes to stdout once and only once, and writes all stdout to stderr if the command fails.
Retry is available from here: https://github.com/minfrin/retry