Looking for a bash script that will accomplish the following:
Check a URL (e.g. www.google.com)
Look for a specific text string
If the string exists, do nothing
If it doesn't, send out an email to alert someone
I tried the following script, but it doesn't do anything; I don't get any email or anything.
#!/bin/sh
URL="URL"
TMPFILE=`mktemp /string_watch.XXXXXX`
curl -s -o ${TMPFILE} ${URL} 2>/dev/null
if [ "$?" -ne "0" ];
then
echo "Unable to connect to ${URL}"
exit 2
fi
RES=`grep -i "StringToLookFor" ${TMPFILE}`
if [ "$?" -ne "0" ];
then
echo "String not found in ${URL}" | mail -s "Alert" your@email
exit 1
fi
echo "String found"
exit 0;
The command
mail -s "Alert" your@email
is pausing to let you enter the text of your email. If you want to send an email with the indicated subject and no body, you need to do
mail -s "Alert" your@email < /dev/null
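With that fix applied, the skeleton of the watcher looks like this. This is a sketch, not a drop-in replacement: a local file stands in for the fetched page so the logic can be run without network access or a configured mailer, and the URL, string, and recipient are placeholders.

```shell
#!/bin/sh
URL="https://www.example.com"      # placeholder
STRING="StringToLookFor"           # placeholder
RECIPIENT="you@example.com"        # placeholder

# mktemp needs a writable directory such as /tmp, not /
TMPFILE=$(mktemp /tmp/string_watch.XXXXXX) || exit 3
trap 'rm -f "$TMPFILE"' EXIT

# Real script: curl -s -o "$TMPFILE" "$URL" || { echo "Unable to connect to $URL"; exit 2; }
# Simulated page content so the sketch runs standalone:
printf 'some page text with StringToLookFor in it\n' > "$TMPFILE"

# grep -q is quiet: only the exit status matters
if ! grep -qi "$STRING" "$TMPFILE"; then
    # stdin comes from the pipe, so mail does not pause for input
    echo "String not found in $URL" | mail -s "Alert" "$RECIPIENT"
    exit 1
fi
echo "String found"
```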
Related
I have just been trying to write a script that simply checks whether the response contains "connected" or not
#!/bin/bash
cat control.txt | while read link   # control.txt contains http and https urls
do
if [[ $(wget --spider -S $link 2>&1 | grep "connected") =~~ *"connected"* ]];
then echo "OK";
else echo "FAIL";
fi
done
Output:
sh -x portcontrol.sh
portcontrol.sh[2]: Syntax error at line 4 : `=~' is not expected.
If I read your script correctly, you're retrieving the page, but ignoring its contents, and all you want is to see whether wget shows the string 'connected'.
If that is so, your code can be simplified as follows:
if wget --spider -S $link 2>&1 | grep "connected" > /dev/null
then
echo "OK";
else
echo "FAIL";
fi
You don't need to capture wget's output and run a regexp search on it; grep already returns 0 (success) or 1 (not found) when searching for the string you gave.
That return code can be used directly to control the if.
The output of grep is redirected to /dev/null so that it doesn't show up on the screen or in the script output.
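A self-contained illustration of that point; the printf mimics wget's "connected" line, so no network access is needed:

```shell
# grep exits 0 when the string is found; its stdout is thrown away.
if printf 'Connecting to host|192.0.2.1|:443... connected.\n' \
    | grep "connected" > /dev/null
then
    echo "OK"
else
    echo "FAIL"
fi
```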
If you simply want to see if the connection request succeeded, and the wget output is of the form:
Connecting to <hostname>|<ip_addr>|:<port>... connected.
it should be sufficient to just do:
if [[ $(wget --spider -S $link 2>&1 | grep -c " connected\.") -gt 0 ]];
then echo "OK";
else echo "FAIL";
fi
Checking exit code works too, but it depends on what your requirements really are.
I'm building a bash script to send an email based on the last command. I seem to be having difficulties: outside of a script the command works fine, but when I put it in a script it doesn't give the desired outcome.
Here is snippet of script:
grep -vFxf /path/to/first/file /path/to/second/file > /path/to/output/file.txt
if [ -s file.txt ] || echo "file is empty";
then
swaks -t "1@email.com" -f "norply@email.com" --header "Subject: sample" --body "Empty"
else
swaks -t "1@email.com" -f "norply@email.com" --header "subject: sample" --body "Not Empty"
fi
I ran the commands outside of the script and I can see that there is data, but when I add the commands within the script I get the empty output. Please advise. Thank you in advance.
Your condition will always be true: if [ -s file.txt ] fails, the exit status of the ||-list is the exit status of echo, which is almost guaranteed to be 0. You want to move the echo out of the condition and into the body of the if statement. (And to simplify further, just set the body text in a variable and call swaks once after the if completes.)
if [ -s file.txt ];
then
body="Not Empty"
else
echo "file is empty"
body="Empty"
fi
swaks -t "1@email.com" -f "norply@email.com" --header "subject: sample" --body "$body"
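To see why the original condition could never fail, run an ||-list against a deliberately empty file; this runnable sketch uses a throwaway temp file:

```shell
tmp=$(mktemp)          # freshly created, therefore empty
# [ -s ] is false, so the || falls through to echo, which succeeds,
# making the whole condition succeed.
if [ -s "$tmp" ] || echo "file is empty"
then
    echo "then-branch taken even though the file is empty"
fi
rm -f "$tmp"
```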
If the only reason you create file.txt is to check if it is empty or not, you can just put the grep command directly in the if condition:
if grep -qvFxf /path/to/first/file /path/to/second/file; then
body="Not Empty"
else
echo "No output"
body="Empty"
fi
swaks -t "1@email.com" -f "norply@email.com" --header "subject: sample" --body "$body"
I have a shell script to which I am passing arguments from a file. This file contains table names.
The script is working fine; I am able to execute the command for all the tables in the file.
shell script
#!/bin/bash
[ $# -ne 1 ] && { echo "Usage: $0 input_file"; exit 1; }
input_file=$1
TIMESTAMP=`date "+%Y-%m-%d"`
touch /home/$USER/logs/${TIMESTAMP}.success_log
touch /home/$USER/logs/${TIMESTAMP}.fail_log
success_logs=/home/$USER/logs/${TIMESTAMP}.success_log
failed_logs=/home/$USER/logs/${TIMESTAMP}.fail_log
#Function to get the status of the job creation
function log_status
{
status=$1
message=$2
if [ "$status" -ne 0 ]; then
echo "`date +\"%Y-%m-%d %H:%M:%S\"` [ERROR] $message [Status] $status : failed" | tee -a "${failed_logs}"
#echo "Please find the attached log file for more details"
#exit 1
else
echo "`date +\"%Y-%m-%d %H:%M:%S\"` [INFO] $message [Status] $status : success" | tee -a "${success_logs}"
fi
}
while read table ;do
sqoop job --exec $table > /home/$USER/logging/"${table}_log" 2>&1
g_STATUS=$?
log_status $g_STATUS "Sqoop job ${table}"
cp /home/$USER/logging/"${table}_log" /home/$USER/debug/`date "+%Y-%m-%d"`/logs/
done < ${input_file}
Now I want to send emails to my email address for failed jobs.
Requirements
1) Send an email for each failed job, i.e. if the status log has a failed job for one particular table, then I want an email sent out saying the job for that table has failed.
2) Or send out one email for all the jobs that have failed for one input file.
Which is the best approach? I would prefer the 2nd option; at least it will reduce the number of emails to go through.
But it would be better to know both methods.
edited function log_status
#Function to get the status of the job creation
function log_status
{
status=$1
message=$2
if [ "$status" -ne 0 ]; then
echo "`date +\"%Y-%m-%d %H:%M:%S\"` [ERROR] $message [Status] $status : failed" | tee -a "${failed_logs}"
mail -a mail.txt -s "This is the failed job log" user@example.com < /home/$USER/logs/${TIMESTAMP}.fail_log
#exit 1
else
echo "`date +\"%Y-%m-%d %H:%M:%S\"` [INFO] $message [Status] $status : success" | tee -a "${success_logs}"
fi
}
If I do this, will I get one email for all the failed jobs?
Also it's possible using the sendmail command:
sendmail user@example.com < email.txt
Using the mail command:
mail -a mail.txt -s "This is the failed job log" user@example.com
-a attaches a file, -s sets the subject
And with multiple attachments, via uuencode (each call takes the input file and the name to give it in the message):
(uuencode file1.txt file1.txt; uuencode file2.txt file2.txt) | mailx -s "Subject" user@example.com
Here is a simple snippet for sending an err.log:
mail -s "stderr logs for ps" "name@example.com" < err.log
This puts the subject in the first set of quotes and mails the file to the recipient in the second. In this case I suppose you would have a code block export the error logs via 2> to a file, which I called err.log for readability. I would place the snippet inside your script if you are not worried about being too vocal about the errors, or neatly outside of the script if you want to be more discreet about how often you email the recipient.
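For the second requirement above (one email per input file), a sketch: let the while loop finish, then mail the fail log once, and only if it is non-empty. The paths are taken from the script above; the recipient is a placeholder, and a working mail command is assumed.

```shell
# Run this after the `done < ${input_file}` line, not inside log_status.
TIMESTAMP=$(date "+%Y-%m-%d")
failed_logs=/home/$USER/logs/${TIMESTAMP}.fail_log

# -s is true only if the file exists and is non-empty,
# so no mail goes out when every job succeeded.
if [ -s "$failed_logs" ]; then
    mail -s "Sqoop failed jobs for ${TIMESTAMP}" user@example.com < "$failed_logs"
fi
```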
My requirement is to send an email if I find a string in a log file; however, I should send it only once. The shell script I have written is pasted below; however, it is sending repeated emails via the cron job even when the condition is not matched.
#!/bin/bash
filexists=""
lbdown=""
if [ -f "/var/run/.mailsenttoremedy" ];
then
filexists=true
else
filexists=false
echo filexists is $filexists
fi
if tail -1000 /usr/ibm/tivoli/common/CTGIM/logs/trace.log | grep "Root exception is java.net.NoRouteToHostException: No route to host"
then
echo error found
lbdown=true
echo lbdown status after if in tail is $lbdown
else
lbdown=false
echo lbdown status after else in tail is $lbdown
fi
if filexists=false && lbdown=true
then
{
mailx -S intrelay.sysco.com -r xxx@yyy.com -s "**DEV ALERT**Load Balancer Connection not Available" -v xxx@yyy.com < /dev/null
date > /var/run/.mailsenttoremedy
}
fi
if filexists=true && lbdown=true
then
{
echo MAIL ALREADY SENT
}
fi
if lbdown=false
then
rm -f /var/run/.mailsenttoremedy
fi
echo lbdown is $lbdown and filexists is $filexists
echo outputs are:
filexists is false
Root exception is java.net.NoRouteToHostException: No route to host
error found
lbdown status after if in tail is true
Null message body; hope that's ok
Mail Delivery Status Report will be mailed to <xxx@yyy.com>.
MAIL ALREADY SENT
lbdown is false and filexists is true
You should use proper test syntax in the if statements...
Bash format:
if [ "$(tail -1000 /usr/ibm/tivoli/common/CTGIM/logs/trace.log | grep "Root exception is java.net.NoRouteToHostException: No route to host")" != "" ];
then
if [ "$filexists" = "false" ] && [ "$lbdown" = "true" ];
then
if [ "$lbdown" = "false" ];
then
String comparisons in an if need to be surrounded by [ and ], at least the ones with multiple conditions. Here is a guide for if and some sample code if you are interested.
In addition, variables need a $ in front of them when you read them. Normally they are surrounded by { and } too.
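The core bug in the original script is worth spelling out: `if filexists=false` is an assignment, and an assignment used as a command always succeeds, so that branch always runs regardless of the variable's value. Compare:

```shell
filexists=true
# Assignment, not comparison: exit status 0, branch always taken.
if filexists=false; then
    echo "assignment branch taken"
fi
# A real string comparison: test brackets plus $ expansion.
if [ "$filexists" = "false" ]; then
    echo "comparison matched"
fi
```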
PS. You could use a better format for the post so people won't downvote you.
For others, the working code is:
#!/bin/bash
filexists=""
lbdown=""
if [ -f "/var/run/.mailsenttoremedy" ];
then
filexists=true
else
filexists=false
echo filexists is $filexists
fi
if tail -1000 /usr/ibm/tivoli/common/CTGIM/logs/trace.log | grep "nn"
then
echo error found
lbdown=true
echo lbdown status after if in tail is $lbdown
else
lbdown=false
echo lbdown status after else in tail is $lbdown
fi
if [[ "$filexists" = "false" && "$lbdown" = "true" ]];
then
mailx -S intrelay.sysco.com -r xxx@yyy.com -s "**DEV ALERT**Load Balancer Connection not Available" -v xxx@yyy.com < /dev/null
date > /var/run/.mailsenttoremedy
fi
if [[ "$filexists" = "true" && "$lbdown" = "true" ]];
then
echo MAIL ALREADY SENT
fi
if [ "$lbdown" = "false" ];
then
rm -f /var/run/.mailsenttoremedy
echo removing file
fi
echo lbdown is $lbdown and filexists is $filexists
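The sentinel-file pattern used above can be exercised in miniature. This sketch simulates two consecutive cron runs while the error condition persists; the mailx call is replaced by an echo so it runs anywhere:

```shell
sentinel=$(mktemp -u)    # unused path standing in for /var/run/.mailsenttoremedy
for cronrun in 1 2; do
    if [ ! -f "$sentinel" ]; then
        echo "run $cronrun: sending alert"   # mailx would go here
        date > "$sentinel"                   # mark the mail as sent
    else
        echo "run $cronrun: mail already sent"
    fi
done
rm -f "$sentinel"
```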
A bash script executes several commands.
After each command, how to:
if success: show a custom message in the console and append it to a log file
if error: show the error message in the console, append the error message to the log file and stop the entire script
Here is where I am:
log_file="log.txt"
output() {
echo "$@"
echo "$@" >> "$log_file" 2>&1
if [ $# -eq 0 ]
then
exit 1
fi
}
and then after each command:
if [ $? -eq 0 ]
then
output "Custom message"
else
output $?
fi
Which makes lots of repetitions…
You could create a "run" function to limit repetition:
run()
{
message="$1"
shift
eval "$@"
rc=$?   # capture the exit status before the test below overwrites $?
if [ "$rc" -eq 0 ]; then
output "$message"
else
output "$rc"
fi
}
And simply prefix every command with:
run "message" "command"
The command only needs to be quoted if it contains shell meta-expressions.
Here's a way to accomplish that; after each command to be tracked, add this line:
rc="$?"; if [ "$rc" -eq 0 ]; then echo "Custom message" 2>&1 | tee -a /folder/log; else echo "$rc" 2>&1 | tee -a /folder/log; fi
The following command will give you the exit status of your last run command.
$?
e.g. let's say you executed the "ls" command; if you then execute
echo $?
you will see the exit status 0, which means success. A non-zero exit status means failure.
You can use this in an if/else in your shell script and, based on the exit value, do whatever you need to do.
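A minimal demonstration; the second path is assumed not to exist:

```shell
ls /tmp > /dev/null
echo $?      # 0: the command succeeded
ls /no/such/path 2> /dev/null
echo $?      # non-zero: the command failed
```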