How to print a success message after the complete execution of a script? - linux

#! /bin/sh
DB_USER='aaa';
DB_PASSWD='aaa1';
DB_NAME='data';
TABLE='datalog';
mysql --local-infile=1 --user=$DB_USER --password=$DB_PASSWD $DB_NAME -e "load data local infile '/home/demo/data1.csv' into table datalog fields terminated by ',' lines terminated by '\n';" -e echo "script executed successfully " | date "+%h %e"
My aim is to print a success message after the above script executes successfully. I have written the above command to do so but it is printing the date, not the echo statement.

The arguments to mysql -e should be SQL commands, not shell script.
The solution is much simpler than what you are trying.
#!/bin/sh
# Terminate immediately if a command fails
set -e
# Don't put useless semicolons at the end of each assignment
# Use lower case for your private variables
db_user='aaa'
db_passwd='aaa1'
db_name='data'
table='datalog'
# Quote strings
mysql --local-infile=1 --user="$db_user" --password="$db_passwd" "$db_name" \
-e "load data local infile '/home/demo/data1.csv' into table $table fields terminated by ',' lines terminated by '\n';"
# Just use date
# Print diagnostics to standard error, not standard output
date "+%h %e script executed successfully" >&2
The use of set -e is somewhat cumbersome, but looks like the simplest solution to your basic script. For a more complex script, maybe instead use something like
mysql -e "... stuff ..." || exit
to terminate on failure of this individual command, but allow other failures in the script. Perhaps see also What does set -e mean in a bash script?
If you want to preserve the exit code from mysql and always print a message to show what happened, probably take out the set -e and do something like
if mysql -e "... whatever ..."
then
date "+%h %e script completed successfully" >&2
else
rc=$?
date "+%h %e script failed: $rc" >&2
exit $rc
fi
As an aside, date does not read its standard input for anything, so you can't pipe echo to it. The script above simply uses only date, but here are a few different ways you could solve that.
# Merge this output with the next output
date "+%h %e" | tr '\n' ' ' >&2
echo script executed successfully >&2
or
# Use a command substitution to interpolate the output from date into
# the arguments for echo
echo "$(date "+%h %e") script executed successfully >&2
For more complex situations, maybe also look into xargs, though it's absolutely horribly overkill here.
date "+%h %e" | xargs -I {} echo script executed successfully {} >&2
If you use only date, you will need to be mindful of your use of % in any literal message; to print a literal per-cent sign from date, use %%.
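For example, a quick sketch (the message text is arbitrary):
date "+%h %e import finished, 100%% of rows loaded" >&2   # %% prints one literal %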
As a further stylistic aside, you should avoid upper case for your private variables.
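The usual reason: upper-case names are conventionally reserved for environment and special shell variables, so an all-caps private variable risks clobbering something the shell relies on. A contrived sketch of the hazard:
PATH='aaa'   # oops: this clobbers the command search path,
ls           # so this now fails with "command not found"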

Related

Failure of bash script called via command substitution does not stop parent script

I have a bash script (exp1.sh)
#!/bin/bash
set -e
for row in $(./exp2.sh);
do
echo $?
echo outer=$row
done
echo "continuing"
which invokes another bash script exp2.sh.
#!/bin/bash
echo "A"
echo "B"
exit 1
I want the first script to fail fast when second script exits with error. (The second script actually reads rows from database to stdout, in case of database connectivity error it returns nonzero exit code.)
I expect the set -e option to cause premature termination of the first script, exp1.sh. However, from the script output it seems that even though the exit code from the second script is passed to the first script, the loop is performed and the script continues beyond the loop:
1
outer=A
0
outer=B
continuing
I want neither the loop nor any command after the loop to be executed. I understand that the second script had passed some data to the first script before it exited with an error, so the loop processed that data. I don't understand why the loop didn't stop then, and what the correct fix is.
The best thing I could figure out is to store the result of the command substitution in an array, which works.
a=$(./exp2.sh)
# execution won't get here when error
echo $?
for row in $a
Is there a way to do this without anything being executed? I played with inherit_errexit as I found here but with no success.
Your idea for a solution is good, but a=$(./exp2.sh) doesn't populate an array: it populates a string, and for row in $a then leaves the contents of that string unquoted and so open to interpretation by the shell. You can do the following to make/use a as an array, if the output of exp2.sh is as simple as you show:
a=( $( ./exp2.sh ) )
(( $? == 0 )) || exit 1
echo "$?"
for row in "${a[#]}"
but rather than a=( $(./exp2.sh) ) which has some caveats, it'd be more robust to do:
IFS=$'\n' read -r -d '' -a a < <( ./exp2.sh && printf '\0' )
or:
readarray -t a < <( ./exp2.sh )
See Reading output of a command into an array in Bash and How to split a string into an array in Bash?
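To also meet the fail-fast requirement, here is one sketch (untested against your real exp2.sh): capture the output into a string first, where the exit status is easy to check, and only then split it into an array:
#!/bin/bash
out=$(./exp2.sh) || exit     # stop right here if exp2.sh fails; its exit code is preserved
readarray -t a <<< "$out"    # split the captured output on newlines
for row in "${a[@]}"; do
echo "outer=$row"
done
echo "continuing"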

Date command as a variable in a bash script. Needs to be invoked each time instead of during variable declaration

I have a bash script and at certain points I am using echo to put some messages in a log file. The problem that I have is related to the DATE variable which will be static throughout the entire execution of the script.
I have this basic script below to illustrate the problem:
#!/bin/bash
DATE=`date +"%Y-%m-%dT%H:%M:%S%:z"`
echo "script started at $DATE"
echo "doing something"
sleep 2
echo "script finished at $DATE"
If I execute this script, the output of the $DATE variable is the same in both lines. Is there some bash magic that could nicely resolve this without having to replace $DATE with the command itself on each line?
Thanks in advance
Newer versions of the bash printf builtin have support for generating datetime stamps without the need to spawn a subprocess to call date:
$ help printf
...snip...
Options:
-v var assign the output to shell variable VAR rather than
display it on the standard output
...snip...
In addition to the standard format specifications described in printf(1),
printf interprets:
%b expand backslash escape sequences in the corresponding argument
%q quote the argument in a way that can be reused as shell input
%(fmt)T output the date-time string resulting from using FMT as a format
string for strftime(3)
... snip ...
Instead of spawning a subprocess to call date, e.g.:
logdt=`date +"%Y-%m-%dT%H:%M:%S:%z"`
The same can be accomplished via printf -v by wrapping the desired format in %(...)T, e.g.:
printf -v logdt '%(%Y-%m-%dT%H:%M:%S:%z)T'
NOTE: assuming %:z should be :%z
Assuming you'll be tagging a lot of lines with datetime stamps, the savings from eliminating the subprocess date calls could be huge.
Running a test of 1000 datetime stamp generations:
$ time for ((i=1;i<=1000;i++)); do printf -v logdt '%(...)T'; done
$ time for ((i=1;i<=1000;i++)); do logdt=$(date ...); done
Timings for printf -v logdt '%(...)T':
real 0m0.182s # ~130 times faster than $(date ...)
user 0m0.171s
sys 0m0.000s
Timings for logdt=$(date ...):
real 0m24.443s # ~130 times slower than printf -v
user 0m5.533s
sys 0m16.724s
With bash version 4.3+, you can use the builtin printf to format datetimes. -1 below is a magic value that means "now".
#!/bin/bash
datefmt='%Y-%m-%dT%H:%M:%S%z'
printf "script started at %($datefmt)T\n" -1
echo "doing something"
sleep 2
printf "script finished at %($datefmt)T\n" -1
bash didn't recognize %:z for me.
This can help you:
#!/bin/bash
echo "script started at $(date +'%Y-%m-%dT%H:%M:%S%:z')"
echo "doing something"
sleep 2
echo "script finished at $(date +'%Y-%m-%dT%H:%M:%S%:z')"
You might want to create an alias if calling the full command looks clumsy to you.
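Since aliases are not expanded in non-interactive shells by default, a small function is usually the better way to shorten this inside a script. A sketch (the function name now is my own choice):
now() { date +'%Y-%m-%dT%H:%M:%S%:z'; }   # runs date freshly on every call
echo "script started at $(now)"
sleep 2
echo "script finished at $(now)"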

Loop ends prematurely when executing a command via SSH in a Bash function [duplicate]

I have the following shell script. The purpose is to loop thru each line of the target file (whose path is the input parameter to the script) and do work against each line. Now, it seems to work only with the very first line in the target file and to stop after that line gets processed. Is there anything wrong with my script?
#!/bin/bash
# SCRIPT: do.sh
# PURPOSE: loop thru the targets
FILENAME=$1
count=0
echo "proceed with $FILENAME"
while read LINE; do
let count++
echo "$count $LINE"
sh ./do_work.sh $LINE
done < $FILENAME
echo "\ntotal $count targets"
In do_work.sh, I run a couple of ssh commands.
The problem is that do_work.sh runs ssh commands and by default ssh reads from stdin which is your input file. As a result, you only see the first line processed, because the command consumes the rest of the file and your while loop terminates.
This happens not just for ssh, but for any command that reads stdin, including mplayer, ffmpeg, HandBrakeCLI, httpie, brew install, and more.
To prevent this, pass the -n option to your ssh command to make it read from /dev/null instead of stdin. Other commands have similar flags, or you can universally use < /dev/null.
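For example, a minimal sketch of the fix inside do_work.sh (the hostname and remote command are placeholders, since the question doesn't show them):
#!/bin/sh
# -n takes ssh's stdin from /dev/null, so it cannot swallow the lines
# that the caller's `while read` loop still needs.
ssh -n user@remotehost "process_target '$1'"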
A very simple and robust workaround is to change the file descriptor from which the read command receives input.
This is accomplished by two modifications: the -u argument to read, and changing the redirection operator for $FILENAME from < to 9<.
In BASH, the default file descriptor values (i.e. values for -u in read) are:
0 = stdin
1 = stdout
2 = stderr
So just choose some other unused file descriptor, like 9 just for fun.
Thus, the following would be the workaround:
while read -u 9 LINE; do
let count++
echo "$count $LINE"
sh ./do_work.sh $LINE
done 9< $FILENAME
Notice the two modifications:
read becomes read -u 9
< $FILENAME becomes 9< $FILENAME
As a best practice, I do this for all while loops I write in BASH.
If you have nested loops using read, use a different file descriptor for each one (9,8,7,...).
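A sketch of what that looks like with two nested loops (file names hypothetical):
while read -u 9 outer; do
while read -u 8 inner; do
echo "$outer / $inner"
done 8< "$file2"
done 9< "$file1"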
More generally, a workaround which isn't specific to ssh is to redirect standard input for any command which might otherwise consume the while loop's input.
while read -r line; do
((count++))
echo "$count $line"
sh ./do_work.sh "$line" </dev/null
done < "$filename"
The addition of </dev/null is the crucial point here, though the corrected quoting is also somewhat important for robustness; see also When to wrap quotes around a shell variable?. You will want to use read -r unless you specifically require the slightly odd legacy behavior you get for backslashes in the input without -r. Finally, avoid upper case for your private variables.
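A quick illustration of the backslash difference:
$ echo 'a\tb' | { read line; echo "$line"; }
atb     # without -r, read consumes the backslash
$ echo 'a\tb' | { read -r line; echo "$line"; }
a\tb    # with -r, the input comes through literally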
Another workaround of sorts which is somewhat specific to ssh is to make sure any ssh command has its standard input tied up, e.g. by changing
ssh otherhost some commands here
to instead read the commands from a here document, which conveniently (for this particular scenario) ties up the standard input of ssh for the commands:
ssh otherhost <<'____HERE'
some commands here
____HERE
The ssh -n option prevents checking the exit status of ssh when using a heredoc while piping its output to another program, so using /dev/null as stdin is preferred:
#!/bin/bash
while read ONELINE ; do
ssh ubuntu@host_xyz </dev/null <<EOF 2>&1 | filter_pgm
echo "Hi, $ONELINE. You come here often?"
process_response_pgm
EOF
rc=${PIPESTATUS[0]}   # save ssh's exit status before another command overwrites it
if [ $rc -ne 0 ] ; then
echo "aborting loop"
exit $rc
fi
done < input_list.txt
This was happening to me because I had set -e and a grep in a loop was returning with no output (which gives a non-zero error code).
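A minimal reproduction of that gotcha (file name hypothetical):
#!/bin/bash
set -e
grep needle haystack.txt    # no match -> grep exits 1 -> set -e stops the script here
echo "never reached when there is no match"
# guard it with `grep needle haystack.txt || true` if the loop should keep going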

How to output the start and stop datetime of shell script (but no other log)?

I am still very new to shell scripting (bash)...but I have written my first one and it is running as expected.
What I am currently doing is writing to the log with sh name-of-script.sh >> /cron.log 2>&1. However this writes everything out. It was great for debugging but now I don't need that.
I now only want to see the start date and time along with the end date and time. I would still like to write to cron.log, but just the dates as mentioned above. I can't seem to figure out how to do that, though. Can someone point me in the right direction, either from within the script or similar to what I've done above?
A simple approach would be to add something like:
echo `date`: Myscript starts
to the top of your script and
echo `date`: Myscript ends
to the bottom and
echo `date`: Myscript exited because ...
wherever it exits with an error.
The backticks around date (not normal quotes) cause the output of the date command to be interpolated into the echo statement.
You could wrap this in functions and so forth to make it neater, or use date -u to print in UTC, but this should get you going.
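For instance, a sketch of such a wrapper:
log() {
echo "`date`: $*"    # prepend a timestamp to whatever message is passed in
}
log "Myscript starts"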
You ask in the comments how you would avoid the rest of the output appearing.
One option would be to redirect the output and error of everything else in the script to /dev/null, by adding '>/dev/null 2>&1' to every line that outputs something, or otherwise silence them. E.g.
if fgrep myuser /etc/passwd ; then
dosomething
fi
could be written:
if fgrep myuser /etc/passwd >/dev/null 2>&1 ; then
dosomething
fi
though
if fgrep -q myuser /etc/passwd ; then
dosomething
fi
is more efficient in this case.
Another option would be to put the date wrapper in the crontab entry. Something like:
0 * * * * sh -c 'echo `date`: myscript starting ; /path/to/myscript >/dev/null 2>&1; echo `date`: myscript finished'
Lastly, you could use a subshell. Put the body of your script into a function, and then call that in a subshell with output redirected.
#!/bin/bash
do_it ()
{
... your script here ...
}
echo `date`: myscript starting
( do_it ) >/dev/null 2>&1
echo `date`: myscript finished
Try the following:
TMP=$(date); name-of-script.sh; echo "$TMP-$(date)"
or with formatted date
TMP=$(date +%Y%m%d.%H%M%S); name-of-script.sh; echo "$TMP-$(date +%Y%m%d.%H%M%S)"

Bash script does not continue to read the next line of file

I have a shell script that saves the output of a command that is executed to a CSV file. It reads the commands it has to execute from a file, which is in this format:
ffmpeg -i /home/test/videos/avi/418kb.avi /home/test/videos/done/418kb.flv
ffmpeg -i /home/test/videos/avi/1253kb.avi /home/test/videos/done/1253kb.flv
ffmpeg -i /home/test/videos/avi/2093kb.avi /home/test/videos/done/2093kb.flv
You can see each line is an ffmpeg command. However, the script just executes the first line. Just a minute ago it was doing nearly all of the commands. It was missing half for some reason. I edited the text file that contained the commands and now it will only do the first line. Here is my bash script:
#!/bin/bash
# Shell script utility to read a file line by line.
# Once line is read it will run processLine() function
#Function processLine
processLine(){
line="$#"
START=$(date +%s.%N)
eval $line > /dev/null 2>&1
END=$(date +%s.%N)
DIFF=$(echo "$END - $START" | bc)
echo "$line, $START, $END, $DIFF" >> file.csv 2>&1
echo "It took $DIFF seconds"
echo $line
}
# Store file name
FILE=""
# get file name as command line argument
# Else read it from standard input device
if [ "$1" == "" ]; then
FILE="/dev/stdin"
else
FILE="$1"
# make sure file exist and readable
if [ ! -f $FILE ]; then
echo "$FILE : does not exists"
exit 1
elif [ ! -r $FILE ]; then
echo "$FILE: can not read"
exit 2
fi
fi
# read $FILE using the file descriptors
# Set loop separator to end of line
BAKIFS=$IFS
IFS=$(echo -en "\n\b")
exec 3<&0
exec 0<$FILE
while read line
do
# use $line variable to process line in processLine() function
processLine $line
done
exec 0<&3
# restore $IFS which was used to determine what the field separators are
BAKIFS=$ORIGIFS
exit 0
Thank you for any help.
UPDATE 2
It's the ffmpeg commands rather than the shell script that aren't working. But I should have been using just "\b" as Paul pointed out. I am also making use of Johannes's shorter script.
I think that should do the same and seems to be correct:
#!/bin/bash
CSVFILE=/tmp/file.csv
cat "$#" | while read line; do
echo "Executing '$line'"
START=$(date +%s)
eval $line &> /dev/null
END=$(date +%s)
let DIFF=$END-$START
echo "$line, $START, $END, $DIFF" >> "$CSVFILE"
echo "It took ${DIFF}s"
done
no?
ffmpeg reads STDIN and exhausts it. The solution is to call ffmpeg with:
ffmpeg </dev/null ...
See the detailed explanation here: http://mywiki.wooledge.org/BashFAQ/089
Update:
Since ffmpeg version 1.0, there is also the -nostdin option, so this can be used instead:
ffmpeg -nostdin ...
I just had the same problem.
I believe ffmpeg is responsible for this behaviour.
My solution for this problem:
1) Call ffmpeg with an "&" at the end of your ffmpeg command line
2) Since now the script will not wait till completion of the ffmpeg process,
we have to prevent our script from starting several ffmpeg processes.
We achieve this goal by delaying the loop pass while there is at least
one running ffmpeg process.
#!/bin/bash
cat FileList.txt |
while read VideoFile; do
<place your ffmpeg command line here> &
FFMPEGStillRunning="true"
while [ "$FFMPEGStillRunning" = "true" ]; do
Process=$(ps -C ffmpeg | grep -o -e "ffmpeg" )
if [ -n "$Process" ]; then
FFMPEGStillRunning="true"
else
FFMPEGStillRunning="false"
fi
sleep 2s
done
done
I would add echos before and after the eval to see what it's about to eval (in case it's treating the whole file as one big long line) and after (in case one of the ffmpeg commands is taking forever).
Unless you are planning to read something from standard input after the loop, you don't need to preserve and restore the original standard input (though it is good to see you know how).
Similarly, I don't see a reason for dinking with IFS at all. There is certainly no need to restore the value of IFS before exit - this is a real shell you are using, not a DOS BAT file.
When you do:
read var1 var2 var3
the shell assigns the first field to $var1, the second to $var2, and the rest of the line to $var3. In the case where there's just one variable - your script, for example - the whole line goes into the variable, just as you want it to.
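A quick demonstration of that splitting:
$ read var1 var2 var3 <<< 'one two three four'
$ echo "$var3"
three four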
Inside the process line function, you probably don't want to throw away error output from the executed command. You probably do want to think about checking the exit status of the command. The echo with error redirection is ... unusual, and overkill. If you're sufficiently sure that the commands can't fail, then go ahead with ignoring the error. Is the command 'chatty'; if so, throw away the chat by all means. If not, maybe you don't need to throw away standard output, either.
The script as a whole should probably diagnose when it is given multiple files to process since it ignores the extraneous ones.
You could simplify your file handling by using just:
cat "$#" |
while read line
do
processline "$line"
done
The cat command automatically reports errors (and continues after them) and processes all the input files, or reads standard input if there are no arguments left. The use of double quotes around the variable means that it is passed as a single unit (and therefore unparsed into separate words).
The use of date and bc is interesting - I'd not seen that before.
All in all, I'd be looking at something like:
#!/bin/bash
# Time execution of commands read from a file, line by line.
# Log commands and times to CSV logfile "file.csv"
processLine(){
START=$(date +%s.%N)
eval "$#" > /dev/null
STATUS=$?
END=$(date +%s.%N)
DIFF=$(echo "$END - $START" | bc)
echo "$line, $START, $END, $DIFF, $STATUS" >> file.csv
echo "${DIFF}s: $STATUS: $line"
}
cat "$#" |
while read line
do
processLine "$line"
done
