How to output the start and stop datetime of shell script (but no other log)? - linux

I am still very new to shell scripting (bash)...but I have written my first one and it is running as expected.
What I am currently doing is writing to the log with sh name-of-script.sh >> /cron.log 2>&1. However, this writes everything out. It was great for debugging, but now I don't need that.
I now only want to see the start date and time along with the end date and time.
I would still like to write to cron.log, but just the dates as mentioned above. But I can't seem to figure out how to do that. Can someone point me in the right direction to do this, either from within the script or similar to what I've done above?

A simple approach would be to add something like:
echo `date`: Myscript starts
to the top of your script and
echo `date`: Myscript ends
to the bottom and
echo `date`: Myscript exited because ...
wherever it exits with an error.
The backticks around date (not normal quotes) cause the output of the date command to be interpolated into the echo statement.
You could wrap this in functions and so forth to make it neater, or use date -u to print in UTC, but this should get you going.
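For instance, a minimal wrapper (the function name log here is just an illustration, not anything standard) could look like:
#!/bin/bash
# log: prefix a message with the current timestamp
log () {
    echo `date`: "$*"
}

log "Myscript starts"
# ... your script here ...
log "Myscript ends"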
You ask in the comments how you would avoid the rest of the output appearing.
One option would be to redirect the output and error of everything else in the script to /dev/null, by adding '>/dev/null 2>&1' to every line that outputs something, or otherwise silencing them. E.g.
if fgrep myuser /etc/passwd ; then
    dosomething
fi
could be written:
if fgrep myuser /etc/passwd >/dev/null 2>&1 ; then
    dosomething
fi
though
if fgrep -q myuser /etc/passwd ; then
    dosomething
fi
is more efficient in this case.
Another option would be to put the date wrapper in the crontab entry. Something like:
0 * * * * sh -c 'echo `date`: myscript starting ; /path/to/myscript >/dev/null 2>&1; echo `date`: myscript finished'
Lastly, you could use a subshell. Put the body of your script into a function, and then call that in a subshell with output redirected.
#!/bin/bash
do_it ()
{
    ... your script here ...
}
echo `date`: myscript starting
( do_it ) >/dev/null 2>&1
echo `date`: myscript finished
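A related idiom, shown here only as a sketch, is to print the end timestamp from a trap on EXIT, so it appears even if the script aborts partway through:
#!/bin/bash
echo `date`: myscript starting
trap 'echo `date`: myscript finished' EXIT
( do_it ) >/dev/null 2>&1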

Try the following:
TMP=$(date); name-of-script.sh; echo "$TMP-$(date)"
or with a formatted date:
TMP=$(date +%Y%m%d.%H%M%S); name-of-script.sh; echo "$TMP-$(date +%Y%m%d.%H%M%S)"

Related

Command to redirect output to console and to a file at the same time works fine in bash. But how do I make it work in Korn shell (ksh)?

All my scripts run on Korn shell, so I can't change them to bash just for this particular command to work. The command that works in bash is:
exec > >(tee -a $LOGFILE) 2>&1
In the code beneath I use the variable logfile (lowercase is better for your own variable names).
You can try something like
touch "${logfile}"
tail -f "${logfile}"&
tailpid=$!
trap 'kill -9 ${tailpid}' EXIT INT TERM
exec 1>"${logfile}" 2>&1
A not too unreasonable technique is to re-exec the shell with output to tee. That is, at the top of the script, do something like:
#!/bin/sh
test -z "$REXEC" && { REXEC=1 exec "$0" "$#" | tee -a $LOGFILE; exit; }

Shell Script is not generating the logs file

I am trying to capture netstat command logs every minute. I have written a script which runs in a loop. But my script only executes up to the "capturing logs" statement in the test.sh code.
test.sh
#!/bin/sh
export TODAY=`date`
export i=0
while [ true ]
do
    echo "capturing logs" $i
    sh test1.sh > test$i.log
    echo "sleeping for 1m"
    sleep 60
    i=$((i+1))
done
test1.sh
#!/bin/sh
netstat -l 5575 | while IFS= read -r line; do printf '[%s] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"; done
The output from the above script is:
capturing logs
(If I press Ctrl-C, it moves on and displays the "sleeping for 1m" statement, and I need to press Ctrl-C again when it reaches the "capturing logs" statement.)
sh test1.sh > test$i.log
waits for test1.sh to finish, which probably takes far too long to complete.
Try to execute test1.sh in another tty like
setsid sh -c 'exec [launch the script] <> /dev/tty[number_of_tty] >&0 2>&1'
and let me know.
Be careful not to run a lot of processes on the same tty. You can play with [number_of_tty] to avoid this.
It may or may not solve the problem, but it's worth trying.
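If your system has the GNU coreutils timeout command (an assumption about your environment), another approach worth trying is to bound each capture to just under a minute, so test1.sh is stopped before the next iteration. A sketch of the loop body in test.sh:
    echo "capturing logs" $i
    timeout 55 sh test1.sh > test$i.log   # stop the capture after 55 seconds
    echo "sleeping for 1m"
    sleep 60
    i=$((i+1))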

How to get watch to run a bash script with quotes

I'm trying to have a lightweight memory profiler for the matlab jobs that are run on my machine. There is either one or zero matlab job instance, but its process id changes frequently (since it is actually called by another script).
So here is the bash script that I put together to log memory usage:
#!/bin/bash
pid=`ps aux | grep '[M]ATLAB' | awk '{print $2}'`
if [[ -n $pid ]]
then
    \grep VmSize /proc/$pid/status
else
    echo "no pid"
fi
when I run this script in bash like this:
./script.sh
it works fine, giving me the following result:
VmSize: 1289004 kB
which is exactly what I want.
Now, I want to run this periodically. So I run it with watch, like this:
watch ./script.sh
But in this case I only receive:
no pid
Please note that I know the matlab job is still running, because I can see it with the same pid in top, and besides, I know each matlab job takes several hours to finish.
I'm pretty sure that something is wrong with the quotes I have when setting pid. I just can't figure out how to fix it. Anyone knows what I'm doing wrong?
PS.
In the man page of watch, it says that commands are executed by sh -c. I did run my script like sh -c ./script and it works just fine, but watch doesn't.
Why don't you use a loop with the sleep command instead?
For example:
#!/bin/bash
while [ "1" ]
do
    # re-read the pid on each pass, since the MATLAB pid changes frequently
    pid=`ps aux | grep '[M]ATLAB' | awk '{print $2}'`
    if [[ -n $pid ]]
    then
        \grep VmSize /proc/$pid/status
    else
        echo "no pid"
    fi
    sleep 10
done
Here the script sleeps (waits) for 10 seconds. You can set the interval you need by changing the sleep command. For example, to make the script sleep for an hour, use sleep 1h.
To exit the script, press Ctrl-C.
This
pid=`ps aux | grep '[M]ATLAB' | awk '{print $2}'`
could be changed to:
pid=$(pidof MATLAB)
I have no idea why it's not working in watch but you could use a cron job and make the script log to a file like so:
#!/bin/bash
pid=$(pidof MATLAB) # Just to follow previously given advice :)
if [[ -n $pid ]]
then
    echo "$(date): $(\grep VmSize /proc/$pid/status)" >> logfile
else
    echo "$(date): no pid" >> logfile
fi
You'd of course have to create logfile with touch.
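A matching crontab entry to take one sample per minute could then be (the script path is hypothetical):
* * * * * /home/user/matlab-memlog.sh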
You might try just running the ps command in watch. I have had issues in the past with watch chopping lines when they get too long.
It can be fixed by making the terminal you are running the command from wider, or by changing the COLUMNS variable like this (you may need to adjust the 160 to your liking):
export COLUMNS=160;
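Alternatively, with a procps-style ps (an assumption about your platform), you can sidestep the truncation inside the script itself: doubling the w flag asks ps for unlimited line width, so the MATLAB string is never chopped off no matter how watch sizes the terminal:
pid=`ps auxww | grep '[M]ATLAB' | awk '{print $2}'`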

Possible to get all outputs to stdout in a script?

I would like to log all error messages that the commands in a Bash script produce.
The problem is that if I have to add
E=$( ... 2>&1 ); echo $E >> $LOG
to all commands, then the script will become quite hard to read.
Question
Is it somehow possible to set this globally, so all STDERR becomes STDOUT?
Just start your script with this:
exec 2>&1
You can do things like:
#!/bin/sh
test -z "$DOLOGGING" && { DOLOGGING=no exec $0 "${#}" 2>&1 | tee log-file; exit; }
...
to duplicate all output/errors to log-file. Although it seems I misread the question: you probably just want to add exec 2>&1 >/dev/null to the top of your script to print all errors to stdout and discard all normal output.
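Note that the order of those two redirections matters; a small demonstration of what ends up where:
#!/bin/sh
exec 2>&1 >/dev/null
# '2>&1' runs first: stderr now points at the original stdout (the terminal)
# '>/dev/null' runs second: stdout alone is re-pointed at /dev/null
echo "this normal output is discarded"
echo "this error message is still visible" >&2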

Bash script does not continue to read the next line of file

I have a shell script that saves the output of each command it executes to a CSV file. It reads the commands it has to execute from a file in this format:
ffmpeg -i /home/test/videos/avi/418kb.avi /home/test/videos/done/418kb.flv
ffmpeg -i /home/test/videos/avi/1253kb.avi /home/test/videos/done/1253kb.flv
ffmpeg -i /home/test/videos/avi/2093kb.avi /home/test/videos/done/2093kb.flv
You can see each line is an ffmpeg command. However, the script just executes the first line. Just a minute ago it was doing nearly all of the commands. It was missing half for some reason. I edited the text file that contained the commands and now it will only do the first line. Here is my bash script:
#!/bin/bash
# Shell script utility to read a file line by line.
# Once a line is read it will run the processLine() function

# Function processLine
processLine(){
    line="$@"
    START=$(date +%s.%N)
    eval $line > /dev/null 2>&1
    END=$(date +%s.%N)
    DIFF=$(echo "$END - $START" | bc)
    echo "$line, $START, $END, $DIFF" >> file.csv 2>&1
    echo "It took $DIFF seconds"
    echo $line
}

# Store file name
FILE=""
# Get file name as command line argument,
# else read it from the standard input device
if [ "$1" == "" ]; then
    FILE="/dev/stdin"
else
    FILE="$1"
    # make sure the file exists and is readable
    if [ ! -f $FILE ]; then
        echo "$FILE : does not exist"
        exit 1
    elif [ ! -r $FILE ]; then
        echo "$FILE : can not be read"
        exit 2
    fi
fi

# read $FILE using the file descriptors
# Set loop separator to end of line
BAKIFS=$IFS
IFS=$(echo -en "\n\b")
exec 3<&0
exec 0<$FILE
while read line
do
    # use the $line variable to process the line in the processLine() function
    processLine $line
done
exec 0<&3
# restore $IFS which was used to determine what the field separators are
IFS=$BAKIFS
exit 0
Thank you for any help.
UPDATE 2
It's the ffmpeg commands rather than the shell script that aren't working. But I should have been using just "\b" as Paul pointed out. I am also making use of Johannes's shorter script.
I think this should do the same and seems to be correct:
#!/bin/bash
CSVFILE=/tmp/file.csv
cat "$@" | while read line; do
    echo "Executing '$line'"
    START=$(date +%s)
    eval $line &> /dev/null
    END=$(date +%s)
    let DIFF=$END-$START
    echo "$line, $START, $END, $DIFF" >> "$CSVFILE"
    echo "It took ${DIFF}s"
done
no?
ffmpeg reads STDIN and exhausts it. The solution is to call ffmpeg with:
ffmpeg </dev/null ...
See the detailed explanation here: http://mywiki.wooledge.org/BashFAQ/089
Update:
Since ffmpeg version 1.0, there is also the -nostdin option, so this can be used instead:
ffmpeg -nostdin ...
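Applied to the script in the question, the fix is a single redirection on the eval line inside processLine (or, equivalently, adding -nostdin to each command in the input file):
eval $line </dev/null > /dev/null 2>&1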
I just had the same problem.
I believe ffmpeg is responsible for this behaviour.
My solution for this problem:
1) Call ffmpeg with an "&" at the end of your ffmpeg command line.
2) Since the script will now not wait for the ffmpeg process to complete, we have to prevent our script from starting several ffmpeg processes at once. We achieve this by delaying each loop pass while there is at least one running ffmpeg process.
#!/bin/bash
cat FileList.txt |
while read VideoFile; do
    <place your ffmpeg command line here> &
    FFMPEGStillRunning="true"
    while [ "$FFMPEGStillRunning" = "true" ]; do
        Process=$(ps -C ffmpeg | grep -o -e "ffmpeg")
        if [ -n "$Process" ]; then
            FFMPEGStillRunning="true"
        else
            FFMPEGStillRunning="false"
        fi
        sleep 2s
    done
done
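As an aside, backgrounding ffmpeg also sidesteps the stdin problem, because a non-interactive POSIX shell gives background jobs their stdin from /dev/null. Given that, the shell's built-in wait achieves the same serialization without polling ps; a sketch:
#!/bin/bash
cat FileList.txt |
while read VideoFile; do
    <place your ffmpeg command line here> &
    wait $!    # block until this ffmpeg finishes before reading the next line
done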
I would add echo statements before the eval, to see what it's about to eval (in case it's treating the whole file as one big long line), and after it (in case one of the ffmpeg commands is taking forever).
Unless you are planning to read something from standard input after the loop, you don't need to preserve and restore the original standard input (though it is good to see you know how).
Similarly, I don't see a reason for dinking with IFS at all. There is certainly no need to restore the value of IFS before exit - this is a real shell you are using, not a DOS BAT file.
When you do:
read var1 var2 var3
the shell assigns the first field to $var1, the second to $var2, and the rest of the line to $var3. In the case where there's just one variable - your script, for example - the whole line goes into the variable, just as you want it to.
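A quick demonstration of that splitting behaviour:
echo "one two three four" | { read var1 var2 var3; echo "1=$var1 2=$var2 3=$var3"; }
# prints: 1=one 2=two 3=three four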
Inside the processLine function, you probably don't want to throw away error output from the executed command. You probably do want to think about checking the exit status of the command. The echo with error redirection is ... unusual, and overkill. If you're sufficiently sure that the commands can't fail, then go ahead and ignore the errors. Is the command 'chatty'? If so, throw away the chat by all means. If not, maybe you don't need to throw away standard output, either.
The script as a whole should probably diagnose when it is given multiple files to process since it ignores the extraneous ones.
You could simplify your file handling by using just:
cat "$#" |
while read line
do
processline "$line"
done
The cat command automatically reports errors (and continues after them) and processes all the input files, or reads standard input if there are no arguments left. The use of double quotes around the variable means that it is passed as a single unit (and therefore unparsed into separate words).
The use of date and bc is interesting - I'd not seen that before.
All in all, I'd be looking at something like:
#!/bin/bash
# Time execution of commands read from a file, line by line.
# Log commands and times to CSV logfile "file.csv"

processLine(){
    START=$(date +%s.%N)
    eval "$@" > /dev/null
    STATUS=$?
    END=$(date +%s.%N)
    DIFF=$(echo "$END - $START" | bc)
    echo "$line, $START, $END, $DIFF, $STATUS" >> file.csv
    echo "${DIFF}s: $STATUS: $line"
}

cat "$@" |
while read line
do
    processLine "$line"
done
