How to set a timer for each script that runs - Linux

Dear friends and colleagues, it's lovely to be here on Stack Overflow.
Under /tmp/scripts we have around 128 scripts that perform various tests, such as:
verify_dns.sh
verify_ip.sh
verify_HW.sh
and so on.
We decided to run all the scripts under that folder - /tmp/scripts - with the following code:
script_name=$(find /tmp/scripts -maxdepth 1 -type f -name "verify_*" -exec basename {} \;)
for i in $script_name
do
    echo running the script - $i
    /tmp/scripts/$i
done
The output looks like this:
running the script - verify_dns.sh
running the script - verify_ip.sh
.
.
What we want to add is the ability to also print how long each script takes to run, as in the following example:
running the script - verify_dns.sh - 16.3 Sec
running the script - verify_ip.sh - 2.5 Sec
.
.
My question: how can we add this ability to the code above?
Note - the OS version is Red Hat 7.2.

For counting whole seconds you can use the SECONDS builtin:
SECONDS=0
your_bash_script
echo $SECONDS
For finer-grained measurement:
start=$(date +'%s%N')
your_shell_script.sh
echo "It took $((($(date +'%s%N') - $start)/1000000)) milliseconds"
Or use the shell's built-in time keyword:
time your_shell_script.sh
Edit: examples adapted to the OP's loop:
for i in $script_name
do
    echo running the script - $i
    start=$(date +'%s%N')
    /tmp/scripts/$i
    echo "It took $((($(date +'%s%N') - $start)/1000000)) milliseconds"
done
for i in $script_name
do
    echo running the script - $i
    time /tmp/scripts/$i
done
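Building on the nanosecond snippet, a minimal sketch (my own combination, not part of the answer above) that prints the timing in the question's "- 16.3 Sec" format; it relies on the same GNU date %N feature the snippet already assumes:
for i in $script_name
do
    start=$(date +%s%N)
    /tmp/scripts/$i
    end=$(date +%s%N)
    elapsed=$(awk -v ns=$((end - start)) 'BEGIN { printf "%.1f", ns / 1e9 }')   # ns -> seconds, one decimal
    echo "running the script - $i - $elapsed Sec"
done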

You can use the time command to tell you how long each one took:
TIMEFORMAT="%R"
for i in $script_name
do
    echo -en "running the script - $i\t - "
    exec 3>&1 4>&2
    var=$( { time /tmp/scripts/$i 1>&3 2>&4; } 2>&1)  # captures the time output only
    exec 3>&- 4>&-
    echo "$var Sec"
done
This works regardless of whether your scripts produce any output on stdout/stderr. See this link for capturing only the output of time: get values from 'time' command via bash script
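With the question's scripts the output would look roughly like this (the timings are illustrative, not real measurements):
running the script - verify_dns.sh	 - 16.342 Sec
running the script - verify_ip.sh	 - 2.507 Sec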

While it doesn't put the output on the same line, this might suit your needs:
for i in $script_name
do { set -x;
    time /tmp/scripts/"$i";
    } 2>&1 | grep -Ev '^(user|sys|$)'
done

Related

Kill background process when another process ends in Linux

I have a little question and I hope someone can help me because I can not find a proper solution.
I want to resolve a hostname; while waiting for the result, I'd like to print a notification if it takes more than 30 seconds with shell script commands, preferably built-ins or ubiquitous system commands.
I have a background process that sleeps and then prints a message; while sleeping, the process runs ping, but I can't figure out how to kill the background process after the ping finishes and the message keeps printing even if the ping ends prior to the 30 second time limit since this is part of a bigger script that takes some time to run.
Here's the code that I've been using:
((sleep 30; echo "Querying the DNS server takes more than 30 seconds.") & ping -q -c 1 localhost >/dev/null)
I would greatly appreciate any and all help. Other solutions are welcome too; I just want to tell the user that the DNS is too slow, since this will affect further execution. I have tried ping -w and -W, but they don't measure the resolution time. I have tried to trap the result from the ping. I have tried to kill all processes in the same process group (PGID), but that kills the console as well. I am not the best with scripts, which is maybe why this has taken me so long. Thank you in advance.
I hope this approach helps you. I think everything is pretty much portable, except maybe for bc. I can give you a bc-less version if you need it. Good luck!
#!/bin/bash
timeout=10  ## how long to wait before warning
printed=1   ## how many times you want the message displayed (for instance, you might want a message every X seconds)
starttime="$( date +%F ) $( date +%T.%3N )"
################### HERE GOES YOUR BACKGROUND PROCESS
sleep 30 &
#######################################################
processId=$!  ## and here we get the process id
#######################################################
while [ ! -z "$( ps -ef | grep $processId | grep -v grep )" ]
do
    endtime="$( date +%F ) $( date +%T.%3N )"
    timeelapsed=$( echo " $(date -d "$endtime" "+%s" ) - $(date -d "$starttime" "+%s" ) " | bc )
    if [[ ($timeelapsed -gt $timeout) && ($printed -ne 0) ]]
    then
        echo "This is taking more than $timeout seconds"
        printed=$(( printed - 1 ))
        starttime="$( date +%F ) $( date +%T.%3N )"
    fi
done
### Do something once everything finished
echo "The background process ended!!"
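For completeness, here is a minimal "bc-less" sketch of the same polling idea (my own adaptation, not the answer's code), with the question's ping command as the background job and bash's SECONDS counter in place of the date/bc arithmetic (whole-second resolution only):
#!/bin/bash
timeout=30
SECONDS=0                               # bash resets this and counts up from 0
ping -q -c 1 localhost >/dev/null &     # the real work runs in the background
pid=$!
warned=0
while kill -0 "$pid" 2>/dev/null; do    # loop only while that process is still alive
    if (( SECONDS > timeout && warned == 0 )); then
        echo "Querying the DNS server takes more than $timeout seconds."
        warned=1
    fi
    sleep 1
done
wait "$pid" 2>/dev/null
echo "The background process ended!!"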

How to add threading to the bash script?

#!/bin/bash
cat input.txt | while read ips
do
cmd="$(snmpwalk -v2c -c abc#123 $ips sysUpTimeInstance)"
echo "$ips ---> $cmd"
echo "$ips $cmd" >> out_uptime.txt
done
How can I add threading to this bash script? I have around 80,000 inputs and it takes a lot of time.
Simple method. Assuming the order of the output is unimportant, and that snmpwalk's output is of no interest if it should fail, put a && at the end of each of the commands to background, except the last command which should have a & at the end:
#!/bin/bash
while read ips
do
cmd="$(nice snmpwalk -v2c -c abc#123 $ips sysUpTimeInstance)" &&
echo "$ips ---> $cmd" &&
echo "$ips $cmd" >> out_uptime.txt &
done < input.txt
Less simple. If snmpwalk can fail, and that output is also needed, lose the && and surround the code with curly braces, {}, followed by &. To redirect the appended output to include standard error use &>>:
#!/bin/bash
while read ips
do {
cmd="$(nice snmpwalk -v2c -c abc#123 $ips sysUpTimeInstance)"
echo "$ips ---> $cmd"
echo "$ips $cmd" &>> out_uptime.txt
} &
done < input.txt
The braces can contain more complex if ... then ... else ... fi statements, all of which would be backgrounded.
For those who don't have a complex snmpwalk command to test, here's a similar loop, which prints one through five but sleeps for random durations between echo commands:
for f in {1..5}; do
RANDOM=$f &&
sleep $((RANDOM/6000)) &&
echo $f &
done 2> /dev/null | cat
The output will be the same every time (remove the RANDOM=$f && for varying output), and it takes three seconds to run:
2
4
1
3
5
Compare that to code without the &&s and &:
for f in {1..5}; do
RANDOM=$f
sleep $((RANDOM/6000))
echo $f
done 2> /dev/null | cat
This version takes seven seconds to run, with this output:
1
2
3
4
5
You can send tasks to the background with &. If you intend to wait for all of them to finish you can use the wait command:
process_to_background &
echo Processing ...
wait
echo Done
You can get the PID of a task started in the background if you want to wait for one (or a few) specific tasks:
important_process_to_background &
important_pid=$!
for i in {1..10}; do
less_important_process_to_background $i &
done
wait $important_pid
echo Important task finished
wait
echo All tasks finished
One note though: the background processes can mess up the output, as they will run asynchronously. You might want to use a named pipe to collect the output from them.
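If you do want to serialize the output, here is a minimal sketch of the named-pipe idea (my own illustration, not from the answer above); the fifo path is arbitrary:
#!/bin/bash
fifo=/tmp/uptime.fifo                  # arbitrary path for the pipe
mkfifo "$fifo"
cat "$fifo" >> out_uptime.txt &        # the single reader appends everything to the file
exec 3> "$fifo"                        # hold the pipe open for writing in the parent
while read -r ips
do
    { cmd="$(nice snmpwalk -v2c -c abc#123 $ips sysUpTimeInstance)"
      echo "$ips $cmd" >&3             # one whole line per host goes into the pipe
    } &
done < input.txt
exec 3>&-       # close the parent's copy; each background job still holds its own
wait            # wait for every snmpwalk job and for the reader to drain the pipe
rm -f "$fifo"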

Clear my script logs every 10 seconds

I have a script named run.sh.
This is my script code:
#!/usr/bin/env bash
install() {
sudo apt-get update
sudo apt-get upgrade
}
if [ "$1" = "install" ]; then
install
else
if [ ! -f ./tg/tgcli ]; then
echo "tg not found"
echo "Run $0 install"
exit 1
fi
#sudo service redis-server restart
#./tg/tgcli -s ./bot/bot.lua -l 1 -E $#
./tg/tgcli -s ./bot/bot.lua $#
fi
When I run this script it gives me output like this every second:
[09:54] 2014 Hello
[09:55] 2014 Hi
[09:57] 2014 How Are you ?
and many more like this (thousands per hour!),
and my server slows down within 5 hours.
I checked the print commands in bot.lua but there is no way to remove them.
Can you add some code to clear my script logs every 10 seconds?
Thanks a lot.
My script output isn't saved anywhere; it just shows up in the terminal.
I want something like the clear command on a Linux terminal that clears my script logs every 5 or 10 minutes.
After 5 days of the script running I can (sometimes can't) log in to my server; the server gets very slow and I have to wait 3 to 5 minutes to log in, and amazingly, after I log in the server gets fast again!
I also forgot to say that I use byobu/screen to run my scripts, and I think screen is slowing my server down.
I don't think that something as simple as this would cause your server to slow down, but you can add a check to your script to calculate the size or line count of your log file every time it runs.
This function assumes you are redirecting your output to a log file. Set the variables to whatever makes the most sense.
log_check() {
    line_count=$(wc -l "$log_file" | awk '{print $1}')
    size_check=$(du -ax "$log_file" | awk '{print $1}')
    max_file_size="1500"     # in du's default blocks
    max_file_length="1000"   # lines
    if [[ $line_count -ge $max_file_length || $size_check -ge $max_file_size ]]; then
        echo "" > "$log_file"
    fi
}
I would also recommend using [[ ]] over [ ] since this is a bash script; as long as you don't need it to be POSIX compliant and only plan on using it with bash, [[ ]] is generally the better choice.
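A hypothetical usage sketch of log_check (the log path and the "$@" redirection are my assumptions, not part of the question's script):
log_file=/var/log/tgcli.log            # hypothetical path, adjust to taste
log_check                              # trim the log if it has grown too large
./tg/tgcli -s ./bot/bot.lua "$@" >> "$log_file" 2>&1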
EDIT:
Since you are logging output to the terminal and not to a file, you can literally use the clear command in your script.
Try this out and see how the functionality works:
for i in {1..20}; do
echo $i
if (( i == 10 )); then
clear
fi
done
I'm assuming your code has a loop somewhere; if not, it will be a bit more complex to clear the terminal session. I'm not really sure which part of your code is actually printing anything to stdout; I'm guessing it's this piece here:
./tg/tgcli -s ./bot/bot.lua $#
You could try something like this, which backgrounds your initial process and then runs clear every 60 seconds to clear the terminal window. Is there any reason you're not writing the output to a log file? That alone could solve some of your issues as well.
#!/bin/bash
./tg/tgcli -s ./bot/bot.lua $# &
pid="$!"
check_pid() {
    ps -ef | grep "$pid" | grep -v 'grep' &>/dev/null
}
cnt=1
until ! check_pid; do
    if (( cnt == 6 )); then
        clear
        cnt=1
    fi
    sleep 10
    ((cnt++))
done
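If writing to a log file is acceptable, a rough sketch of that variant (the log path and "$@" are assumptions) would be to empty the file every 10 minutes instead of clearing the screen:
#!/bin/bash
./tg/tgcli -s ./bot/bot.lua "$@" >> /var/log/tgcli.log 2>&1 &
pid="$!"
while kill -0 "$pid" 2>/dev/null; do
    sleep 600                          # 10 minutes
    : > /var/log/tgcli.log             # truncate the log in place
done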

Bash Variable Maths Not Working

I have a simple bash script which forms part of an in-house web app that I've developed.
Its purpose is to automate deletion of thumbnails of images when the original image has been deleted by the user.
The script logs some basic status info to a file, /var/log/images.log:
#!/bin/bash
cd $thumbpath
filecount=0
# Purge extraneous thumbs
find . -type f | while read file
do
if [ ! -f "$imagepath/$file" ]
then
filecount=$[$filecount+1]
rm -f "$file"
fi
done
echo `date`: $filecount extraneous thumbs removed>>/var/log/images.log
Whilst the script correctly deletes thumbs, it doesn't correctly output the number of thumbs being purged; it always shows 0.
For example, having just manually created some orphaned thumbnails, and then running my script, the manually generated orphaned thumbs are deleted, but the log shows:
Thu Jun 9 23:30:12 BST 2011: 0 extraneous thumbs removed
What am I doing wrong that is stopping $filecount from showing a number other than zero when files are being deleted?
I've created the following bash script to test this, and this works perfectly, outputting 0 then 1:
#!/bin/bash
count=0
echo $count
count=$[$count+1]
echo $count
Edit:
Thanks for the answers, but why does the following work
$ x=3
$ x=$[$x+1]
$ echo $x
4
...and the second example also works, yet the same approach doesn't work in the first script?
Second Edit:
This works
count=0
echo Initial Value $count
for i in `seq 1 5`
do
count=$[$count+1]
echo $count
done
echo Final Value $count
Initial Value 0
1
2
3
4
5
Final Value 5
as does replacing count=$[$count+1] with count=$((count+1)), but not in my initial script.
You're using the wrong operator. Try using $(( ... )) instead, e.g.:
$ x=4
$ y=$((x + 1))
$ echo $y
5
$
EDIT
The other problem you're bumping into is down to the pipe. I've bumped into this one before (with ksh, but it wouldn't surprise me to find that other shells have the same problem). The pipe forks another bash process, so when you do the increment, filecount is getting incremented in the subshell that's been forked after the pipe. This value isn't passed back to the calling shell, as the subshell has its own independent environment (environment variables are inherited by called processes, but a called process cannot modify the environment of the calling process).
As an example, this demonstrates that filecount gets incremented okay:
#!/bin/bash
filecount=0
ls /bin | while read x
do
filecount=$((filecount + 1))
echo $filecount
done
echo $filecount
...so you should see filecount increase in the loop, but the final filecount will be zero, because that last echo belongs to the main shell rather than to the forked subshell (which consists purely of the while loop).
One way you can get the value back is like this...
#!/bin/bash
filecount=0
filecount=`ls /bin | while read x
do
filecount=$((filecount + 1))
echo $filecount
done | tail -1`
echo $filecount
This will only work if you don't care about any other stdout output in the loop as this throws it all away apart from the last line we output (the final value of filecount). This works because we're using stdout and stdin to feed the data back to the parent shell.
Depending on your viewpoint this is either a nasty hack or a nifty bit of shell jiggery-pokery. I'll leave you to decide what you think it is :-)
If you remove the pipeline into the while construct, you remove bash's need to create a subshell.
Change this:
filecount=0
find . -type f | while read file; do
if [ ! -f "$imagepath/$file" ]; then
filecount=$[$filecount+1]
rm -f "$file"
fi
done
echo $filecount
to this:
filecount=0
while read file; do
if [ ! -f "$imagepath/$file" ]; then
rm -f "$file" && (( filecount++ ))
fi
done < <(find . -type f)
echo $filecount
That is harder to read because the find command is hidden at the end. Another possibility is:
files=$( find . -type f )
while ...; do
:
done <<< "$files"
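Filled in with the loop from the question, that alternative would look roughly like this:
files=$( find . -type f )
filecount=0
while read file
do
    if [ ! -f "$imagepath/$file" ]; then
        rm -f "$file" && (( filecount++ ))
    fi
done <<< "$files"
echo $filecount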
Chris J is quite right that you are using the wrong operator, and POSIX subshell variable scoping means you can't get a final count that way.
As a side note, when doing math operations you could also consider using the let shell builtin, like this:
$ filecount=4
$ let filecount=$filecount+1
$ echo $filecount
5
Also, if you want scoping to just work the way you expected in spite of the pipeline, you could use zsh instead of bash. In this case it should be a drop-in replacement and work as expected.
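If switching shells isn't an option, newer bash releases (4.2 and later) offer a similar escape hatch with the lastpipe option; a minimal sketch, assuming a recent enough bash:
#!/bin/bash
shopt -s lastpipe        # needs bash 4.2+; only takes effect when job control is off, as in a script
filecount=0
find . -type f | while read -r file
do
    filecount=$((filecount + 1))
done
echo $filecount          # now reflects the loop's increments, since the loop ran in this shell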

How can I tell if a file is older than 30 minutes from /bin/sh?

How do I write a script to determine if a file is older than 30 minutes in /bin/sh?
Unfortunately the stat command does not exist on the system. It is an old Unix system, http://en.wikipedia.org/wiki/Interactive_Unix
Perl is unfortunately not installed on the system, and the customer does not want to install it, or anything else.
Here's one way using find.
if test "`find file -mmin +30`"
The command substitution must be quoted in case the file name in question contains spaces or special characters.
The following gives you the file age in seconds:
echo $(( `date +%s` - `stat -L --format %Y $filename` ))
which means this should give a true/false value (1/0) for files older than 30 minutes:
echo $(( (`date +%s` - `stat -L --format %Y $filename`) > (30*60) ))
30*60 -- 60 seconds in a minute, don't precalculate, let the CPU do the work for you!
If you're writing a sh script, the most useful way is to use test with the already mentioned stat trick:
if [ `stat --format=%Y $file` -le $(( `date +%s` - 1800 )) ]; then
do stuff with your 30-minutes-old $file
fi
Note that [ is a symbolic link (or otherwise equivalent) to test; see man test, but keep in mind that test and [ are also bash builtins and thus can have slightly different behavior. (Also note the [[ bash compound command).
Ok, no stat and a crippled find. Here's your alternatives:
Compile the GNU coreutils to get a decent find (and a lot of other handy commands). You might already have it as gfind.
Maybe you can use date to get the file modification time if -r works?
[ $(( `date +%s` - `date -r $file +%s` )) -gt $((30*60)) ]
Alternatively, use the -nt comparison to choose which file is newer; the trouble is making a file with a modification time 30 minutes in the past. touch can usually do that, but all bets are off as to what's available.
touch -d '30 minutes ago' 30_minutes_ago
if [ your_file -ot 30_minutes_ago ]; then
...do stuff...
fi
And finally, see if Perl is available rather than struggling with who knows what versions of shell utilities.
use File::stat;
print "Yes" if (time - stat("yourfile")->mtime) > 60*30;
For those like myself who don't like backticks, based on the answer by @slebetman:
echo $(( $(date +%s) - $(stat -L --format %Y $filename) > (30*60) ))
You can do this by comparing to a reference file that you've created with a timestamp of thirty minutes ago.
First create your comparison file by entering
touch -t YYYYMMDDhhmm.ss /tmp/thirty_minutes_ago
replacing the timestamp with the value from thirty minutes ago. You could automate this step with a trivial one-liner in Perl.
Then use find's -newer operator, negated, to match files that are older:
find . \! -newer /tmp/thirty_minutes_ago -print
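A small sketch tying those two steps together, assuming a GNU-style touch that understands -d (as used in another answer above) and a hypothetical /path/to/your_file:
touch -d '30 minutes ago' /tmp/thirty_minutes_ago
if [ -n "$(find /path/to/your_file ! -newer /tmp/thirty_minutes_ago -print)" ]
then
    echo "/path/to/your_file is at least 30 minutes old"
fi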
Here's my variation on find:
if [ `find cache/nodes.csv -mmin +10 | egrep '.*'` ]
find always returns status code 0 unless it fails; however, egrep returns 1 if no match is found. So this combination passes if the file is older than 10 minutes.
Try it:
touch /tmp/foo; sleep 61;
find /tmp/foo -mmin +1 | egrep '.*'; echo $?
find /tmp/foo -mmin +10 | egrep '.*'; echo $?
It should print the file's path followed by 0, and then just 1.
My function using this:
## Usage: if isFileOlderThanMinutes "$NODES_FILE_RAW" $NODES_INFO_EXPIRY; then ...
function isFileOlderThanMinutes {
if [ "" == "$1" ] ; then serr "isFileOlderThanMinutes() usage: isFileOlderThanMinutes <file> <minutes>"; exit; fi
if [ "" == "$2" ] ; then serr "isFileOlderThanMinutes() usage: isFileOlderThanMinutes <file> <minutes>"; exit; fi
## Does not exist -> "older"
if [ ! -f "$1" ] ; then return 0; fi
## The file older than $2 is found...
find "$1" -mmin +$2 | egrep '.*' > /dev/null 2>&1;
if [ $? == 0 ] ; then return 0; fi ## So it is older.
return 1; ## Else it is not older.
}
Difference in seconds between current time and last modification time of myfile.txt:
echo $(($(date +%s)-$(stat -c "%Y" myfile.txt)))
You can also use %X or %Z with stat -c to get the difference since the last access or the last status change; check for a 0 return!
%X time of last access, seconds since Epoch
%Y time of last data modification, seconds since Epoch
%Z time of last status change, seconds since Epoch
The test:
if [ $(($(date +%s)-$(stat -c "%Y" myfile.txt))) -lt 600 ] ; then echo younger than 600 sec ; else echo older than 600 sec ; fi
What do you mean by older than 30 minutes: modified more than 30 minutes ago, or created more than 30 minutes ago? Hopefully it's the former, as the answers so far are correct for that interpretation. In the latter case, you have problems since unix file systems do not track the creation time of a file. (The ctime file attribute records when the inode contents last changed, ie, something like chmod or chown happened).
If you really need to know if file was created more than 30 minutes ago, you'll either have to scan the relevant part of the file system repeatedly with something like find or use something platform-dependent like linux's inotify.
#!/usr/bin/ksh
## This script creates a new timer file every minute and renames all the
## previously created timer files, then executes whatever script you need,
## which can now compare against the timer files with a find. The script is
## designed to always be running on the server. The first time it is executed
## it removes the timer files, and it takes an hour to rebuild them (assuming
## you want 60 minutes of timer files).
set -x
# If the server is rebooted for any reason, or this script stops, we must
# rebuild the timer files from scratch.
find /yourpath/timer -type f -exec rm {} \;
while [ 1 ]
do
    COUNTER=60
    COUNTER2=60
    cd /yourpath/timer
    while [ $COUNTER -gt 1 ]
    do
        COUNTER2=`expr $COUNTER - 1`
        echo COUNTER=$COUNTER
        echo COUNTER2=$COUNTER2
        if [ -f timer-minutes-$COUNTER2 ]
        then
            mv timer-minutes-$COUNTER2 timer-minutes-$COUNTER
            COUNTER=`expr $COUNTER - 1`
        else
            touch timer-minutes-$COUNTER2
        fi
    done
    touch timer-minutes-1
    sleep 60
    # This checks whether the files have been fully updated after a server restart.
    COUNT=`find . ! -newer timer-minutes-30 -type f | wc -l | awk '{print $1}'`
    if [ $COUNT -eq 1 ]
    then
        :   # execute whatever scripts at this point
    fi
done
You can use the find command.
For example, to search for files in current dir that are older than 30 min:
find . -type f -mmin +30
You can read more about the find command in its man page (man find).
if [[ "$(date --rfc-3339=ns -r /tmp/targetFile)" < "$(date --rfc-3339=ns --date '90 minutes ago')" ]] ; then echo "older"; fi
