Display execution time of shell command with better accuracy - linux

The following will display the time it took for a given command to execute. How would I do the same, but with better precision (e.g. 5.23 seconds)?
[root@localhost ~]# start=`date +%s`; sleep 5 && echo execution time is $(expr `date +%s` - $start) seconds
execution time is 5 seconds
[root@localhost ~]#

You could try using the time command.
time sleep 5
In addition to elapsed wall clock time, it will tell you how much CPU time the process consumed, and how much of that CPU time was spent in the application itself and how much in operating system calls.
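For illustration, with the bash time builtin the output looks something like this (the numbers here are only illustrative, and /usr/bin/time formats its output differently):

$ time sleep 5

real    0m5.004s
user    0m0.001s
sys     0m0.002s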

Use the time command:
time COMMAND

You can also use the SECONDS variable:
In the Bash Variables section of the reference manual you'll read:
SECONDS This variable expands to the number of seconds since the shell was started. Assignment to this variable resets the count to the value assigned, and the expanded value becomes the value assigned plus the number of seconds since the assignment.
Hence, the following:
SECONDS=0; sleep 5; echo "I slept for $SECONDS seconds"
should output 5. It's only good for measuring times with a precision of one second, but it's still much better than the date arithmetic you showed.
If you need more precision, you can use the time command, but it's a bit tricky to get the output of that command into a variable. You'll find all the details in BashFAQ/032: How can I redirect the output of time to a variable or file?
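If you do need the timing in a variable, the canonical trick from that FAQ looks roughly like this (a sketch, with sleep 5 standing in for your real command; the timed command keeps its normal stdout/stderr while only the timing output is captured):

exec 3>&1 4>&2
elapsed=$( { time sleep 5 1>&3 2>&4; } 2>&1 )
exec 3>&- 4>&-
echo "$elapsed"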

Related

Running Bash Script at random time

I have an iperf.sh shell script on multiple sub-servers that runs on the cron schedule " 1,14,28,42,50 * * * * " and pings the iperf server to check bandwidth. Is there any way to randomize this cron, or to set up a shell script that sleeps and runs at a random time?
[Note: the issue I am facing with this classic cron setup is that all sub-servers run the iperf.sh script at the same time, so my main iperf server gets high CPU utilization, which results in improper ping data.]
Thanks In Advance.
You can add a randomized wait period at the start of your script (or even in the crontab itself, as suggested in the comments).
I recommend GNU shuf, which is more portable than $RANDOM (not all shells support $RANDOM; dash, for example, does not).
sleep $(shuf -i5-20 -n1)
# Rest of script
You can experiment with the range of random wait periods (5 to 20 seconds in this example).
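For example, directly in the crontab (the script path is illustrative, and this assumes shuf is installed on the sub-servers):

# delay each run by a random 0-300 seconds so the sub-servers don't all hit the iperf server at once
1,14,28,42,50 * * * * sleep $(shuf -i 0-300 -n 1) && /path/to/iperf.sh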

/usr/bin/time, interpret the output

I wanted to record the CPU time of running some programs, so I put the /usr/bin/time command in front of the command, like the following:
/usr/bin/time command_name_and_args
The result I got is as follows:
652.25user 5.29system 11:53.85elapsed 92%CPU (0avgtext+0avgdata 5109232maxresident)k
3800352inputs+1620608outputs (2major+319449minor)pagefaults 0swaps
Would it be correct to say that the CPU time is 652.25 + 5.29 = 657.54 seconds?
And does 11:53.85elapsed mean 11 minutes 53.85 seconds of wall clock time?
Thanks for help.
Exactly: 652.25 + 5.29 = 657.54 seconds of CPU time, and 11:53.85elapsed means 11 minutes 53.85 seconds (713.85 seconds) of wall clock time. That is also where the 92%CPU figure comes from (657.54 / 713.85 ≈ 0.92). CPU time can exceed wall clock time if the program runs more than one thread.

Get the load, cpu usage and time of executing a bash script

I have a bash script that I plan to run every 5 or 15 minutes using crontab, depending on the load it puts on the server.
I can find the running time of the script, but I am not sure how to find its load, memory usage and CPU usage.
Can someone help me?
Also, any suggestions for a rough benchmark that would help me decide whether the script puts too much load on the server and should be run every 15 minutes rather than every 5?
Thanks in Advance!
You can use "top -b", top gives the CPU usage, memory usage etc,
Insert these lines in your script, this will process in background and will terminate the process as soon as your testing overs.
ssh server_name "nohup top -b -d 0.5 >> file_name &"
The top process will run in the background because of the &; -d 0.5 gives you the CPU status every 0.5 seconds, and the output is redirected to file_name for later analysis.
To kill the process after your test, insert the following in your script:
ssh server_name "kill \`ps -elf | grep 'top -b' | grep -v grep | sed 's/ */ /g' |cut -d ' ' -f4\`"
Your main testing commands should go between the top command and the command that kills top.
I presumed you are running the script from the client side; if not, ignore the ssh server_name part.
If you are running it from the client side, ssh will prompt for a password each time; to avoid this, set up passwordless (key-based) SSH authentication. That will solve the issue.
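A minimal sketch of that setup (user and host names are illustrative):

ssh-keygen -t rsa               # accept the defaults; leave the passphrase empty
ssh-copy-id user@server_name    # copies your public key to the server
ssh user@server_name true       # should now connect without asking for a password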
You can check the following utilities:
pidstat for CPU load (see its man page)
pmap for memory load (see its man page)
Note that you may also need to take measurements for the child processes of your executable in order to collect summarized figures; see the sketch below.
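A rough example (the script name is hypothetical, and pidstat comes from the sysstat package):

pid=$(pgrep -f yourscript.sh | head -n 1)
pidstat -u -r -p "$pid" 5     # CPU (-u) and memory (-r) statistics every 5 seconds
pmap -x "$pid" | tail -n 1    # total mapped and resident memory of the process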
For memory, use free -m. Your actual available memory is the second number on the -/+ buffers/cache line (in megabytes, because of -m) (source).
For CPU, it's a bit more complicated. Start by looking at cat /proc/stat | grep 'cpu ' (note the space). You'll see something like this:
cpu 2255 34 2290 22625563 6290 127 456
The columns are from left to right, "user, nice, system, idle". CPU usage is usually calculated as (user+nice+system) / (user+nice+system+idle). However, these numbers show the number of "time units" that the CPU has spent doing that thing since boot, and thus are always increasing. If you were to do the aforementioned calculation, you'd get the CPU usage average since boot. To get a point-in-time usage, you have to take 2 samples, find their difference, and calculate the usage from that. To be clear, that will be the average CPU usage between your samples. (source)
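A rough bash sketch of that two-sample approach (it ignores the iowait and later columns, just like the formula above):

# read user, nice, system, idle from the aggregate "cpu" line, twice, 5 seconds apart
read -r _ u1 n1 s1 i1 _ < /proc/stat
sleep 5
read -r _ u2 n2 s2 i2 _ < /proc/stat
busy=$(( (u2 - u1) + (n2 - n1) + (s2 - s1) ))
idle=$(( i2 - i1 ))
echo "CPU usage over the last 5 seconds: $(( 100 * busy / (busy + idle) ))%"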

Find Mysqldump Execution Time?

I want to find how long it takes to run my mysqldump and compare that with my I/O rate at the end of the mysqldump command.
I'm looking for something like:
bash:> time ./dumpscript
and at the end it would calculate my I/O rate from start to finish, giving me something like:
Dump size    Time      I/O per sec
30 GB        30 min    5 MB/sec
Thx all!
You can use the time command in bash to see how long your command takes. This will give the execution time in seconds:
{ time -p ./dumpscript; } 2>&1 | tail -3 | awk 'NR==1{print $2}'
Presumably you know the location of the dump file, so you can find the size of that with stat. Since you now know the size of the file, and the time it took to create it, you can calculate the I/O rate with some basic arithmetic.
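A rough sketch of that arithmetic (the dump file path is illustrative, stat -c %s assumes GNU stat, and the dump is assumed to take at least one second):

SECONDS=0
./dumpscript
elapsed=$SECONDS
size_mb=$(( $(stat -c %s /path/to/dump.sql) / 1024 / 1024 ))
echo "Dump size: ${size_mb} MB, time: ${elapsed}s, rate: $(( size_mb / elapsed )) MB/sec"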

How to implement cpu-time timeout for script/program in linux?

The goal is to measure not the elapsed (wall clock) time of a program/script but its CPU time, and to kill it when that limit is breached.
What's the best way to do this?
One of the most obvious solutions is to poll the process tree at some time interval and check whether the program/script has breached its limits; that approach is implemented in a Perl script (pshved/timeout). I'm looking for other approaches.
You can use ulimit(1) or setrlimit(2) to limit the CPU time. The process will be killed automatically once it uses more CPU time than allowed. It is also possible to specify a soft limit (SIGXCPU) that the process can catch or ignore.
Simple example:
#! /bin/bash
(
    # limit the CPU time of everything in this subshell to 5 seconds
    ulimit -t 5
    python -c '
a, b = 0, 1
while True:
    a += b
    b += a
'
    # a non-zero status here means python was killed for exceeding the CPU limit
    echo $?
)
echo "..."
echo "..."
