Guys, I am confused about the results below:
1). time xxxxx
real 0m28.942s
user 0m28.702s
sys 0m0.328s
2). /usr/bin/time -p xxxxx
real 28.48
user 0.00
sys 0.13
So I have some questions (user: 0m28.702s != 0.00, sys: 0m0.328s != 0.13):
What is the difference between time and /usr/bin/time?
Does the result differ across CPU platforms, e.g. single-core vs. multi-core?
Any suggestions?
It's quite easy to find out the answer to your first question using type:
$ type time
time is a shell keyword
$ type /usr/bin/time
/usr/bin/time is /usr/bin/time
So the first command uses a bash built-in, while the latter defers to an external program. However, not knowing what system you are using, I have no idea where that program comes from. On Gentoo Linux, there's no /usr/bin/time by default, and the only implementation available is GNU time that has different output.
That said, I have tried a command similar to yours (assuming it works on a 1 GB file), and got the following results:
$ time sed -e 's/0//g' big-file > big-file2
real 0m40.600s
user 0m31.295s
sys 0m4.174s
$ /usr/bin/time sed -e 's/0//g' big-file > big-file2
35.06user 3.31system 0:40.58elapsed 94%CPU (0avgtext+0avgdata 3488maxresident)k
8inputs+2179176outputs (0major+276minor)pagefaults 0swaps
As you can see, the numbers are similar.
Then, given your results (a userspace time of 0 is quite implausible), I'd say that your /usr/bin/time is simply broken. It might be worth reporting a bug to its author.
Related
How can I use a Linux command to get the wall time, in seconds, spent executing a program? In the example below, I expect to get "0.005".
$ time ls >/dev/null
real 0m0.005s
user 0m0.001s
sys 0m0.003s
Depending on your path:
/usr/bin/time -f "%e"
The normal time is bash's internal time keyword (if you happen to use bash), which
type time
will show, while you need the external one that
which time
will find.
So in context of your command:
/usr/bin/time -f "%e" ls > /dev/null
But to store it in a variable, you can't use
a=$(/usr/bin/time -f "%e" ls > /dev/null)
because the output of time is written to the error stream so that it does not interfere with the program's output (ls in this example). See the manpage of time for further details.
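If you do want to capture it in a variable, a minimal sketch is to send the error stream into the command substitution while the program's own output goes elsewhere (the redirection order matters):
# 2>&1 first, so time's stderr goes to the capture; then ls's stdout goes to /dev/null
a=$(/usr/bin/time -f "%e" ls 2>&1 >/dev/null)
echo "$a"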
I tried to measure execution time and format it with this command:
time -f "%e" ./1 1000 1
-f: command not found
real 0m0.066s
user 0m0.044s
sys 0m0.023s
But such command works:
/usr/bin/time -f "%e" ./1 1000 1
31245 212 443
0.00
I tried to determine where the other time is located, but everything points to /usr/bin/time:
which time
/usr/bin/time
or
whereis time
time: /usr/bin/time /usr/bin/X11/time /usr/include/time.h /usr/share/man/man7/time.7.gz /usr/share/man/man2/time.2.gz /usr/share/man/man1/time.1.gz
or
type -a time
time is a shell keyword
time is /usr/bin/time
How can I find out where the other time is located?
Users of the bash shell need to use an explicit path in order to run
the external time command and not the shell builtin variant. On systems
where time is installed in /usr/bin, the first example would become
/usr/bin/time wc /etc/hosts
OR
Note: some shells (e.g., bash(1)) have a built-in time command that
provides less functionality than the command described here. To
access the real command, you may need to specify its pathname
(something like /usr/bin/time).
http://man7.org/linux/man-pages/man1/time.1.html
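In bash you can also avoid typing the full path: a leading backslash keeps the word from being parsed as the shell's time keyword, and command time should work as well, since the keyword only applies when time is the first word of the command. A quick sketch:
\time wc /etc/hosts          # runs the external time found in $PATH
command time wc /etc/hosts   # likewise bypasses the keyword form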
I want to get the execution time of a program in my terminal. I know that I should use this command:
time chmod +x ~/example
but the output is this:
real 0m0.088s
user 0m0.057s
sys 0m0.030s
But I want to access each one separately, for example just real. How can I get that?
You can use -f to format the time command:
$ /usr/bin/time -f "\t%E Elapsed Real Time" touch a
0:00.00 Elapsed Real Time
The Geek Stuff has extensive documentation on this topic:
12 UNIX / Linux Time Command Output Format Option Examples.
Note that calling it with time alone did not work for me; I had to use the full path.
In bash, you can influence the output of time with the TIMEFORMAT variable, by setting
TIMEFORMAT=%R # real
TIMEFORMAT=%U # user
TIMEFORMAT=%S # sys
before calling it. However, your problems probably don't end there -- capturing the output of time is not trivial with bash because it's not a subprocess but a shell builtin. There's an entry in the bash FAQ on the topic. Going from there, I think you ultimately want
TIMEFORMAT=%R myvar=$( { time chmod +x ~/example > /dev/null 2>&1; } 2>&1 )
Then $myvar will be the real running time of the command.
You can do:
(time chmod +x ~/example) |& awk '$1=="real"{print $2}'
0m0.003s
I'm working on a simulation model, where I want to determine when the storage IOPS capacity becomes a bottleneck (e.g. an HDD has ~150 IOPS, while an SSD can have 150,000). So I'm trying to come up with a way to benchmark the IOPS of a command (git) for some of its different operations (push, pull, merge, clone).
So far, I have found tools like iostat, however, I am not sure how to limit the report to what a single command does.
The best idea I can come up with is to determine my HDD IOPS capacity, use time on the actual command, see how long it lasts, multiply that by IOPS and those are my IOPS:
HDD ->150 IOPS
time df -h
real 0m0.032s
150 * .032 = 4.8 IOPS
But, this is of course very stupid, because the duration of the execution may have been related to CPU usage rather than HDD usage, so unless usage of HDD was 100% for that time, it makes no sense to measure things like that.
So, how can I measure the IOPS for a command?
There are multiple time(1) commands on a typical Linux system; the default is a bash(1) shell keyword which is somewhat basic. There is also /usr/bin/time, which you can run either by calling it exactly like that, or by telling bash(1) to skip aliases and the keyword by prefixing it with a backslash: \time. Debian has it in the "time" package, which is installed by default; Ubuntu is likely identical, and other distributions will be quite similar.
Invoking it in a similar fashion to the shell builtin is already more verbose and informative, albeit perhaps more opaque unless you're already familiar with what the numbers really mean:
$ \time df
[output elided]
0.00user 0.00system 0:00.01elapsed 66%CPU (0avgtext+0avgdata 864maxresident)k
0inputs+0outputs (0major+261minor)pagefaults 0swaps
However, I'd like to draw your attention to the man page which lists the -f option to customise the output format, and in particular the %w format which counts the number of times the process gave up its CPU timeslice for I/O:
$ \time -f 'ios=%w' du Maildir >/dev/null
ios=184
$ \time -f 'ios=%w' du Maildir >/dev/null
ios=1
Note that the first run stopped for I/O 184 times, but the second run stopped just once. The first figure is credible, as there are 124 directories in my ~/Maildir: the reading of the directory and the inode gives roughly two I/O operations per directory, less a bit because some inodes were likely next to each other and read in one operation, plus some extra again for mapping in the du(1) binary, shared libraries, and so on.
The second figure is of course lower due to Linux's disk cache. So the final piece is to flush the cache. sync(1) is a familiar command which flushes dirty writes to disk, but doesn't flush the read cache. You can flush that one by writing 3 to /proc/sys/vm/drop_caches. (Other values are also occasionally useful, but you want 3 here.) As a non-root user, the simplest way to do this is:
echo 3 | sudo tee /proc/sys/vm/drop_caches
Combining that with /usr/bin/time should allow you to build the scripts you need to benchmark the commands you're interested in.
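For instance, a minimal wrapper (just a sketch; the script name is made up and GNU time at /usr/bin/time is assumed) could look like this:
#!/bin/sh
# iobench.sh (hypothetical name): flush caches, then run one command under
# /usr/bin/time and report its elapsed time and I/O waits.
sync                                                    # push dirty writes out first
echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null   # drop the read caches
/usr/bin/time -f 'elapsed=%e ios=%w' "$@"               # run the command under test
You would then run something like ./iobench.sh du Maildir, or whichever git operation you want to measure.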
As a minor aside, tee(1) is used because this won't work:
sudo echo 3 >/proc/sys/vm/drop_caches
The reason? Although the echo(1) runs as root, the redirection is as your normal user account, which doesn't have write permissions to drop_caches. tee(1) effectively does the redirection as root.
The iotop command collects I/O usage information about processes on Linux. By default, it is an interactive command, but you can run it in batch mode with -b / --batch. Also, you can pass a list of processes with -p / --pid. Thus, you can monitor the activity of a git command with:
$ sudo iotop -p $(pidof git) -b
You can change the delay with -d / --delay.
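For example, to sample every 5 seconds instead of the default delay (a sketch; it assumes a git process is already running):
sudo iotop -b -d 5 -p $(pidof git)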
You can use pidstat:
pidstat -d 2
More specifically pidstat -d 2 | grep COMMAND or pidstat -C COMMANDNAME -d 2
The pidstat command is used for monitoring individual tasks currently being managed by the Linux kernel. It writes to standard output activities for every task selected with option -p or for every task managed by the Linux kernel if option -p ALL has been used. Not selecting any tasks is equivalent to specifying -p ALL but only active tasks (tasks with non-zero statistics values) will appear in the report.
The pidstat command can also be used for monitoring the child processes of selected tasks.
-C comm    Display only tasks whose command name includes the string comm. This string can be a regular expression.
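For example, to record the disk I/O of a clone while it runs (a sketch; the repository URL and log file name are made up):
pidstat -C git -d 2 > git_io.log &       # sample matching processes every 2 s
pidstat_pid=$!
git clone https://example.com/repo.git   # hypothetical repository
kill "$pidstat_pid"                      # stop sampling once the clone is done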
Is there any ready-to-use solution to log the memory consumption from the start of the system? I'd like to log the data to a simple text file or some database so I can analyze it later.
I'm working on a Linux 2.4-based embedded system. I need to debug a problem related to memory consumption. My application starts automatically on every system start. I need a way to get the data with timestamps at regular intervals (as often as possible), so I can track down the problem.
The symptoms of my problem: when the system starts, it launches my main application and a GUI to visualize the main parameters of the system. The GUI is based on GTK+ (X server). If I disable the GUI and X server, my application works OK. If I enable the GUI and X server, it does not work when I have 256 MiB or 512 MiB of physical memory installed on the motherboard. If I have 1 GiB of memory installed, then everything is OK.
The following script prints time stamps and a header.
#!/bin/bash -e
# Print a header line built from the column names reported by free.
echo " date time $(free -m | grep total | sed -E 's/^ (.*)/\1/g')"
while true; do
    # Append one timestamped sample of the Mem: line every second.
    echo "$(date '+%Y-%m-%d %H:%M:%S') $(free -m | grep Mem: | sed 's/Mem://g')"
    sleep 1
done
The output looks like this (tested on Ubuntu 15.04, 64-bit).
date time total used free shared buffers cached
2015-08-01 13:57:27 24002 13283 10718 522 693 2308
2015-08-01 13:57:28 24002 13321 10680 522 693 2308
2015-08-01 13:57:29 24002 13355 10646 522 693 2308
2015-08-01 13:57:30 24002 13353 10648 522 693 2308
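To keep the samples for later analysis, you can simply redirect the script's output to a file (memlog.sh is a made-up name for the script above):
./memlog.sh > memory.log &
tail -f memory.log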
A small script like the following will also do the job:
rm -f memory.log
while true; do free >> memory.log; sleep 1; done
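If you also want the timestamps the question asks for, a variant of the same loop (just a sketch) is:
rm -f memory.log
while true; do
    { date '+%Y-%m-%d %H:%M:%S'; free -m; } >> memory.log
    sleep 1
done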
I am a big fan of logging everything, and I find it useful to know which processes are using memory and how much each process is using (as well as summary statistics). The following command records a top printout ordered by memory consumption every 0.5 seconds:
top -bd0.5 -o +%MEM > memory.log
Just note that the log file will grow a lot faster than if you only store the total memory utilization statistics so be sure you don't run out of disk space.
There's a program called
sar
on *nix systems. You could try to use that to monitor memory usage. It takes measurements at regular intervals. Do a
man sar
for more details. I think the option is -r for taking memory measurements, -i to specify the interval you'd like.
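For example (a sketch, assuming sar from the sysstat package is installed; the log file name is made up, and the sampling interval is given here as a plain argument):
sar -r 5 12                # report memory usage every 5 seconds, 12 times
sar -r 5 > memory_sar.log  # or sample indefinitely and redirect to a file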
I think adding a crontab entry will be enough
*/5 * * * * free -m >> some_output_file
There are other tools like SeaLion, New Relic, Server Density, etc. which do almost the same thing but are much easier to install and configure. My favorite is SeaLion, as it is free and gives an awesome timeline view of the raw output of common Linux commands.
You could put something like
vmstat X >> mylogfile
into a startup script. Since your application is already in startup you could just add this line to the end of the initialization script your application is already using.
(where X is # of seconds between log messages)
To periodically log the memory usage efficiently, I combined another answer here with a method to only retain the top-K memory-using processes.
top -bd 1.5 -o +%MEM | grep "load average" -A 9 > memory_usage.log
This command will record, every 1.5s, the top header information and the 3 highest memory-consuming processes (there's a 6-line offset for top's header information). This saves lots of disk space over recording top's information for every process.
I know that I am late to this game, but I just came up with this answer, as I needed to do this and really didn't want the extra fields that vmstat, free, etc. output without extra filtering. So here is the answer that I came up with:
top -bd 0.1 | grep 'KiB Mem' | cut -d' ' -f10 > memory.txt
OR:
top -bd 0.1 | grep 'KiB Mem' | cut -d' ' -f10 | tee memory.txt
The standard output from top when grepping for 'KiB Mem' is:
KiB Mem : 16047368 total, 8708172 free, 6015720 used, 1323476 buff/cache
By running this through cut, we filter down to just the number preceding 'used'.
You can change the 0.1 to another number to use a different sample rate. In my case I wanted to use top because it can report memory stats faster than once per second; as you can see here, I wanted to capture a stat every 1/10th of a second.
NOTES:
It turns out that piping through cut causes a massive delay in getting anything out to the file (most likely because cut buffers its output when it is not writing to a terminal). As we later found out, it is much faster to leave out the cut command during data acquisition and then run cut on the output file later.
Also, we had no need for timestamps in our tests.
This thus looks as follows:
Begin Logging:
top -bd 0.1 | grep 'KiB Mem' | tee memory_raw.txt
Exit Logging:
ctrl-c (to stop logging)
Filter:
Two levels of cut (filtering): first by comma, then by space. This works around the column alignment of top and gives much cleaner output:
cut -d',' -f3 memory_raw.txt | tee memory_used_withlabel.txt
cut -d' ' -f3 memory_used_withlabel.txt | tee memory_used.txt