Linux time command - real vs user vs system

I am running a jar file on Linux with the time command. Below is the output after execution.
15454.58s real 123464.61s user 6455.55s system
Below is the command executed.
time java -jar -Xmx7168m Batch.jar
But the actual time taken to execute the process is 9270 seconds.
Why are the actual time (wall-clock time) and the real time different?
Can anyone explain this? It's running on a multi-core machine (32 cores).

Maybe this explains the deviation you are experiencing. From the time Wikipedia article:

Because a program may fork children whose CPU times (both user and sys) are added to the values reported by the time command, but on a multicore system these tasks are run in parallel, the total CPU time may be greater than the real time.
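For instance, a process that keeps several cores busy in parallel accumulates CPU time faster than wall-clock time passes. A minimal demonstration (the output figures below are illustrative, from a hypothetical 4-core run):

$ time bash -c 'for i in 1 2 3 4; do timeout 2 yes > /dev/null & done; wait'
real    0m2.01s
user    0m7.92s
sys     0m0.08s

Four busy loops run for two seconds of wall-clock time, yet about eight seconds of user CPU time accumulate.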
Apart from that, your understanding of real time conforms with the definition given in time(7):
Real time is defined as time measured from some fixed point, either from a standard point in the past (see the description of the Epoch and calendar time below), or from some point (e.g., the start) in the life of a process (elapsed time).
See also bash(1) (although its documentation on the time command is not overly comprehensive).
If seconds are exact enough for you, this little wrapper can help:
#!/bin/bash
starttime=$( date +"%s" )
# run your program here
endtime=$( date +"%s" )
duration=$(( endtime-starttime ))
echo "Execution took ${duration} s."
Note: If the system time is changed while your program is running, the results will be incorrect.

From what I remember, user time is the time the process spends executing in user space, system time is the time it spends running in kernel space (in syscalls, for example), and real time is also called wall-clock time (the actual time you could measure with a stopwatch). I don't know exactly how this is calculated on an SMP system.

Related

How accurate is the Linux bash time command?

I want to timestamp some events in a logfile from a bash script. I need this timestamp to be as accurate as possible. The standard way of doing this from bash seems to be the date command, which can produce a nanosecond timestamp with the +%s%N format.
However, when doing this from C I remembered that multiple timekeeping functions had multiple clock sources, and not all of them were equally accurate or had the same guarantees (e.g. being monotonic). How do I know what clock source time uses?
The man 1 time page is rather clear:

These statistics consist of (i) the elapsed real time between invocation and termination, (ii) the user CPU time (the sum of the tms_utime and tms_cutime values in a struct tms as returned by times(2)), and (iii) the system CPU time (the sum of the tms_stime and tms_cstime values in a struct tms as returned by times(2)).
So we can go to man 3p times, which just states: "The accuracy of the times reported is intentionally left unspecified to allow implementations flexibility in design, from uniprocessor to multi-processor networks." So we can go to man 2 times and learn that it is all measured in clock_t, and that maybe we should use clock_gettime instead.
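From the shell, the nanosecond timestamps mentioned in the question come from date +%s%N (GNU date, which on current systems ultimately reads the realtime clock via clock_gettime). A minimal duration-measurement sketch, assuming GNU coreutils:

start=$(date +%s%N)                            # nanoseconds since the Epoch
sleep 0.2                                      # the code being measured
end=$(date +%s%N)
echo "elapsed: $(( (end - start) / 1000000 )) ms"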
How do I know what clock source time uses?
As usual on a GNU system, all programs are open source, so you can go and download the sources of the kernel and of your shell and inspect them to see how it works. I see in bash's time_command() that there are several methods available, and nowadays bash uses getrusage() as a replacement for times().
How accurate is the Linux bash time command?
Both getrusage() and times() are system calls themselves, so the values are returned straight from the kernel. My guess would be that they are measured with the accuracy the kernel can give us, so in jiffies (1/HZ).
The resolution of the measurement will be equal to one jiffy, so with 300 HZ that is 3.333 ms if my math is right. The accuracy will depend on your hardware, and maybe also on workload; my overestimated guess would be that the values will be right up to one or two jiffies, so up to ~7 milliseconds.
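Note that the unit in which times(2) reports clock_t values to userspace is USER_HZ (typically 100, i.e. 10 ms ticks), fixed independently of the kernel's internal HZ; you can query it with getconf:

$ getconf CLK_TCK
100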

Using perf to record a profile that includes sleep/blocked times

I want to get a sampling profile of my program that includes blocked time (waiting for a network service) as well as CPU time.
perf's default profiling mode (perf record -F 99 -g -- ./binary) samples on-CPU time only, and doesn't give a clear indication of where my program spends its time: it's skewed toward CPU-intensive parts and doesn't show IO-intensive parts at all. The sleep-time profiling mode (related on SO) shows sleep times but no general profile.
What I'd like is something really simple: record a call stack of my program every 10ms, no matter whether it's running or currently blocked. Then make a flamegraph out of that.
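One crude way to approximate this from the shell, without perf, is the "poor man's profiler": periodically attach gdb and dump every thread's stack; a blocked thread's stack is captured just like a running one's. A sketch (the PID argument, sample count, and interval are placeholders, and gdb's attach overhead makes the interval approximate):

#!/bin/bash
# wall-clock stack sampler: $1 = PID of the target process
pid=$1
for i in $(seq 1 100); do
  gdb -batch -p "$pid" -ex 'thread apply all bt' 2>/dev/null
  sleep 0.01   # aim for roughly 10 ms between samples
done

The concatenated backtraces can then be collapsed and rendered with the usual FlameGraph scripts.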

How to change the system clock rate or OS clock rate?

I want to know whether there is any way to change the Windows or Linux clock rate, or the system clock rate (maybe via BIOS)? I mean accelerate or decelerate the system clock!
For example, 24 hours on the computer would last 12 hours or 36 hours in reality.
NOTE:
Using the batch file below, I can decelerate the Windows time. But I want something at a lower level! I want to change the clock pace in such a way that time runs slower or faster for all programs and tools, not only the Windows clock.
@echo off
:loop
rem remember the current time, wait, then set the clock back to it
set T=%time%
timeout 1
time %T%
timeout 1
goto loop
So your CPU's clock is not actually programmable via system calls: it works off a crystal oscillator, and you cannot change it, even during boot. This is intentional, so that your machine can keep time regardless of power, network, or general system status.
As commented by That Other Guy, you might perhaps use the adjtimex(2) syscall, but you should first make sure that no NTP client daemon (which itself uses adjtimex) is running, so stop any ntpd or chrony service.
I'm not sure it would work, and it might make your system quite unstable.
A cruder possibility might be to forcibly set the date(1) (or also hwclock(8)) quite often, e.g. in some crontab job running every 5 minutes.
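As a sketch of that crude approach (the numbers are made up, and anything that expects wall-clock time to advance steadily may break): a root crontab entry that steps the clock back 150 seconds every 5 minutes makes it advance at roughly half speed on average:

# /etc/crontab fragment; % must be escaped as \% inside crontab entries
*/5 * * * * root date -s "@$(( $(date +\%s) - 150 ))"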
I believe that decelerating the system clock by so much is a strange and bad thing to do. Don't do that on a production machine (or even on a machine making significant requests on the Web). Be prepared to perhaps break a lot of things.

Serial Code Experiences Big Difference In Running Time On A GPFS FS

I need to measure the wall time of a serial code running on our cluster. In exclusive mode, i.e., when no other user is using my node, the wall time of the code varies quite a lot, ranging from 2:30m to 3:20m. The code does the same thing in every run. I am wondering if the big variance in the wall time is caused by the GPFS file system, since the code reads and writes files stored on a GPFS file system. My question is whether there is a tool with which I can view the GPFS I/O performance and relate it to the performance of my code?
Thanks.
This is a very big question... we need to narrow it down a bit. So, let me ask some questions.
Let us see the time command output for a simple ls command.
$ time ls
real 0m0.003s
user 0m0.001s
sys 0m0.001s
Wall-clock time == real time, which, in your case, is varying. The next step of debugging is to ask: do the user time and system time also vary? If the GPFS file system runs inside the kernel and consumes a varying amount of time, you should see the sys time vary. If the sys time remains the same but the real time varies, then the program is spending time sleeping on something. There are deeper ways to find the problem... but can you please clarify your question some more?
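To gather that data, run the code repeatedly and log all three values side by side; a sketch using GNU time (the binary name ./mycode and the run count are placeholders):

# log real/user/sys for ten runs, then compare which column varies
for i in $(seq 1 10); do
  /usr/bin/time -a -o timings.log -f "run $i: real=%e user=%U sys=%S" ./mycode
done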

How can I get system time from a proc file?

How can I get the system time from a proc file? I know we can get the system time from commands such as date, and we can also write code based on the time API. But I really need to use a simple proc file to get the time. No matter what the time format is, a simple value is OK. For example, the total seconds since 1970-01-01 would be good enough.
Yes, you can:
cat /proc/driver/rtc
From the man page:

RTC vs system clock

RTCs should not be confused with the system clock, which is a software clock maintained by the kernel and used to implement gettimeofday(2) and time(2), as well as setting timestamps on files, and so on. The system clock reports seconds and microseconds since a start point, defined to be the POSIX Epoch: 1970-01-01 00:00:00 +0000 (UTC).
You can get the amount of time since the system booted from /proc/uptime, but there is no way I know of to get the real time from /proc.
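That said, two proc files can be combined to approximate it (a sketch; it relies on the btime field Linux exposes in /proc/stat, the boot time in seconds since the Epoch):

# current epoch seconds reconstructed from proc files only
btime=$(awk '/^btime/ {print $2}' /proc/stat)   # boot time, seconds since the Epoch
up=$(awk '{print int($1)}' /proc/uptime)        # whole seconds since boot
echo $(( btime + up ))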
