How can I get the system time from a proc file? I know we can get the system time from commands such as date, and we can also write code against the time APIs. But I really need to use a simple proc file to get the time. The format doesn't matter; a simple value is OK. For example, the total seconds since 1970/1/1 would be good enough.
Yes, you can:
cat /proc/driver/rtc
From the man page:
RTC vs system clock
RTCs should not be confused with the system clock, which is a software clock maintained by the kernel and used to implement gettimeofday(2) and time(2), as well as setting timestamps on files, and so on. The system clock reports seconds and microseconds since a start point, defined to be the POSIX Epoch: 1970-01-01 00:00:00 +0000 (UTC).
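If you want those epoch seconds in a program rather than by eye, here is a minimal C sketch. It assumes the RTC driver exposes the usual rtc_time and rtc_date lines (this depends on your kernel and driver) and that the RTC is kept in UTC; adjust if yours holds local time.

/* Hedged sketch: read /proc/driver/rtc and convert its rtc_time and
 * rtc_date lines to seconds since 1970-01-01 UTC. Field names and their
 * presence depend on the kernel/RTC driver. */
#define _DEFAULT_SOURCE        /* for timegm() on glibc */
#include <stdio.h>
#include <time.h>

int main(void)
{
    FILE *f = fopen("/proc/driver/rtc", "r");
    if (!f) { perror("/proc/driver/rtc"); return 1; }

    struct tm tm = {0};
    char line[128];
    int have = 0;
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "rtc_time : %d:%d:%d",
                   &tm.tm_hour, &tm.tm_min, &tm.tm_sec) == 3)
            have |= 1;
        else if (sscanf(line, "rtc_date : %d-%d-%d",
                        &tm.tm_year, &tm.tm_mon, &tm.tm_mday) == 3)
            have |= 2;
    }
    fclose(f);
    if (have != 3) { fprintf(stderr, "unexpected /proc/driver/rtc format\n"); return 1; }

    tm.tm_year -= 1900;   /* struct tm counts years from 1900 */
    tm.tm_mon  -= 1;      /* and months from 0 */
    printf("%lld\n", (long long)timegm(&tm));  /* seconds since the Epoch */
    return 0;
}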
You can get the amount of time since the system booted from /proc/uptime, but there is no way I know of to get the real time from /proc.
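For completeness, a minimal C sketch of reading /proc/uptime (the two fields are seconds since boot and cumulative idle time summed over all CPUs):

/* Hedged sketch: print the two values from /proc/uptime. */
#include <stdio.h>

int main(void)
{
    double up, idle;
    FILE *f = fopen("/proc/uptime", "r");
    if (!f || fscanf(f, "%lf %lf", &up, &idle) != 2) {
        perror("/proc/uptime");
        return 1;
    }
    fclose(f);
    printf("up %.2f s, idle %.2f s\n", up, idle);
    return 0;
}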
I want to timestamp some events in a logfile from a bash script. I need this timestamp to be as accurate as possible. I see that the standard way of doing this from bash seems to be the time command, which can produce a nanosecond timestamp with the +%s%N option.
However, when doing this from C I remembered that multiple timekeeping functions had multiple clock sources, and not all of them were equally accurate or had the same guarantees (e.g. being monotonic). How do I know what clock source time uses?
The man 1 time page is rather clear:
These statistics consist of (i) the elapsed real time between invocation and termination, (ii) the user CPU time (the sum of the tms_utime and tms_cutime values in a struct tms as returned by times(2)), and (iii) the system CPU time (the sum of the tms_stime and tms_cstime values in a struct tms as returned by times(2)).
So we can go to man 3p times, which just states: "The accuracy of the times reported is intentionally left unspecified to allow implementations flexibility in design, from uniprocessor to multi-processor networks." So we can go to man 2 times and learn that it's all measured with clock_t, and maybe we should use clock_gettime instead.
How do I know what clock source time uses?
As usual on a GNU system, all programs are open source, so you can go and download the sources of the kernel and your shell and inspect them to see how it works. I see in bash's time_command() that there are several methods available, and nowadays bash uses getrusage() as a replacement for times().
How accurate is the Linux bash time command?
Both getrusage() and times() are system calls themselves, so the values are returned straight from the kernel. My guess would be that they are measured with the accuracy the kernel can give us - so with jiffies/HZ.
The resolution of the measurement will be equal to jiffies, so usually with 300 HZ that's 3.333 ms, if my math is right. The accuracy will depend on your hardware, and maybe also the workload - my overestimated guess would be that the values will be right to within one or two jiffies, so up to ~7 milliseconds.
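If you want to check what the kernel advertises on your own box, here is a small sketch (my illustration, not part of the original answer) that queries clock_getres(2). Note this reports the clock's granularity, not the measurement accuracy discussed above.

/* Hedged sketch: print the advertised resolution of a few clocks. */
#include <stdio.h>
#include <time.h>

static void show(const char *name, clockid_t id)
{
    struct timespec res;
    if (clock_getres(id, &res) == 0)
        printf("%-26s %ld.%09ld s\n", name, (long)res.tv_sec, res.tv_nsec);
    else
        perror(name);
}

int main(void)
{
    show("CLOCK_REALTIME", CLOCK_REALTIME);
    show("CLOCK_MONOTONIC", CLOCK_MONOTONIC);
    show("CLOCK_PROCESS_CPUTIME_ID", CLOCK_PROCESS_CPUTIME_ID);
    return 0;
}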
Is there a clock in Linux with nanosecond precision that is strictly increasing and maintained through a power cycle? I am attempting to store time series data in a database where each row has a unique time stamp. Do I need to use an external time source such as a GPS receiver to do this? I would like the time stamp to be in or convertible to GPS time.
This is not a duplicate of How to create a high resolution timer in Linux to measure program performance?. I am attempting to store absolute times, not calculate relative time differences. The clock must persist over a power cycle.
Most computers now have software that periodically corrects the system time over the internet. This means that the system clock can go up or down some milliseconds every so often. Remember that the computer's clock also has some drift. If you don't want problems with leap seconds, use a timescale without leap-second corrections, such as GPS time or TAI. NTP will be off in the microsecond or millisecond range because of differences in latency over the internet. The clocks that would actually meet your requirements are fifty thousand dollars and up.
Based on the question, the other answers, and discussion in comments...
You can get "nanosecond precision that is strictly increasing and maintained through a power cycle" by combining the results of clock_gettime() with CLOCK_REALTIME and with CLOCK_MONOTONIC - with some caveats.
First, turn off NTP. Run NTP once at each system restart to sync your time with the world, but do not update the time while the system is up. This will avoid rolling the time backwards. Without doing this you could get a sequence such as
20160617173556 1001
20160617173556 1009
20160617173556 1013
20160617173555 1020 (notice the second went backward)
20160617173556 1024
(For this example, I'm just using YYYYMMDDhhmmss followed by some fictional monotonic clock value.)
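To make the combination concrete, here is a hedged sketch (the helper names init_anchor()/stamp_ns() are mine, not from the answer). It anchors CLOCK_REALTIME once at startup and derives every later timestamp from CLOCK_MONOTONIC, so timestamps cannot step backwards while the process runs; the anchor is naturally re-taken at each boot, which is why syncing once per restart matters.

/* Hedged sketch: wall-clock-like nanosecond timestamps that never go
 * backwards within one run, built from CLOCK_REALTIME + CLOCK_MONOTONIC. */
#include <stdio.h>
#include <time.h>

static struct timespec real0, mono0;   /* anchors taken at startup */

static void init_anchor(void)
{
    clock_gettime(CLOCK_REALTIME,  &real0);
    clock_gettime(CLOCK_MONOTONIC, &mono0);
}

/* Nanoseconds since the Epoch, non-decreasing for this process's lifetime. */
static long long stamp_ns(void)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    long long elapsed = (now.tv_sec  - mono0.tv_sec)  * 1000000000LL
                      + (now.tv_nsec - mono0.tv_nsec);
    return (long long)real0.tv_sec * 1000000000LL + real0.tv_nsec + elapsed;
}

int main(void)
{
    init_anchor();
    for (int i = 0; i < 3; i++)
        printf("%lld\n", stamp_ns());
    return 0;
}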
Then you face business decisions.
• How important is matching the world's time, compared to strictly increasing uniqueness? (Hardware clock drift could throw off accuracy with respect to world time.)
• Given that decision, is it worth the investment in specialized hardware, rather than a standard (or even high-end) PC?
• If two events actually happen during the same nanosecond, is it acceptable to have duplicate entries?
• etc.
There are many tradeoffs possible based on the true requirements that lead to developing this application.
In no particular order:
To be sure your time is free of daylight-saving-time changes, use date's -u argument to get UTC time. That is always increasing, barring time corrections from system admins.
The trouble with %N is that the precision you actually get depends on the hardware and can be much less than %N allows. Run a few experiments to find out. Warnings about this are everywhere, but it is still overlooked.
If you are writing C-ish code, see the time() function, and use gmtime(), not localtime()-type functions, to convert to text. Look at the strftime() function to format the integer part of the time. You will find the strftime() format fields magically match those of the date command formats because date basically calls strftime(). The truly paranoid, willing to write additional code, can use CLOCK_MONOTONIC to be sure the time is increasing.
If you truly require increasing times, you may need to write your own command or function that remembers the last time. If called during the same time, add an offset of 1; keep incrementing the offset as often as required to ensure unique times until you get a hardware time greater than your adjusted time.
Linux tends to favor NTP for obtaining network time. The previously mentioned function for ensuring increasing time will help with backward jumps, as those jumps are usually not large.
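A minimal, single-threaded sketch of that "remember the last time" function (an illustration under the stated assumptions; unique_stamp_ns() is my own name, and a real implementation would need locking or an atomic compare-and-swap if several threads hand out timestamps):

/* Hedged sketch: strictly increasing nanosecond timestamps, bumping by
 * one nanosecond whenever the clock repeats or steps backwards slightly. */
#include <stdio.h>
#include <time.h>

static long long last_ns;   /* last timestamp handed out (ns since Epoch) */

static long long unique_stamp_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    long long now = (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    if (now <= last_ns)
        now = last_ns + 1;   /* stay strictly increasing */
    last_ns = now;
    return now;
}

int main(void)
{
    for (int i = 0; i < 5; i++)
        printf("%lld\n", unique_stamp_ns());
    return 0;
}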
If nanosecond precision is really sufficient for you:
date +%s%N
I am running a jar file in Linux with the time command. Below is the output after execution.
15454.58s real 123464.61s user 6455.55s system
Below is the command executed.
time java -jar -Xmx7168m Batch.jar
But the actual time taken to execute the process was 9270 seconds.
Why are the actual time (wall clock time) and the reported real time different?
Can anyone explain this? It's running on a multi-core machine (32 cores).
Maybe this explains the deviation you are experiencing. From the time Wikipedia article:
Because a program may fork children whose CPU times (both user and sys) are added to the values reported by the time command, but on a multicore system these tasks are run in parallel, the total CPU time may be greater than the real time.
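To see the scale of the effect with the numbers from the question: user + sys is roughly 123464.61 s + 6455.55 s ≈ 129920 s of CPU time packed into 15454.58 s of real time, i.e. about 8.4 cores busy on average, which is only possible because the work ran in parallel on the 32-core machine.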
Apart from that, your understanding of real time conforms with the definition given in time(7):
Real time is defined as time measured from some fixed point, either from a standard point in the past (see the description of the Epoch and calendar time below), or from some point (e.g., the start) in the life of a process (elapsed time).
See also bash(1) (although its documentation on the time command is not overly comprehensive).
If seconds are exact enough for you, this little wrapper can help:
#!/bin/bash
starttime=$( date +"%s" )
# run your program here
endtime=$( date +"%s" )
duration=$(( endtime-starttime ))
echo "Execution took ${duration} s."
Note: If the system time is changed while your program is running, the results will be incorrect.
From what I remember, user time is the time the process spends in user space, system is the time spent running in kernel space (syscalls, for example), and real is also called the wall-clock time (the actual time that you can measure with a stopwatch, for example). I don't know exactly how this is calculated on an SMP system.
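To illustrate where those three numbers come from, here is a rough C sketch (my own illustration, not how your shell's time keyword is actually implemented): it measures the wall clock around a child process and reads the child's user/system CPU times from the kernel via wait4().

/* Hedged sketch: report real, user and sys time for a child command. */
#define _DEFAULT_SOURCE        /* for wait4() on glibc */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    pid_t pid = fork();
    if (pid == 0) {
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }

    int status;
    struct rusage ru;
    wait4(pid, &status, 0, &ru);            /* CPU times of the child */
    clock_gettime(CLOCK_MONOTONIC, &t1);    /* wall clock ("real") */

    double real = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
    double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
    printf("real %.3fs  user %.3fs  sys %.3fs\n", real, user, sys);
    return 0;
}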
I want to know: is there any way to change the Windows or Linux clock rate, or the system clock rate (maybe via the BIOS)? I mean accelerate or decelerate the system clock!
For example, so that every 24 hours on the computer lasts 12 hours or 36 hours in reality!
NOTE:
Using the batch file below, I can decelerate Windows time. But I want something at a lower level! I want to change the clock pace so that time runs slower or faster for all programs and tools, not only the Windows clock!
@echo off
:loop
rem remember the current time, wait a second, then set the clock back to it
set T=%time%
timeout 1
time %T%
timeout 1
goto loop
So your CPU's clock is not actually programmable via system calls; it works off a crystal oscillator. You cannot change it during boot-up. This is done intentionally so that your CPU is able to keep time regardless of your power/wifi/general system status.
As commented by That Other Guy, you might perhaps use the adjtimex(2) syscall, but you should first be sure that no NTP client daemon - which uses adjtimex - is running (so stop any ntpd or chrony service).
I'm not sure it would work, and it might make your system quite unstable.
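For what it's worth, a hedged sketch of such an adjtimex(2) call (requires root/CAP_SYS_TIME). Note that the kernel only accepts a small frequency offset, on the order of ±500 ppm, so this can stretch or shrink a day by well under a minute - nowhere near making 24 hours last 12.

/* Hedged sketch: slow the software clock slightly via adjtimex(2). */
#include <stdio.h>
#include <sys/timex.h>

int main(void)
{
    struct timex tx = {0};
    tx.modes = ADJ_FREQUENCY;
    /* freq is in parts per million, scaled by 2^16:
       -100 ppm means the clock runs about 0.01% slow */
    tx.freq = -(100L << 16);
    if (adjtimex(&tx) == -1) {
        perror("adjtimex");
        return 1;
    }
    printf("frequency offset now %ld (scaled ppm)\n", tx.freq);
    return 0;
}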
A cruder possibility might be to forcibly set the time with date(1) - or also hwclock(8) - quite often (e.g., in a crontab job running every 5 minutes).
I believe that drastically decelerating the system clock is a strange and bad thing to do. Don't do it on a production machine (or even on a machine serving significant requests on the Web). Be prepared for it to break a lot of things.
I want to measure the execution duration of a process from outside that process on Linux. I found that /proc/[pid]/stat has a field named starttime, described in the man page as "The time in jiffies the process started after system boot".
Also, I found /proc/uptime provides elapsed time ET in seconds since system boot. Theoretically I can acquire running time from these two files by
running time = ET - starttime / (jiffies per second).
As for jiffies, I thought it referred to the kernel's CONFIG_HZ (250 on Ubuntu 12.04) rather than USER_HZ (100 on Ubuntu 12.04, obtained with "getconf CLK_TCK"), as described in http://www.makelinux.net/books/lkd2/ch10lev1sec3. However, I tested it and found that starttime in fact uses USER_HZ on Ubuntu 12.04. I am confused by this. Could someone explain it to me? Thanks a lot!
Your man page was probably out-of-date at the time you retrieved it. Here's a more current page which states the following:
(22) starttime %llu
The time the process started after system boot. In kernels before Linux 2.6, this value was expressed in jiffies. Since Linux 2.6, the value is expressed in clock ticks (divide by sysconf(_SC_CLK_TCK)).
In older kernels (before Linux 2.6), the time really was expressed in kernel jiffies. This behavior has since changed, and the value is now provided in clock ticks - jiffies scaled to the USER_HZ rate - which is why dividing by sysconf(_SC_CLK_TCK) (i.e. USER_HZ) gives the right answer, as you observed.
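Putting the two /proc files together, here is a minimal C sketch of the calculation from the question: field 22 of /proc/<pid>/stat divided by sysconf(_SC_CLK_TCK), subtracted from /proc/uptime. Error handling is kept minimal.

/* Hedged sketch: print roughly how long a given process has been running. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    char path[64];
    snprintf(path, sizeof path, "/proc/%s/stat", argv[1]);

    char buf[4096];
    FILE *f = fopen(path, "r");
    if (!f || !fgets(buf, sizeof buf, f)) { perror(path); return 1; }
    fclose(f);

    /* The comm field (2) may contain spaces, so start parsing after the
     * last ')'; starttime is then reached as field 22. */
    char *p = strrchr(buf, ')');
    if (!p) { fprintf(stderr, "unexpected stat format\n"); return 1; }
    unsigned long long starttime = 0;
    int field = 2;
    for (p = strtok(p + 2, " "); p; p = strtok(NULL, " ")) {
        if (++field == 22) { starttime = strtoull(p, NULL, 10); break; }
    }

    double uptime = 0.0;
    f = fopen("/proc/uptime", "r");
    if (!f || fscanf(f, "%lf", &uptime) != 1) { perror("/proc/uptime"); return 1; }
    fclose(f);

    double running = uptime - (double)starttime / sysconf(_SC_CLK_TCK);
    printf("process %s has been running for about %.2f seconds\n", argv[1], running);
    return 0;
}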