Linux coreutils timeout: relative or absolute time?

I'm using coreutils 8.13 and want to use the timeout command in combination with a python subprocess. If the system clock is changed during a long running call, will the timeout command work as expected? In other words, does the timeout command use absolute time (affected by changing system clock) or relative time (unaffected by changing system clock)?
Edit: After some digging, I've narrowed down that the answer depends on the behavior of alarm() in unistd.h. Still digging through the source... please save me.
Edit 2: Older versions of coreutils (<8.13) implement the timeout via alarm(2) (i.e. ITIMER_REAL). Newer versions (>=8.13) use timer_create(2) with CLOCK_REALTIME. This post tells me that my implementation should be affected by system clock changes. However, it is not: a simple test using a Python script running a while loop shows me that changing the system clock does not affect the timeout. ???

Adjustments to the CLOCK_REALTIME clock have no effect on relative timers based on that clock.
At least, according to this answer to a somewhat similar question.

Related

Is clock_nanosleep affected by adjtime and NTP?

Usually CLOCK_MONOTONIC_RAW is used for obtaining a clock that is not affected by NTP or adjtime(). However, clock_nanosleep() doesn't support CLOCK_MONOTONIC_RAW, and trying to use it anyway results in return code 95, Operation not supported (kernel 4.6.0).
Does clock_nanosleep() somehow take these clock adjustments into account or will the sleep time be affected by it?
What are the alternatives if a sleeping time is required which should not be affected by clock adjustments?
CLOCK_MONOTONIC_RAW has never been supported by clock_nanosleep(), ever since it was introduced in Linux 2.6.28. Support was also explicitly disabled in 2.6.32 because of oopses. The code has been refactored several times since, but there is still no support for CLOCK_MONOTONIC_RAW in clock_nanosleep(), and I wasn't able to find any comments on why that is.
At the very minimum, the fact that there was a patch that explicitly disabled this functionality and it passed all reviews tells us that it doesn't look like a big problem for kernel developers. So, at the moment (4.7) the only things CLOCK_MONOTONIC_RAW supports are clock_getres() and clock_gettime().
Speaking of adjustments, as Rich already noted, CLOCK_MONOTONIC is subject to rate adjustments just by the nature of this clock. This happens because hrtimer_interrupt() runs its queues with the adjusted monotonic time value (ktime_get_update_offsets_now() -> timekeeping_get_ns() -> timekeeping_delta_to_ns(), which operates on xtime_nsec, which is subject to adjustment). Actually, looking at this code, I'm probably no longer surprised that CLOCK_MONOTONIC_RAW has no support in clock_nanosleep() (and probably won't in the future): the adjusted monotonic clock seems to be the basis for hrtimers.
As for alternatives, I think there are none. nanosleep() uses the same CLOCK_MONOTONIC; setitimer() has its own set of timers; alarm() uses ITIMER_REAL (same as setitimer()), which (with some indirection) also ends up on our good old friend CLOCK_MONOTONIC. What else do we have? I guess nothing.
As an unrelated side note: if you call clock_nanosleep() with a relative interval (that is, without TIMER_ABSTIME), CLOCK_REALTIME actually becomes a synonym for CLOCK_MONOTONIC.

How to change the system clock rate or OS clock rate?

Is there any way to change the Windows or Linux clock rate, i.e. the system clock rate (maybe via the BIOS)? I mean accelerating or decelerating the system clock, so that, for example, every 24 hours on the computer lasts 12 or 36 real hours.
NOTE:
Using the batch file below, I can slow down Windows time. But I want something lower-level: I want to change the clock pace so that time runs slower or faster for all programs and tools, not just the Windows clock.
@echo off
:loop
set T=%time%
timeout 1
time %T%
timeout 1
goto loop
Your CPU's clock is not programmable via system calls: it works off a crystal oscillator, and you cannot change it after boot. This is intentional, so that your CPU is able to keep time regardless of your power/network/general system status.
As commented by That Other Guy, you might perhaps use the adjtimex(2) syscall, but you should first make sure that no NTP client daemon (which uses adjtimex) is running, so stop any ntpd or chrony service.
I'm not sure it would work, and it might make your system quite unstable.
A cruder possibility might be to forcibly set the date(1) (or also hwclock(8)) quite often, e.g. in a crontab job running every 5 minutes.
I believe that drastically slowing down the system clock is a strange and bad thing to do. Don't do it on a production machine (or even on a machine making significant requests on the web). Be prepared to break a lot of things.

How to change kernel Timer frequency

I have a question about changing kernel frequency.
I compiled the kernel using:
make menuconfig (making some changes in the config, under Processor type and features -> Timer frequency, to change the frequency), then:
1. fakeroot make-kpkg --initrd --append-to-version=-mm kernel-image kernel-headers
2. export CONCURRENCY_LEVEL=3
3. sudo dpkg -i linux-image-3.2.14-mm_3.2.14-mm-10.00.Custom_amd64.deb
4. sudo dpkg -i linux-headers-3.2.14-mm_3.2.14-mm-10.00.Custom_amd64.deb
Then, if I want to change the kernel frequency, what I do is: replace the .config file with my own config file (since I want to do this automatically, without opening the make menuconfig UI), then repeat steps 1-4 again.
Is there any way to avoid repeating the above 4 steps?
Thanks a lot!
The timer frequency is fixed in Linux (unless you build a tickless kernel - CONFIG_NO_HZ=y - but the upper limit will still be fixed). You cannot change it at runtime or at boot time. You can only change it at compile time.
So the answer is: no. You need to rebuild the kernel when you want to change it.
The kernel timer frequency (CONFIG_HZ) is not configurable at runtime - you will have to compile a new kernel when you change the setting and you will have to reboot the system with the new kernel to see the effects of any change.
If you are doing this a lot, though, you should be able to create a little shell script to automate the kernel configure/build/install process. For example it should not be too hard to automate the procedure so that e.g.
./kernel-prep-with-hz 100
would rebuild and install a new kernel, only requiring from you to issue the final reboot command.
Keep in mind though, that the timer frequency may subtly affect various subsystems in unpredictable ways, although things have become a lot better since the tickless timer code was introduced.
Why do you want to do this anyway?
Maybe this will help. As the article says, you can switch between the available frequencies that your system supports (check whether CPUfreq is already enabled on your system).
Example, mine.
#cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
2000000 1667000 1333000 1000000
#echo 1000000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
http://www.ibm.com/developerworks/linux/library/l-cpufreq-2/

select() inside infinite loop uses significantly more CPU on RHEL 4.8 virtual machine than on a Solaris 10 machine

I have a daemon app written in C and is currently running with no known issues on a Solaris 10 machine. I am in the process of porting it over to Linux. I have had to make minimal changes. During testing it passes all test cases. There are no issues with its functionality. However, when I view its CPU usage when 'idle' on my Solaris machine it is using around .03% CPU. On the Virtual Machine running Red Hat Enterprise Linux 4.8 that same process uses all available CPU (usually somewhere in the 90%+ range).
My first thought was that something must be wrong with the event loop. The event loop is an infinite loop (while(1)) with a call to select(). The timeval is setup so that timeval.tv_sec = 0 and timeval.tv_usec = 1000. This seems reasonable enough for what the process is doing. As a test I bumped the timeval.tv_sec to 1. Even after doing that I saw the same issue.
Is there something I am missing about how select works on Linux vs. Unix? Or does it work differently when the OS is running on a virtual machine? Or maybe there is something else I am missing entirely?
One more thing: I am not sure which version of VMware Server is being used. It was updated about a month ago, though.
I believe that Linux returns the remaining time by writing it into the time parameter of the select() call and Solaris does not. That means that a programmer who isn't aware of the POSIX spec might not reset the time parameter between calls to select.
This would result in the first call having 1000 usec timeout and all other calls using 0 usec timeout.
As Zan Lynx said, the timeval is modified by select() on Linux, so you should reassign the correct value before each select call. I also suggest checking whether some file descriptor is in a particular state (e.g. end of file, peer connection closed...). Maybe the port is exposing a latent bug in the analysis of the returned values (FD_ISSET and so on). It happened to me some years ago in a port of a select-driven loop: I was using the returned value the wrong way, and a closed fd was added to the read set, causing select to fail. On the old platform the wrong fd happened to have a value higher than maxfd, so it was ignored. Because of the same bug, the program didn't recognize the select failure (select() == -1) and looped forever.
Bye!

Microsecond accurate (or better) process timing in Linux

I need a very accurate way to time parts of my program. I could use the regular high-resolution clock for this, but that returns wall-clock time, which is not what I need: I need the time spent running only my process.
I distinctly remember seeing a Linux kernel patch that would allow me to time my processes to nanosecond accuracy, except I forgot to bookmark it and I forgot the name of the patch as well :(.
I remember how it works though:
On every context switch, it will read out the value of a high-resolution clock, and add the delta of the last two values to the process time of the running process. This produces a high-resolution accurate view of the process' actual process time.
The regular process time is kept using the regular clock, which I believe is millisecond-accurate (1000 Hz), which is far too coarse for my purposes.
Does anyone know what kernel patch I'm talking about? I also remember it was like a word with a letter before or after it -- something like 'rtimer' or something, but I don't remember exactly.
(Other suggestions are welcome too)
The Completely Fair Scheduler suggested by Marko is not what I was looking for, but it looks promising. The problem I have with it is that the calls I can use to get process time still don't return values that are granular enough.
times() returns values such as 21 and 22, in milliseconds.
clock() returns values such as 21000 and 22000, same granularity.
getrusage() returns values like 210002 and 22001 (and such); they look to have a bit better accuracy, but the values look conspicuously similar.
So now the problem I'm probably having is that the kernel has the information I need, I just don't know the system call that will return it.
If you are looking for this level of timing resolution, you are probably trying to do some micro-optimization. If that's the case, you should look at PAPI. Not only does it provide both wall-clock and virtual (process only) timing information, it also provides access to CPU event counters, which can be indispensable when you are trying to improve performance.
http://icl.cs.utk.edu/papi/
See this question for some more info.
Something I've used for such things is gettimeofday(). It provides a structure with seconds and microseconds. Call it before the code, and again after. Then just subtract the two structs using timersub, and you can get the time it took in seconds from the tv_usec field.
If you need very small time units for (I assume) testing the speed of your software, I would recommend just running the parts you want to time in a loop millions of times, taking the time before and after the loop, and calculating the average. A nice side effect of doing this (apart from not needing to figure out how to use nanoseconds) is that you get more consistent results, because the random overhead caused by the OS scheduler is averaged out.
Of course, unless your program needs to run millions of times per second, it's probably fast enough if you can't even measure a millisecond of running time.
I believe CFS (the Completely Fair Scheduler) is what you're looking for.
You can use the High Precision Event Timer (HPET) if you have a fairly recent 2.6 kernel. Check out Documentation/hpet.txt for how to use it. This solution is platform-dependent, though, and I believe it is only available on newer x86 systems. HPET has at least a 10 MHz timer, so it should fit your requirements easily.
I believe several PowerPC implementations from Freescale support a cycle exact instruction counter as well. I used this a number of years ago to profile highly optimized code but I can't remember what it is called. I believe Freescale has a kernel patch you have to apply in order to access it from user space.
http://allmybrain.com/2008/06/10/timing-cc-code-on-linux/
might be of help to you (directly if you are doing it in C/C++, but I hope it will give you pointers even if you're not)... It claims to provide microsecond accuracy, which just passes your criterion. :)
I think I found the kernel patch I was looking for. Posting it here so I don't forget the link:
http://user.it.uu.se/~mikpe/linux/perfctr/
http://sourceforge.net/projects/perfctr/
Edit: It works for my purposes, though not very user-friendly.
Try the CPU's timestamp counter? Wikipedia seems to suggest using clock_gettime().
