I want to know: is there any way to change the Windows or Linux clock rate, or the system clock rate (maybe via the BIOS)? I mean accelerate or decelerate the system clock!
For example, every 24 hours on the computer would last 12 hours or 36 hours in real time!
NOTE:
Using the batch file below, I can decelerate Windows time. But I want something at a lower level! I want to change the clock pace so that time runs slower or faster for all programs and tools, not only for the Windows clock!
@echo off
rem must be run as administrator, since it resets the system time
:loop
set T=%time%
timeout /t 1 /nobreak >nul
time %T%
timeout /t 1 /nobreak >nul
goto loop
So your CPU's clock is not actually programmable via system calls. It works off an oscillator with a crystal, and you cannot change it at boot. This is done intentionally, so that your machine can keep time regardless of your power/WiFi/general system status.
As commented by That Other Guy, you might perhaps use the adjtimex(2) syscall, but you first should be sure that no NTP client daemon (which uses adjtimex) is running, so stop any ntpd or chrony service.
I'm not sure it would work, and it might make your system quite unstable.
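For instance, slowing the software clock by about 10% might look like this minimal, untested sketch (needs root/CAP_SYS_TIME):

#include <stdio.h>
#include <sys/timex.h>

int main(void)
{
    /* ADJ_TICK sets the number of microseconds added per clock tick;
       the nominal value is 10000 and the kernel only accepts roughly
       +/-10% around it, so 9000 makes the clock run about 10% slow */
    struct timex tx = { .modes = ADJ_TICK, .tick = 9000 };
    if (adjtimex(&tx) == -1) {
        perror("adjtimex");
        return 1;
    }
    return 0;
}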
A cruder possibility might be to forcibly set the date(1) (or also hwclock(8)) quite often, e.g. in some crontab job running every 5 minutes, as sketched below.
I believe that decelerating the system clock a lot is a strange and bad thing to do. Don't do that on a production machine (or even on some machine doing significant requests on the Web). Be prepared to perhaps break a lot of things.
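For the crontab route, a hypothetical /etc/cron.d entry (a sketch: to stretch 24 hours into 36, the clock must run at 2/3 speed, so set it back 100 seconds every 300 real seconds; note that % must be escaped in crontabs):

*/5 * * * * root date -s "@$(( $(date +\%s) - 100 ))"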
A Python program that I'm building used to die for no apparent reason. I couldn't figure out the reason, so my workaround was to add a few lines that write the time to a 'vitality' file every time a certain line within the program is executed, which happens about every 0.1 seconds.
A separate script reads the 'vitality' file every 1 second, and when the vital sign doesn't update for, say, 10 seconds, the script kills the program and restarts it.
So far this workaround has been working great on the original problem, but now I'm rather concerned whether the SSD will be degraded by this or not.
Does writing 10 digits of Unix timestamp every 0.1 s to a file have a negligible effect on SSD health, or would it degrade the SSD fast?
Doing that will degrade the SSD and destroy it over time.
In my last job, the SSD health tool (smartctl) indicated that the 15 SSDs in our cluster product were wearing rapidly and had only months of life left. The team found that a third-party software package (etcd) was syncing a small amount of data to a filesystem on the SSD once per second, and each sync wrote at least an entire 16K block. Luckily, the problem was found early enough that we could patch it in a software update before suffering too many customer returns.
Write the 'vitality' file somewhere else. It could be on a tmpfs like /var/run/user/. Or use a different vitality mechanism: something like supervisord can manage your task, run health checks, and restart it on failure, as sketched below.
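A hypothetical supervisord entry (program name and paths are made up) that removes the need for a vitality file entirely, since supervisord restarts the process itself:

[program:myapp]
command=/usr/bin/python3 /opt/myapp/main.py
autorestart=true
startretries=10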
I am profiling some code on a Linux system (running on an Intel Core i7 4500U) to measure ONLY the execution costs. The application is the demo mpeg2dec from libmpeg2, and I am trying to obtain a probability distribution of the mpeg2 execution times. However, we want to see the raw execution cost when the cache is switched off.
Is there a way I can disable the CPU cache of my system via a Linux command, or via a gcc flag? Or even set the CPU (L1/L2) cache size to 0 KB? Or even add some code changes to disable the cache? Of course, without modifying or rebuilding the kernel.
See this 2012 thread, where someone posted a tiny kernel module source that disables the cache through asm.
http://www.linuxquestions.org/questions/linux-kernel-70/disabling-cpu-caches-936077/
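The gist of such a module is only a few lines of privileged x86 code: set the cache-disable bit in CR0, then flush. A rough sketch (ring 0 only, e.g. a module's init; this is the core idea, not a complete module, and it should really be run on every CPU):

static void cache_disable(void)
{
    unsigned long cr0;
    asm volatile("mov %%cr0, %0" : "=r"(cr0));
    cr0 |= (1UL << 30);                  /* CR0.CD: disable caching */
    asm volatile("mov %0, %%cr0" :: "r"(cr0));
    asm volatile("wbinvd" ::: "memory"); /* write back and invalidate caches */
}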
If disabling the cache is really necessary, then so be it.
Otherwise, to know how much time a process takes in terms of user or system "cycles", I would recommend the getrusage() function.
#include <sys/resource.h>

struct rusage usage;
getrusage(RUSAGE_SELF, &usage);  /* resource usage of the calling process */
You can call it before/after your loop/test and subtract the values to get a good idea of how much time your process took, even if many other processes run in parallel on the same machine. The main problem you'd get is if your process starts swapping; in that case your timings will be off.
double user_usage   = usage.ru_utime.tv_sec + usage.ru_utime.tv_usec / 1000000.0;  /* user CPU seconds */
double system_usage = usage.ru_stime.tv_sec + usage.ru_stime.tv_usec / 1000000.0;  /* system CPU seconds */
This is really precise in my own experience. To increase precision, you could be root when running your test and give it a negative priority (-1 or -2 is enough); then it won't be swapped out until you call a function that may require it.
Of course, you still get the effect of the cache... assuming you do not handle very large amounts of data with code that goes on and on (as opposed to having a loop).
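Putting it together, a minimal sketch (work() is a placeholder for whatever you are profiling, e.g. the mpeg2dec loop):

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

static double seconds(struct timeval tv)
{
    return tv.tv_sec + tv.tv_usec / 1000000.0;
}

int main(void)
{
    struct rusage before, after;
    getrusage(RUSAGE_SELF, &before);
    /* work();  <-- the code under test */
    getrusage(RUSAGE_SELF, &after);
    printf("user: %f s, system: %f s\n",
           seconds(after.ru_utime) - seconds(before.ru_utime),
           seconds(after.ru_stime) - seconds(before.ru_stime));
    return 0;
}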
The Raspberry Pi has no real-time clock to keep track of time. Instead it uses the NTP daemon to keep the date and time as accurate as possible. This should work, I guess, but in my case it doesn't for some reason.
Without going into too much detail, I use my Raspberry Pi in a way where it's always plugged in but doesn't always have an Internet connection. Sometimes the CPU has a lot to do, sometimes it doesn't. This results in the RPi losing track of time. I would think that once it gets back on the Internet, it would sync the clock using the NTP servers to get back on track. However, it doesn't. From what I understand, if the offset is too big, the system doesn't sync the time.
Is there any way to force ntpd to sync the time no matter how big the offset is compared to the NTP servers? Or will I have to set up a cron job, say every hour, running:
ntpd -g
Add this to /etc/ntp.conf:
tinker panic 0
That will cause ntpd to sync despite the large clock offset.
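If you do end up using a cron job instead, a one-shot variant might look like this hypothetical hourly entry (ntpd refuses to start if another instance is already running, hence the stop/start around it; -g allows an arbitrarily large first adjustment and -q exits after setting the clock):

0 * * * * root service ntp stop; ntpd -gq; service ntp start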
You need to add a real-time clock as a local clock. I suggest you shop for something with 3 ppm or better accuracy, then set it up as a stratum 10 clock. You may also connect a stratum 0 clock, e.g. a WWVB, MSF or DCF77 receiver. However, in all cases you need a reasonable local clock; your only chance to keep accurate timing is to add an RTC.
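The usual way to declare such a stratum 10 fallback in /etc/ntp.conf is the undisciplined local clock driver, roughly:

# use the local (RTC-backed) clock at stratum 10 when no network source is reachable
server 127.127.1.0
fudge 127.127.1.0 stratum 10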
Hi, I need microsecond-level time synchronisation within a group of systems, but I have found it difficult.
My experiment: on a LAN, constantly compare the hardware clocks of two x64 Linux systems via some Ruby code, which sends microsecond timestamps over UDP from one host to the other and computes the difference. Experiment code: https://github.com/c2h2/chrono-diff
Results: the clocks drift quickly! Interestingly, they also don't drift in one direction; the difference between the two clocks is sometimes positive, sometimes negative, in a random manner, and they can be as much as 1 second apart several hours after the previous sync.
How should I keep them perfectly synced all the time? Run a time sync every few minutes?
Set up one of the hosts as an NTP server and let the other host use that server as its NTP server.
http://www.ntp.org/documentation.html
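A minimal sketch, assuming the server host is 192.168.1.10 on a /24 LAN (addresses are made up):

# /etc/ntp.conf on the server: allow LAN clients to query it
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# /etc/ntp.conf on the other host: poll the LAN server every 16 s (2^4)
server 192.168.1.10 iburst minpoll 4 maxpoll 4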
When I run my virtual machine with Gentoo as guest, I have found that there is considerable overhead coming from the tick_periodic function. (This is the function that runs on every timer interrupt.) It updates the global jiffies counter under write_seqlock, which leads to the overhead.
Here's a grep of HZ and relevant stuff in my kernel config file.
sharan013#sitmac4:~$ cat /boot/config | egrep 'HZ|TIME'
# CONFIG_RCU_FAST_NO_HZ is not set
CONFIG_NO_HZ=y
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
# CONFIG_MACHZ_WDT is not set
CONFIG_TIMERFD=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_X86_CYCLONE_TIMER=y
CONFIG_HPET_TIMER=y
Clearly it has set the configuration to 1000, but when I do sysconf(_SC_CLK_TCK), I get 100 as my timer frequency. So what is my system's timer frequency?
What I want to do is bring the frequency down to 100, or even lower if possible. Although it might affect the interactivity and precision of poll/select and the scheduler's time slice, I am ready to sacrifice these things for fewer timer interrupts, as it will speed up the VM.
When I tried to find out what has to be done, I read in one place that you can do it by changing the kernel configuration, elsewhere that adding divider=10 to the boot parameters does the job, and elsewhere that none of it is needed because CONFIG_HIGH_RES_TIMERS gives you low-latency timers even without increasing the timer frequency, and that the same is possible with a tickless system (CONFIG_NO_HZ).
I am extremely confused about what the right approach is.
All I want is to bring the timer interrupt rate down as low as possible.
Can I know the right way of doing this?
Don't worry! Your confusion is only to be expected: Linux timer interrupts are very confusing and have had a long and quite exciting history.
CLK_TCK
Linux has no sysconf system call and glibc is just returning the constant value 100. Sorry.
HZ <-- what you probably want
When configuring your kernel you can choose a timer frequency of either 100Hz, 250Hz, 300Hz or 1000Hz. All of these are supported, and although 1000Hz is the default it's not always the best.
People will generally choose a high value when they value latency (a desktop or a webserver) and a low value when they value throughput (HPC).
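To switch, you recompile with a different value. For example, from a kernel source tree (a sketch; the exact build and install steps depend on your distro):

# select a 100 Hz tick instead of 1000 Hz, then rebuild
scripts/config --disable HZ_1000 --enable HZ_100 --set-val HZ 100
make olddefconfig && make -j"$(nproc)" && sudo make modules_install install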
CONFIG_HIGH_RES_TIMERS
This has nothing to do with timer interrupts; it's just a mechanism that allows you to have higher-resolution timers. It basically means that timeouts on calls like select can be more accurate than 1/HZ seconds.
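You can see the effect from userspace with clock_getres(2): with high-resolution timers the reported resolution is 1 ns; without them it is a full tick (1/HZ). A quick sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;
    clock_getres(CLOCK_MONOTONIC, &res);
    /* ~1 ns with CONFIG_HIGH_RES_TIMERS, ~1/HZ (e.g. 1 ms) without */
    printf("%ld.%09ld s\n", (long)res.tv_sec, res.tv_nsec);
    return 0;
}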
divider
This command line option is a patch provided by Red Hat. You can probably use it (if you're using Red Hat or CentOS), but I'd be careful: it has caused lots of bugs, and you should probably just recompile with a different HZ value.
CONFIG_NO_HZ
This really doesn't do much; it's for power saving, and it means that the ticks will stop (or at least become less frequent) when nothing is executing. This is probably already enabled on your kernel, and it makes no difference when at least one task is runnable.
Frederic Weisbecker actually has a patch pending which generalizes this to cases where only a single task is running, but it's a little way off yet.