In this interesting article about falsehoods programmers believe about time, one of the falsehoods listed is:
Thread.sleep(1000) sleeps for >= 1000 milliseconds.
When isn't this true?
According to this (the documentation for the Windows Sleep() function, which is what Thread.sleep will call underneath): If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time. If dwMilliseconds is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on. To increase the accuracy of the sleep interval, call the timeGetDevCaps function to determine the supported minimum timer resolution and the timeBeginPeriod function to set the timer resolution to its minimum.
The OS only reacts at interrupts and therefore handles sleep expiries at the time of an interrupt. It is correct that the interrupt frequency can be increased by means of timeBeginPeriod. The difficulty is that the expiry of the Sleep() function requires two conditions to be met:
An interrupt has to occur.
dwMilliseconds has to expire.
Condition 2 is the problem here. dwMilliseconds is compared against the elapsed system time at each interrupt. The system time advances in filetime-format increments, so Sleep() only expires once n times the system-time increment becomes larger than dwMilliseconds. Thus one may never be able to get 1 ms sleep delays. This depends heavily on the system's hardware, software, and configuration (system-time increment/granularity).
A closer look with some examples can be found here
To answer the question: "Thread.sleep(1000) sleeps for >= 1000 milliseconds" is always TRUE!
Edit: at least when executed right after a Thread.sleep(1).
Edit: However, "Thread.sleep(1) sleeps for >= 1 millisecond" may not always be TRUE.
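To see the effect of the timer resolution yourself, here is a small C sketch (my own illustration, not part of the original answer) that times Sleep(1) on Windows before and after raising the resolution with timeBeginPeriod(1); link against winmm.lib:

/* Measure how long Sleep(1) really takes, with default and with 1 ms timer resolution. */
#include <stdio.h>
#include <windows.h>

static void measure(const char *label)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);

    for (int i = 0; i < 5; i++) {
        QueryPerformanceCounter(&start);
        Sleep(1);                               /* ask for a 1 ms sleep */
        QueryPerformanceCounter(&end);
        printf("%s: Sleep(1) took %.3f ms\n", label,
               1000.0 * (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart);
    }
}

int main(void)
{
    measure("default resolution");              /* often ~15.6 ms per tick */

    timeBeginPeriod(1);                         /* request 1 ms timer resolution */
    measure("1 ms resolution");
    timeEndPeriod(1);                           /* restore the previous resolution */

    return 0;
}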
We have a throttling implementation that essentially boils down to:
Semaphore s = new Semaphore(1);
...
void callMethod() {
    s.acquire();
    timer.recordCallable(() -> expensiveMethod()); // call the expensive method
    s.release();
}
I would like to gather metrics about the impact the semaphore has on the overall response time of the method. For example, I would like to know the number of threads that were waiting to acquire, the time spent waiting, and so on. I guess what I am looking for is a gauge that also captures timing information?
How do I measure the Semaphore stats?
There are multiple things you can do depending on your needs and situation.
LongTaskTimer is a timer that measures tasks that are currently in progress. The in-progress part is key here: after the task has finished, you will not see its effect on the timer. That's why it is meant for long-running tasks; I'm not sure whether it fits your use case.
The other thing you can do is use a Timer and a Gauge: the Timer measures how long it took to acquire the Semaphore, while the Gauge is incremented/decremented to track the number of threads currently waiting on it.
I understand the purpose of update_rq_clock: it updates the run-queue clock periodically on the system tick. But this function also calls update_rq_clock_task(). What is the purpose of that function?
Within update_rq_clock, the difference between the CPU timestamp and the run-queue clock is calculated (the rq->clock variable holds the last clock value read from the CPU). That difference is added to rq->clock, and also to rq->clock_task (which is rq->clock minus the time spent in interrupts and stolen time) through update_rq_clock_task.
There are a couple of code paths within the function that you can enable with kernel build options, but basically it breaks down to:
...
rq->clock_task += delta;         /* task clock: irq and stolen time already subtracted from delta when that accounting is enabled */
...
update_rq_clock_pelt(rq, delta); /* propagate the same delta to the PELT clock */
...
So, together the two functions update the run-queue clock and the run-queue task clock; the latter excludes time spent in interrupts and stolen time (provided you enabled that accounting through the kernel options) and therefore reflects the time the tasks actually used.
queue_delayed_work(struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)
In the above function, is it possible to give delay that is less than one jiffy?
You can give a delay of zero or more jiffies. To implement the delay, the kernel internally uses a timer. The earliest a timer can expire is at the next tick, therefore the smallest non-zero delay possible is one jiffy. With a delay of zero jiffies, the delayed work (dwork) is queued immediately, without any delay.
queue_delayed_work internally calls __queue_delayed_work, where the timer is configured; the expiry time is set to jiffies + delay. A minimal usage sketch follows; refer to the links further down for more information.
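As a rough illustration of that jiffy-granularity API (the module boilerplate and names here are my own, not from the question), delayed work can be queued on the system workqueue like this:

/* Hypothetical minimal module: queue work after a delay expressed in jiffies. */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

static struct delayed_work my_dwork;

static void my_dwork_fn(struct work_struct *work)
{
    pr_info("delayed work ran\n");
}

static int __init my_init(void)
{
    INIT_DELAYED_WORK(&my_dwork, my_dwork_fn);
    /* A delay of 0 queues immediately; any other value is whole jiffies. */
    queue_delayed_work(system_wq, &my_dwork, msecs_to_jiffies(10));
    return 0;
}

static void __exit my_exit(void)
{
    cancel_delayed_work_sync(&my_dwork);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");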
To schedule your work with a delay of less than one jiffy, you can make use of hrtimers (high-resolution timers).
For more information on implementing an hrtimer, read the following links; a minimal sketch follows them:
hrtimer repeating task in the Linux kernel
https://www.ibm.com/developerworks/library/l-timers-list/
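The sketch below (my own, not taken from the links above; exact APIs may vary slightly between kernel versions) arms an hrtimer for a sub-jiffy delay and hands the actual work off to a workqueue:

/* Hypothetical minimal module: fire an hrtimer after ~500 us and defer the work to process context. */
#include <linux/module.h>
#include <linux/hrtimer.h>
#include <linux/ktime.h>
#include <linux/workqueue.h>

static struct hrtimer my_hrtimer;
static struct work_struct my_work;

static void my_work_fn(struct work_struct *work)
{
    pr_info("sub-jiffy delayed work running\n");
}

static enum hrtimer_restart my_hrtimer_fn(struct hrtimer *timer)
{
    schedule_work(&my_work);           /* hand off to process context */
    return HRTIMER_NORESTART;          /* one-shot; return HRTIMER_RESTART to re-arm */
}

static int __init my_init(void)
{
    INIT_WORK(&my_work, my_work_fn);
    hrtimer_init(&my_hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
    my_hrtimer.function = my_hrtimer_fn;
    hrtimer_start(&my_hrtimer, ns_to_ktime(500 * NSEC_PER_USEC), HRTIMER_MODE_REL);
    return 0;
}

static void __exit my_exit(void)
{
    hrtimer_cancel(&my_hrtimer);
    cancel_work_sync(&my_work);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");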
For queue_delayed_work, the only delay that is less than one jiffy is 0 jiffies.
delay has type unsigned long and is documented as the "number of jiffies to wait before queueing".
When we call wait_event_interruptible(wq, condition), is it mandatory to call a wake_up function when we use wait_event_interruptible?
What is the best way to do the following on Linux?
while (running)    /* 'running' instead of 'continue', which is a C keyword */
{
    render();      /* this function will take a large fraction of the frame period */
    wait();        /* wait until the full frame period has expired */
}
On Windows, waitable timers seem to work pretty well (within 1 ms). One way of proceeding is to use a separate thread that just sleeps and triggers a synchronization mechanism. However, I do not know how much overhead this involves.
Note: accuracy is more important than high frequency: an accurate 1.000 kHz timer is preferred over a 1 MHz timer.
Assuming you're looking for an answer in the C language:
I don't remember its exact precision, but I recall using the setitimer() function when I needed good accuracy.
Here's an example of how to use it: http://docs.oracle.com/cd/E23824_01/html/821-1602/chap7rt-89.html
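For illustration, here is a minimal sketch of my own (not taken from the linked page) that paces a render loop at roughly 60 Hz by letting setitimer() deliver a periodic SIGALRM and waiting for it with sigwait(); render() is a placeholder:

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static void render(void) { /* placeholder for the real rendering work */ }

int main(void)
{
    sigset_t set;
    struct itimerval tv;
    int sig;

    /* Block SIGALRM so it can be received synchronously with sigwait(). */
    sigemptyset(&set);
    sigaddset(&set, SIGALRM);
    sigprocmask(SIG_BLOCK, &set, NULL);

    /* Fire every 16.667 ms (~60 Hz); it_interval makes the timer periodic. */
    tv.it_interval.tv_sec = 0;
    tv.it_interval.tv_usec = 16667;
    tv.it_value = tv.it_interval;
    setitimer(ITIMER_REAL, &tv, NULL);

    for (int frame = 0; frame < 600; frame++) {
        render();               /* takes a large fraction of the frame period */
        sigwait(&set, &sig);    /* block until the next frame tick (SIGALRM) */
    }
    return 0;
}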
What is the expected duration of a call to sleep() with 1 as the argument? Is it some random time that doesn't exceed one second? Is it some random time that is at least one second?
Scenario:
Developer A writes code that performs some steps in sequence with an output device. The code is shipped and A leaves.
Developer B is advised from the field that steps j and k need a one-second interval between them. So he inserts a call to sleep(1) between those steps. The code is shipped and Developer B leaves.
Developer C wonders if the sleep(1) should be expected to sleep long enough, or whether a higher-resolution method should be used to make sure that at least 1000 milliseconds of delay occurs.
sleep() only guarantees that the process will sleep for at least the amount of time specified, so, as you put it, "some random time that is at least one second."
Similar behavior is mentioned in the man page for nanosleep:
nanosleep() suspends the execution of the calling thread until either at least the time specified in *req has elapsed...
You might also find the answers in this question useful.
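If the delay must be at least the full interval even when a signal arrives, one common pattern (sketched here as an illustration, not quoted from the answer above) is to restart nanosleep() with the remaining time it reports:

#include <errno.h>
#include <time.h>

/* Sleep for at least 'seconds' seconds, resuming after signal interruptions. */
static void sleep_at_least(time_t seconds)
{
    struct timespec req = { .tv_sec = seconds, .tv_nsec = 0 };
    struct timespec rem;

    while (nanosleep(&req, &rem) == -1 && errno == EINTR)
        req = rem;              /* continue with whatever time is left */
}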
My man page says this:
unsigned int sleep(unsigned int seconds);
DESCRIPTION
sleep() makes the calling thread sleep until seconds seconds have
elapsed or a signal arrives which is not ignored.
...
RETURN VALUE
Zero if the requested time has elapsed, or the number of seconds left
to sleep, if the call was interrupted by a signal handler.
So sleep() makes the thread sleep for as long as you tell it, but a signal wakes it up. I see no further guarantees.
If you need a better, more precise waiting time, then sleep() is not good enough. There is nanosleep(), and (it sounds funny, but it's true) select() is the only POSIX-portable way that I am aware of to sleep for sub-second intervals (or with higher precision).
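For example, the select() trick looks roughly like this (a sketch; msleep_select is just an illustrative name, and select() may still return early on EINTR):

#include <stdio.h>
#include <sys/select.h>

/* Sleep for the given number of milliseconds using select() as a portable timer. */
static void msleep_select(long milliseconds)
{
    struct timeval tv;
    tv.tv_sec = milliseconds / 1000;
    tv.tv_usec = (milliseconds % 1000) * 1000;

    /* nfds = 0 and no fd sets: select() simply waits for the timeout. */
    select(0, NULL, NULL, NULL, &tv);
}

int main(void)
{
    msleep_select(250);         /* wait roughly a quarter of a second */
    puts("done");
    return 0;
}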