Expected duration of sleep(1) - linux

What is the expected duration of a call to sleep with one as the argument? Is it some random time that doesn't exceed 1 second? Is it some random time that is at least one second?
Scenario:
Developer A writes code that performs some steps in sequence with an output device. The code is shipped and A leaves.
Developer B is advised from the field that steps j and k need a one-second interval between them. So he inserts a call to sleep(1) between those steps. The code is shipped and Developer B leaves.
Developer C wonders if the sleep(1) should be expected to sleep long enough, or whether a higher-resolution method should be used to make sure that at least 1000 milliseconds of delay occurs.

sleep() only guarantees that the process will sleep for at least the amount of time specified, so, as you put it, "some random time that is at least one second."
Similar behavior is mentioned in the man page for nanosleep:
nanosleep() suspends the execution of the calling thread until either at least the time specified in *req has elapsed...
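To see the "at least" behaviour in practice, here is a minimal sketch that times sleep(1) against CLOCK_MONOTONIC; on an idle system the printed value comes out slightly above one second, never below (barring an interrupting signal):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    sleep(1);                      /* at least 1 second, unless a signal arrives */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("sleep(1) actually took %.6f seconds\n", elapsed);
    return 0;
}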

My man page says this:
unsigned int sleep(unsigned int seconds);
DESCRIPTION
sleep() makes the calling thread sleep until seconds seconds have
elapsed or a signal arrives which is not ignored.
...
RETURN VALUE
Zero if the requested time has elapsed, or the number of seconds left
to sleep, if the call was interrupted by a signal handler.
So sleep() makes the thread sleep for as long as you tell it, but a signal wakes it up. I see no further guarantees.
If you need a better, more precise waiting time, then sleep() is not good enough. There is nanosleep(), and (it sounds funny, but it's true) select() is the only POSIX-portable way to sleep for sub-second intervals (or with higher precision) that I am aware of.
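For illustration, the classic select()-as-a-timer idiom looks like this (a sketch; note that select() may return early with EINTR if a signal arrives):

#include <stdio.h>
#include <sys/select.h>

/* Sleep for roughly the given number of milliseconds using select(). */
static void msleep(long ms)
{
    struct timeval tv;
    tv.tv_sec  = ms / 1000;
    tv.tv_usec = (ms % 1000) * 1000;
    select(0, NULL, NULL, NULL, &tv);   /* no fds to watch, just the timeout */
}

int main(void)
{
    msleep(250);   /* wait about a quarter of a second */
    puts("done");
    return 0;
}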

How can I pause a Thread for some seconds in Godot?

How can I pause execution for a certain amount of time in Godot?
I can't really find a clear answer.
The Godot equivalent of Thread.Sleep(1000); is OS.delay_msec(1000) (OS.DelayMsec(1000) in C#). The documentation says:
Delays execution of the current thread by msec milliseconds. msec must be greater than or equal to 0. Otherwise, delay_msec will do nothing and will print an error message.
Note: delay_msec is a blocking way to delay code execution. To delay code execution in a non-blocking way, see SceneTree.create_timer. Yielding with SceneTree.create_timer will delay the execution of code placed below the yield without affecting the rest of the project (or editor, for EditorPlugins and EditorScripts).
Note: When delay_msec is called on the main thread, it will freeze the project and will prevent it from redrawing and registering input until the delay has passed. When using delay_msec as part of an EditorPlugin or EditorScript, it will freeze the editor but won't freeze the project if it is currently running (since the project is an independent child process).
One-liner:
yield(get_tree().create_timer(1), "timeout")
This will delay the execution of the following line for 1 second.
Usually I make this into a sleep() function for convenience:
func sleep(sec):
    yield(get_tree().create_timer(sec), "timeout")
Call it with sleep(1) to delay 1 second. Note that since sleep() itself yields, a caller that needs to wait for it must in turn yield on the returned function state: yield(sleep(1), "completed").

Measuring Semaphore wait times with Micrometer

We have a throttling implementation that essentially boils down to:
Semaphore s = new Semaphore(1);
...
void callMethod() throws Exception {
    s.acquire();
    try {
        timer.recordCallable(() -> expensiveMethod()); // call expensive method
    } finally {
        s.release();  // release even if the expensive method throws
    }
}
I would like to gather metrics about the impact the semaphore has on the overall response time of the method. For example, I would like to know the number of threads that were waiting on acquire, the time spent waiting, etc. What I am looking for, I guess, is a gauge that also captures timing information.
How do I measure the Semaphore stats?
There are multiple things you can do depending on your needs and situation.
LongTaskTimer is a timer that measures tasks that are currently in progress. The in-progress part is key here: after the task has finished, you will not see its effect on the timer. That's why it is for long-running tasks; I'm not sure it fits your use case.
The other thing you can do is have a Timer and a Gauge, where the Timer measures how long it took to acquire the Semaphore, while with the Gauge you increment/decrement the number of threads currently waiting on it.

How to give delay value that is less than jiffies in delayed workqueue

queue_delayed_work(struct workqueue_struct *wq, struct delayed_work *dwork, unsigned long delay)
In the above function, is it possible to give a delay that is less than one jiffy?
You can give a delay of zero or more jiffies. To implement the delay, the kernel internally uses a timer. The earliest a timer can expire is on the next tick, so the smallest non-zero delay possible is 1 jiffy. With a delay of zero jiffies, the delayed work (dwork) starts immediately, without any delay.
queue_delayed_work() internally calls __queue_delayed_work(), where the timer is configured; the expiry time is set to jiffies + delay. Refer to the links below for more information.
To schedule your work at a granularity finer than a jiffy, you can make use of hrtimers (high-resolution timers); see the sketch after the links.
For more information on implementing an hrtimer, read the following links:
hrtimer repeating task in the Linux kernel
https://www.ibm.com/developerworks/library/l-timers-list/
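As a rough illustration, a minimal kernel-module sketch (assuming a reasonably recent 4.x/5.x-era API; not tied to any particular kernel version) that arms an hrtimer for a sub-jiffy delay could look like this:

#include <linux/module.h>
#include <linux/hrtimer.h>
#include <linux/ktime.h>

static struct hrtimer sub_jiffy_timer;

static enum hrtimer_restart timer_callback(struct hrtimer *t)
{
    pr_info("sub-jiffy timer fired\n");   /* runs in interrupt context */
    return HRTIMER_NORESTART;
}

static int __init demo_init(void)
{
    hrtimer_init(&sub_jiffy_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
    sub_jiffy_timer.function = timer_callback;
    /* 500 microseconds: far below one jiffy at HZ = 100..1000 */
    hrtimer_start(&sub_jiffy_timer, ktime_set(0, 500 * 1000), HRTIMER_MODE_REL);
    return 0;
}

static void __exit demo_exit(void)
{
    hrtimer_cancel(&sub_jiffy_timer);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");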
The only delay less than one jiffy that queue_delayed_work() accepts is 0 jiffies.
delay has type unsigned long and is specified as the "number of jiffies to wait before queueing".
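In practice you express the delay in jiffies with the kernel's conversion helpers, for example (a sketch, assuming wq and dwork are already initialized):

/* run dwork roughly 100 ms from now (rounded up to whole jiffies) */
queue_delayed_work(wq, &dwork, msecs_to_jiffies(100));

/* a delay of 0 queues the work for immediate execution */
queue_delayed_work(wq, &dwork, 0);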
When we call wait_event_interruptible(wq, condition), is it mandatory to call a wake_up function?

Do we need a sleep() while running a forever process in Linux?

I have read that a forever process such as a daemon should have a sleep() in its while(1) or for(;;) loop. The claim is that this is required because otherwise the process will always be in the run queue and the kernel will always run it, blocking other processes. I don't agree that it blocks other processes completely: with time slicing the kernel will still execute them, but this process will certainly steal time from the others and delay them, since it is always in the runnable state.

My understanding is that by default Linux schedules round-robin: a circular linked list of tasks with swapd (process id 0) first and the other tasks after it, each getting a particular time slice, where the tasks are really the process descriptors and the list is maintained by the init process. Please do correct me if I am wrong.

The other question is: if we do need a sleep(), what should its value be? How can we determine the sleep value that gives the best results?
If your program has useful things to do, don't throttle it. A program can move out of the run queue by doing blocking stuff like IO and waiting.
If you are writing a polling loop that can spin an arbitrary number of times you probably want to throttle it a bit with sleep because spinning too often has little value.
That said, polling loops are a means of last resort. Normally, programs perform useful work with every instruction, so they don't sleep at all.
Sleep is almost certainly the wrong solution.
Usually what you do is call a blocking function which wakes you up when there's something for you to do.
For example, if you're a network service you'd want to remain inactive until a request arrives.
In other words, the core of your daemon should not look like this:
while(1)
{
    if (checkIfSomethingToDo())
        doSomething();
    else
        sleep(1);
}
but rather a little like this:
while(1)
{
    int ret = poll(fds, nfds, -1);
    if (ret > 0)
        doSomething();
}
Have the kernel put you to sleep until there's actual work to do. It's not hard to implement; you'd be a lot more efficient (not stealing CPU time from others only to waste it doing no actual work), and your response latency will go down too.
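For illustration, a self-contained version of that loop (a sketch that watches stdin as its only event source; a real daemon would poll its sockets or pipes instead) could look like this:

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct pollfd fds[1] = { { .fd = STDIN_FILENO, .events = POLLIN } };

    for (;;) {
        int ret = poll(fds, 1, -1);        /* -1: block until there is work */
        if (ret > 0 && (fds[0].revents & POLLIN)) {
            char buf[256];
            ssize_t n = read(fds[0].fd, buf, sizeof buf);
            if (n <= 0)
                break;                     /* EOF or error: leave the loop */
            printf("got %zd bytes of work\n", n);
        }
    }
    return 0;
}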
A sleep forces the OS to pass execution to another thread and is therefore helpful, or at least fair. Start with sleep(1); that should be OK.

When does Thread.sleep(1000) sleeps less than 1000 milliseconds?

In this interesting article about falsehoods programmers believe about time, one of them is
Thread.sleep(1000) sleeps for >= 1000 milliseconds.
When isn't this true?
According to the documentation for the Windows Sleep() function (the implementation of sleep by the Windows operating system, which is what Thread.sleep will call underneath): If dwMilliseconds is less than the resolution of the system clock, the thread may sleep for less than the specified length of time. If dwMilliseconds is greater than one tick but less than two, the wait can be anywhere between one and two ticks, and so on. To increase the accuracy of the sleep interval, call the timeGetDevCaps function to determine the supported minimum timer resolution and the timeBeginPeriod function to set the timer resolution to its minimum.
The OS only reacts at interrupts, and therefore handles sleep expiries at the time of an interrupt. It is correct that the interrupt frequency can be increased by means of timeBeginPeriod. The difficulty is that the expiry of the Sleep() function requires two conditions to be met:
An interrupt has to occur.
dwMilliseconds has to expire.
Condition 2 is the problem here. dwMilliseconds is compared against the elapsed system time at each interrupt, and the system time advances in filetime-format increments; Sleep() therefore expires when n times the system-time increment first exceeds dwMilliseconds. Thus one may never be able to get 1 ms sleep delays. This depends heavily on the system's hardware, software, and configuration (system-time increment/granularity).
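You can observe this directly with a small Win32 C program (a sketch; link against winmm.lib for timeBeginPeriod/timeEndPeriod) that measures what Sleep(1) actually delivers before and after lowering the timer resolution:

#include <windows.h>
#include <stdio.h>

static void measure(const char *label)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    Sleep(1);
    QueryPerformanceCounter(&t1);
    printf("%s: Sleep(1) took %.3f ms\n", label,
           (t1.QuadPart - t0.QuadPart) * 1000.0 / freq.QuadPart);
}

int main(void)
{
    measure("default resolution");   /* often around 15.6 ms per tick */

    timeBeginPeriod(1);              /* request a 1 ms interrupt period */
    measure("1 ms resolution");      /* typically closer to 1-2 ms */
    timeEndPeriod(1);                /* restore the previous period */

    return 0;
}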
To answer the question: "Thread.sleep(1000) sleeps for >= 1000 milliseconds" is always TRUE!
Edit: However, "Thread.sleep(1) sleeps for >= 1 millisecond" may not always be TRUE, for example when executed right after another Thread.sleep(1).
