How can I pause a Thread for some seconds in Godot? - multithreading

How can I pause execution for a certain amount of time in Godot?
I can't really find a clear answer.

The equivalent of Thread.Sleep(1000); for Godot is OS.delay_msec(1000) in GDScript (OS.DelayMsec(1000) in C#). The documentation says:
Delays execution of the current thread by msec milliseconds. msec must be greater than or equal to 0. Otherwise, delay_msec will do nothing and will print an error message.
Note: delay_msec is a blocking way to delay code execution. To delay code execution in a non-blocking way, see SceneTree.create_timer. Yielding with SceneTree.create_timer will delay the execution of code placed below the yield without affecting the rest of the project (or editor, for EditorPlugins and EditorScripts).
Note: When delay_msec is called on the main thread, it will freeze the project and will prevent it from redrawing and registering input until the delay has passed. When using delay_msec as part of an EditorPlugin or EditorScript, it will freeze the editor but won't freeze the project if it is currently running (since the project is an independent child process).
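Since the question asks about pausing a Thread specifically, note that delay_msec only blocks the thread it is called from, so the main loop stays responsive if you call it from a worker thread. A minimal Godot 3.x sketch (the node setup and the _worker name are just an example, not from the documentation):
extends Node

var thread = Thread.new()

func _ready():
    thread.start(self, "_worker")

func _worker(_userdata):
    OS.delay_msec(1000)            # blocks only this worker thread for 1 second
    print("waited 1 second off the main thread")

func _exit_tree():
    thread.wait_to_finish()        # always join the thread before it is freed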

One-liner:
yield(get_tree().create_timer(1), "timeout")
This will delay the execution of the following line for 1 second.
Usually I wrap this in a sleep() function for convenience:
func sleep(sec):
    yield(get_tree().create_timer(sec), "timeout")
Note that sleep() itself yields, so a bare sleep(1) call returns to the caller immediately; to actually pause the calling function for 1 second, yield on the returned state as well: yield(sleep(1), "completed").

Related

F# / MailBoxProcessor is unresponsive to PostAndReply under nearly 100% load

I have a MailBoxProcessor, which does the following things:
Main loop (type AsyncRunner: https://github.com/kkkmail/ClmFSharp/blob/master/Clm/ContGen/AsyncRun.fs#L257 – the line number may change as I keep updating the code). It generates some "models", compiles each of them into a model specific folder, spawns them as external processes, and then each model uses WCF to "inform" AsyncRunner about its progress by calling updateProgress. A model may take several days to run. Once any of the models is completed, the runner generates / spawns more. It is designed to run at 100% processor load (but with priority: ProcessPriorityClass.BelowNormal), though I can specify a smaller number of logical cores to use (some number between 1 and Environment.ProcessorCount). Currently I "async"-ed almost everything that goes inside MailBoxProcessor by using … |> Async.Start to ensure that I "never ever" block the main loop.
I can "ask" the runner (using WCF) about its state by calling member this.getState () = messageLoop.PostAndReply GetState.
OR I can send some commands to it (again using WCF), e.g. member this.start(), member this.stop(), …
Here is where it gets interesting. Everything works! However, if I run a "monitor", which asks for the state by effectively calling PostAndReply (exposed as this.getState ()) in an infinite loop, then after a while it sort of hangs up. I mean that it does eventually return, but with unpredictably large delays (like a few minutes). At the same time, I can issue commands and they return quickly while getState still has not returned.
Is it possible to make it responsive at nearly 100% load? Thanks a lot!
I would suggest not async-ing anything (other than the spawning of external processes) in your main program, since your code already creates additional processes. Your main loop has to wait for the current iteration to return before it can process the GetState() message.
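For illustration only (this is not the poster's AsyncRunner; the Msg cases and the int state are placeholders): a MailboxProcessor stays responsive to PostAndReply as long as every message, including GetState, is handled and replied to directly in the loop body instead of being pushed off with Async.Start.
type Msg =
    | GetState of AsyncReplyChannel<int>
    | Start
    | Stop

let agent =
    MailboxProcessor.Start(fun inbox ->
        let rec loop state =
            async {
                let! msg = inbox.Receive()
                match msg with
                | GetState ch ->
                    ch.Reply state          // reply synchronously; no Async.Start needed
                    return! loop state
                | Start -> return! loop (state + 1)
                | Stop -> return! loop 0
            }
        loop 0)

// a monitor calling this in a loop gets an answer as soon as the current message is handled
let currentState = agent.PostAndReply GetState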

Why is setTimeout not executing at the expected time?

setTimeout(function () {
    console.log('hari');
    process.exit();
}, 300);
for (i = 0; i < 3000000; i++) {
    console.log(i);
}
Please explain why the setTimeout callback has not run after 300 ms here; it executes only after the for loop completes. Why?
As you might already know, Node.js uses the V8 JavaScript engine, and your JavaScript runs on a single thread. I/O is built around an event loop; when you block the event loop, you also block the execution of every other queued event.
Basically, the for loop runs first and blocks the event loop while it does. setTimeout does not guarantee that your callback runs after exactly 300 ms, only after at least 300 ms (>= 300 ms).
Basically, your code runs in a single thread. The setTimeout callback has lower priority than your synchronous code, so the loop runs first, and only once it has finished (and the call stack is empty) does your setTimeout function get executed.
setTimeout(fn, milliseconds) runs a function no sooner than the specified number of milliseconds have passed.
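For illustration only (the chunk size and the use of setImmediate here are one possible approach, not the only fix): if the heavy loop is broken into chunks that yield back to the event loop, the timer callback can run close to the requested 300 ms.
setTimeout(function () {
    console.log('hari');                    // now fires close to 300 ms
}, 300);

let i = 0;
function chunk() {
    const end = Math.min(i + 100000, 3000000);
    for (; i < end; i++) {
        // do one slice of the work here
    }
    if (i < 3000000) setImmediate(chunk);   // give the event loop a chance to run timers
}
chunk();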

How does libuv and Node.js actually schedule timers?

How does libuv and the operating system actually schedule timers like setTimeout and setInterval in Node.js? I see that no CPU is used by the node process until a timer fires. Does this mean the OS schedules the timer, and wakes up the Node process when the timer is fired? If so, how does an OS schedule a timer and how exactly does the hardware execute it?
Timer callbacks are executed as part of the Node.js event loop. When you call setTimeout or setInterval, libuv (the C library that implements the Node.js event loop) creates a timer in a 'min heap' data structure called the timers heap. In this data structure, it keeps track of the timestamp at which each timer expires.
At the start of each fresh iteration of the event loop, libuv calls uv__update_time, which in turn makes a system call to get the current time and updates the loop time to millisecond precision. (https://github.com/nodejs/node/blob/master/deps/uv/src/unix/core.c#L375)
Right after that step comes the first major phase of the event loop iteration, the timers phase. In this phase, libuv calls uv__run_timers to process the callbacks of all expired timers. It traverses the timers heap to identify expired timers based on the 'loop time' it just updated via uv__update_time, and then invokes their callbacks.
Following is a redacted snippet from the event loop implementation in NodeJS to highlight what I just described.
while (r != 0 && loop->stop_flag == 0) {
    uv__update_time(loop);
    uv__run_timers(loop);
    // ...redacted for brevity...
    r = uv__loop_alive(loop);
    if (mode == UV_RUN_ONCE || mode == UV_RUN_NOWAIT)
        break;
}
I wrote a series of articles on the Node.js event loop some time back. I hope this article from that series is helpful: https://blog.insiderattack.net/timers-immediates-and-process-nexttick-nodejs-event-loop-part-2-2c53fd511bb3
Node uses libuv underneath to take care of this. While setTimeout has some internal bookkeeping of its own, it ends up using the uv_timer_t facility provided by libuv.
Let's assume that the only thing the event loop is doing is handling the timer. libuv will calculate the poll timeout, which in this example will be the timer's due time. The event loop will then block for I/O using the appropriate syscall (epoll_wait, kevent, etc.). At that point it's up to the kernel to decide what to do, but the current thread of execution is blocked until the kernel wakes it up again, so no CPU is used here, because nothing is happening.
Once the timeout expires, the aforementioned syscall returns, and libuv processes the due timers and I/O.
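As a rough sketch of that mechanism (assuming libuv 1.x, where the timer callback takes only the handle; this is not Node's internal code, just the plain libuv API): the loop blocks in the kernel until the timer's due time, consuming no CPU while it waits, then runs the callback in the timers phase.
#include <stdio.h>
#include <uv.h>

static void on_timeout(uv_timer_t *handle) {
    printf("timer fired\n");
    uv_timer_stop(handle);              /* nothing left to do, so uv_run() can return */
}

int main(void) {
    uv_loop_t *loop = uv_default_loop();
    uv_timer_t timer;

    uv_timer_init(loop, &timer);
    uv_timer_start(&timer, on_timeout, 1000 /* ms */, 0 /* no repeat */);
    uv_run(loop, UV_RUN_DEFAULT);       /* blocks in epoll_wait/kevent, no CPU used */
    return 0;
}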

Is the first thread that gets to run inside a Win32 process the "primary thread"? Need to understand the semantics

I create a process using CreateProcess() with the CREATE_SUSPENDED flag and then go on to create a little patch of code inside the remote process to load a DLL and call a function (exported by that DLL), using VirtualAllocEx() (with ..., MEM_RESERVE | MEM_COMMIT, PAGE_EXECUTE_READWRITE) and WriteProcessMemory(), then call FlushInstructionCache() on that patch of memory containing the code.
After that I call CreateRemoteThread() to invoke that code, which gives me hRemoteThread. I have verified that the remote code works as intended. Note: this code simply returns; it does not call any APIs other than LoadLibrary() and GetProcAddress(), followed by a call to the exported stub function, which currently just returns a value that then gets passed on as the exit status of the thread.
Now comes the peculiar observation: remember that PROCESS_INFORMATION::hThread is still suspended. When I simply ignore hRemoteThread's exit code and also don't wait for it to exit, all goes "fine". The routine that calls CreateRemoteThread() returns, PROCESS_INFORMATION::hThread gets resumed, and the (remote) program actually gets to run.
However, if I call WaitForSingleObject(hRemoteThread, INFINITE) or do the following (which has the same effect):
DWORD exitCode = STILL_ACTIVE;
while (STILL_ACTIVE == exitCode)
{
    Sleep(500);
    if (!GetExitCodeThread(hRemoteThread, &exitCode))
        break;
}
followed by CloseHandle(), this leads to hRemoteThread finishing before PROCESS_INFORMATION::hThread gets resumed, and the process simply "disappears". Allowing hRemoteThread to finish in any way before PROCESS_INFORMATION::hThread has been resumed is enough to cause the process to die.
This looks suspiciously like a race condition, since under certain circumstances hRemoteThread may still be faster and the process would likely still "disappear", even if I leave the code as is.
Does that imply that the first thread that gets to run within a process becomes automatically the primary thread and that there are special rules for that primary thread?
I was always under the impression that a process finishes when its last thread dies, not when a particular thread dies.
Also note: there is no call to ExitProcess() involved here in any way, because hRemoteThread simply returns and PROCESS_INFORMATION::hThread is still suspended when I wait for hRemoteThread to return.
This happens on Windows XP SP3, 32-bit.
Edit: I have just tried Sysinternals Process Monitor to see what's happening, and I could verify my earlier observations. The injected code does not crash or anything; instead I can see that if I don't wait for the thread, it doesn't exit before I close the program into which the code was injected. I'm wondering whether the call to CloseHandle(hRemoteThread) should be postponed or something ...
Edit+1: it's not CloseHandle(). If I leave that out just for a test, the behavior doesn't change when waiting for the thread to finish.
The first thread to run isn't special.
For example, create a console app which creates a suspended thread and terminates the original thread (by calling ExitThread). This process never terminates (on Windows 7 anyway).
Or make the new thread wait for five seconds then exit. As expected, the process will live for five seconds and exit when the secondary thread terminates.
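A quick sketch of that experiment (a standalone console app, unrelated to the injection scenario):
#include <windows.h>

static DWORD WINAPI Worker(LPVOID param)
{
    Sleep(5000);                    /* the process exits only after this returns */
    return 0;
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, Worker, NULL, CREATE_SUSPENDED, NULL);
    ResumeThread(h);                /* leave it suspended instead and the process never exits */
    ExitThread(0);                  /* terminates only the original thread */
    /* never reached: the process lives on until Worker returns, ~5 seconds later */
}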
I don't know what's happening with your example. The easiest way to avoid the race is to make the new thread resume the original thread.
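A minimal sketch of that suggestion (not the poster's code; error handling is omitted, and passing the handle through CreateRemoteThread's lpParameter is just one way to get it to the injected stub):
#include <windows.h>

/* called in the injecting process, after CreateProcess() and after writing the patch */
void StartRemoteAndLetItResume(PROCESS_INFORMATION *pi, LPTHREAD_START_ROUTINE remoteCode)
{
    HANDLE hPrimaryInTarget = NULL;

    /* make the primary thread's handle usable from inside the target process */
    DuplicateHandle(GetCurrentProcess(), pi->hThread,
                    pi->hProcess, &hPrimaryInTarget,
                    THREAD_SUSPEND_RESUME, FALSE, 0);

    /* hand the duplicated handle to the injected code as its thread parameter */
    CreateRemoteThread(pi->hProcess, NULL, 0, remoteCode,
                       (LPVOID)hPrimaryInTarget, 0, NULL);
}

/* conceptually, inside the injected stub, after the LoadLibrary()/GetProcAddress() work: */
DWORD WINAPI RemoteStub(LPVOID param)
{
    ResumeThread((HANDLE)param);    /* the primary thread starts running now, no race */
    return 0;
}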
Speculating now, I do wonder if what you're doing isn't likely to cause problems anyway. For example, what happens to all the DllMain calls for the implicitly loaded DLLs? Are they unexpectedly happening on the wrong thread, are they being skipped, or are they postponed until after your code has run and the main thread starts?
Odds are good that the thread with the main (or equivalent) function calls ExitProcess (either explicitly or in its runtime library). ExitProcess, well, exits the entire process, including killing all threads. Since the main thread doesn't know about your injected code, it doesn't wait for it to finish.
I don't know that there's a good way to make the main thread wait for yours to complete...

Expected duration of sleep(1)

What is the expected duration of a call to sleep with one as the argument? Is it some random time that doesn't exceed 1 second? Is it some random time that is at least one second?
Scenario:
Developer A writes code that performs some steps in sequence with an output device. The code is shipped and A leaves.
Developer B is advised from the field that steps j and k need a one-second interval between them. So he inserts a call to sleep(1) between those steps. The code is shipped and Developer B leaves.
Developer C wonders if the sleep(1) should be expected to sleep long enough, or whether a higher-resolution method should be used to make sure that at least 1000 milliseconds of delay occurs.
sleep() only guarantees that the process will sleep for at least the amount of time specified (unless a signal interrupts it early), so as you put it, "some random time that is at least one second."
Similar behavior is mentioned in the man page for nanosleep:
nanosleep() suspends the execution of the calling thread until either at least the time specified in *req has elapsed...
You might also find the answers in this question useful.
My man page says this:
unsigned int sleep(unsigned int seconds);
DESCRIPTION
sleep() makes the calling thread sleep until seconds seconds have
elapsed or a signal arrives which is not ignored.
...
RETURN VALUE
Zero if the requested time has elapsed, or the number of seconds left
to sleep, if the call was interrupted by a signal handler.
So sleep() makes the thread sleep for as long as you tell it, but a signal wakes it up early. I see no further guarantees.
If you need better, more precise waiting, then sleep() is not good enough. There is nanosleep(), and (it may sound funny, but it is true) select() is the only POSIX-portable way I am aware of to sleep for sub-second intervals (or with higher precision).
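A minimal sketch of both ideas (the function names are made up): restarting nanosleep() with the remaining time guarantees at least the requested delay even when signals interrupt it, and select() with no file descriptors gives a portable sub-second delay.
#include <errno.h>
#include <time.h>
#include <sys/select.h>

/* sleep for at least the requested time, restarting after signal interruptions */
static void sleep_at_least(time_t sec, long nsec)
{
    struct timespec req = { sec, nsec };
    struct timespec rem;

    while (nanosleep(&req, &rem) == -1 && errno == EINTR)
        req = rem;                  /* resume with whatever time is left */
}

/* the select()-based variant mentioned above: a sub-second delay with no fds involved */
static void delay_ms(long ms)
{
    struct timeval tv = { ms / 1000, (ms % 1000) * 1000 };

    select(0, NULL, NULL, NULL, &tv);
}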
