Lua Script coroutine - multithreading

Hi, I need some help with my Lua script. I have a script here that runs a server-like application (an infinite loop). The problem is that it doesn't execute the second coroutine.
Could you tell me what's wrong? Thank you.
function startServer()
    print( "...Running server" )
    -- run a server-like application (infinite loop)
    os.execute( "server.exe" )
end

function continue()
    print( "continue" )
end

co = coroutine.create( startServer() )
co1 = coroutine.create( continue() )

Lua has cooperative multithreading. Threads are not switched automatically but must yield to others. When one thread is running, every other thread waits for it to finish or yield. The first thread in this example runs server.exe, which, I assume, never finishes until interrupted, so the second thread never gets its turn to run.
You are also starting the threads wrong. In your example you're not running any threads at all: you execute each function and then try to create a coroutine from its return value, which naturally fails. But since you never get back from server.exe, you haven't noticed this problem yet. Remove the parentheses after startServer and continue to fix it.
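That is, the creation lines should pass the functions themselves:

co = coroutine.create( startServer )  -- a function reference, not a call
co1 = coroutine.create( continue )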

As already noted, there are several issues with the script that prevent you from getting what you want:
- os.execute("...") blocks until the command completes, and in your case it never completes (it runs an infinite loop). Solution: detach that process from yours by using something like io.popen() instead of os.execute().
- co = coroutine.create( startServer() ) doesn't create a coroutine in your case. coroutine.create accepts a function reference, but you pass it the result of calling startServer, which is nil. Solution: use co = coroutine.create( startServer ) (note that the parentheses are dropped, so it's no longer a function call).
- You are not yielding from your coroutines; if you want several coroutines to work together, they need to cooperate by giving control to each other when appropriate. That's what yield is for, and that's why this is called non-preemptive multithreading. Solution: use a combination of resume and yield calls after you create your coroutines, as in the sketch below.
- startServer doesn't need to be a coroutine, as you never give control back to it; its only purpose is to start the server.
In your case, the solution may not even need coroutines, as all you need to do is: (1) start the server and let it detach from your process (for example, using popen) and (2) talk to that process using whatever communication protocol it requires (pipes, sockets, etc.).
There are more complex and complete solutions (like LuaLanes) and also several good descriptions of building simple coroutine dispatchers.
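To make the cooperative hand-off concrete, here is a minimal sketch. It assumes the server work can be broken into steps (serverStep is hypothetical; a real server launched via os.execute cannot yield like this):

-- Minimal dispatcher sketch: each coroutine yields so the other gets a turn.
local function serverStep()
    print("server step")        -- hypothetical stand-in for one unit of server work
end

local server = coroutine.create(function()
    while true do
        serverStep()
        coroutine.yield()       -- hand control back to the dispatcher
    end
end)

local worker = coroutine.create(function()
    while true do
        print("continue")
        coroutine.yield()
    end
end)

for i = 1, 3 do                 -- trivial dispatcher: resume each in turn
    coroutine.resume(server)
    coroutine.resume(worker)
end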

Your coroutine is not yielding


Is there any linter that detects blocking calls in an async function?

From https://www.aeracode.org/2018/02/19/python-async-simplified/:

It's not going to ruin your day if you call a non-blocking synchronous function, like this:

def get_chat_id(name):
    return "chat-%s" % name

async def main():
    result = get_chat_id("django")

However, if you call a blocking function, like the Django ORM, the code inside the async function will look identical, but now it's dangerous code that might block the entire event loop as it's not awaiting:

def get_chat_id(name):
    return Chat.objects.get(name=name).id

async def main():
    result = get_chat_id("django")

You can see how it's easy to have a non-blocking function that "accidentally" becomes blocking if a programmer is not super-aware of everything that calls it. This is why I recommend you never call anything synchronous from an async function without doing it safely, or without knowing beforehand it's a non-blocking standard library function, like os.path.join.
So I am looking for a way to automatically catch instances of this mistake. Are there any linters for Python that will report sync function calls from within an async function as a violation?
Can I configure Pylint or Flake8 to do this?
I don't necessarily mind if it catches the first case above too (which is harmless).
Update:
On one level I realise this is a stupid question, as pointed out in Mikhail's answer. What we need is a definition of a "dangerous synchronous function" that the linter should detect.
So for the purposes of this question, I give the following definition:
A "dangerous synchronous function" is one that performs I/O operations. These are the same operations that have to be monkey-patched by gevent, for example, or that have to be wrapped in async functions so that the event loop can context-switch.
(I would welcome any refinement of this definition.)
So I am looking for a way to automatically catch instances of this mistake.
Let's make a few things clear: the mistake discussed in the article is calling any long-running sync function inside an asyncio coroutine (it can be an I/O-blocking call or just a pure-CPU function with a lot of calculations). It's a mistake because it blocks the whole event loop, which leads to significant performance degradation (more about it here, including the comments below the answer).
Is there any way to catch this situation automatically? Before run time, no: no one but you can predict whether a particular function will take 10 seconds or 0.01 seconds to execute. At run time it's already built into asyncio; all you have to do is enable debug mode.
If you're afraid that some sync function can vary between long-running (detectable at run time in debug mode) and short-running (not detectable), just execute the function in a background thread using run_in_executor; that guarantees the event loop will not be blocked. A sketch of both follows.
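A minimal sketch of both suggestions, with maybe_slow as a made-up sync function of unknown cost; debug mode logs callbacks that hold the loop longer than loop.slow_callback_duration (0.1 s by default), and run_in_executor keeps the call off the loop entirely:

import asyncio
import time

def maybe_slow():                 # hypothetical sync function of unknown cost
    time.sleep(0.5)
    return 42

async def main():
    loop = asyncio.get_running_loop()
    # Run the sync call in the default thread pool so the event loop stays free.
    result = await loop.run_in_executor(None, maybe_slow)
    print(result)

# debug=True makes asyncio warn about slow callbacks and never-awaited coroutines.
asyncio.run(main(), debug=True)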

thread with a forever loop with one inherently async operation

I'm trying to understand the semantics of async/await in an infinitely looping worker thread started inside a Windows service. I'm a newbie at this, so give me some leeway here; I'm trying to understand the concept.
The worker thread will loop forever (until the service is stopped) and it processes an external queue resource (in this case a SQL Server Service Broker queue).
The worker thread uses config data which could be changed while the service is running, by receiving commands on the main service thread via some kind of IPC. Ideally the worker thread should process those config changes while waiting for the external queue messages to be received. Reading from Service Broker is inherently asynchronous: you literally issue a "WAITFOR RECEIVE" T-SQL statement with a receive timeout.
But I don't quite understand the flow of control I'd need to use to do that.
Let's say I used a ConcurrentQueue to pass config change messages from the main thread to the worker thread. Then, if I did something like...
void ProcessBrokerMessages() {
    foreach (BrokerMessage m in ReadBrokerQueue()) {
        ProcessMessage(m);
    }
}

// ... inside the worker thread:
while (!serviceStopped) {
    // TryDequeue removes each item; a plain foreach would only enumerate
    // a snapshot and leave the messages in the queue.
    while (configChangeConcurrentQueue.TryDequeue(out var configChange)) {
        processConfigChange(configChange);
    }
    ProcessBrokerMessages();
}
...then the config-change-processing loop and the broker-processing function have to "take turns" to run. Specifically, the config-change loop won't run while the potentially long-running broker receive command is running.
My understanding is that simply turning ProcessBrokerMessages() into an async method doesn't help me here (or I don't understand what would happen). To me, with my lack of understanding, the most intuitive interpretation is that when I hit the async call it would go off and do its thing, and execution would continue with a restart of the outer while loop; but that would mean the loop would also execute ProcessBrokerMessages() over and over even though it's already running from the invocation in the previous iteration, which I don't want.
As far as I know this is not what would happen, though I only "know" that because I've read something along those lines; I don't really understand it.
Arguably the existing flow of control (i.e., without the async call) is OK: if config changes affect the ProcessBrokerMessages() function (which they can), then the config can't be changed while the function is running anyway. But that seems like a point specific to this particular example. I can imagine a case where config changes affect something else the thread does, unrelated to the ProcessBrokerMessages() call.
Can someone improve my understanding here? What's the right way to have:
- a block of code which loops over multiple statements,
- where one (or some) but not all of those statements are asynchronous,
- and the async operation should only ever be executing once at a time,
- but execution should keep looping through the rest of the statements while the single instance of the async operation runs,
- and the async method should be called again in the loop if the previous invocation has completed?
It seems like I could use a BackgroundWorker to run the receive statement, which flips a flag when its job is done, but it also seems weird to me to create a thread specifically for processing the external resource and then, within that thread, create a BackgroundWorker to actually do that job.
You could use a CancellationToken. Most async functions accept one as a parameter, and they cancel the call (the returned Task, actually) if the token is signaled. SqlCommand.ExecuteReaderAsync (which you're likely using to issue the WAITFOR RECEIVE) is no different. So:
- Pass a cancellation token to the 'execution' thread.
- The settings monitor (the one responding to IPC) also holds a reference to the token.
- When a config change occurs, the monitor makes the config change and then signals the token.
- The execution thread aborts any pending WAITFOR (or, really, any pending processing in the message loop; you should use the cancellation token everywhere), and any transaction is aborted and rolled back.
- Restart the execution thread with a new cancellation token; it will use the new config. A rough sketch of such an execution thread follows.
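This is a sketch only; it assumes System.Data.SqlClient, a queue named MyQueue, and a placeholder ProcessMessage handler:

using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

class BrokerWorker
{
    // Sketch: the WAITFOR RECEIVE is issued with the token, so signaling the
    // token cancels the pending wait (surfacing as an exception in the caller).
    public async Task RunAsync(string connStr, CancellationToken token)
    {
        using (var conn = new SqlConnection(connStr))
        {
            await conn.OpenAsync(token);
            while (!token.IsCancellationRequested)
            {
                using (var cmd = new SqlCommand(
                    "WAITFOR (RECEIVE TOP (1) * FROM MyQueue), TIMEOUT 5000;", conn))
                using (var reader = await cmd.ExecuteReaderAsync(token))
                {
                    while (await reader.ReadAsync(token))
                        ProcessMessage(reader);   // placeholder message handler
                }
            }
        }
    }

    void ProcessMessage(SqlDataReader reader) { /* handle one broker message */ }
}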
So in this particular case I decided to go with a simpler shared-state solution. That is of course less sound in principle, but since there's not a lot of shared state involved, and since the overall application isn't very complicated, it seemed forgivable.
My implementation uses locking, but writes to the config from the service's main thread are wrapped in a Task.Run(). The reader doesn't bother with a Task, since it is already on its own thread.
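A sketch of that arrangement (Config, configLock, and the method names are made up):

using System.Threading.Tasks;

class ServiceState
{
    public class Config { /* settings go here */ }

    private readonly object configLock = new object();
    private Config config = new Config();

    // Main service thread: apply a config change without blocking the caller.
    public void OnConfigChange(Config newConfig) =>
        Task.Run(() => { lock (configLock) config = newConfig; });

    // Worker thread: take a consistent snapshot before each unit of work.
    public Config ReadConfig() { lock (configLock) return config; }
}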

QtConcurrent::run how to stop background task

I have the same situation as this: stop thread started by qtconcurrent::run
I need to stop a child thread (started with QtConcurrent::run) from closeEvent in a QMainWindow.
But my function in the child thread uses code from a *.dll. I can't use a loop, because all I do is call the external DLL, like:
QFuture<void> future = QtConcurrent::run([=]{ obj->useDllfunc_with_longTermJob(); });
And when I close the app with the x-button, my GUI closes, but the second thread with the long-term job keeps running, and when it finishes I get an error.
I know of some solutions for this:
- using other functions, like map() or something else with QFuture cancel/stop functionality, instead of QtConcurrent::run(). But I need only one function call; run() is what I need.
- using QThread instead of QtConcurrent. But that's not good for me.
Which method is simplest and best, and how can I implement it? Is there a method I haven't listed? Could you provide a small code sample for the solution? Thanks!
QtConcurrent::run isn't the problem here. You must have some means of stopping dllFuncWithLongTermJob. If you don't have such means, then the API you're using is broken and you're out of luck: there's nothing you can do that would be generally safe. Forcibly terminating a thread can leave the heap in an inconsistent state, etc.; if you need to terminate a thread, you need to immediately abort the application.
Hopefully, you can call something like stopLongTermJob that sets some flag that interrupts dllFuncWithLongTermJob.
Then:
auto obj = new Worker;
auto objFuture = QtConcurrent::run([=]{obj->dllFuncWithLongTermJob();});
To interrupt:
obj->stopLongTermJob(); // must be thread-safe, sets a flag
objFuture.waitForFinished();
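A minimal sketch of what that cooperative flag could look like (Worker, the chunked loop, and stopLongTermJob are assumptions; the real interruption point has to be provided by the DLL's API, and this is a compile sketch rather than production Qt code):

#include <QtConcurrent/QtConcurrent>
#include <atomic>

class Worker {
public:
    void dllFuncWithLongTermJob() {
        while (!stopRequested.load()) {
            // one bounded chunk of the long-running DLL work goes here
        }
    }
    void stopLongTermJob() { stopRequested.store(true); }  // thread-safe: atomic flag
private:
    std::atomic<bool> stopRequested{false};
};

int main() {
    auto obj = new Worker;
    auto objFuture = QtConcurrent::run([=]{ obj->dllFuncWithLongTermJob(); });
    obj->stopLongTermJob();        // request a stop...
    objFuture.waitForFinished();   // ...and wait for the thread to unwind
    delete obj;
}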

Is the first thread that gets to run inside a Win32 process the "primary thread"? Need to understand the semantics

I create a process using CreateProcess() with the CREATE_SUSPENDED and then go ahead to create a little patch of code inside the remote process to load a DLL and call a function (exported by that DLL), using VirtualAllocEx() (with ..., MEM_RESERVE | MEM_COMMIT, PAGE_EXECUTE_READWRITE), WriteProcessMemory(), then call FlushInstructionCache() on that patch of memory with the code.
After that I call CreateRemoteThread() to invoke that code, creating me a hRemoteThread. I have verified that the remote code works as intended. Note: this code simply returns, it does not call any APIs other than LoadLibrary() and GetProcAddress(), followed by calling the exported stub function that currently simply returns a value that will then get passed on as the exit status of the thread.
Now comes the peculiar observation: remember that the PROCESS_INFORMATION::hThread is still suspended. When I simply ignore hRemoteThread's exit code and also don't wait for it to exit, all goes "fine". The routine that calls CreateRemoteThread() returns and PROCESS_INFORMATION::hThread gets resumed and the (remote) program actually gets to run.
However, if I call WaitForSingleObject(hRemoteThread, INFINITE) or do the following (which has the same effect):
DWORD exitCode = STILL_ACTIVE;
while (STILL_ACTIVE == exitCode)
{
    Sleep(500);
    if (!GetExitCodeThread(hRemoteThread, &exitCode))
        break;
}
followed by CloseHandle(), this leads to hRemoteThread finishing before PROCESS_INFORMATION::hThread gets resumed, and the process simply "disappears". It is enough to let hRemoteThread finish before PROCESS_INFORMATION::hThread runs for the process to die.
This looks suspiciously like a race condition, since under certain circumstances hRemoteThread may still be faster and the process would likely still "disappear", even if I leave the code as is.
Does that imply that the first thread that gets to run within a process becomes automatically the primary thread and that there are special rules for that primary thread?
I was always under the impression that a process finishes when its last thread dies, not when a particular thread dies.
Also note: there is no call to ExitProcess() involved here in any way, because hRemoteThread simply returns and PROCESS_INFORMATION::hThread is still suspended when I wait for hRemoteThread to return.
This happens on Windows XP SP3, 32-bit.
Edit: I have just tried Sysinternals Process Monitor to see what's happening, and I could verify my observations from before. The injected code does not crash or anything; instead I get to see that if I don't wait for the thread, it doesn't exit before I close the program where the code got injected. I'm wondering whether the call to CloseHandle(hRemoteThread) should be postponed or something ...
Edit+1: it's not CloseHandle(). If I leave that out, just as a test, the behavior doesn't change when waiting for the thread to finish.
The first thread to run isn't special.
For example, create a console app which creates a suspended thread and terminates the original thread (by calling ExitThread). This process never terminates (on Windows 7 anyway).
Or make the new thread wait for five seconds then exit. As expected, the process will live for five seconds and exit when the secondary thread terminates.
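A quick sketch of that second experiment as a plain console app (demonstration only; calling ExitThread from main bypasses normal CRT shutdown):

#include <windows.h>

// The process should live about five seconds: main's thread dies at once,
// but the process only ends when its last thread ends.
DWORD WINAPI ThreadProc(LPVOID) {
    Sleep(5000);
    return 0;
}

int main() {
    CreateThread(nullptr, 0, ThreadProc, nullptr, 0, nullptr);
    ExitThread(0);   // terminates only the original thread, not the process
}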
I don't know what's happening with your example. The easiest way to avoid the race is to make the new thread resume the original thread.
Speculating now, I do wonder if what you're doing isn't likely to cause problems anyway. For example, what happens to all the DllMain calls for the implicitly loaded DLLs? Are they unexpectedly happening on the wrong thread, are they being skipped, or are they postponed until after your code has run and the main thread starts?
Odds are good that the thread with the main (or equivalent) function calls ExitProcess (either explicitly or in its runtime library). ExitProcess, well, exits the entire process, including killing all threads. Since the main thread doesn't know about your injected code, it doesn't wait for it to finish.
I don't know that there's a good way to make the main thread wait for yours to complete...

Calling a Lua function from another thread

In my sample application, I have basically two threads.
The main thread contains a Lua engine (which is not thread-safe) and registers some C++ functions into this engine. However, one of these functions takes too long to perform (since it downloads some file over the internet) and I want the Lua engine to continue doing other stuff without blocking during the download process.
Therefore, I want to make it asynchronous: when the downloadFile() function is called from Lua, I create a new thread which performs the download. Then the function returns and the Lua engine can process other work. When the download is finished, the second thread somehow needs to tell the main thread to call some additional function processFile() to complete the job.
This is where I'm struggling now: What is the easiest / cleanest solution to achieve this?
A new userdata object can be returned by your downloadFile() (or similarly named) function for kicking off the thread. This userdata object would contain the thread handle and have an associated metatable whose __index entry provides a function for checking the completion status of the download, along with other synchronization functions.
Possibly looking like this:
local a = downloadFile("foo")
-- do other things
a:join() -- now let the download finish
processFile()
or this:
local a = downloadFile("foo")
local busywork = coroutine.create(doOtherStuff)
while not a:finished() do
    coroutine.resume(busywork)
end
processFile()
Try LuaLanes, for instance.
Without integrating multithreading into Lua (which isn't that hard; Lua is already prepared for it), your only solution is to handle the signaling in C++.
From what you've said, there isn't any interaction with Lua in the thread downloadFile() creates. Does Lua have to call processFile()? If not, what's stopping you from handling it all in C++? If you need to notify Lua, you can always use a callback function (save it in the registry), handle the signal in C++, and run the callback.
Since your engine is not thread-safe, I don't think there is a way to handle it in Lua.
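A sketch of the registry-callback idea using the standard Lua C API (startDownloadThread and the surrounding event loop are assumptions; the worker thread itself must never touch the lua_State):

#include <lua.hpp>

static int cbRef = LUA_NOREF;

// downloadFile(callback) -- registered into Lua, called on the main thread.
static int downloadFile(lua_State *L) {
    luaL_checktype(L, 1, LUA_TFUNCTION);
    cbRef = luaL_ref(L, LUA_REGISTRYINDEX);    // pops and stores the callback
    // startDownloadThread();                  // assumed: spawns the worker
    return 0;
}

// Called from the main thread's event loop once the worker signals completion
// (via a queue, event, etc.).
static void onDownloadFinished(lua_State *L) {
    lua_rawgeti(L, LUA_REGISTRYINDEX, cbRef);  // push the saved callback
    lua_call(L, 0, 0);                         // run the processFile-style code
    luaL_unref(L, LUA_REGISTRYINDEX, cbRef);
    cbRef = LUA_NOREF;
}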
Return a "future" object from the downloadFile function. This can be used to check the result of the long-running asynchronous operation. See java.util.concurrent.Future or QFuture for example implementations. Also, check out this mailing list thread for a discussion of implementing futures/promises in Lua using coroutines.
