Python multiprocessing deadlock when calling logger (issue6721)

I have code running in Python 3.7.4 that forks off multiple processes. I believe I'm hitting a known issue (issue6721: https://github.com/python/cpython/issues/50970). I set up the child process to send "progress reports" through a pipe to the parent process and noticed that sometimes a log statement doesn't get printed and the code gets stuck in a deadlock.
After reading issue6721, I still don't understand why the parent might hold the logger handler lock after a log statement has finished executing (i.e., the line that logs has executed and execution has moved to the next line of code). I totally get that in the context of C++ the compiler might rearrange instructions; I don't fully understand it in the context of Python. In C++ I can use barrier instructions to stop the compiler from moving instructions beyond a point. Is there something similar that can be done in Python to avoid having a held lock copied into the child process?
I have seen solutions using "atfork", a library that no longer seems to be maintained (so I can't really use it).
Does anyone know a reliable and standard solution to this problem?
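For concreteness, here is a stripped-down sketch of the setup described above (the names are illustrative, not the actual code). The comments note the workaround that is usually suggested for issue6721: forcing the "spawn" start method so the child is a fresh interpreter and does not inherit a handler lock that happened to be held at the moment of fork. On 3.7+ there is also os.register_at_fork, which can be used to reset handler state in the child.

import logging
import multiprocessing as mp

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("worker")

def child(conn):
    # The child sends progress reports to the parent over a pipe and also logs.
    conn.send("started")
    logger.info("child running")
    conn.send("done")
    conn.close()

if __name__ == "__main__":
    # With the default "fork" start method on Linux, the child gets a copy of the
    # parent's memory. If another parent thread held the logging handler lock at
    # the instant of fork, the child's copy of that lock stays locked forever and
    # the next logging call in the child deadlocks (issue6721). "spawn" starts a
    # fresh interpreter instead, so no lock state is inherited.
    mp.set_start_method("spawn")
    parent_conn, child_conn = mp.Pipe()
    p = mp.Process(target=child, args=(child_conn,))
    p.start()
    while True:
        msg = parent_conn.recv()
        logger.info("progress: %s", msg)
        if msg == "done":
            break
    p.join()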

Related

PyO3 - prevent user submitted code from looping and blocking server thread

I'm writing a game in Rust where each player can submit Python scripts to the server in order to automate various tasks in the game. I plan on using PyO3 to run the Python from Rust.
However, I can see an issue arising if a player submits a script like this:
def on_event(e):
    while True:
        pass
Now when the server calls the function (using something like PyAny::call1()), the thread will hang when it reaches the infinite loop.
My first thought was to have PyO3 execute the Python one statement at a time, so I could bail out if the script had been running for longer than a certain threshold, but I don't think PyO3 supports this.
My next idea was to give each player their own thread to run their scripts on; that way, if one of their scripts got stuck it would only affect their gameplay. However, I still have the issue of not being able to kill a thread once it gets stuck in an infinite loop: if a lot of players submitted scripts that just looped, lots of threads would start using a lot of CPU time.
All I need is a way to execute Python scripts such that if one of them does loop, it does not affect the server's performance.
Thanks :)
One solution is to restrict the time that you give each user script to run.
You can do it via PyThreadState_SetAsyncExc; see here for some code. It uses C calls into the interpreter, which you can probably access from Rust (with PyO3 FFI magic).
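For reference, the shape of that trick in pure Python looks roughly like the sketch below (my own illustration, not code from the linked answer). Note that the exception is only delivered at bytecode boundaries, so it can break a pure-Python loop but not a thread blocked inside a long C call.

import ctypes
import threading
import time

class ScriptTimeout(Exception):
    pass

def interrupt_thread(thread_id):
    # Ask the interpreter to raise ScriptTimeout inside the target thread the
    # next time that thread executes Python bytecode.
    ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_ulong(thread_id), ctypes.py_object(ScriptTimeout)
    )

def user_script():
    try:
        while True:
            pass
    except ScriptTimeout:
        print("script exceeded its time budget")

if __name__ == "__main__":
    t = threading.Thread(target=user_script)
    t.start()
    time.sleep(1.0)            # the script's time budget
    interrupt_thread(t.ident)
    t.join()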
Another way would be to do it at the OS level: spawn a process for the user script and kill it when it runs for too long. This can be more secure if you also limit what the process can access (with some OS calls), but it requires some boilerplate to communicate between the host and the user-script process.
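A rough sketch of that process-per-call variant in plain Python (the real version would live on the Rust side via PyO3; run_user_script and call_with_timeout are made-up names):

import multiprocessing as mp

def run_user_script(source, event):
    # Execute the untrusted script in its own namespace, then call its hook.
    namespace = {}
    exec(source, namespace)
    namespace["on_event"](event)

def call_with_timeout(source, event, timeout=1.0):
    p = mp.Process(target=run_user_script, args=(source, event))
    p.start()
    p.join(timeout)
    if p.is_alive():
        # Still running after its time budget: kill the whole process.
        p.terminate()
        p.join()
        return False           # the script was cut off
    return True

if __name__ == "__main__":
    looping = "def on_event(e):\n    while True:\n        pass\n"
    print(call_with_timeout(looping, {"type": "tick"}))   # -> False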

using packages inside tcl thread

I have a Tcl proc called run_expect that I use to run a basic Expect flow: spawn <device>, send <cmd>, expect <string>. Now I need to run this code from two threads running in parallel. I made the following attempts:
When I wrote a multi-threaded proc that simply calls run_expect, I got an "unknown command run_expect" error from the thread's context/scope.
I took the implementation of the run_expect proc and put it in the thread itself, but then hit another issue: the thread doesn't seem to see the Expect library the way the other procs do, and complains with: invalid command name "spawn".
I then tried to do package require Expect from the thread itself, but got a "Segmentation fault: 11" error.
I also tried updating the thread's ::auto_path variable to match the main context, but that didn't make the package require work either (::thread::send -async [lindex $tids 0] [list set ::auto_path $::auto_path]).
Is there any way to call an already existing proc from a thread?
If not, is moving the code into the thread the right solution, and how can I get the thread to know about the packages/commands that are loaded?
Each thread in Tcl is almost totally isolated from all other threads. They don't share any commands (including procedures) or variables. The easiest way to manage things in multiple threads is to put the code to be in each thread into its own Tcl file and to tell the worker threads to source that as part of starting up.
However…
The Expect package is not thread safe; you've provided clear evidence of that. I don't know the details of why this is so. This means that if you want multithreaded expecting, your easiest approach is to use several processes instead. Tcl's good at managing subprocesses via asynchronous pipelines, and when everything is designed to work that way, you don't need to use Expect in the parent to manage things. You could also use the comm package to do the communications with the subprocesses.

How can I find a reason that a program stalls?

I'm working with a program that controls a car.
The program is pretty large and it was written by other people.
So I don't completely understand how it works.
But I have to use it to make the car move.
The problem I'm facing is that the program often stalls with no error and no segmentation fault.
If it crashes, I can trace the cause with gdb or something like that.
But it does not crash; it silently stops.
How can I find the cause?
From your description (the program silently stops) I understand that your program simply and gracefully exited, but not through your expected flow. This can happen for many reasons: for example, maybe your program enters an illegal state and some sub-component, such as the standard library or another library, decides the program should exit and thus calls the C runtime's exit() or calls Kernel32!ExitProcess() directly. The best way to debug this flow is to attach a debugger, set a breakpoint on those two functions, and find out who is calling them.
If you mean instead that your program enters a deadlock and halts, then you will also need to attach a debugger and find out which thread is stuck.

Why does the window of my VB6 application stall when calling a function written in C?

I'm using the cURL library (3.9.7) to download files from the internet, so I created a dynamic-link library (.dll) written in C using VC++ 6.0. The problem is that when I call my function from within my VB6 application, the window locks up and only unlocks after the file has finished downloading. How do I solve this problem?
The problem is that when you call the function from your DLL, it "blocks" your app's execution until it finishes. Execution goes from the piece of code that makes the function call to the code inside the function, and only comes back to the line after the call once the code inside the function has finished running. In fact, that's how all function calls work. You can see this for yourself by single-stepping through your code in the VB 6 development environment.
You don't normally notice this because the code inside of a function being called doesn't take very long to execute before control is returned to the caller. But in this case, since the function you're calling from the DLL is doing a lot of processing, it takes a while to execute, so it "blocks" the execution of your application's code for quite a while.
This is a good general explanation for the reason why your application window appears to be frozen. A bit more technically, it's because the message pump that is responsible for processing user interaction with on-screen elements is not running (it's part of your code that has been temporarily suspended until the function that you called finishes processing). This is a bit more difficult for a VB programmer to appreciate, since none of this nitty-gritty stuff is exposed in the world of VB. It's all happening behind the scenes, just like it is in a C program, but you don't normally have to deal with any of it. Occasionally, though, the abstraction leaks, and the nitty-gritty rears its ugly head. This is one of those cases.
The correct solution to this general problem, as others have hinted at, is to run lengthy operations on a background thread. This leaves your main thread (right now, the only one you have, the one your application is running on) free to continue processing user input, while the other thread can process the data and return that processed data to the main thread when it is finished. Of course, computers can't actually do more than one thing at a time, but the magic of the operating system rapidly switching between one task and another means that you can simulate this. The mechanism for doing so involves threads.
The catch comes in the fact that the VB 6 environment does not have any type of support for creating multiple threads. You only get one thread, and that's the main thread that your application runs on. If you freeze execution of that one, even temporarily, your application freezes—as you've already found out.
However, if you're already writing a C++ DLL, there's no reason you can't create multiple threads in a VB 6 app. You just have to handle everything yourself, as if you were using a lower-level language like C++. Run the C++ code on a background thread, and only return its results to the main thread when it is completely finished. In the meantime, your main thread is free.
This is still quite a bit of work, though, especially if you're inexperienced when it comes to Win32 programming and the issues surrounding multiple threads. It might be easier to find a different library that supports asynchronous function calls out-of-the-box. Antagony suggests using VB's AsyncRead method. That is probably a good option; as Karl Peterson says in the linked article, it keeps everything in pure VB 6 code, which can be a real time saver as well as a boon to future maintenance programmers. The only problem is that you'll still have to process the data somehow once you obtain it. And if that's slow, you're right back where you started from…
Check out this article, which demonstrates how to asynchronously transfer large files using a little-known method in user controls.

Threading in Spidermonkey

I am trying to enable a threaded debug dump in SpiderMonkey by editing the jsinterp.cpp file. Basically, the things I am trying to do are as follows:
Catch a JSScript before the main loop of Interpret() begins.
Open a separate thread.
In that thread, invoke js_Disassemble with the script to get the machine code.
Write the machine code to a file.
The reason for trying a threaded version is simply performance. Some addons become "unresponsive" if I run the disassemble and write the output in the same thread. I can get some output in a single thread, but it's way too slow.
I followed the tutorial at https://developer.mozilla.org/en/Making_Cross-Thread_Calls_Using_Runnables for creating threads. But when I built it, I got 11 "unresolved external symbol" errors. After some googling, I found out about setting XPCOM_GLUE via #define XPCOM_GLUE 1. However, this time I am facing a new problem: "base class nsRunnable not defined". I can't find a solution for this.
Any help would be appreciated.
Thanks,
You can't safely use a separate thread for this. Garbage collection could run on the main thread and collect the JSScript out from under you. Then the process would crash.
js_Interpret is called every time SpiderMonkey enters the interpreter, whether the browser is running a <script> or just calling a function or an onclick= event listener. So you probably end up dumping the same scripts many times. Maybe that's why it's so slow. Consider dumping the bytecode at compile time instead.
