PyO3 - prevent user-submitted code from looping and blocking the server thread - multithreading

I'm writing a game in Rust where each player can submit some python scripts to the server in order to automate various tasks in the game. I plan on using pyo3 to run the python from rust.
However, I can see an issue arising if a player submits a script like this:
def on_event(e):
    while True:
        pass
Now when the server calls the function (using something like PyAny::call1()) the thread will hang as it reaches the infinite loop.
My first thought was to have pyo3 execute the python one statement at a time, so it could bail out if the script had been running for over a certain threshold, but I don't think pyo3 supports this.
My next idea was to give each player their own thread to run their own scripts on, that way if one of their scripts got stuck it only affected their gameplay. However, I still have the issue of not being able to kill a thread when it gets stuck in an infinite loop - if a lot of players submitted scripts that just looped, lots of threads would start using a lot of CPU time.
All I need is a way to execute python scripts such that if one of them does loop, it does not affect the server's performance at all.
Thanks :)

One solution is to restrict the time that you give each user script to run.
You can do it via PyThreadState_SetAsyncExc, see here for some code. It uses C calls into the interpreter, which you can probably access from Rust (with PyO3's FFI).
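For reference, the classic recipe looks roughly like this in pure Python (kill_after is a hypothetical helper name; from Rust you would reach PyThreadState_SetAsyncExc through PyO3's ffi bindings instead):

import ctypes
import threading

def kill_after(thread, timeout):
    # Hypothetical watchdog: raise SystemExit in `thread` after `timeout` seconds.
    def _raise():
        # Schedules the exception asynchronously; it is raised the next time
        # the target thread executes Python bytecode, so `while True: pass`
        # dies, but a thread blocked inside a C call would not be interrupted.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(
            ctypes.c_ulong(thread.ident), ctypes.py_object(SystemExit)
        )
    timer = threading.Timer(timeout, _raise)
    timer.daemon = True
    timer.start()
    return timer

# Usage sketch: run the user script on its own thread and arm the watchdog.
user_thread = threading.Thread(target=lambda: exec("while True: pass", {}))
user_thread.start()
kill_after(user_thread, timeout=1.0)
user_thread.join()
print("user script terminated")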
Another way would be to do it at the OS level: spawn a process for the user script and kill it when it runs for too long. This might be more secure if you limit what the process can access (with some OS calls), but it requires some boilerplate to communicate between the host and the child.
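A minimal sketch of that process-per-script idea with Python's multiprocessing (run_user_script and run_with_timeout are hypothetical names; from Rust you would spawn the interpreter via std::process instead, but the timeout-then-kill logic is the same):

import multiprocessing

def run_user_script(source):
    # Hypothetical entry point: execute the player's script in a fresh process.
    exec(source, {})

def run_with_timeout(source, timeout=1.0):
    # Run a user script in its own process and kill it if it exceeds `timeout`.
    proc = multiprocessing.Process(target=run_user_script, args=(source,))
    proc.start()
    proc.join(timeout)            # wait at most `timeout` seconds
    if proc.is_alive():           # still looping: kill it at the OS level
        proc.terminate()
        proc.join()
        return False              # report that the script timed out
    return True

if __name__ == '__main__':
    print(run_with_timeout("while True:\n    pass"))   # -> False after ~1 s

Unlike the async-exception trick, terminate() works even if the script is stuck inside a blocking C call, which is why the process route is the more robust of the two.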

Related

Python multiprocessing deadlock when calling logger - issue6721

I have code running in Python 3.7.4 which forks off multiple processes. I believe I'm hitting a known issue (issue6721: https://github.com/python/cpython/issues/50970). I set up the child process to send "progress reports" through a pipe to the parent process and noticed that sometimes a log statement doesn't get printed and that the code gets stuck in a deadlock.
After reading issue6721, I'm still not sure I understand why the parent might hold the logger Handler lock after a log statement has finished executing (i.e. the line that logs has executed and execution has moved to the next line of code). I totally get that in the context of C++ the compiler might re-arrange instructions, but I don't fully understand it in the context of Python. In C++ I can use barrier instructions to stop the compiler moving instructions beyond a point. Is there something similar that can be done in Python to avoid having a held lock get copied to the child process?
I have seen solutions using "atfork", which is a library that seems unsupported (so I can't really use it).
Does anyone know a reliable and standard solution to this problem?
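One possible direction, hedged because it relies on private logging internals that may change between versions: CPython 3.7+ provides os.register_at_fork(), which plays the role that pthread_atfork() (and the old "atfork" library) plays in C. You can acquire the logging locks just before fork() so the child never inherits one in a held state, then release them on both sides:

import logging
import os

def _acquire_logging_locks():
    logging._acquireLock()               # module lock guarding the handler list
    for ref in logging._handlerList:     # _handlerList stores weak references
        handler = ref()
        if handler is not None:
            handler.acquire()

def _release_logging_locks():
    for ref in reversed(logging._handlerList):
        handler = ref()
        if handler is not None:
            handler.release()
    logging._releaseLock()

os.register_at_fork(
    before=_acquire_logging_locks,
    after_in_parent=_release_logging_locks,
    after_in_child=_release_logging_locks,
)

Recent CPython releases do something equivalent inside the logging module itself, so upgrading the interpreter may be the simplest fix.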

Does python 3 multiprocessing freeze_support() set the start method to spawn?

Well, recently I encountered some freezing in my application during long runs.
My program uses an infinite while loop to constantly check for new jobs from a redis db, and if there is any job to work on it spawns a new process to run it in the background.
I had an issue with it freezing after 20 minutes, sometimes 10 minutes. It took me one week to figure out that the problem arose from the lack of this line before my while loop:
multiprocessing.set_start_method('spawn')
It looks like python does not do that on Windows, and since Windows does not support fork it's going to get stuck.
Anyway, it seems this will solve my problem, but I have another question.
In order to make an exe file for this program with something like pyinstaller, I need to add another line as below to make sure it's not freezing in the exe execution:
multiprocessing.freeze_support()
I want to know: does this freeze_support() automatically set the start method to 'spawn' too? I mean, should I use both of these lines, or is running just one of them OK? If so, which one should I use from now on?
In the case of Windows, spawn is already the default start method, so it is not necessary to run the set_start_method('spawn') line of code.
freeze_support() is a different thing that does not affect the choice of start method. You must use it in this scenario to generate an .exe.
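A minimal sketch of the usual boilerplate under those rules (worker is a placeholder for the real job handler):

import multiprocessing

def worker(job):
    print('working on', job)      # placeholder for the real job handler

if __name__ == '__main__':
    # No-op in a normal run; required as the first multiprocessing call in
    # the main module when the program is frozen into an .exe.
    multiprocessing.freeze_support()
    # Redundant on Windows, where 'spawn' is already the default start
    # method, but harmless and makes the intent explicit.
    multiprocessing.set_start_method('spawn')
    multiprocessing.Process(target=worker, args=('job-1',)).start()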

Is it possible to create a child process to handle some tasks in Node?

I have a pretty big and terribly written piece of code that eventually crashes the main Node.js process. There are probably a lot of memory leaks. I tried fixing it, but it's very bad. (Single-letter variables and such.)
Sometimes it crashes in 10 seconds, sometimes after 5 hours, but it crashes.
It is not something mission critical. It is trying to read emails by using IMAP.
I don't want to integrate a queue processor right now. Can I simply create a child process with Node.js and run this code block in its scope? What's the correct way of doing it?
You can use .spawn() or .exec() from the child_process module. If you're running a node.js script, the program you are running is node: you pass the script you want to run as the first argument and any arguments to the script as subsequent arguments, and it will run in another process.
You just separate out the troublesome code into its own node.js script and then run it this way.
If you want to understand more about the difference between spawn and exec, this is a good article on that.

How to run parallel fork as single thread in perl?

I was trying to check response messages in Perl code that takes requests through the Amazon API and returns responses. How can I run a parallel fork as a single thread in Perl? I'm using the LWP::UserAgent module and I want to debug the HTTP requests.
As a word of warning - threads and forks are different things in perl. Very different.
However, the long and short of it is - you can't, at least not trivially: a fork is a separate process. One actually happens when you run -any- external command in perl; it's just that by default perl sits and waits for that command to finish and return output.
However, if you've got access to the code, you can amend it to run single-threaded - sometimes that's as simple as reducing the parallelism with a config parameter. (In fact, quite often debugging parallel code is a much more complicated task than debugging sequential code, so getting it working before running parallel is really important.)
You might be able to embed a waitpid into your primary code so you've only got one thing running at once. Without a code example though, it's impossible to say for sure.

wxpython using gauge pulse with threaded long running processes

The program I am developing uses threads to deal with long-running processes. I want to use Gauge Pulse to show the user that, whilst a long-running thread is in progress, something is actually taking place. Otherwise nothing will visibly happen for quite some time when processing large files, and the user might think that the program is doing nothing.
I have placed a gauge within the status bar of the program. My problem is this: I am having trouble calling gauge pulse. No matter where I place the code, it either runs too fast then halts, or runs at the correct speed for a few seconds then halts.
I've tried placing the one line of code below into the thread itself. I have also tried creating another thread from within the long-running process thread to call the code below. I still get the same sort of problems.
I do not think that I could use wx.CallAfter, as this would defeat the point. Pulse needs to be called whilst the process is running, not after the fact. I also tried using time.sleep(2), which is not good either, as it slows the process down - something I want to avoid. Even with time.sleep(2) I still had the same problems.
Any help would be massively appreciated!
progress_bar.Pulse()
You will need to find some way to send update requests to the main GUI from your thread during the long-running process. For example, if you were downloading a very large file using a thread, you would download it in chunks and, after each chunk completes, send an update to the GUI.
If you are running something that doesn't really allow chunks, such as creating a large PDF with fop, then I suppose you could use a wx.Timer() that just tells the gauge to pulse every so often. Then when the thread finishes, it would send a message to stop the timer object from updating the gauge.
The former is best for showing progress while the latter works if you just want to show the user that your app is doing something. See also
http://wiki.wxpython.org/LongRunningTasks
http://www.blog.pythonlibrary.org/2010/05/22/wxpython-and-threads/
http://www.blog.pythonlibrary.org/2013/09/04/wxpython-how-to-update-a-progress-bar-from-a-thread/
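A rough sketch of the timer approach (the frame layout and the long_running_task body are placeholders, assuming wxPython Phoenix):

import threading
import time
import wx

class MainFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, title='Pulse demo')
        self.gauge = wx.Gauge(self, range=100)
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, self.on_timer, self.timer)

    def start_task(self):
        self.timer.Start(100)     # pulse every 100 ms on the GUI thread
        threading.Thread(target=self.long_running_task, daemon=True).start()

    def on_timer(self, event):
        self.gauge.Pulse()        # timer events arrive on the GUI thread

    def long_running_task(self):
        time.sleep(10)            # stand-in for the real long-running work
        wx.CallAfter(self.timer.Stop)   # GUI calls must leave the worker thread

if __name__ == '__main__':
    app = wx.App()
    frame = MainFrame()
    frame.Show()
    frame.start_task()
    app.MainLoop()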
