How to stop an infinite loop safely in JupyterLab? - python-3.x

We are using JupyterLab for some long-running operations (physics simulations, in our case). The user should be able to stop these operations safely without killing the kernel.
Is there a clean way to do this?
Are there maybe even best practices for this?
My cell looks something like this:
environment = gym.make()
running = True
while running:
    environment.step()
    running = ???
serialize(environment)
Notes
This is not a duplicate of How to stop the running cell if interrupt kernel does not work [...], because I'm looking for a safe way to stop without interrupting the control flow.
This is not a duplicate of How to stop an infinite loop safely in Python?, because I'm looking for a way that is suited to Jupyter Notebook and JupyterLab.

According to https://stackoverflow.com/a/19040553/, IPython interrupts the kernel by sending a SIGINT. Shouldn't it be possible to catch and handle the signal programmatically, as described in How to stop an infinite loop safely in Python?
Edit: This sounds helpful: graceful interrupt of while loop in ipython notebook
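For illustration, here is a minimal sketch of that approach (assuming the simulation runs in the kernel's main thread, where Python delivers signals): install a SIGINT handler that flips the loop flag, so JupyterLab's "Interrupt Kernel" ends the loop cleanly instead of raising KeyboardInterrupt mid-step. The time.sleep call stands in for environment.step().

import signal
import time

running = True

def handle_sigint(signum, frame):
    # JupyterLab's "Interrupt Kernel" delivers SIGINT to the kernel
    # process; flipping the flag lets the current step finish and the
    # loop exit on its own terms instead of raising KeyboardInterrupt.
    global running
    running = False

signal.signal(signal.SIGINT, handle_sigint)

while running:
    time.sleep(0.1)  # stands in for environment.step()

print("loop exited cleanly")  # serialize(environment) would go here

With this in place, the serialization line after the loop always runs, whether or not the user interrupted.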

Related

Python multiprocessing deadlock when calling logger (issue6721)

I have code running in Python 3.7.4 which forks off multiple processes. I believe I'm hitting a known issue (issue6721: https://github.com/python/cpython/issues/50970). I set up the child process to send "progress reports" through a pipe to the parent process, and noticed that sometimes a log statement doesn't get printed and the code gets stuck in a deadlock.
After reading issue6721, I'm still not sure I understand why the parent might hold the logger's handler lock after a log statement has finished executing (i.e. the line that logs has executed and execution has moved to the next line of code). I get that in the context of C++ the compiler might rearrange instructions, but I don't fully understand it in the context of Python. In C++ I can use barrier instructions to stop the compiler from moving instructions beyond a point. Is there something similar in Python to avoid having a held lock copied into the child process?
I have seen solutions using "atfork", but that library appears to be unmaintained, so I can't really use it.
Does anyone know a reliable and standard solution to this problem?
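No answer was posted here, but one commonly used workaround sidesteps issue6721 entirely: use the "spawn" start method, which starts a fresh interpreter instead of fork()ing, so the child can never inherit a handler lock that happened to be held at fork time. A minimal sketch:

import logging
import multiprocessing as mp

def worker():
    # With "spawn" the child re-imports this module in a fresh process,
    # so it configures its own logging rather than inheriting the
    # parent's handlers (or their locks).
    logging.basicConfig(level=logging.INFO)
    logging.getLogger(__name__).info("child is alive")

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    # "spawn" does not clone the parent's memory, so no lock can be
    # copied in a mid-acquire state -- the issue6721 deadlock cannot occur.
    ctx = mp.get_context("spawn")
    p = ctx.Process(target=worker)
    p.start()
    p.join()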

PyO3 - prevent user-submitted code from looping and blocking the server thread

I'm writing a game in Rust where each player can submit Python scripts to the server in order to automate various tasks in the game. I plan on using PyO3 to run the Python from Rust.
However, I can see an issue arising if a player submits a script like this:
def on_event(e):
    while True:
        pass
Now when the server calls the function (using something like PyAny::call1()) the thread will hang as it reaches the infinite loop.
My first thought was to have PyO3 execute the Python one statement at a time, so it could bail out if the script has been running for over a certain threshold, but I don't think PyO3 supports this.
My next idea was to give each player their own thread to run their own scripts on, that way if one of their scripts got stuck it only affected their gameplay. However, I still have the issue of not being able to kill a thread when it gets stuck in an infinite loop - if a lot of players submitted scripts that just looped, lots of threads would start using a lot of CPU time.
All I need is a way to execute Python scripts such that, if one of them does loop, it does not affect the server's performance at all.
Thanks :)
One solution is to restrict the time that you give each user script to run.
You can do it via PyThreadState_SetAsyncExc; see here for some code. It uses C calls into the interpreter, which you can probably reach from Rust (with PyO3 FFI magic).
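To make the idea concrete, here is a minimal pure-Python sketch of that mechanism (CPython-specific; a PyO3 server would make the equivalent call through FFI from Rust):

import ctypes
import threading
import time

def raise_in_thread(thread, exc_type=SystemExit):
    # Asks CPython to raise exc_type inside the target thread the next
    # time it checks for pending calls (i.e. between bytecodes).
    n = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_ulong(thread.ident), ctypes.py_object(exc_type))
    if n != 1:
        raise RuntimeError("failed to set async exception")

def user_script():
    while True:  # the runaway loop from the question
        pass

t = threading.Thread(target=user_script, daemon=True)
t.start()
time.sleep(1.0)      # the script's time budget
raise_in_thread(t)   # stops the loop by raising SystemExit inside it
t.join()

Note that this only interrupts the thread between bytecode instructions, so it cannot stop a script that is blocked inside a C extension call.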
Another way would be to do it at the OS level: spawn a process for the user script, then kill it when it runs for too long. This might be more secure if you also limit what the process can access (with some OS calls), but it requires some boilerplate to communicate between the host and the child process.
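A rough Python sketch of that OS-level variant (in the real server the parent role would be played by the Rust process; run_user_script is a hypothetical sandbox entry point, not part of any library):

import multiprocessing as mp

def run_user_script(src):
    # Hypothetical entry point: run the submitted source in a
    # throwaway namespace, isolated in its own process.
    exec(src, {})

if __name__ == "__main__":
    script = "while True:\n    pass\n"   # a script that never returns
    p = mp.Process(target=run_user_script, args=(script,))
    p.start()
    p.join(timeout=1.0)    # the script's time budget
    if p.is_alive():
        p.terminate()      # kill the runaway script at the OS level
        p.join()
        print("script exceeded its time budget and was killed")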

Python3 - Fail to actuate output device properly based on input from object detection process

First of all, I attach some general specifications of my project at the end of this post.
The main objective of my project is to detect the use of a face mask via camera vision and then actuate certain actions accordingly. For example, if it detects a person not wearing a face mask, a buzzer starts to buzz continuously, a red LED starts to flash, and the gate will not open.
So far I have managed to implement the object detection process, and it is able to detect the use of a face mask sufficiently well. The object detection process should run continuously in an infinite while loop without any delays, stopping only when I press a specific key.
The problem arises when I try to incorporate delays for the actuation process in the same loop, for instance a blinking LED. The video stream for the object detection process freezes because of the delays.
I have tried a few things to ensure that the output actuation process does not interrupt the object detection process, such as implementing multiprocessing along with a pickle file acting as a buffer that stores information produced by the object detection process. But I still did not manage to solve the problem: I have an issue with writing/reading the pickle file simultaneously from two different processes.
The requirements of the processes are listed below.
Process 1 (Main Process)
In an infinite loop
No delays; iteration speed is limited only by the hardware and the OS
Able to write an output signal as soon as it detects a face mask
Process 2 (Secondary Process)
Starts to run once it receives a signal from the main process
Able to read the output signal from the main process
Able to operate with delays without interrupting the main process
Able to delete/edit the output signal from the main process
Killed once the main process is terminated
Therefore, I wonder if there is any method/library/function that can run two processes simultaneously and independently, with different timing, and transfer information between them. If it is necessary to share my code, please do inform me.
Thank you.
General specifications of my project:
Programming language, Python3
Text editor/compiler, Code-OSS
Hardware, Nvidia Jetson Nano 2GB
OS, Linux/Nvidia JetPack
Pre-trained model, SSD-Mobilenet V2
After reading and searching more about multiprocessing, I managed to find something useful for my project: the methods of "sharing data using a server process" and "process synchronization". For more details about these features, you may refer to the YouTube videos below. I highly recommend watching the full playlist so that you have a broader understanding of multiprocessing, which might simplify your work.
Sharing data using a server process
https://youtu.be/v5u5zSYKhFs
Process synchronization
https://youtu.be/-zJ1x2QHTKE
Both of these methods successfully solved my problem. I think my previous problem arose from the simultaneous writing and reading of the pickle file from both processes.
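As a concrete illustration of the server-process idea, here is a minimal sketch; the detector and actuator bodies are placeholders for the detection loop and the GPIO code:

import multiprocessing as mp
import time

def detector(state):
    # Placeholder for the detection loop: updates shared state per frame.
    for frame in range(10):
        state["mask_detected"] = (frame % 2 == 0)
        time.sleep(0.1)   # the detector runs at its own pace
    state["done"] = True

def actuator(state):
    # Reads the shared state and may sleep (e.g. LED blink delays)
    # without ever blocking the detector, since it has its own process.
    while not state["done"]:
        if state["mask_detected"]:
            print("mask detected -> open gate")
        else:
            print("no mask -> buzzer + red LED")
        time.sleep(0.3)

if __name__ == "__main__":
    with mp.Manager() as manager:    # the "server process"
        state = manager.dict(mask_detected=False, done=False)
        procs = [mp.Process(target=detector, args=(state,)),
                 mp.Process(target=actuator, args=(state,))]
        for p in procs:
            p.start()
        for p in procs:
            p.join()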

What's the most efficient way to prevent a node.js script from terminating?

If I'm writing something simple and want it to run until explicitly terminated, is there a best practice to prevent script termination without causing blocking, using CPU time or preventing callbacks from working?
I'm assuming at that point I'd need some kind of event loop implementation or a way to unblock the execution of events that come in from other async handlers (network io, message queues)?
A specific example might be something along the lines of "I want my node script to sleep until a job is available via Beanstalkd".
I think the relevant counter-question is "How are you checking for the exit condition?".
If you're polling a web service, then the underlying setInterval() for the poll will keep it alive until cancelled. If you're taking in input from a stream, that should keep it alive until the stream closes, etc.
Basically, you must be monitoring something in order to know whether or not you should exit. That monitoring should be the thing keeping the script alive.
Node.js exits when it has nothing else to do.
If you listen on a port, it has something to do and a way to receive Beanstalkd commands, so it will wait.
Create a function that closes the port and you'll have your explicit exit, though it will wait for all current jobs to finish before closing.

wxpython using gauge pulse with threaded long running processes

The program I am developing uses threads to deal with long-running processes. I want to be able to use the gauge's Pulse to show the user that, whilst a long-running thread is in progress, something is actually taking place. Otherwise nothing visibly happens for quite some time when processing large files, and the user might think that the program is doing nothing.
I have placed a gauge within the status bar of the program. My problem is this: I am having problems when trying to call the gauge's Pulse. No matter where I place the code, it either runs too fast and then halts, or runs at the correct speed for a few seconds and then halts.
I've tried placing the one line of code below into the thread itself. I have also tried creating another thread from within the long-running process thread to call the code below. I still get the same sort of problems.
I do not think that I could use wx.CallAfter, as this would defeat the point: Pulse needs to be called whilst the process is running, not after the fact. I also tried using time.sleep(2), which is not good either, as it slows the process down, which is something I want to avoid. Even with time.sleep(2) I still had the same problems.
Any help would be massively appreciated!
progress_bar.Pulse()
You will need to find someway to send update requests to the main GUI from your thread during the long running process. For example, if you were downloading a very large file using a thread, you would download it in chunks and after each chunk is complete, you would send an update to the GUI.
If you are running something that doesn't really allow chunks, such as creating a large PDF with fop, then I suppose you could use a wx.Timer() that just tells the gauge to pulse every so often. Then when the thread finishes, it would send a message to stop the timer object from updating the gauge.
The former is best for showing progress while the latter works if you just want to show the user that your app is doing something. See also
http://wiki.wxpython.org/LongRunningTasks
http://www.blog.pythonlibrary.org/2010/05/22/wxpython-and-threads/
http://www.blog.pythonlibrary.org/2013/09/04/wxpython-how-to-update-a-progress-bar-from-a-thread/
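A minimal sketch of the wx.Timer approach described above; the time.sleep in the worker stands in for the long-running work:

import threading
import time
import wx

class MainFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="Pulse demo")
        self.gauge = wx.Gauge(self, range=100)
        # Pulse from the GUI thread via a timer -- never from the worker.
        self.timer = wx.Timer(self)
        self.Bind(wx.EVT_TIMER, lambda event: self.gauge.Pulse(), self.timer)
        self.timer.Start(100)   # pulse every 100 ms
        threading.Thread(target=self.long_task, daemon=True).start()

    def long_task(self):
        time.sleep(5)           # stands in for the long-running work
        # Hop back to the GUI thread to stop the pulsing when done.
        wx.CallAfter(self.timer.Stop)

if __name__ == "__main__":
    app = wx.App()
    MainFrame().Show()
    app.MainLoop()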
