In my Python program, I'm spawning a process using spawnve(). If the user enters CTRL-C during execution of this spawned program, I want the spawned program to stop without stopping the calling program. So I need two exceptions: KeyboardInterrupt, and OSError, which is required if it's not able to spawn the process. How do I use both exceptions together in a try/except block?
You can catch several exception types in a single except clause by listing them as a tuple:

    try:
        ...
    except (KeyboardInterrupt, OSError) as e:
        ...
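Applied to the question's spawnve() case, a minimal sketch might look like this (the child path and arguments are placeholders, not anything from the question):

    import os

    try:
        # hypothetical child script; adjust path and args as needed
        ret = os.spawnve(os.P_WAIT, '/usr/bin/python3',
                         ['python3', 'child.py'], os.environ)
    except (KeyboardInterrupt, OSError) as e:
        # KeyboardInterrupt: the user hit CTRL-C while the child was running
        # OSError: the child could not be spawned at all
        print('spawn aborted:', repr(e))

On Unix, CTRL-C sends SIGINT to the whole foreground process group, so the child dies and the parent gets a KeyboardInterrupt out of the P_WAIT; catching it here keeps the calling program alive.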
What is proper practice for stopping a continuously running Python process? Consider the following code:
    from multiprocessing import Process

    run_process = Process(target=self.run)

    def start_running():
        run_process.start()

    def run():
        while True:
            ...  # Do stuff

    def stop_running():
        # ???
        run_process.join()
I would expect that the ideal situation would be to have the run() process end on its own when stop_running() is called. One idea is to signal a semaphore in stop_running(), which is checked in the run() loop, so it knows to break. But I would like to know what common practice is.
There is no "proper" way of doing much of anything in Python. As you are running a process instead of a thread, you have more options, which is good.
If your process is not at risk of getting stuck completely, and not at risk of blocking on IO indefinitely while waiting for input (for example from a queue), I would use a semaphore or a shared variable to signal the process that it should exit now.
If there is a risk of the process being stuck in a wait, you can get rid of it with run_process.kill() or run_process.terminate(). kill() is the equivalent of kill -9 in the shell and is guaranteed to get the job done.
The drawback in killing/terminating a process is that if the process holds any shared objects (queues, shared variables etc.), those can become corrupted in the other processes that share them as well. Discarding them is safe, but if you keep reading from them, you may occasionally encounter obscure exceptions that are hard to debug.
So, as always, it depends. The variable/semaphore method has its strengths, but if there is a risk of the subprocess being stuck in a sleep or wait and never checking the condition, you do not achieve anything by it. If your subprocess does not share any resources with other processes, kill may be the simpler and guaranteed way of getting rid of it.
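For illustration, here is a minimal sketch of the signaling approach using multiprocessing.Event (the names and timings are made up), with terminate() as a fallback in case the worker is stuck:

    import time
    from multiprocessing import Event, Process

    def run(stop_event):
        while not stop_event.is_set():
            time.sleep(0.1)  # Do stuff

    if __name__ == '__main__':
        stop_event = Event()
        run_process = Process(target=run, args=(stop_event,))
        run_process.start()
        time.sleep(1)
        stop_event.set()            # ask the worker to exit on its own
        run_process.join(timeout=5)
        if run_process.is_alive():  # fallback if it never checked the event
            run_process.terminate()
            run_process.join()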
A Python script executes an IO-bound function a lot of times (order of magnitude: anything between 5000 and 75000 calls). This still performs reasonably well using
    import concurrent.futures

    def _iterator(): ...  # yields 5000-75000 different names

    def _thread_function(name): ...

    with concurrent.futures.ThreadPoolExecutor(max_workers=11) as executor:
        executor.map(_thread_function, _iterator(), timeout=44)
If a user presses CTRL-C, it just messes up a single thread. I want it to stop launching new threads and either finish the currently running threads or kill them instantly, whatever.
How can I do that?
Exception handling in concurrent.futures.Executor.map might answer your question.
In essence, from the documentation of concurrent.futures.Executor.map
If a func call raises an exception, then that exception will be raised when its value is retrieved from the iterator.
As you are never retrieving the values from map(), the exception is never raised in your main thread.
Furthermore, from PEP 255
If an unhandled exception-- including, but not limited to, StopIteration --is raised by, or passes through, a generator function, then the exception is passed on to the caller in the usual way, and subsequent attempts to resume the generator function raise StopIteration. In other words, an unhandled exception terminates a generator's useful life.
Hence if you change your code to (notice the for loop):
    import concurrent.futures

    def _iterator(): ...  # yields 5000-75000 different names

    def _thread_function(name): ...

    with concurrent.futures.ThreadPoolExecutor(max_workers=11) as executor:
        for _ in executor.map(_thread_function, _iterator(), timeout=44):
            pass
The KeyboardInterrupt from CTRL-C will be raised in the main thread, and by passing through the generator (executor.map(_thread_function, _iterator(), timeout=44)) it will terminate it, so no new work is launched.
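If you want more explicit control over shutdown on CTRL-C, a rough sketch along these lines is possible (it assumes the question's _iterator and _thread_function, and Python 3.9+ for the cancel_futures flag):

    import concurrent.futures

    executor = concurrent.futures.ThreadPoolExecutor(max_workers=11)
    futures = [executor.submit(_thread_function, name) for name in _iterator()]
    try:
        for future in concurrent.futures.as_completed(futures, timeout=44):
            future.result()  # re-raises any exception from the worker
    except KeyboardInterrupt:
        # drop queued work, let already-running threads finish
        executor.shutdown(wait=True, cancel_futures=True)
        raise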
In essence my question is: when and where is the asyncio.CancelledError exception raised in the coroutine being cancelled?
I have an application with a couple of async tasks that run in a loop. At some point I start those tasks like this:
    async def connect(self):
        ...
        t1 = asyncio.create_task(task1())
        t2 = asyncio.create_task(task2())
        ...
        self._workers = [t1, t2, ...]
When disconnecting, I cancel the tasks like this:
    async def disconnect(self):
        for task in self._workers:
            task.cancel()
This has been working fine. The documentation of Task.cancel says
The coroutine then has a chance to clean up or even deny the request by suppressing the exception with a try … except CancelledError … finally block. Therefore, unlike Future.cancel(), Task.cancel() does not guarantee that the Task will be cancelled, although suppressing cancellation completely is not common and is actively discouraged.
so in my workers I avoid doing stuff like this:
    async def worker():
        while True:
            ...
            try:
                ...  # some work
            except:
                continue
but that means that now I have to explicitly put asyncio.CancelledError in the except statement:
    async def worker():
        while True:
            ...
            try:
                ...  # some work
            except asyncio.CancelledError:
                raise
            except:
                continue
which can be tedious, and I also have to make sure that anything I call from my worker abides by this rule.
So now I'm not sure if this is a good practice at all. Now that I'm thinking about it, I don't even know when exactly the exception is raised. I was searching for a similar case here on SO and found this question, which raises the same issue: "When will this exception be thrown? And where?". The answer says
This exception is thrown after task.cancel() is called. It is thrown inside the coroutine, where it is caught in the example, and it is then re-raised to be thrown and caught in the awaiting routine.
And while that makes sense, it got me thinking: this is async scheduling, so the tasks are not interrupted at arbitrary places like threads are, but only "give back control" to the event loop when a task does an await. Right? So that means that checking everywhere whether asyncio.CancelledError was raised might not be necessary. For example, let's consider this example:
    async def worker(interval=1):
        while True:
            try:
                # doing some work; no await is called in this block
                sync_call1()
                sync_call2()
                sync_call3()
            except asyncio.CancelledError:
                raise
            except:
                # deal with error
                pass
            await asyncio.sleep(interval)
So I think here the except asyncio.CancelledError is unnecessary, because this error cannot "physically" be raised inside the try block at all: the coroutine is never suspended by the event loop while inside that block. The only place where this task gives control back to the event loop is the sleep call, which is not even inside a try block and hence doesn't suppress the exception. Is my train of thought correct? If so, does that mean that I only have to account for asyncio.CancelledError when I have an await in the try block? So would this also be OK, knowing that worker() can be cancelled?
    async def worker(interval=1):
        while True:
            try:
                # doing some work; no await is called in this block
                sync_call1()
                sync_call2()
                sync_call3()
            except:
                # deal with error
                pass
            await asyncio.sleep(interval)
And after reading the answer to the other SO question, I think I should also wait for the cancelled tasks in my disconnect() function, shouldn't I? Like this:
    async def disconnect(self):
        for task in self._workers:
            task.cancel()
        await asyncio.gather(*self._workers)
Is this correct?
Your reasoning is correct: if the code doesn't contain an awaiting construct, you can't get a CancelledError (at least not from task.cancel(); someone could still raise it manually, but then you probably want to treat it as any other exception). Note that awaiting constructs include await, async for and async with.
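To illustrate the point with a self-contained toy (not your application code): cancellation can only surface at an awaiting construct, so the synchronous call below is never interrupted:

    import asyncio
    import time

    async def worker():
        while True:
            time.sleep(0.2)          # synchronous: cancel() cannot land here
            await asyncio.sleep(0)   # the only point where CancelledError can surface

    async def main():
        task = asyncio.create_task(worker())
        await asyncio.sleep(0.5)
        task.cancel()
        try:
            await task
        except asyncio.CancelledError:
            print("worker was cancelled at its await point")

    asyncio.run(main())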
Having said that, I would add that try: ... except: continue is an anti-pattern. You should always catch a more specific exception. If you do catch all exceptions, that should be only to perform some cleanup/logging before re-raising it. If you do so, you won't have a problem with CancelledError. If you absolutely must catch all exceptions, consider at least logging the fact that an exception was raised, so that it doesn't pass silently.
Python 3.8 made it much easier to catch exceptions other than CancelledError because it switched to deriving CancelledError from BaseException. In 3.8 except Exception won't catch it, resolving your issue.
To sum it up:
If you run Python 3.8 or later, use except Exception: traceback.print_exc(); continue.
In Python 3.7 and earlier you need to use the pattern put forth in the question. If it's a lot of typing, you can abstract it into a function, but that will still require some refactoring.
For example, you could define a utility function like this:
    import asyncio
    import traceback

    def run_safe(thunk):
        try:
            thunk()
            return True
        except asyncio.CancelledError:
            raise
        except:
            traceback.print_exc()
            return False
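Inside the worker it might then be used along these lines (sync_call1 etc. are the placeholders from the question):

    async def worker(interval=1):
        while True:
            run_safe(sync_call1)  # any exception is printed, loop continues
            run_safe(sync_call2)
            run_safe(sync_call3)
            await asyncio.sleep(interval)  # cancellation surfaces here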
I want to get more information about possible exceptions raised in the child process. I know it can be done by looking at stderr, like below:
    import subprocess

    try:
        completed_process = subprocess.run(['python3', 'test.py'],
                                           capture_output=True, check=True)
    except subprocess.CalledProcessError as e:
        print(e.stderr)
However, I wonder if I could directly get the exception raised in the child, such as ModuleNotFoundError?
Alternative: some tool for parsing the raw stderr.
This isn't possible, because the process tools used in forking/joining aren't aware of any detail except the return code and maybe the OS signal that caused the exit.
If you want in-process detail like exceptions, you're actually looking for threading.
Threading is a way of having parallelism for a subroutine in the same logical process, and it allows you to capture in-process details like exceptions.
For Python 3, details are here: https://docs.python.org/3/library/threading.html
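As a rough sketch of that idea (using concurrent.futures on top of threads; the failing import is just a stand-in for whatever raises in the child), the exception arrives as a real object instead of a return code:

    from concurrent.futures import ThreadPoolExecutor

    def child():
        import does_not_exist  # stand-in for whatever raises in the child

    with ThreadPoolExecutor(max_workers=1) as executor:
        future = executor.submit(child)
        exc = future.exception()  # the actual exception object, e.g. ModuleNotFoundError
        print(type(exc).__name__, exc)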
I'm looking for some exit code that can be run from a thread but that will kill the main script. It's in Jython, but I can't use java.lang.System.exit() because I still want the Java app I'm in to keep running, and sys.exit() isn't working. Ideally I would like to output a message and then exit.
My code uses the threading.Timer function to run a function after a certain period of time. Here I'm using it to end a for loop that is executing for longer than 1 sec. Here is my code:
    import threading

    def exitFunct():
        pass  # exit code here

    t = threading.Timer(1.0, exitFunct)
    t.start()

    for i in range(1, 2000):
        print i
Well, if you had to, you could call mainThread.stop(). But you shouldn't.
This article explains why what you're trying to do is considered a bad idea.
If you want to kill the current process and you don't care about flushing IO buffers or resetting the terminal, you can use os._exit().
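For example, plugged into the question's timer callback, it could look like this (a sketch in the question's Python 2 / Jython syntax; whether os._exit() really leaves the hosting JVM alive under Jython is worth verifying):

    import os
    import threading

    def exitFunct():
        print "time is up, exiting"  # may be lost: os._exit() skips buffer flushing
        os._exit(1)

    t = threading.Timer(1.0, exitFunct)
    t.start()

    for i in range(1, 2000):
        print i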
I don't know why they made this so hard.