I want to get more information about exceptions raised in a child process. I know it can be done by looking at stderr, like below:
import subprocess

try:
    completed_process = subprocess.run(['python3', 'test.py'],
                                       capture_output=True, check=True)
except subprocess.CalledProcessError as e:
    print(e.stderr)
However, I wonder whether I could directly get the exception raised in the child, such as ModuleNotFoundError.
Alternatively, is there some tool for parsing the raw stderr?
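As a rough sketch of what I mean by parsing (assuming the child prints a standard Python traceback and that stderr is decoded text):

import subprocess

try:
    subprocess.run(['python3', 'test.py'], capture_output=True,
                   check=True, text=True)
except subprocess.CalledProcessError as e:
    # the last non-empty line of a Python traceback names the exception,
    # e.g. "ModuleNotFoundError: No module named 'foo'"
    lines = e.stderr.strip().splitlines()
    if lines:
        exc_name, _, message = lines[-1].partition(': ')
        print(exc_name, message)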
This isn't possible: the process machinery used for forking and joining isn't aware of any detail beyond the return code and, at most, the OS signal that caused the exit.
If you want in-process detail such as exceptions, what you're actually looking for is threading.
Threading is a way of having parallelism for a subroutine within the same logical process, which allows you to capture in-process details like exceptions.
For python3, details are here: https://docs.python.org/3/library/threading.html
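As a minimal sketch of how a thread can surface its exception to the parent (the ExceptionThread name is just for this example):

import threading

class ExceptionThread(threading.Thread):
    # records any exception raised inside run() for inspection after join()
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.exc = None

    def run(self):
        try:
            super().run()
        except BaseException as e:
            self.exc = e

t = ExceptionThread(target=lambda: __import__('no_such_module'))
t.start()
t.join()
if t.exc is not None:
    print(type(t.exc).__name__, t.exc)  # e.g. ModuleNotFoundError

Note that concurrent.futures.ThreadPoolExecutor does this bookkeeping for you: Future.result() re-raises whatever the worker raised.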
What is proper practice for stopping a continuously running Python process? Consider the following code:
from multiprocessing import Process

def run():
    while True:
        pass  # Do stuff

run_process = Process(target=run)

def start_running():
    run_process.start()

def stop_running():
    # ???
    run_process.join()
I would expect that the ideal situation would be to have the run() process end on its own when stop_running() is called. One idea is to signal a semaphore in stop_running(), which is checked in the run() loop, so it knows to break. But I would like to know what common practice is.
There is no "proper" way of doing much of anything in Python. As you are running a process instead of a thread, you have more options, which is good.
If your process does not risk getting stuck completely, and does not risk blocking indefinitely on I/O while waiting for input (for example from a queue), I would use a semaphore or a shared variable to signal the process that it should exit now.
If there is a risk of the process being stuck in a wait, you can get rid of it with run_process.kill() or run_process.terminate(). kill() is the equivalent of kill -9 in the shell and is guaranteed to get the job done, while terminate() sends the milder SIGTERM.
The drawback of killing/terminating a process is that if the process holds any shared objects (queues, shared variables etc.), those can be left corrupted in the other processes that share them as well. It is safe to discard them, but if you keep reading from them, you may occasionally encounter obscure exceptions that are hard to debug.
So, as always, it depends. The variable/semaphore method has its strengths, but if there is a risk of the subprocess being stuck in a sleep or a wait and never checking the condition, you do not achieve anything. If your subprocess does not share any resources with other processes, kill may be simpler and a guaranteed way of getting rid of the process.
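As a minimal sketch of the variable/semaphore approach, using a multiprocessing.Event (the names here are illustrative):

from multiprocessing import Event, Process
import time

def run(stop_event):
    while not stop_event.is_set():
        time.sleep(0.1)  # Do stuff

def main():
    stop_event = Event()
    run_process = Process(target=run, args=(stop_event,))
    run_process.start()
    time.sleep(1)        # let the worker run for a while
    stop_event.set()     # ask the loop to exit on its own
    run_process.join(timeout=5)
    if run_process.is_alive():  # fall back to terminate() if it is stuck
        run_process.terminate()
        run_process.join()

if __name__ == '__main__':
    main()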
I recently observed a rather odd behaviour that happens only on Linux, not on FreeBSD, and I was wondering whether anyone has an explanation, or at least a guess, of what might really be going on.
The problem:
The socket creation method, socket.socket(), sometimes fails. This only happens when multiple threads are creating the sockets; single-threaded works just fine.
To expand on "socket.socket() fails": most of the time I get "error 13: Permission denied", but I have also seen "error 93: Protocol not supported".
Notes:
I have tried this on Ubuntu 18.04 (the bug is there) and FreeBSD 12.0 (the bug is not there)
It only happens when multiple threads are creating sockets
I've used UDP as the protocol for the sockets, although UDP seems to be the more fault-tolerant one. I have tried it with TCP as well; it goes haywire even faster, with similar errors.
It only happens sometimes, so multiple runs might be required, or, as in the code I provide below, an inflated number of threads should also do the trick.
Code:
Here's some minimal code that you can use to reproduce that:
from threading import Thread
import socket

def foo():
    udp = socket.getprotobyname('udp')
    try:
        send_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, udp)
    except Exception as e:
        print(type(e))
        print(repr(e))

def main():
    for _ in range(6000):
        t = Thread(target=foo)
        t.start()

main()
Note:
I have used an artificially large number of threads just to maximize the probability of hitting the error at least once within a run with UDP. As I said earlier, if you try TCP you'll see A LOT of errors with that number of threads. But in reality, even a more realistic number of threads like 20 or even 10 will trigger the error; you'd just likely need multiple runs to observe it.
Wrapping the socket creation in a while loop with try/except causes all subsequent calls to fail as well.
Wrapping the socket creation in try/except and, in the exception-handling bit, restarting the function (i.e. calling it again) works and does not fail.
Any ideas, suggestions or explanations are welcome!!!
P.S.
Technically I know I can get around my problem by having a single thread create as many sockets as I need and pass them as arguments to my other threads, but that is not really the point. I am more interested in why this is happening and how to solve it, rather than in what workarounds there might be, even though those are also welcome. :)
I managed to solve it. The problem comes from getprotobyname() not being thread-safe!
See:
The Linux man page
On another note, the FreeBSD man page also hints that this might cause problems with concurrency; however, my experiments show that it does not. Maybe someone can follow up?
Anyway, a fixed version of the code, for anyone interested, is to look up the protocol number in the main thread (which seems sensible and is what I should have done in the first place) and then pass it as an argument. This both reduces the number of system calls you perform and fixes any concurrency-related problems with it within the program. The code looks as follows:
from threading import Thread
import socket

def foo(proto_num):
    try:
        send_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, proto_num)
    except Exception as e:
        print(type(e))
        print(repr(e))

def main():
    proto_num = socket.getprotobyname('udp')
    for _ in range(6000):
        t = Thread(target=foo, args=(proto_num,))
        t.start()

main()
With this version, socket creation no longer raises "Permission denied" or "Protocol not supported". Also note that with SOCK_DGRAM the proto_num is redundant and can be skipped altogether; the solution is more relevant in case someone wants to create a SOCK_RAW socket.
I am writing a TCP server with asyncio.
I want to add exception handling to my code, like below:
try:
    data = await reader.read(SERVER_IO_BUFFER_SIZE)
except SomeError:
    pass  # handle the error
So I looked at the official asyncio documentation,
but I can't find any information about the errors that may occur.
(link: https://docs.python.org/3/library/asyncio-stream.html#asyncio.StreamReader.read)
How can I get information about the errors that may occur?
The exact errors that may occur will depend on the type of the stream behind the StreamReader. An implementation that talks to a socket will raise IOError, while an implementation that reads data from a database might raise some database-specific errors.
If you are dealing with the network, e.g. through asyncio.open_connection or asyncio.start_server, you can expect instances of IOError and its subclasses. In other words, use except IOError as e.
Also, if the coroutine is cancelled, you can get asyncio.CancelledError at any await. You probably don't want to handle that exception - just let it propagate, and be sure to use the appropriate finally clauses or with context managers to ensure cleanup. (This last part is a good idea regardless of CancelledError.)
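As a minimal sketch of the above, assuming an echo-style handler behind asyncio.start_server (the buffer size and address are placeholders):

import asyncio

async def handle_client(reader, writer):
    try:
        data = await reader.read(4096)
        writer.write(data)
        await writer.drain()
    except IOError as e:  # ConnectionResetError, BrokenPipeError, ...
        print(f'network error: {e!r}')
    finally:
        # runs even on CancelledError, so the connection is always closed
        writer.close()
        await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_client, '127.0.0.1', 8888)
    async with server:
        await server.serve_forever()

asyncio.run(main())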
I have production code that heavily uses asyncio.Semaphore and is suspected to have a deadlock problem.
I already found solutions for how to attach to running Python code with a Unix signal, debug with ipdb.set_trace(), and list all tasks on the event loop with asyncio.Task.all_tasks(). Can I further inspect the stack frame of each task, or view the line of the coroutine that is currently pending on a future, from ipdb?
As the OP observes, further inspection may be obtained with:
[*map(asyncio.Task.print_stack, asyncio.Task.all_tasks())]
(OP is certainly free to self-answer.)
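Note that the Task.all_tasks() classmethod was deprecated in Python 3.7 and removed in 3.9; on newer versions the equivalent one-liner uses the module-level asyncio.all_tasks() instead:

[*map(asyncio.Task.print_stack, asyncio.all_tasks())]

(asyncio.all_tasks() looks up the running loop, so from a debugger you may need to pass the loop explicitly.)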
In my Python program, I'm spawning a process using spawnve(). If the user enters CTRL-C during execution of the spawned program, I want the spawned program to stop without stopping the calling program. So I need to handle two exceptions: KeyboardInterrupt, and OSError, which is raised if the process cannot be spawned. How am I going to use both exceptions together in a try/except block?
You can catch multiple exception types by listing them as a tuple in a single except clause:
try:
    ...
except (KeyboardInterrupt, OSError) as e:
    ...
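As a rough, hypothetical sketch with spawnve() itself (child.py is a placeholder script):

import os
import sys

try:
    # P_WAIT blocks until the child exits and returns its exit status
    status = os.spawnve(os.P_WAIT, sys.executable,
                        [sys.executable, 'child.py'], os.environ)
except (KeyboardInterrupt, OSError) as e:
    # KeyboardInterrupt: CTRL-C reached the parent while waiting;
    # OSError: the process could not be spawned at all
    print(f'spawn failed or was interrupted: {e!r}')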