I have loaded the ngspice shared-library (DLL) circuit simulator in Python using ctypes. I run my simulations in ngspice's background thread and call the ngspice_running() function to check whether the simulation is still running.
The problem is that sometimes, when I input a wrong circuit, the program crashes. I suspect that calling ngspice_running() while the background thread is in this failed state is what makes my Python kernel die. I am using Python exception handling to keep my program running, but it does not help in this case, presumably because a crash inside the shared library is not raised as a Python exception.
How can I avoid this kernel death and keep my Python program running, given that I cannot avoid occasionally inputting a wrong circuit? It would be great if someone could point me in the right direction.
Is there some way for the program to return to a certain line of Python code after an external error from which it cannot otherwise recover?
Thanks in advance!
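One common way around this kind of problem (a sketch, not from the question itself) is to isolate the ctypes/ngspice calls in a separate process, so that a crash inside the shared library kills only a disposable worker rather than the main interpreter. The simulate() function and its placeholder result below are hypothetical stand-ins for the real ctypes calls:

```python
import multiprocessing as mp

def simulate(netlist, q):
    # Hypothetical worker: this is where libngspice would be loaded
    # via ctypes and the netlist run; here we only return a placeholder.
    q.put(("ok", netlist))

def run_isolated(netlist, timeout=60):
    # A crash in the worker (e.g. a segfault in the DLL) leaves the
    # parent interpreter alive, with a nonzero exitcode to inspect.
    q = mp.Queue()
    p = mp.Process(target=simulate, args=(netlist, q))
    p.start()
    p.join(timeout)
    if p.is_alive():      # hung simulation: kill the worker
        p.terminate()
        p.join()
    if p.exitcode != 0:   # crashed or was killed
        return None
    return q.get()
```

The parent then treats a None result as "this circuit was bad, move on to the next one", which is effectively the "return to a known line of code" behavior asked about.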
Related
I have code running in Python 3.7.4 that forks off multiple processes. I believe I'm hitting a known issue (issue6721: https://github.com/python/cpython/issues/50970). I set up the child process to send "progress report" messages through a pipe to the parent process, and noticed that sometimes a log statement doesn't get printed and the code gets stuck in a deadlock.
After reading issue6721, I'm still not sure I understand why the parent might hold the logger handler's lock after a log statement has finished executing (i.e. the line that logs has executed and execution has moved on to the next line of code). I completely get that in C++ the compiler might rearrange instructions, but I don't fully understand it in the context of Python. In C++ I can use barrier instructions to stop the compiler from moving instructions past a point. Is there something similar in Python to avoid having a held lock copied into the child process?
I have seen solutions using "atfork", a library that seems to be unsupported, so I can't really use it.
Does anyone know a reliable and standard solution to this problem?
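For reference, one standard approach on POSIX systems (a sketch, not necessarily the only answer) is os.register_at_fork(), available since Python 3.7: the child recreates each logging handler's lock immediately after the fork, so it can never start with a lock inherited in the held state:

```python
import logging
import os

def _reinit_handler_locks():
    # Runs in the child right after fork(): recreate each handler's
    # lock so the child never starts with a lock copied mid-held.
    for handler in logging.getLogger().handlers:
        handler.createLock()

# POSIX-only; registered callbacks fire on every subsequent fork.
os.register_at_fork(after_in_child=_reinit_handler_locks)
```

An alternative that sidesteps the problem entirely is multiprocessing.get_context("spawn"): a spawned child starts a fresh interpreter and inherits no locks at all, at the cost of slower process startup.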
We have an application that invokes a Python interpreter from within C++ code. The C++ code is parallelized with MPI, and the interpreter is used to run Python scripts (which may or may not pass messages via mpi4py).

Here's the problem: when we run the code serially and a Python script contains an error, we get the interpreter's usual diagnostic message on stderr (the line where the error occurs, the type of error, and so on). If, however, we run the code in parallel over multiple cores, we get no diagnostic info from the interpreter at all. On the C++ side we know that an error occurred in the script, but that is all. Of course, this makes debugging much harder, since some errors may only occur when running in parallel.

So my question is how to redirect error messages from the interpreter to a file, or other ideas for dealing with this situation.
The problem had to do with how we exited the code after an error occurred on the Python side.
Originally we used MPI_Abort(); switching to std::exit(0) solved the issue.
Now, when a script generates an error, the Python interpreter messages are correctly displayed (from every process!).
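As a complementary workaround (a sketch, assuming mpi4py is available; the file-name pattern here is made up), each rank can also redirect its own stderr to a file, so tracebacks survive regardless of how the MPI launcher handles console output:

```python
import sys

def redirect_stderr_per_rank(rank):
    # Line-buffered so partial tracebacks are flushed even if the
    # process is torn down abruptly.
    sys.stderr = open("stderr_rank_%d.log" % rank, "w", buffering=1)

# With mpi4py this would typically be called early in the script as:
#   from mpi4py import MPI
#   redirect_stderr_per_rank(MPI.COMM_WORLD.Get_rank())
```

Each rank then leaves behind its own log file, which also helps with errors that only one rank hits.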
I'm working with a program that controls a car.
The program is pretty large and was written by other people,
so I don't completely understand how it works.
But I have to use it to make the car move.
The problem I'm facing is that the program often stalls with no error and no segmentation fault.
If it crashed, I could trace the cause with gdb or something similar.
But it does not crash; it silently stops.
How can I find the cause?
From your description (the program silently stops) I understand that your program simply and gracefully exited, but not through your expected flow. This can happen for many reasons: for example, the program may enter an illegal state, and some sub-component, such as the standard library or another library, may decide the process should exit and call the C runtime's exit() or Kernel32!ExitProcess() directly. The best way to debug this flow is to attach a debugger, set breakpoints on those two functions, and find out who is calling them.

If instead you mean that your program enters a deadlock and hangs, you will also need to attach a debugger and find out which thread is stuck.
I am using the Spyder interface with a Python script that works through many time steps in succession. At a random point in my code, the process terminates and the console says "Kernel died, restarting...". I tried running the same script in PyCharm and the process also terminates, seemingly at a random point, with an exit code that I assume means the same thing.
Anyone have any tips on how to get rid of this problem? Even a workaround so I can get some work done. This is incredibly frustrating.
Note: I recently moved and got a new router and internet service, not sure if that might affect things.
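One low-effort diagnostic worth trying (a sketch; it won't fix the crash, but it will usually identify where it happens) is the standard library's faulthandler module, which prints a Python traceback when the interpreter dies at the C level, and can also dump stacks periodically to bracket a "random" termination point:

```python
import faulthandler
import sys

# Dump a Python traceback to stderr if the interpreter crashes hard
# (segfault, abort, illegal instruction, ...).
faulthandler.enable(file=sys.stderr)

# Optional: dump all thread stacks every 60 s while the script runs,
# then cancel the timer once it finishes cleanly.
faulthandler.dump_traceback_later(60, repeat=True)
# ... run the time-stepping script here ...
faulthandler.cancel_dump_traceback_later()
```

The last traceback printed before the kernel dies tells you which time step and which call was executing, which narrows the search considerably.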
I've got a PyQt4 project with a very weird error: under certain circumstances the main thread simply dies and I have no idea why.
No exception is reported or shown. I've tried wrapping a try/except around app.exec_(), and nothing.
sys.exit() is not called.
Does anybody have any tips? Is there a tool to see what signals/messages are passed around inside Qt, or something else?
It is likely that the application is crashing in Qt. Try running the program with gdb.
gdb --args python myprog.py
When the program crashes, this should give you a backtrace that may shed some light on what is going on.
Note that having debug symbols available for Qt will make the backtrace more useful. On Ubuntu or Debian systems, the libqt4-dbg package can be installed to make these debug symbols available.
Reading the backtrace with gdb is the first step, as suggested (after the program crashes, type 'backtrace' in gdb). In many cases, though, this will not lead to an obvious solution.
Here's a collection of things to look out for that cause crashes:
What are good practices for avoiding crashes / hangs in PyQt?
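One frequent culprit in this situation (a general PyQt pattern, not confirmed as the cause above) is an exception raised inside a slot: PyQt can swallow it or, depending on the version, abort the process without a traceback, and a try/except around app.exec_() never sees it because the exception is handled inside the event loop. Installing a global excepthook makes such exceptions visible:

```python
import sys
import traceback

def excepthook(exc_type, exc_value, exc_tb):
    # Print the traceback instead of letting Qt discard it or abort;
    # a real application might log it or show an error dialog instead.
    traceback.print_exception(exc_type, exc_value, exc_tb)

# Must be installed before app.exec_() is entered.
sys.excepthook = excepthook
```

With the hook in place, slot exceptions at least leave a traceback behind, which distinguishes a Python-level error from a genuine crash inside Qt.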