No matter what I try, I can't get a process to start. This simple code doesn't work:
import multiprocessing

def function():
    print("function started")

function_process = multiprocessing.Process(target=function)
function_process.start()
function_process.join()
The output of this code is simply nothing. If I print function_process after this, it returns <Process name='Process-1' pid=13432 parent=7564 stopped exitcode=0>. Adding if __name__ == "__main__" does nothing. Is there something I'm missing here?
Works for me:
$ python mp.py
function started
Maybe the code you show is part of a larger program, and this code is never called?
This is the code:
import multiprocessing

def function():
    print("function started")

function_process = multiprocessing.Process(target=function)
function_process.start()
function_process.join()
Some IDEs redirect STDIN, STDERR and STDOUT, and data gets lost. The problem does not happen with Visual Studio Code or PyCharm.
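If you suspect this, a quick way to confirm the child process really runs is to have it write somewhere other than stdout. A minimal sketch (the file name is arbitrary, chosen only for this example):
import multiprocessing

def function():
    # write to a file so the result survives even if the IDE swallows stdout
    with open("child_output.txt", "w") as f:
        f.write("function started\n")

if __name__ == "__main__":
    function_process = multiprocessing.Process(target=function)
    function_process.start()
    function_process.join()
    # read the file back in the parent to confirm the child actually ran
    with open("child_output.txt") as f:
        print(f.read())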
I need to test functions that use pyautogui. For example, to check whether pressing the keyboard key "a" actually types an "a". How can I reproduce this? Is there a way to capture the key that has been pressed?
Ideally it should work on both Windows and Linux.
The PyAutoGUI unit tests have something like this. See the TypewriteThread code for details. Set up a thread that waits briefly and then calls pyautogui.typewrite(), while the main thread calls input() to accept the typed text. Make sure your window is in focus; the typewrite() thread ends the input() call by pressing Enter (the \n character).
import time
import threading
import pyautogui

class TypewriteThread(threading.Thread):
    def __init__(self, msg, interval=0.0):
        super(TypewriteThread, self).__init__()
        self.msg = msg
        self.interval = interval

    def run(self):
        time.sleep(0.25)
        pyautogui.typewrite(self.msg, self.interval)

t = TypewriteThread('Hello world!\n')
t.start()
response = input()
print(response == 'Hello world!')
I have a python3 program that starts a second thread (besides the main thread) for handling some events asynchronously. Ideally, my program works without a flaw and never has unhandled exceptions. But stuff happens. When/if there is an exception, I want the whole interpreter to exit with an error code as if it had been a single thread. Is that possible?
Right now, if an exception occurs on the spawned thread, it prints out the usual error information, but doesn't exit. The main thread just keeps going.
Example
import threading
import time

def countdown(initial):
    while True:
        print(initial[0])
        initial = initial[1:]
        time.sleep(1)

if __name__ == '__main__':
    helper = threading.Thread(target=countdown, args=['failsoon'])
    helper.start()
    time.sleep(0.5)
    # countdown('THISWILLTAKELONGERTOFAILBECAUSEITSMOREDATA')
    countdown('FAST')
The countdown will eventually fail to access [0] from the string because it has been emptied, causing an IndexError: string index out of range. The goal is that whether the main or helper thread dies first, the whole program dies altogether, but the stack trace info is still output.
Solutions Tried
After some digging, my thought was to use sys.excepthook. I added the following:
def killAll(etype, value, tb):
    print('KILL ALL')
    traceback.print_exception(etype, value, tb)
    os.kill(os.getpid(), signal.SIGKILL)

sys.excepthook = killAll
This works if the main thread is the one that dies first. But in the other case it does not. This seems to be a known issue (https://bugs.python.org/issue1230540). I will try some of the workarounds there.
While the example shows a main thread and a helper thread which I created, I'm interested in the general case where I may be running someone else's library that launches a thread.
Well, you could simply raise an error in your thread and have the main thread handle and report that error. From there you could even terminate the program.
For example on your worker thread:
try:
    self.result = self.do_something_dangerous()
except Exception as e:
    import sys
    self.exc_info = sys.exc_info()
and on main thread:
if self.exc_info:
    raise self.exc_info[1].with_traceback(self.exc_info[2])
return self.result
So to give you a more complete picture, your code might look like this:
import threading

class ExcThread(threading.Thread):
    def excRun(self):
        # Where your core program will run
        pass

    def run(self):
        self.exc = None
        try:
            # Possibly throws an exception
            self.excRun()
        except:
            import sys
            self.exc = sys.exc_info()
            # Save details of the exception thrown.
            # DON'T rethrow; just complete the function,
            # e.g. storing variables or state as needed.

    def join(self):
        threading.Thread.join(self)
        if self.exc:
            msg = "Thread '%s' threw an exception: %s" % (self.getName(), self.exc[1])
            new_exc = Exception(msg)
            raise new_exc.with_traceback(self.exc[2])
(I added an extra line to keep track of which thread caused the error, in case you have multiple threads; it's also good practice to name them.)
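For illustration, a hypothetical subclass and usage might look like this (the Worker class and its error message are made up for the example):
class Worker(ExcThread):
    def excRun(self):
        raise ValueError("something went wrong in the worker")

w = Worker(name="worker-1")
w.start()
w.join()  # re-raises the worker's exception here, in the calling thread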
My solution ended up being a happy marriage between the solution posted here and the SIGKILL solution piece from above. I added the following killall.py submodule to my package:
import threading
import sys
import traceback
import os
import signal

def sendKillSignal(etype, value, tb):
    print('KILL ALL')
    traceback.print_exception(etype, value, tb)
    os.kill(os.getpid(), signal.SIGKILL)

original_init = threading.Thread.__init__

def patched_init(self, *args, **kwargs):
    print("thread init'ed")
    original_init(self, *args, **kwargs)
    original_run = self.run

    def patched_run(*args, **kw):
        try:
            original_run(*args, **kw)
        except:
            sys.excepthook(*sys.exc_info())

    self.run = patched_run

def install():
    sys.excepthook = sendKillSignal
    threading.Thread.__init__ = patched_init
And then ran install() right away, before any other threads are launched (of my own creation or from other imported libraries).
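Usage is then only a couple of lines. A sketch, assuming the package is called mypackage (the name is only for illustration):
from mypackage import killall

killall.install()  # must run before any threads are created

import threading

# any unhandled exception in a thread now calls sendKillSignal:
# it prints the traceback and SIGKILLs the whole process
t = threading.Thread(target=lambda: 1 / 0)
t.start()
t.join()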
Just wanted to share my simple solution.
In my case I wanted the exception to display as normal but then immediately stop the program. I was able to accomplish this by starting a timer thread with a small delay to call os._exit before raising the exception.
import os
import threading

def raise_and_exit(args):
    threading.Timer(0.01, os._exit, args=(1,)).start()
    raise args[0]

threading.excepthook = raise_and_exit
Python 3.8 added threading.excepthook which makes it possible to handle this more cleanly.
I wrote the package "unhandled_exit" to do just that. It basically adds os._exit(1) after the default handler. This means you get the normal backtrace before the process exits.
Package is published to pypi here: https://pypi.org/project/unhandled_exit/
Code is here: https://github.com/rfjakob/unhandled_exit/blob/master/unhandled_exit/__init__.py
Usage is simply:
import unhandled_exit
unhandled_exit.activate()
I am trying to write some kind of renderer for the command line that should be able to print data from stdin and from another data source, using asyncio and blessed, which is an improved version of python-blessings.
Here is what I have so far:
import asyncio
from blessed import Terminal

@asyncio.coroutine
def render(term):
    while True:
        received = yield
        if received:
            print(term.bold + received + term.normal)

async def ping(renderer):
    while True:
        renderer.send('ping')
        await asyncio.sleep(1)

async def input_reader(term, renderer):
    while True:
        with term.cbreak():
            val = term.inkey()
            if val.is_sequence:
                renderer.send("got sequence: {0}.".format((str(val), val.name, val.code)))
            elif val:
                renderer.send("got {0}.".format(val))

async def client():
    term = Terminal()
    renderer = render(term)
    render_task = asyncio.ensure_future(renderer)
    pinger = asyncio.ensure_future(ping(renderer))
    inputter = asyncio.ensure_future(input_reader(term, renderer))
    done, pending = await asyncio.wait(
        [pinger, inputter, renderer],
        return_when=asyncio.FIRST_COMPLETED,
    )
    for task in pending:
        task.cancel()

if __name__ == '__main__':
    asyncio.get_event_loop().run_until_complete(client())
    asyncio.get_event_loop().run_forever()
For learning and testing purposes there is just a dumb ping that sends 'ping' each second, and another routine that should grab key inputs and also send them to my renderer.
With this code, ping only appears once in the command line, while the input_reader works as expected. When I replace input_reader with a pong similar to ping, everything is fine.
This is how it looks when typing 'pong', even if it takes ten seconds to type 'pong':
$ python async_term.py
ping
got p.
got o.
got n.
got g.
It seems like blessed is not built to work correctly with asyncio: inkey() is a blocking method. This will block any other coroutine.
You can use a loop with kbhit() and await asyncio.sleep() to yield control to other coroutines - but this is not a clean asyncio solution.
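A rough sketch of that workaround (still not a clean asyncio solution, and assuming blessed's kbhit()/inkey() timeout arguments behave as documented) could replace input_reader with something like:
async def input_reader(term, renderer):
    with term.cbreak():
        while True:
            # poll for a keypress without blocking the event loop
            if term.kbhit(timeout=0):
                val = term.inkey(timeout=0)
                if val.is_sequence:
                    renderer.send("got sequence: {0}.".format((str(val), val.name, val.code)))
                elif val:
                    renderer.send("got {0}.".format(val))
            # yield control so ping() and the renderer keep running
            await asyncio.sleep(0.05)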
I have a question on multiprocessing and tkinter. I am having some problems getting my process to run in parallel with the tkinter GUI. I have created a simple example to practice with and have been reading up to understand the basics of multiprocessing. However, when applying them to tkinter, only one process runs at a time (Using Multiprocessing module for updating Tkinter GUI). Additionally, when I added the queue to communicate between processes (How to use multiprocessing queue in Python?), the process won't even start.
Goal:
I would like to have one process that counts down and puts the values in the queue, and one that updates tkinter every second and shows me the values.
All advice is kindly appreciated
Kind regards,
S
EDIT: I want the data to be available when the after method is called. So the problem is not with the after function, but with the method being called by the after function. It takes 0.5 seconds to complete the calculation each time. Consequently the GUI is unresponsive for half a second, each second.
EDIT2: Corrections were made to the code based on the feedback, but this code is still not running.
import multiprocessing
import time
from tkinter import Tk, Label

class Countdown():
    """Countdown prior to changing the settings of the flows"""

    def __init__(self, q):
        self.master = Tk()
        self.label = Label(self.master, text="", width=10)
        self.label.pack()
        self.counting(q)
        # Countdown()

    def counting(self, q):
        try:
            self.i = q.get()
        except:
            self.label.after(1000, self.counting, q)

        if int(self.i) <= 0:
            print("Go")
            self.master.destroy()
        else:
            self.label.configure(text="%d" % self.i)
            print(self.i)
            self.label.after(1000, self.counting, q)

def printX(q):
    for i in range(10):
        print("test")
        q.put(9 - i)
        time.sleep(1)
    return

if __name__ == '__main__':
    q = multiprocessing.Queue()
    n = multiprocessing.Process(name='Process2', target=printX, args=(q,))
    n.start()
    GUI = Countdown(q)
    GUI.master.mainloop()
Multiprocessing does not function inside of the interactive IPython notebook. See Multiprocessing working in Python but not in iPython. As an alternative you can use Spyder.
No code will run after you call mainloop until the window has been destroyed. You need to start your other process before you call mainloop.
You are calling the after function incorrectly. The second argument must be the function to call, not a call to the function.
If you call it like
self.label.after(1000, self.counting(q))
it will call counting(q) immediately and use its return value as the function to schedule.
To assign a function with arguments the syntax is
self.label.after(1000, self.counting, q)
Also, start your second process before you create the window and call counting.
n = multiprocessing.Process(name='Process2', target=printX, args = (q,))
n.start()
GUI = Countdown(q)
GUI.master.mainloop()
Also, you only need to call mainloop once. Either position you have works, but you just need one.
Edit: Also, you need to put (9-i) in the queue to make it count down:
q.put(9-i)
inside the printX function.
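Pulling those points together, a minimal sketch of how the pieces could fit. Note the non-blocking q.get_nowait() so the Tk event loop is never blocked; that detail is my own assumption, not part of the original code:
import multiprocessing
import time
from tkinter import Tk, Label

class Countdown():
    def __init__(self, q):
        self.master = Tk()
        self.label = Label(self.master, text="", width=10)
        self.label.pack()
        self.counting(q)

    def counting(self, q):
        try:
            i = q.get_nowait()  # don't block the Tk event loop
        except Exception:
            self.label.after(100, self.counting, q)  # nothing yet, poll again soon
            return
        if i <= 0:
            print("Go")
            self.master.destroy()
        else:
            self.label.configure(text="%d" % i)
            self.label.after(1000, self.counting, q)

def printX(q):
    for i in range(10):
        q.put(9 - i)
        time.sleep(1)

if __name__ == '__main__':
    q = multiprocessing.Queue()
    n = multiprocessing.Process(name='Process2', target=printX, args=(q,))
    n.start()  # start the worker before entering mainloop
    GUI = Countdown(q)
    GUI.master.mainloop()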
Here's my code:
import threading
import queue

qq = queue.Queue(10)

def x11grab(n):
    print('haha')
    while True:
        a = 'abcd' + str(n)
        n += 1
        qq.put(a)
        print('put queue:', a)

def rtpsend():
    while True:
        s = qq.get()
        head = s[:4]
        body = s[4:]
        print('head', head)
        print('body', body)

t1 = threading.Thread(target=x11grab, args=(1,))
t2 = threading.Thread(target=rtpsend)
t1.start
t2.start
I want the x11grab() function to put the strings 'abcd1', 'abcd2', ... into the queue, and the rtpsend() function to get the strings from the queue and display them. It's a demo, but it doesn't work. I think your advice could be helpful. :-)
You never start your threads! Change
t1.start
t2.start
to
t1.start()
t2.start()