call method on running process from parent process - python-3.x

I'm trying to write a program that interfaces with hardware via pyserial according to this diagram: https://github.com/kiyoshi7/Intrument/blob/master/Idea.gif . My problem is that I don't know how to tell the child process to run a method.
I tried reducing my problem down to its essence: I want to call the method request() from the main script. I just don't know how to handle two-way communication like this; in examples using a queue I only see data being shared, or I can't understand the examples.
import multiprocessing
from time import sleep

class spawn:
    def __init__(self, _number, _max):
        self._number = _number
        self._max = _max
        self.Update()

    def request(self, x):
        print("{} was requested.".format(x))

    def Update(self):
        while True:
            print("Spawned {} of {}".format(self._number, self._max))
            sleep(2)

if __name__ == '__main__':
    p = multiprocessing.Process(target=spawn, args=(1, 1))
    p.start()
    sleep(5)
    p.request(2)  # here I'm trying to run the method I want
Update, thanks to Carcigenicate:
import multiprocessing
from time import sleep
from operator import methodcaller

class Spawn:
    def __init__(self, _number, _max):
        self._number = _number
        self._max = _max
        # Don't call update here

    def request(self, x):
        print("{} was requested.".format(x))

    def update(self):
        while True:
            print("Spawned {} of {}".format(self._number, self._max))
            sleep(2)

if __name__ == '__main__':
    spawn = Spawn(1, 1)  # Create the object as normal
    p = multiprocessing.Process(target=methodcaller("update"), args=(spawn,))  # Run the loop in the process
    p.start()
    while True:
        sleep(1.5)
        spawn.request(2)  # Now you can reference the "spawn"

You're going to need to rearrange things a bit. I would not do the long-running (infinite) work in the constructor. That's generally poor practice, and it's complicating things here. I would instead initialize the object, then run the loop in a separate process:
import multiprocessing
from time import sleep
from operator import methodcaller

class Spawn:
    def __init__(self, _number, _max):
        self._number = _number
        self._max = _max
        # Don't call update here

    def request(self, x):
        print("{} was requested.".format(x))

    def update(self):
        while True:
            print("Spawned {} of {}".format(self._number, self._max))
            sleep(2)

if __name__ == '__main__':
    spawn = Spawn(1, 1)  # Create the object as normal
    p = multiprocessing.Process(target=methodcaller("update"), args=(spawn,))  # Run the loop in the process
    p.start()
    spawn.request(2)  # Now you can reference the "spawn" object to do whatever you like
Unfortunately, since Process requires that its target argument be picklable, you can't just use a lambda wrapper like I originally had (whoops). I'm using operator.methodcaller to create a picklable wrapper. methodcaller("update") returns a function that calls update on whatever is given to it, then we give it spawn to call it on.
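To see what methodcaller does in isolation, here is a quick self-contained sketch (Greeter is just an illustrative class, not part of the code above):

from operator import methodcaller

class Greeter:
    def hello(self):
        print("hello")

caller = methodcaller("hello")  # a picklable "call .hello() on the argument"
caller(Greeter())               # equivalent to Greeter().hello()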
You could also create a wrapper function using def:
def wrapper():
    spawn.update()

. . .

p = multiprocessing.Process(target=wrapper)  # Run the loop in the process
But that only works if it's feasible to have wrapper as a global function. You may need to play around to find out what works best, or use a multiprocessing library that doesn't require picklable tasks.
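For the two-way communication the question asks about, another option worth sketching is to send commands to the child process over a multiprocessing.Queue and have the child's loop poll for them. This is a minimal illustration rather than the approach above; the (command, argument) tuple format is an assumption:

import multiprocessing
from time import sleep

def worker(cmd_queue):
    # Child loop: do periodic work, and poll the queue for commands
    while True:
        while not cmd_queue.empty():
            cmd, arg = cmd_queue.get()  # illustrative (command, argument) tuples
            if cmd == "request":
                print("{} was requested.".format(arg))
        print("working...")
        sleep(2)

if __name__ == '__main__':
    cmd_queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(cmd_queue,))
    p.start()
    sleep(5)
    cmd_queue.put(("request", 2))  # "call" the method inside the child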
Note, please use proper Python naming conventions. Class names start with capitals, and method names are lowercase. I fixed that up in the code I posted.

Related

Is there a way to write code usable for both multiprocessing and single-process execution in Python?

The problem is the lock. The multiprocessing version needs a lock, but the single-process version does not. For example, consider the following code:
class Test:
    def __init__(self, rlock=None):
        self.tlock = rlock

    def do_test(self, invalue):
        with self.tlock:
            return invalue + 1
For multiprocessing I need to use tlock, but when I use the class in a single process I don't need it. So the line with self.tlock doesn't make sense for single-process execution.
My immediate thought is to write in the following way:
def do_test(self, invalue):
    if self.tlock is not None:
        with self.tlock:
            return invalue + 1
    else:
        return invalue + 1
But this looks awkward, as I would have a handful of methods with this pattern inside the class.
Is there any elegant and efficient way to write the code for the code reuse?
You can create a dummy class that can support context managers, and use that instead of storing the lock if it is None (i.e, no multiprocessing is involved):
class DummyLock:
    def __enter__(self):
        pass

    def __exit__(self, exc_type, exc_val, exc_tb):
        pass

class Test:
    def __init__(self, rlock=None):
        self.tlock = rlock
        if self.tlock is None:
            self.tlock = DummyLock()
No other methods in the class need to change, unless they are calling lock-specific methods directly (like self.tlock.acquire()) instead of using the context manager (with self.tlock:).
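As an aside, since Python 3.7 the standard library ships contextlib.nullcontext, which does the same job as the hand-written DummyLock:

from contextlib import nullcontext

class Test:
    def __init__(self, rlock=None):
        # nullcontext() is a no-op context manager standing in for the lock
        self.tlock = rlock if rlock is not None else nullcontext()

    def do_test(self, invalue):
        with self.tlock:  # works for both a real lock and the null context
            return invalue + 1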

Python - How can I implement a 'stoppable' thread?

There is a solution posted here to create a stoppable thread. However, I am having some problems understanding how to implement this solution.
Using the code...
import threading

class StoppableThread(threading.Thread):
    """Thread class with a stop() method. The thread itself has to check
    regularly for the stopped() condition."""

    def __init__(self):
        super(StoppableThread, self).__init__()
        self._stop_event = threading.Event()

    def stop(self):
        self._stop_event.set()

    def stopped(self):
        return self._stop_event.is_set()
How can I create a thread that runs a function that prints "Hello" to the terminal every 1 second, and after 5 seconds use .stop() to stop the looping function/thread?
Again, I am having trouble understanding how to implement this stopping solution; here is what I have so far.
import threading
import time

class StoppableThread(threading.Thread):
    """Thread class with a stop() method. The thread itself has to check
    regularly for the stopped() condition."""

    def __init__(self):
        super(StoppableThread, self).__init__()
        self._stop_event = threading.Event()

    def stop(self):
        self._stop_event.set()

    def stopped(self):
        return self._stop_event.is_set()

def funct():
    while not testthread.stopped():
        time.sleep(1)
        print("Hello")

testthread = StoppableThread()
testthread.start()
time.sleep(5)
testthread.stop()
The code above creates the thread testthread, which can be stopped by the testthread.stop() command. From what I understand this just creates an empty thread... Is there a way I can create a thread that runs funct() and ends when I use .stop()? Basically, I do not know how to make the StoppableThread class run the funct() function as a thread.
Example of a regular threaded function...
import threading
import time

def example():
    x = 0
    while x < 5:
        time.sleep(1)
        print("Hello")
        x = x + 1

t = threading.Thread(target=example)
t.start()
t.join()
# example of a regular threaded function
There are a couple of problems with how you are using the code in your original example. First of all, you are not passing any constructor arguments to the base constructor. This is a problem because, as you can see in the plain-Thread example, constructor arguments are often necessary. You should rewrite StoppableThread.__init__ as follows:
def __init__(self, *args, **kwargs):
    super().__init__(*args, **kwargs)
    self._stop_event = threading.Event()
Since you are using Python 3, you do not need to provide arguments to super. Now you can do
testthread = StoppableThread(target=funct)
This is still not an optimal solution, because funct uses an external variable, testthread, to stop itself. While this is OK-ish for a tiny example like yours, using global variables like that normally causes a huge maintenance burden and you don't want to do it. A much better solution would be to extend the generic StoppableThread class for your particular task, so you can access self properly:
class MyTask(StoppableThread):
    def run(self):
        while not self.stopped():
            time.sleep(1)
            print("Hello")
testthread = MyTask()
testthread.start()
time.sleep(5)
testthread.stop()
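One optional refinement, offered as a suggestion rather than part of the original answer: if run waits on the event itself instead of sleeping, stop() takes effect immediately rather than after the current sleep finishes:

class MyTask(StoppableThread):
    def run(self):
        # Event.wait(timeout) returns True as soon as stop() sets the event,
        # so the loop exits without waiting out the full second
        while not self._stop_event.wait(1):
            print("Hello")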
If you absolutely do not want to extend StoppableThread, you can use the current_thread function in your task in preference to reading a global variable:
from threading import current_thread
import time

def funct():
    while not current_thread().stopped():
        time.sleep(1)
        print("Hello")

testthread = StoppableThread(target=funct)
testthread.start()
time.sleep(5)
testthread.stop()
I found an implementation of a stoppable thread that does not rely on the thread checking whether it should keep running - it "injects" an exception into the wrapped function. That will work as long as you don't do something like:

while True:
    try:
        do_something()
    except:
        pass

Definitely worth looking at!
See: https://github.com/kata198/func_timeout
Maybe I will extend my wrapt_timeout_decorator with that kind of mechanism; you can find it here: https://github.com/bitranox/wrapt_timeout_decorator
Inspired by the above solution I created a small library, ants, for this problem.
Example
from ants import worker

@worker
def do_stuff():
    ...
    # thread code
    ...

do_stuff.start()
...
do_stuff.stop()
In the above example do_stuff will run in a separate thread, called inside a while 1: loop.
You can also have triggering events; e.g. replacing do_stuff.start() with do_stuff.start(lambda: time.sleep(5)) in the above will make it trigger every 5 seconds.
The library is very new and work is ongoing on GitHub: https://github.com/fa1k3n/ants.git

python apply_async does not call method

I have a method which needs to process a large database that would take hours/days to dig through. The arguments are stored in a (long) list, of which at most X should be processed in one batch. The method does not need to return anything, yet I return "True" for "fun"...
The function works perfectly when I iterate through it linearly (generating/appending the results in other tables not seen here), yet I am unable to get apply_async or map_async to work (it worked before in other projects).
Any hint about what I might be doing wrong would be appreciated, thanks in advance!
See code below:
import multiprocessing as mp

class mainClass:
    # loads of stuff

def main():
    multiprocess = True
    batchSize = 35
    mC = mainClass()
    while True:
        toCheck = [key for key, value in mC.lCheckSet.items()]  # the tasks are stored in a dictionary; I refer to them by their keys, which I turn into a list here for iteration
        if multiprocess == False:
            # this version works perfectly fine
            for i in toCheck[:batchSize]:
                mC.check(i)
        else:
            # the async version does not, either with apply_async...
            with mp.Pool(processes=8) as pool:
                temp = [pool.apply_async(mC.check, args=(toCheck[n],)) for n in range(len(toCheck[:batchSize]))]
                results = [t.get() for t in temp]
            # ...or as map_async
            pool = mp.Pool(processes=8)
            temp = pool.map_async(mC.check, toCheck[:batchSize])
            pool.close()
            pool.join()

if __name__ == "__main__":
    main()
The "smell" here is that you are instantiating your maincClass on the main Process, just once, and then trying to call a method on it on the different processes - but note that when you pass mC.check to your process pool, it is a method already bound to the class instantiated in this process.
I'd guess there is where your problem lies. Although that could possibly work - and it does - I made this simplified version and it works as intended :
import multiprocessing as mp
import random, time

class MainClass:
    def __init__(self):
        self.value = 1

    def check(self, arg):
        time.sleep(random.uniform(0.01, 0.3))
        print(id(self), self.value, arg)

def main():
    mc = MainClass()
    with mp.Pool(processes=4) as pool:
        temp = [pool.apply_async(mc.check, (i,)) for i in range(8)]
        results = [t.get() for t in temp]

if __name__ == "__main__":
    main()
(Have you tried just adding some prints to make sure the method is not running at all?)
So, the problem likely lies in some complex state in your mainClass that does not make it to the parallel processes in a good way. A possible workaround is to instantiate your main class inside each process - that can easily be done, since multiprocessing lets you get the current_process, and you can use that object as a namespace to keep data in the worker process across different calls to apply_async.
So, create a new check function like the one below - and instead of instantiating your main class in the main process, instantiate it inside each process in the pool:
import multiprocessing as mp
import random, time

def check(arg):
    # keep one MainClass instance per worker process, stored on the process object
    process = mp.current_process()
    if not hasattr(process, "main_class"):
        process.main_class = MainClass()
    process.main_class.check(arg)

class MainClass:
    def __init__(self):
        self.value = random.randrange(100)

    def check(self, arg):
        time.sleep(random.uniform(0.01, 0.3))
        print(id(self), self.value, arg)

def main():
    with mp.Pool(processes=2) as pool:
        temp = [pool.apply_async(check, (i,)) for i in range(8)]
        results = [t.get() for t in temp]

if __name__ == "__main__":
    main()
I got to this question with the same problem: my apply_async calls were not being called at all. In my case, the reason was that the number of parameters in the apply_async call differed from the number in the function declaration.
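As a debugging aid for that kind of silent failure, it helps to remember that apply_async defers any worker exception until .get() is called on the result, and that an error_callback can report it as it happens. A small sketch, with the argument mismatch made deliberately:

import multiprocessing as mp

def check(a, b):  # expects two arguments
    return a + b

def report(exc):
    print("worker failed:", exc)

if __name__ == "__main__":
    with mp.Pool(processes=2) as pool:
        # Called with the wrong arity on purpose; the TypeError stays hidden
        # until .get() re-raises it (or error_callback reports it)
        res = pool.apply_async(check, (1,), error_callback=report)
        try:
            res.get(timeout=5)
        except TypeError as e:
            print("caught:", e)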

An "On Stop" method for python Threads

A friend and I are having a programming challenge over who can make a better VOS (Virtual Operating System), and currently mine runs custom programs from threads within the program. I am using Tkinter at the moment, so the separate threads have their own self.master.mainloop(). I have all the threads stored in a list, but I was wondering whether I could call a function on the thread which would call a subroutine in the program telling it to do self.master.destroy(). Is there any way to do this?
I would like something along the lines of
class ToBeThread():
    def __init__(self):
        self.master = Tk()
        self.master.mainloop()

    def on_stop(self, reason):
        self.master.destroy()
Then in my main class
from threading import Thread

thread = Thread(ToBeThread())
thread.setDaemon(True)
thread.on_stop += ToBeThread.on_stop  # Similar to how it is done in C#
thread.start()
...
...
thread.stop()  # This calls the functions related to the "on_stop"
I have found a way to do this, so for anyone wondering, here is what I did:
from threading import Thread
import time

class MyThread(Thread):
    def __init__(self, method, delay=-1):
        Thread.__init__(self)
        self.method = method
        self._running = False
        self.delay = delay
        self.setDaemon(True)

    def run(self):
        self._running = True
        while self._running == True:
            self.method()
            if self.delay != -1:
                time.sleep(self.delay)

    def stop(self):
        self._running = False
This allows me to pass a function in through the initialiser, and it will run it every x seconds (or as many times as possible) until I call thread.stop().
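A minimal usage sketch of the class above (tick is just an illustrative function):

import time

def tick():
    print("tick")

t = MyThread(tick, delay=1)  # run tick() roughly once per second
t.start()
time.sleep(5)
t.stop()  # the loop in run() exits on its next pass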

Threaded result not giving same result as un-threaded result (python)

I have created a program to generate data points of functions that I later plot. The program takes a class which defines the function and creates a data-outputting object which, when called, generates the data to a text file. To make the whole process faster I put the jobs in threads; however, when I do, the data generated is not always correct. I have attached a picture to show what I mean:
Here are some of the relevant bits of code:
from queue import Queue
import threading
import time

queueLock = threading.Lock()
workQueue = Queue(10)

def process_data(threadName, q, queue_window, done):
    while not done.get():
        queueLock.acquire()  # check whether or not the queue is locked
        if not workQueue.empty():
            data = q.get()
            # data is the Plot object to be run
            queueLock.release()
            data.parent_window = queue_window
            data.process()
        else:
            queueLock.release()
            time.sleep(1)
class WorkThread(threading.Thread):
    def __init__(self, threadID, q, done):
        threading.Thread.__init__(self)
        self.ID = threadID
        self.q = q
        self.done = done

    def get_qw(self, queue_window):
        # gets the queue_window object
        self.queue_window = queue_window

    def run(self):
        # this is called when thread.start() is called
        print("Thread {0} started.".format(self.ID))
        process_data(self.ID, self.q, self.queue_window, self.done)
        print("Thread {0} finished.".format(self.ID))

class Application(Frame):
    def __init__(self, etc):
        self.threads = []
        # does some things

    def makeThreads(self):
        for i in range(1, int(self.threadNum.get()) + 1):
            thread = WorkThread(i, workQueue, self.calcsDone)
            self.threads.append(thread)

    # more code which just processes the function etc., sorts out the GUI stuff
And in a separate class (as I'm using tkinter, the actual code to get the threads to run is called in a different window; self.parent is the Application class):
def run_jobs(self):
    if self.running == False:
        # threads are only initiated when jobs are to be run
        self.running = True
        self.parent.calcsDone.set(False)
        self.parent.threads = []  # just to make sure that it is initially empty, we want new threads each time
        self.parent.makeThreads()
        self.threads = self.parent.threads
        for thread in self.threads:
            thread.get_qw(self)
            thread.start()
        # put the jobs in the workQueue
        queueLock.acquire()
        for job in self.job_queue:
            workQueue.put(job)
        queueLock.release()
    else:
        messagebox.showerror("Error", "Jobs already running")
This is all the code which relates to the threads.
I don't know why some data points are incorrect when I run the program with multiple threads, while with just one thread the data is all perfect. I tried looking up "threadsafe" processes, but couldn't find anything.
Thanks in advance!
