Show the running executors' identifications in ProcessPoolExecutor - python-3.x

This is the problem I am having. I won't share code because of confidentiality, but instead I will provide a dummy example.
Assume that we have a class as follows:
class SayHello:
    def __init__(self, name, id):
        self.name = name
        self.id = id

    # public func
    def doSomething(self, arg1, arg2):
        DoAHugeTaskWithArgument
Let's say now that in another module we have this:
class CallOperations:
    def __init__(self):
        self.dummydict = {1: {"james": 20, "peter": 30, "victor": 40, "john": 45, "ali": 21, "tom": 41, "hector": 37},
                          2: {"james": 23, "peter": 31, "victor": 44, "john": 46, "ali": 23, "tom": 44, "hector": 35}}

    def runProcessors(self):
        # run process
        for _, v in self.dummydict.items():
            Instances = [SayHello(g, b) for g, b in v.items()]
            with ProcessPoolExecutor(max_workers=2) as executor:
                future = [executor.submit(ins.doSomething, 2, 1235) for ins in Instances]
So the problem starts here. I want to know which instances are running their doSomething() function in their respective processes. I want to set a variable to 1 when the function of that instance is running in a process, and set it to 0 when it is completed.
Each instance has its own name and id. Is there a way to find out the name of the instance currently running in a process?
This problem has me very confused and I cannot find a proper solution.
Thank you a lot.

If I understand your question correctly, you want to know when an instance of SayHello is executing and when it is not. You can set a variable (1 or 0) by using a Manager - but the usefulness of this can be debated. You might want to use a lock instead.
I had to tweak your code a bit but this is a running example. It picks one of your tasks as the one to monitor in the while loop. It is a dummy loop that never exits but you'll get the idea. It will keep polling the variable of one of your instances and you can see it change when that task is running, and then revert back to zero.
from time import sleep
from concurrent.futures import ProcessPoolExecutor
from multiprocessing import Manager

class SayHello:
    def __init__(self, name, id):
        self.name = name
        self.id = id
        self.status = Manager().Value("i", 0)

    # public func
    def doSomething(self, arg1, arg2):
        self.status.value = 1
        sleep(5)
        self.status.value = 0

class CallOperations:
    def __init__(self):
        self.dummydict = {1: {"james": 20, "peter": 30, "victor": 40, "john": 45, "ali": 21, "tom": 41, "hector": 37},
                          2: {"james": 23, "peter": 31, "victor": 44, "john": 46, "ali": 23, "tom": 44, "hector": 35}}

    def runProcessors(self):
        # run process
        for _, v in self.dummydict.items():
            Instances = [SayHello(g, b) for g, b in v.items()]
            f = Instances[3]
            executor = ProcessPoolExecutor(max_workers=2)
            future = [executor.submit(ins.doSomething, 2, 1235) for ins in Instances]
            while True:
                print(f.status.value)
                # Insert break condition here
                sleep(0.5)
            executor.shutdown()

foo = CallOperations()
foo.runProcessors()
The problem with this is that it can lead to a race condition, depending on what you do in your main program. If you want to do any operations on the instance while it is in a passive state, it might progress to active just after you check the variable but before you have completed your actions in the main program.
Locks come to the rescue here, as you can also create a shared lock with Manager().Lock(). If your doSomething() tries to acquire the lock, and your main process does the same when operating on a passive instance, you avoid this problem. Of course, your main program could then block the executor from processing if it holds locks during lengthy operations: your two workers would be stuck waiting on locks whenever execution reaches instances whose locks are held by the main program. Such a case would not be suitable for parallel processing implemented with executors.
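For illustration, here is a minimal sketch of that lock approach, reusing the SayHello class from the example above (the 5-second sleep again stands in for real work):
from time import sleep
from multiprocessing import Manager

class SayHello:
    def __init__(self, name, id):
        self.name = name
        self.id = id
        manager = Manager()
        self.status = manager.Value("i", 0)
        self.lock = manager.Lock()  # shared with any process holding a proxy

    def doSomething(self, arg1, arg2):
        with self.lock:  # held for the whole active phase
            self.status.value = 1
            sleep(5)
            self.status.value = 0
The main process then takes the same lock before operating on an instance it believes to be passive:
with f.lock:  # the instance cannot turn active while we hold this
    if f.status.value == 0:
        pass  # safe to operate on the passive instance here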
EDIT: if you are only interested in the running status, you can check the Future.running() status of your future objects, in this case the items in your future array.
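For example, replacing the dummy while loop above, a polling sketch over the future list could be:
while not all(fut.done() for fut in future):
    for ins, fut in zip(Instances, future):
        if fut.running():
            print(ins.name, "is currently running")
    sleep(0.5)
This needs no shared state at all, though it only tells you that the call is running, not how far along it is.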

Related

Python Asyncio and Multithreading

I have created a greatly simplified version of an application below that intends to use Python's asyncio and threading modules. The general structure is as follows:
import asyncio
import threading

class Node:
    def __init__(self, loop):
        self.loop = loop
        self.tasks = set()

    async def computation(self, x):
        print("Node: computation called with input ", x)
        await asyncio.sleep(1)

    def schedule_computation(self, x):
        print("Node: schedule_computation called with input ", x)
        task = self.loop.create_task(self.computation(x))
        self.tasks.add(task)

class Router:
    def __init__(self, loop):
        self.loop = loop
        self.nodes = {}

    def register_node(self, id):
        self.nodes[id] = Node(self.loop)

    def schedule_computation(self, node_id, x):
        print("Router: schedule_computation called with input ", x)
        self.nodes[node_id].schedule_computation(x)

class Client:
    def __init__(self, router):
        self.router = router
        self.counter = 0

    def run(self):
        while True:
            if self.counter == 1000000:
                self.router.schedule_computation(1, 5)
            self.counter += 1

def main():
    loop = asyncio.get_event_loop()
    # construct Router instance and register a node
    router = Router(loop)
    router.register_node(1)
    # construct Client instance
    client = Client(router)
    client_thread = threading.Thread(target=client.run)
    client_thread.start()
    loop.run_forever()

main()
In practice the Node.computation method is doing some network I/O, and thus I'd like to perform said work asynchronously. The Client.run method is synchronous and blocking, and I'd like to give this function its own thread to execute in (in fact, I'd like the ability to run this method in a separate process if possible).
Upon executing this application we get the following output:
Router: schedule_computation called with input 5
Node: schedule_computation called with input 5
However, I expect that "Node: computation called with input 5" should print as well because the Node.schedule_computation method creates a task to run on loop. In summary, why does it seem that Node.computation is never scheduled?
Use loop.call_soon_threadsafe
In general, asyncio isn't thread safe
Almost all asyncio objects are not thread safe, which is typically not a problem unless there is code that works with them from outside of a Task or a callback. If there's a need for such code to call a low-level asyncio API, the loop.call_soon_threadsafe() method should be used.
https://docs.python.org/3/library/asyncio-dev.html#concurrency-and-multithreading
Schedule computation
loop.call_soon_threadsafe(self.nodes[node_id].schedule_computation, x)
Node.computation runs on main thread
Not sure if you are aware, but even though you can use call_soon_threadsafe to initiate a coroutine from another thread, the coroutine always runs in the thread the loop was created in. If you want to run coroutines on another thread, then your background thread will need its own event loop as well.
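Putting both points together, the Router from the question could be adapted like this (a sketch; only schedule_computation changes, so it becomes safe to call from the client thread):
class Router:
    def __init__(self, loop):
        self.loop = loop
        self.nodes = {}

    def register_node(self, id):
        self.nodes[id] = Node(self.loop)

    def schedule_computation(self, node_id, x):
        print("Router: schedule_computation called with input ", x)
        # Defer to the loop's own thread; create_task is not thread safe.
        self.loop.call_soon_threadsafe(
            self.nodes[node_id].schedule_computation, x)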

Python multiprocess: run several instances of a class, keep all child processes in memory

First, I'd like to thank the StackOverflow community for the tremendous help it provided me over the years, without me having to ask a single question.
I could not find anything that I can relate to my problem, though it is probably due to my lack of understanding of the subject, rather than the absence of a response on the website. My apologies in advance if this is a duplicate.
I am relatively new to multiprocessing; some time ago I succeeded in using multiprocessing.Pool in a very simple way, where I didn't need any feedback between the child processes.
Now I am facing a much more complicated problem, and I am just lost in the multiprocessing documentation. I hence ask for your help, your kindness and your patience.
I am trying to build a parallel tempering Monte Carlo algorithm, from a class.
The basic class very roughly goes as follows:
import numpy as np

class monte_carlo:
    def __init__(self):
        self.x = np.ones((1000, 3))
        self.E = np.mean(self.x)
        self.Elist = []

    def simulation(self, temperature):
        self.T = temperature
        for i in range(3000):
            self.MC_step()
            if i % 10 == 0:
                self.Elist.append(self.E)
        return

    def MC_step(self):
        x = self.x.copy()
        k = np.random.randint(1000)
        x[k] = x[k] + np.random.uniform(-1, 1, 3)
        temp_E = np.mean(x)
        if np.random.random() < np.exp((self.E - temp_E) / self.T):
            self.E = temp_E
            self.x = x
        return
Obviously, I simplified a great deal (the actual class is 500 lines long!) and built fake functions for simplicity: __init__ takes a bunch of parameters as arguments, there are many more lists of measurements besides self.Elist, and also many arrays derived from self.x that I use to compute them. The key point is that each instance of the class contains a lot of information that I want to keep in memory, and that I don't want to copy over and over again, to avoid dramatic slowdowns. Otherwise I would just use the multiprocessing.Pool module.
Now, the parallelization I want to do, in pseudo-code:
def proba(dE, pT):
    return np.exp(-dE/pT)

Tlist = [1.1, 1.2, 1.3]
N = len(Tlist)
G = []
for _ in range(N):
    G.append(monte_carlo())

for _ in range(5):
    for i in range(N):  # this loop should be run in multiprocess
        G[i].simulation(Tlist[i])
    for i in range(N//2):
        dE = G[i].E - G[i+1].E
        pT = G[i].T + G[i+1].T
        p = proba(dE, pT)  # (proba is a function, giving a probability depending on dE)
        if np.random.random() < p:
            T_temp = G[i].T
            G[i].T = G[i+1].T
            G[i+1].T = T_temp
Synthesis: I want to run several instances of my monte_carlo class in parallel child processes, with different values of a parameter T, then periodically pause everything to swap the different T's, and resume the child processes/class instances from where they paused.
Doing this, I want each class instance/child process to stay independent of the others, to save its current state with all internal variables while paused, and to make as few copies as possible. This last point is critical: the arrays inside the class are quite big (some are 1000x1000), so copying would very quickly become quite time-costly.
Thanks in advance, and sorry if I am not clear...
Edit:
I am using a distant machine with many (64) CPUs, running on Debian GNU/Linux 10 (buster).
Edit2:
I made a mistake in my original post: in the end, the temperatures must be exchanged between the class-instances, and not inside the global Tlist.
Edit3: Charchit's answer works perfectly for the test code, on both my personal machine and the remote machine I usually use to run my code. I hence mark it as the accepted answer.
However, I want to report that, when inserting the actual, more complicated code instead of the oversimplified monte_carlo class, the remote machine gives me some strange errors:
Unable to init server: Could not connect: Connection refused
(CMC_temper_all.py:55509): Gtk-WARNING **: ##:##:##:###: Locale not supported by C library.
Using the fallback 'C' locale.
Unable to init server: Could not connect: Connection refused
(CMC_temper_all.py:55509): Gdk-CRITICAL **: ##:##:##:###: gdk_cursor_new_for_display: assertion 'GDK_IS_DISPLAY (display)' failed
(CMC_temper_all.py:55509): Gdk-CRITICAL **: ##:##:##:###: gdk_cursor_new_for_display: assertion 'GDK_IS_DISPLAY (display)' failed
The "##:##:##:###" are (or seem to be) IP addresses.
Without the call to set_start_method('spawn'), this error shows only once, at the very beginning, while when I use this method it seems to show at every occurrence of result.get()...
The strangest thing is that the code otherwise seems to work fine: it does not crash, it produces the data files I then ask it to, etc.
I think this would deserve a new question of its own, but I put it here nonetheless in case someone has a quick answer.
If not, I will resort to adding, one by one, the variables, methods, etc. that are present in my actual code but not in the test example, to try to find the origin of the bug. My best guess for now is that the memory space required by each child process with the actual code is too large for the remote machine to accept, due to some restrictions implemented by the admin.
What you are looking for is sharing state between processes. As per the documentation, you can either create shared memory, which is restrictive about the data it can store and is not thread-safe, but offers better speed and performance; or you can use server processes through managers. The latter is what we are going to use since you want to share whole objects of user-defined datatypes. Keep in mind that using managers will impact speed of your code depending on the complexity of the arguments that you pass and receive, to and from the managed objects.
Managers, proxies and pickling
As mentioned, managers create server processes to store objects, and allow access to them through proxies. I have answered a question with better details on how they work, and how to create a suitable proxy, here. We are going to use the same proxy defined in the linked answer, with some variations. Namely, I have replaced the factory functions inside __getattr__ with something that can be pickled using pickle. This means that you can run instance methods of managed objects created with this proxy without resorting to using multiprocess. The result is this modified proxy:
from multiprocessing.managers import NamespaceProxy, BaseManager
import types
import numpy as np

class A:
    def __init__(self, name, method):
        self.name = name
        self.method = method

    def get(self, *args, **kwargs):
        return self.method(self.name, args, kwargs)

class ObjProxy(NamespaceProxy):
    """Returns a proxy instance for any user defined data-type. The proxy instance will have the namespace and
    functions of the data-type (except private/protected callables/attributes). Furthermore, the proxy will be
    picklable and its state can be shared among different processes."""

    def __getattr__(self, name):
        result = super().__getattr__(name)
        if isinstance(result, types.MethodType):
            return A(name, self._callmethod).get
        return result
Solution
Now we only need to make sure that when we create objects of monte_carlo, we do so using managers and the above proxy. For that, we add a class method called create; all monte_carlo objects should be created through this function. With that, the final code looks like this:
from multiprocessing import Pool
from multiprocessing.managers import NamespaceProxy, BaseManager
import types
import numpy as np

class A:
    def __init__(self, name, method):
        self.name = name
        self.method = method

    def get(self, *args, **kwargs):
        return self.method(self.name, args, kwargs)

class ObjProxy(NamespaceProxy):
    """Returns a proxy instance for any user defined data-type. The proxy instance will have the namespace and
    functions of the data-type (except private/protected callables/attributes). Furthermore, the proxy will be
    picklable and its state can be shared among different processes."""

    def __getattr__(self, name):
        result = super().__getattr__(name)
        if isinstance(result, types.MethodType):
            return A(name, self._callmethod).get
        return result

class monte_carlo:
    def __init__(self):
        self.x = np.ones((1000, 3))
        self.E = np.mean(self.x)
        self.Elist = []
        self.T = None

    def simulation(self, temperature):
        self.T = temperature
        for i in range(3000):
            self.MC_step()
            if i % 10 == 0:
                self.Elist.append(self.E)
        return

    def MC_step(self):
        x = self.x.copy()
        k = np.random.randint(1000)
        x[k] = x[k] + np.random.uniform(-1, 1, 3)
        temp_E = np.mean(x)
        if np.random.random() < np.exp((self.E - temp_E) / self.T):
            self.E = temp_E
            self.x = x
        return

    @classmethod
    def create(cls, *args, **kwargs):
        # Register class
        class_str = cls.__name__
        BaseManager.register(class_str, cls, ObjProxy, exposed=tuple(dir(cls)))
        # Start a manager process
        manager = BaseManager()
        manager.start()
        # Create and return this proxy instance. Using this proxy allows sharing of state between processes.
        inst = eval("manager.{}(*args, **kwargs)".format(class_str))
        return inst

def proba(dE, pT):
    return np.exp(-dE/pT)

if __name__ == "__main__":
    Tlist = [1.1, 1.2, 1.3]
    N = len(Tlist)
    G = []

    # Create our managed instances
    for _ in range(N):
        G.append(monte_carlo.create())

    for _ in range(5):
        # Run simulations in the manager server
        results = []
        with Pool(8) as pool:
            for i in range(N):  # this loop should be run in multiprocess
                results.append(pool.apply_async(G[i].simulation, (Tlist[i], )))

            # Wait for the simulations to complete
            for result in results:
                result.get()

        for i in range(N // 2):
            dE = G[i].E - G[i + 1].E
            pT = G[i].T + G[i + 1].T
            p = proba(dE, pT)  # (proba is a function, giving a probability depending on dE)
            if np.random.random() < p:
                T_temp = Tlist[i]
                Tlist[i] = Tlist[i + 1]
                Tlist[i + 1] = T_temp

    print(Tlist)
This meets the criteria you wanted. It does not create any copies at all; rather, all arguments to the simulation method call are serialized inside the pool and sent to the manager server, where the object is actually stored. The method executes there, and the results (if any) are serialized and returned to the main process. All of this using only the standard library!
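A quick way to convince yourself that G holds proxies rather than local copies (a sketch, to be run after the instances are created):
print(type(G[0]))  # an ObjProxy, not a monte_carlo instance
print(G[0].E)      # fetched on demand from the manager server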
Output
[1.2, 1.1, 1.3]
Edit
Since you are using Linux, I encourage you to use multiprocessing.set_start_method inside the if __name__ ... clause to set the start method to "spawn". Doing this will ensure that the child processes do not have access to variables defined inside the clause.
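A sketch of just the changed lines, assuming the rest of the main block stays as above:
import multiprocessing

if __name__ == "__main__":
    # Must be called once, before any Pool or manager is started.
    multiprocessing.set_start_method("spawn")
    Tlist = [1.1, 1.2, 1.3]
    # ... rest of the main block unchanged ...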

QObject and QThread relations

I have a PyQt4 GUI which allows me to import multiple .csv files. I've created a loop that goes through a list of tuples, each with the following parameters: (filename + location of file, filename, bool, bool, set of dates in file) = tup.
I've created several classes that my GUI frequently refers to in order to pull parameters off a project's profile. Let's call this class profile(). I also have another class with a lot of formatting helpers, such as datetime and text edits, etc. Let's call this class MyFormatting(). Then I created a QThread class that is created for each file in the list; this one is called Import_File(QThread). And let's say this class takes in a few parameters for its __init__(self, tup).
My ideal goal is to be able to make an independent instance of MyFormatting() and profile() for the Import_File(QThread). I am trying to get my head around how to utilize the QObject capabilities to solve this, but I keep getting the error that the thread is being destroyed while still running.
for tup in importedlist:
    importfile = Import_File(tup)
    self.connect(importfile, QtCore.SIGNAL('savedfile(PyQt_PyObject)'), self.printstuffpassed)
    importfile.start()
I was thinking of declaring the two classes as:
class MyFormatting(QObject):
    def __init__(self):
        QObject.__init__(self)

    def func1(self, stuff):
        dostuff

    def func2(self):
        morestuff

class profile(QObject):
    def __init__(self):
        QObject.__init__(self)

    def func11(self, stuff):
        dostuff

    def func22(self):
        morestuff
And for the QThread:
class Import_File(QThread):
    def __init__(self, tup):
        QThread.__init__(self)
        common_calc_stuff = self.calc(tup[4])
        f = open(tup[0] + '.csv', 'w')
        self.tup = tup
        # this is where I thought of pulling an instance just for this thread
        self.MF = MyFormatting()
        self.MF_thread = QtCore.QThread()
        self.MF.moveToThread(self.MF_thread)
        self.MF_thread.start()
        self.prof = profile()
        self.prof_thread = QtCore.QThread()
        self.prof.moveToThread(self.prof_thread)
        self.prof_thread.start()

    def func33(self):
        dostuff
        self.prof.func11(self.tup[4])

    def func44(self):
        morestuff

    def run(self):
        if self.tup[3] == True:
            self.func33()
            self.MF.func2()
        elif self.tup[3] == False:
            self.func44()
        if self.tup[2] == True:
            self.prof.func22()
        self.emit(QtCore.SIGNAL('savedfile()'))
Am I totally thinking about this the wrong way? How can I keep roughly the same code structure and still implement multithreading, without having the same resource tapped at the same time, which I think is why my GUI keeps crashing? And how can I make sure that each instance of those objects gets shut down so it doesn't interfere with the other instances?

Implementation of QThread in PyQt5 Application

I wrote a PyQt5 GUI (Python 3.5 on Win7). While it is running, the GUI is not responsive. To still be able to use the GUI, I tried to use QThread from this answer: https://stackoverflow.com/a/6789205/5877928
The module is now designed like this:
class MeasureThread(QThread):
    def __init__(self):
        super().__init__()

    def get_data(self):
        data = auto.data

    def run(self):
        time.sleep(600)
        # measure some things
        with open(outputfile) as f:
            write(things)

class Automation(QMainWindow):
    [...]
    def measure(self):
        thread = MeasureThread()
        thread.finished.connect(app.exit)
        for line in open(inputfile):
            thread.get_data()
            thread.start()
measure() gets called once per measurement but starts the thread once per line in the input file. The module now exits almost immediately after starting (I guess it runs all threads at once and does not sleep), but I only want it to do all the measurements in another, single thread so the GUI can still be accessed.
I also tried to apply this to my module, but could not connect the methods used there to my methods: http://www.xyzlang.com/python/PyQT5/pyqt_multithreading.html
The way I used to use the module:
class Automation(QMainWindow):
[...]
def measure(self):
param1, param2 = (1,2)
for line in open(inputfile):
self.measureValues(param1, param2)
def measureValues(self, param1, param2):
time.sleep(600)
# measure some things
with open(outputfile) as f:
write(things)
But that obviously used only one thread. Can you help me find the right approach to use here (QThread, QRunnable), or map the example in the second link to my module?
Thanks!

Gif shown inside a label doesn't update itself regularly in PyQt5

I have a loading widget that consists of two labels: one is the status label and the other is the label the animated gif will be shown in. If I call the show() method before the heavy stuff gets processed, the gif in the loading widget doesn't update itself at all. There's nothing wrong with the gif itself, by the way (no looping problems etc.). The main code (caller) looks like this:
self.loadingwidget = LoadingWidgetForm()
self.setCentralWidget(self.loadingwidget)
self.loadingwidget.show()
...
...
heavy stuff
...
...
self.loadingwidget.hide()
The widget class:
class LoadingWidgetForm(QWidget, LoadingWidget):
def __init__(self, parent=None):
super().__init__(parent=parent)
self.setupUi(self)
self.setWindowFlags(self.windowFlags() | Qt.FramelessWindowHint)
self.setAttribute(Qt.WA_TranslucentBackground)
pince_directory = SysUtils.get_current_script_directory() # returns current working directory
self.movie = QMovie(pince_directory + "/media/loading_widget_gondola.gif", QByteArray())
self.label_Animated.setMovie(self.movie)
self.movie.setScaledSize(QSize(50, 50))
self.movie.setCacheMode(QMovie.CacheAll)
self.movie.setSpeed(100)
self.movie.start()
self.not_finished=True
self.update_thread = Thread(target=self.update_widget)
self.update_thread.daemon = True
def showEvent(self, QShowEvent):
QApplication.processEvents()
self.update_thread.start()
def hideEvent(self, QHideEvent):
self.not_finished = False
def update_widget(self):
while self.not_finished:
QApplication.processEvents()
As you can see, I tried to create a separate thread to avoid the workload, but it didn't make any difference. Then I tried my luck with the QThread class by overriding the run() method, but that didn't work either. Executing QApplication.processEvents() inside the heavy stuff does work well, though. I also think I shouldn't be using separate threads; I feel like there should be a more elegant way to do this. The widget itself simply shows a "Processing..." status label next to the animated gif.
Thanks in advance! Have a good day.
Edit: I can't move the heavy stuff to a different thread due to bugs in pexpect. pexpect's spawn() method requires the spawned object and any operations related to it to be in the same thread. I don't want to change the working flow of the whole program.
In order to update GUI animations, the main Qt loop (located in the main GUI thread) has to be running and processing events. The Qt event loop can only process a single event at a time, however because handling these events typically takes a very short time control is returned rapidly to the loop. This allows the GUI updates (repaints, including animation etc.) to appear smooth.
A common example is having a button to initiate loading of a file. The button press creates an event which is handled, and passed off to your code (either via events directly, or via signals). Now the main thread is in your long-running code, and the event loop is stalled — and will stay stalled until the long-running job (e.g. file load) is complete.
You're correct that you can solve this with threads, but you've gone about it backwards. You want to put your long-running code in a thread (not your call to processEvents). In fact, calling (or interacting with) the GUI from another thread is a recipe for a crash.
The simplest way to work with threads is to use QRunnable and QThreadPool. This allows for multiple execution threads. The following wall of code gives you a custom Worker class that makes it simple to handle this. I normally put this in a file threads.py to keep it out of the way:
import sys
import traceback
from PyQt5.QtCore import QObject, QRunnable, pyqtSignal, pyqtSlot

class WorkerSignals(QObject):
    '''
    Defines the signals available from a running worker thread.

    error
        `tuple` (exctype, value, traceback.format_exc())
    result
        `dict` data returned from processing
    '''
    finished = pyqtSignal()
    error = pyqtSignal(tuple)
    result = pyqtSignal(dict)

class Worker(QRunnable):
    '''
    Worker thread

    Inherits from QRunnable to handle worker thread setup, signals and wrap-up.

    :param callback: The function callback to run on this worker thread. Supplied args and
                     kwargs will be passed through to the runner.
    :type callback: function
    :param args: Arguments to pass to the callback function
    :param kwargs: Keywords to pass to the callback function
    '''

    def __init__(self, fn, *args, **kwargs):
        super(Worker, self).__init__()
        # Store constructor arguments (re-used for processing)
        self.fn = fn
        self.args = args
        self.kwargs = kwargs
        self.signals = WorkerSignals()

    @pyqtSlot()
    def run(self):
        '''
        Initialise the runner function with passed args, kwargs.
        '''
        # Retrieve args/kwargs here; and fire processing using them
        try:
            result = self.fn(*self.args, **self.kwargs)
        except:
            traceback.print_exc()
            exctype, value = sys.exc_info()[:2]
            self.signals.error.emit((exctype, value, traceback.format_exc()))
        else:
            self.signals.result.emit(result)  # Return the result of the processing
        finally:
            self.signals.finished.emit()  # Done
To use the above, you need a QThreadPool to handle the threads. You only need to create this once, for example during application initialisation.
from PyQt5.QtCore import QThreadPool

threadpool = QThreadPool()
Now, create a worker by passing in the Python function to execute:
from .threads import Worker # our custom worker Class
worker = Worker(fn=<Python function>) # create a Worker object
Now attach signals to get back the result, or be notified of an error:
worker.signals.error.connect(<Python function error handler>)
worker.signals.result.connect(<Python function result handler>)
Then, to execute this Worker, you can just pass it to the QThreadPool.
threadpool.start(worker)
Everything will take care of itself, with the result of the work returned to the connected signal... and the main GUI loop will be free to do its thing!
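Applied to the loading-widget question above, the idea would be to move the heavy stuff into a plain function and let the worker's signals drive the widget. A sketch, where do_heavy_stuff is a hypothetical stand-in for your processing code and self.threadpool is the QThreadPool created at startup:
def do_heavy_stuff():
    # ... the heavy stuff, now off the GUI thread ...
    return {}  # the result signal in this Worker expects a dict

self.loadingwidget = LoadingWidgetForm()
self.setCentralWidget(self.loadingwidget)
self.loadingwidget.show()

worker = Worker(fn=do_heavy_stuff)
worker.signals.finished.connect(self.loadingwidget.hide)  # hide when done
self.threadpool.start(worker)  # returns immediately; the gif keeps animating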
