I am trying to use QPixmapCache in my PyQt4 app, but it still seems to take a long time. Can anyone please have a look at my code below?
def data(self, index, role):
    if index.isValid() and role == QtCore.Qt.DecorationRole:
        row = index.row()
        column = index.column()
        value = self._listdata[row][column]
        key = "image:%s" % value
        pixmap = QtGui.QPixmap()
        # pixmap.load(value)
        if not QtGui.QPixmapCache.find(key, pixmap):
            pixmap = self.generatePixmap(value)
            QtGui.QPixmapCache.insert(key, pixmap)
        # pixmap.scaled(400, 300, QtCore.Qt.KeepAspectRatio)
        return QtGui.QImage(pixmap)
    if index.isValid() and role == QtCore.Qt.DisplayRole:
        row = index.row()
        column = index.column()
        value = self._listdata[row][column]
        fileName = os.path.split(value)[-1]
        return os.path.splitext(fileName)[0]

def generatePixmap(self, value):
    pixmap = QtGui.QPixmap()
    pixmap.load(value)
    pixmap.scaled(100, 120, QtCore.Qt.KeepAspectRatio)
    return pixmap
It looks like you are trying to create a cache of images from a list of filenames.
When loading the images for the first time, a cache will only improve performance if there are lots of images that are the same. Each image has to be created at least once, so if the images are all (or mostly) different, they won't load any quicker. In fact, the overall performance could possibly be slightly worse, due to the overhead of the caching itself.
A cache will only provide a performance gain if the application needs to constantly refresh the list of images (in which case, you'd need to check that the files on disk haven't changed in the meantime).
You should also note that QPixmapCache has a cache limit which, by default, is 10 MiB for desktop systems. This can be set to whatever value you like, though.
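For reference, a minimal sketch of raising the limit; QPixmapCache.setCacheLimit takes the size in kilobytes, and 50 MiB here is just an arbitrary example value:
from PyQt4 import QtGui

# Somewhere after the QApplication has been created:
QtGui.QPixmapCache.setCacheLimit(50 * 1024)     # raise the limit to roughly 50 MiB
print(QtGui.QPixmapCache.cacheLimit())          # confirm the new limit (in KB)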
Related
I am trying to make a program that takes video as input and then performs some deep learning algorithm on frames. The deep learning algorithms take approximately 1-2 minutes per video frame.
I want to show the deep learning algorithm output to users; converting all the frames would be a very costly operation.
My solution was to create a buffering data structure that caches up to, say, 20 frame outputs. Whenever the user clicks somewhere on the slider (say on frame 53), the buffer is checked first: if the deep learning output for that frame is already present, it is returned immediately; otherwise the output is computed for that frame as well as for the next few frames (say frames 54, 55, 56 and 57) and stored in the cache. If the cache is full, the lowest-indexed frames are evicted to make room for the new ones.
Application UI diagram
Buffer Data-structure
from queue import PriorityQueue

class Buffer:
    def __init__(self, limit):
        self.cache_map = {}
        self.cached_frames = PriorityQueue()
        self.buffer_limit = limit

    def push(self, index, value):
        if len(self.cache_map) >= self.buffer_limit:
            # buffer is completely filled, so delete the lowest-indexed frame and add the current one
            lowest_frame = self.cached_frames.get()
            del self.cache_map[lowest_frame]
        # now add the new frame to the cache
        self.cache_map[index] = value
        self.cached_frames.put(index)  # track the index so eviction matches the map key

    def get(self, index):
        if index in self.cache_map:
            return self.cache_map[index]
        return False
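For illustration, a minimal usage sketch of the buffer above (the frame outputs here are just placeholder strings):
buffer = Buffer(limit=20)
buffer.push(53, "output-for-frame-53")   # stand-in for a real deep learning result
buffer.push(54, "output-for-frame-54")

print(buffer.get(53))   # "output-for-frame-53"
print(buffer.get(99))   # False -- not cached yet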
Driver code
The on_current_frame method would be called when the user changes the slider position (clicks on the slider). I want to initiate finding the deep learning output of the next five frames and store them in the cache. But the problem is that I want to show the output of the current frame as soon as it is ready and process the other frames in the background; waiting for all the frames to complete before returning the output would be too costly.
Is there some way to return the DL output of the current frame and keep calculating the deep learning output of other frames in the background?
class MediaBuffer:
    def __init__(self, limit: int):
        self.buffer_limit = limit
        self.buffer = Buffer(limit)

    def on_current_frame(self, index):
        frame = self.buffer.get(index)
        if frame:
            return frame
        # return the current frame output as soon as it's ready,
        # and keep calculating the output of the other frames
        for i in range(index, index + self.buffer_limit):
            # calculate the deep-learning output of frame i and push it into the buffer
            dl_output = ...  # placeholder for the deep learning call
            self.buffer.push(i, dl_output)
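For what it's worth, one possible direction (a sketch only, not part of the original post, using concurrent.futures and a hypothetical compute_dl_output helper standing in for the deep learning step) is to compute the requested frame synchronously and hand the following frames to a background executor:
from concurrent.futures import ThreadPoolExecutor

class MediaBuffer:
    def __init__(self, limit: int):
        self.buffer_limit = limit
        self.buffer = Buffer(limit)
        self.executor = ThreadPoolExecutor(max_workers=1)  # background worker

    def _compute_and_cache(self, index):
        output = compute_dl_output(index)   # hypothetical deep learning call
        self.buffer.push(index, output)
        return output

    def on_current_frame(self, index):
        frame = self.buffer.get(index)
        if frame:
            return frame
        # Compute the requested frame right away...
        frame = self._compute_and_cache(index)
        # ...and schedule the following frames in the background.
        for i in range(index + 1, index + self.buffer_limit):
            self.executor.submit(self._compute_and_cache, i)
        return frame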
First, I'd like to thank the StackOverflow community for the tremendous help it provided me over the years, without me having to ask a single question.
I could not find anything that I can relate to my problem, though it is probably due to my lack of understanding of the subject, rather than the absence of a response on the website. My apologies in advance if this is a duplicate.
I am relatively new to multiprocess; some time ago I succeeded in using multiprocessing.pools in a very simple way, where I didn't need any feedback between the child processes.
Now I am facing a much more complicated problem, and I am just lost in the multiprocessing documentation. I hence ask for your help, your kindness and your patience.
I am trying to build a parallel tempering Monte Carlo algorithm from a class.
The basic class very roughly goes as follows:
import numpy as np

class monte_carlo:

    def __init__(self):
        self.x = np.ones((1000, 3))
        self.E = np.mean(self.x)
        self.Elist = []

    def simulation(self, temperature):
        self.T = temperature
        for i in range(3000):
            self.MC_step()
            if i % 10 == 0:
                self.Elist.append(self.E)
        return

    def MC_step(self):
        x = self.x.copy()
        k = np.random.randint(1000)
        x[k] = (x[k] + np.random.uniform(-1, 1, 3))
        temp_E = np.mean(self.x)
        if np.random.random() < np.exp((self.E - temp_E) / self.T):
            self.E = temp_E
            self.x = x
        return
Obviously, I simplified a great deal (the actual class is 500 lines long!) and built fake functions for simplicity: __init__ takes a bunch of parameters as arguments, there are many more measurement lists besides self.Elist, and also many arrays derived from self.x that I use to compute them. The key point is that each instance of the class contains a lot of information that I want to keep in memory, and that I don't want to copy over and over again, to avoid a dramatic slowdown. Otherwise I would just use the multiprocessing.pool module.
Now, the parallelization I want to do, in pseudo-code:
def proba(dE, pT):
    return np.exp(-dE/pT)

Tlist = [1.1, 1.2, 1.3]
N = len(Tlist)
G = []
for _ in range(N):
    G.append(monte_carlo())

for _ in range(5):
    for i in range(N):  # this loop should be run in multiprocess
        G[i].simulation(Tlist[i])

    for i in range(N//2):
        dE = G[i].E - G[i+1].E
        pT = G[i].T + G[i+1].T
        p = proba(dE, pT)  # (proba is a function, giving a probability depending on dE)
        if np.random.random() < p:
            T_temp = G[i].T
            G[i].T = G[i+1].T
            G[i+1].T = T_temp
Synthesis: I want to run several instances of my monte_carlo class in parallel child processes, with different values for a parameter T, then periodically pause everything to exchange the different T's, and then resume the child processes/class instances from where they paused.
Doing this, I want each class instance/child process to stay independent from the others, save its current state with all internal variables while it is paused, and make as few copies as possible. This last point is critical, as the arrays inside the class are quite big (some are 1000x1000), and copying them would therefore very quickly become time-costly.
Thanks in advance, and sorry if I am not clear...
Edit:
I am using a distant machine with many (64) CPUs, running on Debian GNU/Linux 10 (buster).
Edit2:
I made a mistake in my original post: in the end, the temperatures must be exchanged between the class-instances, and not inside the global Tlist.
Edit3: Charchit's answer works perfectly for the test code, both on my personal machine and on the distant machine I usually use to run my code. I hence mark it as the accepted answer.
However, I want to report here that, when I insert the actual, more complicated code instead of the oversimplified monte_carlo class, the distant machine gives me some strange errors:
Unable to init server: Could not connect: Connection refused
(CMC_temper_all.py:55509): Gtk-WARNING **: ##:##:##:###: Locale not supported by C library.
Using the fallback 'C' locale.
Unable to init server: Could not connect: Connection refused
(CMC_temper_all.py:55509): Gdk-CRITICAL **: ##:##:##:###:
gdk_cursor_new_for_display: assertion 'GDK_IS_DISPLAY (display)' failed
(CMC_temper_all.py:55509): Gdk-CRITICAL **: ##:##:##:###: gdk_cursor_new_for_display: assertion 'GDK_IS_DISPLAY (display)' failed
The "##:##:##:###" are (or seems like) IP adresses.
Without the call to set_start_method('spawn') this error shows only once, in the very beginning, while when I use this method, it seems to show at every occurrence of result.get()...
The strangest thing is that the code seems otherwise to work fine, does not crash, produces the datafiles I then ask it to, etc...
I think this would deserve to publish a new question, but I put it here nonetheless in case someone has a quick answer.
If not, I will resort to adding, one by one, the variables, methods, etc. that are present in my actual code but not in the test example, to try and find the origin of the bug. My best guess for now is that the memory space required by each child process with the actual code is too large for the distant machine to accept, due to some restrictions implemented by the admin.
What you are looking for is sharing state between processes. As per the documentation, you can either create shared memory, which is restrictive about the data it can store and is not thread-safe, but offers better speed and performance; or you can use server processes through managers. The latter is what we are going to use, since you want to share whole objects of a user-defined datatype. Keep in mind that using managers will impact the speed of your code, depending on the complexity of the arguments that you pass to, and receive from, the managed objects.
Managers, proxies and pickling
As mentioned, managers create server processes to store objects and allow access to them through proxies. I have answered a question with more details on how they work, and how to create a suitable proxy, here. We are going to use the same proxy defined in the linked answer, with some variations: namely, I have replaced the factory functions inside __getattr__ with something that can be pickled using pickle. This means that you can run instance methods of managed objects created with this proxy without resorting to using multiprocess. The result is this modified proxy:
from multiprocessing.managers import NamespaceProxy, BaseManager
import types
import numpy as np


class A:
    def __init__(self, name, method):
        self.name = name
        self.method = method

    def get(self, *args, **kwargs):
        return self.method(self.name, args, kwargs)


class ObjProxy(NamespaceProxy):
    """Returns a proxy instance for any user defined data-type. The proxy instance will have the namespace and
    functions of the data-type (except private/protected callables/attributes). Furthermore, the proxy will be
    picklable and its state can be shared among different processes. """

    def __getattr__(self, name):
        result = super().__getattr__(name)
        if isinstance(result, types.MethodType):
            return A(name, self._callmethod).get
        return result
Solution
Now we only need to make sure that when we are creating objects of monte_carlo, we do so using managers and the above proxy. For that, we create a class constructor called create. All objects for monte_carlo should be created with this function. With that, the final code looks like this:
from multiprocessing import Pool
from multiprocessing.managers import NamespaceProxy, BaseManager
import types
import numpy as np


class A:
    def __init__(self, name, method):
        self.name = name
        self.method = method

    def get(self, *args, **kwargs):
        return self.method(self.name, args, kwargs)


class ObjProxy(NamespaceProxy):
    """Returns a proxy instance for any user defined data-type. The proxy instance will have the namespace and
    functions of the data-type (except private/protected callables/attributes). Furthermore, the proxy will be
    picklable and its state can be shared among different processes. """

    def __getattr__(self, name):
        result = super().__getattr__(name)
        if isinstance(result, types.MethodType):
            return A(name, self._callmethod).get
        return result


class monte_carlo:

    def __init__(self):
        self.x = np.ones((1000, 3))
        self.E = np.mean(self.x)
        self.Elist = []
        self.T = None

    def simulation(self, temperature):
        self.T = temperature
        for i in range(3000):
            self.MC_step()
            if i % 10 == 0:
                self.Elist.append(self.E)
        return

    def MC_step(self):
        x = self.x.copy()
        k = np.random.randint(1000)
        x[k] = (x[k] + np.random.uniform(-1, 1, 3))
        temp_E = np.mean(self.x)
        if np.random.random() < np.exp((self.E - temp_E) / self.T):
            self.E = temp_E
            self.x = x
        return

    @classmethod
    def create(cls, *args, **kwargs):
        # Register class
        class_str = cls.__name__
        BaseManager.register(class_str, cls, ObjProxy, exposed=tuple(dir(cls)))
        # Start a manager process
        manager = BaseManager()
        manager.start()
        # Create and return this proxy instance. Using this proxy allows sharing of state between processes.
        inst = eval("manager.{}(*args, **kwargs)".format(class_str))
        return inst


def proba(dE, pT):
    return np.exp(-dE/pT)


if __name__ == "__main__":
    Tlist = [1.1, 1.2, 1.3]
    N = len(Tlist)
    G = []

    # Create our managed instances
    for _ in range(N):
        G.append(monte_carlo.create())

    for _ in range(5):
        # Run simulations in the manager server
        results = []
        with Pool(8) as pool:
            for i in range(N):  # this loop should be run in multiprocess
                results.append(pool.apply_async(G[i].simulation, (Tlist[i], )))

            # Wait for the simulations to complete
            for result in results:
                result.get()

        for i in range(N // 2):
            dE = G[i].E - G[i + 1].E
            pT = G[i].T + G[i + 1].T
            p = proba(dE, pT)  # (proba is a function, giving a probability depending on dE)
            if np.random.random() < p:
                T_temp = Tlist[i]
                Tlist[i] = Tlist[i + 1]
                Tlist[i + 1] = T_temp

    print(Tlist)
This meets the criteria you wanted. It does not create any copies at all; rather, all arguments to the simulation method call are serialized inside the pool and sent to the manager server, where the object is actually stored. The method is executed there, and the results (if any) are serialized and returned to the main process. All of this using only the builtins!
Output
[1.2, 1.1, 1.3]
Edit
Since you are using Linux, I encourage you to use multiprocessing.set_start_method inside the if __name__ ... clause to set the start method to "spawn". Doing this will ensure that the child processes do not have access to variables defined inside the clause.
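A minimal placement sketch (only the first lines of the driver block change; everything else stays as above):
import multiprocessing

if __name__ == "__main__":
    multiprocessing.set_start_method("spawn")
    Tlist = [1.1, 1.2, 1.3]
    # ... rest of the driver code from the solution above ...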
I have a list of issues (jira issues):
listOfKeys = [id1,id2,id3,id4,id5...id30000]
I want to get the worklogs of these issues; for this I used the jira-python library and this code:
listOfWorklogs = pd.DataFrame()  # I used the pandas (pd) lib
lst = {}  # helper dictionary where the worklogs will be stored
for i in range(len(listOfKeys)):
    worklogs = jira.worklogs(listOfKeys[i])  # getting list of worklogs
    if len(worklogs) == 0:
        i += 1
    else:
        for j in range(len(worklogs)):
            lst = {
                'self': worklogs[j].self,
                'author': worklogs[j].author,
                'started': worklogs[j].started,
                'created': worklogs[j].created,
                'updated': worklogs[j].updated,
                'timespent': worklogs[j].timeSpentSeconds
            }
            listOfWorklogs = listOfWorklogs.append(lst, ignore_index=True)
########### Below there is the recording to the .xlsx file ################
So I simply go into the worklogs of each issue in a simple loop, which is equivalent to requesting the link
https://jira.mycompany.com/rest/api/2/issue/issueid/worklogs and retrieving the information from it.
The problem is that there are more than 30,000 such issues,
and the loop is very slow (approximately 3 seconds per issue).
Can I somehow run multiple loops / processes / threads in parallel to speed up the process of getting the worklogs (maybe without the jira-python library)?
I recycled a piece of code I had made and worked it into your code; I hope it helps:
from multiprocessing import Manager, Process, cpu_count


def insert_into_list(worklog, queue):
    lst = {
        'self': worklog.self,
        'author': worklog.author,
        'started': worklog.started,
        'created': worklog.created,
        'updated': worklog.updated,
        'timespent': worklog.timeSpentSeconds
    }
    queue.put(lst)
    return


# Number of cpus in the pc
num_cpus = cpu_count()

# Manager and queue to hold the results
manager = Manager()
# The queue has controlled insertion, so processes don't step on each other
queue = manager.Queue()

listOfWorklogs = pd.DataFrame()
lst = {}

for i in range(len(listOfKeys)):
    worklogs = jira.worklogs(listOfKeys[i])  # getting list of worklogs
    if len(worklogs) == 0:
        i += 1
    else:
        # This loop replaces your "for j in range(len(worklogs))" loop
        index = 0  # position inside the current issue's worklog list
        while index < len(worklogs):
            processes = []
            elements = min(num_cpus, len(worklogs) - index)
            # Create a process for each cpu
            for j in range(elements):
                process = Process(target=insert_into_list, args=(worklogs[j + index], queue))
                processes.append(process)
            # Run the processes
            for j in range(elements):
                processes[j].start()
            # Wait for them to finish
            for j in range(elements):
                processes[j].join(timeout=10)
            index += num_cpus

# Dump the queue into the dataframe
while queue.qsize() != 0:
    listOfWorklogs = listOfWorklogs.append(queue.get(), ignore_index=True)
This should work and reduce the time by a factor of a little less than the number of CPUs in your machine. You can try changing that number manually for better performance. In any case, I find it very strange that it takes about 3 seconds per operation.
PS: I couldn't try the code because I have no examples, so it probably has some bugs.
I ran into some trouble:
1) Indentation in the code where the first "for" loop appears and the first "if" statement begins (this statement and everything below it should be included in the loop, right?):
for i in range(len(listOfKeys) - 99):
    worklogs = jira.worklogs(listOfKeys[i])  # getting list of worklogs
    if len(worklogs) == 0:
        ....
2) cmd, the conda prompt and Spyder did not let your code run, giving this error:
Python Multiprocessing error: AttributeError: module '__main__' has no attribute '__spec__'
After researching on Google, I had to set __spec__ = None a bit higher up in the code (but I'm not sure this is correct) and the error disappeared.
By the way, the code worked in Jupyter Notebook without this error, but listOfWorklogs stayed empty, and that is not right.
3) When I corrected the indentation and set __spec__ = None, a new error occurred at this line:
processes[i].start()
The error is:
"PicklingError: Can't pickle <class 'jira.resources.PropertyHolder'>: attribute lookup PropertyHolder on jira.resources failed"
If I remove the parentheses from the start and join methods, the code runs, but then I get no entries in listOfWorklogs.
I ask again for your help!
How about thinking about it not from a technical standpoint but from a logical one? You know your code works, but at a rate of 3 seconds per issue, which means it would take about 25 hours to complete. If you have the ability to split up the number of Jira issues that are passed into the script (maybe by date or issue key, etc.), you could create multiple different .py files with basically the same code and just pass each one a different list of Jira tickets. You could then run, say, 4 of them at the same time and reduce the time to about 6.25 hours each.
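If it helps, a rough sketch of the splitting idea (the number of parts and how each chunk reaches its script are up to you):
def split_keys(keys, parts=4):
    # Split the full key list into `parts` roughly equal slices,
    # one slice per copy of the script.
    size = (len(keys) + parts - 1) // parts  # ceiling division
    return [keys[i:i + size] for i in range(0, len(keys), size)]

chunks = split_keys(listOfKeys, 4)  # chunks[0] goes to script 1, chunks[1] to script 2, ...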
I want to change the number of workers in the pool that are currently used.
My current idea is
while True:
    current_connection_number = get_connection_number()
    forced_break = False
    with mp.Pool(current_connection_number) as p:
        for data in p.imap_unordered(fun, some_infinite_generator):
            yield data
            if current_connection_number != get_connection_number():
                forced_break = True
                break
    if not forced_break:
        break
The problem is that it just terminates the workers and so the last items that were gotten from some_infinite_generator and weren't processed yet are lost. Is there some standard way of doing this?
Edit: I've tried printing inside some_infinite_generator, and it turns out p.imap_unordered requests 1565 items with just 2 pool workers, even before anything is processed. How do I limit the number of items requested from the generator? If I use the code above and change the number of connections after just 2 items, I will lose 1563 items.
The problem is that the Pool consumes the generator internally, in a separate thread. You have no way to control that logic.
What you can do is feed the Pool.imap_unordered method a portion of the generator at a time, and have that portion consumed before re-scaling according to the available connections.
import itertools
import multiprocessing as mp

CHUNKSIZE = 100


def grouper(n, iterable):
    """Collect the iterable into chunks of at most n items."""
    it = iter(iterable)
    while True:
        chunk = tuple(itertools.islice(it, n))
        if not chunk:
            return
        yield chunk


while True:
    current_connection_number = get_connection_number()
    with mp.Pool(current_connection_number) as p:
        # Feed the pool one chunk at a time so the worker count can be
        # re-checked between chunks instead of never, on an infinite stream.
        for chunk in grouper(CHUNKSIZE, some_infinite_generator):
            for data in p.imap_unordered(fun, chunk):
                yield data
            if current_connection_number != get_connection_number():
                break
It's a bit less optimal, as the scaling happens every chunk instead of every iteration, but with a bit of fine-tuning of the CHUNKSIZE value you can easily get it right.
The grouper recipe.
I have run into another issue with a program I am working on. Basically, what my program does is take up to 4 input files, process them and store the information collected from them in a SQLite3 database on my computer. This lets me view the data any time I want without having to run the input files again. The program uses a main script that is essentially just an AUI Notebook that imports an input script and output scripts to use as panels.
To add the data to the database I am able to use threading, since I am not returning the results directly to my output screen(s). However, when I need to view the entire contents of my main table, 25,000 records end up being loaded. While these are loading, my GUI is locked and almost always displays "Program not responding".
I would like to use threading/multiprocessing to grab the 25k records from the database and load them into my ObjectListView widget(s) so that my GUI stays usable during this process. When I attempted to use a threading class similar to the one used to add the data to the database, I got nothing returned. When I say I got nothing, I am not exaggerating.
So here is my big question: is there a way to thread the query and return the results without using global variables? I have not been able to find a solution with an example that I could understand, but I may be using the wrong search terms.
Here are the snippets of code pertaining to the issue at hand:
This is what I use to make sure the data is ready for my ObjectListView widget.
class OlvMainDisplay(object):
    def __init__(self, id, name, col01, col02, col03, col04, col05,
                 col06, col07, col08, col09, col10, col11,
                 col12, col13, col14, col15):
        self.id = id
        self.name = name
        self.col01 = col01
        self.col02 = col02
        self.col03 = col03
        self.col04 = col04
        self.col05 = col05
        self.col06 = col06
        self.col07 = col07
        self.col08 = col08
        self.col09 = col09
        self.col10 = col10
        self.col11 = col11
        self.col12 = col12
        self.col13 = col13
        self.col14 = col14
        self.col15 = col15
The 2 tables I am pulling data from:
class TableMeta(base):
    __tablename__ = 'meta_extra'
    id = Column(String(20), ForeignKey('main_data.id'), primary_key=True)
    col06 = Column(String)
    col08 = Column(String)
    col02 = Column(String)
    col03 = Column(String)
    col04 = Column(String)
    col09 = Column(String)
    col10 = Column(String)
    col11 = Column(String)
    col12 = Column(String)
    col13 = Column(String)
    col14 = Column(String)
    col15 = Column(String)


class TableMain(base):
    __tablename__ = 'main_data'
    id = Column(String(20), primary_key=True)
    name = Column(String)
    col01 = Column(String)
    col05 = Column(String)
    col07 = Column(String)
    extra_data = relation(
        TableMeta, uselist=False, backref=backref('main_data', order_by=id))
I use 2 queries to collect from these 2 tables: one grabs all records, while the other is part of a function definition that takes multiple dictionaries and applies filters based on the dictionary contents. Both queries are part of my main "worker" script, which is imported by each of my notebook panels.
Here is the function that applies the filter(s):
def multiFilter(theFilters, table, anOutput, qType):
    session = Session()
    anOutput = session.query(table)
    try:
        for x in theFilters:
            for attr, value in x.items():
                anOutput = anOutput.filter(getattr(table, attr).in_(value))
    except AttributeError:
        for attr, value in theFilters.items():
            anOutput = anOutput.filter(getattr(table, attr).in_(value))
    anOutput = convertResults(anOutput.all())
    session.close()  # close the session before returning; code after a return never runs
    return anOutput
theFilters can be either a single dictionary or a list of dictionaries, hence the try/except block. Once the function has applied the filters, it runs the returned results through another function that passes each result through the OlvMainDisplay class and adds them to a list to be handed to the OLV widget.
Again, the big question: is there a way to thread the query (or queries) and return the results without using global variables? Or possibly grab around 200 records at a time and add the data "in chunks" to the OLV widget?
Thank you in advance.
-MikeS
--UPDATE--
I have reviewed "how to get the return value from a thread in python" and the accepted answer does not return anything or still locked the GUI (not sure what is causing the variance). I would like to limit the number of threads created to about 5 at the most.
--New Update--
I made some corrections to the filter function.
You probably don't want to load the entire database into memory at once; that is usually a bad idea. Because ObjectListView is a wrapper around ListCtrl, I would recommend using the virtual version of the underlying widget. The flag is wx.LC_VIRTUAL. Take a look at the wxPython demo for an example, but basically you load data on demand via the virtual methods OnGetItemText(), OnGetItemImage(), and OnGetItemAttr(). Note that those are the ListCtrl methods; the names may differ in OLV land. Anyway, I know that the OLV version is called VirtualObjectListView and works in much the same way. I'm pretty sure there's an example in the source download.
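For illustration, here is a minimal sketch of the idea in plain wx.ListCtrl terms (the column layout and the results argument are hypothetical placeholders; VirtualObjectListView follows the same on-demand pattern):
import wx

class VirtualResultsList(wx.ListCtrl):
    """Pulls rows on demand instead of loading all 25,000 records up front."""
    def __init__(self, parent, results):
        super(VirtualResultsList, self).__init__(
            parent, style=wx.LC_REPORT | wx.LC_VIRTUAL)
        self.results = results            # e.g. the list of OlvMainDisplay objects
        self.InsertColumn(0, "ID")
        self.InsertColumn(1, "Name")
        self.SetItemCount(len(results))   # tell the control how many rows exist

    def OnGetItemText(self, item, col):
        # Called by wx only for the rows that are currently visible.
        record = self.results[item]
        return str(record.id) if col == 0 else str(record.name)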
Ok, I finally managed to get the query to run in a thread and be able to display the results in a standard ObjectListView. I used the answer HERE with some modifications.
I added the code to my main worker script which is imported into my output panel as EW.
Since I am not passing arguments to my query, these lines were changed:
def start(self, params):
    self.thread = threading.Thread(target=self.func, args=params)
to
def start(self):
    self.thread = threading.Thread(target=self.func)
In my output panel I changed how I call my default query, the one that returns 25,000+ records. In my output panel's __init__ I added self.worker = () as a placeholder, and in the function that runs the default query:
def defaultView(self, evt):
    self.worker = EW.ThreadWorker(EW.defaultQuery)
    self.worker.start()
    pub.sendMessage('update.statusbar', msg='Full query started.')
I also added:
def threadUpdateOLV(self):
    time.sleep(10)
    anOutput = self.worker.get_results()
    self.dataOLV.SetObjects(anOutput)

pub.subscribe(self.threadUpdateOLV, 'thread.completed')
The time.sleep(10) was added after trial and error to get the full 25,000+ results; I found a 10-second delay worked fine.
And finally, at the end of my default query I added the PubSub send right before my output return:
wx.CallAfter(pub.sendMessage, 'thread.completed')
session.close()  # close the session before returning; code after the return would never run
return anOutput
To be honest, I am sure there is a better way to accomplish this, but for now it serves the purpose. I will keep working on a better solution, though.
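One possible direction for that better solution (an untested sketch): pass the query results along with the pubsub message instead of sleeping for a fixed delay, since pub.sendMessage accepts keyword arguments:
# In the default query, send the data together with the notification:
wx.CallAfter(pub.sendMessage, 'thread.completed', result=anOutput)

# In the output panel, the subscriber receives it as a matching keyword argument:
def threadUpdateOLV(self, result):
    self.dataOLV.SetObjects(result)

pub.subscribe(self.threadUpdateOLV, 'thread.completed')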
Thanks
-Mike S