I'm writing an app that embeds a bunch of matplotlib figures into a PyQt GUI. Updating these figures can take up to a few seconds, so I would like to display a waiting indicator while the plots are being drawn. I've moved all the data processing code into its own thread, but the actual plotting calls often account for the majority of the processing time.
I have written a waiting indicator that uses a QTimer instance to trigger repaints of the widget. This works just fine when all the intensive processing can be pushed into another thread. The problem is that the calls that construct the matplotlib plots cannot be moved outside the main thread, due to the way Qt is designed, and so they block the updating of the waiting indicator, rendering it kind of useless.
I've introduced some calls to QCoreApplication.processEvents() after updating each figure, which improves the responsiveness a little. I've also toyed with the idea of monkeypatching a bunch of methods on matplotlib.axes.Axes to include calls to QCoreApplication.processEvents(), but I can see that getting messy. Is this the best I can do? Is there any way to interrupt the main thread at regular intervals and force it to process new events?
It should also help a great deal to do the actual drawing on a pixmap in a thread. Drawing that pixmap with QPainter's drawPixmap() method is very fast, and you need to recreate the pixmap only when really needed (e.g. after zooming). In the meantime you just reuse the already drawn pixmap. The actual paint events using drawPixmap() will cost close to nothing and your GUI will stay completely responsive.
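One caveat: Qt only lets you create and paint QPixmap objects on the GUI thread, so a common variant is to render the figure to a plain image off-thread and convert it to a pixmap on the GUI thread. A minimal sketch along those lines, assuming PyQt5 and matplotlib's Agg backend (the widget and signal names are my own, not from the question):

import threading

from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.figure import Figure
from PyQt5.QtCore import pyqtSignal
from PyQt5.QtGui import QImage, QPainter, QPixmap
from PyQt5.QtWidgets import QWidget

class PlotWidget(QWidget):
    # emitted from the worker thread; Qt delivers it on the GUI thread
    image_ready = pyqtSignal(QImage)

    def __init__(self, parent=None):
        super().__init__(parent)
        self._pixmap = None
        self.image_ready.connect(self._set_pixmap)

    def redraw(self, data):
        # heavy rendering happens off the GUI thread; Agg touches no widgets
        threading.Thread(target=self._render, args=(data,), daemon=True).start()

    def _render(self, data):
        fig = Figure()
        FigureCanvasAgg(fig)  # attach an off-screen canvas
        fig.add_subplot(111).plot(data)
        fig.canvas.draw()
        w, h = fig.canvas.get_width_height()
        img = QImage(bytes(fig.canvas.buffer_rgba()), w, h,
                     QImage.Format_RGBA8888)
        self.image_ready.emit(img.copy())  # copy: detach from the Agg buffer

    def _set_pixmap(self, img):
        self._pixmap = QPixmap.fromImage(img)  # pixmap made on the GUI thread
        self.update()  # schedule a cheap repaint

    def paintEvent(self, event):
        if self._pixmap is not None:
            QPainter(self).drawPixmap(0, 0, self._pixmap)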
Littering the code with processEvents() calls is not only ugly but can cause nasty, hard-to-debug malfunctions. For example, it can trigger premature deletion of objects that are still in use but were already scheduled for deletion with deleteLater().
This answer might also be of use: Python - matplotlib - PyQT: plot to QPixmap
I haven't used matplotlib yet. But if it draws directly to QWidgets and cannot be used without them, it won't be as easy as described above. In that case you could do the drawing in another process started by your GUI: that process uses matplotlib as in the link above and stores the pixmap to disk, and your GUI loads it whenever a new pixmap is ready. QFileSystemWatcher might help here to avoid polling.
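A minimal sketch of the GUI side of that arrangement, assuming PyQt5 (the directory and file names are placeholders):

import os

from PyQt5.QtCore import QFileSystemWatcher
from PyQt5.QtGui import QPixmap
from PyQt5.QtWidgets import QApplication, QLabel

PLOT_DIR = "/tmp/plots"  # the renderer process writes finished plots here

app = QApplication([])
label = QLabel()
label.show()

def reload_pixmap(path):
    # fires whenever the renderer process adds or rewrites a file;
    # in a real app the renderer should write to a temp file and rename,
    # so the image is never read half-written
    latest = os.path.join(PLOT_DIR, "current.png")
    if os.path.exists(latest):
        label.setPixmap(QPixmap(latest))

watcher = QFileSystemWatcher([PLOT_DIR])
watcher.directoryChanged.connect(reload_pixmap)
app.exec_()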
I switched from OpenGL to Vulkan to take advantage of its multi-threading improvements.
In OpenGL, I was able to dynamically load objects into the scene (buffers, textures, etc.) while rendering by using a waiting system. I loaded all the app-side data in a thread, and then, when it was ready, just before a frame render in the main thread, I sent everything into video memory. That was fine.
With Vulkan, I know I can call some functions from multiple threads without provoking the well-known segfaults you get in OpenGL. But this doesn't work with vkQueueSubmit(); I already know, because I tried the naive way. It seems logical to me that you can't bother a single queue from multiple threads.
I came up with some ideas, but I don't know which ones are good or bad.
First, I could go the OpenGL way: prepare everything I can on the CPU/app side, then, just before rendering a frame, submit the buffers (via the transfer queue) to video memory. But I feel there is no real improvement over the OpenGL approach...
Second, I could try to use the synchronization mechanisms to send buffers from one thread and render from another. But I keep reading that there are many ways to slow everything down, by introducing unnecessary locks or by using semaphores and fences incorrectly.
So my question is basically: which path should I pick to solve this problem? How can I load a buffer dynamically from another thread while the main thread is rendering, without hurting performance too much? How can Vulkan help?
If you want to stream resources for immediate use (i.e. the main render cannot proceed without them), then you're pretty much going to either block the main thread while you wait, or have it spin doing something visually interesting (e.g. an animated loading screen) while the resources load.
If you want to stream resources while the app is doing real rendering, the main trick is to load resources asynchronously in the background and only switch to using them in the main thread once they are already loaded. If the main thread ever ends up actually blocked on a semaphore, you've probably already started dropping frames, so your "engine" design needs to ensure that never happens. A lot of games use simple low-detail proxy objects as stand-in versions while the high-detail version is loading in the background.
None of this is particularly related to the graphics API - both GL and Vulkan need the same macro-scale behavior. Vulkan API features don't particularly help because the major bottlenecks which cause problems here are storage/network/CPU which have nothing to do with the graphics part of the problem.
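As a rough illustration of that macro-scale pattern, here's a minimal proxy-swap sketch in Python (every name is hypothetical, and the loading/drawing calls are stubs):

import threading
import time

def expensive_load(name):
    # stand-in for storage/network-bound loading plus decoding
    time.sleep(2)
    return "high_detail_" + name

def draw(models):
    print("rendering", models)

class Scene:
    def __init__(self):
        self._lock = threading.Lock()
        self.models = {"rock": "low_detail_rock"}  # low-detail proxy

    def _load_high_detail(self, name):
        high = expensive_load(name)       # heavy work stays off the render thread
        with self._lock:
            self.models[name] = high      # swap in only once fully loaded

    def request_high_detail(self, name):
        threading.Thread(target=self._load_high_detail,
                         args=(name,), daemon=True).start()

    def render_frame(self):
        with self._lock:                  # render thread only blocks for
            current = dict(self.models)   # this brief copy, never for I/O
        draw(current)

scene = Scene()
scene.request_high_detail("rock")
for _ in range(3):
    scene.render_frame()                  # keeps drawing the proxy meanwhile
    time.sleep(1)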
I decided to trust threads!
At first it seemed to work, though I got a lot of:
[MESSAGE:Validation Error: [ UNASSIGNED-Threading-MultipleThreads ] Object 0: handle = 0x56414228bad8, type = VK_OBJECT_TYPE_QUEUE; | MessageID = 0x141cb623 | THREADING ERROR : vkQueueSubmit(): object of type VkQueue is simultaneously used in thread 0x7f6b977fe640 and thread 0x7f6bc2bcb740]
But it works!
So the basic idea is to have a thread for loading objects while the engine is drawing. This thread takes care of creating the UBO for the object's location; then, when the geometry is loaded from RAM, it creates the VBO and IBO (I've left materials with images/UBOs on hold for now), then creates the graphics pipeline (with layout, descriptor set layout, and shaders compiled on the fly with glslang) (the next idea is to reuse pipelines for similar needs), and finally sets a flag to say the object is ready to use. On the other side, my main thread keeps rendering and picks up new objects as they become ready.
I think it works because I have a decent video card (GTX 1070) with multiple queues available; I set one up for graphics and another one for transfer.
I'm pretty sure this will crash or go poorly on a GPU with a single queue, and that is probably why the validation layers give me these messages.
I'm writing a graphics program in Python and I would like to know how to make a Canvas update only on demand; that is, stop the canvas from updating on every run of the event loop and instead update only when I tell it to.
I want to do this because in my program I have a separate thread that reads graphics data from standard input to prevent blocking the event loop (given that there's no reliable, portable way to poll standard input in Python, and polling sucks anyway), but I want the screen to be updated only at intervals of a certain amount of time, not whenever the separate thread starts reading input.
You can't pause the update of the canvas without pausing the entire GUI.
A simple solution would be for you to not draw to the canvas until you're ready for the update. Instead of calling canvas commands, push those commands onto a queue. When you're ready to refresh the display, iterate over the commands and run them.
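A minimal Tkinter sketch of that queued-commands idea (the names are placeholders):

import queue
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300)
canvas.pack()
pending = queue.Queue()  # the input-reading thread pushes draw commands here

def queue_draw(method, *args, **kwargs):
    # called from the reader thread instead of touching the canvas directly
    pending.put((method, args, kwargs))

def flush(interval_ms=100):
    # runs on the Tk thread; applies everything queued since the last flush
    while True:
        try:
            method, args, kwargs = pending.get_nowait()
        except queue.Empty:
            break
        getattr(canvas, method)(*args, **kwargs)
    root.after(interval_ms, flush, interval_ms)

queue_draw("create_line", 0, 0, 200, 150, fill="red")
flush()
root.mainloop()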
You could also do your own double-buffering, where you have two canvases. The one you are actively drawing would be behind the visible one. When you are ready to display the results, swap the stacking order of the canvases.
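A sketch of that two-canvas variant, swapping the stacking order with tkraise() (again just an illustration):

import tkinter as tk

root = tk.Tk()
# two canvases stacked in the same grid cell; only the raised one is visible
front = tk.Canvas(root, width=400, height=300, bg="white")
back = tk.Canvas(root, width=400, height=300, bg="white")
front.grid(row=0, column=0)
back.grid(row=0, column=0)
front.tkraise()

def swap():
    global front, back
    back.tkraise()             # show the freshly drawn buffer
    front, back = back, front  # the old front becomes the new draw target
    back.delete("all")         # clear it for the next frame

back.create_oval(100, 100, 200, 200, fill="blue")  # draw while hidden
root.after(1000, swap)         # display the result when ready
root.mainloop()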
I have an experiment in which I present stimuli using PsychoPy / PyGaze and track eye movements with an EyeTribe eye tracker. In this experiment I update the size of two visual stimuli on each frame (at 60 Hz). I prepare each frame beforehand and afterwards loop through all of the screen objects and present them. Meanwhile, a continuous sound is playing. When I run this experiment in dummy mode (mouse movement is used as a simulation for gaze position), there are no timing issues for the visual presentation. However, when I run the experiment while performing eye tracking, the timing of the visual presentation is no longer accurate (higher variability in duration of frames).
I tried looking into the multithreading more, but in the pytribe script of PyGaze I can't find any evidence that one thread is waiting for an event from the eye-tracking thread. So I have no idea how to figure out what is causing the timing issues, or how to solve them. (I hope I explained the problem specifically enough.)
It's worse than just needing a separate thread for eye tracking versus stimulus rendering. What you really need is a separate process, to avoid the Python Global Interpreter Lock (GIL). The GIL prevents Python threads from truly running on different processors at the same time.
For improved temporal precision I would really recommend you switch from PyGaze to ioHub (which also has support for the EyeTribe, I believe). ioHub genuinely runs on a different core of the machine where possible, so your stimuli and eye data can be processed independently in time, and it handles all the synchronization for you.
Adding to Jon's answer: Hanne also emailed about the problem, and it turns out she was running her experiments from Spyder. When run from the command prompt, there shouldn't be any timing issues. (Obviously, the GIL is still around, but in practice this doesn't seem to affect screen timing.)
To prevent any issues in the future, I've added a class that allows for running the EyeTribe in a parallel Process. See: https://github.com/esdalmaijer/PyTribe/blob/master/pytribe.py#L365
Example use:
if __name__ == "__main__":
    import time

    from pygaze.display import Display
    from pygaze.screen import Screen
    from pytribe import ParallelEyeTribe

    disp = Display()
    scr = Screen()
    scr.draw_fixation(fixtype='cross')
    tracker = ParallelEyeTribe()  # runs the tracker in a separate process
    tracker.start_recording()
    disp.fill(scr)
    disp.show()  # fixation cross appears
    tracker.log("Stimulus onset")
    time.sleep(10)
    disp.show()
    tracker.log("Stimulus offset")
    tracker.stop_recording()
    tracker.close()
    disp.close()
I am using LibGDX to make a game. I want to load and unload assets on the fly as needed; however, waiting for assets to load on the main thread causes lag. To remedy this, I've created a background thread that monitors which assets need to be loaded (textures, sounds, etc.) and loads/unloads them appropriately.
Unfortunately, I get the following error when calling AssetManager.update() from that thread.
com.badlogic.gdx.utils.GdxRuntimeException: java.lang.RuntimeException: No OpenGL context found in the current thread.
I've tried running the background thread's code in the main thread at the beginning and just dealing with the first few screens, and everything works fine. I can also change the algorithm to load everything into memory from the start on the same thread, and that works as well. However, neither works from the background thread.
When I run this on Android with OpenGL ES 2.0 (which is flexible in odd ways) instead of on Windows, everything runs fine, and I can even get the pixel dimensions of the images - but the textures render black.
My searches have told me that this is an issue of the OpenGL context being bound to a single thread, but not much else. This explains why everything works when I shove it in the main thread, and not when I put it in a different one. How do I fix this context problem?
First things first, you should not access the OpenGL context outside of the rendering thread.
I assume you have looked at these already, but just to make sure, read up on the AssetManager wiki article, which talks a bit about how to use the AssetManager for asynchronous management of assets. In addition to the wiki article, check out the AssetManagerTest to better understand how to use it; it is probably your best bet for learning how to load assets dynamically.
If you are loading a ton of stuff, you may want to look into creating a loading bar to load anything large upfront. It might work to check assets and such from another thread (and set a flag to call update), but at the end of the day you will need to call update() on the rendering thread.
Keeping in mind that you have to call update() on the rendering thread, I don't see why you would want another thread to check conditions and set a flag; there is probably more overhead in using another thread and synchronizing the update() call than in just doing it all on the rendering thread. Also, the update() method only pauses for a couple of milliseconds at a time as it incrementally loads files. Typically, you would simply call load() for your asset, then check isLoaded() on it. If it isn't loaded, you call update() once per frame until isLoaded() returns true; once it does, you can call get() to retrieve the asset. This can all be done on the main rendering thread without the app lagging while it loads.
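The real API is Java; the following is just a Python-flavored sketch of that per-frame flow, with a stub manager standing in for the libGDX AssetManager (none of these names are the real API):

class StubAssetManager:
    """Toy stand-in that mimics incremental loading, not the real class."""
    def __init__(self):
        self._progress = {}

    def load(self, path):
        self._progress[path] = 0       # queue the asset

    def is_loaded(self, path):
        return self._progress.get(path, 0) >= 3

    def update(self):
        # the real update() does a few milliseconds of loading per call
        for path in self._progress:
            self._progress[path] += 1

    def get(self, path):
        return "asset:" + path

manager = StubAssetManager()
manager.load("textures/ship.png")

def render_frame():
    # called once per frame on the rendering thread
    if manager.is_loaded("textures/ship.png"):
        texture = manager.get("textures/ship.png")
        print("drawing", texture)
    else:
        manager.update()               # one small slice of loading work
        print("drawing placeholder")

for _ in range(5):
    render_frame()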
If you really want another thread to trigger update(), you need to create a Runnable object and call postRunnable(), as described in the wiki article on multi-threading with libGDX. However, this defeats the whole point of using other threads, because anything you run with postRunnable() runs synchronously on the rendering thread.
"The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing."
I get this error when I try to use a SpriteFont to draw my FPS on the screen, on the line where I call spriteBatch.End().
My effect doesn't even use texture coordinates.
But I have found the root of the problem, just not how to fix it.
I have a separate thread that builds the geometry (an LOD algorithm) and somehow this seems to be why I have the problem.
If I make it non-threaded and just update my mesh per frame I don't get an error.
And also if I keep it multithreaded but don't try to draw text on the screen it works fine.
I just can't do both.
To make it even stranger, it actually compiles and runs for a little while, but it always crashes eventually.
I put a Thread.Sleep in the method that builds the mesh so that it runs less often, and I saw that the more this thread sleeps, and the less often it runs, the longer the program lasts on average before crashing.
If it sleeps for 1000 ms, it runs for maybe a minute. If it sleeps for 10 ms, it doesn't even show one frame before crashing. This makes me believe the crash happens when a certain line of code on the mesh-building thread executes at the same time as the text is being drawn on the screen.
It seems like maybe I have to lock something when drawing the text, but I have no clue.
Any ideas?
My information comes from the presentation "Understanding XNA Framework Performance" from GDC 2008. It says:
GraphicsDevice is somewhat thread-safe:
- Cannot render from more than one thread at a time
- Can create resources and SetData while another thread renders
ContentManager is not thread-safe:
- OK to have multiple instances, but only one per thread
My guess is that you're breaking one of these rules somewhere, or modifying a buffer that is being used for rendering without appropriate locking.
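As a rough illustration of the lock-and-swap idea (a language-agnostic sketch in Python; the real fix would apply the same pattern around your vertex buffer and draw calls, and every name here is hypothetical):

import threading

def draw(vertices):
    print("drawing", len(vertices), "vertices")

class MeshBuffer:
    def __init__(self):
        self._lock = threading.Lock()
        self._vertices = []               # what the render thread reads

    def rebuild(self, lod_level):
        # mesh-building thread: build into a scratch buffer first
        scratch = [i * lod_level for i in range(1000)]
        with self._lock:
            self._vertices = scratch      # publish atomically, never mutate in place

    def render(self):
        # render thread: grab a consistent snapshot under the lock
        with self._lock:
            vertices = self._vertices
        draw(vertices)                    # safe: rebuild() never touches this list

mesh = MeshBuffer()
threading.Thread(target=mesh.rebuild, args=(2,), daemon=True).start()
mesh.render()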