I am using OpenGL and the freeglut library for volume rendering and display. In the main thread I initialize the OpenGL window and then acquire volume data frame by frame; the volume rendering is done after each volume has been acquired. This works well but takes a lot of time. Is it possible to keep initializing the OpenGL window in the main thread, but do the volume rendering and display in another thread? I have tried wglMakeCurrent, but it does not update the window that was initialized in the main thread.
Multithreaded OpenGL operation is a nasty beast. What you can do, however, and this is what I strongly suggest, is map a Pixel Buffer Object into the program's address space; that region of address space is visible to all threads. You can then update the volume data from another thread (or, as in the program I'm currently working on, on another GPU), then signal the main thread to update the texture from the new data in the PBO. You can also update only sub-portions of the volume from the PBO with glTexSubImage3D.
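A minimal sketch of that pattern (assuming a GL 2.1-class context is current on the main thread; fillVolume() and signalMainThread() are hypothetical placeholders for your acquisition and signalling code):

```cpp
#include <GL/glew.h>
#include <thread>

void fillVolume(void *dst);    // hypothetical: acquires one volume frame into dst
void signalMainThread();       // hypothetical: e.g. sets an std::atomic<bool>

const int W = 256, H = 256, D = 256;   // example volume dimensions
GLuint pbo, volumeTex;                 // created during GL initialization

// Main (GL) thread: map the PBO and hand the pointer to a worker.
void startUpload() {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, W * H * D, NULL, GL_STREAM_DRAW);
    // The mapped pointer is plain process memory, visible to every thread.
    void *ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    std::thread([ptr] { fillVolume(ptr); signalMainThread(); }).detach();
}

// Main (GL) thread again, after the worker has signalled completion.
void finishUpload() {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    glBindTexture(GL_TEXTURE_3D, volumeTex);
    // With a PBO bound, the last argument is an offset into the buffer.
    glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, 0, W, H, D,
                    GL_RED, GL_UNSIGNED_BYTE, (const void *)0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```

The key point is that the worker thread only writes through an ordinary pointer; the map/unmap and the glTexSubImage3D call all stay on the thread that owns the context.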
I have a setup where a scene is rendered into an offscreen OpenGL framebuffer, then a compute shader extracts some data from it and puts it into a ring buffer allocated on the device. This ring buffer is mapped with glMapBufferRange() to host-readable memory.
On the host side, there should be an interface where a Push() function enqueues the OpenGL operations on the command queue, followed by a glFenceSync().
And a Pull() function uses glClientWaitSync() to wait for a sync object to be finished, and then reads and returns the data from part of the ring buffer.
Ideally it should be possible to call Push() and Pull() from different threads.
But there is the problem that an OpenGL context can only be current on one thread at a time, while glClientWaitSync(), like all other GL functions, needs the proper context to be current.
So the Pull() thread would take the OpenGL context and then call glClientWaitSync(), which can block. During that time Push() cannot be called, because the context still belongs to the other thread while it is waiting.
Is there a way to temporarily release the thread's current OpenGL context while waiting in glClientWaitSync() (similar to how std::condition_variable::wait() unlocks the mutex), or to wait on a GLsync object belonging to another context?
The only solutions seem to be to periodically poll glClientWaitSync() with a zero timeout instead (releasing the context in between), or to set up a second OpenGL context with resource sharing.
You cannot change the current context in someone else's thread. Well, you can (by making that context current in yours), but that causes a data race if the other thread is currently in, or tries to call, an OpenGL function.
Instead, you should have two contexts with objects shared between them. Sync objects are shared between contexts, so that's not a problem. However, after creating a fence you need to flush it on the context that created it before another thread tries to wait on it.
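A minimal sketch of that setup, assuming ctxA and ctxB were created with resource sharing enabled (via your windowing toolkit's sharing parameter) and issueGLWork()/readRingBuffer() stand in for your own code:

```cpp
#include <GL/glew.h>
#include <atomic>

void issueGLWork();      // hypothetical: the GL commands Push() enqueues
void readRingBuffer();   // hypothetical: reads the mapped ring-buffer range

std::atomic<GLsync> fence{nullptr};

// Push() thread -- ctxA is current here and stays current.
void Push() {
    issueGLWork();
    GLsync s = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glFlush();                  // make the fence visible to the other context
    fence.store(s);
}

// Pull() thread -- ctxB (shared with ctxA) is current here.
void Pull() {
    GLsync s = fence.exchange(nullptr);
    if (!s) return;
    // Blocking here stalls only this thread; ctxA stays free for Push().
    while (glClientWaitSync(s, 0, 1000000) == GL_TIMEOUT_EXPIRED)
        ;   // wait in 1 ms slices until the fence signals
    glDeleteSync(s);
    readRingBuffer();
}
```

Since each thread keeps its own context current the whole time, neither thread ever has to release a context while the other one waits.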
I have a shared DX11 texture that is being used with 2 different devices in separate threads.
Thread 1 (operating on device 1): Called every frame; updates the shared texture.
Thread 2 (operating on device 2): Consumes the shared texture by copying it to another texture. It runs much less frequently than thread 1.
According to MSDN, "If a shared texture is updated on one device ID3D11DeviceContext::Flush must be called on that device."
However, calling Flush on thread 1 every frame is very expensive, and we see a massive performance hit. We can't flush device 1 on thread 2, because a device context is not thread-safe.
Is there a way to efficiently make the shared texture update visible when thread 2 needs to consume it?
Thanks for your help! MSDN is not very helpful when dealing with shared textures.
In order to synchronize access to the shared resource between two threads (or between processes) you can use IDXGIKeyedMutex. It is described in detail here: https://msdn.microsoft.com/en-us/library/windows/desktop/ee913554(v=vs.85).aspx#dxgi_1.1_synchronized_shared_surfaces
You can check the provided sample code as well, although it only shows resource sharing between two DX10 devices; it is the same for DX11 devices.
The essential part is to QueryInterface the shared texture for IDXGIResource first and then for IDXGIKeyedMutex. After that, you use the mutex for synchronization via the AcquireSync and ReleaseSync functions.
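A minimal sketch of the consumer side (assuming the texture was created with the D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX flag and opened on the second device via OpenSharedResource; error handling elided):

```cpp
#include <d3d11.h>
#include <dxgi.h>

// Thread 2: copy the shared texture into a local one under the keyed mutex.
void CopyFromShared(ID3D11DeviceContext *ctx,
                    ID3D11Texture2D *sharedTex,
                    ID3D11Texture2D *localTex)
{
    IDXGIKeyedMutex *mutex = nullptr;
    sharedTex->QueryInterface(__uuidof(IDXGIKeyedMutex), (void **)&mutex);

    mutex->AcquireSync(0, INFINITE);  // wait until the producer releases key 0
    ctx->CopyResource(localTex, sharedTex);
    mutex->ReleaseSync(0);            // hand the resource back to the producer

    mutex->Release();
}
```

Thread 1 wraps its per-frame update in the same AcquireSync/ReleaseSync pair on its own device.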
I have vertex buffers holding meshes of terrain chunks. Whenever the player edits terrain, the mesh of the corresponding chunk must be regenerated and uploaded to the vertex buffer. Since regenerating the mesh takes some time, I do it in an asynchronous worker thread.
The issue is that the main thread draws the buffer at the same moment the worker thread uploads new data. That means that after the player edits the terrain, a corrupted chunk gets rendered for one frame. It flares up just once, and after that the correct buffer gets drawn.
This kind of made sense to me: we shouldn't write and read the same data at the same time, of course. So instead of updating the old buffer, I created a new one, filled it, and swapped them. The swap was just changing the buffer id stored within the terrain chunk struct, so that should be atomic. However, that didn't help.
Since OpenGL commands are sent to a queue on the GPU, they are not necessarily executed by the time the application continues on the CPU. So I may have swapped the buffers before the new one was actually ready.
I also tried an alternative to swapping the buffers: a mutex for buffer access. The main thread locks it while drawing and the worker thread locks it while uploading new buffer data. However, this didn't help either, and that may also be because of OpenGL's asynchronous nature: the main thread doesn't actually draw, it just sends draw commands to the GPU. On the other hand, if there really is only one command queue, uploading buffers and drawing them could never occur at the same time, could they?
How can I synchronize the vertex buffer access from my two threads to prevent an undefined buffer from being drawn for one frame?
You must make sure that the buffer update has actually completed before you use that buffer in your draw thread. The easiest solution would be to call glFinish in your update thread after you have issued all the update GL commands, and to notify the draw thread only after that returns.
For more fine-grained control over the synchronization, I would advise you to have a look at fence sync objects (as described in the GL_ARB_sync extension). You can issue a fence sync after your update commands and store the sync object handle alongside your buffer handle, so that the draw thread can check whether the update has actually completed (or wait for it). Note that sync objects are special in that they are the only objects not tied to a GL context, so they can be used in multi-context setups.
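A minimal sketch of pairing each buffer with its fence (assuming the worker thread has its own shared context; the names are illustrative):

```cpp
#include <GL/glew.h>

struct ChunkBuffer {
    GLuint vbo;
    GLsync uploadFence;   // nullptr once the upload is known to be complete
};

// Worker thread: upload the regenerated mesh, then fence it.
void upload(ChunkBuffer &cb, const void *data, GLsizeiptr size) {
    glBindBuffer(GL_ARRAY_BUFFER, cb.vbo);
    glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
    cb.uploadFence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glFlush();   // push the fence to the GL server before anyone waits on it
}

// Draw thread: draw the new buffer only once its fence has signaled.
bool readyToDraw(ChunkBuffer &cb) {
    if (cb.uploadFence) {
        GLenum r = glClientWaitSync(cb.uploadFence, 0, 0);  // poll, no block
        if (r != GL_ALREADY_SIGNALED && r != GL_CONDITION_SATISFIED)
            return false;    // keep drawing the old buffer this frame
        glDeleteSync(cb.uploadFence);
        cb.uploadFence = nullptr;
    }
    return true;
}
```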
I want to load assets for my OpenGL application in a separate thread so that I can create a loading screen, but when I try to call an OpenGL function in another thread, the application crashes, and I need those calls to load textures. Is there any way I can use multithreading with OpenGL? Or am I stuck with, say, loading one asset per frame and making the screen choppy and bad-looking? I've seen that you can accomplish this on Windows, but I want this to run on Unix (specifically Mac OS X) much more than on Windows.
Messing with a single OpenGL context from different threads is usually bound to result in trouble. What you can do, though, is use a Pixel Buffer Object (PBO) for the texture update: map it in the main (OpenGL) thread, pass the mapped pointer to the loading thread to be filled with the file contents, and then unmap the PBO followed by a glTexImage2D (sourcing from the PBO, of course) in the main thread once the loading thread has finished. By using two different PBOs, one currently being filled by the loading thread and one currently being copied into a texture by the main thread, plus some proper synchronization, you can get the file loading and the texture update to run concurrently (look at the linked tutorial for some, albeit single-threaded, examples).
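A minimal sketch of the two-PBO scheme (assuming the GL context is current on the main thread; loadFileInto() and the completion signal are your own code):

```cpp
#include <GL/glew.h>
#include <thread>

void loadFileInto(void *dst);   // hypothetical: decodes an image file into dst

GLuint pbo[2];                  // ping-pong pixel buffer objects
int filling = 0;                // index of the PBO the loader writes into

// Main (GL) thread: map a PBO and hand the pointer to the loader thread.
void beginLoad(size_t bytes) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[filling]);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, bytes, NULL, GL_STREAM_DRAW);
    void *ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    std::thread([ptr] { loadFileInto(ptr); /* then signal main thread */ })
        .detach();
}

// Main (GL) thread, after the loader signals: unmap and create the texture.
void finishLoad(GLuint tex, int w, int h) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[filling]);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);  // source: PBO
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    filling ^= 1;   // next load fills the other PBO
}
```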
Suppose I use QGLWidget's paintGL() method to draw into the widget using OpenGL. After Qt calls the paintGL() method, it automatically triggers a buffer swap. In OpenGL, this buffer swap usually blocks the calling thread until rendering of the frame to the back buffer has completed, right? I wonder which Qt thread calls paintGL() and performs the buffer swap. Is it the main Qt UI thread? If it is, wouldn't that mean that blocking during the buffer swap also blocks the whole UI? I could not find any information about this process in general.
Thanks
I don't use QGLWidget very often, but yes: if swapBuffers() is synchronous, the Qt GUI thread is stuck. This means that during that operation you'll be unable to process events.
Anyway, if you're experiencing difficulties with this, consider reading this article, which shows how to use multithreaded OpenGL to overcome this difficulty.
Even better, this article explains the situation well and introduces the new multithreaded OpenGL capabilities in Qt 4.8, which is now a release candidate.
In OpenGL, this buffer swap usually blocks the calling thread until the frame rendering to the background buffer is completed, right?
It depends on how it is implemented, which means that it varies from hardware to hardware and driver to driver.
If it is, wouldn't that mean that the block during the buffer swap also blocks the whole UI?
Even if it does block, it will only do so for 1/60th of a second; maybe 1/30th if your game is slowing down, or 1/15th if you're really slow. The at most one keypress or mouse action the user performs in that time will still be waiting in the message queue.
The issue with blocking isn't really the UI; it will be responsive enough for the user not to notice. But if you have strict timing requirements (as you might for a game), I would suggest avoiding paintGL altogether. You should be rendering when you want to, not when Qt tells you to.
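For illustration only, one way "rendering when you want to" can look with QGLWidget (a sketch under Qt 4 assumptions; the class and loop are illustrative, not the only approach):

```cpp
#include <QApplication>
#include <QGLWidget>

// Hypothetical widget: Qt-driven painting is disabled; we render ourselves.
class GameWidget : public QGLWidget {
protected:
    void paintEvent(QPaintEvent *) {}  // swallow Qt's paint requests
public:
    void renderFrame() {
        makeCurrent();    // bind the widget's GL context to this thread
        // ... issue GL draw calls here ...
        swapBuffers();    // present; may block until vsync
    }
};

int main(int argc, char **argv) {
    QApplication app(argc, argv);
    GameWidget w;
    w.show();
    while (w.isVisible()) {
        app.processEvents();  // keep the UI responsive
        w.renderFrame();      // render on our schedule, not Qt's
    }
    return 0;
}
```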