Shared context with GLUT on Linux

In my current setup, I have two displays driven by two GPUs. Using GLUT, I create two windows (one per display) and render each one from the main thread, calling glutSetWindow() for each window in its draw call.
The draw call renders a Texture2D on a sphere (using gluSphere()), but the Texture2D is swapped for another image every few seconds. I have set up an array of two Texture2Ds so I can load the next image while the current one is shown. This works well as long as everything runs in the main thread.
The problem is that the call to glTexImage2D(), which loads the next image, hangs my draw call, so I need to call glTexImage2D() on a different thread. Calling glTexImage2D() on a different thread crashes, as it seems the OpenGL context is not shared. GLUT does not seem to provide a way to share the context, but I should be able to get the context on Linux via glXGetCurrentContext().
My question is: if I get the context via this call, how can I make it a shared context? And would this even work with GLUT? Another option would be to switch to a different library to replace GLUT, like GLFW, but in that case I would lose some handy functions such as gluSphere(). Any recommendations in case the context cannot be shared with GLUT?

With GLX, context sharing is established at context creation; unlike WGL, you can't establish that sharing as an afterthought. Since GLUT doesn't have a context-sharing feature (FreeGLUT may have one, but I'm not sure about that), this is not going to be straightforward.
I have two displays that are being driven by two GPUs.
Unless those GPUs are in SLI or CrossFire, you can't establish context sharing between them.
The problem is that the call to glTexImage2D(), to load the next image, hangs my draw call, so I need to call glTexImage2D() on a different thread.
If the images are of the same size, use glTexSubImage2D to replace the contents. Image data can also be loaded asynchronously using pixel buffer objects, with a secondary thread that doesn't even need an OpenGL context!
Outlining the steps (a code sketch follows the outline):

In the OpenGL context thread:
- initiate the transfer:
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboID);
    void *p = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
- signal the transfer thread
- continue with normal drawing operations

In the transfer thread:
- on the signal to start the transfer:
    copy the data into the mapped buffer
- signal the OpenGL context thread

In the OpenGL context thread:
- on the signal to complete the transfer:
    glUnmapBuffer
    glTex[Sub]Image
    sync = glFenceSync
    keep on drawing with the old texture
- on further iterations of the drawing loop:
    poll sync with glClientWaitSync, using a timeout of 0
    if the wait returns signalled, switch to the new texture and delete the old one
    else keep on drawing with the old texture
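A minimal sketch of that outline, assuming a fixed-size RGBA image; the names pboID, texID, nextImagePixels and imageBytes are placeholders, and the actual cross-thread signalling is left to the reader:

// GL context thread: initiate the transfer
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboID);
glBufferData(GL_PIXEL_UNPACK_BUFFER, imageBytes, NULL, GL_STREAM_DRAW); // orphan old storage
void *p = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
// ...signal the transfer thread, passing it p, then keep drawing...

// Transfer thread: no GL context required, just a plain copy
memcpy(p, nextImagePixels, imageBytes);
// ...signal the GL context thread...

// GL context thread: complete the transfer
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboID);
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
glBindTexture(GL_TEXTURE_2D, texID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, NULL); // NULL = read from the bound PBO
GLsync sync = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

// GL context thread, on later iterations of the draw loop
GLenum r = glClientWaitSync(sync, 0, 0); // timeout of 0: just poll
if (r == GL_ALREADY_SIGNALED || r == GL_CONDITION_SATISFIED) {
    glDeleteSync(sync);
    // switch to the new texture and delete the old one
} // else keep on drawing with the old texture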

Related

MFC: how to draw opengl from a different thread?

I am trying to do some OpenGL 1.0 animation in a CWnd window at 60 fps. I created a subclass of CWnd:
class COpenGLControl : public CWnd
{
...
}
I found that if I use the built-in timer SetTimer() and set it to fire every 1000/60 ms, all the OpenGL commands render correctly. However, if I implement my own timer using a separate thread, nothing is drawn; all I get is a black screen.
Is there a way to issue opengl commands from a different thread?
Even if you are not intending to issue GL calls from multiple threads, you have to take OpenGL's rules for threading into account: an OpenGL context can be used by at most one thread at a time (and, per thread, there can be at most one active GL context at any time). That does not mean that you cannot use the same context in multiple threads, or create it in one and use it in another; you just have to explicitly "hand over" the context from one thread to another.
I don't know if you use some further library for GL context handling, so I'm assuming you are using the native API of your OS - in this case WGL. The relevant function is wglMakeCurrent(). To hand over a context that is "current" in thread A to thread B, thread A must first call wglMakeCurrent(NULL, NULL) before thread B can get the context via wglMakeCurrent(someDC, myGLCtx). You can of course switch a GL context around as many times as you like, but that introduces a huge synchronization overhead and should be avoided.
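A minimal sketch of that hand-over, assuming hDC and hGLRC were saved when the context was created and are visible to both threads:

// Thread A: release the context so another thread can take it
wglMakeCurrent(NULL, NULL);
// ...signal thread B that the context is free...

// Thread B: take ownership and render
wglMakeCurrent(hDC, hGLRC);
// ...issue GL calls, then SwapBuffers(hDC)...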
From your comments:
Would work If I create the context also in the timer thread?
Yes, it would.
Just a side note: Creating is not the issue here at all, since creating a GL context does not automatically make it "current" to a thread - so you can just create it in thread A and afterwards make it current directly to thread B.

Segfault in multithreaded OpenGL?

I'm running into an issue where OpenGL calls in multiple threads sometimes cause a segfault, and I can't figure out what I'm doing wrong. I'm not sharing a context or anything else between threads. The crash output is:
invalid CoreGraphics connection
Segmentation fault: 11
The actual CGL result code is
kCGLBadConnection - Invalid connection to Core Graphics.
https://developer.apple.com/library/mac/documentation/graphicsimaging/reference/cgl_opengl/Reference/reference.html#//apple_ref/doc/uid/TP40001186-CH3g-BBCDCEBD
The end use case here is to render images asynchronously with libuv (doing some processing on the CPU then uploading data to the GPU for rendering), but I've worked up a simple test case which replicates this issue.
https://github.com/mikemorris/headless-gl-multithreaded
You need a valid OpenGL context bound to the thread when calling glReadPixels. The CGL variant of View::resize unbinds the OpenGL context at the end, so glReadPixels is called without an OpenGL context being active. I think this might be part of the reason for your problem.
It appears that the cause of the crash is multiple threads simultaneously trying to open a display connection in CGLChoosePixelFormat (or XOpenDisplay/glXChooseVisual in GLX). Opening a single connection in the main thread and then using this connection when instantiating new threads (each of which creates their own context) seems to fix this.
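A rough sketch of that fix under GLX (names illustrative, error handling omitted): make Xlib thread-safe, open the display once on the main thread, then let each worker create its own context from the shared connection.

// Main thread, before spawning workers:
XInitThreads();                       // must precede any other Xlib call
Display *dpy = XOpenDisplay(NULL);    // the single shared connection

// In each worker thread:
static const int attribs[] = { GLX_DOUBLEBUFFER, True, None };
int n = 0;
GLXFBConfig *cfgs = glXChooseFBConfig(dpy, DefaultScreen(dpy), attribs, &n);
GLXContext ctx = glXCreateNewContext(dpy, cfgs[0], GLX_RGBA_TYPE, NULL, True);
// ...create a drawable, glXMakeContextCurrent(dpy, drawable, drawable, ctx),
// render, glReadPixels, etc...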

(D3D11) Reading texel on separate thread

In D3D10, I load a staging texture onto my GPU memory, then map it in order to access its texel data on the CPU. This is done on a separate thread, not the thread I render with. I just call the device methods, and it works.
In D3D11 I load the staging texture onto my GPU, but to access it (i.e. Map it) I need to use the context, not the device. I can't use the immediate context, since it can only be used by a single thread at a time. But I also can't use a deferred context to read from the texture back to the CPU:
"If you call Map on a deferred context, you can only pass D3D11_MAP_WRITE_DISCARD, D3D11_MAP_WRITE_NO_OVERWRITE, or both to the MapType parameter. Other D3D11_MAP-typed values are not supported for a deferred context."
http://msdn.microsoft.com/en-us/library/ff476457.aspx
Ok, so what am I supposed to do now? It is common to use textures to store certain data (heightmaps for instance) and you obviously have to be able to access that data for it to be useful. Is there no way for me to do this in a separate thread with D3D11?
You should map the staging texture using the immediate context on the render thread, then use the contents as you wish on your second thread. Even in D3D10, the call to map the texture for read ends up putting a synchronization point in the command buffer (refer to this article), effectively serializing your threads. The D3D11 API makes an effort to discourage hidden performance costs like this.
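A sketch of that pattern, under the assumption that the copy and Map happen on the render thread and only the resulting bytes cross the thread boundary (device, immediateContext and gpuTex are placeholder names):

// On the render thread: create a CPU-readable copy and map it
D3D11_TEXTURE2D_DESC desc;
gpuTex->GetDesc(&desc);
desc.Usage          = D3D11_USAGE_STAGING;
desc.BindFlags      = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags      = 0;

ID3D11Texture2D *staging = NULL;
device->CreateTexture2D(&desc, NULL, &staging);
immediateContext->CopyResource(staging, gpuTex);                // GPU-side copy

D3D11_MAPPED_SUBRESOURCE mapped;
immediateContext->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);  // the sync point
std::vector<unsigned char> texels(
    (unsigned char *)mapped.pData,
    (unsigned char *)mapped.pData + mapped.RowPitch * desc.Height);
immediateContext->Unmap(staging, 0);
staging->Release();
// 'texels' is plain CPU memory now and can be handed to any thread.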

How to paint onto QLabel in another thread

I've got a specific target: to draw a road net. So I have a number of points (x, y) and I'd like to connect them (using the drawLine function). Because of their number (about 2-3 million), I need to do it in another thread. I have a special area for drawing - a QLabel. I've tried doing it through a QPixmap in the main thread and everything is fine, but when I try to do it through a signal/slot in another thread, no image appears :(
Actually, when I transform my coordinates into GUI coordinates they become fractional, so I don't know how to paint them, because the drawLine function has integer arguments: (int x1, int y1, int x2, int y2).
This is how I create the other thread (I need to run only one function, so I think this is the best way):
QtConcurrent::run(this,&MainWindow::parseXML)
Hope you can help me, because I'm going mad %)
P.S. I've read that QPixmap is not supported for multi-threaded drawing, so now I have no idea how to do this.
From the Qt documentation: QPainter can be used in a thread to paint onto QImage, QPrinter, and QPicture paint devices. Painting onto QPixmaps and QWidgets is not supported. On Mac OS X the automatic progress dialog will not be displayed if you are printing from outside the GUI thread.
If you need to do your painting in a thread other than the Qt GUI thread, do this (a sketch follows the list):
In your non-GUI thread, create a QImage object
Use a QPainter to paint into the QImage object
Use QApplication::postEvent or a queued signal/slot connection to pass the QImage object over to the main thread in a thread-safe manner
The main thread can now convert the QImage object into a QPixmap (this will be relatively quick to do) and then display it as usual.
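A minimal sketch of those four steps, assuming a Worker object that lives in a non-GUI thread (all names here are illustrative):

class Worker : public QObject {
    Q_OBJECT
public slots:
    void render() {
        QImage img(800, 600, QImage::Format_ARGB32_Premultiplied);
        img.fill(Qt::white);
        QPainter p(&img);
        p.drawLine(0, 0, 799, 599);   // ...the actual road-net drawing...
        emit done(img);               // queued automatically across threads
    }
signals:
    void done(const QImage &img);
};

// In the GUI thread:
connect(worker, SIGNAL(done(QImage)), this, SLOT(showImage(QImage)));

void MainWindow::showImage(const QImage &img) {
    label->setPixmap(QPixmap::fromImage(img));  // quick QImage -> QPixmap
}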
You are apparently looking for a QGraphicsView (or preferably QQuickView if you care about performance and are working with Qt5). That's the solution which Qt offers for exactly this purpose.
To your question -- there is no way in Qt to do the painting in a separate thread; no widget class can be touched from another thread. The proposed invokeMethod call is actually an asynchronous callback which gets queued for execution in the main thread. You could generate a QImage, pass it to the GUI thread and let the GUI use it, but I'd seriously suggest working with the scene graph (the QGraphicsView) because it was designed and optimized for precisely this purpose.
Though it's really bad practice to update the GUI from within a worker thread, and you should really do it via a signal-slot connection (with a queued connection type), you can still update the GUI via QMetaObject::invokeMethod().
Every function in the worker thread that updates the GUI has to go through invokeMethod(). For example, in your main window class, add a function like void MainWindow::drawLine(int x1, int y1, int x2, int y2) which draws a line on your QImage. Within your thread you can then call that function like this (a sketch of the receiving class follows the call):
QMetaObject::invokeMethod(this,"drawLine", Q_ARG(int,x1), Q_ARG(int,y1), Q_ARG(int,x2), Q_ARG(int,y2));
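For the queued call to work, drawLine has to be a slot (or be marked Q_INVOKABLE) on an object living in the GUI thread; a sketch, with m_image and label as assumed members:

class MainWindow : public QMainWindow {
    Q_OBJECT
public slots:
    void drawLine(int x1, int y1, int x2, int y2) {
        QPainter p(&m_image);          // paint into a QImage, not the widget
        p.drawLine(x1, y1, x2, y2);
        label->setPixmap(QPixmap::fromImage(m_image));
    }
private:
    QImage m_image;
    QLabel *label;
};
// Called from the worker thread, Qt::AutoConnection queues it into the GUI thread.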
The simplest approach is to concurrently distribute the drawing across several images, then composite the images (also concurrently), and finally submit the result for painting on the GUI.
This can be done using QtConcurrent::map on the sequence of images. The map functor draws into an image that's specific to the current thread - e.g. via QThreadStorage. The reference to that image can also be stored, upon allocation, in a list within the functor. The functor of course has to outlive the call to QtConcurrent::map. Once map returns, the images from the list within the functor can be asynchronously combined pair-wise, until only one image remains. That image is then submitted to the display widget.
If the full-size image compositing is to be avoided, a similar approach will work, but the lines have to be grouped into spatial groups, i.e. those intersecting some rectangles that cover the area to be drawn. To fully utilize all cores, you'd want, say, 2-3x as many rectangular areas as QThread::idealThreadCount(). Then treat the painting of each of those groups onto its sub-image as a concurrent task, to be submitted to QtConcurrent::run. When all tasks are done, the images get submitted to the display widget, which paints them on its backing store, in sequence.
The painting of the images on the backing store can also be multi-threaded, see this answer for a complete example. Generally speaking, the images need a width that is a multiple of the CPU cache-line size divided by 4 (since we use 32-bit pixels). The painting of those images on the backing store is fully parallelizable. A condensed sketch of the map-based approach follows.
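This sketch uses QtConcurrent::blockingMapped instead of a QThreadStorage-based functor for brevity; the image dimensions and grouping are made up:

QImage renderGroup(const QVector<QLine> &lines) {   // runs on a pool thread
    QImage img(1024, 1024, QImage::Format_ARGB32_Premultiplied);
    img.fill(Qt::transparent);
    QPainter p(&img);
    for (const QLine &l : lines)
        p.drawLine(l);
    return img;
}

QImage renderAll(const QList<QVector<QLine>> &groups) {
    // blockingMapped runs renderGroup concurrently on the global thread pool
    QList<QImage> parts = QtConcurrent::blockingMapped(groups, renderGroup);
    QImage result(1024, 1024, QImage::Format_ARGB32_Premultiplied);
    result.fill(Qt::white);
    QPainter p(&result);
    for (const QImage &part : parts)    // composite; could also be done pair-wise
        p.drawImage(0, 0, part);
    return result;                      // hand this to the GUI thread
}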

OpenGL Rendering in a secondary thread

I'm writing a 3D model viewer application as a hobby project, and also as a test platform to try out different rendering techniques. I'm using SDL to handle window management and events, and OpenGL for the 3D rendering. The first iteration of my program was single-threaded, and ran well enough. However, I noticed that the single-threaded program caused the system to become very sluggish/laggy. My solution was to move all of the rendering code into a different thread, thereby freeing the main thread to handle events and prevent the app from becoming unresponsive.
This solution worked only intermittently; the program frequently crashed due to a changing (and, to my mind, bizarre) set of errors coming mainly from the X window system. This led me to question my initial assumption that as long as all of my OpenGL calls took place in the thread where the context was created, everything should still work out. After spending the better part of a day searching the internet for an answer, I am thoroughly stumped.
More succinctly: Is it possible to perform 3D rendering using OpenGL in a thread other than the main thread? Can I still use a cross-platform windowing library such as SDL or GLFW with this configuration? Is there a better way to do what I'm trying to do?
So far I've been developing on Linux (Ubuntu 11.04) using C++, although I am also comfortable with Java and Python if there is a solution that works better in those languages.
UPDATE: As requested, some clarifications:
When I say "The system becomes sluggish" I mean interacting with the desktop (dragging windows, interacting with the panel, etc) becomes much slower than normal. Moving my application's window takes time on the order of seconds, and other interactions are just slow enough to be annoying.
As for interference with a compositing window manager... I am using the GNOME shell that ships with Ubuntu 11.04 (staying away from Unity for now...) and I couldn't find any options to disable desktop effects as there were in previous releases. I assume this means I'm not using a compositing window manager... although I could be very wrong.
I believe the "X errors" are server errors due to the error messages I'm getting at the terminal. More details below.
The errors I get with the multi-threaded version of my app:
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":0.0"
after 73 requests (73 known processed) with 0 events remaining.
X Error of failed request: BadColor (invalid Colormap parameter)
Major opcode of failed request: 79 (X_FreeColormap)
Resource id in failed request: 0x4600001
Serial number of failed request: 72
Current serial number in output stream: 73
Game: ../../src/xcb_io.c:140: dequeue_pending_request: Assertion `req == dpy->xcb->pending_requests' failed.
Aborted
I always get one of the three errors above; which one I get varies, apparently at random, which (to my eyes) appears to confirm that my issue does in fact stem from my use of threads. Keep in mind that I'm learning as I go along, so there is a very good chance that in my ignorance I've done something rather stupid along the way.
SOLUTION: For anyone who is having a similar issue, I solved my problem by moving my call to SDL_Init(SDL_INIT_VIDEO) to the rendering thread and locking the context initialization with a mutex. This ensures that the context is created in the thread that will be using it, and it prevents the main loop from starting before the initialization tasks have finished. A simplified outline of the startup procedure (a code sketch follows below):
1) Main thread initializes struct which will be shared between the two threads, and which contains a mutex.
2) Main thread spawns render thread and sleeps for a brief period (1-5ms), giving the render thread time to lock the mutex. After this pause, the main thread blocks while trying to lock the mutex.
3) Render thread locks mutex, initializes SDL's video subsystem and creates OpenGL context.
4) Render thread unlocks mutex and enters its "render loop".
5) The main thread is no longer blocked, so it locks and unlocks the mutex before finishing its initialization step.
Be sure to read the answers and comments; there is a lot of useful information there.
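A compressed sketch of that handshake (written against the SDL2-style API with illustrative names; the original used SDL 1.2, and note that the brief sleep is the poster's workaround, not a robust synchronization scheme):

struct Shared { SDL_mutex *initLock; /* ...whatever else the threads share... */ };

int renderThread(void *arg) {
    Shared *s = (Shared *)arg;
    SDL_LockMutex(s->initLock);                    // 3) lock while initializing
    SDL_InitSubSystem(SDL_INIT_VIDEO);             //    video init in the render thread
    SDL_Window *win = SDL_CreateWindow("viewer", 0, 0, 800, 600, SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(win); //    context created where it is used
    SDL_UnlockMutex(s->initLock);                  // 4) release and enter the render loop
    // ...render loop...
    SDL_GL_DeleteContext(ctx);
    return 0;
}

int main() {
    SDL_Init(0);
    Shared s = { SDL_CreateMutex() };              // 1) shared struct with a mutex
    SDL_Thread *t = SDL_CreateThread(renderThread, "render", &s);
    SDL_Delay(5);                                  // 2) give the render thread time to lock
    SDL_LockMutex(s.initLock);                     //    blocks until init is finished
    SDL_UnlockMutex(s.initLock);                   // 5) continue main-thread initialization
    // ...event loop on the main thread...
    SDL_WaitThread(t, NULL);
    return 0;
}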
As long as the OpenGL context is touched from only one thread at a time, you should not run into any problems. You said even your single-threaded program made your system sluggish. Does that mean the whole system, or only your own application? The worst that should happen with a single-threaded OpenGL program is that processing user input for that one program gets laggy while the rest of the system is unaffected.
If you use some compositing window manager (Compiz, KDE4 kwin), please try out what happens if you disable all compositing effects.
When you say X errors, do you mean client-side errors, or errors reported in the X server log? The latter should not happen: the X server must be able to cope with any kind of malformed X command stream and at most emit a warning. If it (the X server) crashes, that is a bug and should be reported to X.org.
If your program crashes, then there's something wrong in its interaction with X; in that case please provide us with the error output in its variations.
What I did in a similar situation was to keep my OpenGL calls in the main thread but move the vertex arrays preparation to a separate thread (or threads).
Basically, if you manage to separate the cpu intensive stuff from the OpenGL calls you don't have to worry about the unfortunately dubious OpenGL multithreading.
It worked out beautifully for me.
Just in case: the X server has its own synchronization subsystem.
Try the following while drawing:
man XInitThreads - for initialization
man XLockDisplay/XUnlockDisplay - for drawing (not sure about event processing)
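Roughly, that amounts to a sketch like this:

XInitThreads();                    // must be the very first Xlib call in the program
Display *dpy = XOpenDisplay(NULL);
// ...
XLockDisplay(dpy);                 // serialize access to the X connection
// ...drawing calls that touch the connection...
XUnlockDisplay(dpy);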
I was getting one of your errors:
../../src/xcb_io.c:140: dequeue_pending_request: Assertion `req == dpy->xcb->pending_requests' failed. Aborted
and a whole host of different ones as well. It turns out that SDL_PollEvent needs a pointer to initialized memory. So this fails:
SDL_Event *event;
SDL_PollEvent(event);
while this works:
SDL_Event event;
SDL_PollEvent(&event);
In case anyone else runs across this from Google.
This is half an answer and half a question.
Rendering in SDL in a separate thread is possible. It usually works on any OS. What you need to do is make sure the GL context is made current in the render thread when it takes over; before that, you need to release the context from the main thread, e.g.:
Called from the main thread:
void Renderer::Init()
{
#ifdef _WIN32
    m_CurrentContext = wglGetCurrentContext();
    m_CurrentDC = wglGetCurrentDC();
    // release the current context
    wglMakeCurrent( nullptr, nullptr );
#endif
#ifdef __linux__
    if ( !XInitThreads() )
    {
        THROW( "XLib is not thread safe." );
    }
    SDL_SysWMinfo wm_info;
    SDL_VERSION( &wm_info.version );
    if ( SDL_GetWMInfo( &wm_info ) ) {
        Display *display = wm_info.info.x11.gfxdisplay;
        m_CurrentContext = glXGetCurrentContext();
        ASSERT( m_CurrentContext, "Error! No current GL context!" );
        glXMakeCurrent( display, None, nullptr );
        XSync( display, false );
    }
#endif
}
Called from the render thread:
void Renderer::InitGL()
{
    // This is important! Our renderer runs its own render thread.
    // All OpenGL calls from here on happen on that thread.
#ifdef _WIN32
    wglMakeCurrent( m_CurrentDC, m_CurrentContext );
#endif
#ifdef __linux__
    SDL_SysWMinfo wm_info;
    SDL_VERSION( &wm_info.version );
    if ( SDL_GetWMInfo( &wm_info ) ) {
        Display *display = wm_info.info.x11.gfxdisplay;
        Window window = wm_info.info.x11.window;
        glXMakeCurrent( display, window, m_CurrentContext );
        XSync( display, false );
    }
#endif
    // Init GLEW - we need this to use OGL extensions (e.g. for VBOs)
    GLenum err = glewInit();
    ASSERT( GLEW_OK == err, "Error: %s\n", glewGetErrorString( err ) );
}
The risk here is that SDL does not have a native MakeCurrent() function, unfortunately, so we have to poke around a little in SDL internals (this is SDL 1.2; 1.3 might have solved it by now).
And one problem remains: for some reason I run into trouble when SDL is shutting down. Maybe someone can tell me how to safely release the context when the thread terminates.
C++, SDL, OpenGL (a full sketch follows below):

On the main thread:
    SDL_CreateWindow( );
    SDL_CreateSemaphore( );
    SDL_SemWait( );

On the render thread, created via SDL_CreateThread( run, "rendererThread", (void*)this ):
    SDL_GL_CreateContext( );
    // initialize the rest of OpenGL and GLEW
    SDL_SemPost( );  // unlock the previously created semaphore
P.S.: SDL_CreateThread( ) only takes plain functions as its first parameter, not methods. If you want a method, you can simulate one by making the function a friend of your class; that way it has access to the class's internals while still being usable as the thread entry point for SDL_CreateThread( ).
P.P.S.: Inside the run( void* data ) function created for the thread, the (void*)this argument is important; to re-obtain this inside the function you need a line like ClassName* me = (ClassName*)data;
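Fleshing that outline out into a sketch (SDL2 API; run is a free function here for simplicity, and the globals stand in for the shared state):

SDL_Window *g_window;
SDL_sem *g_initDone;

int run(void *data) {
    // ClassName *me = (ClassName *)data;            // re-obtain 'this' if needed
    SDL_GLContext ctx = SDL_GL_CreateContext(g_window); // context lives on this thread
    // ...initialize the rest of OpenGL and GLEW...
    SDL_SemPost(g_initDone);                         // wake the waiting main thread
    // ...render loop...
    SDL_GL_DeleteContext(ctx);
    return 0;
}

int main(int, char **) {
    SDL_Init(SDL_INIT_VIDEO);
    g_window = SDL_CreateWindow("app", 0, 0, 800, 600, SDL_WINDOW_OPENGL);
    g_initDone = SDL_CreateSemaphore(0);
    SDL_Thread *t = SDL_CreateThread(run, "rendererThread", NULL);
    SDL_SemWait(g_initDone);                         // block until the context exists
    // ...event loop on the main thread...
    SDL_WaitThread(t, NULL);
    return 0;
}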
