OpenGL context and thread error with QGLWidget - multithreading

I'm working on the migration of a fairly big app that uses a QGLWidget from Qt 4.8 to Qt 5.5.
My error is "Cannot make QOpenGLContext current in a different thread".
What I understood about this error is that, as the message says, the OpenGL context can only be bound to one thread, and only that thread can use the context.
The old code was like:
myQGLWidget->makeCurrent();
// ... some OpenGL ...
myQGLWidget->doneCurrent();
This worked in Qt 4.8, with some thread locking to avoid concurrent calls to makeCurrent.
After reading everything I could on the subject, my last try was:
myQGLWidget->context()->moveToThread(QThread::currentThread());
myQGLWidget->context()->makeCurrent();
// ... OpenGL again ...
myQGLWidget->context()->doneCurrent();
I still get the same error...
I'm a bit confused about how this is supposed to work; can someone help me?
Regards,

The error is about a combination of two things:
QGLContext in Qt 5 is a small wrapper around QOpenGLContext;
QOpenGLContext is-a QObject (hence it has QObject semantics when it comes to threading), and for some reason it also employs a fatal check that forbids you from calling makeCurrent on it from a thread different from the one it has affinity with ("affinity" here being the QObject concept).
If you're in control of both threads, the best thing to do might be simply to create a new QGLContext that shares with your QGLWidget's context, and then move and use this new context in the other thread. You still need the locking you already have -- you can't make two contexts current on the same surface.
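A minimal sketch of that idea, going through the underlying QOpenGLContext (which QGLContext::contextHandle() exposes in Qt 5); workerThread and offscreenSurface are illustrative names, not part of the original code:

// Sketch only: create a context sharing with the widget's, then hand it to the worker.
QOpenGLContext *widgetCtx = myQGLWidget->context()->contextHandle();

QOpenGLContext *workerCtx = new QOpenGLContext;
workerCtx->setFormat(widgetCtx->format());
workerCtx->setShareContext(widgetCtx);   // must be set before create()
workerCtx->create();
workerCtx->moveToThread(workerThread);   // call this from the thread the context currently lives in

// Later, inside workerThread (offscreenSurface is a QOffscreenSurface
// created and create()'d beforehand on the GUI thread):
workerCtx->makeCurrent(offscreenSurface);
// ... GL calls ...
workerCtx->doneCurrent();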
More generally, this is recognized as a limitation of QOpenGLContext. I've already prepared a patch for Qt 5.8 to try to get rid of it; you can find it here.

Related

Vulkan threaded application gets error messages on queue submissions under mutex

I have an application with Vulkan for rendering and glfw for windowing. If I start several threads, each with a different window, I get errors on threading and queue submission even though ALL Vulkan calls are protected by a common mutex. The Vulkan layer says:
THREADING ERROR : object of type VkQueue is simultaneously used in thread 0x0 and thread 0x7fc365b99700
Here is the skeleton of the loop under which this happens in each thread:
while (!finished) {
    window.draw(...);
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
The draw function skeleton looks like:
draw(Arg arg) {
    static std::mutex mtx;
    std::lock_guard lock{mtx};
    // .... drawing calls, including:
    device.acquireNextImageKHR(...);
    // Fill command buffers
    graphicsQueue.submit(...);
    presentQueue.presentKHR(presentInfo);
}
This is C++17 which slightly simplifies the syntax but is otherwise irrelevant.
Clearly everything is under a mutex. I also intercept the call to the debug message. When I do so, I see that one thread is waiting for glfw events, one is printing the Vulkan layer message, and the other two threads are trying to acquire the mutex for the lock_guard.
I am at a loss as to what is going on or how to even figure out what is causing this.
I am running on Linux, and it does not crash. However, on Mac OS X, after a random amount of time, the code will crash in a queue submit call of MoltenVK, and when the crash happens I see a similar situation among the threads: that is to say, no other thread is inside a Vulkan call.
I'd appreciate any ideas. My next move would be to move all queue submissions to a single thread, though that is not my favorite solution.
PS: I created a complete MCVE under the Vookoo framework. It is at https://github.com/FunMiles/Vookoo/tree/lock_guard_queues and is the example 00-parallelTriangles
To try it, do the following:
git clone https://github.com/FunMiles/Vookoo.git
cd Vookoo
git checkout lock_guard_queues
mkdir build
cd build
cmake ..
make
examples/00-parallelTriangles
The way you call draw is:
window.draw(device, fw.graphicsQueue(), [&](){//some lambda});
The inside of draw is protected by the mutex, but fw.graphicsQueue() isn't.
fw.graphicsQueue(), a million abstraction layers down, just calls vkGetDeviceQueue. I found that executing vkGetDeviceQueue in parallel with vkQueueSubmit causes the validation error.
So there are a few issues here:
There is a bug in layers that causes multiple initialization of VkQueue state on vkGetDeviceQueue, which is the cause of the validation error
KhronosGroup/Vulkan-ValidationLayers#1751
Thread id 0 is not a separate issue. As there are no actual previous accesses, the thread id is not recorded. The problem is that the layers issue the error because the access count goes negative, having previously been wrongly reset to 0.
Arguably there is also a spec issue here. It is not immediately obvious from the text that VkQueue is not actually accessed in vkGetDeviceQueue, except for the silent assumption that that is the sane thing to do.
KhronosGroup/Vulkan-Docs#1254
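Until the layers fix lands, one way to sidestep the race is to fetch the queue handles once at startup and reuse them, rather than calling vkGetDeviceQueue on every draw. A minimal sketch using the C API; the struct and the family-index parameters are illustrative:

#include <vulkan/vulkan.h>

struct QueueCache {
    VkQueue graphics = VK_NULL_HANDLE;
    VkQueue present  = VK_NULL_HANDLE;
};

// Called once, before any render thread starts, so it cannot race
// with the vkQueueSubmit calls issued later under the draw mutex.
QueueCache initQueues(VkDevice device, uint32_t graphicsFamily, uint32_t presentFamily) {
    QueueCache q;
    vkGetDeviceQueue(device, graphicsFamily, 0, &q.graphics);
    vkGetDeviceQueue(device, presentFamily, 0, &q.present);
    return q;
}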

Boost asio with Qt

I am trying to use the boost::asio async client example with a simple Qt GUI.
A little snippet from my app:
The button click SLOT:
void RestWidget::restGetCall()
{
    networkService ntwkSer("www.boost.org", "80");
    connect(&ntwkSer, SIGNAL(responseReady(std::string)), this, SLOT(showResponse(std::string)));
    ntwkSer.get("/LICENSE_1_0.txt");
}
The networkService class is just a wrapper around the boost sample code linked above. It's derived from QObject for the signal/slot mechanism.
void networkService::get(const std::string & path)
{
    // boost::thread(boost::bind(&networkService::networkCall, this, path)); // this gives me SIGABRT
    networkCall(path); // this works fine, and I get output as pictured above
}

void networkService::networkCall(const std::string path)
{
    tcp::resolver::query query(host_, port_); // also, these host/port fields come out to be invalid/garbage
    // tcp::resolver::query query("www.boost.org", "80"); // still doesn't resolve the SIGABRT
    resolver_.async_resolve(query,
        boost::bind(&networkService::handle_resolve, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::iterator,
            path));
    io_service.run();
}
The problem is that when I run io_service.run() from the boost::thread, I get SIGABRT.
Also, on debugging, the host_ and port_ fields of the networkService wrapper class inside networkService::networkCall(path) come out to be invalid, even though they were saved at construction:
networkService ntwkSer("www.boost.org", "80");
The obvious reason for boost::thread is to make the GUI non-blocking, since io_service has its own event loop. My intention is to run the boost::asio async calls in a separate boost thread and notify the GUI thread with Qt's signal/slot mechanism.
I don't get the reason for the SIGABRT, and also why the host_ and port_ field values become invalid once I start using boost::thread.
PS: This same setup behaves correctly with boost::thread in a similar command-line application (no Qt GUI code), i.e. when the networkService class is not hacked up with Qt signals/slots to notify the main GUI thread. There, I use boost::asio's response from within the boost::thread.
Edit:
As per the responses to my question, I tried this: I disabled the Q_OBJECT signal/slot machinery and the QObject derivation of the networkService class, to be sure MOC isn't messing things up. But the issue remains: I get an access violation on Windows vs. SIGABRT on Linux. The corruption of the networkService object's fields is also still present, eventually leading to the access violation.
In effect, no change in behaviour.
(Debugger screenshots: before launching the thread; from inside the thread; access violation on continue.)
So, even without MOC, the issue is still there.
Edit 2:
I'm sorry for the bother... I made a huge mistake: I was using a local networkService object from within the boost::thread, and it went out of scope before the thread actually ran!
It's difficult to get the asio io_service.run() function to "play well" with the Qt event loop.
It's easier to use a Qt slot that calls io_service::poll() or io_service::poll_one(), and then connect that slot to a QTimer's timeout signal.
And it's even easier to use QNetworkAccessManager instead of asio; see the Qt Client Example.
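If you do want to keep asio, here is a minimal sketch of the timer-driven polling idea above; the 10 ms interval and the names are illustrative, and it assumes the io_service outlives the timer:

#include <QTimer>
#include <boost/asio.hpp>

// On the GUI thread, e.g. in main() after creating the widgets:
boost::asio::io_service io;
QTimer timer;
QObject::connect(&timer, &QTimer::timeout, [&io] {
    io.poll();   // run all ready handlers without blocking the GUI thread
    io.reset();  // clear the stopped flag so the next poll() still works
});
timer.start(10); // pump asio from the Qt event loop every 10 ms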
The problem is that with Qt only one thread is allowed to manipulate the GUI: the one calling QApplication::exec. This is done to remove complexity for the users of Qt, and because the QApplication / message loop is a singleton. That said, there is some magic going on in Qt with threads. Every QObject is assigned a thread, by default the one on which it was created. When a signal-slot connection is made, Qt determines how to actually dispatch the call. If the objects belong to the same thread, the signal is dispatched by directly/synchronously invoking the slot. If the objects are assigned to different threads, a message is sent from one thread to the other to invoke the slot on the thread assigned to the object where the slot lives. This is what you actually need here.
The problem with your code is that both of your QObjects are created on the same thread and are therefore assigned the same thread. So the slot which manipulates the GUI is called directly from your worker thread, and remember, this is prohibited, since your worker is not the thread calling QApplication::exec. To override the automatics and convince Qt to correctly do the thread switch when calling the slot, you must pass Qt::QueuedConnection when doing the connect.
connect(&ntwkSer, SIGNAL(responseReady(std::string)), this, SLOT(showResponse(std::string)), Qt::QueuedConnection);

MFC: how to draw opengl from a different thread?

I am trying to do some OpenGL 1.0 animation in a CWnd window at 60 fps. I created a subclass of CWnd:
class COpenGLControl : public CWnd
{
...
}
I found that if I use the built-in timer SetTimer() and set it to fire every 1000/60 ms, all the OpenGL commands render correctly. However, if I implement my own timer using a separate thread, nothing is drawn. All I get is a black screen.
Is there a way to issue opengl commands from a different thread?
Even if you are not intending to issue GL calls from multiple threads, you have to take OpenGL's rules for threading into account: an OpenGL context can only be used by at most one thread at a time. (And, per thread, there can be at most one active GL context at any time.) That does not mean that you cannot use the same context in multiple threads, or create it in one and use it in another; you just have to explicitly "hand over" the context from one thread to another.
I don't know if you use some further library for GL context handling, so I'm assuming you are using the native API of your OS - in this case wgl. The relevant function is wglMakeCurrent(). So, to hand over a context which is "current" in thread A to thread B, thread A must first call wglMakeCurrent(NULL, NULL) before thread B can get the context via wglMakeCurrent(someDC, myGLCtx). You can of course switch a GL context around as many times as you like, but that introduces a huge synchronization overhead and should be avoided.
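A minimal sketch of that hand-over; hDC and hGLRC stand in for the device context and GL context you already created, and the signalling between the threads is elided:

// Thread A (e.g. the UI thread), before the timer thread starts drawing:
wglMakeCurrent(NULL, NULL);   // detach the context from this thread
// ... signal thread B that the context is free ...

// Thread B (the timer thread), once, before issuing any GL calls:
wglMakeCurrent(hDC, hGLRC);   // attach the same context to this thread

// Thread B, every frame:
// ... GL 1.0 drawing calls ...
SwapBuffers(hDC);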
From your comments:
Would it work if I create the context in the timer thread as well?
Yes, it would.
Just a side note: creating is not the issue here at all, since creating a GL context does not automatically make it "current" on a thread - so you can just create it in thread A and afterwards make it current directly on thread B.

Segfault in multithreaded OpenGL?

I'm running into an issue where OpenGL calls in multiple threads sometimes cause a segfault, and I can't figure out what I'm doing wrong. I'm not sharing a context or anything else between threads.
invalid CoreGraphics connection
Segmentation fault: 11
The actual CGL result code is
kCGLBadConnection - Invalid connection to Core Graphics.
https://developer.apple.com/library/mac/documentation/graphicsimaging/reference/cgl_opengl/Reference/reference.html#//apple_ref/doc/uid/TP40001186-CH3g-BBCDCEBD
The end use case here is to render images asynchronously with libuv (doing some processing on the CPU then uploading data to the GPU for rendering), but I've worked up a simple test case which replicates this issue.
https://github.com/mikemorris/headless-gl-multithreaded
You need a valid OpenGL context bound to the thread when calling glReadPixels. The CGL variant of View::resize unbinds the OpenGL context at the end, so glReadPixels is called without an OpenGL context being active. I think this might be part of the reason for your problem.
It appears that the cause of the crash is multiple threads simultaneously trying to open a display connection in CGLChoosePixelFormat (or XOpenDisplay/glXChooseVisual in GLX). Opening a single connection in the main thread and then using this connection when instantiating new threads (each of which creates its own context) seems to fix this.
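For the GLX side, a minimal sketch of that fix, assuming plain Xlib/GLX; the function split and the attribute list are illustrative:

#include <GL/glx.h>
#include <X11/Xlib.h>

Display *sharedDisplay = nullptr;

// Main thread, once, before any worker starts:
void initDisplay() {
    XInitThreads();                       // must precede any other Xlib call
    sharedDisplay = XOpenDisplay(nullptr);
}

// Each worker thread: create its own context against the shared connection.
void workerInitContext() {
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(sharedDisplay,
                                      DefaultScreen(sharedDisplay), attribs);
    GLXContext ctx = glXCreateContext(sharedDisplay, vi, nullptr, True);
    // ... create a drawable, glXMakeCurrent, render, glReadPixels ...
}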

OpenGL Rendering in a secondary thread

I'm writing a 3D model viewer application as a hobby project, and also as a test platform to try out different rendering techniques. I'm using SDL to handle window management and events, and OpenGL for the 3D rendering. The first iteration of my program was single-threaded, and ran well enough. However, I noticed that the single-threaded program caused the system to become very sluggish/laggy. My solution was to move all of the rendering code into a different thread, thereby freeing the main thread to handle events and prevent the app from becoming unresponsive.
This solution worked only intermittently: the program frequently crashed due to a changing (and, to my mind, bizarre) set of errors coming mainly from the X window system. This led me to question my initial assumption that as long as all of my OpenGL calls took place in the thread where the context was created, everything should still work out. After spending the better part of a day searching the internet for an answer, I am thoroughly stumped.
More succinctly: Is it possible to perform 3D rendering using OpenGL in a thread other than the main thread? Can I still use a cross-platform windowing library such as SDL or GLFW with this configuration? Is there a better way to do what I'm trying to do?
So far I've been developing on Linux (Ubuntu 11.04) using C++, although I am also comfortable with Java and Python if there is a solution that works better in those languages.
UPDATE: As requested, some clarifications:
When I say "The system becomes sluggish" I mean interacting with the desktop (dragging windows, interacting with the panel, etc) becomes much slower than normal. Moving my application's window takes time on the order of seconds, and other interactions are just slow enough to be annoying.
As for interference with a compositing window manager... I am using the GNOME shell that ships with Ubuntu 11.04 (staying away from Unity for now...) and I couldn't find any options to disable desktop effects such as there was in previous distributions. I assume this means I'm not using a compositing window manager...although I could be very wrong.
I believe the "X errors" are server errors due to the error messages I'm getting at the terminal. More details below.
The errors I get with the multi-threaded version of my app:
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":0.0"
after 73 requests (73 known processed) with 0 events remaining.
X Error of failed request: BadColor (invalid Colormap parameter)
Major opcode of failed request: 79 (X_FreeColormap)
Resource id in failed request: 0x4600001
Serial number of failed request: 72
Current serial number in output stream: 73
Game: ../../src/xcb_io.c:140: dequeue_pending_request: Assertion `req == dpy->xcb->pending_requests' failed.
Aborted
I always get one of the three errors above; which one I get varies, apparently at random, which (to my eyes) would appear to confirm that my issue does in fact stem from my use of threads. Keep in mind that I'm learning as I go along, so there is a very good chance that in my ignorance I've done something rather stupid along the way.
SOLUTION: For anyone who is having a similar issue, I solved my problem by moving my call to SDL_Init(SDL_INIT_VIDEO) to the rendering thread, and locking the context initialization using a mutex. This ensures that the context is created in the thread that will be using it, and it prevents the main loop from starting before initialization tasks have finished. A simplified outline of the startup procedure:
1) Main thread initializes struct which will be shared between the two threads, and which contains a mutex.
2) Main thread spawns render thread and sleeps for a brief period (1-5ms), giving the render thread time to lock the mutex. After this pause, the main thread blocks while trying to lock the mutex.
3) Render thread locks mutex, initializes SDL's video subsystem and creates OpenGL context.
4) Render thread unlocks mutex and enters its "render loop".
5) The main thread is no longer blocked, so it locks and unlocks the mutex before finishing its initialization step.
Be sure to read the answers and comments; there is a lot of useful information there. A sketch of the handshake follows.
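A minimal sketch of that handshake, using std::thread/std::mutex for brevity instead of SDL's own thread API; the SDL calls in comments mark where the real initialization goes:

#include <chrono>
#include <mutex>
#include <thread>

struct Shared {
    std::mutex initMutex; // guards context initialization (step 1)
};

void renderThreadMain(Shared *s) {
    s->initMutex.lock();   // step 3: take the lock before initializing
    // SDL_Init(SDL_INIT_VIDEO), create the window and the GL context here,
    // so the context is created on the thread that will use it.
    s->initMutex.unlock(); // step 4: init done, enter the render loop
    // while (running) { ... render ... }
}

int main() {
    Shared shared;                                             // step 1
    std::thread render(renderThreadMain, &shared);             // step 2
    std::this_thread::sleep_for(std::chrono::milliseconds(5)); // let the render thread lock first
    shared.initMutex.lock();   // step 5: blocks until initialization is finished
    shared.initMutex.unlock();
    // ... event loop ...
    render.join();
}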
As long as the OpenGL context is touched from only one thread at a time, you should not run into any problems. You said even your single-threaded program made your system sluggish. Does that mean the whole system, or only your own application? The worst that should happen in a single-threaded OpenGL program is that processing user input for that one program gets laggy while the rest of the system is unaffected.
If you use some compositing window manager (Compiz, KDE4 kwin), please try out what happens if you disable all compositing effects.
When you say X errors, do you mean client-side errors, or errors reported in the X server log? The latter case should not happen, because the X server must be able to cope with any kind of malformed X command stream and at most emit a warning. If it (the X server) crashes, this is a bug and should be reported to X.org.
If your program crashes, then there's something wrong in its interaction with X; in that case please provide us with the error output in its variations.
What I did in a similar situation was to keep my OpenGL calls in the main thread but move the vertex arrays preparation to a separate thread (or threads).
Basically, if you manage to separate the cpu intensive stuff from the OpenGL calls you don't have to worry about the unfortunately dubious OpenGL multithreading.
It worked out beautifully for me.
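A minimal sketch of that split; MeshData and the dirty-flag protocol are illustrative, and all GL calls stay on the thread owning the context:

#include <GL/glew.h>
#include <mutex>
#include <vector>

struct MeshData {
    std::mutex m;
    std::vector<float> vertices; // filled by the worker
    bool dirty = false;          // set when fresh data is ready
};

// Worker thread: CPU-heavy preparation only, no GL calls here.
void prepare(MeshData &mesh, std::vector<float> newVerts) {
    std::lock_guard<std::mutex> lock(mesh.m);
    mesh.vertices = std::move(newVerts);
    mesh.dirty = true;
}

// GL thread, once per frame: upload only if the worker produced new data.
void uploadIfDirty(MeshData &mesh, GLuint vbo) {
    std::lock_guard<std::mutex> lock(mesh.m);
    if (!mesh.dirty) return;
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, mesh.vertices.size() * sizeof(float),
                 mesh.vertices.data(), GL_DYNAMIC_DRAW);
    mesh.dirty = false;
}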
Just in case - the X server has its own sync subsystem.
Try the following while drawing:
man XInitThreads - for initialization;
man XLockDisplay / XUnlockDisplay - for drawing (not sure about event processing).
I was getting one of your errors:
../../src/xcb_io.c:140: dequeue_pending_request: Assertion `req ==
dpy->xcb->pending_requests' failed. Aborted
and a whole host of different ones as well. It turns out that SDL_PollEvent needs a pointer to initialized memory. So this fails:
SDL_Event *event;
SDL_PollEvent(event);
while this works:
SDL_Event event;
SDL_PollEvent(&event);
In case anyone else runs across this from google.
This is half an answer and half a question.
Rendering in SDL in a separate thread is possible. It usually works on any OS. What you need to do is make sure the GL context is made current when the render thread takes over. At the same time, before you do so, you need to release it from the main thread, e.g.:
Called from the main thread:
void Renderer::Init()
{
#ifdef _WIN32
    m_CurrentContext = wglGetCurrentContext();
    m_CurrentDC = wglGetCurrentDC();
    // release current context
    wglMakeCurrent( nullptr, nullptr );
#endif
#ifdef __linux__
    if ( !XInitThreads() )
    {
        THROW( "XLib is not thread safe." );
    }
    SDL_SysWMinfo wm_info;
    SDL_VERSION( &wm_info.version );
    if ( SDL_GetWMInfo( &wm_info ) ) {
        Display *display = wm_info.info.x11.gfxdisplay;
        m_CurrentContext = glXGetCurrentContext();
        ASSERT( m_CurrentContext, "Error! No current GL context!" );
        glXMakeCurrent( display, None, nullptr );
        XSync( display, false );
    }
#endif
}
Called from the render thread:
void Renderer::InitGL()
{
    // This is important! Our renderer runs its own render thread.
    // All GL calls must happen on that thread from now on.
#ifdef _WIN32
    wglMakeCurrent( m_CurrentDC, m_CurrentContext );
#endif
#ifdef __linux__
    SDL_SysWMinfo wm_info;
    SDL_VERSION( &wm_info.version );
    if ( SDL_GetWMInfo( &wm_info ) ) {
        Display *display = wm_info.info.x11.gfxdisplay;
        Window window = wm_info.info.x11.window;
        glXMakeCurrent( display, window, m_CurrentContext );
        XSync( display, false );
    }
#endif
    // Init GLEW - we need this to use OGL extensions (e.g. for VBOs)
    GLenum err = glewInit();
    ASSERT( GLEW_OK == err, "Error: %s\n", glewGetErrorString( err ) );
}
The risk here is that SDL does not have a native MakeCurrent() function, unfortunately. So we have to poke around a little in SDL internals (1.2; 1.3 might have solved this by now).
And one problem remains: for some reason, I run into a problem when SDL is shutting down. Maybe someone can tell me how to safely release the context when the thread terminates.
C++, SDL, OpenGL:
On the main thread:
SDL_CreateWindow( );
SDL_CreateSemaphore( );
SDL_SemWait( );
On the render thread, via SDL_CreateThread( run, "rendererThread", (void*)this ):
SDL_GL_CreateContext( );
// initialize the rest of OpenGL and GLEW
SDL_SemPost( ); // unlock the previously created semaphore
P.S.: SDL_CreateThread( ) only takes plain functions as its first parameter, not methods. If you want a method, you can simulate one by making the function a friend of your class; that way it has method-like access while still being usable as the entry point for SDL_CreateThread( ).
P.P.S.: Inside the run( void* data ) function created for the thread, the (void*)this argument is important; to re-obtain this inside the function you need the line ClassName* me = (ClassName*)data;
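A runnable sketch of that semaphore startup with the SDL2 API; the window parameters and the globals are illustrative:

#include <SDL.h>

static SDL_sem *g_glReady = nullptr;
static SDL_Window *g_window = nullptr;

static int run(void *data) {
    // Create the GL context on the thread that will use it.
    SDL_GLContext ctx = SDL_GL_CreateContext(g_window);
    // ... initialize the rest of OpenGL and GLEW here ...
    SDL_SemPost(g_glReady);            // unblock the waiting main thread
    // ... render loop ...
    SDL_GL_DeleteContext(ctx);
    return 0;
}

int main(int, char **) {
    SDL_Init(SDL_INIT_VIDEO);
    g_window = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                SDL_WINDOWPOS_CENTERED, 640, 480,
                                SDL_WINDOW_OPENGL);
    g_glReady = SDL_CreateSemaphore(0);
    SDL_Thread *t = SDL_CreateThread(run, "rendererThread", nullptr);
    SDL_SemWait(g_glReady);            // block until the GL context exists
    // ... main-thread event loop ...
    SDL_WaitThread(t, nullptr);
    SDL_DestroySemaphore(g_glReady);
    SDL_Quit();
    return 0;
}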
