I am coding a graphics application in Qt 4.8.2 (VS 2008), using QGLWidget and therefore OpenGL. Here is a short description of the application: it's a physics simulation consisting of two threads. The main application thread handles scene drawing (a QGLWidget subclass) and events. The computing thread is in a loop, computing successive steps of the simulation.
Now... I would like to interact with the simulation using the cursor (dragging objects or similar). Since I decided not to do intelligent ray shooting into the scene along with some spatial decomposition (maybe I'll have to do that after all), I would like the computing thread to execute something like this:
glGetDoublev(GL_MODELVIEW_MATRIX, modelviewMatrix);
glGetDoublev(GL_PROJECTION_MATRIX, projectionMatrix);
glGetIntegerv(GL_VIEWPORT, viewport);
gluProject(px, py, pz, modelviewMatrix, projectionMatrix, viewport, &x, &y, &z);
in order to get the screen [x;y] coordinates of the [px;py;pz] point and use them in computing the next step of the simulation. Well, it turns out the main thread always gets correct modelviewMatrix and projectionMatrix arrays, but when this is executed by the compute thread, it gets garbage matrix data. I tried many things: running makeCurrent() before querying OpenGL, locking all OpenGL actions so the matrices should stay untouched, but with no success. I wonder, is this even possible? Does OpenGL preserve the matrices after the scene has been drawn? Is there any way to make this piece of code thread safe?
Do the GL queries in the main thread and pass the results to your compute thread.
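For example, here is a minimal sketch of that idea (the member names, the mutex and projectToScreen() are illustrative assumptions, not from the original code): cache the matrices at the end of paintGL(), while the context is current, and let the compute thread call gluProject() on the cached copies - gluProject() itself is pure math and needs no GL context.

// In the QGLWidget subclass: cache the matrices while the context is current.
// m_matrixMutex, m_modelview, m_projection and m_viewport are assumed members.
void MyGLWidget::paintGL()
{
    // ... draw the scene ...

    QMutexLocker lock(&m_matrixMutex);
    glGetDoublev(GL_MODELVIEW_MATRIX,  m_modelview);    // GLdouble[16]
    glGetDoublev(GL_PROJECTION_MATRIX, m_projection);   // GLdouble[16]
    glGetIntegerv(GL_VIEWPORT,         m_viewport);     // GLint[4]
}

// Called from the compute thread: no GL calls here, only the cached copies.
void MyGLWidget::projectToScreen(double px, double py, double pz,
                                 double &x, double &y, double &z)
{
    QMutexLocker lock(&m_matrixMutex);
    gluProject(px, py, pz, m_modelview, m_projection, m_viewport, &x, &y, &z);
}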
In my current setup, I have two displays that are being driven by two GPUs. Using GLUT, I create two windows (one per display) and render each one from the main thread by calling glutSetWindow() in the draw call, for each window.
The draw call renders a Texture2D onto a sphere (using gluSphere()), but the Texture2D is swapped for another image every few seconds. I have set up an array of 2 Texture2Ds so I can load the next image while the current Texture2D is shown. This works well as long as everything runs in the main thread.
The problem is that the call to glTexImage2D(), to load the next image, hangs my draw call, so I need to call glTexImage2D() on a different thread. Calling glTexImage2D() on a different thread crashes, as it seems the OpenGL context is not shared. GLUT does not seem to provide a way to share the context, but I should be able to get the context on Linux via glXGetCurrentContext().
My question is: if I get the context via this call, how can I make it a shared context? And would this even work with GLUT? Another option would be to switch to a different library to replace GLUT, like GLFW, but in that case I would lose some handy functions such as gluSphere(). Any recommendation if the context cannot be shared with GLUT, please?
With GLX, context sharing is established at context creation; unlike WGL, you can't establish that sharing as an afterthought. Since GLUT doesn't have a context sharing feature (FreeGLUT may have one, but I'm not sure about that), this is not going to be straightforward.
I have two displays that are being driven by two GPUs.
Unless those GPUs are SLI-ed or CrossFire-ed, you can't establish context sharing between them.
The problem is that the call to glTexImage2D(), to load the next image, hangs my draw call, so I need to call glTexImage2D() on a different thread.
If the images are of the same size, use glTexSubImage2D to replace the texture's contents. Also, image data can be loaded asynchronously using pixel buffer objects, using a secondary thread that doesn't even need an OpenGL context!
Outlining the steps (a code sketch follows the outline):

In the OpenGL context thread:
  - initiating transfer:
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboID)
      void *p = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
  - signal transfer thread
  - continue with normal drawing operations

In the transfer thread:
  - on signal to start transfer:
      copy data to the mapped buffer
  - signal OpenGL context thread

In the OpenGL context thread:
  - on signal to complete transfer:
      glUnmapBuffer
      glTex[Sub]Image
      sync = glFenceSync
  - keep on drawing with the old texture
  - on further iterations of the drawing loop:
      poll sync with glClientWaitSync using a timeout of 0
      if the wait sync returns signalled, switch to the new texture and delete the old one
      else keep on drawing with the old texture
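A sketch of those steps in code (pboID, newTexID, the image dimensions, the pixel format and the signalling helpers are placeholders; glFenceSync assumes a GL 3.2+ / ARB_sync context):

// --- OpenGL context thread: initiate the transfer ---
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboID);
glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, NULL, GL_STREAM_DRAW); // orphan old storage
void *p = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
signalTransferThread(p);            // hand the mapped pointer to the worker
// ...continue with normal drawing operations...

// --- transfer thread: no GL context needed ---
// memcpy(p, imageData, width * height * 4);
// signalGLThread();

// --- OpenGL context thread: complete the transfer ---
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboID);
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
glBindTexture(GL_TEXTURE_2D, newTexID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, 0);   // source: the bound PBO, offset 0
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
GLsync sync = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
// ...keep on drawing with the old texture...

// --- on further iterations of the drawing loop ---
GLenum status = glClientWaitSync(sync, 0, 0);   // timeout of 0: just poll
if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED)
{
    glDeleteSync(sync);
    // switch to newTexID and delete the old texture
}
// else: keep on drawing with the old texture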
I am trying to do some OpenGL 1.0 animation in a CWnd window at 60 fps. I create a subclass of CWnd:
class COpenGLControl : public CWnd
{
...
}
I found that if I use the built-in timer SetTimer() and set it to fire every 1000/60 ms, all the OpenGL commands render correctly. However, if I implement my own timer on a separate thread, nothing is drawn. All I get is a black screen.
Is there a way to issue opengl commands from a different thread?
Even if you are not intending to issue GL calls from multiple threads, you have to take OpenGL's rules for threading into account: an OpenGL context can only be used by at most one thread at a time. (And, per thread, there can be at most one active GL context at any time.) That does not mean that you cannot use the same context in multiple threads, or create it in one and use it in another; you just have to explicitly "hand over" the context from one thread to another.
I don't know if you use some further library for GL context handling, so I'm assuming you are using the native API of your OS - in this case wgl. The relevant function is wglMakeCurrent(). So, to hand over a context which is "current" in thread A to thread B, thread A must first call wglMakeCurrent(NULL,NULL) before thread B can get the context via wglMakeCurrent(someDC, myGLCtx). You can of course switch a GL context around as many times as you like, but that will introduce a huge synchronization overhead and should be avoided.
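For illustration, a minimal hand-over sketch (hDC and hGLRC are assumed to exist already; error handling and the signalling between the two threads are omitted):

// Thread A (e.g. the GUI thread) releases the context:
wglMakeCurrent(NULL, NULL);
// ...signal thread B that the context is free...

// Thread B (the timer/render thread) takes it over:
if (!wglMakeCurrent(hDC, hGLRC))
{
    // handle failure, e.g. log GetLastError()
}
// From now on, issue all GL calls and SwapBuffers(hDC) from thread B only.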
From your comments:
Would it work if I also create the context in the timer thread?
Yes, it would.
Just a side note: Creating is not the issue here at all, since creating a GL context does not automatically make it "current" to a thread - so you can just create it in thread A and afterwards make it current directly to thread B.
I've got a specific target: to draw a road net. I have a number of points (x, y) and I'd like to connect them (using the drawLine function). Because of their number (about 2-3 million), I need to do this in another thread, so the problem is how I should do it. I have a special area for drawing, a QLabel. I've tried doing it through a QPixmap in the main thread and everything is fine, but when I try to do it through a signal/slot in another thread, no image appears :(
Actually, when I transform my coordinates into GUI coordinates they become fractional, so I don't know how to paint them, because the drawLine function takes integer arguments: (int x1, int y1, int x2, int y2).
This is how I create the other thread (I only need to run one function, so I think this is the best way):
QtConcurrent::run(this,&MainWindow::parseXML)
Hope you will help me, because I will become mad %)
P.S. I've read that QPixmap is not supported for multi-threaded drawing, so now I have no idea how to do this.
QPainter can be used in a thread to paint onto QImage, QPrinter, and QPicture paint devices. Painting onto QPixmaps and QWidgets is not supported. On Mac OS X the automatic progress dialog will not be displayed if you are printing from outside the GUI thread.
If you need to do your painting in a thread other than the Qt GUI thread, do this:
In your non-GUI thread, create a QImage object
Use a QPainter to paint into the QImage object
Use QApplication::postEvent or a queued signal/slot connection to pass the QImage object over to the main thread in a thread-safe manner
The main thread can now convert the QImage object into a QPixmap (this will be relatively quick to do) and then display it as usual.
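A minimal sketch of those steps (the class name, slot names and the GUI-side slot are made-up placeholders): a worker object living in the non-GUI thread paints into a QImage and hands it to the GUI thread through a queued connection.

class RoadPainter : public QObject
{
    Q_OBJECT
public slots:
    void paint(const QVector<QLineF> &lines, const QSize &size)
    {
        QImage image(size, QImage::Format_ARGB32_Premultiplied);
        image.fill(Qt::white);
        QPainter p(&image);
        foreach (const QLineF &line, lines)   // QPainter has float overloads,
            p.drawLine(line);                 // so fractional coordinates are fine
        emit finished(image);                 // queued automatically across threads
    }
signals:
    void finished(const QImage &image);
};

// In the GUI thread:
// connect(worker, SIGNAL(finished(QImage)), this, SLOT(showImage(QImage)));
// where showImage() does label->setPixmap(QPixmap::fromImage(image));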
You are apparently looking for a QGraphicsView (or preferably QQuickView if you care about performance and are working with Qt5). That's the solution which Qt offers for exactly this purpose.
To your question -- there is no way in Qt to paint a widget from a separate thread; no widget class can be touched from a thread other than the GUI thread. The proposed invokeMethod call is actually an asynchronous callback which gets queued for execution in the main thread. You could generate a QImage, pass it to the GUI thread and let the GUI use it, but I'd seriously suggest working with the scene graph (the QGraphicsView) because it was designed and optimized for precisely this purpose.
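For instance, a rough sketch of the QGraphicsView route ('points' is an assumed QVector<QPointF> holding the road-net dots); batching the segments into a single QPainterPath keeps the item count low even for millions of points:

QPainterPath path;
path.moveTo(points.first());
for (int i = 1; i < points.size(); ++i)
    path.lineTo(points.at(i));

QGraphicsScene scene;
scene.addPath(path, QPen(Qt::darkGray, 0));   // width 0 = cosmetic pen

QGraphicsView view(&scene);
view.show();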
Though it's really bad practice to update the GUI from within a worker thread, and you should really do it via a signal/slot connection (with a queued connection type), you can still update the GUI via QMetaObject::invokeMethod().
Every function in the worker thread that updates the GUI has to go through invokeMethod(). For example, in your main class add a slot like void MainWindow::drawLine(int x1, int y1, int x2, int y2) that draws a line on your QImage. From within your thread you can then call that function like this:
QMetaObject::invokeMethod(this, "drawLine", Qt::QueuedConnection,
                          Q_ARG(int, x1), Q_ARG(int, y1), Q_ARG(int, x2), Q_ARG(int, y2));
The simplest approach is to concurrently distribute the drawing across several images, then composite the images (also concurrently), and finally submit them for painting on the GUI.
This can be done using QtConcurrent::map on the sequence of images. The map functor draws into an image that's specific to the current thread - e.g. via QThreadStorage. The reference to that image can also be stored, upon allocation, in a list within the functor. The functor of course has to outlive the call to QtConcurrent::map. Once map returns, the images from the list within the functor can be asynchronously combined pair-wise, until only one image remains. That image is then submitted to the display widget.
If the full-size image compositing is to be avoided, then a similar approach will work, but the lines have to be grouped spatially, i.e. into those intersecting some rectangles that cover the area to be drawn. To fully utilize all cores, you'd want, say, 2-3x as many rectangular areas as QThread::idealThreadCount(). Then treat the painting of each of those groups onto its sub-image as a concurrent task, to be submitted to QtConcurrent::run. When all tasks are done, the images are submitted to the display widget, which paints them on its backing store, in sequence.
The painting of the images on the backing store can also be multi-threaded, see this answer for a complete example. Generally speaking, the images need to have a width that is a multiple of the CPU cache-line size divided by 4 (since we use 32-bit pixels). The painting of those images on the backing store is fully parallelizable.
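As a rough sketch of the grouping variant (LineGroup, splitIntoGroups() and the hand-off to the widget are illustrative assumptions, not part of the original answer):

#include <QtConcurrentRun>   // Qt 4; in Qt 5 use <QtConcurrent>
#include <QImage>
#include <QPainter>
#include <QFuture>
#include <QVector>

struct LineGroup
{
    QRect area;              // sub-rectangle of the scene covered by this group
    QVector<QLineF> lines;   // lines intersecting that rectangle
};

static QImage paintGroup(const LineGroup &group)
{
    QImage img(group.area.size(), QImage::Format_ARGB32_Premultiplied);
    img.fill(Qt::transparent);
    QPainter p(&img);
    p.translate(-group.area.topLeft());
    foreach (const QLineF &line, group.lines)
        p.drawLine(line);
    return img;
}

// Off the GUI paint path:
// QList<LineGroup> groups = splitIntoGroups(allLines);   // assumed helper
// QList<QFuture<QImage> > futures;
// foreach (const LineGroup &g, groups)
//     futures << QtConcurrent::run(paintGroup, g);       // one task per group
// ...wait for the futures, then hand the finished images to the display widget.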
So I have a single-threaded game engine class, which has separate functions for input, update and rendering, and I've just started learning to use the wonderful Boost library (the asio and thread components). I was thinking of separating my update and render functions into separate threads (and perhaps separating the input and update functions from each other as well). Of course these functions will sometimes access the same locations in memory, so I decided to use Boost.Asio's strand functionality to prevent them from executing at the same time.
Right now my main game loop looks like this:
void SDLEngine::Start()
{
    int update_time = 0;
    quit = false;
    while (!quit)
    {
        update_time = SDL_GetTicks();
        DoInput();   // get user input and alter data based on it
        DoUpdate();  // update game data once per loop
        if (!minimized)
            DoRender();  // render graphics to screen
        update_time = SDL_GetTicks() - update_time;
        SDL_Delay(max(0, target_time - update_time));  // insert delay to run at desired FPS
    }
}
If I used separate threads it would look something like this:
void SDLEngine::Start()
{
    boost::asio::io_service io;
    boost::asio::io_service::strand strand_(io);
    boost::asio::deadline_timer input(io, boost::posix_time::milliseconds(16));
    boost::asio::deadline_timer update(io, boost::posix_time::milliseconds(16));
    boost::asio::deadline_timer render(io, boost::posix_time::milliseconds(16));
    //
    input.async_wait(strand_.wrap(boost::bind(&SDLEngine::DoInput, this)));
    update.async_wait(strand_.wrap(boost::bind(&SDLEngine::DoUpdate, this)));
    render.async_wait(strand_.wrap(boost::bind(&SDLEngine::DoRender, this)));
    //
    io.run();
}
So as you can see, before the loop went: Input->Update->Render->Delay->Repeat
Each one was run one after the other. If I used multithreading I would have to use strands so that updates and rendering wouldn't run at the same time. So, is it still worth it to use multithreading here? They would still basically be running one at a time, just on separate cores. I basically have no experience with multithreaded applications, so any help is appreciated.
Oh, and another thing: I'm using OpenGL for rendering. Would multithreading like this affect the way OpenGL renders in any way?
You are using the same strand for all handlers, so there is no multithreading at all. Also, your deadline_timer is local to Start() and you do not pass it anywhere. In this case you will not be able to restart it from the handler (note it's not an "interval" timer, it's just a "one-call" timer).
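For reference, a minimal sketch of how a repeating timer usually looks (assuming the timer is a class member rather than a local in Start(); the names here are illustrative):

// update_timer_ is assumed to be a boost::asio::deadline_timer member.
void SDLEngine::ScheduleUpdate()
{
    update_timer_.expires_from_now(boost::posix_time::milliseconds(16));
    update_timer_.async_wait(boost::bind(&SDLEngine::OnUpdate, this,
                                         boost::asio::placeholders::error));
}

void SDLEngine::OnUpdate(const boost::system::error_code &ec)
{
    if (ec) return;      // timer was cancelled
    DoUpdate();          // the actual per-tick work
    ScheduleUpdate();    // re-arm for the next tick
}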
I see no point in this "revamp" since you are not getting any benefit from asio and/or threads at all in this example.
These methods (input, update, render) are too big and they do many things; you cannot call them without blocking. It's hard to say precisely because I don't know what the game is and how it works, but I'd take the following steps:
Try to revamp network I/O so it becomes fully async
Try to use all CPU cores
About what you have tried: I think it's possible if you search your code for actions that really can run in parallel right now. For example, if you calculate something for each NPC that does not depend on the other characters, you can io_service.post() each of them to make use of all the threads that are running io_service.run() at the moment. So your program logic stays single-threaded, but you can use, say, 7 other threads for some "big" operations.
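A hedged sketch of that idea (npcs, ComputeNpc and the pool size of 7 are placeholders, not from the original code):

boost::asio::io_service io;
boost::asio::io_service::work work(io);   // keeps io.run() alive while work is posted
boost::thread_group pool;
for (int i = 0; i < 7; ++i)
    pool.create_thread(boost::bind(&boost::asio::io_service::run, &io));

// Post independent per-NPC work; each task runs on whichever pool thread is free.
for (std::size_t i = 0; i < npcs.size(); ++i)
    io.post(boost::bind(&ComputeNpc, boost::ref(npcs[i])));

// When shutting down: io.stop(); pool.join_all();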
I'm 100% new to threading. As a start I've decided I want to muck around with using it to update my physics in a separate thread. I'm using a third-party physics engine called Farseer; here's what I'm doing:
// class level declarations
System.Threading.Thread thread;
Stopwatch threadUpdate = new Stopwatch();

// In the constructor:
PhysicsEngine()
{
    (...)
    thread = new System.Threading.Thread(
        new System.Threading.ThreadStart(PhysicsThread));
    threadUpdate.Start();
    thread.Start();
}

public void PhysicsThread()
{
    int milliseconds = TimeSpan.FromTicks(111111).Milliseconds;
    while (true)
    {
        if (threadUpdate.Elapsed.Milliseconds > milliseconds)
        {
            world.Step(threadUpdate.Elapsed.Milliseconds / 1000.0f);
            threadUpdate.Stop();
            threadUpdate.Reset();
            threadUpdate.Start();
        }
    }
}
Is this an OK way to update physics, or is there some stuff I should look out for?
In a game you need to synchronise your physics update to the game's frame rate. This is because your rendering and gameplay will depend on the output of your physics engine each frame. And your physics engine will depend on user input and gameplay events each frame.
This means that the only advantage of calculating your physics on a separate thread is that it can run on a separate CPU core to the rest of your game logic and rendering. (Pretty safe for PC these days, and the mobile space is just starting to get dual-core.)
This allows both physics and gameplay/rendering to run concurrently - but the drawback is that you need to have some mechanism to prevent one thread from modifying data while the other thread is using that data. This is generally quite difficult to implement.
Of course, if your physics isn't dependent on user input - like Angry Birds or The Incredible Machine (i.e. the user presses "play" and the simulation runs) - then it's possible for you to calculate your physics simulation in advance, recording its output for playback. Instead of blocking the main thread, you could move this time-consuming operation to a background thread - which is a well-understood problem. You could even go so far as to start playing your recording back in the main thread before it has finished recording!
There is nothing wrong with your approach in general. Moving time-consuming operations, such as physics engine calculations, to a separate thread is often a good idea. However, I am assuming that your application includes some sort of visual representation of your physics objects in the UI? If this is the case, you are going to run into problems.
The UI controls in Silverlight have thread affinity, i.e. you cannot update their state from within the thread you have created in the above example. In order to update their state you are going to have to invoke via the Dispatcher, e.g. TextBox.Dispatcher.Invoke(...).
Another alternative is to use a Silverlight BackgroundWorker. This is a useful little class that allows you to do time-consuming work. It will move your work onto a background thread, avoiding the need to create your own System.Threading.Thread. It will also provide events that marshal results back onto the UI thread for you.
Much simpler!