Short Question
If my application is already rendering at 60 fps, is there any additional benefit (from the perspective of human perception) to rendering additional frames on state changes?
Long Question
I'm writing an interactive application in OpenGL, something like:
while (true) {
    render a frame;
    sleep the right amount of time to hit 60 fps;
}
Now, in the MVC model, I would fire off a repaint on every state change.
My question:
Given that I'm already rendering at 60 fps from the model, is there any benefit to firing off a new repaint at every state change? Can humans actually perceive such a slight difference, on the order of 33 milliseconds?
Yes, they can, especially elite FPS gamers. You could set up a fixed-time-step system (or something similar) that renders as fast as possible and uses interpolation during the extra frames to smooth the animation. This can definitely make a difference, but it is a time/resource trade-off; maybe you have no need for super smooth rendering.
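For illustration, here is a minimal sketch of a fixed-time-step loop with interpolation in plain Python (update_state and render are stand-ins for your own simulation and drawing code, and the 60 Hz tick rate is just an assumption):

import time

TICK = 1.0 / 60.0              # fixed simulation step (60 updates per second)

def update_state(state, dt):
    # placeholder: advance the simulation by one fixed step
    state["prev_x"] = state["x"]
    state["x"] += 100.0 * dt   # e.g. move 100 units per second
    return state

def render(state, alpha):
    # blend between the previous and current simulation states,
    # so frames rendered between ticks still look smooth
    x = state["prev_x"] + (state["x"] - state["prev_x"]) * alpha
    print(f"draw at x = {x:.2f}")

state = {"x": 0.0, "prev_x": 0.0}
previous = time.perf_counter()
accumulator = 0.0

while True:
    now = time.perf_counter()
    accumulator += now - previous
    previous = now

    # run zero or more fixed updates to catch up with real time
    while accumulator >= TICK:
        state = update_state(state, TICK)
        accumulator -= TICK

    # render as often as you like, interpolating within the current tick
    render(state, accumulator / TICK)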
Related
I am developing on a Linux system using the latest (at the moment) SDL2 (2.0.8) + OpenGL ES 2.0 (GLSL 1.0), eventually targeting a Raspberry Pi 3 board. So far I have done a few things like drawing text with FreeType, drawing lines, text boxes (editable), text lists, waveform boxes (all I need to pass to a function is an array of vertices) and other shapes with glDrawArrays().

Now, some things need to be refreshed, say, 10 times per second and others only once per second. What would be the best approach to avoid re-rendering everything 10 times per second? Obviously OpenGL works by drawing everything from scratch on every 'frame', but I know other approaches exist, such as rendering on top of the screen you already have, or taking a screenshot and rendering only the fast-changing things on top of it.

What do you think would be the best approach to avoid redoing everything before calling SDL_GL_SwapWindow()? How can I take a screenshot, render it into the invisible buffer, then render only the fast-changing objects on top of it and then call SDL_GL_SwapWindow()?
This is a screenshot of the app so far, drawing basic things.
Thanks in advance.
I eventually realized that I probably should not have posted the question in the first place, but since this is a place where people learn from others, here is what I did.

The thing I had to do was simply stop clearing the invisible buffer (I will call it that for simplicity) and render on top of it only the controls that change. Controls that change are updated by covering the area they occupy with a rectangle and then drawing the new content on that area. I have already done this and the frame rate just 'exploded'. I do not really think there is a better approach, since this way requires almost no extra work: all I had to do was add a few if conditions that selectively render or skip each control when the code iterates over the controls to be drawn. However, a well-thought-out set of structures is required for every control, instead of declaring and defining endless global variables, which only makes things confusing and difficult to maintain.
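For what it's worth, here is a rough sketch of that per-control "redraw only what changed" idea in Python-style pseudocode (Control, draw_background_rect and swap_window are made-up stand-ins for your own structures, your GL helpers and SDL_GL_SwapWindow):

class Control:
    def __init__(self, rect, draw):
        self.rect = rect          # screen area the control occupies
        self.draw = draw          # function that renders the control
        self.dirty = True         # must be drawn at least once

def draw_background_rect(rect):
    pass                          # placeholder: cover rect with the background colour

def swap_window():
    pass                          # placeholder: SDL_GL_SwapWindow() equivalent

controls = []                     # filled in by your UI setup code

def render_frame():
    # note: the buffer is NOT cleared here
    for c in controls:
        if not c.dirty:
            continue              # unchanged control: skip it entirely
        draw_background_rect(c.rect)   # erase its old contents
        c.draw()                       # draw the new contents on top
        c.dirty = False
    swap_window()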
Regards to all.
I am working on a 3D simulation program using OpenGL which uses a render loop with a fixed framerate to keep the screen updated as the world changes. Standard procedure really, and for a typical video game this is certainly the best approach (I originally took this code from an OpenGL game tutorial).

But for me, the 3D scene will not be changing as rapidly and unpredictably as in a computer game. The 3D scene itself may change from time to time, but in general it won't change between render calls (it's more of a visualisation tool for geometric problems). The user will be able to control the position/orientation of the camera, but there will be times when the camera won't move for several seconds or minutes (potentially hundreds of render calls), and since the 3D scene is likely to be static the majority of the time, I wonder if I really need a continuous render loop...?
My thinking is that I will remove the automatic render loop and instead explicitly call my update method when either:
The 3D scene changes (very rare)
The camera moves (somewhat rare)
As I will be using this largely for research purposes, the scene/camera is likely to stay in one state for several minutes at a time and it seems silly to be continuously updating the frame buffer when it's not changing.
My question then is, is this a good approach? All the online tutorials for 3D graphics rendering seem to deal with game design but that's not really my requirement. In other words, what are the pros and cons of using a render loop vs. manually calling "update()" whenever something changes?
Thanks
There's no problem with this approach; in fact, many 3D apps, like 3ds Max, use explicit rendering. You just pick what is better for your needs: in most games the scene changes every frame, so it's better to have an update loop, but if you were doing something like a chess game without an animated UI, you could render explicitly, only when the scene changes.
For apps with rare changes, like 3ds Max or Blender, it is better to render only on change. This way you save CPU/GPU time as well as power, and your PC doesn't heat up as much.
Explicit rendering also allows some performance tricks, like drawing a simplified scene while the camera moves. Then, when the camera stops, you render the full-quality scene once in the background and replace the low-quality image with the new one.
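As a concrete (if simplified) illustration of rendering only on change, here is a small pygame sketch; pygame is just a convenient stand-in for whatever windowing layer you use, and draw_scene is a placeholder for your real OpenGL drawing:

import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))

def draw_scene(surface, angle):
    # placeholder for the real scene rendering
    surface.fill((30, 30, 30))
    end = (400 + (angle * 10) % 200, 300)
    pygame.draw.line(surface, (200, 200, 50), (400, 300), end, 3)

angle = 0
dirty = True                       # draw the first frame

running = True
while running:
    if dirty:
        draw_scene(screen, angle)
        pygame.display.flip()
        dirty = False

    event = pygame.event.wait()    # blocks; no CPU or GPU used while idle
    if event.type == pygame.QUIT:
        running = False
    elif event.type == pygame.KEYDOWN:   # e.g. the camera moved
        angle += 1
        dirty = True

pygame.quit()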
I have used pbrt to render my scene. I have specified the viewing angle in the scene file, and on rendering it with pbrt I see the image from that specific viewing angle. I want to know if there is a way to rotate the scene rendered by pbrt using my mouse, in real time.
No.
To see if it is even possible, render a scene and time how long it takes. To make it real-time you will need pbrt to render at least a few frames a second, preferably 60!
I don't think this is going to happen in 2016.
Alternatively, you will need something like an OpenGL representation to handle the real-time interaction, with the pbrt-rendered scene only displayed over the top once the rendering has finished. The frustums need to match for this to work; otherwise what the user interacts with will not be the same as what they see rendered.
If you're editing the scene file, it sounds like you're not in coding land, so the only possibility is to write some program that displays the scene (in GL), updates the scene file's camera to match the current view, and renders with pbrt. It's all going to take a long time (pbrt needs to parse the file each time and re-buffer all the geometry), since supplying a file means pbrt won't keep anything from the previous state and so has to construct acceleration structures etc. as well as render the scene. Each frame!
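To get a feel for how slow that round trip is, here is a rough Python sketch (the scene.pbrt file name is hypothetical, and it assumes a pbrt binary on your PATH) that rewrites the eye position on the LookAt line and re-runs pbrt for each new viewpoint:

import re
import subprocess
import time

def render_from(eye, scene_path="scene.pbrt"):
    # rewrite the eye position of the first LookAt statement; the look-at
    # point and up vector are left as they are
    with open(scene_path) as f:
        text = f.read()
    new_eye = f"LookAt {eye[0]} {eye[1]} {eye[2]}"
    text = re.sub(r"LookAt\s+\S+\s+\S+\s+\S+", new_eye, text, count=1)
    with open(scene_path, "w") as f:
        f.write(text)

    # pbrt re-parses the file and rebuilds its acceleration structures
    # from scratch on every call, so this is nowhere near interactive
    start = time.time()
    subprocess.run(["pbrt", scene_path], check=True)
    print(f"one 'frame' took {time.time() - start:.1f} s")

render_from((3.0, 2.0, 5.0))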
Even in code, pbrt is not going to give you great performance. It's not designed for that; it's meant to be a physically accurate path tracer (as the name suggests). To get anything remotely near real time, you'll need some serious acceleration structures and a better command of the light transport model you are using. If you really are interested, you'll probably need to write your own renderer. Look into Metropolis Light Transport (MLT) and Vertex Connection and Merging (VCM), which are much more refined/efficient Monte Carlo methods.
Plus some pretty decent hardware with lots of cores, or a decent graphics card if you wish to employ GPU parallelism through CUDA or equivalent.
[EDIT] Also note that the pbrt renderer is based on the book "Physically Based Rendering: From Theory to Implementation" (ISBN-13: 978-0123750792), which outlines how to implement your own version of pbrt.
I'm working on an app which needs to draw with OpenGL at a refresh rate at least equal to the monitor's, and I need to perform the drawing in a separate thread so that it is never blocked by intense UI actions.
Currently I'm using an NSOpenGLView in combination with CVDisplayLink and I'm able to achieve 60-80 FPS without any problem.
Since I also need to display some Cocoa controls on top of this view, I tried to subclass NSOpenGLView and make it layer-backed, following Apple's LayerBackedOpenGLView example.
The result isn't satisfactory and I get a lot of artifacts.
Therefore I've solved the problem by using a separate NSWindow to host the Cocoa controls and adding this window as a child window of the main window containing the NSOpenGLView.
It works fine and I'm able to get roughly the same FPS as with the initial implementation.
Since I consider this solution something of a dirty hack, I'm looking for a cleaner alternative way of achieving what I need.
A few days ago I came across NSOpenGLLayer and thought it could be a viable solution to my problem.
So finally, after all this preamble, here comes my question:
is it possible to draw to an NSOpenGLLayer from a separate thread using the CVDisplayLink callback?
So far I've tried to implement this, but I'm not able to draw from the CVDisplayLink callback. I can only call -setNeedsDisplay on the NSOpenGLLayer from the CVDisplayLink callback and then perform the drawing in -drawInOpenGLContext:pixelFormat:forLayerTime:displayTime: when it gets called automatically by Cocoa. But I suppose that this way I'm drawing from the main thread, aren't I?
After googling for this I've even found this post in which the user claims that under Lion drawing can occur only inside -drawInOpenGLContext:pixelFormat:forLayerTime:displayTime:.
I'm on Snow Leopard at the moment but the app should run flawlessly even on Lion.
Am I missing something?
Yes, it is possible, though not recommended. Call display on the layer from within your CVDisplayLink callback. This will cause canDrawInContext:... to be called, and if it returns YES, drawInContext:... will be called, all on whatever thread called display. To make the rendered image visible on screen, you have to call [CATransaction flush]. This method has been suggested on the Apple mailing list, though it is not completely problem-free (the display method of other views may get called on your background thread as well, and not all views support rendering from a background thread).
The recommended way is to make the layer asynchronous and render the OpenGL content on the main thread. If you cannot achieve a good framerate that way because your main thread is busy elsewhere, it is recommended to move everything else (pretty much your whole application logic) to other threads (e.g. using Grand Central Dispatch) and keep only user input and drawing code on the main thread. If your window is very big, you may still not get anything better than 30 FPS (one frame every two screen refreshes); that comes from the fact that CALayer composition is a rather expensive process and has been optimized for more or less static layers (e.g. layers containing a picture), not for layers updating themselves at 60 FPS.
For example, if you are writing a 3D game, you are advised not to mix CALayers with OpenGL content at all. If you need Cocoa UI elements, either keep them separated from your OpenGL content (e.g. split the window horizontally into a part that displays only OpenGL and a part that only displays controls) or draw all controls yourself (which is pretty common for games).
Last but not least, the two-window approach is not as exotic as you may think; that's how VLC (the video player) draws its controls over the video image (which is also rendered with OpenGL on the Mac).
I've been playing with pygame (on Debian/Lenny).
It seems to work nicely, except for annoying tearing of blits (fullscreen or windowed mode).
I'm using the default SDL X11 driver. Googling suggests that it's a known issue with SDL that X11 provides no vsync facility (even with a display created with FULLSCREEN|DOUBLEBUF|HWSURFACE flags), and I should use the "dga" driver instead.
However, running
SDL_VIDEODRIVER=dga ./mygame.py
throws during pygame initialisation with
pygame.error: No available video device
(despite xdpyinfo showing an XFree86-DGA extension present).
So: what's the trick to getting tear-free vsynced flips? Either by getting this dga thing working, or by some other mechanism?
The best way to keep tearing to a minimum is to keep your frame rate as close to the screen's frequency as possible. The SDL library doesn't have a vsync unless you're running OpenGL through it, so the only way is to approximate the frame rate yourself.
The SDL hardware double buffer isn't guaranteed, although it is nice when it works. I've seldom seen it in action.
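For example, with pygame the frame pacing can be approximated with a Clock (a sketch; the 60 here is an assumption about your monitor's refresh rate):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    screen.fill((0, 0, 0))
    # ... blit your sprites here ...
    pygame.display.flip()

    clock.tick(60)    # sleeps so the loop runs at roughly 60 iterations per second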
In my experience with SDL you have to use OpenGL to completely eliminate tearing. It's a bit of an adjustment, but drawing simple 2D textures isn't all that complicated, and you get a few added bonuses that become easy to implement, like rotation, scaling, blending and so on.
However, if you still want to use software rendering, I'd recommend dirty-rectangle updating. It's also a bit difficult to get used to, but it saves loads of processing, which may make it easier to keep the updates up to pace, and it avoids the whole screen tearing (unless you're scrolling the whole play area or something). It also keeps the time spent drawing to the buffer to a minimum, which helps avoid blitting while the screen is updating, the cause of the tearing in the first place.
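A minimal sketch of dirty-rectangle updating in pygame (the moving square is just a stand-in for your own sprites):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))

background = pygame.Surface(screen.get_size())
background.fill((0, 0, 0))
screen.blit(background, (0, 0))
pygame.display.flip()                    # one full update at start-up

clock = pygame.time.Clock()
box = pygame.Rect(0, 200, 40, 40)

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    old = box.copy()
    box.x = (box.x + 4) % 600            # move the box

    screen.blit(background, old, old)    # erase only the old position
    pygame.draw.rect(screen, (200, 50, 50), box)

    # push only the two small rectangles to the display, not the whole screen
    pygame.display.update([old, box])
    clock.tick(60)

pygame.quit()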
Well my eventual solution was to switch to Pyglet, which seems to support OpenGL much better than Pygame, and doesn't have any flicker problems.
Use the SCALED flag and vsync=True when calling set_mode and you should be all set (at least on any systems which actually support this; in some scenarios SDL still can't give you a VSync-capable surface but they are increasingly rare).
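For example (pygame 2.x; note that vsync is only a request, so SDL may still silently ignore it on some drivers):

import pygame

pygame.init()
# SCALED lets SDL pick a suitable real resolution and scale the 320x240
# surface up to it; vsync=1 asks for a vsynced presentation path
screen = pygame.display.set_mode((320, 240), pygame.SCALED, vsync=1)

clock = pygame.time.Clock()
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((20, 20, 60))
    pygame.display.flip()    # waits for the vertical blank when vsync is active
    clock.tick(60)

pygame.quit()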