I am trying to identify which mesh/actor is being rendered from within the thread responsible for issuing the actual graphics API draw calls (the RHI thread) in Unreal Engine 4. I understand that most of the information passed to the RHI thread comes from the rendering thread and mostly consists of a queue of draw commands, callbacks, and shader information. After some digging in the UE4 source code I am left with these structs:
FMeshDrawCommand
FMeshBatch
RHICommandList
None of these gives any information about what is actually being rendered at a low level.
I understand that what I want to do is hacky and that the correct way to achieve it would be from within the rendering thread, not the RHI thread. I am doing this for experimentation, learning, and some fun. If any UE4 rendering wizards have a clue, that would be great. Thank you.
I am working on a 3D simulation program using OpenGL which uses a render loop with a fixed framerate to keep the screen updated as the world changes. Standard procedure really, and for a typical video game this is certainly the best approach (I originally took this code from an OpenGL game tutorial). But for me, the 3D scene will not be changing as rapidly and unpredictably as in a computer game. It will be possible for the 3D scene itself to change from time to time, but in general it won't change between render calls (it's more of a visualisation tool for geometric problems). The user will be able to control the position/orientation of the camera, but there will be times when the camera won't move for several seconds/minutes (potentially hundreds of render calls), and since the 3D scene is likely to be static for the majority of the time, I wonder if I really need a continuous render loop...?
My thinking is that I will remove the automatic render loop and instead explicitly call my update method when either:
The 3D scene changes (very rare)
The camera moves (somewhat rare)
As I will be using this largely for research purposes, the scene/camera is likely to stay in one state for several minutes at a time and it seems silly to be continuously updating the frame buffer when it's not changing.
My question then is, is this a good approach? All the online tutorials for 3D graphics rendering seem to deal with game design but that's not really my requirement. In other words, what are the pros and cons of using a render loop vs. manually calling "update()" whenever something changes?
Thanks
There's no problem with this approach; in fact many 3D apps, like 3ds Max, use explicit rendering. You just pick what is better for your needs: in most games the scene changes every frame, so it's better to have an update loop, but if you were doing something like a chess game without an animated UI, you could also render explicitly, only when the scene changes.
For apps with rare changes, like 3ds Max or Blender, it is better to render only when something changes. This way you save CPU/GPU time, but also power, and your PC doesn't heat up as much.
With explicit rendering you can also use some performance tricks, like drawing a simplified scene while the camera moves, for better responsiveness. Then, when the camera stops, you render the full scene once more in the background and replace the low-quality rendering with the new one.
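To make this concrete, here is a minimal sketch of on-demand rendering, assuming a GLFW 3 style windowing setup (GLFW and every function name below are illustrative assumptions, not something taken from the question or answer):

    // Sketch: redraw only when something changed, instead of spinning at a
    // fixed framerate. Assumes GLFW 3 and a GL context already created.
    #include <GLFW/glfw3.h>

    // Provided elsewhere by the application (hypothetical hooks).
    void renderScene();             // full-quality render
    void renderSimplifiedScene();   // cheap preview while the camera moves

    static bool g_dirty = true;         // scene or camera changed
    static bool g_cameraMoving = false;

    static void onKey(GLFWwindow*, int, int, int, int) { g_dirty = true; }
    static void onCursor(GLFWwindow*, double, double) {
        g_cameraMoving = true;          // camera update would happen here
        g_dirty = true;
    }
    static void onRefresh(GLFWwindow*) { g_dirty = true; }  // OS asked for a redraw

    void runLoop(GLFWwindow* window) {
        glfwSetKeyCallback(window, onKey);
        glfwSetCursorPosCallback(window, onCursor);
        glfwSetWindowRefreshCallback(window, onRefresh);

        while (!glfwWindowShouldClose(window)) {
            glfwWaitEvents();           // block until an event arrives
            if (!g_dirty)
                continue;
            if (g_cameraMoving)
                renderSimplifiedScene();
            else
                renderScene();
            glfwSwapBuffers(window);
            g_dirty = false;
            g_cameraMoving = false;
        }
    }

Whether you block in glfwWaitEvents() or poll and sleep is a detail; the point is simply that the frame buffer is redrawn only when the scene, camera, or window actually changes. For the low-quality-while-moving trick from the answer, a real app would also want a short timeout (e.g. glfwWaitEventsTimeout) so it can re-render at full quality once the camera has been still for a moment.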
I have been tearing my hair out for a while over this. I need an OpenGL 3.2 Core (no deprecated stuff!) way to efficiently render many sprites, using batching (no instancing).
I've seen examples that do this with geometry alone, but mine also needs to send textures along with it, and I don't know how to do this.
I need a well-done example of it working in action. Looking at how other libraries like MonoGame do it isn't much help, because all I'm interested in is the GL code, and it has to contain no deprecated stuff.
Basically I want to be able to efficiently render thousands of sprites or more, all textured. The texture is just a spritesheet, so I only need to tell it to render a region of that spritesheet.
I'm disappointed in the amount of material available for the programmable pipeline, to the point where it seems like it would be much easier to just say screw it and use the fixed pipeline, even though I definitely don't want to do that.
So yeah, any full examples that do what I want? Or could somebody more knowledgeable write one up? :)
A lot of the examples are "oh, here's how you render 1 triangle". Well, that's great, except nobody needs to render only 1 triangle/quad, and they need to be textured on top of that!
An example that uses VBOs/VAOs/EBOs
ALSO: this means the code can't use glTexCoordPointer and that kind of thing, but just raw VBOs/VAOs...
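To make the request concrete, here is a minimal sketch of the kind of batching being asked about: one VAO/VBO/EBO holding many textured quads, with per-sprite UVs taken from a sub-rectangle of the spritesheet, drawn with a single glDrawElements call. It assumes a GL 3.2 core context, a loader such as GLEW already initialized, and an already-compiled shader program with attribute 0 = vec2 position and attribute 1 = vec2 texcoord; all names are illustrative, not from the question.

    // Sketch: batch many textured quads into one VBO/EBO and draw them with a
    // single glDrawElements call.
    #include <GL/glew.h>
    #include <cstdint>
    #include <vector>

    struct Sprite {
        float x, y, w, h;        // position/size in screen units
        float u0, v0, u1, v1;    // sub-rectangle of the spritesheet (0..1)
    };

    struct SpriteBatch { GLuint vao = 0, vbo = 0, ebo = 0; };

    SpriteBatch createBatch() {
        SpriteBatch b;
        glGenVertexArrays(1, &b.vao);
        glGenBuffers(1, &b.vbo);
        glGenBuffers(1, &b.ebo);
        glBindVertexArray(b.vao);
        glBindBuffer(GL_ARRAY_BUFFER, b.vbo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, b.ebo);   // stored in the VAO
        // 4 floats per vertex: x, y, u, v
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float),
                              (void*)(2 * sizeof(float)));
        glBindVertexArray(0);
        return b;
    }

    void drawSprites(const SpriteBatch& b, const std::vector<Sprite>& sprites) {
        std::vector<float>    verts;   verts.reserve(sprites.size() * 16);
        std::vector<uint32_t> indices; indices.reserve(sprites.size() * 6);

        for (uint32_t i = 0; i < sprites.size(); ++i) {
            const Sprite& s = sprites[i];
            const float v[16] = {
                s.x,       s.y,       s.u0, s.v0,   // bottom-left
                s.x + s.w, s.y,       s.u1, s.v0,   // bottom-right
                s.x + s.w, s.y + s.h, s.u1, s.v1,   // top-right
                s.x,       s.y + s.h, s.u0, s.v1,   // top-left
            };
            verts.insert(verts.end(), v, v + 16);
            const uint32_t base = i * 4;
            const uint32_t idx[6] = { base, base + 1, base + 2,
                                      base, base + 2, base + 3 };
            indices.insert(indices.end(), idx, idx + 6);
        }

        glBindVertexArray(b.vao);
        glBindBuffer(GL_ARRAY_BUFFER, b.vbo);
        glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float),
                     verts.data(), GL_STREAM_DRAW);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, b.ebo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(uint32_t),
                     indices.data(), GL_STREAM_DRAW);
        glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, (void*)0);
        glBindVertexArray(0);
    }

For thousands of sprites you would usually allocate the buffers once and refill them each frame (glBufferSubData or mapping), but the structure stays the same.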
I saw this question and decided to write a little program that does some "sprite" rendering using points and gl_PointSize. I'm not quite sure what you mean by "batching" as opposed to "instancing", but my program uses the glDrawArraysInstanced() call so that I can render multiple points without needing my VBO to be variable sized. My code also doesn't texture the sprites, but that's easy enough to add in: upload the texture unit index (the one that was active during your call to glTexSubImage2D) to a GLSL sampler2D using glUniform1i.
Anyway, here's the program I wrote: http://litherum.blogspot.com/2013/02/sprites-in-opengl-programmable-pipeline.html Hope you can learn from it!
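For the texturing step that the answer says is easy to add, here is a hedged sketch of what it could look like (the shader version, uniform names, and function names are assumptions, not taken from the linked program): bind the spritesheet to a texture unit, tell the sampler2D which unit to read via glUniform1i, and sample the sheet in the fragment shader using gl_PointCoord remapped into the sprite's sub-rectangle.

    // Sketch only: texturing gl_PointSize point sprites from a spritesheet.
    // Assumes an existing GL 3.2 core context and a compiled/linked program;
    // compile this fragment shader alongside your existing vertex shader.
    #include <GL/glew.h>

    static const char* kFragmentSrc =
        "#version 150 core\n"
        "uniform sampler2D uSheet;\n"
        "uniform vec4 uSpriteRect;   // (u0, v0, width, height) in 0..1\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    vec2 uv = uSpriteRect.xy + gl_PointCoord * uSpriteRect.zw;\n"
        "    fragColor = texture(uSheet, uv);\n"
        "}\n";

    void bindSpriteSheet(GLuint program, GLuint sheetTexture) {
        glActiveTexture(GL_TEXTURE0);                 // texture unit 0
        glBindTexture(GL_TEXTURE_2D, sheetTexture);
        glUseProgram(program);
        // The sampler uniform receives the *unit* index (0 here),
        // not the GL texture object name.
        glUniform1i(glGetUniformLocation(program, "uSheet"), 0);
    }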
I'm working on an app which needs to draw with OpenGL at a refresh rate at least equal to the refresh rate of the monitor. And I need to perform the drawing in a separate thread so that drawing is never blocked by intense UI actions.
Currently I'm using an NSOpenGLView in combination with CVDisplayLink and I'm able to achieve 60-80 FPS without any problem.
Since I also need to display some Cocoa controls on top of this view, I tried to subclass NSOpenGLView and make it layer-backed, following Apple's LayerBackedOpenGLView example.
The result isn't satisfactory and I get a lot of artifacts.
Therefore I've solved the problem by using a separate NSWindow to host the Cocoa controls and adding this window as a child window of the main window containing the NSOpenGLView.
It works fine and I'm able to get roughly the same FPS as with the initial implementation.
Since I consider this solution a bit of a dirty hack, I'm looking for a cleaner alternative way of achieving what I need.
A few days ago I came across NSOpenGLLayer and thought that it could be a viable solution to my problem.
So finally, after all this preamble, here comes my question:
Is it possible to draw to an NSOpenGLLayer from a separate thread using a CVDisplayLink callback?
So far I've tried to implement this, but I'm not able to draw from the CVDisplayLink callback. I can only call -setNeedsDisplay:YES on the NSOpenGLLayer from the CVDisplayLink callback and then perform the drawing in -drawInOpenGLContext:pixelFormat:forLayerTime:displayTime: when it gets called automatically by Cocoa. But I suppose that this way I'm drawing from the main thread, aren't I?
After googling for this I've even found this post in which the user claims that under Lion drawing can occur only inside -drawInOpenGLContext:pixelFormat:forLayerTime:displayTime:.
I'm on Snow Leopard at the moment but the app should run flawlessly even on Lion.
Am I missing something?
Yes, it is possible, though not recommended. Call display on the layer from within your CVDisplayLink callback. This will cause canDrawInContext:... to be called, and if it returns YES, drawInContext:... will be called, all on whatever thread called display. To make the rendered image visible on screen, you have to call [CATransaction flush]. This approach has been suggested on the Apple mailing list, though it is not completely problem free (the display methods of other views may get called on your background thread as well, and not all views support rendering from a background thread).
The recommended way is to make the layer asynchronous and do the OpenGL rendering on the main thread. If you cannot achieve a good framerate that way because your main thread is busy elsewhere, it is recommended to move everything else (pretty much your whole application logic) to other threads (e.g. using Grand Central Dispatch) and keep only user input and drawing code on the main thread. If your window is very big, you may still not get anything better than 30 FPS (one frame every two screen refreshes), but that comes from the fact that CALayer composition seems to be a rather expensive process and has been optimized for more or less static layers (e.g. layers containing a picture), not for layers updating themselves at 60 FPS.
E.g. if you are writing a 3D game, you are advised not to mix CALayers with OpenGL content at all. If you need Cocoa UI elements, either keep them separate from your OpenGL content (e.g. split the window horizontally into a part that displays only OpenGL and a part that displays only controls) or draw all the controls yourself (which is pretty common for games).
Last but not least, the two-window approach is not as exotic as you may think; that's how VLC (the video player) draws its controls over the video image (which is also rendered with OpenGL on the Mac).
First I want to apologize for my approximate English, as I'm French. I'm currently making a real-time game in Java, using LWJGL.
I have some questions regarding game loops:
I'm running the rendering routine in its own thread. Is this a good idea? Usually the rendering routine is fairly slow and should not slow down the world update (tick) routine, which is far more important. So using a thread here seems like a good idea (minus the complications of using a thread).
In the world update routine, I'm updating a list of entities with the current time. Each entity can then compute its own deltaTime, corresponding to the last time it was updated. This differs from the usual update loop, which updates every entity in the list with the same deltaTime. Per-entity deltaTimes seemed appropriate because of the threaded rendering. Is this a good idea? Should I use the usual single-deltaTime approach instead? If so, is the threaded rendering still needed? And do I have to add a maximum deltaTime?
In general, is it a good idea to have a maximum deltaTime?
Thanks for your time!
Is it a good idea? Separate threads are fairly advanced stuff; I see no reason to use multithreading here to begin with. None of the mobile games I have worked on so far have needed multiple threads, even though they are 'real-time'. Hardcore PC and console games are where multithreading really starts to come into play. Here is a link to a recent talk on the subject if you're interested: http://archive.assembly.org/2011/seminars/adventures-in-multithreaded-gameplay-coding.
It sounds like this could cause some strange behaviour if the physics isn't handled in one go. I'm not sure about this, but colliding an object that has already been updated to a new position with an object that is updated at a different time, for example, and then correcting that sort of situation, may become problematic. Fast-moving collisions may need to be subdivided, which may be why you have the separate update thread, but why not calculate them all as happening at the same time?
'Variable timestep' and 'fixed timestep' are the options available for the render/update loop. Most games at the moment seem to choose a 30 fps fixed timestep. The rendering has to be kept under that limit so that no catching up is needed.
One problem with a variable timestep is that you are forced to pass deltaTime to all time-dependent areas. A fixed timestep is handy because you can assume you are running at, say, 30 fps and use that value everywhere. It is the preferred method at the moment, as far as I know.
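For reference, a sketch of that fixed-timestep structure, including the "maximum deltaTime" cap asked about in the question. It is shown in C++ for brevity, but the same shape translates directly to a Java/LWJGL loop; running(), update() and render() are hypothetical application hooks, not names from the question.

    // Sketch: fixed timestep with an accumulator, plus a cap on frame time so
    // that a long stall (breakpoint, window drag) cannot cause a huge burst of
    // catch-up updates.
    #include <algorithm>
    #include <chrono>

    bool running();          // provided elsewhere (hypothetical)
    void update(double dt);  // world tick
    void render();           // draw current state

    void gameLoop() {
        using clock = std::chrono::steady_clock;
        const double kStep  = 1.0 / 30.0;  // fixed update rate, 30 Hz
        const double kMaxDt = 0.25;        // the "maximum deltaTime" clamp
        double accumulator  = 0.0;
        auto previous       = clock::now();

        while (running()) {
            auto now = clock::now();
            double frameTime =
                std::chrono::duration<double>(now - previous).count();
            previous = now;

            accumulator += std::min(frameTime, kMaxDt);

            // Advance the simulation in fixed slices; every system can assume
            // exactly kStep seconds per update, so no deltaTime plumbing.
            while (accumulator >= kStep) {
                update(kStep);
                accumulator -= kStep;
            }
            render();
        }
    }

With this structure the question of per-entity deltaTimes largely goes away: every entity is always advanced by the same fixed step, regardless of how fast rendering happens to be.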
Though this question is a few years old…
AFAIK,
Rendering is usually done on a separate processor, the GPU, so it is already effectively a separate thread. However, draw commands must be processed by the graphics driver (which runs on the CPU) before being dispatched to the GPU, and that processing can be moved off the game-logic thread by multithreading. In that case, though, you are responsible for managing synchronization between the logic and rendering threads.
Generally speaking, games are all about interactions between objects, and it is very hard to divide the state graph into fully separate parts. As a result, the whole game state usually becomes a single graph, and that graph cannot be updated while it is being rendered. In that case you gain no benefit from being multithreaded.
If you can keep a separate, immutable copy of the data needed for rendering, then you may gain some benefit from rendering in a separate thread. Otherwise, I don't recommend it.
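A sketch of that "separate immutable data" idea (the snapshot type, field names, and channel class are invented for illustration): the logic thread builds an immutable snapshot of whatever the renderer needs each tick and publishes it, and the render thread only ever reads the most recently published snapshot, so the two threads never touch the same mutable state.

    // Sketch: the logic thread publishes immutable snapshots; the render
    // thread consumes the latest one. shared_ptr handles lifetime, and the
    // lock is only held for a pointer copy, never during update or render.
    #include <memory>
    #include <mutex>
    #include <vector>

    struct RenderSnapshot {                    // immutable once published
        struct Sprite { float x, y, rotation; int frame; };
        std::vector<Sprite> sprites;
    };

    class SnapshotChannel {
    public:
        void publish(std::shared_ptr<const RenderSnapshot> s) {
            std::lock_guard<std::mutex> lock(mutex_);
            latest_ = std::move(s);            // called by the logic thread
        }
        std::shared_ptr<const RenderSnapshot> latest() const {
            std::lock_guard<std::mutex> lock(mutex_);
            return latest_;                    // called by the render thread
        }
    private:
        mutable std::mutex mutex_;
        std::shared_ptr<const RenderSnapshot> latest_;
    };

The render thread can keep drawing from an old snapshot while the logic thread builds the next one; the cost is an extra copy of the renderable state per tick, which is usually far cheaper than sharing the live game-state graph.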
In addition, you should consider the GC if you truly want a real-time game. GC-related performance issues are usually the biggest obstacle to making anything real-time.
Are there any how-tos?
What is the best practice here for background-thread drawing?
Is it okay to store the rectangle data from [NSView drawRect] in a queue, let the background thread take the rectangle, create a bitmap, render the data into the bitmap, and then use performSelectorOnMainThread:withObject: to draw it? Or can I draw directly into a context from the background thread?
I bought the book "Programming with Quartz 2D" by Bunny Laden but haven't read it yet, and there is no hint about multithreading in it. I also couldn't find anything in the normal Apple API reference pages.
Yes, it's okay to store the rectangle data from [NSView drawRect] in a queue, let the background thread take the rectangle, render the data into a bitmap, and then use performSelectorOnMainThread:withObject: to draw it.
As long as you do it in a thread-safe manner.
Thread safety has nothing to do with drawing, so there is no reason it would be mentioned in "Programming with Quartz 2D" (which is a great book, by the way - you should definitely get around to reading it). You probably want a companion book on multithreading.
Just consider the first part of your question: how are you going to store the rect in a queue? Add it to an NSMutableArray? That's not thread-safe.
Grand Central Dispatch is going to help a lot (you don't mention what platform you wish to support).
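To tie the pieces together, here is a hedged sketch of the queue-plus-background-bitmap idea using the plain C APIs (libdispatch and Quartz's CGBitmapContext). The function names and the delivery hook are assumptions made up for illustration; in a real app the final step would be your view storing the image and calling -setNeedsDisplay: (or performSelectorOnMainThread:).

    // Sketch: protect the pending-rect queue with a lock, rasterize into an
    // offscreen CGBitmapContext on a background queue, then hand the finished
    // CGImage back to the main queue.
    #include <CoreGraphics/CoreGraphics.h>
    #include <dispatch/dispatch.h>
    #include <mutex>
    #include <vector>

    static std::mutex g_queueLock;
    static std::vector<CGRect> g_dirtyRects;   // filled from drawRect: (main thread)

    void deliverImageToView(CGImageRef image); // hypothetical main-thread hook

    // Call this on the main thread, e.g. from your NSView's drawRect: override.
    void enqueueDirtyRect(CGRect rect) {
        {
            std::lock_guard<std::mutex> lock(g_queueLock);
            g_dirtyRects.push_back(rect);
        }
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            CGRect r;
            {
                std::lock_guard<std::mutex> lock(g_queueLock);
                if (g_dirtyRects.empty()) return;
                r = g_dirtyRects.back();
                g_dirtyRects.pop_back();
            }
            // Offscreen bitmap context, private to this block, so drawing here
            // off the main thread is safe.
            CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
            CGContextRef ctx = CGBitmapContextCreate(
                NULL, (size_t)r.size.width, (size_t)r.size.height,
                8, 0, space, (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
            CGColorSpaceRelease(space);

            CGContextSetRGBFillColor(ctx, 0.2, 0.4, 0.8, 1.0); // placeholder drawing
            CGContextFillRect(ctx, CGRectMake(0, 0, r.size.width, r.size.height));

            CGImageRef image = CGBitmapContextCreateImage(ctx);
            CGContextRelease(ctx);

            // Hand the result back to the main thread for display.
            dispatch_async(dispatch_get_main_queue(), ^{
                deliverImageToView(image);
                CGImageRelease(image);
            });
        });
    }

This file would need to be compiled as Objective-C++ (or C++ with blocks enabled) on OS X. The important part is simply that the shared queue is locked, the bitmap context is never shared between threads, and only the finished image crosses back to the main thread.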