I have some questions about the SSAO technique implementation:
Does it really need a second (or more) rendering pass over all of the geometry? I found some tutorials about it, but mostly they just give you directions without going into further detail.
Is there any optimization possible? I'm using OSG, and I get the impression that reading the textures back to the CPU and then uploading them to the GPU again isn't the best possible solution.
Is it possible to make the shaders write the sample depths to a texture and pass it to the second pass, using only a full-screen quad, the scene colors, the scene depth and the depths for the occlusion tests? I'm using OSG and couldn't find how to do this properly in the documentation.
In general, SSAO is best suited to being implemented as part of a deferred shading approach. A strictly forward shading approach is possible, but would still require two rendering passes, and SSAO can easily be added to the second rendering pass of a deferred shading engine. In SSAO, you need the complete depth buffer of your scene to be able to calculate occlusion, so the short answer to section 1 of your question is yes, SSAO requires two rendering passes.
Note that in deferred shading, although there are two rendering passes, the complex geometry (i.e. your models) is only rendered during the first pass, and the second pass is generally made up of simple polygon shapes rendered for each type of light. This is almost what you're suggesting in section 3 of your question.
With regards to section 2 of your question, when set up correctly, you shouldn't need to move your intermediate textures back to the CPU and then back to the GPU between the two rendering passes; you merely make your first rendering pass's textures available as a resource to your second rendering pass.
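To make that concrete for OSG: pass one is typically a PRE_RENDER camera that renders the scene into colour and depth textures via an FBO, and pass two is a full-screen quad whose state set binds those textures for the SSAO shader, so nothing ever comes back to the CPU. A minimal sketch, assuming a sceneRoot node and an ssaoProgram shader you already have (clear masks, near/far settings and the SSAO shader itself are omitted):

```cpp
#include <osg/Camera>
#include <osg/Geode>
#include <osg/Geometry>
#include <osg/Group>
#include <osg/Program>
#include <osg/Texture2D>
#include <osg/Uniform>

// Wire up the two passes; sceneRoot and ssaoProgram are assumed to exist already.
osg::ref_ptr<osg::Group> buildSsaoGraph(osg::Node* sceneRoot, osg::Program* ssaoProgram,
                                        int width, int height)
{
    // Textures that pass 1 writes and pass 2 reads -- they never leave the GPU.
    osg::ref_ptr<osg::Texture2D> colorTex = new osg::Texture2D;
    colorTex->setTextureSize(width, height);
    colorTex->setInternalFormat(GL_RGBA);

    osg::ref_ptr<osg::Texture2D> depthTex = new osg::Texture2D;
    depthTex->setTextureSize(width, height);
    depthTex->setInternalFormat(GL_DEPTH_COMPONENT24);
    depthTex->setSourceFormat(GL_DEPTH_COMPONENT);
    depthTex->setSourceType(GL_FLOAT);

    // Pass 1: pre-render camera renders the real geometry into the FBO attachments.
    osg::ref_ptr<osg::Camera> prePass = new osg::Camera;
    prePass->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    prePass->setRenderOrder(osg::Camera::PRE_RENDER);
    prePass->setViewport(0, 0, width, height);
    prePass->attach(osg::Camera::COLOR_BUFFER, colorTex.get());
    prePass->attach(osg::Camera::DEPTH_BUFFER, depthTex.get());
    prePass->addChild(sceneRoot);

    // Pass 2: full-screen quad whose state set binds the pass-1 textures for the SSAO shader.
    osg::ref_ptr<osg::Geode> quad = new osg::Geode;
    quad->addDrawable(osg::createTexturedQuadGeometry(
        osg::Vec3(-1.0f, -1.0f, 0.0f), osg::Vec3(2.0f, 0.0f, 0.0f), osg::Vec3(0.0f, 2.0f, 0.0f)));
    osg::StateSet* ss = quad->getOrCreateStateSet();
    ss->setTextureAttributeAndModes(0, colorTex.get());
    ss->setTextureAttributeAndModes(1, depthTex.get());
    ss->setAttributeAndModes(ssaoProgram);
    ss->addUniform(new osg::Uniform("sceneColor", 0));
    ss->addUniform(new osg::Uniform("sceneDepth", 1));

    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild(prePass.get());
    root->addChild(quad.get());
    return root;
}
```

The key point is that attach() wires the textures straight into the FBO and setTextureAttributeAndModes() binds those same texture objects for the second pass, so the intermediate data stays on the GPU the whole time.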
I am developing on a Linux system using the latest (at the moment) SDL2 (2.0.8) + OpenGL ES 2.0 (GLSL 1.0), eventually targeting a Raspberry Pi 3 board. So far I have done a few things like drawing text with FreeType, drawing lines, text boxes (editable), text lists, waveform boxes (all I need to pass to a function is an array of vertices) and other shapes with glDrawArrays().

Now, there are things that need to be refreshed at, say, 10 times per second and others that only need refreshing once per second. What would be the best approach to avoid re-rendering everything at the rate of 10 times per second? Obviously OpenGL works by drawing everything from scratch on every frame. However, I know other approaches exist, such as rendering on top of the screen you already have, or taking a screenshot and rendering only the fast-changing things on top of it, as well as other solutions.

What do you think would be the best approach to avoid redoing everything before calling SDL_GL_SwapWindow()? How can I take a screenshot, render it to the invisible buffer, then render only the fast-changing objects and then call SDL_GL_SwapWindow()?
This is a screen shot of the app so far drawing basic things
Thanks in advance.
I eventually realized that I should not have posted the question in the first place, but since this is a place where people learn from others, I now feel somewhat better about it :). The thing I had to do was simply stop clearing the invisible buffer (I will call it that for simplicity) and render on top of it only the controls that change. Controls that change are updated by covering the area they occupy with a rectangle and then drawing the new content in that area. I have already done it, and the frame rate just 'exploded'.

I do not really think there is a better approach, since the way I do it requires almost no extra work. All I had to do was add a few if conditions that selectively render or skip controls at the point where my functions iterate through the controls to be drawn on screen, and thereby decide what to render and what not. However, a well-thought-out set of structures is required for every control, instead of endlessly declaring global variables, which only makes things confusing and difficult to maintain.
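To make the bookkeeping concrete, here is a minimal sketch of the dirty-flag idea. The Control struct and the drawing helpers are hypothetical stand-ins for your own GLES code, and it assumes the buffer contents survive SDL_GL_SwapWindow() on your platform (e.g. a preserved-swap EGL configuration), which is exactly what lets you skip the clear:

```cpp
#include <vector>

// Hypothetical control record; your real structs will carry their own geometry/text.
struct Rect { int x, y, w, h; };
struct Control {
    Rect bounds;
    bool dirty;      // set by whatever updates the control (10 Hz, 1 Hz, ...)
    int  refreshHz;  // how often this control's data actually changes
};

// Stand-ins for the application's existing GLES2 drawing helpers.
void drawFilledRect(const Rect& r, const float rgba[4]) { (void)r; (void)rgba; /* glDrawArrays(...) */ }
void drawControl(const Control& c)                      { (void)c;             /* lines / text / waveform */ }

// Called once per frame, *without* clearing the back buffer first.
void renderFrame(std::vector<Control>& controls)
{
    static const float background[4] = {0.f, 0.f, 0.f, 1.f};
    for (Control& c : controls) {
        if (!c.dirty)
            continue;                         // skip everything that hasn't changed
        drawFilledRect(c.bounds, background); // cover the old pixels of this control
        drawControl(c);                       // redraw only this control on top
        c.dirty = false;
    }
    // then SDL_GL_SwapWindow(window);  // in the real application
}

int main()
{
    std::vector<Control> controls = { {{10, 10, 200, 40}, true, 10},
                                      {{10, 60, 200, 40}, true, 1} };
    renderFrame(controls);  // first frame draws both; later frames draw only dirty controls
    return 0;
}
```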
Regards to all.
I have used pbrt to render my scene. I have specified the viewing angle in the scene file and on rendering it with pbrt I see the image from that specific viewing angle. I want to know if there exists a way by which I can rotate the scene rendered by pbrt using my mouse in real time
No.
To see if it is even possible, render a scene and time how long it takes. To get it real-time you will need pbrt to render at least a few frames a second, preferably 60!
I don't think this is going to happen in 2016.
Alternatively, you will need something like an OpenGL representation of the scene to perform the real-time interaction, and then the pbrt-rendered image can only be displayed over the top once the render has finished. The frustums need to match for this to work; otherwise what the user interacts with will not be the same as what they see rendered.
If you're editing the scene file, it sounds like you're not in coding land, so the only possibility is to write some program that can display the scene (in GL), update the scene file's camera information to match the current view, and render with pbrt. It's all going to take a long time: supplying the file means pbrt won't keep anything from the previous state, so it has to parse the file, re-buffer all the geometry and reconstruct its acceleration structures, as well as render the scene. Each frame!
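If you do go down that route, the glue code itself is simple; the cost is entirely in pbrt redoing all of that work. A hedged sketch of the idea, assuming your main scene file pulls the camera in via an Include "camera.pbrt" line (the file names here are placeholders):

```cpp
#include <cstdlib>
#include <fstream>

// Write the current GL camera into a small pbrt include file, then re-run pbrt.
// pbrt's LookAt directive takes eye, look-at point and up vector, in that order.
void renderWithPbrt(float ex, float ey, float ez,
                    float lx, float ly, float lz,
                    float ux, float uy, float uz)
{
    {
        std::ofstream cam("camera.pbrt");  // referenced from scene.pbrt via Include "camera.pbrt"
        cam << "LookAt " << ex << ' ' << ey << ' ' << ez << ' '
            << lx << ' ' << ly << ' ' << lz << ' '
            << ux << ' ' << uy << ' ' << uz << '\n';
    }
    // pbrt re-parses the whole scene and rebuilds its acceleration structures on every call;
    // the output image name comes from the Film declaration inside the scene file.
    std::system("pbrt scene.pbrt");
}
```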
Even in code, pbrt is not going to give you great performance. It's not designed for that; it is designed to be a physically accurate path tracer (as the name suggests). To get anything remotely near real-time, you'll need some seriously good acceleration structures and a better command of the light transport model you are using. If you really are interested, you'll probably need to write your own renderer. Look into Metropolis Light Transport (MLT) and Vertex Connection and Merging (VCM), which are much more refined/efficient Monte Carlo approaches.
Plus some pretty decent hardware with lots of cores, or a decent graphics card if you wish to employ GPU parallelism through CUDA or equivalent.
[EDIT] Also note that the pbrt renderer is based on the book "Physically Based Rendering: From Theory to Implementation" (ISBN-13: 978-0123750792), which outlines how to implement your own version of pbrt.
I lock and fill a vertex buffer every frame in Direct3d9 with data from my blendshape code. My shading uses two steps, so I render once with one shader, then draw an additive blend with my other shader.
For reasons beyond me, the data in my vertex buffer is (apparently) slightly different between those two drawing calls, because I have flickering z-fighting where the second pass sometimes renders 'behind' the first.
This is all done in one thread, and the buffer is unlocked a long time before the render calls. Additionally, no changes to any shader instruction take place, so the data should be exactly the same in both calls. If the blendshape happens not to change, no z-fighting takes place.
For now I 'push' the depth a little in my shader, but this is a very inelegant solution.
Why might this data be changed? Why may DirectX make changes to the data in my buffer after I unlock it? Can I force it not to change it?
1st. Are you sure the data is really being changed by D3D, or is this just an assumption? I'm quite sure D3D doesn't change your data.
2nd. As you said, you have two different shaders drawing your geometry. They may have different transformation operations, or the transformations in the two shaders may be compiled/optimized differently, which is why your transformed vertices can differ slightly (but enough for z-fighting). I suggest using two passes in one shader/technique, as sketched below.
Or, if you still want to use two shaders, you had better use shared code for the transformation and other identical operations.
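For example, with the D3DX effect framework both shading steps can be passes of a single technique, drawn inside one Begin/End block from the same bound vertex buffer, so the position transform is compiled once and applied identically to both draws. A rough sketch (the technique name and the already-bound mesh are placeholders):

```cpp
#include <d3dx9.h>

// Draw the blendshape mesh with both shading steps as two passes of ONE technique,
// so the position transform is shared and both draws see exactly the same vertices.
void drawTwoPass(IDirect3DDevice9* device, ID3DXEffect* effect,
                 UINT numVertices, UINT primitiveCount)
{
    effect->SetTechnique("BlendshapeShading");  // technique with pass P0 (base) and P1 (additive)

    UINT passes = 0;
    effect->Begin(&passes, 0);
    for (UINT i = 0; i < passes; ++i)
    {
        effect->BeginPass(i);
        // Vertex/index buffers and the vertex declaration are assumed to be set already.
        device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                     numVertices, 0, primitiveCount);
        effect->EndPass();
    }
    effect->End();
}
```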
I can assure you that the D3D runtime will not change any data you pass in through a vertex buffer; I did the same thing as you when rendering two-layer terrain, with no z-fighting. But there are indeed some render states that will change the depth while rasterizing the triangles into pixels: D3DRS_DEPTHBIAS and D3DRS_SLOPESCALEDEPTHBIAS in D3D9, or the equivalent fields in the D3D10_RASTERIZER_DESC structure. You should check whether these render states are being changed.
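You can rule this out quickly from code by reading those two states back just before each of your draw calls; a small sketch of the idea in D3D9, resetting them to their default of 0.0f:

```cpp
#include <d3d9.h>
#include <cstring>

// D3DRS_DEPTHBIAS / D3DRS_SLOPESCALEDEPTHBIAS store floats reinterpreted as DWORDs.
static DWORD asDword(float f) { DWORD d; std::memcpy(&d, &f, sizeof(d)); return d; }

void checkDepthBiasStates(IDirect3DDevice9* device)
{
    DWORD bias = 0, slopeBias = 0;
    device->GetRenderState(D3DRS_DEPTHBIAS, &bias);
    device->GetRenderState(D3DRS_SLOPESCALEDEPTHBIAS, &slopeBias);

    if (bias != asDword(0.0f) || slopeBias != asDword(0.0f))
    {
        // Something has changed the bias states -- reset them (or account for it) before drawing.
        device->SetRenderState(D3DRS_DEPTHBIAS, asDword(0.0f));
        device->SetRenderState(D3DRS_SLOPESCALEDEPTHBIAS, asDword(0.0f));
    }
}
```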
You also need to be sure that all of the transform matrices and other constants involved in the position calculation in the shader are precisely equal, otherwise there will be z-fighting.
I suggest you use a graphics debugging tool to check this. You can use PIX, or PerfHUD or Nsight if you are using an NVIDIA card.
I'm sorry for my poor English; it must be hard to understand. But I hope this helps, thanks.
I was looking around trying to understand why we are still using fixed-function blending modes in newer 3D APIs (like D3D11). In D3D10, fixed-function alpha clipping was removed in favor of doing it in the shaders. Why? Because it's a much more powerful approach for almost any situation.
So why, then, can we not compute our own blending operations (i.e. sample, as a texture, the render target we are currently rendering into)? Is there some hardware design issue in the video card pipelines that makes this difficult to accomplish?
The reason this would be useful is that you could do things like make refraction shaders run much faster, since you wouldn't have to swap back and forth between two render targets for each refractive object overlay, such as a refractive windowing system for an OS or game UI.
Since this is not a discussion forum, where might be the best place to suggest an idea like this? I would love to see it in D3D12. Or is this already possible in D3D11?
So why, then, can we not compute our own blending operations?
Who says you can't? With shader image load/store (ARB_shader_image_load_store in OpenGL, and its D3D11 equivalent, UAVs), you can do pretty much anything you want with images. Provided that you follow the rules; that last part is generally what trips people up. Doing a full read/modify/write in a shader, such that later fragment shader invocations don't read the wrong value, is almost impossible in the most general case. You have to restrict it by saying that each rendered object will not overlap with itself, and you have to insert a memory barrier between rendered objects (which can overlap with other rendered objects). Or you use the linked-list approach.
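For reference, the per-object discipline looks roughly like this on the OpenGL side; the shader program, image texture and object lists here are hypothetical, and the fragment shader is assumed to do its own blend via imageLoad/imageStore:

```cpp
#include <GL/glew.h>   // or your GL loader of choice
#include <vector>

// Bind the "framebuffer" as a read/write image and draw objects one at a time,
// with a barrier between them so each object sees the previous object's writes.
void drawWithShaderBlending(GLuint colorImage, GLuint shaderWithImageStore,
                            const std::vector<GLuint>& objectVaos,
                            const std::vector<GLsizei>& vertexCounts)
{
    glUseProgram(shaderWithImageStore);
    glBindImageTexture(0, colorImage, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA8);

    for (size_t i = 0; i < objectVaos.size(); ++i)
    {
        glBindVertexArray(objectVaos[i]);
        glDrawArrays(GL_TRIANGLES, 0, vertexCounts[i]);

        // Make this object's image writes visible before the next object reads them.
        // (Within one object you must still guarantee it doesn't overlap itself.)
        glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
    }
}
```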
But the point is this: with these mechanisms, not only have people implemented blending in shaders, but they've implemented order-independent transparency (via linked lists). Nothing is stopping you from doing what you want right now.
Well, nothing except performance of course. The fixed-function blender will always be faster because it can run in parallel with the fragment shader operations. The blending units are separate hardware from the fragment shaders, so you can be doing blending operations while simultaneously doing fragment shader ops (obviously from later fragments, not the ones being blended).
The read/modify/write mechanism in the blend hardware is designed specifically for blending, while the image_load_store is a more generic mechanism. And while generic may beat specific in the long-term of hardware evolution, for the immediate and near-future, you can expect fixed-function blending to beat image_load_store blending performance-wise every time.
You should use it only when you must. And even then, decide if you really, really need it.
Is there some hardware design issue in the video card pipelines that makes this difficult to accomplish?
Yes, this is actually the case. If one could do blending in the fragment shader, this would introduce possible feedback loops, and this really complicates things. Blending is done in a separate hardwired stage for performance and parallelization reasons.
I've been studying 3D graphics on my own for a while now and I want to get a greater understanding of just how everything works. What I would like to do is to create a simple game without using DirectX or OpenGL. I understand most of the math I believe, but the problem I am running up against is I do not know how to get control of the pixels being displayed in a window.
How do I specify what color I want each pixel in my window to be?
I understand I will probably run into issues with buffering and image tearing, and probably terrible efficiency problems, but I want to create my own program so that I can see, from the very lowest level available in a high-level language, how the rendering process works. I really have no idea where to start, though. I've figured out how to output BMPs, but I would like a running program spitting out 20+ frames per second. How do I accomplish this?
You could pick an environment that allows you to fill an array with pixel values and display it as a bitmap. This way you come closest to poking RGB values into video memory. WPF, Silverlight and HTML5/JavaScript can do this. If you do not need full screen, these technologies should suffice for now.
In WPF and Silverlight, use the WriteableBitmap.
In HTML5, use the canvas
Then it is up to you to implement the logic to draw lines, circles, bezier curves, 3D projections.
This is a lot of fun and you will learn a lot.
I'm reading between the lines that you're more interested in having full control over the rendering process from a low level, rather than having a specific interest in how to achieve that on one specific platform.
If that's the case then you will probably get good bang for your buck by looking at a library like SDL, which provides you with a framebuffer that you can render to directly but abstracts away a lot of the platform-specific issues. It has been around for quite a while and there are some good tutorials to give you an idea of whether it's the kind of thing you're looking for; see this tutorial and the subsequent one in the same series, which should be enough to get you up and running.
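To give a flavour of that approach with SDL2, here is a small self-contained example that fills a plain pixel array on the CPU each frame and hands it to SDL as a streaming texture; the gradient fill is just a stand-in for your own rasteriser:

```cpp
#include <SDL.h>
#include <cstdint>
#include <vector>

int main(int, char**)
{
    const int W = 640, H = 480;
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window*   window   = SDL_CreateWindow("Software framebuffer", SDL_WINDOWPOS_CENTERED,
                                              SDL_WINDOWPOS_CENTERED, W, H, 0);
    SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, 0);
    SDL_Texture*  frame    = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                               SDL_TEXTUREACCESS_STREAMING, W, H);

    std::vector<uint32_t> pixels(W * H);  // this is "your" framebuffer: one ARGB value per pixel
    bool running = true;
    uint32_t t = 0;

    while (running)
    {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        // Your rasteriser goes here: write whatever colours you like into pixels[y * W + x].
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                pixels[y * W + x] = 0xFF000000 | ((x + t) & 0xFF) << 16 | (y & 0xFF) << 8;

        SDL_UpdateTexture(frame, nullptr, pixels.data(), W * 4);  // pitch = bytes per row
        SDL_RenderCopy(renderer, frame, nullptr, nullptr);
        SDL_RenderPresent(renderer);  // this gives you the "20+ frames per second" page flip
        ++t;
    }

    SDL_DestroyTexture(frame);
    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```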
You say you want to create some kind of rendering engine, meaning designing your own pipeline and matrix classes, which you then use to transform 3D coordinates into 2D points.
Once you have the 2D points you've been looking for, you can, for instance on Windows, select a brush and draw your triangles, filling them with colour at the same time.
I do not know why you would need bitmaps, but if you want to practice texturing, for example, you can also do that yourself, although of course on a weak computer this may cost you a significant number of frames per second.
If your aim is to understand how rendering works at the lowest level, this is without doubt good practice.
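To give a taste of the transform step itself, here is a small, hedged sketch of projecting a camera-space 3D point to 2D pixel coordinates with a simple perspective model (the field of view and screen size are just illustrative values):

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Project a camera-space point (camera at the origin, looking down -z)
// onto a screen of screenW x screenH pixels with the given vertical field of view.
Vec2 projectToScreen(const Vec3& p, float screenW, float screenH, float fovYDegrees)
{
    const float f = 1.0f / std::tan(fovYDegrees * 0.5f * 3.14159265f / 180.0f);
    const float aspect = screenW / screenH;

    // Perspective divide: farther points land closer to the centre of the screen.
    const float ndcX = (f / aspect) * p.x / -p.z;  // normalised device coords in [-1, 1]
    const float ndcY = f * p.y / -p.z;

    // Map NDC to pixel coordinates (y flipped so +y up in camera space is up on screen).
    return { (ndcX * 0.5f + 0.5f) * screenW,
             (1.0f - (ndcY * 0.5f + 0.5f)) * screenH };
}

int main()
{
    Vec3 corner = { 1.0f, 1.0f, -5.0f };
    Vec2 s = projectToScreen(corner, 800.0f, 600.0f, 60.0f);
    std::printf("pixel: (%.1f, %.1f)\n", s.x, s.y);
    return 0;
}
```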
Jt Schwinschwiga