Tessellation transition - graphics

I want to implement a tessellation transition from fine to coarse geometry (and vice versa) for terrain rendering that doesn't introduce discontinuities (cracks).
Real-time performance is not required, i.e. it can be view-independent.
What do you think about the following proposal (sketched here: http://www.shrani.si/f/A/qD/2UJlczki/tessellation.png)?
Is it even possible?
Have you implemented something similar?
What are the drawbacks?
Do you have any simpler suggestions?

Yes, this has been done many times. See, for instance, Hierarchical 4-K Meshes. There are probably references specific to terrain modeling and rendering, but I don't have one handy.

Related

Why do we still use fixed-function blending operations in D3D11 etc.?

I was looking around trying to understand why we are still using fixed-function blending modes in newer 3D APIs (like D3D11). In D3D10, fixed-function alpha clipping was removed in favor of using shaders. Why? Because it's a much more powerful approach for almost any situation.
So why, then, can we not calculate our own blending operations (i.e. sample from the render target we are currently rendering into)? Is there some hardware design issue in the video card pipelines that makes this difficult to accomplish?
The reason this would be useful is that you could make things like refraction shaders run much faster, since you wouldn't have to swap back and forth between two render targets for each refractive object overlay, such as a refractive windowing system for an OS or game UI.
Where would be the best place to suggest an idea like this, since this is not a discussion forum? I would love to see this in D3D12. Or is it already possible in D3D11?
So why, then, can we not calculate our own blending operations?
Who says you can't? With shader_image_load_store (and the D3D11 equivalent), you can do pretty much anything you want with images, provided that you follow the rules. That last part is generally what trips people up. Doing a full read/modify/write in a shader, such that later fragment shader invocations don't read the wrong value, is almost impossible in the most general case. You have to restrict it by requiring that each rendered object not overlap with itself, and you have to insert a memory barrier between rendered objects (which can overlap with other rendered objects). Or you use the linked-list approach.
But the point is this: with these mechanisms, not only have people implemented blending in shaders, but they've implemented order-independent transparency (via linked lists). Nothing is stopping you from doing what you want right now.
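To make those rules concrete, here is a minimal sketch of shader-side blending via OpenGL image load/store. This is not a drop-in implementation: it assumes an existing GL context with a framebuffer-sized RGBA8 texture bound to image unit 0, and the Mesh/drawMesh names are placeholders.

    #include <GL/glew.h>
    #include <vector>

    // Fragment shader doing its own read/modify/write "over" blend.
    static const char* kBlendFrag = R"(
    #version 450
    layout(binding = 0, rgba8) coherent uniform image2D colorBuf;
    in vec4 srcColor;                          // premultiplied alpha
    void main() {
        ivec2 p = ivec2(gl_FragCoord.xy);
        vec4 dst = imageLoad(colorBuf, p);     // read the current value
        imageStore(colorBuf, p, srcColor + dst * (1.0 - srcColor.a));
    })";

    struct Mesh;                 // placeholder
    void drawMesh(const Mesh&);  // placeholder: issues the draw call

    void drawBlendedObjects(const std::vector<Mesh*>& objects) {
        for (const Mesh* m : objects) {
            drawMesh(*m);  // per the rules: an object must not overlap itself
            // Make this object's image writes visible to the next draw.
            glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
        }
    }

Note the barrier between draws: that is exactly the cost the fixed-function blender doesn't pay.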
Well, nothing except performance, of course. The fixed-function blender will always be faster because it can run in parallel with the fragment shader operations. The blending units are separate hardware from the fragment shaders, so you can be doing blending operations while simultaneously doing fragment shader ops (obviously for later fragments, not the ones being blended).
The read/modify/write mechanism in the blend hardware is designed specifically for blending, while image_load_store is a more generic mechanism. And while generic may beat specific in the long term of hardware evolution, for the immediate and near future you can expect fixed-function blending to beat image_load_store blending performance-wise every time.
You should use it only when you must. And even then, decide whether you really, really need it.
Is there some hardware design issue in the video card pipelines that makes this difficult to accomplish?
Yes, this is actually the case. If one could do blending in the fragment shader, this would introduce possible feedback loops, and this really complicates things. Blending is done in a separate hardwired stage for performance and parallelization reasons.

Best-looking texture mapping for characters (NPCs, enemies)

I'm making "dungeon master-like" game where corridors and objects will be models. I have everything completed, but the graphic part of the game missing. I also made test levels without texture.
I would like to know which texture mapping would be the best for a realistic look.
I was thinking about parallax mapping for walls and doors, normal mapping for objects like treasure and boxes.
What mapping should I choose for enemies, npcs?
I have never worked with HLSL before, so I want to be sure that I'll go straight ahead for my goal because I expect another hard work there.
Which mapping to use depends on your taste. But first of all, implement diffuse color mapping and per-pixel lighting. When that is working, add normal mapping. If you're still not satisfied, add parallax mapping.
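As a rough idea of the step from diffuse mapping to normal mapping, the fragment shader stays small. A sketch in GLSL (you mentioned HLSL, but the math is identical; the shader is shown as a C++ string, and all names are illustrative):

    static const char* kNormalMapFrag = R"(
    #version 330
    uniform sampler2D diffuseMap;   // existing diffuse color texture
    uniform sampler2D normalMap;    // tangent-space normals packed into [0,1]
    in vec2 uv;
    in vec3 lightDirTS;             // light direction in tangent space
    out vec4 fragColor;
    void main() {
        vec3 albedo = texture(diffuseMap, uv).rgb;
        // Unpack the [0,1] texel to a [-1,1] tangent-space normal.
        vec3 n = normalize(texture(normalMap, uv).rgb * 2.0 - 1.0);
        float ndl = max(dot(n, normalize(lightDirTS)), 0.0);
        fragColor = vec4(albedo * ndl, 1.0);
    })";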
Even better results than the combination of normal and parallax mapping can be achieved using DirectX 11 tessellation and displacement mapping, but this is much more GPU-intensive and may not work on older hardware.

Creating a lava lamp-like animation

I recently saw something that set me wondering how to create a realistic-looking (2D) lava lamp-like animation, for a screen-saver or game.
It would of course be possible to model the lava lamp's physics using partial differential equations, and to translate that into code. However, this is likely to be both quite difficult (because of several factors, not least of which is the inherent irregularity of the geometry of the "blobs" of wax and the high number of variables) and probably computationally far too expensive to calculate in real time.
Analytical solutions, if any could be found, would be similarly useless because you would want to have some degree of randomness (or stochasticity) in the animation.
So, the question is: can anyone think of an approach that would allow you to animate a realistic-looking lava lamp in real time (at, say, 10-30 FPS) on a typical desktop/laptop computer, without modelling the physics in any great detail? In other words, is there a way to "cheat"?
One way to cheat might be to use a probabilistic cellular automaton with a well-chosen transition table to simulate the motion of the blobs. Some popular screensavers (in particular ParticleFire) use this approach to elegantly simulate complex motion in 2D space by breaking the objects down into individual pixels and then defining how individual pixels transition by looking at the states of their neighbors. You can get surprisingly complex emergent behavior from simple cellular automata; look at Conway's Game of Life, for example, or this simulation of a forest fire.
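As a minimal, untuned sketch of the idea: a grid where "wax" cells rise into empty cells with some probability. A real lava lamp would also want a heat rule that makes cooled wax sink, but the automaton skeleton looks roughly like this:

    #include <cstdint>
    #include <random>
    #include <utility>
    #include <vector>

    enum Cell : std::uint8_t { Empty = 0, Wax = 1 };

    // One step of a toy probabilistic CA: wax rises into an empty cell
    // above it with probability riseProb. Row h-1 is the top of the lamp.
    void stepLava(std::vector<Cell>& grid, int w, int h, float riseProb,
                  std::mt19937& rng) {
        std::uniform_real_distribution<float> u(0.0f, 1.0f);
        // Scan from the top row down so each cell moves at most once per step.
        for (int y = h - 2; y >= 0; --y) {
            for (int x = 0; x < w; ++x) {
                int here = y * w + x, above = (y + 1) * w + x;
                if (grid[here] == Wax && grid[above] == Empty &&
                    u(rng) < riseProb)
                    std::swap(grid[here], grid[above]);
            }
        }
    }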
LavaLite is open source; you can get the code from the xscreensaver-gl package in most Linux distros. It uses metaballs.
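The metaballs idea in a nutshell: each blob contributes a smooth falloff field, and you draw the isocontour where the summed field crosses a threshold, which is what gives the characteristic merging and splitting. A minimal 2D field evaluation (names illustrative):

    #include <vector>

    struct Ball { float x, y, r; };  // center and radius of one blob

    // Summed falloff field; a pixel is "inside the lava" where the
    // result is >= some threshold (e.g. 1.0).
    float fieldAt(float px, float py, const std::vector<Ball>& balls) {
        float sum = 0.0f;
        for (const Ball& b : balls) {
            float dx = px - b.x, dy = py - b.y;
            sum += (b.r * b.r) / (dx * dx + dy * dy + 1e-6f);
        }
        return sum;
    }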

Recommended 3D Programming Aspects for Light/Laser Show Simulator?

Hey guys, I would like to develop a light/laser show editor and simulator, and for this I am of course going to have to learn some graphics programming. I am thinking about using C# and XNA.
I was just wondering what aspects of graphics programming I should research or focus on given the project I am working on. I am new to graphics programming so I don't know much about it, but for example I imagine something that I might look into would (possibly?) be volumetric lighting.
For example, what would be a practical way to go about rendering a 'laser' of varied width/color? I read somewhere to just draw a cylinder and apply a shader to it; I would like to confirm that this is the way to go.
Given that this seems like a big project, I was thinking of starting off by creating light sources and giving them properties so that I can easily manipulate them. I have (mis)read that only a certain number of lights can be rendered at any given time, I believe eight. Does this apply only to ambient lights? Given this possible limitation, and the fact that most of the lights I will use will be directional (such as headlights or lasers), what would be a different way to render these? Is that what volumetric lighting would be?
I'd just like to get some things clear before I dive into it. Since I'm new to this I probably didn't make the best use of words, so if something doesn't make sense please let me know. Thanks and sorry for my ignorance.
The answer to this depends on the level of sophistication you need in your display simulation. Computer graphics is ultimately a simulation of the transport of light; that simulation can be as sophisticated as calculating the fraction of laser light deflected by particles in the atmosphere toward the viewer's eyepoint, or as simple as drawing a line. Try out the cylinder effect and see if it works for your project. If you need something more sophisticated, look into shader programming (using Nvidia Cg, for example) and volumetric shading, as you mentioned; post-processing glow effects may also be useful. For OpenGL's fixed-function pipeline, I believe there is a limit of eight light sources in a scene, but you could conceivably work around this limit by doing your own shading logic.
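On the "simple as drawing a line" end of that spectrum, the usual cheat for a beam is a camera-facing quad along the beam axis drawn with additive blending. A rough sketch in legacy OpenGL (the Vec3 helpers are just for illustration):

    #include <GL/gl.h>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    static Vec3 mul(Vec3 v, float s) { return {v.x*s, v.y*s, v.z*s}; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }
    static Vec3 norm(Vec3 v) {
        return mul(v, 1.0f / std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z));
    }

    // Draw a beam from a to b as a single quad rotated toward the camera.
    void drawBeam(Vec3 a, Vec3 b, Vec3 camPos, float width) {
        Vec3 axis = norm(sub(b, a));
        Vec3 side = mul(norm(cross(axis, norm(sub(camPos, a)))), width * 0.5f);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE);  // additive: overlapping beams brighten
        glDepthMask(GL_FALSE);              // don't write depth for glowing quads
        glBegin(GL_QUADS);
        glVertex3f(a.x - side.x, a.y - side.y, a.z - side.z);
        glVertex3f(a.x + side.x, a.y + side.y, a.z + side.z);
        glVertex3f(b.x + side.x, b.y + side.y, b.z + side.z);
        glVertex3f(b.x - side.x, b.y - side.y, b.z - side.z);
        glEnd();
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
    }

A soft-falloff texture on the quad plus a post-process glow pass gets you most of the "laser" look without any volumetric lighting at all.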
Well, if it's just for light show simulations, I'd imagine you're going to need a lot of custom lighting effects; so regardless of whether you decide to use XNA or straight DirectX, your best bet would be to start by learning shader languages and how to program various lighting effects with them. Once you can reproduce the type of laser lighting you want, you can experiment with the polygons you want to use to represent the lasers. (I've used the cylinder method in some of my own work, but I'm not sure how well straight cylinders will fit your purpose.)
Although it's faster, I think it's best not to use vanilla hardware lighting because of its limitations. Pixel shaders can help with your task. You may also want to choose OpenGL for its portability and the clarity of its rendering methods. I worked with Direct3D for several years before switching to OpenGL; OpenGL's functions and states are easier to learn, and rendering methods (like multi-pass rendering) are a lot clearer. If you like to code in C# (which I don't recommend for these tasks), you can use the CsGL library to access OpenGL functions.

Collision detection with hardware generated primitives

There's a lot of literature on collision detection, and I've read at least a big enough portion of it to be fairly familiar with most techniques. However, there's something that has eluded me for a while, and I figured, since StackOverflow provides access to a large group of brilliant minds at once, I'd ask here first before digging around in the bookshelf.
In this day and age, more and more work is being delegated to the GPU rather than the CPU, and in a lot of cases this is a good thing. For example, there are geometry shaders to create new geometry, or (slightly less new, but still quite fascinating) vertex shaders that you can throw a bunch of vertices at, and something elegant will come out. What I was considering, though: as these primitives exist only on the graphics hardware, how would you perform reliable collision detection with them? Let's assume I have some kind of extremely simplified mesh which is displaced in a vertex shader (I don't have a concrete problem; I'm more playing with the idea, and I haven't gotten very deep into geometry shaders yet).
What I've considered so far is separate 'rendering' passes from suitable angles with shading more or less turned off, and perhaps a lower-resolution mesh: render the inside (with faces flipped inward) of my second primitive along with the mesh I want to test against, and execute an occlusion query for the mesh. If the mesh is completely occluded, there'd be no intersection. This would of course require that my second primitive be convex.
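The occlusion-query mechanics for that test would look roughly like this in OpenGL (the two draw helpers are placeholders, and context/shader setup is assumed; whether the geometry setup above is actually sound is exactly what I'm asking):

    #include <GL/glew.h>

    void drawProbeInterior();  // placeholder: convex volume, faces flipped inward
    void drawTestMesh();       // placeholder: runs the displacement vertex shader

    // Returns true if the mesh may intersect the probe volume, per the
    // scheme described above (zero visible samples = fully occluded).
    bool mayIntersect() {
        GLuint query = 0;
        glGenQueries(1, &query);

        // 1. Lay down the probe volume's depth (no color output needed).
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        drawProbeInterior();

        // 2. Draw the mesh inside an occlusion query, all writes off.
        glDepthMask(GL_FALSE);
        glBeginQuery(GL_SAMPLES_PASSED, query);
        drawTestMesh();
        glEndQuery(GL_SAMPLES_PASSED);
        glDepthMask(GL_TRUE);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

        // 3. Read back the count (this blocks until the GPU is done).
        GLuint samples = 0;
        glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
        glDeleteQueries(1, &query);
        return samples > 0;
    }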
Somehow I get the feeling that this kind of test will become extremely expensive as the number of primitives increases (even if a large portion can be culled early). Does anyone have another idea or technique? I'm more familiar with OpenGL and Cg than DirectX, but if you have some examples in DirectX, I guess I'll be able to figure out the OpenGL counterparts.
All ideas are appreciated, so please brainstorm. :)
It sounds like Dan Horn's article “Stream Reduction Operations for GPGPU Applications” in GPU Gems 2 is exactly what you want. Like all chapters, it's freely available online.