Is voxel rendering a form of volume rendering?

When reading about volume rendering, I see voxel rendering is mentioned a lot. I know volume data have voxels, but are the two terms interchangeable, or are they completely different?

They are not interchangeable, but they are not completely different either. As your question title suggests, voxel rendering is a form of volume rendering, since it renders a volume. You can visualize volumetric data with a variety of methods, voxel rendering being one of them.
EDIT: You should also be aware that people sometimes use the term volume rendering for algorithms that simply render a set of discrete volume elements (voxels), in which case it really is interchangeable with voxel rendering. At other times it refers to the more general visualization of volumetric data (3D scalar fields or higher-dimensional data, as arise e.g. from medical or geographical imaging processes), which is rarely done by traditional voxel rendering (rendering small "boxes").
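To make that more general meaning concrete, here is a minimal, hypothetical sketch of direct volume rendering by ray marching a 3D scalar field with front-to-back compositing; the field, the toy transfer function, and every name below are illustrative assumptions rather than any particular library's API:
#include <algorithm>
#include <cmath>

// Illustrative scalar field: the density of a soft unit sphere centered at the origin.
static float sampleField(float x, float y, float z) {
    float r = std::sqrt(x * x + y * y + z * z);
    return std::max(0.0f, 1.0f - r);
}

// March one ray through the field and composite the samples front to back
// (emission-absorption model); returns a grayscale intensity for brevity.
static float renderRay(const float origin[3], const float dir[3],
                       float stepSize, int steps) {
    float accColor = 0.0f, accAlpha = 0.0f;
    for (int i = 0; i < steps && accAlpha < 0.99f; ++i) {
        float t = i * stepSize;
        float v = sampleField(origin[0] + t * dir[0],
                              origin[1] + t * dir[1],
                              origin[2] + t * dir[2]);
        float a = v * stepSize;               // toy transfer function: density -> opacity
        accColor += (1.0f - accAlpha) * a * v;
        accAlpha += (1.0f - accAlpha) * a;
    }
    return accColor;
}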
You should not use volume rendering and voxel rendering interchangeably, as the former is a much broader and more general topic.

All this confusion about "voxel rendering" goes back to the early 90s and the Comanche series of helicopter flight-simulation games, which claimed to use a "voxel terrain engine". To anyone familiar with computer graphics algorithms, "2.5D height-field raycaster" would have been a far more accurate and descriptive designation, but somehow the "voxel rendering" marketing buzzword has stuck and become associated with that particular terrain rendering method as much as with volume rendering, resulting in confusion ever since.

Related

Level of Detail in 3D graphics - What are the pros and cons?

I understand the concept of LOD, but I am trying to find out the negative side of it, and I see no reference to that from Googling around. The only pro I keep coming across is that it improves performance by omitting details when an object is far away and displaying better graphics when the object is near.
Is that seriously the only pro, with zero cons? Please advise. Thanks.
There are several kinds of LOD based on camera distance. Geometric, animation, texture, and shading variations are the most common (there are also LOD changes that can occur based on image size and, for gaming, hardware capabilities and/or frame rate considerations).
At far distances, models can change tessellation or be replaced by simpler models. Animated details (say, fingers) may simplify or disappear. Textures may be swapped for simpler ones, bump maps vanish, specular/diffuse maps combine, etc. And shaders may also be swapped out to reduce the number of texture inputs or calculations (though this is less common and may be less profitable, since when objects are far away they already fill fewer pixels -- but it's important for screen-filling entities like, say, a mountain).
The upsides are that your game/app will have to render less data, and in some cases the LOD down-rezzed model may actually look better when far away than the more complex model (usually because the more detailed model will exhibit aliasing when far away, but the simpler one can be tuned for that distance). This frees up resources for the nearer models that you probably care about, and lets you render overall larger scenes -- you might only be able to render three spaceships at a time at full res, but hundreds if you use LODs.
The downsides are pretty obvious: you need to support asset swapping, which means both the real-time selection and switching of different assets and the management overhead (at times you may have both models in your memory pipeline, one to discard and one to load); and those models don't come out of thin air, someone needs to create them. Finally, and this is really tricky for PC apps, less so for more stable platforms like console gaming: how do you measure the rendering benefit? What's the best point to flip from version A of a model to B, and B to C, etc.? Often LODs are made based on some pretty hand-wavy specifications from an engineer, or even a producer or an art director, based on hunches. Good measurement is important.
LOD has a variety of frameworks. What you are describing fits a distance-based framework.
One possible con is that the distance is usually computed from a single arbitrary point within the object, which introduces inaccuracies. This can cause popping effects at times, since the measured distance (and therefore the chosen LOD) can change with the object's orientation relative to the viewpoint.
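As a rough illustration of a distance-based framework, here is a minimal, hypothetical sketch of picking a model variant from the camera distance; the mesh names and switch thresholds are made up, and in practice they would be tuned through the kind of measurement described above:
// Hypothetical LOD table, ordered from most to least detailed.
struct LodLevel {
    const char* mesh;
    float switchDistance;   // use this level while the camera is closer than this
};

const LodLevel kShipLods[] = {
    { "ship_high",   50.0f },
    { "ship_medium", 150.0f },
    { "ship_low",    400.0f },
};

// Pick the first level whose switch distance has not been exceeded;
// beyond the last threshold the object could also be culled entirely.
const char* selectLod(float distanceToCamera) {
    for (const LodLevel& level : kShipLods) {
        if (distanceToCamera < level.switchDistance)
            return level.mesh;
    }
    return kShipLods[2].mesh;   // clamp to the coarsest model
}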

XNA How can I get the color behind a sprite

I am currently working on a 2D project that generates random black terrain over a loaded background. I have a sprite that is loaded and controlled, and I am trying to find the best method for identifying the color behind the sprite so I can code some color-based collision. I have tried a bunch of tutorials on per-pixel and color collision, but they all seem dependent on a collision map being used, or on bounding boxes between two preloaded images, i.e. the sprite and the colliding object.
If anyone could point me in the right direction it would be greatly appreciated.
Querying textures is a relatively expensive operation; I would strongly recommend that you avoid doing so in real time. Since you're generating your terrain information procedurally at runtime, why not just store it in an array and reference that?
If you need to composite textures or perform other rendering operations in order to create your terrain data, you can copy the resulting render target's data into an array in system memory using the following code:
var data = new Color[width * height];
texture.GetData(data);
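// The array is laid out row by row, so the color behind a sprite at pixel (x, y)
// can then be read with a simple index (a sketch; x and y are assumed to be the
// sprite's position, already clamped to the texture bounds):
Color behindSprite = data[y * width + x];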
Just try to avoid doing it any more often than is necessary.
I think the right direction would be away from pixel-perfect collisions. Most people assume it's necessary, but the fact is, 99% of games don't use pixel-perfect collisions because they are slow, difficult to implement properly, and overkill for most practical games. Most games use AABBs, circles, or spheres. They are simple to detect collisions between, and are "good enough" for most games. The only game I can name that uses pixel-perfect collisions is the original Worms.
This video also does a good job of covering collision detection: http://pyvideo.org/video/615/introduction-to-game-development (Collision Detection #1:13:20)
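To illustrate why AABBs are considered "good enough", here is a minimal, hypothetical overlap test; the struct and field names are illustrative and not tied to any particular framework (XNA's Rectangle.Intersects does essentially the same thing):
// Axis-aligned bounding box described by its top-left corner and size.
struct Aabb {
    float x, y;             // top-left corner
    float width, height;
};

// Two AABBs overlap exactly when they overlap on both axes, so the test is
// just four comparisons -- far cheaper than checking individual pixels.
bool intersects(const Aabb& a, const Aabb& b) {
    return a.x < b.x + b.width  && b.x < a.x + a.width &&
           a.y < b.y + b.height && b.y < a.y + a.height;
}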

Typical rendering strategy for many and varied complex objects in DirectX?

I am learning DirectX. It provides a huge amount of freedom in how to do things, but presumably different strategies perform differently, and it provides little guidance as to what well-performing usage patterns might be.
When using DirectX, is it typical to have to swap in a bunch of new data multiple times on each render?
The most obvious, and probably really inefficient, way to use it would be like this.
Strategy 1
On every single render
Load everything for model 0 (textures included) and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
Load everything for model 1 (textures included) and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
etc...
I am guessing you can make this more efficient partly if the biggest things to load are given dedicated slots, e.g. if the texture for model 0 is really complicated, don't reload it on each step; just load it into slot 1 and leave it there. Of course, since I'm not sure how many registers of each type DX11 guarantees, this is complicated (can anyone point to documentation on that?).
Strategy 2
Choose some texture slots for loading and others for perpetual storage of your most complex textures.
Once only
Load most complicated models, shaders and textures into slots dedicated for perpetual storage
On every single render
Load everything not already present for model 0 using slots you set aside for loading and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
Load everything not already present for model 1 using slots you set aside for loading and render it (IASetVertexBuffers, VSSetShader, PSSetShader, PSSetShaderResources, PSSetConstantBuffers, VSSetConstantBuffers, Draw)
etc...
Strategy 3
I have no idea, but the above are probably all wrong because I am really new at this.
What are the standard strategies for rendering in DirectX (specifically DX11) to make it as efficient as possible?
DirectX manages the resources for you and tries to keep them in video memory as long as it can to optimize performance, but can only do so up to the limit of video memory in the card. There is also overhead in every state change even if the resource is still in video memory.
A general strategy for optimizing this is to minimize the number of state changes during the rendering pass. Commonly this means drawing all polygons that use the same texture in a batch, and all objects using the same vertex buffers in a batch. So generally you would try to draw as many primitives as you can before changing state to draw more.
This often will make the rendering code a little more complicated and harder to maintain, so you will want to do some profiling to determine how much optimization you are willing to do.
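As a rough sketch of that idea, the draw list can be sorted by the state each draw needs, so that each piece of state is bound only when it actually changes. The structures and bind functions below are hypothetical placeholders standing in for the real ID3D11DeviceContext calls (VSSetShader, PSSetShaderResources, IASetVertexBuffers, DrawIndexed, and so on):
#include <algorithm>
#include <vector>

// Hypothetical per-draw record: just the pieces of state this sketch sorts on.
struct DrawItem {
    int shaderId;         // stands in for a vertex/pixel shader pair
    int textureId;        // stands in for a shader resource view
    int vertexBufferId;
    int indexCount;
};

// Placeholder bind/draw helpers; in real code these would call the device context.
void bindShader(int) {}
void bindTexture(int) {}
void bindVertexBuffer(int) {}
void drawIndexed(int) {}

// Sort by the most expensive state first, then only rebind state that changed
// between consecutive draws.
void renderSorted(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(), [](const DrawItem& a, const DrawItem& b) {
        if (a.shaderId != b.shaderId) return a.shaderId < b.shaderId;
        if (a.textureId != b.textureId) return a.textureId < b.textureId;
        return a.vertexBufferId < b.vertexBufferId;
    });
    int shader = -1, texture = -1, vb = -1;
    for (const DrawItem& item : items) {
        if (item.shaderId != shader)   { shader = item.shaderId;       bindShader(shader); }
        if (item.textureId != texture) { texture = item.textureId;     bindTexture(texture); }
        if (item.vertexBufferId != vb) { vb = item.vertexBufferId;     bindVertexBuffer(vb); }
        drawIndexed(item.indexCount);
    }
}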
Generally you will get bigger performance increases through more general algorithm changes beyond the scope of this question, for example reducing polygon counts for distant objects or using occlusion queries. A popular, and true, saying is "the fastest polygons are the ones you don't draw". Here are a couple of quick links:
http://msdn.microsoft.com/en-us/library/bb147263%28v=vs.85%29.aspx
http://www.gamasutra.com/view/feature/3243/optimizing_direct3d_applications_.php
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter06.html
The other answers address the question itself better, but by far the most relevant thing I found since asking was this discussion on gamedev.net, in which some big-title games are profiled for state changes and draw calls.
What comes out of it is that big-name games don't appear to worry too much about this; writing code to address this sort of issue takes significant time, and that time probably isn't worth the delay in getting your application finished.

When would I use a collision solid consisting of the intersection of two spheres and two half-spaces?

In Panda3D, I've been learning a bit about the built-in physics engine and its collision detection features.
I'm trying to understand the DSSolid collision object, which is mentioned in a table on the Collision Solids manual page without explanation. It is tersely described in the API reference as "A collision volume or object made up of the intersection of two spheres (potentially a lens) and two half-spaces (planes)."
I basically understand that geometric description, but what is the purpose of such a shape??
Interestingly, this DSSolid is the one collision solid, other than a sphere, that can be either a "from" or an "into" solid.
This suggests to me that the shape is considered either more commonly needed than other shapes (such as a plane or a tube or an inverse sphere), or cheaper to test. Neither of those reasons rings true to me: a DSSolid would be more expensive than an inverse sphere to test for collisions against and, it seems to me, less useful. So I'm wondering, what is the use case for a DSSolid?
I'm curious too how the planes are typically arranged in relation to the two spheres... but that would probably become clear given the use case for this solid.
(And what does DS stand for? Double sphere?)
This question has been answered on the Panda3D forums:
Actually, I think this solid doesn't have much general use, and should probably be removed from the codebase. It was implemented once as part of an experiment by one of the Disney engineers whose initials happened to be D.S., and it was never developed further. The student who wrote up the collision page in the manual came across this solid and wrote what he knew about it, which wasn't much.

Modelling an I-Section in a 3D Graphics Library

I am using Direct3D to display a number of I-sections used in steel construction. There could be hundreds of instances of these I-sections all over my scene.
I could do this two ways:
Using method A, I have fewer surfaces. However, with backface culling turned on, the surfaces will be visible from only one side. If backface culling is turned off, then the flanges (horizontal plates) and web (vertical plate) may be rendered in the wrong order.
Method B seems correct (and I could keep backface culling turned on), but in my model the thickness of plates in the I-section is of no importance and I would like to avoid having to create a separate triangle strip for each side of the plates.
Is there a better solution? Is there a way to switch off backface culling for only certain calls of DrawIndexedPrimitives? I would also like a platform-neutral answer to this, if there is one.
First off, backface culling doesn't have anything to do with the order in which objects are rendered. Other than that, I'd go for approach B, for no particular reason other than that it will probably look better. Also, this object probably isn't more than a handful of triangles; having hundreds in a scene shouldn't be an issue. If it is, try looking into hardware instancing.
In OpenGL you can change the backface-culling state between the draw calls you issue:
glEnable(GL_CULL_FACE);    // turn face culling on for the following draws
glCullFace(GL_FRONT);      // cull front faces
// or
glCullFace(GL_BACK);       // cull back faces (the default)
// or
glDisable(GL_CULL_FACE);   // switch culling off entirely for the following draws
I think something similar is also possible in Direct3D.
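It is; in Direct3D 9 (the API implied by DrawIndexedPrimitives in the question) the cull mode is a render state that can be changed between draw calls. A minimal sketch, assuming device is a valid IDirect3DDevice9*:
// Draw the single-sided plates with culling disabled so they are visible from both sides.
device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
// ... issue the DrawIndexedPrimitive call(s) for the I-section here ...

// Restore the default cull mode for the rest of the scene.
device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);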
If your I-sections don't change that often, load all the sections into one big vertex/index buffer and draw them with a single call. That's the most performant way to draw things, and the graphic card will do a fast job even if you push half a million triangle to it.
Yes, this requires that you duplicate the vertex data for all sections, but that's how D3D9 is intended to be used.
I would go with A: at the distance you would be seeing these from, drawing all those degenerate triangles in B would be a waste of processing power.
Also I would simply fire them at a z-buffer and allow that to sort it all out.
If it gets too slow then I would start looking at optimizing, but even consumer graphics cards can draw millions of polygons per second.
