Implementing shadows with deferred lighting

I am currently programming a deferred rendering system for my XNA 4 project, mostly following the Cansin tutorial. However, he claims that creating a directional light that casts dynamic shadows is impossible. That is not true: many games (like S.T.A.L.K.E.R.) use dynamic directional lights with deferred shading to create realistic sun shadows. Do you have any idea how I could implement such a system? It is crucial for me, as most of the action in my game will happen outdoors, and I do not want to use a spotlight that follows the player as a workaround.
Best regards

Check out Cansin's article on Deferred Rendering. It contains spot and point lights with exponential shadows, as well as SSAO with normals. A great tutorial.

The shadow volume technique yields very realistic shadows and can be calculated in real time. The Wikipedia article should give you a good starting point:
http://en.wikipedia.org/wiki/Shadow_volume
DevMaster.net also has a very detailed article on the topic.

Yes, you can use directional light shadow maps with a deferred renderer. I can't imagine what reason that tutorial gives for why it would be possible to use shadow maps for some types of lights and not for others.
There are certainly differences between how you might implement shadow maps for directional lights vs. spotlights, but once you have that part figured out, it shouldn't be any more difficult to adapt one to your deferred renderer than the other.
If you're interested in actual shadow map implementations I would post that as a separate question.
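Still, for concreteness, here is a minimal sketch (HLSL, XNA-style, with made-up names) of what the directional-light pass of a deferred renderer might look like once the world position has been reconstructed from the G-buffer depth. The main difference from a spotlight is that the shadow map is rendered with an orthographic projection fitted around the view frustum. This is not the tutorial's code, just one way to do it:

// Minimal sketch: directional-light shadow lookup in a deferred light pass.
// Assumes ShadowMap was rendered from the light's point of view with an
// orthographic projection, and worldPosition was reconstructed from the G-buffer.

float4x4 LightViewProjection;   // light's orthographic view * projection, fitted around the view frustum
float3   LightDirection;        // normalized, pointing from the light toward the scene
float    ShadowBias = 0.001f;

texture ShadowMap;
sampler ShadowSampler = sampler_state { Texture = <ShadowMap>; MinFilter = Point; MagFilter = Point; };

float ComputeShadow(float3 worldPosition)
{
    // Project the G-buffer surface point into the light's clip space.
    float4 lightClip = mul(float4(worldPosition, 1.0f), LightViewProjection);

    // With an orthographic light w is 1, but dividing keeps this valid if a
    // perspective (spot) light is ever substituted.
    float2 shadowUV   = lightClip.xy / lightClip.w * float2(0.5f, -0.5f) + 0.5f;
    float  lightDepth = lightClip.z / lightClip.w;

    // Compare the depth stored in the shadow map with this pixel's depth in light space.
    float storedDepth = tex2D(ShadowSampler, shadowUV).r;
    return (lightDepth - ShadowBias > storedDepth) ? 0.0f : 1.0f;
}

float3 ShadeDirectional(float3 worldPosition, float3 normal, float3 albedo)
{
    float nDotL = saturate(dot(normal, -LightDirection));
    return albedo * nDotL * ComputeShadow(worldPosition);
}

Cascaded shadow maps extend this idea by splitting the view frustum into several depth ranges, each with its own orthographic shadow map, which is the usual way outdoor games keep sun shadows sharp near the camera.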

Related

Why do we still use fixed-function blending operations in D3D11, etc.?

I was looking around trying to understand why we are still using fixed-function blending modes in newer 3D APIs (like D3D11). In D3D10, fixed-function alpha clipping was removed in favor of doing it in shaders. Why? Because it's a much more powerful approach for almost any situation.
So why, then, can we not calculate our own blending operations (i.e., sample the texture from the render target we are currently rendering into)? Is there some hardware design issue in the video card pipelines that makes this difficult to accomplish?
The reason this would be useful is that you could make things like refraction shaders run much faster, since you wouldn't have to swap back and forth between two render targets for each refractive object overlay, such as a refractive windowing system for an OS or game UI.
Since this is not a discussion forum, where would be the best place to suggest an idea like this? I would love to see this in D3D12. Or is this already possible in D3D11?
So why then can we not calculate our own blending operations
Who says you can't? With shader_image_load_store (and the D3D11 equivalent), you can do pretty much anything you want with images, provided that you follow the rules. That last part is generally what trips people up. Doing a full read/modify/write in a shader, such that later fragment shader invocations don't read the wrong value, is almost impossible in the general case. You have to restrict it by saying that each rendered object will not overlap with itself, and you have to insert a memory barrier between rendered objects (which can overlap with other rendered objects). Or you use the linked-list approach.
But the point is this: with these mechanisms, not only have people implemented blending in shaders, but they've implemented order-independent transparency (via linked lists). Nothing is stopping you from doing what you want right now.
Well, nothing except performance of course. The fixed-function blender will always be faster because it can run in parallel with the fragment shader operations. The blending units are separate hardware from the fragment shaders, so you can be doing blending operations while simultaneously doing fragment shader ops (obviously from later fragments, not the ones being blended).
The read/modify/write mechanism in the blend hardware is designed specifically for blending, while the image_load_store is a more generic mechanism. And while generic may beat specific in the long-term of hardware evolution, for the immediate and near-future, you can expect fixed-function blending to beat image_load_store blending performance-wise every time.
You should use it only when you must. And even then, decide if you really, really need it.
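As an illustration of the trade-off described above, here is a hedged sketch (HLSL, Shader Model 5.0) of doing the read/modify/write yourself through a D3D11 unordered access view, the D3D11 counterpart to shader_image_load_store. A structured buffer stands in for the color target because typed UAV loads are restricted on base D3D11 hardware; all names are illustrative, and note that nothing here synchronizes overlapping fragments within a draw:

// Sketch: manual "blending" through a UAV instead of the fixed-function blender.
// The buffer is bound on the CPU side with OMSetRenderTargetsAndUnorderedAccessViews.
RWStructuredBuffer<float4> ColorBuffer : register(u1);

uint   ScreenWidth;
float4 SourceColor;   // constant here purely for illustration; normally computed per pixel

void BlendInShaderPS(float4 position : SV_Position)
{
    uint2 pixel = uint2(position.xy);
    uint  index = pixel.y * ScreenWidth + pixel.x;

    // Read the current destination color...
    float4 dst = ColorBuffer[index];

    // ...apply a custom blend (classic source-alpha "over", as an example)...
    float4 src = SourceColor;
    float4 result = src * src.a + dst * (1.0f - src.a);

    // ...and write it back. This read/modify/write is what the fixed-function
    // blender does, but there it runs in dedicated hardware, in parallel with
    // the fragment shaders, and with ordering guaranteed.
    ColorBuffer[index] = result;
}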
Is there some hardware design issue in the video card pipelines that makes this difficult to accomplish?
Yes, this is actually the case. If one could do blending in the fragment shader, this would introduce possible feedback loops, and this really complicates things. Blending is done in a separate hardwired stage for performance and parallelization reasons.

Tessellation transition

I want to implement tessellation transition from fine to coarse geometry and vice versa for terrain rendering which doesn't introduce discontinuities (cracks).
Real-time performance is not required, i.e. it can be view-independent.
What do you think about the following proposal:
Proposal illustration: http://www.shrani.si/f/A/qD/2UJlczki/tessellation.png
Is it even possible?
Have you implemented something similar?
What are the drawbacks?
Do you have any simpler suggestions?
Yes, this has been done many times. See for instance Hierarchical 4-K Meshes. There are probably references that are specific to terrain modeling and rendering but I don't have one handy.

How does the "Unlimited Detail" graphics technology work?

So I stumbled upon this "new" graphics engine/technology called Unlimited Detail.
This seems to be pretty interesting granted it's real and not a fake.
They have some videos explaining the technology but they only scratch the surface.
What do you think about it? Is it programmatically possible?
Or is it just a scam for investors?
Update:
Since the only answer was based on voxels, I have to copy this from their site:
Unlimited Detail's method is very different to any 3D method that has been invented so far. The three current systems used in 3D graphics are ray tracing, polygons, and point cloud/voxels; they all have strengths and weaknesses. Polygons run fast but have poor geometry; ray tracing and voxels have perfect geometry but run very slowly.
Unlimited Detail is a fourth system, which is more like a search algorithm than a 3D engine.
The underlying technology is related to something called sparse voxel octrees (see, e.g., this paper), which aren't anything incredibly amazing. What the video doesn't tell you is that these are not at all suited for things that need to be animated, so they're of limited use for anything that uses procedural animation (e.g., all ragdoll physics, etc.). So they're very inflexible. You can get great detail, but you get it in a completely static world.
A rough summary of where things stand with this technology in mainstream games is here. You will also want to check out Samuli Laine's work; he's a Finnish researcher who is focusing a great deal of his attention on this subject and is unlocking some of the secrets to implementing it well.
Update: Yes, the website says it's not "voxel-based". I suspect this is merely an issue of semantics, however, in that what they're using are essentially voxels, but because it's not exactly a voxel they feel safe in being able to claim that it's not voxel-based. In any case, the magic isn't in how similar to a voxel it is -- it's how they select which voxels to actually show. This is the primary determinant of speed.
Right now, there is no incredibly fast way to show voxels (or something approximating a voxel). So either they have developed a completely new, non-peer-reviewed method for filtering voxels (or something like them), or they're lying.
You might find more detail in the following patents:
"A Computer Graphics Method For Rendering Three Dimensional Scenes"
"A Method For Efficent Streaming Of Octree Data For Access"
- Each voxel (they call it a "node") is represented as a single bit, along with information voxels at a finer level of detail.
The full-text can be viewed online here:
https://www.lens.org/lens/search?q=Euclideon+Pty+Ltd&l=en
or
http://worldwide.espacenet.com/searchResults?submitted=true&query=EUCLIDEON
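Purely to illustrate the kind of structure the patent excerpt is describing, here is a hedged sketch (HLSL, Shader Model 5.0) of a generic sparse-voxel-octree node layout and point query: one occupancy bit per child plus an index to the children at the finer level. This is a textbook SVO layout, not Euclideon's actual format, and every name in it is made up for the example:

// Sketch: generic sparse voxel octree, one occupancy bit per child octant.
struct OctreeNode
{
    uint childMask;    // bit i set => child octant i exists at the finer level
    uint firstChild;   // index of this node's first child in the node buffer
};

StructuredBuffer<OctreeNode> Nodes;   // node 0 is the root
uint MaxDepth;

// Returns true if the point p (in the unit cube) lands in an occupied leaf voxel.
bool IsOccupied(float3 p)
{
    uint nodeIndex = 0;
    for (uint depth = 0; depth < MaxDepth; ++depth)
    {
        // Pick the octant containing p, then remap p into that octant.
        uint3 side;
        side.x = (p.x >= 0.5f) ? 1u : 0u;
        side.y = (p.y >= 0.5f) ? 1u : 0u;
        side.z = (p.z >= 0.5f) ? 1u : 0u;
        uint octant = side.x | (side.y << 1) | (side.z << 2);
        p = p * 2.0f - (float3)side;

        OctreeNode node = Nodes[nodeIndex];
        if ((node.childMask & (1u << octant)) == 0)
            return false;   // empty space: most of a sparse tree is simply absent

        // Children are stored contiguously, so counting the set bits below this
        // octant gives the child's offset within the node's child block.
        uint offset = countbits(node.childMask & ((1u << octant) - 1u));
        nodeIndex = node.firstChild + offset;
    }
    return true;
}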

Recommended 3D Programming Aspects for Light/Laser Show Simulator?

Hey guys, I would like to develop a light/laser show editor and simulator, and for this of course I am going to learn some graphics programming. I am thinking about using C# and XNA.
I was just wondering what aspects of graphics programming I should research or focus on given the project I am working on. I am new to graphics programming so I don't know much about it, but for example I imagine something that I might look into would (possibly?) be volumetric lighting.
For example, what would be a practical way to go about rendering a 'laser' of varied width/color? I read somewhere to just draw a cylinder and apply a shader to it; I would like to confirm that this is the right approach.
Given that this seems like a big project, I was thinking about starting off by creating light sources and giving them properties so that I can easily manipulate them. I have (mis)read that only a certain number of lights can be rendered at any given time, I believe eight. Does this only apply to ambient lights? Given this possible limitation, and the fact that most of the lights I will use will be directional, such as head-lights or lasers, what would be a different way to render these? Is that what volumetric lighting would be?
I'd just like to get some things clear before I dive into it. Since I'm new to this I probably didn't make the best use of words, so if something doesn't make sense please let me know. Thanks and sorry for my ignorance.
The answer to this depends on the level of sophistication that you need in your display simulation. Computer graphics is ultimately a simulation of the transport of light; that simulation can be as sophisticated as calculating the fraction of laser light deflected by particles in the atmosphere to the viewer's eyepoint, or as simple as drawing a line. Try out the cylinder effect and see if it works for your project. If you need something more sophisticated, look into shader programming (using Nvidia Cg, for example) and volumetric shading as you mentioned; post-processing glow effects may also be useful. For OpenGL, I believe there is a limit of eight fixed-function light sources in a scene, but you could work around this limit by doing your own shading logic.
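To make that last point concrete, here is a minimal sketch (HLSL, ps_3_0 or later for the dynamic loop) of "doing your own shading logic": accumulating an arbitrary number of point lights in a pixel shader instead of relying on the fixed-function light slots. The array size and parameter names are illustrative:

#define MAX_LIGHTS 32

float3 LightPositions[MAX_LIGHTS];
float3 LightColors[MAX_LIGHTS];
int    ActiveLightCount;

float3 AccumulateLights(float3 worldPosition, float3 normal)
{
    float3 total = 0;
    for (int i = 0; i < ActiveLightCount; ++i)
    {
        float3 toLight = LightPositions[i] - worldPosition;
        float  falloff = 1.0f / (1.0f + dot(toLight, toLight));      // simple distance falloff
        float  nDotL   = saturate(dot(normal, normalize(toLight)));  // Lambert diffuse term
        total += LightColors[i] * nDotL * falloff;
    }
    return total;
}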
Well, if it's just for light show simulations, I'd imagine you're going to need a lot of custom lighting effects - so regardless of whether you decide to use XNA or straight DirectX, your best bet would be to start by learning shader languages and how to program various lighting effects using them. Once you can reproduce the type of laser lighting you want, then you can experiment with the polygons you want to use to represent the lasers. (I've used the cylinder method in some of my work for personal purposes, but I'm not sure how well straight cylinders will fit your purpose.)
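If you do try the cylinder approach, a hedged sketch of an XNA-style HLSL effect is below: it fades the beam toward its silhouette so a plain cylinder mesh reads as a glowing laser when drawn with additive blending (e.g. BlendState.Additive set on the device). The falloff is just one simple choice, and all parameter names are illustrative:

// Sketch: shade a thin cylinder as a laser beam, brightest at its core.
float4x4 World;
float4x4 ViewProjection;
float3   CameraPosition;
float3   LaserColor    = float3(1.0f, 0.1f, 0.1f);
float    EdgeSoftness  = 2.0f;   // higher = faster fade toward the silhouette

struct VSInput  { float4 Position : POSITION0; float3 Normal : NORMAL0; };
struct VSOutput { float4 Position : POSITION0; float3 Normal : TEXCOORD0; float3 ViewDir : TEXCOORD1; };

VSOutput LaserVS(VSInput input)
{
    VSOutput output;
    float4 worldPos = mul(input.Position, World);
    output.Position = mul(worldPos, ViewProjection);
    output.Normal   = mul(input.Normal, (float3x3)World);
    output.ViewDir  = CameraPosition - worldPos.xyz;
    return output;
}

float4 LaserPS(VSOutput input) : COLOR0
{
    // The beam is brightest where we look "through" the most of it (the centre
    // of the projected cylinder) and fades toward the silhouette edges.
    float facing    = saturate(dot(normalize(input.Normal), normalize(input.ViewDir)));
    float intensity = pow(facing, EdgeSoftness);
    return float4(LaserColor * intensity, intensity);
}

technique Laser
{
    pass P0
    {
        VertexShader = compile vs_2_0 LaserVS();
        PixelShader  = compile ps_2_0 LaserPS();
    }
}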
Although it's faster, I think it's best not to use vanilla hardware lighting because of its limitations. Pixel shaders can help with your task. Also, you may want to choose OpenGL because of its portability and its clarity in rendering methods. I worked on Direct3D for several years before switching to OpenGL. OpenGL functions and states are easier to learn, and rendering methods (like multi-pass rendering) are a lot clearer. If you like to code in C# (which I don't recommend for these tasks), you can use the CsGL library to access OpenGL functions.

Antialiasing alternatives

I've seen antialiasing on Windows using GDI+, Java and also that provided by Photoshop and Gimp. Are there any other libraries out there which provide antialiasing facility without depending on support from the host OS?
Antigrain Geometry provides anti-aliased graphics in software.
As Simon pointed out, the term anti-aliasing is misused/abused quite regularly, so it's always helpful to know exactly what you're trying to do.
Since you mention GDI, I'll assume you're talking about maintaining nice crisp edges when you resize images - so something like a character in a font looks clean and not pixelated when you resize it to 2x or 3x its original size. For these sorts of things I've used a technique in the past called alpha-tested magnification - you can read the whitepaper here:
http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
When I implemented it, I used more than one plane so I could get better edges on all types of objects, but the paper covers that briefly towards the end. Of all the approaches (that I've used) to maintain quality when scaling vector images, this was the easiest and highest quality. It also has the advantage of being easy to implement in hardware. From an existing API standpoint, your best bet is to use either OpenGL or Direct3D - that being said, it really only requires bilinear filtering and texture mapping to accomplish what it does, so you could roll your own (I have in the past). If you are always dealing with rectangles and only need to do scaling, it's pretty trivial, and adding rotation doesn't add much complexity. If you do roll your own, make sure to pay particular attention to subpixel positioning (how you resolve pixel positions that do not fall on a full pixel), as this is critical to the quality and sometimes overlooked.
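For reference, the core of the technique is only a few lines of pixel shader. Here is a hedged HLSL sketch, assuming the texture's alpha channel stores a signed distance field as described in the Valve paper (0.5 being the shape edge); the names are illustrative:

// Sketch: alpha-tested magnification from a distance-field texture.
texture DistanceFieldTexture;
sampler DistanceFieldSampler = sampler_state
{
    Texture   = <DistanceFieldTexture>;
    MinFilter = Linear; MagFilter = Linear;   // bilinear filtering does the magnification work
};

float4 ShapeColor = float4(1, 1, 1, 1);
float  EdgeWidth  = 0.03f;   // softness of the reconstructed edge, in distance-field units

float4 AlphaTestedMagnificationPS(float2 uv : TEXCOORD0) : COLOR0
{
    float dist = tex2D(DistanceFieldSampler, uv).a;

    // Hard alpha test gives crisp edges at any magnification:
    //   clip(dist - 0.5f); return ShapeColor;
    // Or soften the edge slightly for antialiasing:
    float alpha = smoothstep(0.5f - EdgeWidth, 0.5f + EdgeWidth, dist);
    return float4(ShapeColor.rgb, ShapeColor.a * alpha);
}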
Hope that helps!
There are many anti-aliasing approaches (often misnamed, btw, but that's a dead horse) that can be used. Depending on what you know about the original signal and what the intended use is, different approaches are most likely to give you the desired result.
"Support from the host OS" is probably most sensible if the output is through the OS display facilities, since they have the most information about what is being done to the image.
I suppose that's a long way of asking: what are you actually trying to do? Many graphics libraries will provide some form of antialiasing; whether or not it's appropriate depends a lot on what you're trying to achieve.
