What's the difference between MPO (multi-plane overlay) compositing and DWM compositing? - direct3d

Can someone explain what the point of MPO is and what difference it makes when it is enabled?

Related

Nvidia High Performance Processor setting leads to graphical bug (seizure warning) with current lighting system, drawn completely in the shader code

I followed the Lighting tutorial on LearnOpenGL, modifying some of the code to work in a 2D game engine. Everything was looking great, my team got our game done, and the lights were quite simple for our designers to use. However, we ran into a rare bug, shown here: https://www.youtube.com/watch?v=to0mMP5I0cs. One team member was able to reproduce the bug by switching his Nvidia settings to use the "High Performance Processor" as opposed to "Integrated Graphics"; otherwise everything renders properly. The bug doesn't appear when there are no lights and everything is rendered in its full color. We have gone through a lot of ideas already, but none have worked, and now I am at a loss. Does anyone have any ideas about what is going on?
Always make sure you initialize your variables. Apparently some cards and drivers automatically initialize a vec3 to (0,0,0), but others don't. That was what was going on here: garbage values causing different colors at each fragment. Initializing my resulting color vec3 to (0,0,0) at the beginning fixed the problem.
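For illustration, a minimal sketch of the fix in a GLSL fragment shader, embedded here as a C++ source string; the uniform names are hypothetical, not from the original project:

    // Hypothetical fragment shader illustrating the fix: the color
    // accumulator is explicitly zero-initialized before lighting terms
    // are added. Some drivers zero uninitialized locals, others leave
    // garbage, which shows up as random colors at each fragment.
    const char* fragmentSrc = R"glsl(
        #version 330 core
        out vec4 FragColor;

        uniform vec3 lightColor;   // hypothetical uniforms
        uniform vec3 objectColor;

        void main()
        {
            vec3 result = vec3(0.0);            // explicit initialization
            result += lightColor * objectColor; // accumulate lighting terms
            FragColor = vec4(result, 1.0);
        }
    )glsl";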

Implementing shadows with deferred lighting

I am currently programming a deferred rendering system for my XNA 4 project, mostly following the Cansin tutorial. However, he claims that creating a directional light that casts dynamic shadows is impossible. That's not true, as many games (like S.T.A.L.K.E.R.) use dynamic directional lights with deferred shading to create realistic sun shadows. Do you have any idea how I could implement such a system? It is crucial for me, as most of the action in my game will happen outdoors, and I do not want to use a spotlight that follows the player as a workaround.
Best regards
Check out Cansin's article on deferred rendering. It covers spot and point lights with exponential shadows, as well as SSAO with normals. A great tutorial.
The shadow volume technique yields very realistic shadows and can be calculated in real time. The Wikipedia article should give you a good starting point:
http://en.wikipedia.org/wiki/Shadow_volume
DevMaster.net also has a very detailed article on the topic.
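For the curious, a minimal sketch of the classic z-pass stencil setup behind that technique in legacy OpenGL; drawShadowVolumes() is a hypothetical helper that renders the extruded volume geometry, and the scene's depth buffer is assumed to be filled already:

    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // no color writes
    glDepthMask(GL_FALSE);                               // no depth writes
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    glEnable(GL_CULL_FACE);

    glCullFace(GL_BACK);                    // render front faces of the volume
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR); // increment where they pass depth
    drawShadowVolumes();

    glCullFace(GL_FRONT);                   // render back faces of the volume
    glStencilOp(GL_KEEP, GL_KEEP, GL_DECR); // decrement where they pass depth
    drawShadowVolumes();

    // Pixels left with a nonzero stencil value are in shadow; the lighting
    // pass can then be masked with glStencilFunc(GL_EQUAL, 0, ~0u).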
Yes you can use directional light shadow maps with a deferred renderer. I can't imagine what reason that tutorial gives for why it would be possible to use shadow maps for some types of lights and not for others.
There are certainly differences between how you might want to implement shadow maps for directional lights vs. spotlights, but once you have that part figured out, it shouldn't be any more difficult to adapt one to your deferred renderer than the other.
If you're interested in actual shadow map implementations I would post that as a separate question.
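To make the difference concrete: the main change for a directional light is the projection used when rendering the shadow map - orthographic instead of perspective, since the light's rays are parallel. A minimal C++/glm sketch, with illustrative extents (in practice you fit the box to the visible scene, or use cascades):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Build the light-space view-projection matrix for a directional
    // light's shadow map. A spotlight would use a perspective projection
    // from the light's position instead.
    glm::mat4 directionalLightViewProj(const glm::vec3& lightDir,
                                       const glm::vec3& sceneCenter,
                                       float sceneRadius)
    {
        // Place a virtual camera behind the scene, looking along the light.
        // (Pick a different up vector if the light points straight down.)
        glm::vec3 lightPos = sceneCenter - glm::normalize(lightDir) * sceneRadius;
        glm::mat4 view = glm::lookAt(lightPos, sceneCenter, glm::vec3(0, 1, 0));
        glm::mat4 proj = glm::ortho(-sceneRadius, sceneRadius,
                                    -sceneRadius, sceneRadius,
                                    0.0f, 2.0f * sceneRadius);
        return proj * view;
    }

In XNA the equivalents are Matrix.CreateLookAt and Matrix.CreateOrthographic; the lighting pass then reconstructs each pixel's world position from the G-buffer depth and compares it against the shadow map as usual.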

Adobe Color Profiles/Color Spaces, how is it possible that this works?

I'm learning about color profiles/color spaces/monitor color management profiles, and I was just wondering how any of that is justified, considering that I could throw the whole thing off simply by adjusting the brightness and contrast of my monitor.
Additionally, there are things like the lighting in the room where your monitor resides. I don't understand what makes it possible to know that the colors you are looking at on your monitor are accurate. I really don't have any idea where to start (considering I obviously can't even get the terminology right in the first place... :-p)
The color profiles send information to the monitor that is as accurate as possible, but, basically, you are right when you say that it is impossible to determine whether it is correct. Designers have to use tools like this (http://www.pantone.com/pages/products/product.aspx?pid=79) if they want truly accurate colors on their screen.
If you're really serious about it, you calibrate your monitor under different light levels and set up your colour management profiles using that information.
Generally, calibration is performed by looking at the monitor with a camera and comparing what colour the monitor is showing against what colour it thinks it's showing.
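As a toy illustration of one piece of that process: if measurement shows the display's actual tone response follows one gamma while the profile expects another, a simple per-channel lookup table can remap values. Real ICC profiling also corrects white point and primaries; this sketch handles tone response only:

    #include <array>
    #include <cmath>
    #include <cstdint>

    // Build a 256-entry correction table so that a display measured at
    // measuredGamma ends up behaving like targetGamma overall.
    std::array<uint8_t, 256> buildGammaLut(double measuredGamma, double targetGamma)
    {
        std::array<uint8_t, 256> lut{};
        for (int i = 0; i < 256; ++i) {
            double v = i / 255.0;
            // Net effect: v^(targetGamma / measuredGamma), which the
            // display's own response then raises back to v^targetGamma.
            double corrected = std::pow(v, targetGamma / measuredGamma);
            lut[i] = static_cast<uint8_t>(corrected * 255.0 + 0.5);
        }
        return lut;
    }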

Recommended 3D Programming Aspects for Light/Laser Show Simulator?

Hey guys, I would like to develop a light/laser show editor and simulator, and for this of course I am going to learn some graphics programming. I am thinking about using C# and XNA.
I was just wondering what aspects of graphics programming I should research or focus on given the project I am working on. I am new to graphics programming so I don't know much about it, but for example I imagine something that I might look into would (possibly?) be volumetric lighting.
For example, what would be a practical way to go about rendering a 'laser' of varied width/color? I read somewhere to just draw a cylinder and apply a shader to it; I would like to confirm that this is the right approach.
Given that this seems like a big project, I was thinking about starting off by creating light sources and giving them properties so that I can easily manipulate them. I have (mis)read that only a certain number of lights can be rendered at any given time, I believe eight. Does this only apply to ambient lights? Given this possible limitation, and the fact that most of the lights I will use will be directional, such as headlights or lasers, what would be a different way to render these? Is that what volumetric lighting would be?
I'd just like to get some things clear before I dive into it. Since I'm new to this I probably didn't make the best use of words, so if something doesn't make sense please let me know. Thanks and sorry for my ignorance.
The answer to this depends on the level of sophistication you need in your display simulation. Computer graphics is ultimately a simulation of the transport of light; that simulation can be as sophisticated as calculating the fraction of laser light deflected by particles in the atmosphere toward the viewer's eyepoint, or as simple as drawing a line. Try out the cylinder effect and see if it works for your project. If you need something more sophisticated, look into shader programming (using Nvidia Cg, for example) and volumetric shading, as you mentioned; post-processing glow effects may also be useful. For OpenGL's fixed-function pipeline, I believe there is a limit of eight light sources in a scene, but you could conceivably work around this limit by doing your own shading logic.
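On that last point, the fixed-function light limit is per-implementation (eight is the spec-guaranteed minimum) and can be queried at runtime; shader-based lighting has no such cap. A minimal sketch, assuming a current legacy/compatibility OpenGL context:

    #include <GL/gl.h>
    #include <cstdio>

    // Query the fixed-function light limit for this implementation.
    void printMaxFixedFunctionLights()
    {
        GLint maxLights = 0;
        glGetIntegerv(GL_MAX_LIGHTS, &maxLights);        // legacy/compat profile only
        std::printf("GL_MAX_LIGHTS = %d\n", maxLights);  // at least 8 per the spec
    }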
Well, if it's just for light show simulations, I'd imagine you're going to need a lot of custom lighting effects - so regardless of whether you decide to use XNA or straight DirectX, your best bet would be to start by learning shader languages and how to program various lighting effects with them. Once you can reproduce the type of laser lighting you want, then you can experiment with the polygons you want to use to represent the lasers. (I've used the cylinder method in some of my work for personal purposes, but I'm not sure how well straight cylinders will fit your purpose.)
Although it's faster, I think it's best not to use vanilla hardware lighting because of its limitations. Pixel shaders can help with your task. Also, you may want to choose OpenGL for its portability and the clarity of its rendering methods. I worked with Direct3D for several years before switching to OpenGL; OpenGL functions and states are easier to learn, and rendering methods (like multi-pass rendering) are a lot clearer. If you like to code in C# (which I don't recommend for these tasks), you can use the CsGL library to access OpenGL functions.

Antialiasing alternatives

I've seen antialiasing on Windows using GDI+, Java and also that provided by Photoshop and Gimp. Are there any other libraries out there which provide antialiasing facility without depending on support from the host OS?
Anti-Grain Geometry provides anti-aliased graphics in software.
As simon pointed out, the term anti-aliasing is misused/abused quite regularly, so it's always helpful to know exactly what you're trying to do.
Since you mention GDI, I'll assume you're talking about maintaining nice crisp edges when you resize images - so that something like a character in a font looks clean and not pixelated when you resize it to 2x or 3x its original size. For these sorts of things, I've used a technique in the past called alpha-tested magnification - you can read the whitepaper here:
http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
When I implemented it, I used more than one plane so I could get better edges on all types of objects, but the paper covers that briefly towards the end. Of all the approaches (that I've used) to maintain quality when scaling vector images, this was the easiest and highest quality. It also has the advantage of being easily implemented in hardware. From an existing API standpoint, your best bet is to use either OpenGL or Direct3D - that being said, it really only requires bilinear filtering and texture mapping to accomplish what it does, so you could roll your own (I have in the past). If you are always dealing with rectangles and only need to do scaling, it's pretty trivial, and adding rotation doesn't add that much complexity. If you do roll your own, make sure to pay particular attention to subpixel positioning (how you resolve pixel positions that do not fall on a full pixel), as this is critical to the quality and sometimes overlooked.
Hope that helps!
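For reference, the core of that technique fits in a few lines of fragment shader, sketched here as a C++ source string to match the other examples; the texture's alpha channel is assumed to hold a signed distance field as described in the paper:

    // Threshold the distance field at 0.5; fwidth() adapts the smoothing
    // band to the on-screen scale, so edges stay crisp when magnified.
    const char* sdfFragmentSrc = R"glsl(
        #version 330 core
        in vec2 uv;
        out vec4 FragColor;

        uniform sampler2D distanceField; // bilinear-filtered SDF texture
        uniform vec4 fillColor;          // hypothetical fill color

        void main()
        {
            float dist = texture(distanceField, uv).a;
            float w = fwidth(dist);
            float alpha = smoothstep(0.5 - w, 0.5 + w, dist);
            FragColor = vec4(fillColor.rgb, fillColor.a * alpha);
        }
    )glsl";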
There are many anti-aliasing approaches (often misnamed, btw, but that's a dead horse) that can be used. Depending on what you know about the original signal and what the intended use is, different approaches are most likely to give you the desired result.
"Support from the host OS" is probably most sensible if the output is through the OS display facilities, since they have the most information about what is being done to the image.
I suppose that's a long way of asking: what are you actually trying to do? Many graphics libraries will provide some form of antialiasing; whether it will be appropriate depends a lot on what you're trying to achieve.
