VS2012 Shader Designer broken algorithm?

I was having no end of trouble with this function, which the Shader Designer in VS2012 emits. If you look at the code below, you can see the difference between the VS version and my version.
In the VS version, the texture is not taken into account, so the "shady" sides of any object lit only by ambient light are just grey. In my version, I multiply in pixelColor (which comes from the texture), and then it works great. Since they take pixelColor into account for the diffuse lighting, I can't figure out why they wouldn't for the ambient.
Since I'm very new to 3D, I don't want to assume I'm so clever that the VS team never tested this. And since it's so fundamental, I'm wondering if I'm just missing something. Thoughts?
float3 LambertLighting(
    float3 lightNormal,
    float3 surfaceNormal,
    float3 materialAmbient,
    float3 lightAmbient,
    float3 lightColor,
    float3 pixelColor
    )
{
    // compute amount of contribution per light
    float diffuseAmount = saturate(dot(lightNormal, surfaceNormal));
    float3 diffuse = diffuseAmount * lightColor * pixelColor;

    // combine ambient with diffuse
    // VS Version:
    return saturate((materialAmbient * lightAmbient) + diffuse);
    // MY Version:
    return saturate((materialAmbient * lightAmbient * pixelColor) + diffuse);
}

This line already takes the color from the texture into account:
float3 diffuse = diffuseAmount * lightColor * pixelColor;
which is later combined with the ambient term to get the correct result. In your version you are factoring the texture color in twice (once in the diffuse term and once in the ambient term).
The VS version matches the various online examples of Lambertian lighting I have seen, so it is correct.
What might be going wrong is that in your test setup the diffuse color is black or grey, which would make the pixel color look grey too.
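That said, if you do want the texture to tint the ambient term, one option is to fold pixelColor into the ambient material input at the call site instead of editing the generated function. A minimal sketch - the texture, sampler, and variable names here are assumed, not the Shader Designer's generated ones:
// Hypothetical call site: tint the ambient input by the texture sample,
// leaving LambertLighting itself unchanged.
float3 pixelColor = Texture1.Sample(TexSampler, pin.uv).rgb; // assumed texture/sampler names
float3 lit = LambertLighting(
    lightNormal,
    surfaceNormal,
    MaterialAmbient * pixelColor, // texture-tinted ambient
    AmbientLight,
    lightColor,
    pixelColor);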

Related

Blended lines do not look as expected

I draw my scene with the following fragment shader, which applies a fog effect:
precision mediump float;

uniform int EnableFog;
uniform float FogMinDist;
uniform float FogMaxDist;

varying lowp vec4 DestinationColor;
varying float EyeToVertexDist;

float computeFogFactor()
{
    float fogFactor = 1.0;
    if (EnableFog != 0)
    {
        // Use a slightly lower value than FogMaxDist to get a better fog
        // effect - it makes the far end disappear quicker.
        float fogMaxDistABitCloser = FogMaxDist * 0.98;
        fogFactor = (fogMaxDistABitCloser - EyeToVertexDist) / (fogMaxDistABitCloser - FogMinDist);
        fogFactor = clamp(fogFactor, 0.0, 1.0);
    }
    return fogFactor;
}

void main(void)
{
    float fogFactor = computeFogFactor();
    gl_FragColor = DestinationColor * fogFactor;
}
And I enable alpha blending:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
The result is the following scene:
My problem is with the places where the lines overlap - the resulting color there is darker than the color of either line:
How can I fix it?
As already described in the comment, you are blending each newly drawn line with the background, which may already contain colour from another object at certain pixels - in your case, where lines overlap. To solve this you will either have to draw your lines without overlap or make your drawing independent of the current buffer state.
In your specific case you could pass the background colour to your fragment shader via a uniform, or even a texture, and then do the blending manually in the fragment shader.
More generally, you can draw the grid into a framebuffer object (FBO) with an attached texture and then draw the whole texture in a single draw call using your fog shader and blending. The drawing into the FBO should be done with blending disabled.
There are other approaches, such as drawing the grid into a stencil buffer first and then drawing a full-screen rect that applies the colour with your shader and blending.
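A minimal sketch of the FBO approach, assuming a framebuffer gridFbo with colour texture gridTexture already created, and hypothetical drawGridLines()/drawTexturedFullscreenQuad() helpers:
// Pass 1: render the grid into the FBO with blending disabled, so
// overlapping lines simply overwrite each other instead of blending.
glBindFramebuffer(GL_FRAMEBUFFER, gridFbo);
glDisable(GL_BLEND);
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);
drawGridLines();

// Pass 2: composite the whole grid texture in one blended draw call.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawTexturedFullscreenQuad(gridTexture);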

Implementing Normal Mapping HLSL

So, OK. I'm trying to implement normal mapping in my small game engine and I just cannot get it to work.
When I do the lighting with only per-vertex normals, everything is fine, but if I try to do it with a normal map then everything falls apart.
I know I have the right UVs because the texture looks good, but I don't know what I'm doing wrong with the normal map texture.
This is some of the code in my pixel shader (HLSL):
float3 NormalSample(float3 normalMapSample, float3 unitNormalW, float3 tangentW)
{
    // Uncompress each component from [0,1] to [-1,1].
    float3 normalT = 2.0f * normalMapSample - 1.0f;

    // Build orthonormal basis.
    float3 N = unitNormalW;
    float3 T = normalize(tangentW - dot(tangentW, N) * N);
    float3 B = cross(N, T);
    float3x3 TBN = float3x3(T, B, N);

    // Transform from tangent space to world space.
    float3 bumpedNormalW = mul(normalT, TBN);
    return bumpedNormalW;
}
Any ideas on what I might be doing wrong?
OK, I fixed it. I was just doing the normal calculations in the wrong coordinate space. To make it easier I decided to do everything in model space, and it looks great. Too bad I'm not allowed to post screenshots.
Thanks again, Miloszmaki....
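For anyone hitting the same wall: the usual symptom is mixing coordinate spaces, e.g. dotting the bumped normal against a light vector that lives in a different space. Any single space works as long as everything agrees. A minimal world-space sketch - the gNormalMap, gSampler, pin.* and gLightPosW names are assumed, not from the engine above:
// Sketch only: every vector below is in world space.
float3 normalSample = gNormalMap.Sample(gSampler, pin.Tex).rgb;
float3 N = NormalSample(normalSample, normalize(pin.NormalW), pin.TangentW); // world-space normal
float3 L = normalize(gLightPosW - pin.PosW);                                 // world-space light vector
float diffuseAmount = saturate(dot(N, L));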

OpenGL color/alpha output slightly dimmed

I'm seeing slightly dimmed color/alpha output from OpenGL on Linux. Instead of a red component value of 1.0 I'm seeing ~0.96988. For example, I have a fully red rectangle (red component = 1.0, alpha = 1.0, green and blue are zero). This dimming happens whether or not my vertex/fragment shaders are enabled.
Lighting is disabled, so no ambient or other light should be included in the color calculation.
glBegin(GL_POLYGON);
glColor4f(1.0, 0.0, 0.0, 1.0);
glVertex2f(0.0, 0.0);
glVertex2f(1.0, 0.0);
glVertex2f(1.0, 1.0);
glVertex2f(0.0, 1.0);
glEnd();
I take a screenshot of the resulting window, load the image into a paint program, and examine a pixel. I see a red component integer value of 247 instead of the 255 I would expect. When I run this with the vertex shader enabled, I see that the gl_Color.r component is already < 1.0, and so is gl_Color.a.
All OpenGL states are at their default values. What am I missing?
Edit due to question:
I determined that the value of the red component was ~0.96988 by a crude, iterative process: inspecting it in the vertex shader and altering the blue component to signal when the red component was above a threshold value. I kept reducing the constant threshold until I no longer saw purple. This did the trick:
if (gl_Color.r > 0.96988)
{
    gl_Color.b = 1.0; // show purple instead of the slightly dimmed red
}
Edit:
// VERTEX SHADER
varying vec2 texture_coordinate;

void main()
{
    gl_Position = ftransform();
    texture_coordinate = vec2(gl_MultiTexCoord0);
    gl_FrontColor = gl_Color;
}

// FRAGMENT SHADER
varying vec2 texture_coordinate;
uniform sampler2D Texture0;

void main(void)
{
    gl_FragColor = texture2D(Texture0, texture_coordinate) * gl_Color;
}
Texture0 in this instance is a fully saturated red rectangle: Red = 1.0, Alpha = 1.0. Without the texture, using only the vertex color, I get the same result: slightly diminished red and alpha components.
One more thing: the red and alpha channels are "dimmed" by the same amount, so something is dimming the entire color. And as I stated in the main question, this occurs whether I use shaders or the fixed-function pipeline.
Just for fun I performed a similar test on Windows using DirectX, which produced a rectangle with a red component of 254; still slightly dimmed, but only just.
I'm answering my own question because I resolved the issue and I was the cause. It turns out that I was incorrectly calculating the color channels, including alpha, for the vertices in my models when converting from binary to floating point. A silly error that introduced this slight dimming.
For instance:
currentColor = m_pVertices[i].clr; // color format ARGB
float a = (1.0 / 256) * (m_pVertices[i].clr >> 24);
float r = (1.0 / 256) * ((m_pVertices[i].clr >> 16) % 256);
float g = (1.0 / 256) * ((m_pVertices[i].clr >> 8) % 256);
float b = (1.0 / 256) * (m_pVertices[i].clr % 256);
glColor4f(r, g, b, a);
I should be dividing by 255. Doh!
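For completeness, the corrected conversion (same ARGB layout, dividing by 255 so that a byte value of 255 maps to exactly 1.0):
float a = (1.0 / 255) * (m_pVertices[i].clr >> 24);
float r = (1.0 / 255) * ((m_pVertices[i].clr >> 16) % 256);
float g = (1.0 / 255) * ((m_pVertices[i].clr >> 8) % 256);
float b = (1.0 / 255) * (m_pVertices[i].clr % 256);
glColor4f(r, g, b, a); // 255, not 256: a max byte now yields exactly 1.0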
It seems the only dimming was in my brain, not in OpenGL.

A Question on OpenGL ES 2.0 and Alpha / Stencil Tests

I have a quad covering the area between -0.5, 0.5 and 0.5, -0.5 on a cleared viewport with a stencil and alpha buffer. In the fragment shader I apply a texture which happens to contain a shape - in this case a circle - outside of which it is fully transparent.
I am trying to figure out how I can essentially "cut" that opaque textured shape out of the next draw of the quad: I draw the first quad, then draw it again offset to some degree (say between -0.3, 0.5 and 0.8, -0.5), and of the second quad's texture only the part that does not overlap the first shape should be drawn.
It is easy enough to do this with the stencil buffer in a way that applies to the quad but is blind to the texture; however, I would like it to apply to the texture.
So, as an example, what I want actually rendered of the conceptual circle texture would in that case be a crescent. I am not sure which tests I should be using for this.
I think you want to stick with the stencil buffer, but note that the alpha test isn't available in ES 2.0, per the philosophy that anything that can be done in a shader isn't supplied as fixed functionality.
Instead, you can implement one of your own choosing inside the fragment shader, thanks to the discard keyword. Suppose you had the most trivial textured fragment shader:
varying mediump vec2 texCoordVarying;
uniform sampler2D tex2D;

void main()
{
    gl_FragColor = texture2D(tex2D, texCoordVarying);
}
You could throw in an alpha test so that pixels with an alpha of 0.1 or less don't proceed down the pipeline, and hence don't affect the stencil buffer:
varying mediump vec2 texCoordVarying;
uniform sampler2D tex2D;

void main()
{
    vec4 colour = texture2D(tex2D, texCoordVarying);
    if (colour.a > 0.1)
        gl_FragColor = colour;
    else
        discard;
}
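From there, a two-pass stencil setup along these lines should produce the crescent. drawQuad() is a hypothetical helper, and the reference values are just one workable choice:
// Pass 1: draw the first quad. Fragments that survive the discard test
// (i.e. the opaque circle) write 1 into the stencil buffer.
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawQuad(firstQuad);

// Pass 2: draw the offset quad only where the stencil is still 0,
// i.e. outside the first circle - leaving a crescent.
glStencilFunc(GL_EQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawQuad(secondQuad);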

GLSL - Front vs. Back faces of polygons

I made a simple checkerboard shader in GLSL:
f(P) = [ floor(Px) + floor(Py) + floor(Pz) ] mod 2
It seems to work well except that I see the interior of the objects, but I want to see only the front faces.
Any ideas how to fix this? Thanks!
Teapot (glutSolidTeapot()):
Cube (glutSolidCube()):
The vertex shader file is:
varying float x, y, z;

void main()
{
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    x = gl_Position.x;
    y = gl_Position.y;
    z = gl_Position.z;
}
And the fragment shader file is:
varying float x, y, z;

void main()
{
    float sum = floor(x) + floor(y) + floor(z);
    sum = mod(sum, 2.0);
    gl_FragColor = vec4(sum, sum, sum, 1.0);
}
The shaders are not the problem - the face culling is.
You should either disable face culling (not recommended, since it's bad for performance):
glDisable(GL_CULL_FACE);
or use glCullFace and glFrontFace to set the culling mode, i.e.:
glEnable(GL_CULL_FACE); // enables face culling
glCullFace(GL_BACK); // tells OpenGL to cull back faces (the sane default setting)
glFrontFace(GL_CW); // tells OpenGL which faces are considered 'front' (use GL_CW or GL_CCW)
The argument to glFrontFace depends on application conventions, i.e. the matrix handedness.
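A shader-side alternative, in case you ever want back faces shaded differently rather than skipped entirely: the built-in gl_FrontFacing input lets the fragment shader branch per face. This is a sketch only; for simply hiding the interior, culling remains the cheaper fix:
varying float x, y, z;

void main()
{
    // Discard interior (back-facing) fragments instead of culling them.
    if (!gl_FrontFacing)
        discard;
    float sum = mod(floor(x) + floor(y) + floor(z), 2.0);
    gl_FragColor = vec4(sum, sum, sum, 1.0);
}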
