The intersection shader reads zeroes out of an SSBO - Rust

My intersection shader always reads just zeroes from the SSBO I have.
I tried using debugPrintfEXT, but it wouldn't print from within the intersection shader, and when I try to output any value via the ray payload as the hit colour, all I get is black, all the time. Initially, I thought there was a problem with how I copied the data, but it is done just as in the working NVIDIA example code. The validation layer throws no warnings or errors at all. I then checked in NVIDIA Nsight Graphics what the buffer contains, and it has the data I copied to it, exactly with the layout I wanted. I also checked the SPIR-V instructions, and all of them correctly refer to the offsets of the buffer! I am completely lost now: even Nsight sees the correct values, but the shader does not!
So, here is the shader code:
// Hitting a triangle.
#version 460 core
#extension GL_EXT_ray_tracing : enable

layout(location = 0) rayPayloadInEXT vec4 payload;
hitAttributeEXT vec2 attributes;

layout(push_constant) uniform rayParams {
    vec3 rayOrigin;
    vec3 rayDir;
    mat4 transformMatrix;
    float maxDistance;
};

struct PhongMaterial {
    float ambient;
    float diffuse;
    float specular;
    float shininess;
    float reflective;
    float transparency;
    float refraction;
    vec4 color;
    float reserved_1;
    vec4 reserved_2;
};

layout(set = 0, binding = 2) readonly buffer Materials {
    PhongMaterial m[];
} materials;

void main() {
    PhongMaterial material = materials.m[0];
    payload = material.color;
}
Here is the layout of the structure in Rust:
#[repr(C)]
pub struct PhongMaterial {
    pub ambient: f32,
    pub diffuse: f32,
    pub specular: f32,
    pub shininess: f32,
    pub reflective: f32,
    pub transparency: f32,
    pub refraction: f32,
    pub color: ColorRGBA,
    reserved: [u8; 20],
}
The total byte size of the struct is 44 bytes. Aligning to a multiple of 16 results in a 48-byte copy when moving the data from the temporary CPU buffer to the GPU buffer.
Here is a screenshot from NVIDIA Nsight, which proves the shader must have the data:
In the Nsight screenshot, the values are exactly the ones I have in Rust.
And here is the shader's SPIR-V:
OpName %58 "PhongMaterial"
OpMemberDecorate %58 0 Offset 0
OpMemberDecorate %58 1 Offset 4
OpMemberDecorate %58 2 Offset 8
OpMemberDecorate %58 3 Offset 12
OpMemberDecorate %58 4 Offset 16
OpMemberDecorate %58 5 Offset 20
OpMemberDecorate %58 6 Offset 24
OpMemberDecorate %58 7 Offset 32
OpMemberDecorate %58 8 Offset 48
OpMemberDecorate %58 9 Offset 64
I don't get what is wrong. I tried to play with padding, but nothing related to memory alignment helped. Even if it were an alignment issue, I think the shader should still have read at least the first values, as those are laid out properly anyway.
Update: it seems that my closest-hit shader doesn't work either when a read from the buffer is performed!
void main() {
    PhongMaterial material = materials.m[0];
    if (material.ambient <= 0.1) {
        payload = vec4(1.0, 0.0, 0.0, 1.0);
    } else if (material.ambient <= 0.5) {
        payload = vec4(0.0, 1.0, 0.0, 1.0);
    } else {
        payload = vec4(0.0, 0.0, 1.0, 1.0);
    }
}
Here, none of the branches is entered! The payload is not set at all, so the colour I see is pitch black. However, if I don't try to read values out of the materials buffer, or if I just put an assignment to the payload after these if-elses, the colour I assign is visible:
if (material.ambient <= 0.1) {
    payload = vec4(1.0, 0.0, 0.0, 1.0);
} else if (material.ambient <= 0.5) {
    payload = vec4(0.0, 1.0, 0.0, 1.0);
} else {
    payload = vec4(0.0, 0.0, 1.0, 1.0);
}
payload = vec4(1.0, 1.0, 0.0, 1.0);
Besides that, if I instead set that colour first, before the branching, the colour is black again, as if the payload wasn't set!
payload = vec4(1.0, 1.0, 0.0, 1.0);
if (material.ambient <= 0.1) {
    payload = vec4(1.0, 0.0, 0.0, 1.0);
} else if (material.ambient <= 0.5) {
    payload = vec4(0.0, 1.0, 0.0, 1.0);
} else {
    payload = vec4(0.0, 0.0, 1.0, 1.0);
}
If I leave just a single if and compare against any number whatsoever, it always evaluates to true. Am I seeing some UB?
Another screenshot from Nsight, showing that the buffer is correctly laid out and correctly bound to the hit shader as well:
My current guess is that the shader binding table isn't properly set up for the hit shader. But then I don't understand why the shader is invoked at all.
To whoever decided to vote for closing:
The reproduction steps should be clear to everyone who knows Vulkan, Rust, and GLSL: use a GPU buffer of the desired type. Nsight proves that this part is done correctly, so that portion of an MCVE is unnecessary. The only thing not working correctly here is the shader, and its code serves as the MCVE. The desired behaviour is obvious: the data should be read as it is in the buffer, instead of zeroes all the time. If you have any questions, ask; don't vote for closing unless things are clear to you. Even if you are sure, notify the author instead of silently voting. People can provide more information when asked, but I have already provided all the necessary information.

Then I checked the SPIR-V instructions, and all of them correctly refer to the offsets of the buffer!
No, they don't. Look at the SPIR-V, but this time with the corresponding members annotated:
OpName %58 "PhongMaterial"
OpMemberDecorate %58 0 Offset 0 ; float ambient;
OpMemberDecorate %58 1 Offset 4 ; float diffuse;
OpMemberDecorate %58 2 Offset 8 ; float specular;
OpMemberDecorate %58 3 Offset 12 ; float shininess;
OpMemberDecorate %58 4 Offset 16 ; float reflective;
OpMemberDecorate %58 5 Offset 20 ; float transparency;
OpMemberDecorate %58 6 Offset 24 ; float refraction;
OpMemberDecorate %58 7 Offset 32 ; vec4 color;
OpMemberDecorate %58 8 Offset 48 ; float reserved_1;
OpMemberDecorate %58 9 Offset 64 ; vec4 reserved_2;
Indeed, if you just look at member 9, you can see that this is an 80-byte structure (offset 64 plus 16 bytes for the vec4), not 48 bytes as your Rust code suggests.
Things get off-track at color, which sits at offset 32, not at 28 as it does in Rust. This is because the std430 layout (the default for buffer blocks) requires that a vec4 always be aligned to 16 bytes. The same goes for reserved_2 relative to reserved_1.
I highly suspect that you meant to do this:
float ambient;
float diffuse;
float specular;
float shininess;
float reflective;
float transparency;
float refraction;
float reserved_1; // Padding before the `vec4`
vec4 color;
// vec4 reserved_2; no need for this at all.
Along with the Rust equivalent. This would actually be 48 bytes in size.
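For reference, a matching Rust definition might look like this (a sketch; it assumes ColorRGBA is a #[repr(C)] wrapper around four f32 values, 16 bytes in total):
#[repr(C)]
pub struct PhongMaterial {
    pub ambient: f32,
    pub diffuse: f32,
    pub specular: f32,
    pub shininess: f32,
    pub reflective: f32,
    pub transparency: f32,
    pub refraction: f32,
    reserved_1: f32,      // explicit padding so `color` lands at offset 32
    pub color: ColorRGBA, // 16 bytes, 16-byte aligned, matching std430
}

// Cheap insurance against layout drift:
// assert_eq!(std::mem::size_of::<PhongMaterial>(), 48);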

I solved the problem. There were actually two problems. One was the alignment issue: the colour vector was indeed expected at an offset of 32 bytes from the beginning of the chunk, as pointed out by @Nicol Bolas. However, this was the minor problem. A much larger problem was with the shader binding table. As you can see in the last screenshot in my question, there is no hit shader assigned in the shader binding table. This was hard to debug, because the data is copied out exactly as it should be in my code; the problem was not in how the data is copied to the SBT buffer, but rather in how the shader groups are created. The book "Ray Tracing Gems II" says that the order of the shaders for which we create shader groups doesn't matter. Well, on my machine, that is wrong. First of all, as NVIDIA Nsight showed, I had no hit shader attached. Second, my miss shader was at index "1" in group "2", which was certainly odd compared to other applications, where the order of indices and groups is always the same:
1. Ray generation shader group.
2. Miss shader group.
3. Hit shader group.
Having read in the book that the order didn't matter, I had created my shader groups in a different order: the second was the hit shader group instead of the miss shader group, as it is everywhere else.
As a Vulkan ray-tracing learner, I should point out that this quote from the Ray Tracing Gems II (page 251) book might be misinterpreted:
...First, there is no ordering requirement with respect to shader types in the SBT; ray generation, hit, and miss groups can come in any order.
I interpreted it as meaning that we also don't need to care about the order in which we create the shader groups. It seems, however, that the shader records for the shader groups are always expected in the order of ray generation, miss, and then hit groups, and that the order does matter when we create the groups themselves for the ray tracing pipeline.
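For illustration, here is a minimal sketch of creating the groups in that canonical order, assuming the ash crate (0.37-style builders); the stage indices are assumptions that must match the order of the pipeline's shader stage array:
use ash::vk;

// Indices into the pipeline's shader stage array (assumed order).
const RAYGEN_STAGE: u32 = 0;
const MISS_STAGE: u32 = 1;
const CLOSEST_HIT_STAGE: u32 = 2;

let groups = [
    // Group 0: ray generation.
    vk::RayTracingShaderGroupCreateInfoKHR::builder()
        .ty(vk::RayTracingShaderGroupTypeKHR::GENERAL)
        .general_shader(RAYGEN_STAGE)
        .closest_hit_shader(vk::SHADER_UNUSED_KHR)
        .any_hit_shader(vk::SHADER_UNUSED_KHR)
        .intersection_shader(vk::SHADER_UNUSED_KHR)
        .build(),
    // Group 1: miss.
    vk::RayTracingShaderGroupCreateInfoKHR::builder()
        .ty(vk::RayTracingShaderGroupTypeKHR::GENERAL)
        .general_shader(MISS_STAGE)
        .closest_hit_shader(vk::SHADER_UNUSED_KHR)
        .any_hit_shader(vk::SHADER_UNUSED_KHR)
        .intersection_shader(vk::SHADER_UNUSED_KHR)
        .build(),
    // Group 2: hit group (closest-hit only; for procedural geometry with an
    // intersection shader, use PROCEDURAL_HIT_GROUP and set intersection_shader).
    vk::RayTracingShaderGroupCreateInfoKHR::builder()
        .ty(vk::RayTracingShaderGroupTypeKHR::TRIANGLES_HIT_GROUP)
        .general_shader(vk::SHADER_UNUSED_KHR)
        .closest_hit_shader(CLOSEST_HIT_STAGE)
        .any_hit_shader(vk::SHADER_UNUSED_KHR)
        .intersection_shader(vk::SHADER_UNUSED_KHR)
        .build(),
];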
One piece of advice for people seeing the same odd shader behaviour while using the ray tracing pipeline and shader binding tables: if your shader behaves oddly even though it is invoked, or misbehaves when you try to access any data other than the data passed between the shader stages, it is highly likely that the problem is with the shader binding table. The same book (Ray Tracing Gems II) points out that creating and filling the SBT is quite often done wrong and is a frequent cause of problems.

Related

Tween the texture on a TextureButton / TextureRect. Fade out Image1 while simultaneously fading in Image2

Character portrait selection. Clicking next loads the next image in an array; clicking back loads the previous one. Instead of a sharp change from one image to another, I want a variable-speed fade-out of the current image and fade-in of the new image. Dissolve/render effects would be nice, but even an opacity tween 100->0 / 0->100 over x seconds would do.
I would really prefer not to use multiple objects stacked on top of each other, alternating between them as the "current texture".
Is this possible?
We can do fade-in and fade-out by animating modulate, which is the simple solution.
For dissolve we can use shaders, and there is a lot we can do with them. There are plenty of dissolve shaders you can find online; I'll explain some useful variations, favoring ones that are easy to tinker with.
Fade-in and Fade-out
We can do this with a Tween object and either the modulate or self_modulate property.
I would go ahead and create a Tween in code:
var tween: Tween

func _ready():
    tween = Tween.new()
    add_child(tween)
Then we can use interpolate_property to manipulate modulate:
var duration_seconds = 2
tween.interpolate_property(self, "modulate",
Color.white, Color.transparent, duration_seconds)
Don't forget to call start:
tween.start()
We can take advantage of yield, to add code that will execute when the tween is completed:
yield(tween, "tween_completed")
Then we change the texture:
self.texture = target_texture
And then interpolate modulate in the opposite direction:
tween.interpolate_property(self, "modulate",
Color.transparent, Color.white, duration_seconds)
tween.start()
Note that I'm using self but you could be manipulating another node. Also target_texture is whatever texture you want to transition into, loaded beforehand.
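Putting the pieces together, a crossfade might look like this (a minimal sketch for Godot 3.x; the crossfade_to function name is my own, and it assumes the tween member created in _ready above):
func crossfade_to(target_texture):
    var duration_seconds = 2
    # Fade out the current texture.
    tween.interpolate_property(self, "modulate",
        Color.white, Color.transparent, duration_seconds)
    tween.start()
    yield(tween, "tween_completed")
    # Swap the texture while it is fully transparent, then fade back in.
    self.texture = target_texture
    tween.interpolate_property(self, "modulate",
        Color.transparent, Color.white, duration_seconds)
    tween.start()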
Dissolve Texture
For any effect that requires both textures to be partially visible, use a custom shader. Go ahead and add a ShaderMaterial to your TextureRect (or similar), and give it a new Shader file.
This will be our starting point:
shader_type canvas_item;

void fragment()
{
    COLOR = texture(TEXTURE, UV);
}
That is a shader that simply shows the texture. Your TextureRect should look the same as it does without this shader material. Let us add the second texture with a uniform:
shader_type canvas_item;

uniform sampler2D target_texture;

void fragment()
{
    COLOR = texture(TEXTURE, UV);
}
You should see a new entry under Shader Param in the Inspector panel for the new texture.
We also need another parameter to interpolate. It will be 0 to display the original Texture, and 1 for the alternative texture. In Godot we can add a hint for the range:
shader_type canvas_item;

uniform sampler2D target_texture;
uniform float weight: hint_range(0, 1);

void fragment()
{
    COLOR = texture(TEXTURE, UV);
}
Under Shader Param in the Inspector panel, you should now see the new float, with a slider that goes from 0 to 1.
It does nothing, of course. We still need the code to mix the textures:
shader_type canvas_item;

uniform sampler2D target_texture;
uniform float weight: hint_range(0, 1);

void fragment()
{
    vec4 color_a = texture(TEXTURE, UV);
    vec4 color_b = texture(target_texture, UV);
    COLOR = mix(color_a, color_b, weight);
}
That will do. However, I'll do a little refactoring for ease of modification later in this answer:
shader_type canvas_item;

uniform sampler2D target_texture;
uniform float weight: hint_range(0, 1);

float adjust_weight(float input, vec2 uv)
{
    return input;
}

void fragment()
{
    vec4 color_a = texture(TEXTURE, UV);
    vec4 color_b = texture(target_texture, UV);
    float adjusted_weight = adjust_weight(weight, UV);
    COLOR = mix(color_a, color_b, adjusted_weight);
}
And now we manipulate it, again with a Tween. I'll assume you have a Tween created the same way as before, and that you already have your target_texture loaded.
We will start by setting the weight to 0, and target_texture:
self.material.set("shader_param/weight", 0)
self.material.set("shader_param/target_texture", target_texture)
We can tween weight:
var duration_seconds = 4
tween.interpolate_property(self.material, "shader_param/weight",
0, 1, duration_seconds)
tween.start()
yield(tween, "tween_completed")
And then change the texture:
self.texture = target_texture
Making Dissolve Fancy
We can get fancy with our dissolve effect. For example, we can add another texture to control how fast different parts transition from one texture to the other:
uniform sampler2D transition_texture;
Set it to a new NoiseTexture (and don't forget to set the Noise property of the NoiseTexture). I'll be using the red channel of the texture.
A simple solution looks like this:
float adjust_weight(float input, vec2 uv)
{
    float transition = texture(transition_texture, uv).r;
    return min(1.0, input * (transition + 1.0));
}
Where the interpolation is always linear, and the transition controls the slope.
We can also do something like this:
float adjust_weight(float input, vec2 uv)
{
    float transition = texture(transition_texture, uv).r;
    float input_2 = input * input;
    return input_2 + (input - input_2) * transition;
}
This ensures that an input of 0 returns 0, and an input of 1 returns 1, while transition controls the curve in between.
If you plot x * x + (x - x * x) * y in the range from 0 to 1 on both axes, you will see that when y (transition) is 1, you have a line, but when y is 0 you have a parabola.
Alternatively, we can change adjusted_weight to a step function:
float adjust_weight(float input, vec2 uv)
{
    float transition = texture(transition_texture, uv).r;
    return smoothstep(transition, transition, input);
}
Using smoothstep instead of step avoids artifacts near 0.
This will not interpolate between the textures; instead, each pixel will flip from one texture to the other at a different instant. If your noise texture is continuous, you will see the dissolve advance through the gradient.
Ah, but it does not have to be a noise texture! Any gradient will do. You can create a texture defining how you want the dissolve to happen (example, under MIT license).
You probably can come up with other versions for that function.
Making Dissolve Edgy
We could also add an edge color. We need, of course, to add a color parameter:
uniform vec4 edge_color: hint_color;
And we will add that color at an offset from where we transition. We need to define that offset:
uniform float edge_weight_offset: hint_range(0, 1);
Now you can add this code:
float adjusted_weight = adjust_weight(
    max(0.0, weight - edge_weight_offset * (1.0 - step(1.0, weight))), UV);
float edge_weight = adjust_weight(weight, UV);
color_a = mix(color_a, edge_color, edge_weight);
Here the factor (1.0 - step(1.0, weight)) makes sure that when weight is 0, we pass 0, and when weight is 1, we pass 1. Sadly, we also need to make sure the difference does not result in a negative value. There must be another way to do it… How about this:
float weight_2 = weight * weight;
float adjusted_weight = adjust_weight(weight_2, UV);
float edge_weight = adjust_weight(weight_2 + (weight - weight_2) * edge_weight_offset, UV);
color_a = mix(color_a, edge_color, edge_weight);
OK, feel free to inline adjust_weight, whichever version you are using (the smoothstep version makes hard edges; the other blends a color into the transition).
Dissolve Alpha
It is not hard to modify the above shader to dissolve to alpha instead of dissolving to another texture. First, remove target_texture and color_b, which we no longer need. Then, instead of mix, we can do this:
COLOR = vec4(color_a.rgb, 1.0 - adjusted_weight);
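For clarity, the whole fragment shader after those changes might look like this (a sketch based on the refactored version above; swap in whichever adjust_weight variant you prefer):
shader_type canvas_item;

uniform float weight: hint_range(0, 1);

float adjust_weight(float input, vec2 uv)
{
    return input;
}

void fragment()
{
    vec4 color_a = texture(TEXTURE, UV);
    float adjusted_weight = adjust_weight(weight, UV);
    COLOR = vec4(color_a.rgb, 1.0 - adjusted_weight);
}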
And to use it, do the same as before to transition out:
self.material.set("shader_param/weight", 0)
var duration_seconds = 2
tween.interpolate_property(self.material, "shader_param/weight",
0, 1, duration_seconds)
tween.start()
yield(tween, "tween_completed")
Which will result in making it transparent. So you can change the texture:
self.texture = target_texture
And transition in (with the new texture):
tween.interpolate_property(self.material, "shader_param/weight",
1, 0, duration_seconds)
tween.start()

GLSL won't accept my implicit cast

I'm learning OpenGL 3.3 using some tutorials (http://opengl-tutorial.org). In the tutorial I'm following, there is a vertex shader which does the following:
Tutorial Shader source
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
void main(){
    // Output position of the vertex, in clip space : MVP * position
    gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
}
Yet, when I try to emulate the same behavior in my application, I get the following:
error: implicit cast from "vec4" to "vec3".
After seeing this, I wasn't sure if it was because I was using version 4.2 shaders as opposed to 3.3, so I changed everything to match what the author had been using, but I still received the same error afterward.
So, I changed my shader to do this:
My (latest) Source
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
void main()
{
    vec4 a = vec4(vertexPosition_modelspace, 1);
    gl_Position.xyz = MVP * a;
}
Which, of course, still produces the same error.
Does anyone know why this is the case, as well as what a solution might be to this? I'm not sure if it could be my calling code (which I've posted, just in case).
Calling Code
static const GLfloat T_VERTEX_BUF_DATA[] =
{
// x, y, z
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f
};
static const GLushort T_ELEMENT_BUF_DATA[] =
{ 0, 1, 2 };
void TriangleDemo::Run(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    GLuint matrixID = glGetUniformLocation(mProgramID, "MVP");
    glUseProgram(mProgramID);
    glUniformMatrix4fv(matrixID, 1, GL_FALSE, &mMVP[0][0]); // This sends our transformation to the MVP uniform matrix, in the currently bound vertex shader
    const GLuint vertexShaderID = 0;
    glEnableVertexAttribArray(vertexShaderID);
    glBindBuffer(GL_ARRAY_BUFFER, mVertexBuffer);
    glVertexAttribPointer(
        vertexShaderID, // Specify the ID of the shader to point to (in this case, the shader is built in to GL, which will just produce a white triangle)
        3,              // Number of components per vertex in the vertex buffer
        GL_FLOAT,       // Type of value the vertex buffer is holding as data
        GL_FALSE,       // Normalized?
        0,              // Amount of stride
        (void*)0);      // Offset within the array buffer
    glDrawArrays(GL_TRIANGLES, 0, 3); // 0 => start index of the buffer, 3 => number of vertices
    glDisableVertexAttribArray(vertexShaderID);
}
void TriangleDemo::Initialize(void)
{
    glGenVertexArrays(1, &mVertexArrayID);
    glBindVertexArray(mVertexArrayID);
    glGenBuffers(1, &mVertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, mVertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(T_VERTEX_BUF_DATA), T_VERTEX_BUF_DATA, GL_STATIC_DRAW);
    mProgramID = LoadShaders("v_Triangle", "f_Triangle");
    glm::mat4 projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f); // field of view, aspect ratio (4:3), 0.1 units near, to 100 units far
    glm::mat4 view = glm::lookAt(
        glm::vec3(4, 3, 3), // Camera is at (4, 3, 3) in world space
        glm::vec3(0, 0, 0), // and looks at the origin
        glm::vec3(0, 1, 0)  // this is the up vector - the head of the camera is facing upwards. We'd use (0, -1, 0) to look upside down
    );
    glm::mat4 model = glm::mat4(1.0f); // set model matrix to identity matrix, meaning the model will be at the origin
    mMVP = projection * view * model;
}
Notes
I'm in Visual Studio 2012
I'm using Shader Maker for the GLSL editing
I can't say what's wrong with the tutorial code.
In "My latest source" though, there's
gl_Position.xyz = MVP * a;
which looks weird because you're assigning a vec4 to a vec3.
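The fix is to assign the full vec4 (a sketch of the corrected main; see also the edit below):
void main()
{
    vec4 a = vec4(vertexPosition_modelspace, 1.0);
    gl_Position = MVP * a; // write all four clip-space components
}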
EDIT
I can't reproduce your problem.
I have used a trivial fragment shader for testing...
#version 330 core
void main()
{
}
Testing "Tutorial Shader source":
3.3.11762 Core Profile Context
Log: Vertex shader was successfully compiled to run on hardware.
Log: Fragment shader was successfully compiled to run on hardware.
Log: Vertex shader(s) linked, fragment shader(s) linked.
Testing "My latest source":
3.3.11762 Core Profile Context
Log: Vertex shader was successfully compiled to run on hardware.
WARNING: 0:11: warning(#402) Implicit truncation of vector from size 4 to size 3.
Log: Fragment shader was successfully compiled to run on hardware.
Log: Vertex shader(s) linked, fragment shader(s) linked.
And the warning goes away after replacing gl_Position.xyz with gl_Position.
What's your setup? Do you have a correct version of the OpenGL context? Is glGetError() silent?
Finally, are your GPU drivers up-to-date?
I've had problems with some GPUs (ATi ones, I believe) not liking integer literals when it expects a float. Try changing
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
To
gl_Position = MVP * vec4(vertexPosition_modelspace, 1.0);
I just came across this error message on an ATI Radeon HD 7900 with latest drivers installed while compiling some sample code associated with the book "3D Engine Design for Virtual Globes" (http://www.virtualglobebook.com).
Here is the original fragment shader line:
fragmentColor = mix(vec3(0.0, intensity, 0.0), vec3(intensity, 0.0, 0.0), (distanceToContour < dF));
The solution is to cast the offending Boolean expression into float, as in:
fragmentColor = mix(vec3(0.0, intensity, 0.0), vec3(intensity, 0.0, 0.0), float(distanceToContour < dF));
The manual for mix (http://www.opengl.org/sdk/docs/manglsl) states:
For the variants of mix where a is genBType, elements for which a[i] is false, the result for that element is taken from x, and where a[i] is true, it will be taken from y.
So, since a Boolean blend value should be accepted by the compiler without complaint, I think this should go down as an AMD/ATI driver issue.

OpenGL color/alpha output slightly dimmed

I'm seeing slightly dimmed color/alpha output from OpenGL on Linux. Instead of a red component value of 1.0, I'm seeing ~0.96988. For example, I have a fully red rectangle (red component = 1.0, alpha = 1.0, green and blue are zero). This dimming happens whether I enable my vertex/fragment shaders or not.
Lighting is disabled so no ambient or other light should be included in the color calculation.
glBegin(GL_POLYGON);
glColor4f(1.0, 0.0, 0.0, 1.0);
glVertex2f(0.0, 0.0);
glVertex2f(1.0, 0.0);
glVertex2f(1.0, 1.0);
glVertex2f(0.0, 1.0);
glEnd();
I take a screenshot of the resulting window, load the image into a paint program, and examine a particular pixel. I see a red component integer value of 247 instead of the 255 I would expect. When I run this with the vertex shader enabled, I see that the gl_Color.r component is already < 1.0, and so is gl_Color.a.
All OpenGL states are at the default values. What am I missing?
Edit due to question:
I determined that the value of the red component was ~0.96988 by a crude, iterative process: inspecting it in the vertex shader and altering the blue component to signal when the red component was above a threshold value. I kept reducing the constant threshold value until I no longer saw purple. This did the trick:
if(gl_Color.r > 0.96988)
{
    gl_Color.b = 1.0; // show purple instead of the slightly dimmed red
}
Edit:
// VERTEX SHADER
varying vec2 texture_coordinate;

void main()
{
    gl_Position = ftransform();
    texture_coordinate = vec2(gl_MultiTexCoord0);
    gl_FrontColor = gl_Color;
}

// FRAGMENT SHADER
varying vec2 texture_coordinate;
uniform sampler2D Texture0;

void main(void)
{
    gl_FragColor = texture2D(Texture0, texture_coordinate) * gl_Color;
}
Texture0 in this instance is a fully saturated red rectangle: Red = 1.0, Alpha = 1.0. Without the texture, using the vertex color, I get the same results: slightly diminished Red and Alpha components.
One more thing: the Red and Alpha channels are "dimmed" by the same amount, so something is dimming the entire color. And as I stated in the main question, this occurs whether I use shaders or the fixed-function pipeline.
Just for fun, I performed a similar test on Windows using DirectX, and this resulted in a rectangle with a red component of 254; still slightly dimmed, but just barely.
I'm answering my own question because I resolved the issue, and I was the cause. It turns out that I was incorrectly calculating the color channels, including alpha, for the vertices in my models when converting from binary to floating point. A silly error introduced this slight dimming.
For instance:
currentColor = m_pVertices[i].clr; // color format ARGB
float a = (1.0 / 256) * (m_pVertices[i].clr >> 24);
float r = (1.0 / 256) * ((m_pVertices[i].clr >> 16) % 256);
float g = (1.0 / 256) * ((m_pVertices[i].clr >> 8) % 256);
float b = (1.0 / 256) * (m_pVertices[i].clr % 256);
glColor4f(r, g, b, a);
I should have been dividing by 255. Doh!
It seems the only dimming was in my brain, and not in OpenGL.
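For reference, the corrected conversion (the same ARGB unpacking, but dividing by 255 so that a byte value of 255 maps exactly to 1.0):
float a = (1.0 / 255) * (m_pVertices[i].clr >> 24);
float r = (1.0 / 255) * ((m_pVertices[i].clr >> 16) % 256);
float g = (1.0 / 255) * ((m_pVertices[i].clr >> 8) % 256);
float b = (1.0 / 255) * (m_pVertices[i].clr % 256);
glColor4f(r, g, b, a);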

GLSL - Front vs. Back faces of polygons

I made some simple shading of a checkerboard in GLSL:
f(P) = (floor(P.x) + floor(P.y) + floor(P.z)) mod 2
It seems to work well, except that I can see the interior of the objects, and I want to see only the front faces.
Any ideas how to fix this? Thanks!
Teapot (glutSolidTeapot()):
Cube (glutSolidCube()):
The vertex shader file is:
varying float x, y, z;

void main(){
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    x = gl_Position.x;
    y = gl_Position.y;
    z = gl_Position.z;
}
And the fragment shader file is:
varying float x, y, z;

void main(){
    float _x = x;
    float _y = y;
    float _z = z;
    _x = floor(_x);
    _y = floor(_y);
    _z = floor(_z);
    float sum = (_x + _y + _z);
    sum = mod(sum, 2.0);
    gl_FragColor = vec4(sum, sum, sum, 1.0);
}
The shaders are not the problem - the face culling is.
You should either disable face culling (not recommended, since it hurts performance):
glDisable(GL_CULL_FACE);
or use glCullFace and glFrontFace to set the culling mode, i.e.:
glEnable(GL_CULL_FACE); // enables face culling
glCullFace(GL_BACK); // tells OpenGL to cull back faces (the sane default setting)
glFrontFace(GL_CW); // tells OpenGL which faces are considered 'front' (use GL_CW or GL_CCW)
The argument to glFrontFace depends on application conventions, i.e. the matrix handedness.

Depth buffer only show blue color

I'm trying to implement Light Pre-Pass rendering in RenderMonkey. So far, in the Normal+Depth pass, it seems like the Normal buffer is getting correct results, but the Depth buffer shows only one color. How can I check whether my Depth buffer is correct or not?
Workspace download link: http://www.mediafire.com/?jq3jmantyxw
The light blue is actually the RGB value 0.0, 1.0, 1.0. Since depth is (usually) a single channel representing Z, when sampled from a texture it is returned in the first channel, red. The missing channels green, blue, and alpha will have 1.0 substituted by the hardware.
Your download link is non-functional; it's been 2 years, so I suspect it has expired.
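If you want to inspect the raw depth values, you can sample the depth target in a debug pass and splat the red channel across RGB (a sketch in RenderMonkey-style HLSL; the DepthSampler name is an assumption):
sampler2D DepthSampler; // bound to the depth render target

float4 ps_debug( float2 uv : TEXCOORD0 ) : COLOR0
{
    float d = tex2D(DepthSampler, uv).r; // depth lives in the red channel
    return float4(d, d, d, 1.0);         // grayscale instead of light blue
}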
You should ensure your pixel shader returns both the COLOR0 and COLOR1 semantics (note that depth is a float4, despite the output being a single-channel texture):
struct PS_OUT
{
    float4 color : COLOR0;
    float4 depth : COLOR1;
};

PS_OUT ps_main( PS_INPUT Input )
{
    PS_OUT Output;
    // your color shader here
    Output.color = myFinalColor;
    Output.depth = myFinalDepth; // e.g. Input.posz / Input.posw from your vertex shader
    return Output;
}
Depending on your camera settings, you could get something like:
