HLSL projection shader - Direct3D

I am rendering a scene to a texture and then drawing a water plane with that texture using projection. Most of the samples I have seen on the web pass a view/projection matrix to the vertex shader and transform the vertex; then in the pixel shader they do:
projection.x = input.projectionCoords.x / input.projectionCoords.w / 2.0 + 0.5;
projection.y = input.projectionCoords.y / input.projectionCoords.w / 2.0 + 0.5;
I have a shader that does the following, and it works too, but I don't know where I found this particular code or why it gives the same results as the code above. Here is the code I have.
Vertex shader (world is an identity matrix, viewProj is my camera's combined view-projection matrix):
output.position3D = mul(float4(vsin.position,1.0), world).xyz;
output.position = mul(float4(output.position3D,1.0), viewProj);
output.reflectionTexCoord.x = 0.5 * (output.position.w + output.position.x);
output.reflectionTexCoord.y = 0.5 * (output.position.w - output.position.y);
output.reflectionTexCoord.z = output.position.w;
Pixel shader:
float2 projectedTexCoord = (input.reflectionTexCoord.xy / input.reflectionTexCoord.z);
What confuses me is the usage of 0.5 * (output.position.w + output.position.x) and 0.5 * (output.position.w - output.position.y). How does this have the same effect, and what does the w component mean here?

After a while I realized that it ends up being the same thing:
0.5 * (output.position.w + output.position.x)
0.5 * output.position.w + 0.5 * output.position.x
Then in the pixel shader:
(0.5 * output.position.w + 0.5 * output.position.x) / output.position.w
(0.5 * output.position.w) / output.position.w + (0.5 * output.position.x) / output.position.w
The first part becomes 0.5:
0.5 + 0.5 * (output.position.x / output.position.w)
This is equal to:
(output.position.x / output.position.w) / 2 + 0.5
The y component works out the same way, except the subtraction flips the sign: 0.5 * (output.position.w - output.position.y) divided by w gives 0.5 - 0.5 * (output.position.y / output.position.w), which is the flip from NDC (y pointing up) to texture coordinates (v pointing down).
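A quick numeric check (plain C++ with arbitrary made-up values, just to sanity-check the algebra) confirms the two forms agree after the divide:
#include <cassert>
#include <cmath>

int main() {
    float x = 0.3f, y = -0.7f, w = 2.0f; // arbitrary clip-space values
    // x: 0.5 * (w + x) / w  vs  (x / w) / 2 + 0.5
    assert(std::fabs(0.5f * (w + x) / w - ((x / w) / 2.0f + 0.5f)) < 1e-6f);
    // y: 0.5 * (w - y) / w  vs  0.5 - (y / w) / 2  (the v-axis flip)
    assert(std::fabs(0.5f * (w - y) / w - (0.5f - (y / w) / 2.0f)) < 1e-6f);
    return 0;
}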
I believe moving this calculation to the vertex shader is more efficient, so I will just leave it there.
To move the calculation out of the shaders entirely, the remap can be folded into a matrix on the client side. The extra matrix t below maps clip space straight to texture space: [x, y, z, w] * t = [0.5 * (x + w), 0.5 * (w - y), z, w], which is exactly the pair of expressions above:
XMMATRIX v = XMLoadFloat4x4(&view);
XMMATRIX p = XMLoadFloat4x4(&projection);
XMMATRIX t(
0.5f, 0.0f, 0.0f, 0.0f,
0.0f, -0.5f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.5f, 0.5f, 0.0f, 1.0f);
XMMATRIX reflectionTransform = v * p * t;
XMStoreFloat4x4(&_reflectionTransform, XMMatrixTranspose(reflectionTransform));
(The transpose is needed because DirectXMath builds row-major matrices while HLSL constant buffers default to column-major packing.)
Then all the vertex shader has to do is:
output.reflectionTexCoord = mul(float4(output.position3D, 1.0), reflectionProjectionMatrix);
Note that reflectionTexCoord is now a full float4, so the pixel shader divides its xy by the .w component instead of .z.

Related

View-space position of a sample point

I am working on implementing Crytek's original SSAO technique and I have found myself stuck and confused at the part where I need to find the view-space position of the sample. I have implemented a method which I feel should work; however, it gives me an odd result, with blackening occurring at the back. Am I missing something? I would appreciate any insight; thanks in advance.
vec3 depthToPositions(vec2 tc)
{
float depth = texture(depthMap, tc).x;
vec4 clipSpace = vec4(tc * 2.0 - 1.0, depth, 1.0);
vec4 viewSpace = inverse(camera.proj) * clipSpace;
return viewSpace.xyz / viewSpace.w;
}
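(Side note: the same unprojection can be mirrored on the CPU for debugging. A sketch with glm, assuming camera.proj is the same matrix the shader sees:)
#include <glm/glm.hpp>

glm::vec3 depthToPosition(glm::vec2 tc, float depth, const glm::mat4& proj) {
    glm::vec4 clip(tc * 2.0f - 1.0f, depth, 1.0f); // same remap as the shader
    glm::vec4 view = glm::inverse(proj) * clip;
    return glm::vec3(view) / view.w;               // perspective divide
}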
for(int i = 0; i < ssao.sample_amount; ++i) {
// Mittring, 2007 "Finding next gen CryEngine 2" document suggests to reflect sample
vec3 samplePos = reflect(ssao.samples[i].xyz, plane);
samplePos.xy = samplePos.xy * 0.5 + 0.5; // convert to 0-1 texture coordinates
samplePos = depthToPositions(samplePos.xy); // this is how I am retrieving view-space position of sample
samplePos = viewSpacePositions + samplePos * radius;
vec4 offset = vec4(samplePos, 1.0);
offset = camera.proj * offset;
offset.xyz /= offset.w;
offset.xy = offset.xy * 0.5 + 0.5;
float sampleDepth = texture(gPosition, offset.xy).z;
float rangeCheck = (viewSpacePositions.z - sampleDepth) < radius ? 1.0 : 0.0;
occlusion += (sampleDepth >= samplePos.z + bias ? 1.0 : 0.0) * rangeCheck;
}
Generating samples in C++
for(unsigned int i = 0; i < 64; i++) {
glm::vec4 sample(
randomFloats(generator) * 2.0 - 1.0,
randomFloats(generator) * 2.0 - 1.0,
randomFloats(generator) * 2.0 - 1.0, 0.0);
sample = glm::normalize(sample);
sample *= randomFloats(generator);
float scale = float(i) / 64;
scale = Lerp(0.1f, 1.0f, scale * scale);
sample *= scale;
ssaoKernel.push_back(sample);
}
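(For reference, randomFloats and Lerp above are assumed to be a uniform [0, 1] distribution and a standard linear interpolation, along these lines:)
#include <random>

std::uniform_real_distribution<float> randomFloats(0.0f, 1.0f); // uniform [0, 1]
std::default_random_engine generator;

float Lerp(float a, float b, float t) { return a + t * (b - a); }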

PBR - Incorrect direct lighting

Based on many internet resources, I wrote a PBR implementation for directional lighting for my DirectX 11 game engine, but it works incorrectly.
Below, you can see a screenshot where I forced metalness to 0.0f and roughness to 1.0f. As you can see, there are too many reflections. For example, the grass is very reflective, even though roughness is set to 1, so it shouldn't look like that.
Below, I visualized ambientLighting and it looks correct.
Unfortunately, directLighting seems completely off and I don't know why. There are too many reflections. It might be because I applied the PBR formulas incorrectly for a directional light source, but I don't know how to make it correct.
Here is my PBR source code. I hope you can help me solve this problem, or at least give me a hint as to where the problem may be, because to be honest I have no idea how to fix it at this point.
static const float PI = 3.14159265359f;
static const float3 DIELECTRIC_FACTOR = float3(0.04f, 0.04f, 0.04f);
static const float EPSILON = 0.00001f;
float DistributionGGX(float3 normal, float3 halfway, float roughness)
{
float alpha = roughness * roughness;
float alphaSquare = alpha * alpha;
float cosHalfway = max(dot(normal, halfway), 0.0f);
float cosHalfwaySquare = cosHalfway * cosHalfway;
float denominator = (cosHalfwaySquare * (alphaSquare - 1.0f)) + 1.0f;
denominator = PI * denominator * denominator;
return alphaSquare / denominator;
}
float GeometrySchlickGGX(float cosinus, float roughness)
{
float r = (roughness + 1.0);
float k = (r * r) / 8.0;
float denominator = cosinus * (1.0 - k) + k;
return cosinus / denominator;
}
float GeometrySmith(float3 normal, float roughness, float cosView, float cosLight)
{
return GeometrySchlickGGX(cosView, roughness) * GeometrySchlickGGX(cosLight, roughness);
}
float3 FresnelSchlick(float cosTheta, float3 F0)
{
return F0 + (1.0f - F0) * pow(1.0f - cosTheta, 5.0f);
}
float3 FresnelSchlickRoughness(float cosTheta, float3 F0, float roughness)
{
return F0 + (max(float(1.0f - roughness).xxx, F0) - F0) * pow(1.0f - cosTheta, 5.0f);
}
int GetTextureMipMapLevels(TextureCube input)
{
int width, height, levels;
input.GetDimensions(0, width, height, levels);
return levels;
}
float3 Pbr(float3 albedo, float3 normal, float metallic, float roughness, float occlusion,
TextureCube irradianceTexture, TextureCube radianceTexture, Texture2D brdfLut,
SamplerState defaultSampler, SamplerState brdfSampler, float3 lightDirection,
float3 lightColor, float3 cameraPosition, float3 pixelPosition, float shadowMultiplier)
{
lightDirection *= -1; // incoming light direction -> surface-to-light vector L
float3 viewDirection = normalize(cameraPosition - pixelPosition);
float3 halfwayDirection = normalize(viewDirection + lightDirection);
float3 reflectionDirection = reflect(-viewDirection, normal);
float3 F0 = lerp(DIELECTRIC_FACTOR, albedo, metallic);
float cosView = max(dot(normal, viewDirection), 0.0f);
float cosLight = max(dot(normal, lightDirection), 0.0f);
float NDF = DistributionGGX(normal, halfwayDirection, roughness);
float G = GeometrySmith(normal, roughness, cosView, cosLight);
float3 F = FresnelSchlick(max(dot(halfwayDirection, viewDirection), 0.0f), F0);
float3 numerator = NDF * G * F;
float denominator = 4 * cosView * cosLight + EPSILON;
float3 specular = numerator / denominator;
float3 kD = lerp(float3(1.0f, 1.0f, 1.0f) - F, float3(0.0f, 0.0f, 0.0f), metallic);
float3 directLighting = (kD * albedo / PI + specular) * lightColor * cosLight;
F = FresnelSchlickRoughness(cosView, F0, roughness);
kD = lerp(float3(1.0f, 1.0f, 1.0f) - F, float3(0.0f, 0.0f, 0.0f), metallic);
float3 irradiance = irradianceTexture.Sample(defaultSampler, normal).rgb;
float3 diffuse = irradiance * albedo;
int radianceLevels = GetTextureMipMapLevels(radianceTexture);
float3 radiance = radianceTexture.SampleLevel(defaultSampler, reflectionDirection, roughness * radianceLevels).rgb;
float2 brdf = brdfLut.Sample(brdfSampler, float2(cosView, roughness)).xy;
float3 specularColor = radiance * (F0 * brdf.x + brdf.y);
float3 ambientLighting = (kD * diffuse + specularColor) * occlusion;
return ambientLighting + (directLighting * shadowMultiplier);
}
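(For reference, the direct term above is the standard Cook-Torrance split: kD * albedo / PI for diffuse plus NDF * G * F / (4 * (n.v) * (n.l)) for specular, scaled by lightColor * (n.l). A tiny standalone C++ check of the Fresnel endpoints, with made-up values, can help rule that factor out:)
#include <cassert>
#include <cmath>

float fresnelSchlick(float cosTheta, float F0) {
    return F0 + (1.0f - F0) * std::pow(1.0f - cosTheta, 5.0f);
}

int main() {
    assert(std::fabs(fresnelSchlick(1.0f, 0.04f) - 0.04f) < 1e-6f); // normal incidence -> F0
    assert(std::fabs(fresnelSchlick(0.0f, 0.04f) - 1.0f) < 1e-6f);  // grazing angle -> 1
    return 0;
}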

Inputs and Outputs of the Geometry Shader

I was wondering if anyone would be so kind as to pinpoint the problem with my program. I am certain the setback has something to do with the way data is passed through the GS. If, for instance, the geometry shader is taken out of the code (modifying the other two stages to accommodate the change as well), I end up with an operational pipeline. And if I modify the data input of the GS to accept PS_INPUT instead of VS_DATA, the program does not crash, but outputs a blank blue screen. My intent here is to create a collection of squares on a two-dimensional plane, so blank blue screens are not exactly what I am going for.
Texture2D txDiffuse[26] : register(t0);
SamplerState samLinear : register(s0); //For Texturing
#define AWR_MAX_SHADE_LAY 1024
cbuffer ConstantBuffer : register(b0)
{
float4 Matrix_Array[30];
matrix Screen;
float GM;
float GA;
float GD;
float epsilon;
}
// Includes Layer Data
cbuffer CBLayer : register(b1)
{
float4 Array_Fill_Color[AWR_MAX_SHADE_LAY];
float4 Array_Line_Color[AWR_MAX_SHADE_LAY];
float Array_Width[AWR_MAX_SHADE_LAY];
float Array_Line_Pattern[AWR_MAX_SHADE_LAY];
float Array_Z[AWR_MAX_SHADE_LAY];
float Array_Thickness[AWR_MAX_SHADE_LAY];
}
//Input for Vertex Shader
struct VS_DATA
{
float4 Pos : POSITION;
int M2W_index : M2W_INDEX;
int Layer_index : LAYER_INDEX;
};
//Input for Pixel Shader
struct PS_INPUT{
float4 Pos : SV_POSITION;
float4 Color : COLOR;
int Layer_index : LAYER_INDEX;
};
//Vertex Shader
VS_DATA VS(VS_DATA input)// Vertex Shader
{
VS_DATA output = (VS_DATA)0;
//Model to World Transform
float xm = input.Pos.x, yw = input.Pos.y, zm = input.Pos.z, ww = input.Pos.w, xw, zw;
float4 transformation = Matrix_Array[input.M2W_index];
xw = ((xm)*transformation.y - (zm)*transformation.x) + transformation.z;
zw = ((xm)*transformation.x + (zm)*transformation.y) + transformation.w;
//set color
int valid_index = input.Layer_index;
output.Color = Array_Fill_Color[valid_index];
output.Color.a = 0.0;
//output.Vertex_index = input.Vertex_index;
//output.Next_Vertex_index = input.Next_Vertex_index;
//Snapping process
float sgn_x = (xw >= 0) ? 1.0 : -1.0;
float sgn_z = (zw >= 0) ? 1.0 : -1.0;
int floored_x = (int)((xw + (sgn_x*GA) + epsilon)*GD);
int floored_z = (int)((zw + (sgn_z*GA) + epsilon)*GD);
output.Pos.x = ((float)floored_x)*GM;
output.Pos.y = yw;
output.Pos.z = ((float)floored_z)*GM;
output.Pos.w = ww;
int another_valid_index = input.Layer_index;
output.Layer_index = another_valid_index;
// Transform to Screen Space
output.Pos = mul(output.Pos, Screen);
return output;
}
[maxvertexcount(6)]
void GS_Line(line VS_DATA points[2], inout TriangleStream<PS_INPUT> output)
{
float4 p0 = points[0].Pos;
float4 p1 = points[1].Pos;
float w0 = p0.w;
float w1 = p1.w;
p0.xyz /= p0.w;
p1.xyz /= p1.w;
float3 line01 = p1 - p0;
float3 dir = normalize(line01);
float3 ratio = float3(700.0, 0.0, 700.0);
ratio = normalize(ratio);
float3 unit_z = normalize(float3(0.0, -1.0, 0.0));
float3 normal = normalize(cross(unit_z, dir) * ratio);
float width = 0.01;
PS_INPUT v[4];
float3 dir_offset = dir * ratio * width;
float3 normal_scaled = normal * ratio * width;
float3 p0_ex = p0 - dir_offset;
float3 p1_ex = p1 + dir_offset;
v[0].Pos = float4(p0_ex - normal_scaled, 1) * w0;
v[0].Color = float4(1.0, 1.0, 1.0, 1.0);
v[0].Layer_index = 1;
v[1].Pos = float4(p0_ex + normal_scaled, 1) * w0;
v[1].Color = float4(1.0, 1.0, 1.0, 1.0);
v[1].Layer_index = 1;
v[2].Pos = float4(p1_ex + normal_scaled, 1) * w1;
v[2].Color = float4(1.0, 1.0, 1.0, 1.0);
v[2].Layer_index = 1;
v[3].Pos = float4(p1_ex - normal_scaled, 1) * w1;
v[3].Color = float4(1.0, 1.0, 1.0, 1.0);
v[3].Layer_index = 1;
output.Append(v[2]);
output.Append(v[1]);
output.Append(v[0]);
output.RestartStrip();
output.Append(v[3]);
output.Append(v[2]);
output.Append(v[0]);
output.RestartStrip();
}
//Pixel Shader
float4 PS(PS_INPUT input) : SV_Target{
float2 Tex = float2(input.Pos.x / (8.0), input.Pos.y / (8.0));
int the_index = input.Layer_index;
float4 tex0 = txDiffuse[25].Sample(samLinear, Tex);
if (tex0.r > 0.0)
tex0 = float4(1.0, 1.0, 1.0, 1.0);
else
tex0 = float4(0.0, 0.0, 0.0, 0.0);
if (tex0.r == 0.0)
discard;
tex0 *= input.Color;
return tex0;
}
If you compile your vertex shader as it is, you will get the following error:
(line 53) : invalid subscript 'Color'
output.Color = Array_Fill_Color[valid_index];
output is of type VS_DATA, which does not contain a Color member.
If you change your VS definition to:
PS_INPUT VS(VS_DATA input)// Vertex Shader
{
PS_INPUT output = (PS_INPUT)0;
//rest of the code here
Then your VS will compile, but you will have a mismatched layout between the VS and the GS (the GS still expects a line of VS_DATA as input, while you now feed it PS_INPUT).
This will not give you any error until you draw (the runtime generally fails silently; you would get a mismatch message if the debug layer is on).
So you also need to modify your GS to accept PS_INPUT as input, e.g.:
[maxvertexcount(6)]
void GS_Line(line PS_INPUT points[2], inout TriangleStream<PS_INPUT> output)
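(For reference, the debug layer mentioned above is enabled with a flag at device creation; a minimal sketch, with assumed variable names:)
#include <d3d11.h>

UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG; // makes the runtime report the VS/GS linkage mismatch
#endif
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D_FEATURE_LEVEL featureLevel;
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                  nullptr, 0, D3D11_SDK_VERSION, &device, &featureLevel, &context);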

FPS weapon with gluLookAt

I'm developing a 3D maze-like game, just for learning (and of course for fun :) ). I have made the maze, and I can move between the walls in first-person mode. My only problem is that I want some kind of weapon for my first-person view (like in an FPS game). To move around the maze I'm using gluLookAt.
Code snippets:
void RenderScene(void)
{
glLoadIdentity();
gluLookAt(x, 1.0f, z,x + lx, 1.0f, z + lz,0.0f, 1.0f, 0.0f);
....
glBindTexture(GL_TEXTURE_2D, texture[0]); //texture binding
glScalef(7.0f, 8.0f, 7.0f);
glTranslatef(-(r * 2), 0.0f, -(c * 2)); //place the maze walls(cubes)
glCallList(mazeListId);//using the display list
}
void SpecialKeys(int key, int xx, int yy)
{
// ...
int state;
float fraction = 1.0f;
switch (key) {
case GLUT_KEY_LEFT:
angle -= 0.15f;
lx = sin(angle);
lz = -cos(angle);
break;
case GLUT_KEY_RIGHT:
angle += 0.15f;
lx = sin(angle);
lz = -cos(angle);
break;
case GLUT_KEY_UP:
x += lx * fraction;
z += lz * fraction;
break;
case GLUT_KEY_DOWN:
x -= lx * fraction;
z -= lz * fraction;
break;
}
I've tried to do this with my cube (the cube is now the "weapon"):
glPushMatrix();
glTranslatef(0.0f, 0.0f, 5.0f);
glTranslatef(x + lx, -0.5f, z + lz);
glRotatef(angle, 0.0f, 0.0f, 1.0f);
drawCube(2);
glPopMatrix();
With this the cube moves forward and backward perfectly, but when I turn left or right, it stays in place.
Can somebody help me with the turning?
Finally, I solved the problem! I didn't have the transformations in the right order. I'm sharing my code in case someone needs it in the future. The key is to move to the camera position and rotate with the view first, and only then apply the forward offset, so the weapon stays in front of the camera.
gluLookAt(x, 1.0f, z,
x + lx, 1.0f, z + lz,
0.0f, 1.0f, 0.0f);
glPushMatrix();
glTranslatef(x + lx, -0.5f, z + lz); // move to the camera position
glRotatef(-angle * 57.2957795, 0.0f, 1.0f, 0.0f); // turn with the camera; 57.2957795 = 180/pi, since glRotatef expects degrees
glTranslatef(0.0f, 0.0f, 5.0f); // then offset the cube in front of the view
drawCube(2);
glPopMatrix();

Problem when trying to use simple Shaders + VBOs

Hello, I'm trying to convert the following function to a VBO-based function for learning purposes; it displays a static texture on screen. I'm using OpenGL ES 2.0 with shaders on the iPhone (which should be almost the same as regular OpenGL in this case). This is what I got working:
//Works!
- (void) drawAtPoint:(CGPoint)point depth:(CGFloat)depth
{
GLfloat coordinates[] = {
0, 1,
1, 1,
0, 0,
1, 0
};
GLfloat width = (GLfloat)_width * _maxS,
height = (GLfloat)_height * _maxT;
GLfloat vertices[] = {
-width / 2 + point.x, -height / 2 + point.y,
width / 2 + point.x, -height / 2 + point.y,
-width / 2 + point.x, height / 2 + point.y,
width / 2 + point.x, height / 2 + point.y,
};
glBindTexture(GL_TEXTURE_2D, _name);
//Attrib position and attrib_tex coord are handles for the shader attributes
glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, vertices);
glEnableVertexAttribArray(ATTRIB_POSITION);
glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates);
glEnableVertexAttribArray(ATTRIB_TEXCOORD);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
I tried the following to convert it to a VBO, however I don't see anything displayed on screen with this version:
//Doesn't display anything
- (void) drawAtPoint:(CGPoint)point depth:(CGFloat)depth
{
GLfloat width = (GLfloat)_width * _maxS,
height = (GLfloat)_height * _maxT;
GLfloat position[] = {
-width / 2 + point.x, -height / 2 + point.y,
width / 2 + point.x, -height / 2 + point.y,
-width / 2 + point.x, height / 2 + point.y,
width / 2 + point.x, height / 2 + point.y,
}; //Texture on-screen position ( each vertex is x,y in on-screen coords )
GLfloat coordinates[] = {
0, 1,
1, 1,
0, 0,
1, 0
}; // Texture coords from 0 to 1
glBindVertexArrayOES(vao);
glGenVertexArraysOES(1, &vao);
glGenBuffers(2, vbo);
//Buffer 1
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), position, GL_STATIC_DRAW);
glEnableVertexAttribArray(ATTRIB_POSITION);
glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, position);
//Buffer 2
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), coordinates, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(ATTRIB_TEXCOORD);
glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, coordinates);
//Draw
glBindVertexArrayOES(vao);
glBindTexture(GL_TEXTURE_2D, _name);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
In both cases I'm using this simple Vertex Shader
//Vertex Shader
attribute vec2 position;//Bound to ATTRIB_POSITION
attribute vec4 color;
attribute vec2 texcoord;//Bound to ATTRIB_TEXCOORD
varying vec2 texcoordVarying;
uniform mat4 mvp;
void main()
{
//OpenGL ES 2.0 doesn't allow transpose = GL_TRUE in glUniformMatrix4fv, so the mvp is supplied ready to use here
gl_Position = mvp * vec4(position.x, position.y, 0.0, 1.0);
texcoordVarying = texcoord;
}
gl_Position is the product mvp * vec4(...) because I'm simulating glOrthof in 2D with that mvp.
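(For reference, a glOrthof-equivalent 2D projection matrix can be built on the CPU along these lines; a sketch with assumed left/right/bottom/top bounds, stored column-major as GL expects, where mvpHandle is a hypothetical uniform location:)
// Column-major equivalent of glOrthof(l, r, b, t, -1, 1); upload with glUniformMatrix4fv(mvpHandle, 1, GL_FALSE, m)
void ortho2D(float* m, float l, float r, float b, float t) {
    const float n = -1.0f, f = 1.0f;
    const float o[16] = {
        2.0f / (r - l), 0.0f, 0.0f, 0.0f,
        0.0f, 2.0f / (t - b), 0.0f, 0.0f,
        0.0f, 0.0f, -2.0f / (f - n), 0.0f,
        -(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n), 1.0f };
    for (int i = 0; i < 16; ++i) m[i] = o[i];
}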
And this Fragment Shader
//Fragment Shader
uniform sampler2D sampler;
varying mediump vec2 texcoordVarying;
void main()
{
gl_FragColor = texture2D(sampler, texcoordVarying);
}
I really need help with this. Maybe my shaders are wrong for the second case? Thanks in advance.
Everything is right, except the glVertexAttribPointer calls.
When you have a VBO bound, the last parameter of glVertexAttribPointer is interpreted as a byte offset into the VBO (the pointer value is the offset), not as a pointer to client memory. Since your data starts at the beginning of each VBO, the last parameter should be 0 (NULL) for both calls.
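(A minimal sketch of the corrected setup, using the same handles as above; note the VAO also has to be generated before it is bound, and object creation would normally happen once at init rather than on every draw:)
glGenVertexArraysOES(1, &vao); // generate first, then bind
glBindVertexArrayOES(vao);
glGenBuffers(2, vbo);
//Buffer 1
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), position, GL_STATIC_DRAW);
glEnableVertexAttribArray(ATTRIB_POSITION);
glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, 0); // 0 = offset into the bound VBO
//Buffer 2
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, 8 * sizeof(GLfloat), coordinates, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(ATTRIB_TEXCOORD);
glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, 0);
//Draw
glBindTexture(GL_TEXTURE_2D, _name);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);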
