Passing colors through a pixel shader in HLSL (Direct3D)

I have a pixel shader that should simply pass the input color through, but instead I am getting a constant result. I think my syntax might be the problem. Here is the shader:
struct PixelShaderInput
{
    float3 color : COLOR;
};

struct PixelShaderOutput
{
    float4 color : SV_TARGET0;
};

PixelShaderOutput main(PixelShaderInput input)
{
    PixelShaderOutput output;
    output.color.rgba = float4(input.color, 1.0f); // input.color is 0.5, 0.5, 0.5; output is black
    // output.color.rgba = float4(0.5f, 0.5f, 0.5f, 1); // output is gray
    return output;
}
For testing, I have the vertex shader that precedes this in the pipeline passing a COLOR parameter of 0.5, 0.5, 0.5. Stepping through the pixel shader in Visual Studio, input.color has the correct values, and these are being assigned to output.color correctly. However, when rendered, the vertices that use this shader are all black.
Here is the vertex shader element description:
const D3D11_INPUT_ELEMENT_DESC vertexDesc[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
I'm not sure if it's important that the vertex shader takes colors as RGB and outputs the same, while the pixel shader outputs RGBA. The alpha channel is working correctly, at least.
If I comment out that first assignment, the one using input.color, and uncomment the other assignment, with the explicit values, then the rendered pixels are gray (as expected).
Any ideas on what I'm doing wrong here?
I'm using shader model 4 level 9_1, with optimizations disabled and debug info enabled.

output.color.rgba = float4(input.color, 1.0f);

If your input.color is a float4 and you are passing it into another float4 constructor, I think this should work:

output.color.rgba = float4(input.color.rgb, 1.0f);

This is all you need to pass it through:

return input.color;

If you want to change the colour to red, then do something like:

input.color = float4(1.0f, 0.0f, 0.0f, 1.0f);
return input.color;

Are you sure that your vertices are in the place they are supposed to be? You are starting to make me doubt my D3D knowledge. :P
I believe your problem is that you are only passing a color; both parts of the shader need a position in order to work. Your PixelShaderInput layout should be:

struct PixelShaderInput
{
    float4 position : SV_POSITION;
    float3 color : COLOR;
};
Could you maybe try this as your pixel shader?

float4 main(float3 color : COLOR) : SV_TARGET
{
    return float4(color, 1.0f);
}
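For what it's worth, at feature level 9_x the pixel shader's input signature has to match the vertex shader's output signature element for element, including SV_POSITION, which is consistent with the answer above. A sketch of a matched pair of signatures, assuming the vertex shader is changed to output the same struct:

struct VSOutput // vertex shader output and pixel shader input must line up
{
    float4 position : SV_POSITION;
    float3 color : COLOR;
};

float4 main(VSOutput input) : SV_TARGET0
{
    return float4(input.color, 1.0f); // forward the interpolated vertex colour
}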

I have never seen this kind of constructor:

float4(input.color, 1.0f);

This might be the problem, but I could be wrong. Try passing the float values one by one, like this:

float4(input.color[0], input.color[1], input.color[2], 1.0f);

Edit: Actually, you might have to use float4 as the type for COLOR (http://msdn.microsoft.com/en-us/library/windows/desktop/bb509647(v=vs.85).aspx).

Related

3D model not rendered (DirectX 12)

I am developing a small program that loads 3D models using Assimp, but it does not render the model. At first I thought that the vertices and indices were not loaded correctly, but this is not the case (I printed the vertices and indices to a txt file). I think that the problem might be with the position of the model and camera. The application does not return any error; it runs properly.
Vertex Struct:
struct Vertex {
    XMFLOAT3 position;
    XMFLOAT2 texture;
    XMFLOAT3 normal;
};
Input layout:
D3D12_INPUT_ELEMENT_DESC inputLayout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 }
};
Vertices, texcoords, normals and indices loader:
model = new ModelMesh();
std::vector<XMFLOAT3> positions;
std::vector<XMFLOAT3> normals;
std::vector<XMFLOAT2> texCoords;
std::vector<unsigned int> indices;
model->LoadMesh("beast.x", positions, normals, texCoords, indices);

// Create vertex buffer
if (positions.size() == 0)
{
    MessageBox(0, L"Vertices vector is empty.", L"Error", MB_OK);
}

Vertex* vList = new Vertex[positions.size()];
for (size_t i = 0; i < positions.size(); i++)
{
    Vertex vert;
    XMFLOAT3 pos = positions[i];
    vert.position = XMFLOAT3(pos.x, pos.y, pos.z);
    XMFLOAT3 norm = normals[i];
    vert.normal = XMFLOAT3(norm.x, norm.y, norm.z);
    XMFLOAT2 tex = texCoords[i];
    vert.texture = XMFLOAT2(tex.x, tex.y);
    vList[i] = vert;
}
int vBufferSize = sizeof(vList);
Building the camera and view matrices:
XMMATRIX tmpMat = XMMatrixPerspectiveFovLH(45.0f*(3.14f/180.0f), (float)Width / (float)Height, 0.1f, 1000.0f);
XMStoreFloat4x4(&cameraProjMat, tmpMat);
// set starting camera state
cameraPosition = XMFLOAT4(0.0f, 2.0f, -4.0f, 0.0f);
cameraTarget = XMFLOAT4(0.0f, 0.0f, 0.0f, 0.0f);
cameraUp = XMFLOAT4(0.0f, 1.0f, 0.0f, 0.0f);
// build view matrix
XMVECTOR cPos = XMLoadFloat4(&cameraPosition);
XMVECTOR cTarg = XMLoadFloat4(&cameraTarget);
XMVECTOR cUp = XMLoadFloat4(&cameraUp);
tmpMat = XMMatrixLookAtLH(cPos, cTarg, cUp);
XMStoreFloat4x4(&cameraViewMat, tmpMat);
cube1Position = XMFLOAT4(0.0f, 0.0f, 0.0f, 0.0f);
XMVECTOR posVec = XMLoadFloat4(&cube1Position);
tmpMat = XMMatrixTranslationFromVector(posVec);
XMStoreFloat4x4(&cube1RotMat, XMMatrixIdentity());
XMStoreFloat4x4(&cube1WorldMat, tmpMat);
Update function:
XMStoreFloat4x4(&cube1WorldMat, worldMat);
XMMATRIX viewMat = XMLoadFloat4x4(&cameraViewMat); // load view matrix
XMMATRIX projMat = XMLoadFloat4x4(&cameraProjMat); // load projection matrix
XMMATRIX wvpMat = XMLoadFloat4x4(&cube1WorldMat) * viewMat * projMat; // create wvp matrix
XMMATRIX transposed = XMMatrixTranspose(wvpMat); // must transpose wvp matrix for the gpu
XMStoreFloat4x4(&cbPerObject.wvpMat, transposed); // store transposed wvp matrix in constant buffer
memcpy(cbvGPUAddress[frameIndex], &cbPerObject, sizeof(cbPerObject));
VERTEX SHADER:
struct VS_INPUT
{
    float4 pos : POSITION;
    float2 tex : TEXCOORD;
    float3 normal : NORMAL;
};

struct VS_OUTPUT
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
    float3 normal : NORMAL;
};

cbuffer ConstantBuffer : register(b0)
{
    float4x4 wvpMat;
};

VS_OUTPUT main(VS_INPUT input)
{
    VS_OUTPUT output;
    output.pos = mul(input.pos, wvpMat);
    return output;
}
I know it is a lot of code to read, but I don't understand what is going wrong here. I hope somebody can help me.
A few things to try/check:
Make your background clear color grey. That way, if you are drawing black triangles, you will see them.
Turn backface culling off in the rendering state, in case your triangles are wound back to front (see the sketch below).
Turn the depth test off in the rendering state.
Turn off alpha blending.
You don't show your pixel shader, but try writing a constant color to see if your lighting calculation is broken.
Use NVIDIA's Nsight tool, or the Visual Studio Graphics Debugger, to see what your graphics pipeline is doing.
Those are usually the things I try first...
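For the state changes above, a minimal sketch, assuming a D3D12_GRAPHICS_PIPELINE_STATE_DESC named psoDesc is being filled in (psoDesc is illustrative; the question's PSO setup isn't shown):

// Loosen the pipeline state while debugging so nothing hides the mesh
D3D12_RASTERIZER_DESC rast = {};
rast.FillMode = D3D12_FILL_MODE_SOLID;
rast.CullMode = D3D12_CULL_MODE_NONE; // draw triangles regardless of winding
rast.DepthClipEnable = TRUE;
psoDesc.RasterizerState = rast;

psoDesc.DepthStencilState.DepthEnable = FALSE;          // no depth test
psoDesc.BlendState.RenderTarget[0].BlendEnable = FALSE; // no alpha blending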

Problems implementing perspective projection in 3D graphics (D3D)

I'm having massive difficulties with setting up a perspective projection with DirectX.
I've been stuck on this for weeks; any help would be appreciated.
As far as I can tell, my pipeline setup is fine, in that the shader is getting exactly the data I am packing into it, so I'll forgo including that code unless I'm asked for it.
I am using glm for maths (therefore column matrices) on the host side, set to use a left-handed coordinate system. However, I couldn't seem to get its projection matrix to work (I get a blank screen if I use the matrix I get from it), so I wrote this, which I hope is only temporary:
namespace
{
    glm::mat4 perspective()
    {
        DirectX::XMMATRIX dx = DirectX::XMMatrixPerspectiveFovLH(glm::radians(60.0f), 1, 0.01, 1000.0f);
        glm::mat4 glfromdx;
        for (int c = 0; c < 3; c++)
        {
            for (int r = 0; r < 3; r++)
            {
                glfromdx[c][r] = DirectX::XMVectorGetByIndex(dx.r[c], r);
            }
        }
        return glfromdx;
    }
}
This does hand me a projection matrix that works, but I don't really think that it is working. I'm not certain whether it's an actual projection, and my clip space is minuscule; if I move my camera around at all, it clips. Photos included.
photo 1: (straight on)
photo 2: (cam tilted up)
photo 3: (cam tilted left)
Here is my vertex shader
https://gist.github.com/anonymous/31726881a5476e6c8513573909bf4dc6
cbuffer offsets
{
    float4x4 model; //<------ currently just identity
    float4x4 view;
    float4x4 proj;
}

struct idata
{
    float3 position : POSITION;
    float4 colour : COLOR;
};

struct odata
{
    float4 position : SV_POSITION;
    float4 colour : COLOR;
};

void main(in idata dat, out odata odat)
{
    odat.colour = dat.colour;
    odat.position = float4(dat.position, 1.0f);
    // point' = point * model * view * projection
    float4x4 mvp = mul(model, mul(view, proj));
    // column matrices.
    odat.position = mul(mvp, odat.position);
}
And here is my fragment shader
https://gist.github.com/anonymous/342a707e603b9034deb65eb2c2101e91
struct idata
{
    float4 position : SV_POSITION;
    float4 colour : COLOR;
};

float4 main(in idata data) : SV_TARGET
{
    return float4(data.colour[0], data.colour[1], data.colour[2], 1.0f);
}
The full camera implementation is here:
https://gist.github.com/anonymous/b15de44f08852cbf4fa4d6b9fafa0d43
https://gist.github.com/anonymous/8702429212c411ddb04f5163b1d885b9
Finally, these are the vertices being passed in (3 position floats followed by 4 colour floats per vertex):
(0.0f, 0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f),
(0.45f, -0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f),
(-0.45f, -0.5f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f)
Edit:
I changed the point * model * view * projection calculation in the shader to this:

odat.position = mul(mul(mul(proj, view), model), odat.position);

I'm still having the same issue.
Edit 2:
Hmm, the above change, mixed with a change to what generates the perspective (now purely using glm):

glm::mat4 GLtoDX_NDC(glm::translate(glm::vec3(0.0, 0.0, 0.5)) *
    glm::scale(glm::vec3(1.0, 1.0, 0.5)));
glm::mat4 gl = GLtoDX_NDC * glm::perspectiveFovLH(60.0f, 500.0f, 500.0f, 100.0f, 0.1f);

This works, in as far as objects no longer cull at about a micrometre, and are projected to their approximate sizes. However, now everything is upside down, and rotating causes a shear...
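A side note on the perspective() helper above, offered as a guess rather than a confirmed fix: the copy loops stop at c < 3 and r < 3, so only the upper-left 3x3 block of the DirectX matrix ever reaches glm. For a perspective matrix, the fourth row and column carry the z translation and the w coupling that drive the perspective divide, and losing them would fit the symptoms of a minuscule clip space. A sketch of the full copy:

for (int c = 0; c < 4; c++)
{
    for (int r = 0; r < 4; r++)
    {
        glfromdx[c][r] = DirectX::XMVectorGetByIndex(dx.r[c], r);
    }
}

Whether a transpose is also needed depends on the row-major/column-major conventions in play on each side.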

Error 'overlapping register semantics not yet implemented' in VertexShader

I am trying to perform diffuse reflection in HLSL. Currently I am working on the vertex shader. Unfortunately, I get the following error when trying to compile with fxc.exe:
C:\Users\BBaczek\Projects\MyApp\VertexShader.vs.hlsl(2,10-25)
: error X4500: overlapping register semantics not yet implemented 'c1'
C:\Users\BBaczek\Projects\MyApp\VertexShader.vs.hlsl(2,10-25)
: error X4500: overlapping register semantics not yet implemented 'c2'
C:\Users\BBaczek\Projects\MyApp\VertexShader.vs.hlsl(2,10-25)
: error X4500: overlapping register semantics not yet implemented 'c3'
Vertex shader code:
float4x4 WorldViewProj : register(c0);
float4x4 inv_world_matrix : register(c1);
float4 LightAmbient;
float4 LightPosition;

struct VertexData
{
    float4 Position : POSITION;
    float4 Normal : NORMAL;
    float3 UV : TEXCOORD;
};

struct VertexShaderOutput
{
    float4 Position : POSITION;
    float3 Color : COLOR;
};

VertexShaderOutput main(VertexData vertex)
{
    VertexShaderOutput output;
    vertex.Normal = normalize(vertex.Normal);
    float4 newColor = LightAmbient;
    vector obj_light = mul(LightPosition, inv_world_matrix);
    vector LightDir = normalize(obj_light - vertex.Position);
    float DiffuseAttn = max(0, dot(vertex.Normal, LightDir));
    vector light = { 0.8, 0.8, 0.8, 1 };
    newColor += light * DiffuseAttn;
    output.Position = mul(vertex.Position, WorldViewProj);
    output.Color = float3(newColor.r, newColor.g, newColor.b);
    return output;
}
And the command I use to perform the compilation:
fxc /T vs_2_0 /O3 /Zpr /Fo VertexShader.vs VertexShader.vs.hlsl
Why am I getting this error? What can I do to prevent this?
Found it out. I am not deleting this question, because someone might find it useful.
What you need to do is change

float4x4 WorldViewProj : register(c0);
float4x4 inv_world_matrix : register(c1);

to

float4x4 WorldViewProj : register(c0);
float4x4 inv_world_matrix : register(c4);

I am not entirely sure why, but I assume that a float4x4 takes more space in the register file (4x4, so it takes four slots). That explanation is a bit hand-wavy, but it works.
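For illustration: each vs_2_0 constant register c# holds one float4, and a float4x4 occupies four consecutive registers, so two matrices starting at c0 and c1 collide on c1 through c3 (exactly the registers named in the errors). A non-overlapping layout looks like this:

float4x4 WorldViewProj : register(c0);    // occupies c0-c3
float4x4 inv_world_matrix : register(c4); // occupies c4-c7
float4 LightAmbient : register(c8);       // one register
float4 LightPosition : register(c9);      // one register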

DirectX 11 texture mapping

I've looked for this, and I am sure it can be done.
Does anyone know how I can stop a texture being stretched over an oversized facet? I remember in some game designs you would have the option of either stretching the image over the object or tiling it as a repeat.
EDIT: Okay, so I have used pixel coordinates and the issue still remains. The vertices are fine. What I am trying to do is load a bitmap and keep its size the same regardless of the resolution or the size of the image; I want the image to only use 20x20 physical pixels.
I hope that makes sense, because I don't think my previous explanation did.
Texture2D Texture;

SamplerState SampleType
{
    Filter = TEXT_1BIT;
    // AddressU = Clamp;
    // AddressV = Clamp;
};

struct Vertex
{
    float4 position : POSITION;
    float2 tex : TEXCOORD0;
};

struct Pixel
{
    float4 position : SV_POSITION;
    float2 tex : TEXCOORD0;
};

Pixel FontVertexShader(Vertex input)
{
    return input;
}

float4 FPS(Pixel input) : SV_Target
{
    return Texture.Sample(SampleType, input.tex);
}
...
The answer is in hwnd = CreateWindow(...);
Using WS_POPUP meant I removed the borders, and my texture was able to map itself correctly. (Presumably the width and height passed to CreateWindow include the window borders, so the client area the image was presented into was smaller than the back buffer, which rescaled the texture.) You need to use GetClientRect();
Thank you to everyone for your help. :)
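For reference, a sketch of the kind of pixel-exact sizing involved, assuming a window handle hwnd and a quad meant to cover exactly 20x20 physical pixels (the variable names are illustrative):

RECT rc;
GetClientRect(hwnd, &rc); // client area only; excludes borders and title bar
float clientW = float(rc.right - rc.left);
float clientH = float(rc.bottom - rc.top);

// NDC spans 2 units across the client area, so 20 pixels converts like this:
float quadW = 20.0f * 2.0f / clientW;
float quadH = 20.0f * 2.0f / clientH;

Creating the swap chain with these same client dimensions keeps one texel mapped to one screen pixel.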

Managed DirectX Postprocessing Fragment Shader rendering problem

I'm using Managed DirectX 2.0 with C#, and I'm attempting to apply a fragment shader to a texture built by rendering the screen to a texture using the RenderToSurface helper class.
The code I'm using to do this is:
RtsHelper.BeginScene(RenderSurface);
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.White, 1.0f, 0);

// pre-render shader setup
preProc.Begin(FX.None);
preProc.BeginPass(0);

// mesh drawing
mesh.DrawSubset(j);

preProc.CommitChanges();
preProc.EndPass();
preProc.End();
RtsHelper.EndScene(Filter.None);
This renders to my surface, RenderSurface, which is attached to a Texture object called RenderTexture.
Then I call the following code to render the surface to the screen, applying a second shader, "PostProc", to the rendered texture. This shader combines color values on a per-pixel basis and transforms the scene to grayscale. I'm following the tutorial here: http://rbwhitaker.wikidot.com/post-processing-effects
device.BeginScene();
{
    using (Sprite sprite = new Sprite(device))
    {
        sprite.Begin(SpriteFlags.DoNotSaveState);
        postProc.Begin(FX.None);
        postProc.BeginPass(0);
        sprite.Draw(RenderTexture, new Rectangle(0, 0, WINDOWWIDTH, WINDOWHEIGHT), new Vector3(0, 0, 0), new Vector3(0, 0, 0), Color.White);
        postProc.CommitChanges();
        postProc.EndPass();
        postProc.End();
        sprite.End();
    }
}
device.EndScene();
device.Present();
this.Invalidate();
However, all I see is the original rendered scene, as rendered to the texture, but unmodified by the second shader.
FX file is below in case it's important.
//------------------------------ TEXTURE PROPERTIES ----------------------------
// This is the texture that Sprite will try to set before drawing
texture ScreenTexture;

// Our sampler for the texture, which is just going to be pretty simple
sampler TextureSampler = sampler_state
{
    Texture = <ScreenTexture>;
};

//------------------------ PIXEL SHADER ----------------------------------------
// This pixel shader will simply look up the color of the texture at the
// requested point, and turn it into a shade of gray
float4 PixelShaderFunction(float2 TextureCoordinate : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(TextureSampler, TextureCoordinate);
    float value = (color.r + color.g + color.b) / 3;
    color.r = value;
    color.g = value;
    color.b = value;
    return color;
}

//-------------------------- TECHNIQUES ----------------------------------------
// This technique is pretty simple - only one pass, and only a pixel shader
technique BlackAndWhite
{
    pass Pass1
    {
        PixelShader = compile ps_1_1 PixelShaderFunction();
    }
}
Fixed it. I was using the wrong flags for the post-processing shader initialization.
It was:

sprite.Begin(SpriteFlags.DoNotSaveState);
postProc.Begin(FX.None);

It should be:

sprite.Begin(SpriteFlags.DoNotSaveState);
postProc.Begin(FX.DoNotSaveState);

Presumably, with FX.None the effect saves the device state in Begin and restores it in End; since the sprite batch does not actually draw until sprite.End(), the pixel shader had already been unset by then. FX.DoNotSaveState skips that save/restore.
