Error 'overlapping register semantics not yet implemented' in VertexShader - graphics

I am trying to implement diffuse reflection in HLSL. Currently I am working on the vertex shader. Unfortunately, I get the following error when trying to compile with fxc.exe:
C:\Users\BBaczek\Projects\MyApp\VertexShader.vs.hlsl(2,10-25)
: error X4500: overlapping register semantics not yet implemented 'c1'
C:\Users\BBaczek\Projects\MyApp\VertexShader.vs.hlsl(2,10-25)
: error X4500: overlapping register semantics not yet implemented 'c2'
C:\Users\BBaczek\Projects\MyApp\VertexShader.vs.hlsl(2,10-25)
: error X4500: overlapping register semantics not yet implemented 'c3'
Vertex shader code:
float4x4 WorldViewProj : register(c0);
float4x4 inv_world_matrix : register(c1);
float4 LightAmbient;
float4 LightPosition;

struct VertexData
{
    float4 Position : POSITION;
    float4 Normal : NORMAL;
    float3 UV : TEXCOORD;
};

struct VertexShaderOutput
{
    float4 Position : POSITION;
    float3 Color : COLOR;
};

VertexShaderOutput main(VertexData vertex)
{
    VertexShaderOutput output;
    vertex.Normal = normalize(vertex.Normal);
    float4 newColor = LightAmbient;
    vector obj_light = mul(LightPosition, inv_world_matrix);
    vector LightDir = normalize(obj_light - vertex.Position);
    float DiffuseAttn = max(0, dot(vertex.Normal, LightDir));
    vector light = { 0.8, 0.8, 0.8, 1 };
    newColor += light * DiffuseAttn;
    output.Position = mul(vertex.Position, WorldViewProj);
    output.Color = float3(newColor.r, newColor.g, newColor.b);
    return output;
}
And the command I use to compile it:
fxc /T vs_2_0 /O3 /Zpr /Fo VertexShader.vs VertexShader.vs.hlsl
Why am I getting this error? What can I do to prevent this?

I found it out; I am not deleting this question because someone might find it useful.
What you need to do is change
float4x4 WorldViewProj : register(c0);
float4x4 inv_world_matrix : register(c1);
to
float4x4 WorldViewProj : register(c0);
float4x4 inv_world_matrix : register(c4);
I was not sure at first why this is OK, but each constant register (c#) holds a single float4, so a float4x4 takes four consecutive registers. WorldViewProj at c0 therefore occupies c0 through c3, and placing inv_world_matrix at c1 overlaps it, which is exactly what the error message is complaining about. The next free register is c4.
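For illustration, this is how the declarations lay out in the vs_2_0 constant registers (the explicit registers for LightAmbient and LightPosition are examples I've added, not part of the original code):

// Each c# register holds one float4; a float4x4 spans four of them.
float4x4 WorldViewProj    : register(c0); // occupies c0..c3
float4x4 inv_world_matrix : register(c4); // occupies c4..c7
float4   LightAmbient     : register(c8); // one register
float4   LightPosition    : register(c9); // one register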

Related

How to do static environment mapping in DirectX 11?

I have the .dds cube map texture loaded in and put on the sphere model. Obviously it doesn't look right, because I would have to map the texture to the right points on the sphere.
How can I map the texture to the right points on the sphere? Something like this:
TextureCube TEXTURE_REFLECTION;
//...
VS_OUTPUT vs(VS_INPUT IN) {
    //...
    float4 worldPosition = mul(float4(IN.Position.xyz, 1.0f), IN.World);
    OUT.worldpos = worldPosition.xyz;
    //...
}
PS_OUTPUT ps(VS_OUTPUT IN) {
    //...
    float4 ColorTex = TEXTURE_DIFFUSE.Sample(SAMPLER_DEFAULT, IN.TexCoord);
    float3 normalFromMap = mul(2.0f * TEXTURE_NORMAL.Sample(SAMPLER_DEFAULT, IN.TexCoord).xyz - 1.0f, IN.tangentToWorld);
    //...
    float3 incident = -normalize(CAMERA_POSITION - IN.worldpos);
    float3 reflectionVector = reflect(incident, normalFromMap);
    ColorTex.rgb = lerp(ColorTex.rgb, TEXTURE_REFLECTION.Sample(SAMPLER_DEFAULT, reflectionVector).rgb, MATERIAL_REFLECTIVITY);
}
I figured it out.
Pixel shader:
TextureCube CubeMap : register(t0);
SamplerState TexSampler : register(s0);

float4 main(LightingPixelShaderInput input) : SV_Target
{
    float4 cubeTexture = CubeMap.Sample(TexSampler, input.worldNormal);
    // light calculations
    float3 finalColour = (gAmbientColour + diffuseLights) * cubeTexture.rgb +
                         specularLights * cubeTexture.a;
    return float4(finalColour, 1.0f);
}
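For a mirror-like environment lookup rather than sampling by the surface normal, the cube map can instead be sampled with a reflection vector, as the snippet in the question does. A minimal sketch, assuming the world position and normal are passed from the vertex shader and that gCameraPosition comes from a constant buffer:

// Reflection lookup: cube maps are sampled with a direction vector.
float3 viewDir = normalize(input.worldPosition - gCameraPosition); // camera -> surface
float3 reflDir = reflect(viewDir, normalize(input.worldNormal));   // mirror direction
float4 envColour = CubeMap.Sample(TexSampler, reflDir);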

hlsl nointerpolation behaviour

For an experiment I want to pass all three vertex positions of a triangle to the pixel shader and manually reconstruct the pixel position from them and the barycentric coordinates.
In order to do this I wrote a geometry shader that passes the three positions of my triangle to the pixel shader:
struct GeoIn
{
    float4 projPos : SV_POSITION;
    float3 position : POSITION;
};

struct GeoOut
{
    float4 projPos : SV_POSITION;
    float3 position : POSITION;
    float3 p[3] : TRIPOS;
};

[maxvertexcount(3)]
void main(triangle GeoIn i[3], inout TriangleStream<GeoOut> OutputStream)
{
    GeoOut o;
    // add triangle data
    for (uint idx = 0; idx < 3; ++idx)
    {
        o.p[idx] = i[idx].position;
    }
    // generate vertices
    for (idx = 0; idx < 3; ++idx)
    {
        o.projPos = i[idx].projPos;
        o.position = i[idx].position;
        OutputStream.Append(o);
    }
    OutputStream.RestartStrip();
}
The pixel shader outputs the manually reconstructed position:
struct PixelIn
{
    float4 projPos : SV_POSITION;
    float3 position : POSITION;
    float3 p[3] : TRIPOS;
    float3 bayr : SV_Barycentrics;
};

float4 main(PixelIn i) : SV_TARGET
{
    float3 pos = i.bayr.x * i.p[0] + i.bayr.y * i.p[1] + i.bayr.z * i.p[2];
    return float4(abs(pos), 1.0);
}
And I get the following (expected) result:
However, when I modify my PixelIn struct by adding nointerpolation to p[3]:
struct PixelIn
{
    ...
    nointerpolation float3 p[3] : TRIPOS;
};
I get:
I did not expect a different result, because I am not changing the values of p[] for a single triangle in the geometry shader. I tried debugging it by changing the output to float4(abs(i.p[0]), 1.0); with and without interpolation. Without nointerpolation, the values of p[] do not vary within a triangle (which makes sense, because all pixels should get the same value). With nointerpolation, the values of p[] do change slightly. Why is that the case? I thought nointerpolation was not supposed to interpolate anything.
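One way to make the variation visible is to compare the barycentric reconstruction against the hardware-interpolated position and amplify the difference; a small debugging sketch along the lines of the test described above (the scale factor is an arbitrary choice):

float4 main(PixelIn i) : SV_TARGET
{
    // Reconstruct the position from barycentrics, then visualize how far it
    // deviates from the hardware-interpolated position: a black screen means
    // the two agree, any colour shows (amplified) error.
    float3 pos = i.bayr.x * i.p[0] + i.bayr.y * i.p[1] + i.bayr.z * i.p[2];
    float3 error = abs(pos - i.position) * 1000.0;
    return float4(error, 1.0);
}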
Edit:
This is the wireframe of my geometry:

Strange D3D11 error when creating shader [SharpDX/MonoGame]

I'm a true beginner in shader programming and I'm using the MonoGame framework. I'm trying to follow along with the examples in this book:
packtpub 3d graphics with xna game studio 4.0
But I've been hitting a wall for the past 4 days trying to make the prelighting renderer work (chapter 3, if some of you know this book):
https://www.packtpub.com/books/content/advanced-lighting-3d-graphics-xna-game-studio-40
1. The code compiles fine, but any mesh with the prelighting shader applied fails to display.
2. When drilling down into the shader in the VS2013 Locals window, I find this kind of exception:
VertexShader 'this.depthNormalEffect._shaders[0].VertexShader' threw
an exception of type
'System.Runtime.InteropServices.SEHException' SharpDX.Direct3D11.VertexShader
{System.Runtime.InteropServices.SEHException}
Which the DX native debugger translates to:
D3D11 ERROR: ID3D11Device::CreateVertexShader: Shader must be vs_4_0, vs_4_1, or vs_5_0. Shader version provided: ps_4_0 [ STATE_CREATION ERROR #167: CREATEVERTEXSHADER_INVALIDSHADERTYPE]
D3D11: BREAK enabled for the previous message, which was: [ ERROR STATE_CREATION #167: CREATEVERTEXSHADER_INVALIDSHADERTYPE ]
First-chance exception at 0x75FAC41F (KernelBase.dll) in 3dTest.exe: 0x0000087A (parameters: 0x00000001, 0x003DB5D0, 0x003DC328).
D3D11 ERROR: ID3D11Device::CreatePixelShader: Shader must be ps_4_0, ps_4_1, or ps_5_0. Shader version provided: vs_4_0 [ STATE_CREATION ERROR #193: CREATEPIXELSHADER_INVALIDSHADERTYPE]
D3D11: BREAK enabled for the previous message, which was: [ ERROR STATE_CREATION #193: CREATEPIXELSHADER_INVALIDSHADERTYPE ]
The baffling thing (to me) is that the library seems to bind the wrong compiled shader function to each stage: CreateVertexShader is being handed ps_4_0 bytecode, and CreatePixelShader vs_4_0 bytecode.
Here is some of the shader code (the effect spans three shaders, with two rendering to render targets so the third can use them for the final pass):
float4x4 World;
float4x4 View;
float4x4 Projection;

struct VertexShaderInput
{
    float4 Position : SV_POSITION;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
    float2 Depth : TEXCOORD0;
    float3 Normal : TEXCOORD1;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4x4 viewProjection = mul(View, Projection);
    float4x4 worldViewProjection = mul(World, viewProjection);
    output.Position = mul(input.Position, worldViewProjection);
    output.Normal = mul(input.Normal, (float3x3)World);
    // the z and w components of Position give the distance relative to the camera and to the far plane
    output.Depth.xy = output.Position.zw;
    return output;
}

struct PixelShaderOutput
{
    float4 Normal : COLOR0;
    float4 Depth : COLOR1;
};

PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output;
    output.Depth = input.Depth.x / input.Depth.y;
    output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5;
    output.Depth.a = 1;
    output.Normal.a = 1;
    return output;
}
technique Technique0
{
    pass Pass0
    {
        VertexShader = compile vs_4_0 VertexShaderFunction();
        PixelShader = compile ps_4_0 PixelShaderFunction();
    }
};
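For context, the later passes then read these render targets back as textures. A minimal sketch of sampling the stored z/w depth in a later pass (DepthMap, PointSampler, and the full-screen UV input are my assumptions, not the book's actual code):

Texture2D DepthMap : register(t0);
SamplerState PointSampler : register(s0);

// Reads back the depth written by PixelShaderFunction above (z/w in the red channel).
float4 ReadDepthPS(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
{
    float depth = DepthMap.Sample(PointSampler, uv).r;
    return float4(depth.xxx, 1.0f);
}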
Well, searches on Google left me dry on this issue, which makes me think I'm doing something absolutely stupid without realizing it (shader n00b, remember :D).
Any ideas?
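One thing that may be worth checking (an assumption on my part, since the failing content build isn't shown): MonoGame's DirectX effect pipeline normally compiles techniques against the feature-level targets when the game uses the Reach profile, i.e.:

technique Technique0
{
    pass Pass0
    {
        // Feature-level 9_1 targets, commonly used with MonoGame's Reach profile
        // (an assumption; plain vs_4_0/ps_4_0 require the HiDef profile).
        VertexShader = compile vs_4_0_level_9_1 VertexShaderFunction();
        PixelShader = compile ps_4_0_level_9_1 PixelShaderFunction();
    }
};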

DirectX 11 texture mapping

I've looked for this and I'm sure it can be done.
Does anyone know how I can stop a texture being stretched over an oversized facet?
I remember in some game designs you would have the option of either stretching the image over the object or running a repeat.
EDIT: Okay, so I have used pixel coordinates and the issue still remains. The vertices are fine. What I am trying to do is load a bitmap and keep its size the same regardless of the resolution or the size of the image; I want the image to use only 20x20 physical pixels (see the sketch after the shader code below).
I hope that makes sense, because I don't think my previous explanation did.
Texture2D Texture;

SamplerState SampleType
{
    Filter = TEXT_1BIT;
    // AddressU = Clamp;
    // AddressV = Clamp;
};

struct Vertex
{
    float4 position : POSITION;
    float2 tex : TEXCOORD0;
};

struct Pixel
{
    float4 position : SV_POSITION;
    float2 tex : TEXCOORD0;
};

Pixel FontVertexShader(Vertex input)
{
    return input;
}

float4 FPS(Pixel input) : SV_Target
{
    return Texture.Sample(SampleType, input.tex);
}
...
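To map the image to exactly 20x20 physical pixels, one approach is to express the quad's vertex positions in pixels and convert them to clip space against the client-area size. A minimal sketch (gScreenSize is an assumed constant holding the client area in pixels, which is what GetClientRect() reports):

float2 gScreenSize; // client-area size in pixels (an assumed constant)

// Convert a position given in pixels (origin top-left) to clip space.
float4 PixelsToClip(float2 posPixels)
{
    float2 ndc = posPixels / gScreenSize * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f);
    return float4(ndc, 0.0f, 1.0f);
}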
The answer was in hwnd = CreateWindow(...);
Using WS_POPUP meant I removed the borders, so the back buffer matched the full window size and my texture was able to map itself correctly. If you keep the borders, you need to use GetClientRect() so the back buffer matches the client area rather than the whole window.
Thank you to everyone for your help. :)

Passing colors through a pixel shader in HLSL

I have a pixel shader that should simply pass the input color through, but instead I am getting a constant result. I think my syntax might be the problem. Here is the shader:
struct PixelShaderInput
{
    float3 color : COLOR;
};

struct PixelShaderOutput
{
    float4 color : SV_TARGET0;
};

PixelShaderOutput main(PixelShaderInput input)
{
    PixelShaderOutput output;
    output.color.rgba = float4(input.color, 1.0f); // input.color is 0.5, 0.5, 0.5; output is black
    // output.color.rgba = float4(0.5f, 0.5f, 0.5f, 1); // output is gray
    return output;
}
For testing, I have the vertex shader that precedes this in the pipeline passing a COLOR parameter of 0.5, 0.5, 0.5. Stepping through the pixel shader in Visual Studio, input.color has the correct values, and these are being assigned to output.color correctly. However, when rendered, the vertices that use this shader are all black.
Here is the vertex shader element description:
const D3D11_INPUT_ELEMENT_DESC vertexDesc[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
I'm not sure if it's important that the vertex shader takes colors as RGB and outputs the same, while the pixel shader outputs RGBA. The alpha channel is working correctly, at least.
If I comment out that first assignment, the one using input.color, and uncomment the other assignment, with the explicit values, then the rendered pixels are gray (as expected).
Any ideas on what I'm doing wrong here?
I'm using shader model 4 level 9_1, with optimizations disabled and debug info enabled.
output.color.rgba = float4(input.color, 1.0f);

If your input.color were a float4, you would be passing a float4 into another float4; in that case I think this should work:

output.color.rgba = float4(input.color.rgb, 1.0f);

If your pixel shader returns a float4 directly, this is all you need to pass it through:

return input.color;

If you want to change the colour to red, then do something like:

input.color = float4(1.0f, 0.0f, 0.0f, 1.0f);
return input.color;
Are you sure that your vertices are in the place they are supposed to be? You are starting to make me doubt my D3D knowledge. :P

I believe your problem is that you are only passing a color; both parts of the shader need a position in order to work. Your PixelShaderInput layout should be:

struct PixelShaderInput
{
    float4 position : SV_POSITION;
    float3 color : COLOR;
};
Could you maybe try this as your pixel shader?

float4 main(float3 color : COLOR) : SV_TARGET
{
    return float4(color, 1.0f);
}
I have never seen this kind of constructor:

float4(input.color, 1.0f);

This might be the problem, but I could be wrong. Try passing the float values one by one, like this:

float4(input.color[0], input.color[1], input.color[2], 1.0f);
Edit:
Actually, you might have to use float4 as the type for COLOR (http://msdn.microsoft.com/en-us/library/windows/desktop/bb509647(v=vs.85).aspx).
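Putting the suggestions together, a matched pass-through pair might look like this (a sketch based on the answers above, not the asker's confirmed fix; the key point is that the pixel shader's input mirrors the vertex shader's output, SV_POSITION included):

struct VSOutput
{
    float4 position : SV_POSITION;
    float3 color : COLOR;
};

float4 main(VSOutput input) : SV_TARGET0
{
    // float4(float3, float) is a valid HLSL constructor.
    return float4(input.color, 1.0f);
}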
