Strange D3D11 error when creating a shader [SharpDX/MonoGame] - direct3d

I'm a true beginner in shader programming and I'm using the MonoGame framework. I'm trying to follow along with the examples in this book:
Packt Publishing, 3D Graphics with XNA Game Studio 4.0
But I've been hitting a wall for the past 4 days trying to make the prelighting renderer work (chapter 3, if some of you know this book):
https://www.packtpub.com/books/content/advanced-lighting-3d-graphics-xna-game-studio-40
The code compiles fine, but any mesh with the prelighting shader applied fails to display.
When drilling down into the shader in the VS2013 Locals window, I find this kind of exception:
VertexShader 'this.depthNormalEffect._shaders[0].VertexShader' threw an exception of type 'System.Runtime.InteropServices.SEHException'
SharpDX.Direct3D11.VertexShader {System.Runtime.InteropServices.SEHException}
Which, with the DX native debugger, translates to:
D3D11 ERROR: ID3D11Device::CreateVertexShader: Shader must be vs_4_0, vs_4_1, or vs_5_0. Shader version provided: ps_4_0 [ STATE_CREATION ERROR #167: CREATEVERTEXSHADER_INVALIDSHADERTYPE]
D3D11: BREAK enabled for the previous message, which was: [ ERROR STATE_CREATION #167: CREATEVERTEXSHADER_INVALIDSHADERTYPE ]
First-chance exception at 0x75FAC41F (KernelBase.dll) in 3dTest.exe: 0x0000087A (parameters: 0x00000001, 0x003DB5D0, 0x003DC328).
D3D11 ERROR: ID3D11Device::CreatePixelShader: Shader must be ps_4_0, ps_4_1, or ps_5_0. Shader version provided: vs_4_0 [ STATE_CREATION ERROR #193: CREATEPIXELSHADER_INVALIDSHADERTYPE]
D3D11: BREAK enabled for the previous message, which was: [ ERROR STATE_CREATION #193: CREATEPIXELSHADER_INVALIDSHADERTYPE ]
The baffling thing (to me) is that, as the errors above show, the library seems to bind the wrong compiled shader function to the vertex or pixel shader stage.
Here is some of the shader code (the effect spans three shader files, with two rendering to render targets so the third can use them for the final pass):
float4x4 World;
float4x4 View;
float4x4 Projection;

struct VertexShaderInput
{
    float4 Position : SV_POSITION;
    float3 Normal : NORMAL0;
};

struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
    float2 Depth : TEXCOORD0;
    float3 Normal : TEXCOORD1;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4x4 viewProjection = mul(View, Projection);
    float4x4 worldViewProjection = mul(World, viewProjection);
    output.Position = mul(input.Position, worldViewProjection);
    output.Normal = mul(input.Normal, World);
    // the z and w components of Position correspond to the distance relative to the camera and to the far plane
    output.Depth.xy = output.Position.zw;
    return output;
}
struct PixelShaderOutput
{
    float4 Normal : COLOR0;
    float4 Depth : COLOR1;
};

PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output;
    output.Depth = input.Depth.x / input.Depth.y;
    output.Normal.xyz = (normalize(input.Normal).xyz / 2) + .5;
    output.Depth.a = 1;
    output.Normal.a = 1;
    return output;
}
technique Technique0
{
    pass Pass0
    {
        VertexShader = compile vs_4_0 VertexShaderFunction();
        PixelShader = compile ps_4_0 PixelShaderFunction();
    }
};
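For context, here is a minimal sketch (assumed names, not the book's actual light-mapping pass) of how a later pass can read back what this shader writes: COLOR0 holds the packed normal and COLOR1 holds the post-projection depth (z/w).
texture DepthTarget;
sampler DepthSampler = sampler_state { Texture = <DepthTarget>; };
texture NormalTarget;
sampler NormalSampler = sampler_state { Texture = <NormalTarget>; };

void ReadGBuffer(float2 texCoord, out float3 normal, out float depth)
{
    // undo the 0..1 packing applied to the normal in the pixel shader above
    normal = tex2D(NormalSampler, texCoord).xyz * 2 - 1;
    // the depth target stores z/w, i.e. non-linear depth in the 0..1 range
    depth = tex2D(DepthSampler, texCoord).r;
}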
Well, searches on Google left me dry on this issue, which makes me think I'm doing something absolutely stupid without realizing it (shader n00b, remember :D).
Any ideas?

Related

HLSL nointerpolation behaviour

For an experiment I want to pass all edges of a triangle to the pixel shader and manually calculate the pixel position from the triangle edges and barycentric coordinates.
In order to do this I wrote a geometry shader that passes all edges of my triangle to the pixel shader:
struct GeoIn
{
    float4 projPos : SV_POSITION;
    float3 position : POSITION;
};

struct GeoOut
{
    float4 projPos : SV_POSITION;
    float3 position : POSITION;
    float3 p[3] : TRIPOS;
};

[maxvertexcount(3)]
void main(triangle GeoIn i[3], inout TriangleStream<GeoOut> OutputStream)
{
    GeoOut o;
    // add triangle data
    for(uint idx = 0; idx < 3; ++idx)
    {
        o.p[idx] = i[idx].position;
    }
    // generate vertices
    for(idx = 0; idx < 3; ++idx)
    {
        o.projPos = i[idx].projPos;
        o.position = i[idx].position;
        OutputStream.Append(o);
    }
    OutputStream.RestartStrip();
}
The pixel shader outputs the manually reconstructed position:
struct PixelIn
{
    float4 projPos : SV_POSITION;
    float3 position : POSITION;
    float3 p[3] : TRIPOS;
    float3 bayr : SV_Barycentrics;
};

float4 main(PixelIn i) : SV_TARGET
{
    float3 pos = i.bayr.x * i.p[0] + i.bayr.y * i.p[1] + i.bayr.z * i.p[2];
    return float4(abs(pos), 1.0);
}
And I get the following (expected) result:
However, when I modify my PixelIn struct by adding nointerpolation to p[3]:
struct PixelIn
{
    ...
    nointerpolation float3 p[3] : TRIPOS;
};
I get:
I did not expect a different result, because I am not changing the values of p[] for a single triangle in the geometry shader. I tried debugging it by changing the output to float4(abs(i.p[0]), 1.0); with and without the qualifier. Without nointerpolation, the values of p[] do not vary within a triangle (which makes sense, because all three vertices should carry the same value). With nointerpolation, the values of p[] do change slightly. Why is that the case? I thought nointerpolation was not supposed to interpolate anything.
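Spelled out, the debug pixel shader I compared with and without the qualifier is just the return statement swapped (a minimal reconstruction of the variant described above):
// debug output: visualize the first stored edge position, which should be
// constant across each triangle since the GS writes the same p[] to all
// three vertices
float4 main(PixelIn i) : SV_TARGET
{
    return float4(abs(i.p[0]), 1.0);
}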
Edit:
This is the wireframe of my geometry:

Error 'overlapping register semantics not yet implemented' in VertexShader

I am trying to perform diffuse reflection in HLSL. Currently I am working on the vertex shader. Unfortunately I get the following error when trying to compile with fxc.exe:
C:\Users\BBaczek\Projects\MyApp\VertexShader.vs.hlsl(2,10-25): error X4500: overlapping register semantics not yet implemented 'c1'
C:\Users\BBaczek\Projects\MyApp\VertexShader.vs.hlsl(2,10-25): error X4500: overlapping register semantics not yet implemented 'c2'
C:\Users\BBaczek\Projects\MyApp\VertexShader.vs.hlsl(2,10-25): error X4500: overlapping register semantics not yet implemented 'c3'
Vertex shader code:
float4x4 WorldViewProj : register(c0);
float4x4 inv_world_matrix : register(c1);
float4 LightAmbient;
float4 LightPosition;

struct VertexData
{
    float4 Position : POSITION;
    float4 Normal : NORMAL;
    float3 UV : TEXCOORD;
};

struct VertexShaderOutput
{
    float4 Position : POSITION;
    float3 Color : COLOR;
};

VertexShaderOutput main(VertexData vertex)
{
    VertexShaderOutput output;
    vertex.Normal = normalize(vertex.Normal);
    float4 newColor = LightAmbient;

    vector obj_light = mul(LightPosition, inv_world_matrix);
    vector LightDir = normalize(obj_light - vertex.Position);
    float DiffuseAttn = max(0, dot(vertex.Normal, LightDir));
    vector light = { 0.8, 0.8, 0.8, 1 };
    newColor += light * DiffuseAttn;

    output.Position = mul(vertex.Position, WorldViewProj);
    output.Color = float3(newColor.r, newColor.g, newColor.b);
    return output;
}
And the command I use to perform the compilation:
fxc /T vs_2_0 /O3 /Zpr /Fo VertexShader.vs VertexShader.vs.hlsl
Why am I getting this error? What can I do to prevent this?
Found it out - I am not deleting this question because someone might find it useful.
What you need to do is change
float4x4 WorldViewProj : register(c0);
float4x4 inv_world_matrix : register(c1);
to
float4x4 WorldViewProj : register(c0);
float4x4 inv_world_matrix : register(c4);
I am not 100% sure of the details, but a float4x4 occupies four consecutive constant registers, so WorldViewProj at c0 actually uses c0 through c3 and overlaps with anything placed at c1, c2 or c3. Moving inv_world_matrix to c4 gives it its own free range, which is why the change works.
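To make the register layout explicit (the registers for the two light constants are my assumption; the original leaves them to the compiler):
// A float4x4 occupies four consecutive c registers, so the next free
// slot after WorldViewProj (c0..c3) is c4, not c1.
float4x4 WorldViewProj : register(c0);    // c0, c1, c2, c3
float4x4 inv_world_matrix : register(c4); // c4, c5, c6, c7
float4 LightAmbient : register(c8);       // one register
float4 LightPosition : register(c9);      // one register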

DirectX 11 texture mapping

I've looked for this and I'm sure it can be done.
Does anyone know how I can stop a texture being stretched over an oversized facet?
I remember in some game designs you would have the option of either stretching the image over the object or repeating (tiling) it.
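From what I remember, that choice comes down to the sampler's address mode plus texture coordinates greater than 1; a rough, self-contained sketch (not my actual shader) would be:
// Wrap repeats the texture every whole UV unit, so UVs running from 0 to 4
// tile the image 4x4 times across the quad, while Clamp just extends the
// edge texels beyond 1.
Texture2D TileTexture;

SamplerState RepeatSampler
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

float4 TiledPS(float4 position : SV_POSITION, float2 tex : TEXCOORD0) : SV_Target
{
    return TileTexture.Sample(RepeatSampler, tex * 4.0f);
}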
EDIT: Okay, so I have used pixel coordinates and the issue still remains. The vertices are fine. What I am trying to do is load a bitmap and keep its size the same regardless of the resolution or the size of the image; I want the image to only use 20x20 physical pixels.
I hope that makes sense, because I don't think my previous explanation did.
Texture2D Texture;

SamplerState SampleType
{
    Filter = TEXT_1BIT;
    // AddressU = Clamp;
    // AddressV = Clamp;
};

struct Vertex
{
    float4 position : POSITION;
    float2 tex : TEXCOORD0;
};

struct Pixel
{
    float4 position : SV_POSITION;
    float2 tex : TEXCOORD0;
};

Pixel FontVertexShader(Vertex input)
{
    return input;
}

float4 FPS(Pixel input) : SV_Target
{
    return Texture.Sample(SampleType, input.tex);
}
...
The answer is in hwnd = CreateWindow(...);
Using WS_POPUP meant I removed the borders and my texture was able to map itself correctly.
You need to use GetClientRect();
Thank you to everyone for your help. :)

Transparent objects covering each other

For an academic project I'm trying to write code for drawing billboards from scratch; now I'm at the point of making them translucent. I've managed to make them look good against the background, but they may still cover each other with their should-be-transparent corners, like in this picture:
I'm not sure what I'm doing wrong. This is the effect file I'm using to draw the billboards. I've omitted the parts related to the vertex shader, which I think are irrelevant right now.
//cut
texture Texture;
texture MaskTexture;

sampler Sampler = sampler_state
{
    Texture = (Texture);
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Point;
    AddressU = Clamp;
    AddressV = Clamp;
};

sampler MaskSampler = sampler_state
{
    Texture = (MaskTexture);
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Point;
    AddressU = Clamp;
    AddressV = Clamp;
};
//cut

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR;
    float2 TexCoord : TEXCOORD0;
};
//cut

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float4 result = tex2D(Sampler, input.TexCoord) * input.Color;
    float4 mask = tex2D(MaskSampler, input.TexCoord);
    float alpha = mask.r;
    result.rgb *= alpha;
    result.a = alpha;
    return result;
}

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();

        AlphaBlendEnable = true;
        SrcBlend = SrcAlpha;
        DestBlend = InvSrcAlpha;
    }
}
I've got two textures, named Texture and MaskTexture, the latter being grayscale. The billboards are, most likely, in the same vertex buffer and are drawn with a single call to GraphicsDevice.DrawIndexedPrimitives() from XNA.
I've got a feeling I'm not doing the whole thing right.
You have to draw them in order, from farthest to closest.
I have found a solution. The shaders are fine; the problem turned out to be in the XNA code, so sorry for drawing your attention to the wrong thing.
The solution is to switch the depth-stencil state to read-only depth before drawing the billboards:
device.DepthStencilState = DepthStencilState.DepthRead;
It can be set back to the default afterwards:
device.DepthStencilState = DepthStencilState.Default;
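A related trick, in case sorting every billboard is impractical: discard the nearly transparent texels in the pixel shader so they never write depth or color at all. This is not part of the fix above, just a hedged sketch built on the same shader (the 0.05 threshold and the function name are made up); semi-transparent pixels still need back-to-front drawing.
float4 PixelShaderFunctionCutout(VertexShaderOutput input) : COLOR0
{
    float4 result = tex2D(Sampler, input.TexCoord) * input.Color;
    float alpha = tex2D(MaskSampler, input.TexCoord).r;

    // reject texels that are essentially transparent so they cannot
    // occlude billboards drawn behind them
    clip(alpha - 0.05f);

    result.rgb *= alpha;
    result.a = alpha;
    return result;
}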

Managed DirectX Postprocessing Fragment Shader rendering problem

I'm using Managed DirectX 2.0 with C# and I'm attempting to apply a fragment shader to a texture built by rendering the screen to a texture using the RenderToSurface helper class.
The code I'm using to do this is:
RtsHelper.BeginScene(RenderSurface);
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.White, 1.0f, 0);
//pre-render shader setup
preProc.Begin(FX.None);
preProc.BeginPass(0);
//mesh drawing
mesh.DrawSubset(j);
preProc.CommitChanges();
preProc.EndPass();
preProc.End();
RtsHelper.EndScene(Filter.None);
which renders to my surface, RenderSurface, which is attached to a Texture object called RenderTexture.
Then I call the following code to render the surface to the screen, applying a second shader, "PostProc", to the rendered texture. This shader combines color values on a per-pixel basis and transforms the scene to grayscale. I'm following the tutorial here: http://rbwhitaker.wikidot.com/post-processing-effects
device.BeginScene();
{
    using (Sprite sprite = new Sprite(device))
    {
        sprite.Begin(SpriteFlags.DoNotSaveState);
        postProc.Begin(FX.None);
        postProc.BeginPass(0);
        sprite.Draw(RenderTexture, new Rectangle(0, 0, WINDOWWIDTH, WINDOWHEIGHT), new Vector3(0, 0, 0), new Vector3(0, 0, 0), Color.White);
        postProc.CommitChanges();
        postProc.EndPass();
        postProc.End();
        sprite.End();
    }
}
device.EndScene();
device.Present();
this.Invalidate();
However, all I see is the original rendered scene, as rendered to the texture, but unmodified by the second shader.
The FX file is below in case it's important.
//------------------------------ TEXTURE PROPERTIES ----------------------------
// This is the texture that Sprite will try to set before drawing
texture ScreenTexture;

// Our sampler for the texture, which is just going to be pretty simple
sampler TextureSampler = sampler_state
{
    Texture = <ScreenTexture>;
};

//------------------------ PIXEL SHADER ----------------------------------------
// This pixel shader will simply look up the color of the texture at the
// requested point and turn it into a shade of gray
float4 PixelShaderFunction(float2 TextureCoordinate : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(TextureSampler, TextureCoordinate);
    float value = (color.r + color.g + color.b) / 3;
    color.r = value;
    color.g = value;
    color.b = value;
    return color;
}

//-------------------------- TECHNIQUES ----------------------------------------
// This technique is pretty simple - only one pass, and only a pixel shader
technique BlackAndWhite
{
    pass Pass1
    {
        PixelShader = compile ps_1_1 PixelShaderFunction();
    }
}
Fixed it. I was using the wrong flags for the post-processing shader initialization.
It was:
sprite.Begin(SpriteFlags.DoNotSaveState);
postProc.Begin(FX.None);
It should be:
sprite.Begin(SpriteFlags.DoNotSaveState);
postProc.Begin(FX.DoNotSaveState);
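As a small aside on the FX file above: a perceptually weighted grayscale is usually preferred over the straight channel average. A hedged sketch (not part of the tutorial code, function name made up):
// weight the channels by their approximate perceived brightness
// (Rec. 601 luma coefficients) instead of averaging them equally
float4 LumaPixelShaderFunction(float2 TextureCoordinate : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(TextureSampler, TextureCoordinate);
    float value = dot(color.rgb, float3(0.299, 0.587, 0.114));
    return float4(value, value, value, color.a);
}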
