MonoGame pixel shader - Texture passing as completely transparent

I'm trying to make a distortion shader for water in my game. I have the screen's render target and the water-mask render target, and I'm trying to simply capture the pixels underneath the mask, but I can't get it to work. When I pass the textures, it's as if they're both completely transparent. What could I be doing wrong?
Shader:
texture Screen;
texture Mask;
float2 Offset;

sampler ScreenSampler = sampler_state
{
    Texture = <Screen>;
};

sampler MaskSampler = sampler_state
{
    Texture = <Mask>;
};

float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR
{
    float4 mask = tex2D(MaskSampler, texCoord);
    float4 color = tex2D(ScreenSampler, texCoord + Offset);
    if (mask.a > 0)
    {
        return color;
    }
    return mask;
}

technique Technique0
{
    pass Pass0
    {
        PixelShader = compile ps_4_0 PixelShaderFunction();
    }
}
Render target:
Doldrums.Game.Graphics.GraphicsDevice.SetRenderTarget(renderTargetDistortion);
Doldrums.Game.Graphics.GraphicsDevice.Clear(Color.Transparent);
waterEffect.Parameters["Screen"].SetValue(Doldrums.RenderTarget);
waterEffect.Parameters["Mask"].SetValue(renderTargetWater);
waterEffect.Parameters["Offset"].SetValue(Doldrums.Camera.ToScreen(renderTargetPosition));
sprites.Begin(SpriteSortMode.Deferred, null, null, null, null, waterEffect);
sprites.Draw(renderTargetWater, Vector2.Zero, Color.White);
sprites.End();
Finally, rendering the render target:
sprites.Draw(renderTargetDistortion, renderTargetPosition, Color.White);

I had the exact same "issue" using MonoGame during my development. The problem is easily fixed: change this:
sprites.Begin(SpriteSortMode.Deferred, null, null, null, null, waterEffect);
sprites.Draw(renderTargetWater, Vector2.Zero, Color.White);
sprites.End();
to another sort mode, like this:
sprites.Begin(SpriteSortMode.Immediate, null, null, null, null, waterEffect);
sprites.Draw(renderTargetWater, Vector2.Zero, Color.White);
sprites.End();
(Most likely this is because Deferred only flushes the batch at End(), by which time the textures you set on the effect are no longer the ones bound to the device; Immediate applies the effect state at the moment of each Draw call.) Have fun :)

Related

3d model not rendered (DirectX 12)

I am developing a small program that loads 3D models using Assimp, but it does not render the model. At first I thought that the vertices and indices were not loaded correctly, but this is not the case (I printed the vertices and indices to a txt file). I think the problem might be with the position of the model and camera. The application does not return any error; it runs properly.
Vertex Struct:
struct Vertex {
    XMFLOAT3 position;
    XMFLOAT2 texture;
    XMFLOAT3 normal;
};
Input layout:
D3D12_INPUT_ELEMENT_DESC inputLayout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 }
};
Vertices, texcoords, normals and indices loader:
model = new ModelMesh();
std::vector<XMFLOAT3> positions;
std::vector<XMFLOAT3> normals;
std::vector<XMFLOAT2> texCoords;
std::vector<unsigned int> indices;
model->LoadMesh("beast.x", positions, normals, texCoords, indices);

// Create vertex buffer
if (positions.size() == 0)
{
    MessageBox(0, L"Vertices vector is empty.", L"Error", MB_OK);
}

Vertex* vList = new Vertex[positions.size()];
for (size_t i = 0; i < positions.size(); i++)
{
    Vertex vert;
    XMFLOAT3 pos = positions[i];
    vert.position = XMFLOAT3(pos.x, pos.y, pos.z);
    XMFLOAT3 norm = normals[i];
    vert.normal = XMFLOAT3(norm.x, norm.y, norm.z);
    XMFLOAT2 tex = texCoords[i];
    vert.texture = XMFLOAT2(tex.x, tex.y);
    vList[i] = vert;
}
int vBufferSize = sizeof(vList);
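// note: vList is a Vertex*, so sizeof(vList) is the size of a pointer, not of the
// vertex data; the buffer size is presumably meant to be positions.size() * sizeof(Vertex)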
Building the camera and view/projection matrices:
XMMATRIX tmpMat = XMMatrixPerspectiveFovLH(45.0f*(3.14f/180.0f), (float)Width / (float)Height, 0.1f, 1000.0f);
XMStoreFloat4x4(&cameraProjMat, tmpMat);
// set starting camera state
cameraPosition = XMFLOAT4(0.0f, 2.0f, -4.0f, 0.0f);
cameraTarget = XMFLOAT4(0.0f, 0.0f, 0.0f, 0.0f);
cameraUp = XMFLOAT4(0.0f, 1.0f, 0.0f, 0.0f);
// build view matrix
XMVECTOR cPos = XMLoadFloat4(&cameraPosition);
XMVECTOR cTarg = XMLoadFloat4(&cameraTarget);
XMVECTOR cUp = XMLoadFloat4(&cameraUp);
tmpMat = XMMatrixLookAtLH(cPos, cTarg, cUp);
XMStoreFloat4x4(&cameraViewMat, tmpMat);
cube1Position = XMFLOAT4(0.0f, 0.0f, 0.0f, 0.0f);
XMVECTOR posVec = XMLoadFloat4(&cube1Position);
tmpMat = XMMatrixTranslationFromVector(posVec);
XMStoreFloat4x4(&cube1RotMat, XMMatrixIdentity());
XMStoreFloat4x4(&cube1WorldMat, tmpMat);
Update function:
XMStoreFloat4x4(&cube1WorldMat, worldMat);
XMMATRIX viewMat = XMLoadFloat4x4(&cameraViewMat); // load view matrix
XMMATRIX projMat = XMLoadFloat4x4(&cameraProjMat); // load projection matrix
XMMATRIX wvpMat = XMLoadFloat4x4(&cube1WorldMat) * viewMat * projMat; // create wvp matrix
XMMATRIX transposed = XMMatrixTranspose(wvpMat); // must transpose wvp matrix for the gpu
XMStoreFloat4x4(&cbPerObject.wvpMat, transposed); // store transposed wvp matrix in constant buffer
memcpy(cbvGPUAddress[frameIndex], &cbPerObject, sizeof(cbPerObject));
VERTEX SHADER:
struct VS_INPUT
{
    float4 pos : POSITION;
    float2 tex : TEXCOORD;
    float3 normal : NORMAL;
};

struct VS_OUTPUT
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
    float3 normal : NORMAL;
};

cbuffer ConstantBuffer : register(b0)
{
    float4x4 wvpMat;
};
VS_OUTPUT main(VS_INPUT input)
{
    VS_OUTPUT output;
    output.pos = mul(input.pos, wvpMat);
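    // note: output.tex and output.normal are never assigned, so as posted this
    // will not compile under fxc (output used without being completely initialized);
    // the real shader presumably copies tex and normal through as well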
    return output;
}
I know it is a lot of code to read, but I don't understand what is going wrong with it. I hope somebody can help me.
A few things to try/check:
Make your background clear color grey. That way, if you are drawing black triangles, you will see them.
Turn backface culling off in the rendering state, in case your triangles are wound back to front.
Turn the depth test off in the rendering state.
Turn off alpha blending.
You don't show your pixel shader, but try writing a constant color to see if your lighting calculation is broken.
Use NVIDIA's Nsight tool, or the Visual Studio graphics debugger, to see what your graphics pipeline is doing.
Those are usually the things I try first; a minimal sketch of the relevant state settings follows below.
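As an illustration of the culling/depth/blending checks above, here is how those settings might look when filling out a D3D12 pipeline state description. This is a sketch, assuming the CD3DX12 helpers from d3dx12.h; psoDesc, commandList, and rtvHandle are illustrative names, not from the question:

// Debug-friendly pipeline state: no culling, no depth test, no blending.
D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};
psoDesc.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);
psoDesc.RasterizerState.CullMode = D3D12_CULL_MODE_NONE;  // draw triangles of either winding
psoDesc.BlendState = CD3DX12_BLEND_DESC(D3D12_DEFAULT);   // defaults leave blending disabled
psoDesc.DepthStencilState.DepthEnable = FALSE;            // rule out depth-test rejection
psoDesc.DepthStencilState.StencilEnable = FALSE;
// ... shaders, input layout, root signature, RTV/DSV formats as before ...

// A grey clear color makes unlit/black geometry visible against the background.
const float clearColor[] = { 0.5f, 0.5f, 0.5f, 1.0f };
commandList->ClearRenderTargetView(rtvHandle, clearColor, 0, nullptr);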

Aligning osgViewer Camera with an Image

As a precursor to render-to-texture, I am trying to simply align the osgViewer's camera with a texture-mapped plane.
This is the code I use for that:
int main()
{
    osgViewer::Viewer viewer;
    osg::ref_ptr<osg::Image> image = osgDB::readImageFile("path//to//file.png");
    if (!image.valid())
    {
        assert(false);
        return 1;
    }
    osg::ref_ptr<osg::Geometry> pictureQuad = osg::createTexturedQuadGeometry(
        osg::Vec3(0.f, 0.f, 0.f),
        osg::Vec3(image->s(), 0.f, 0.f),
        osg::Vec3(0.f, 0.f, image->t()),
        0.f,
        0.f,
        image->s(),
        image->t());
    osg::ref_ptr<osg::TextureRectangle> textureRect = new osg::TextureRectangle(image);
    textureRect->setFilter(osg::Texture::MIN_FILTER, osg::Texture::LINEAR);
    textureRect->setFilter(osg::Texture::MAG_FILTER, osg::Texture::LINEAR);
    textureRect->setWrap(osg::Texture::WRAP_S, osg::Texture::CLAMP_TO_EDGE);
    textureRect->setWrap(osg::Texture::WRAP_T, osg::Texture::CLAMP_TO_EDGE);
    pictureQuad->getOrCreateStateSet()->setTextureAttributeAndModes(0, textureRect.get(),
        osg::StateAttribute::ON);
    pictureQuad->getOrCreateStateSet()->setMode(GL_DEPTH_TEST, osg::StateAttribute::ON);
    osg::ref_ptr<osg::Geode> geode = new osg::Geode();
    geode->setDataVariance(osg::Object::DYNAMIC);
    geode->addDrawable(pictureQuad.get());
    osg::StateSet* state = geode->getOrCreateStateSet();
    state->setMode(GL_LIGHTING, osg::StateAttribute::PROTECTED | osg::StateAttribute::OFF);
    viewer.setSceneData(geode);
    osg::ref_ptr<osg::Camera> camera = viewer.getCamera();
    while (!viewer.done())
    {
        camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
        camera->setProjectionMatrix(osg::Matrix::ortho2D(0.f, image->s(), 0.f, image->t()));
        camera->setViewMatrixAsLookAt(osg::Vec3f(0.f, -100.f, 0.f),
                                      osg::Vec3f(image->s() * 0.5f, 0.f, image->t() * 0.5f),
                                      osg::Vec3f(0.f, 0.f, 1.f));
        viewer.frame();
    }
    return 0;
}
However, the results show me a view that is completely skewed. Can someone please point out the bug in my code?
In
camera->setViewMatrixAsLookAt(osg::Vec3f(0.f, -100.f, 0.f),                     // eye
                              osg::Vec3f(image->s()*0.5, 0.f, image->t()*0.5f), // center
                              osg::Vec3f(0.f, 0.f, 1.f));                       // up vector
your eye is on the ground, looking at the center of your image, which is up above it, so you would naturally expect a slanted image.
Try putting the eye at height image->t()*0.5 so the view is straight-on; see the sketch below.
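A sketch of that change, keeping everything else from the question as-is. Raising the eye to the image's mid-height removes the vertical slant; centering it horizontally as well (an extension of the same argument) makes the view fully straight-on, though the ortho window may then also need adjusting:

camera->setViewMatrixAsLookAt(
    osg::Vec3f(image->s() * 0.5f, -100.f, image->t() * 0.5f), // eye: centered on the quad, pulled back
    osg::Vec3f(image->s() * 0.5f, 0.f, image->t() * 0.5f),    // center: middle of the image plane
    osg::Vec3f(0.f, 0.f, 1.f));                               // up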

Passing colors through a pixel shader in HLSL

I have a pixel shader that should simply pass the input color through, but instead I am getting a constant result. I think my syntax might be the problem. Here is the shader:
struct PixelShaderInput
{
    float3 color : COLOR;
};

struct PixelShaderOutput
{
    float4 color : SV_TARGET0;
};

PixelShaderOutput main(PixelShaderInput input)
{
    PixelShaderOutput output;
    output.color.rgba = float4(input.color, 1.0f); // input.color is 0.5, 0.5, 0.5; output is black
    // output.color.rgba = float4(0.5f, 0.5f, 0.5f, 1); // output is gray
    return output;
}
For testing, I have the vertex shader that precedes this in the pipeline passing a COLOR parameter of 0.5, 0.5, 0.5. Stepping through the pixel shader in Visual Studio, input.color has the correct values, and these are assigned to output.color correctly. However, when rendered, the vertices that use this shader are all black.
Here is the vertex shader element description:
const D3D11_INPUT_ELEMENT_DESC vertexDesc[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
I'm not sure if it matters that the vertex shader takes colors as RGB and outputs the same, while the pixel shader outputs RGBA. The alpha channel is working correctly, at least.
If I comment out that first assignment, the one using input.color, and uncomment the other assignment, with the explicit values, then the rendered pixels are gray (as expected).
Any ideas on what I'm doing wrong here?
I'm using shader model 4 level 9_1, with optimizations disabled and debug info enabled.
output.color.rgba = float4(input.color, 1.0f);
Your input.color is a float4 and you are passing it into another float4; I think this should work:
output.color.rgba = float4(input.color.rgb, 1.0f);
This is all you need to pass it through simply:
return input.color;
If you want to change the colour to red, then do something like:
input.color = float4(1.0f, 0.0f, 0.0f, 1.0f);
return input.color;
Are you sure that your vertices are in the place they are supposed to be? You are starting to make me doubt my D3D knowledge. :P
I believe your problem is that you are only passing a color; BOTH parts of the shader NEED a position in order to work.
Your PixelShaderInput layout should be:
struct PixelShaderInput
{
    float4 position : SV_POSITION;
    float3 color : COLOR;
};
Could you maybe try this as your pixel shader?
float4 main(float3 color : COLOR) : SV_TARGET
{
    return float4(color, 1.0f);
}
I have never seen this kind of constructor:
float4(input.color, 1.0f);
This might be the problem, but I could be wrong. Try passing the float values one by one, like this:
float4(input.color[0], input.color[1], input.color[2], 1.0f);
Edit: Actually, you might have to use float4 as the type for COLOR (http://msdn.microsoft.com/en-us/library/windows/desktop/bb509647(v=vs.85).aspx)
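Pulling the thread's suggestions together, a minimal matching vertex/pixel shader pair might look like the sketch below (names are illustrative, and the position is passed through untransformed purely for the example). The key point is that the pixel shader's input struct matches the vertex shader's output struct, including SV_POSITION:

struct VSInput
{
    float3 position : POSITION;
    float3 color    : COLOR;
};

struct PSInput
{
    float4 position : SV_POSITION; // required by the rasterizer stage
    float3 color    : COLOR;       // interpolated across the triangle
};

PSInput VSMain(VSInput input)
{
    PSInput output;
    output.position = float4(input.position, 1.0f); // no transform, illustration only
    output.color = input.color;
    return output;
}

float4 PSMain(PSInput input) : SV_TARGET
{
    return float4(input.color, 1.0f); // pass the color straight through
}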

How to update Texture2D in pixel shader every frame (in D3D10)?

Using D3D10, I am drawing a 2D rectangle and want to fill it with a texture (bitmap) that should change a few times every second (like displaying video).
I am using a shader effect with a Texture2D variable, and trying to update an ID3D10EffectShaderResourceVariable and redraw the mesh.
My actual usage will be by copying bitmaps from memory, and using UpdateSubresource.
But it did not work, so I reduced it to a test that switches between two DDS images.
The result is that it draws the first image as expected, but then keeps drawing it instead of alternating between the two images.
I am new to D3D. Can you explain whether this method can work at all, or suggest the right way to do it?
The shader effect:
Texture2D txDiffuse;

SamplerState samLinear
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

struct VS_INPUT
{
    float4 Pos : POSITION;
    float2 Tex : TEXCOORD;
};

struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float2 Tex : TEXCOORD0;
};

PS_INPUT VS(VS_INPUT input)
{
    PS_INPUT output = (PS_INPUT)0;
    output.Pos = input.Pos;
    output.Tex = input.Tex;
    return output;
}

float4 PS(PS_INPUT input) : SV_Target
{
    return txDiffuse.Sample(samLinear, input.Tex);
}

technique10 Render
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_4_0, VS()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, PS()));
    }
}
Code (skipped many parts):
ID3D10ShaderResourceView* g_pTextureRV = NULL;
ID3D10EffectShaderResourceVariable* g_pDiffuseVariable = NULL;

D3DX10CreateEffectFromResource(gInstance, MAKEINTRESOURCE(IDR_RCDATA1), NULL, NULL, NULL,
    "fx_4_0", dwShaderFlags, 0, device, NULL, NULL, &g_pEffect, NULL, NULL);
g_pTechnique = g_pEffect->GetTechniqueByName("Render");
g_pDiffuseVariable = g_pEffect->GetVariableByName("txDiffuse")->AsShaderResource();

// this part is called on frame render:
device->CreateRenderTargetView(backBuffer, NULL, &rtView);
device->ClearRenderTargetView(rtView, ClearColor);
if (g_pTextureRV != NULL)
{
    g_pTextureRV->Release();
    g_pTextureRV = NULL;
}
D3DX10CreateShaderResourceViewFromFile(device, pCurrentDDSFilePath, NULL, NULL, &g_pTextureRV, NULL);
g_pDiffuseVariable->SetResource(g_pTextureRV);

D3D10_TECHNIQUE_DESC techDesc;
g_pTechnique->GetDesc(&techDesc);
for (UINT p = 0; p < techDesc.Passes; ++p)
{
    g_pTechnique->GetPassByIndex(p)->Apply(0);
    direct2dDrawingContext->dev->Draw(6, 0);
}
// ... present the current back buffer
One solution, not necessarily the best, but one that doesn't use custom shaders, follows. (I wrote it in C# / Managed DirectX, but it should be easy to transcode.)
Bitmap bmp; // the bitmap that you will use to update the texture
Texture tex; // the texture that DirectX will render

void Render()
{
    // render some stuff
    bmp = GetNextTextureFrame(); // whatever you do to update your bitmap
    Surface s = tex.GetSurfaceLevel(0);
    Graphics g = s.GetGraphics();
    //IntPtr hdc = g.GetHdc();
    //BitBlt(hdc, 0, 0, bmp.Width, bmp.Height, bmpHdc, 0, 0, 0xcc0020);
    //g.ReleaseHdc(hdc);
    g.DrawImageUnscaled(bmp, 0, 0);
    s.ReleaseGraphics();
    device.SetTexture(0, tex);
    // now render your primitives
    // render some more stuff
    // present
}
The commented-out lines are the way I actually did it, using an hBitmap and a DC with BitBlt, because it's faster than GDI+. A lot of people will probably tell you that the above is a bad way to do it, because of all the memory locking that has to occur, and they're probably right. But I was able to achieve 30 fps with multiple 1920x1080 textures, so regardless of whether it's proper, it works.
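For the asker's stated goal of copying bitmaps from memory, a minimal sketch of the UpdateSubresource route in D3D10 follows. This is an assumption-laden sketch, not the asker's code: width, height, pixels, and rowPitch are illustrative names, and the 32-bit RGBA format is assumed.

// Create the texture and its view once, bind once, then update the pixels each frame.
D3D10_TEXTURE2D_DESC desc = {};
desc.Width = width;                          // illustrative: your bitmap dimensions
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;    // assumption: 32-bit RGBA source data
desc.SampleDesc.Count = 1;
desc.Usage = D3D10_USAGE_DEFAULT;            // DEFAULT usage is what UpdateSubresource needs
desc.BindFlags = D3D10_BIND_SHADER_RESOURCE;

ID3D10Texture2D* texture = NULL;
device->CreateTexture2D(&desc, NULL, &texture);

ID3D10ShaderResourceView* srv = NULL;
device->CreateShaderResourceView(texture, NULL, &srv);
g_pDiffuseVariable->SetResource(srv);        // the view can stay bound across frames

// Per frame: copy the new bitmap bytes into the existing texture,
// where pixels points at width*height 32-bit pixels and rowPitch = width * 4.
device->UpdateSubresource(texture, 0, NULL, pixels, rowPitch, 0);
// ... then Apply() the pass and Draw() exactly as in the question.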

Transparent objects covering each other

For an academic project I'm trying to write code for drawing billboards from scratch; now I'm at the point of making them translucent. I've managed to make them look good against the background, but they may still cover each other with their should-be-transparent corners, like in this picture:
I'm not sure what I'm doing wrong. This is the effect file I'm using to draw the billboards. I've omitted the parts related to the vertex shader, which I think are irrelevant right now.
//cut
texture Texture;
texture MaskTexture;

sampler Sampler = sampler_state
{
    Texture = (Texture);
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Point;
    AddressU = Clamp;
    AddressV = Clamp;
};

sampler MaskSampler = sampler_state
{
    Texture = (MaskTexture);
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Point;
    AddressU = Clamp;
    AddressV = Clamp;
};

//cut

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR;
    float2 TexCoord : TEXCOORD0;
};

//cut

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float4 result = tex2D(Sampler, input.TexCoord) * input.Color;
    float4 mask = tex2D(MaskSampler, input.TexCoord);
    float alpha = mask.r;
    result.rgb *= alpha;
    result.a = alpha;
    return result;
}

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
        AlphaBlendEnable = true;
        SrcBlend = SrcAlpha;
        DestBlend = InvSrcAlpha;
    }
}
I've got two textures, named Texture and MaskTexture, the latter being grayscale. The billboards are most likely in the same vertex buffer and are drawn with a single call to GraphicsDevice.DrawIndexedPrimitives() in XNA.
I've got a feeling I'm not doing the whole thing right.
You have to draw them in order, from farthest to closest.
I have found a solution. The shaders are fine; the problem turned out to be in the XNA code, so sorry for drawing your attention to the wrong thing.
The solution is to set a read-only depth stencil state before drawing the billboards:
device.DepthStencilState = DepthStencilState.DepthRead;
DepthRead tests against the depth buffer but does not write to it, so one billboard's transparent corners can no longer hide the billboards drawn after it. The state can be set back to the default afterwards:
device.DepthStencilState = DepthStencilState.Default;
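In context, the draw sequence might look like this sketch (DrawOpaqueScene and DrawBillboards are hypothetical placeholders, not from the question):

// Opaque geometry first, with depth writes on (the default state).
device.DepthStencilState = DepthStencilState.Default;
DrawOpaqueScene();

// Billboards: depth test on, depth writes off.
device.DepthStencilState = DepthStencilState.DepthRead;
DrawBillboards();

// Restore the default for whatever is drawn next.
device.DepthStencilState = DepthStencilState.Default;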
