How to update a Texture2D in a pixel shader every frame (in D3D10)? - direct3d

Using D3D10, I am drawing a 2D rectangle and want to fill it with a texture (bitmap) that changes a few times per second (like displaying video).
I am using a shader effect with a Texture2D variable, and I try to update an ID3D10EffectShaderResourceVariable and redraw the mesh.
My actual usage will be copying bitmaps from memory and using UpdateSubresource.
But that did not work, so I reduced it to a test that switches between two DDS images.
The result is that the first image is drawn as expected, but it keeps being drawn instead of switching between the two images.
I am new to D3D. Can you explain whether this method can work at all, or suggest the right way to do it?
The shader effect:
Texture2D txDiffuse;
SamplerState samLinear
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

struct VS_INPUT
{
    float4 Pos : POSITION;
    float2 Tex : TEXCOORD;
};

struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float2 Tex : TEXCOORD0;
};

PS_INPUT VS( VS_INPUT input )
{
    PS_INPUT output = (PS_INPUT)0;
    output.Pos = input.Pos;
    output.Tex = input.Tex;
    return output;
}

float4 PS( PS_INPUT input ) : SV_Target
{
    return txDiffuse.Sample( samLinear, input.Tex );
}

technique10 Render
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_4_0, VS() ) );
        SetGeometryShader( NULL );
        SetPixelShader( CompileShader( ps_4_0, PS() ) );
    }
}
Code (many parts skipped):
ID3D10ShaderResourceView* g_pTextureRV = NULL;
ID3D10EffectShaderResourceVariable* g_pDiffuseVariable = NULL;
D3DX10CreateEffectFromResource(gInstance, MAKEINTRESOURCE(IDR_RCDATA1), NULL, NULL, NULL, "fx_4_0", dwShaderFlags, 0, device, NULL, NULL, &g_pEffect, NULL, NULL);
g_pTechnique = g_pEffect->GetTechniqueByName( "Render" );
g_pDiffuseVariable = g_pEffect->GetVariableByName( "txDiffuse" )->AsShaderResource();
// this part is called on Frame render:
device->CreateRenderTargetView( backBuffer, NULL, &rtView);
device->ClearRenderTargetView( rtView, ClearColor );
if (g_pTextureRV != NULL) {
    g_pTextureRV->Release();
    g_pTextureRV = NULL;
}
D3DX10CreateShaderResourceViewFromFile(device, pCurrentDDSFilePath, NULL, NULL, &g_pTextureRV, NULL );
g_pDiffuseVariable->SetResource( g_pTextureRV );
D3D10_TECHNIQUE_DESC techDesc;
g_pTechnique->GetDesc( &techDesc );
for( UINT p = 0; p < techDesc.Passes; ++p )
{
    g_pTechnique->GetPassByIndex( p )->Apply( 0 );
    direct2dDrawingContext->dev->Draw( 6, 0 );
}
// ... present the current back buffer

One solution, not necessarily the best, but one that doesn't use custom shaders, follows. (I wrote it in C# / Managed DirectX, but it should be easy to transcode.)
Bitmap bmp; //the bitmap that you will use to update the texture
Texture tex; //the texture that DirectX will render
void Render()
{
    //render some stuff
    bmp = GetNextTextureFrame(); //whatever you do to update your bitmap
    Surface s = tex.GetSurfaceLevel(0);
    Graphics g = s.GetGraphics();
    //IntPtr hdc = g.GetHdc();
    //BitBlt(hdc, 0, 0, bmp.Width, bmp.Height, bmpHdc, 0, 0, 0xcc0020);
    //g.ReleaseHdc(hdc);
    g.DrawImageUnscaled(bmp, 0, 0);
    s.ReleaseGraphics();
    device.SetTexture(0, tex);
    //now render your primitives
    //render some more stuff
    //present
}
The commented-out lines are the way I actually did it, using an hBitmap and a DC with BitBlt, because it's faster than GDI+. A lot of people will probably tell you that the above is a bad way to do it, because of all the memory locking that has to occur, and they're probably right. But I was able to achieve 30 fps with multiple 1920x1080 textures, so whether or not it's proper, it works.

Related

Multiple Render Targets (MRT) and OSG

Folks,
I have studied FBO, RTT and MRT in order to include this feature in my application, but I ran into some problems/doubts that my searching did not resolve. My scenario is described below. I'll be grateful if anyone can help me.
What do I want to do?
Attach two render textures (for the color and depth buffers) to the same camera;
Display only the color buffer in the post-render camera;
Read the images from the depth and color buffers in a final draw callback;
Write the collected float images to disk.
What have I got so far?
Allow rendering to the color or depth buffer separately, but not both on the same camera;
Display the color buffer in the post-render camera;
Read the color or depth buffer in the final draw callback;
Write the collected image (color or depth) to disk - but only for GL_UNSIGNED_BYTE images. For GL_FLOAT the following error is presented:
Error writing file ./Test-depth.png: Warning: Error in writing to "./Test-depth.png".
What are the doubts? (help!)
How can I properly render both textures (color and depth buffers) with the same camera?
How can I properly read both the depth and color buffers in the final draw callback?
When writing the images to disk, why does the error appear only for GL_FLOAT images and not for GL_UNSIGNED_BYTE?
Is attaching the render texture to an osg::Geode mandatory or optional in this process? Do I need two osg::Geode instances (one per buffer), or only one for both?
Please take a look in my current source code (what am I doing wrong here?):
// OSG includes
#include <osgDB/ReadFile>
#include <osgDB/WriteFile>
#include <osgViewer/Viewer>
#include <osg/Camera>
#include <osg/Geode>
#include <osg/Geometry>
#include <osg/Texture2D>
struct SnapImage : public osg::Camera::DrawCallback {
    SnapImage(osg::GraphicsContext* gc) {
        _image = new osg::Image;
        _depth = new osg::Image;
        if (gc->getTraits()) {
            int width = gc->getTraits()->width;
            int height = gc->getTraits()->height;
            _image->allocateImage(width, height, 1, GL_RGBA, GL_FLOAT);
            _depth->allocateImage(width, height, 1, GL_DEPTH_COMPONENT, GL_FLOAT);
        }
    }

    virtual void operator () (osg::RenderInfo& renderInfo) const {
        osg::Camera* camera = renderInfo.getCurrentCamera();
        osg::GraphicsContext* gc = camera->getGraphicsContext();
        if (gc->getTraits() && _image.valid()) {
            int width = gc->getTraits()->width;
            int height = gc->getTraits()->height;
            _image->readPixels(0, 0, width, height, GL_RGBA, GL_FLOAT);
            _depth->readPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT);
            osgDB::writeImageFile(*_image, "./Test-color.png");
            osgDB::writeImageFile(*_depth, "./Test-depth.png");
        }
    }

    osg::ref_ptr<osg::Image> _image;
    osg::ref_ptr<osg::Image> _depth;
};
osg::Camera* setupMRTCamera( osg::ref_ptr<osg::Camera> camera, std::vector<osg::Texture2D*>& attachedTextures, int w, int h ) {
    camera->setClearColor( osg::Vec4() );
    camera->setClearMask( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    camera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER_OBJECT );
    camera->setRenderOrder( osg::Camera::PRE_RENDER );
    camera->setViewport( 0, 0, w, h );

    osg::Texture2D* tex = new osg::Texture2D;
    tex->setTextureSize( w, h );
    tex->setSourceType( GL_FLOAT );
    tex->setSourceFormat( GL_RGBA );
    tex->setInternalFormat( GL_RGBA32F_ARB );
    tex->setResizeNonPowerOfTwoHint( false );
    tex->setFilter( osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR );
    tex->setFilter( osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR );
    attachedTextures.push_back( tex );
    camera->attach( osg::Camera::COLOR_BUFFER, tex );

    tex = new osg::Texture2D;
    tex->setTextureSize( w, h );
    tex->setSourceType( GL_FLOAT );
    tex->setSourceFormat( GL_DEPTH_COMPONENT );
    tex->setInternalFormat( GL_DEPTH_COMPONENT32 );
    tex->setResizeNonPowerOfTwoHint( false );
    attachedTextures.push_back( tex );
    camera->attach( osg::Camera::DEPTH_BUFFER, tex );

    return camera.release();
}
int main() {
    osg::ref_ptr< osg::Group > root( new osg::Group );
    root->addChild( osgDB::readNodeFile( "cow.osg" ) );

    unsigned int winW = 800;
    unsigned int winH = 600;
    osgViewer::Viewer viewer;
    viewer.setUpViewInWindow( 0, 0, winW, winH );
    viewer.setSceneData( root.get() );
    viewer.realize();

    // setup MRT camera
    std::vector<osg::Texture2D*> attachedTextures;
    osg::Camera* mrtCamera ( viewer.getCamera() );
    setupMRTCamera( mrtCamera, attachedTextures, winW, winH );

    // set RTT textures to quad
    osg::Geode* geode( new osg::Geode );
    geode->addDrawable( osg::createTexturedQuadGeometry(
        osg::Vec3(-1,-1,0), osg::Vec3(2.0,0.0,0.0), osg::Vec3(0.0,2.0,0.0)) );
    geode->getOrCreateStateSet()->setTextureAttributeAndModes( 0, attachedTextures[0] );
    geode->getOrCreateStateSet()->setMode( GL_LIGHTING, osg::StateAttribute::OFF );
    geode->getOrCreateStateSet()->setMode( GL_DEPTH_TEST, osg::StateAttribute::OFF );

    // configure postRenderCamera to draw fullscreen textured quad
    osg::Camera* postRenderCamera( new osg::Camera );
    postRenderCamera->setClearMask( 0 );
    postRenderCamera->setRenderTargetImplementation( osg::Camera::FRAME_BUFFER, osg::Camera::FRAME_BUFFER );
    postRenderCamera->setReferenceFrame( osg::Camera::ABSOLUTE_RF );
    postRenderCamera->setRenderOrder( osg::Camera::POST_RENDER );
    postRenderCamera->setViewMatrix( osg::Matrixd::identity() );
    postRenderCamera->setProjectionMatrix( osg::Matrixd::identity() );
    postRenderCamera->addChild( geode );
    root->addChild( postRenderCamera );

    // setup the callback
    SnapImage* finalDrawCallback = new SnapImage( viewer.getCamera()->getGraphicsContext() );
    mrtCamera->setFinalDrawCallback( finalDrawCallback );

    return viewer.run();
}
Thanks in advance,
Rômulo Cerqueira

Monogame pixel shader - Texture passing as completely transparent

I'm trying to make a distortion shader for water in my game. I have the screen's render target and the water-mask render target, and I'm trying to simply capture the pixels underneath the mask, but I can't get it to work. When I pass the textures, it's as if they're both completely transparent. What could I be doing wrong?
Shader:
texture Screen;
texture Mask;
float2 Offset;
sampler ScreenSampler = sampler_state
{
    Texture = <Screen>;
};

sampler MaskSampler = sampler_state
{
    Texture = <Mask>;
};

float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR
{
    float4 mask = tex2D(MaskSampler, texCoord);
    float4 color = tex2D(ScreenSampler, texCoord + Offset);
    if (mask.a > 0)
    {
        return color;
    }
    return mask;
}

technique Technique0
{
    pass Pass0
    {
        PixelShader = compile ps_4_0 PixelShaderFunction();
    }
}
Render target:
Doldrums.Game.Graphics.GraphicsDevice.SetRenderTarget(renderTargetDistortion);
Doldrums.Game.Graphics.GraphicsDevice.Clear(Color.Transparent);
waterEffect.Parameters["Screen"].SetValue(Doldrums.RenderTarget);
waterEffect.Parameters["Mask"].SetValue(renderTargetWater);
waterEffect.Parameters["Offset"].SetValue(Doldrums.Camera.ToScreen(renderTargetPosition));
sprites.Begin(SpriteSortMode.Deferred, null, null, null, null, waterEffect);
sprites.Draw(renderTargetWater, Vector2.Zero, Color.White);
sprites.End();
Finally, rendering the rendertarget:
sprites.Draw(renderTargetDistortion, renderTargetPosition, Color.White);
I had the exact same "issue" using MonoGame during my development. The problem here is easily fixed; change this:
sprites.Begin(SpriteSortMode.Deferred, null, null, null, null, waterEffect);
sprites.Draw(renderTargetWater, Vector2.Zero, Color.White);
sprites.End();
to another mode, like this:
sprites.Begin(SpriteSortMode.Immediate, null, null, null, null, waterEffect);
sprites.Draw(renderTargetWater, Vector2.Zero, Color.White);
sprites.End();
Have fun :)

DirectX 11 texture mapping

I've looked for this and I am so sure it can be done.
Does anyone know how I can stop a texture being stretched over an oversized facet?
I remember in some game designs you would have the option of either stretching the image over the object or running a repeat.
EDIT: Okay, so I have used pixel coordinates and the issue still remains. The vertices are fine. What I am trying to do is load a bitmap and keep its size the same regardless of the resolution or the size of the image; I want the image to use only 20x20 physical pixels.
I hope that makes sense, because I don't think my previous explanation did.
Texture2D Texture;
SamplerState SampleType
{
Filter = TEXT_1BIT;
// AddressU = Clamp;
// AddressV = Clamp;
};
struct Vertex
{
float4 position : POSITION;
float2 tex : TEXCOORD0;
};
struct Pixel
{
float4 position : SV_POSITION;
float2 tex : TEXCOORD0;
};
Pixel FontVertexShader(Vertex input)
{
return input;
}
float4 FPS(Pixel input) : SV_Target
{
return Texture.Sample(SampleType, input.tex);
}
...
The answer is in hwnd = CreateWindow(...);
Using WS_POPUP meant I removed the borders, and my texture was able to map itself correctly.
You need to use GetClientRect();
Thank you to everyone for your help. :)

C#/XNA/HLSL - Applying a pixel shader on 2D sprites affects the other sprites on the same render target

Background information
I have just started learning HLSL and decided to test what I have learned from the Internet by writing a simple 2D XNA 4.0 bullet-hell game.
I have written a pixel shader in order to change the color of bullets.
Here is the idea: the original texture of the bullet is mainly black, white and red. With the help of my pixel shader, bullets can be much more colorful.
But I'm not sure how and when the shader is applied to the spriteBatch in XNA 4.0, and when it ends. This may be the cause of the problem.
There were pass.Begin() and pass.End() in XNA 3.x, but pass.Apply() in XNA 4.0 confuses me.
In addition, this is the first time I have used a renderTarget, which may cause problems.
Symptom
It works, but only if all bullets in the bullet list are the same color.
If bullets of different colors are rendered, the colors come out wrong.
It seems that the pixel shader is not applied to the bullet texture, but to the renderTarget, which contains all the rendered bullets.
For example: here I have some red bullets and blue bullets. The last created bullet is a blue one. It seems that the pixel shader has added blue to the red ones, making them blue-violet.
If I continuously create bullets, the red bullets appear to switch between red and blue-violet. (I believe the blue ones are also switching, but it is less obvious.)
Code
Since I am new to HLSL, I don't really know exactly what I need to provide.
Here is everything that I believe is, or might be, related to the problem.
C# - Enemy bullet (or just Bullet):
protected SpriteBatch spriteBatch;
protected Texture2D texture;
protected Effect colorEffect;
protected Color bulletColor;
... // And some unrelated variables

public EnemyBullet(SpriteBatch spriteBatch, Texture2D texture, Effect colorEffect, BulletType bulletType, ...) // and other data, like velocity
{
    this.spriteBatch = spriteBatch;
    this.texture = texture;
    this.colorEffect = colorEffect;
    if (bulletType == BulletType.ARROW_S)
    {
        bulletColor = Color.Red; // The bullet will be either red
    }
    else
    {
        bulletColor = Color.Blue; // or blue.
    }
}

public void Update()
{
    ... // Update positions and other properties, but not the color.
}

public void Draw()
{
    colorEffect.Parameters["DestColor"].SetValue(bulletColor.ToVector4());
    int l = colorEffect.CurrentTechnique.Passes.Count();
    for (int i = 0; i < l; i++)
    {
        colorEffect.CurrentTechnique.Passes[i].Apply();
        spriteBatch.Draw(texture, Position, sourceRectangle, Color.White, (float)Math.PI - rotation_randian, origin, Scale, SpriteEffects.None, 0.0f);
    }
}
C# - Bullet manager:
private Texture2D bulletTexture;
private List<EnemyBullet> enemyBullets;
private const int ENEMY_BULLET_CAPACITY = 10000;
private RenderTarget2D bulletsRenderTarget;
private Effect colorEffect;
...
public EnemyBulletManager()
{
enemyBullets = new List<EnemyBullet>(ENEMY_BULLET_CAPACITY);
}
public void LoadContent(ContentManager content, SpriteBatch spriteBatch)
{
bulletTexture = content.Load<Texture2D>(#"Textures\arrow_red2");
bulletsRenderTarget = new RenderTarget2D(spriteBatch.GraphicsDevice, spriteBatch.GraphicsDevice.PresentationParameters.BackBufferWidth, spriteBatch.GraphicsDevice.PresentationParameters.BackBufferHeight, false, SurfaceFormat.Color, DepthFormat.None);
colorEffect = content.Load<Effect>(#"Effects\ColorTransform");
colorEffect.Parameters["ColorMap"].SetValue(bulletTexture);
}
public void Update()
{
int l = enemyBullets.Count();
for (int i = 0; i < l; i++)
{
if (enemyBullets[i].IsAlive)
{
enemyBullets[i].Update();
}
else
{
enemyBullets.RemoveAt(i);
i--;
l--;
}
}
}
// This function is called before Draw()
public void PreDraw()
{
// spriteBatch.Begin() is called outside this class, for reference:
// spriteBatch.Begin(SpriteSortMode.Immediate, null);
spriteBatch.GraphicsDevice.SetRenderTarget(bulletsRenderTarget);
spriteBatch.GraphicsDevice.Clear(Color.Transparent);
int l = enemyBullets.Count();
for (int i = 0; i < l; i++)
{
if (enemyBullets[i].IsAlive)
{
enemyBullets[i].Draw();
}
}
spriteBatch.GraphicsDevice.SetRenderTarget(null);
}
public void Draw()
{
// Before this function is called,
// GraphicsDevice.Clear(Color.Black);
// is called outside.
spriteBatch.Draw(bulletsRenderTarget, Vector2.Zero, Color.White);
// spriteBatch.End();
}
// This function will be responsible for creating new bullets.
public EnemyBullet CreateBullet(EnemyBullet.BulletType bulletType, ...)
{
EnemyBullet eb = new EnemyBullet(spriteBatch, bulletTexture, colorEffect, bulletType, ...);
enemyBullets.Add(eb);
return eb;
}
HLSL - Effects\ColorTransform.fx
float4 DestColor;

texture2D ColorMap;
sampler2D ColorMapSampler = sampler_state
{
    Texture = <ColorMap>;
};

struct PixelShaderInput
{
    float2 TexCoord : TEXCOORD0;
};

float4 PixelShaderFunction(PixelShaderInput input) : COLOR0
{
    float4 srcRGBA = tex2D(ColorMapSampler, input.TexCoord);
    float fmax = max(srcRGBA.r, max(srcRGBA.g, srcRGBA.b));
    float fmin = min(srcRGBA.r, min(srcRGBA.g, srcRGBA.b));
    float delta = fmax - fmin;
    float4 originalDestColor = float4(1, 0, 0, 1);
    float4 deltaDestColor = originalDestColor - DestColor;
    float4 finalRGBA = srcRGBA - (deltaDestColor * delta);
    return finalRGBA;
}

technique Technique1
{
    pass ColorTransform
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
I would appreciate it if anyone could help solve the problem. (Or optimize my shader; I really know very little about HLSL.)
In XNA 4 you should pass the effect directly to the SpriteBatch, as explained on Shawn Hargreaves' Blog.
That said, it seems to me that the problem is that after rendering your bullets to bulletsRenderTarget, you then draw that render target using the same spriteBatch, with the last effect still active. That would explain why the entire image is painted blue.
A solution would be to use two Begin()/End() passes of SpriteBatch, one with the effect and one without. Or just don't use a separate RenderTarget to begin with, which seems pointless in this case.
I'm also very much a beginner with pixel shaders so, just my 2c.

Screen-space square looking distorted in PIX

I have a simple function that creates a square covering the entire screen, which I use for applying post-processing effects; however, as far as I can tell, it has been the cause of countless errors.
When I run my code in PIX I get the following mesh, but the square should be straight and cover the screen, shouldn't it?
My vertex shader does no transformation and simply passes position information to the pixel shader.
The function that creates the square is as follows:
private void InitializeGeometry()
{
    meshes = new Dictionary<Vector3, Mesh>();
    //build array of vertices for one square
    ppVertex[] vertexes = new ppVertex[4];
    //vertexes[0].Position = new Vector3(-1f, -1f, 0.25f);
    vertexes[0].Position = new Vector3(-1, -1, 1f);
    vertexes[1].Position = new Vector3(-1, 1, 1f);
    vertexes[2].Position = new Vector3(1, -1, 1f);
    vertexes[3].Position = new Vector3(1, 1, 1f);
    vertexes[0].TexCoords = new Vector2(0, 0);
    vertexes[1].TexCoords = new Vector2(0, 1);
    vertexes[2].TexCoords = new Vector2(1, 0);
    vertexes[3].TexCoords = new Vector2(1, 1);
    //build index array for the vertices to build a quad from two triangles
    short[] indexes = { 0, 1, 2, 1, 3, 2 };

    //create the data stream to push the vertex data into the buffer
    DataStream vertices = new DataStream(Marshal.SizeOf(typeof(Vertex)) * 4, true, true);
    //load the data stream
    vertices.WriteRange(vertexes);
    //reset the data position
    vertices.Position = 0;

    //create the data stream to push the index data into the buffer
    DataStream indices = new DataStream(sizeof(short) * 6, true, true);
    //load the data stream
    indices.WriteRange(indexes);
    //reset the data position
    indices.Position = 0;

    //create the mesh object
    Mesh mesh = new Mesh();

    //create the description of the vertex buffer
    D3D.BufferDescription vbd = new BufferDescription();
    vbd.BindFlags = D3D.BindFlags.VertexBuffer;
    vbd.CpuAccessFlags = D3D.CpuAccessFlags.None;
    vbd.OptionFlags = ResourceOptionFlags.None;
    vbd.SizeInBytes = Marshal.SizeOf(typeof(Vertex)) * 4;
    vbd.Usage = ResourceUsage.Default;
    //create and assign the vertex buffer to the mesh, filling it with data
    mesh.VertexBuffer = new D3D.Buffer(device, vertices, vbd);

    //create the description of the index buffer
    D3D.BufferDescription ibd = new BufferDescription();
    ibd.BindFlags = D3D.BindFlags.IndexBuffer;
    ibd.CpuAccessFlags = D3D.CpuAccessFlags.None;
    ibd.OptionFlags = ResourceOptionFlags.None;
    ibd.SizeInBytes = sizeof(short) * 6;
    ibd.Usage = ResourceUsage.Default;
    //create and assign the index buffer to the mesh, filling it with data
    mesh.IndexBuffer = new D3D.Buffer(device, indices, ibd);

    //get vertex and index counts
    mesh.vertices = vertexes.GetLength(0);
    mesh.indices = indexes.Length;

    //close the data streams
    indices.Close();
    vertices.Close();

    meshes.Add(new Vector3(0), mesh);
}
and when I render the square:
private void DrawScene()
{
    lock (meshes)
    {
        foreach (Mesh mesh in meshes.Values)
        {
            if (mesh.indices > 0)
            {
                try
                {
                    //if (camera.SphereInFrustum(mesh.BoundingSphere, sphereRadius))
                    //{
                    context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(mesh.VertexBuffer, Marshal.SizeOf(typeof(Vertex)), 0));
                    context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UInt, 0);
                    context.DrawIndexed(mesh.indices, 0, 0);
                    //}
                }
                catch (Exception err)
                {
                    MessageBox.Show(err.Message);
                }
            }
        }
    }
}
EDIT: I've added the vertex shader being run
cbuffer EveryFrame : register(cb0)
{
    float3 diffuseColor : packoffset(c0);
    float3 lightdir : packoffset(c1);
};

cbuffer EveryMotion : register(cb1)
{
    float4x4 WorldViewProjection : packoffset(c0);
    float4x4 LightWorldViewProjection : packoffset(c4);
};

struct VS_IN
{
    float3 position : POSITION;
    float3 normal : NORMAL;
    float4 col : TEXCOORD;
};

struct PS_IN
{
    float4 position : SV_POSITION;
    float4 col : TEXCOORD;
    float3 normal : NORMAL;
};

PS_IN VS(VS_IN input)
{
    PS_IN output;
    output.position = float4(input.position, 1);
    output.col = input.col;
    output.normal = input.normal;
    return output;
}
Here's PIX's vertex output.
PreVS:
PostVS:
And here's the disassembly PIX generated when I chose to debug vertex 0:
//
// Generated by Microsoft (R) HLSL Shader Compiler 9.29.952.3111
//
// Input signature:
//
// Name                 Index   Mask Register SysValue  Format   Used
// -------------------- ----- ------ -------- -------- ------ ------
// POSITION                 0   xyz         0     NONE  float   xyz
// NORMAL                   0   xyz         1     NONE  float   xyz
// TEXCOORD                 0   xyzw        2     NONE  float
//
// Output signature:
//
// Name                 Index   Mask Register SysValue  Format   Used
// -------------------- ----- ------ -------- -------- ------ ------
// SV_POSITION              0   xyzw        0      POS  float   xyzw
// TEXCOORD                 0   xyzw        1     NONE  float   xyzw
// NORMAL                   0   xyz         2     NONE  float   xyz
//
vs_4_0
dcl_input v0.xyz
dcl_input v1.xyz
dcl_output_siv o0.xyzw, position
dcl_output o1.xyzw
dcl_output o2.xyz
mov o0.xyz, v0.xyzx
mov o0.w, l(1.000000)
mov o1.xyzw, l(1.000000, 1.000000, 1.000000, 1.000000)
mov o2.xyz, v1.xyzx
ret
// Approximately 5 instruction slots used
I've also added the input assembler:
private void SetPPInputAssembler(Shader shader)
{
    InputElement[] elements = new[] {
        new InputElement("POSITION", 0, Format.R32G32B32_Float, 0),
        new InputElement("NORMAL", 0, Format.R32G32B32_Float, 12, 0),
        new InputElement("TEXCOORD", 0, Format.R32G32_Float, 24, 0),
    };
    InputLayout layout = new InputLayout(device, shader.InputSignature, elements);
    context.InputAssembler.InputLayout = layout;
    context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
}
Obviously your vertex input positions don't match the values you want to feed in.
For the first vertex the values look good up until the z-coordinate of the texture coordinates.
You are defining a Vector2 in your program's Vertex struct, but a float4 in the vertex shader's input struct, so things get mixed up.
Just change VS_IN to this:
struct VS_IN
{
    float3 position : POSITION;
    float3 normal : NORMAL;
    float2 col : TEXCOORD; // float2 instead of float4
};
I'm not sure, though, whether you really want colors or rather texcoords. If you really do want colors, float4 would be right, but then you would have to change
vertexes[0].TexCoords = new Vector2(0, 0);
into
vertexes[0].TexCoords = new Vector4(0, 0, 0, 0);
Either way, one of those variables is misnamed, and that is probably the reason for the confusion.
