I am developing a small program that loads 3D models using Assimp, but it does not render the model. At first I thought the vertices and indices were not loaded correctly, but that is not the case (I dumped the vertices and indices to a txt file and they look right). I now think the problem might be the position of the model and the camera. The application does not report any error; it runs without complaint.
Vertex Struct:
struct Vertex {
    XMFLOAT3 position;
    XMFLOAT2 texture;
    XMFLOAT3 normal;
};
Input layout:
D3D12_INPUT_ELEMENT_DESC inputLayout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 },
    { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D12_APPEND_ALIGNED_ELEMENT, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 }
};
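As a sanity check, the byte offsets in the layout have to line up with the C++ struct members. A minimal sketch, assuming the Vertex struct above and offsetof from <cstddef>:

// Compile-time check that the input layout matches the Vertex struct.
static_assert(offsetof(Vertex, position) == 0, "POSITION offset mismatch");
static_assert(offsetof(Vertex, texture) == 12, "TEXCOORD offset mismatch");
static_assert(offsetof(Vertex, normal) == 20, "NORMAL (D3D12_APPEND_ALIGNED_ELEMENT) resolves to 12 + 8 = 20");
static_assert(sizeof(Vertex) == 32, "stride the vertex buffer view should use");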
Vertices, texcoords, normals and indices loader:
model = new ModelMesh();
std::vector<XMFLOAT3> positions;
std::vector<XMFLOAT3> normals;
std::vector<XMFLOAT2> texCoords;
std::vector<unsigned int> indices;
model->LoadMesh("beast.x", positions, normals, texCoords, indices);
// Create vertex buffer
if (positions.size() == 0)
{
    MessageBox(0, L"Vertices vector is empty.", L"Error", MB_OK);
}
Vertex* vList = new Vertex[positions.size()];
for (size_t i = 0; i < positions.size(); i++)
{
    Vertex vert;
    vert.position = positions[i];
    vert.texture = texCoords[i];
    vert.normal = normals[i];
    vList[i] = vert;
}
int vBufferSize = static_cast<int>(positions.size() * sizeof(Vertex)); // sizeof(vList) would measure only the pointer, not the vertex data
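A slightly safer sketch of the same fill, using std::vector so the element count and the byte size always come from the same container:

// Build the interleaved vertices in a std::vector instead of a raw array.
std::vector<Vertex> vertices(positions.size());
for (size_t i = 0; i < positions.size(); i++)
{
    vertices[i] = { positions[i], texCoords[i], normals[i] };
}
const UINT vBufferSize = static_cast<UINT>(vertices.size() * sizeof(Vertex));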
Building the camera and view matrices:
XMMATRIX tmpMat = XMMatrixPerspectiveFovLH(45.0f*(3.14f/180.0f), (float)Width / (float)Height, 0.1f, 1000.0f);
XMStoreFloat4x4(&cameraProjMat, tmpMat);
// set starting camera state
cameraPosition = XMFLOAT4(0.0f, 2.0f, -4.0f, 0.0f);
cameraTarget = XMFLOAT4(0.0f, 0.0f, 0.0f, 0.0f);
cameraUp = XMFLOAT4(0.0f, 1.0f, 0.0f, 0.0f);
// build view matrix
XMVECTOR cPos = XMLoadFloat4(&cameraPosition);
XMVECTOR cTarg = XMLoadFloat4(&cameraTarget);
XMVECTOR cUp = XMLoadFloat4(&cameraUp);
tmpMat = XMMatrixLookAtLH(cPos, cTarg, cUp);
XMStoreFloat4x4(&cameraViewMat, tmpMat);
cube1Position = XMFLOAT4(0.0f, 0.0f, 0.0f, 0.0f);
XMVECTOR posVec = XMLoadFloat4(&cube1Position);
tmpMat = XMMatrixTranslationFromVector(posVec);
XMStoreFloat4x4(&cube1RotMat, XMMatrixIdentity());
XMStoreFloat4x4(&cube1WorldMat, tmpMat);
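One thing worth checking here (not shown in the question): the eye position is hard-coded at (0, 2, -4), but a mesh loaded from "beast.x" may be far larger than that, leaving the camera inside or right up against it. A minimal sketch, assuming only the positions vector from the loader, that frames the camera from the mesh bounds:

#include <algorithm> // std::min, std::max
#include <cfloat>    // FLT_MAX

// Compute the axis-aligned bounds of the loaded mesh.
XMFLOAT3 mn(FLT_MAX, FLT_MAX, FLT_MAX);
XMFLOAT3 mx(-FLT_MAX, -FLT_MAX, -FLT_MAX);
for (const XMFLOAT3& p : positions)
{
    mn.x = std::min(mn.x, p.x); mx.x = std::max(mx.x, p.x);
    mn.y = std::min(mn.y, p.y); mx.y = std::max(mx.y, p.y);
    mn.z = std::min(mn.z, p.z); mx.z = std::max(mx.z, p.z);
}
// Look at the mesh center from twice its largest extent away.
XMFLOAT3 center((mn.x + mx.x) * 0.5f, (mn.y + mx.y) * 0.5f, (mn.z + mx.z) * 0.5f);
float extent = std::max({ mx.x - mn.x, mx.y - mn.y, mx.z - mn.z });
cameraTarget = XMFLOAT4(center.x, center.y, center.z, 0.0f);
cameraPosition = XMFLOAT4(center.x, center.y, center.z - 2.0f * extent, 0.0f);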
Update function:
XMStoreFloat4x4(&cube1WorldMat, worldMat);
XMMATRIX viewMat = XMLoadFloat4x4(&cameraViewMat); // load view matrix
XMMATRIX projMat = XMLoadFloat4x4(&cameraProjMat); // load projection matrix
XMMATRIX wvpMat = XMLoadFloat4x4(&cube1WorldMat) * viewMat * projMat; // create wvp matrix
XMMATRIX transposed = XMMatrixTranspose(wvpMat); // must transpose wvp matrix for the gpu
XMStoreFloat4x4(&cbPerObject.wvpMat, transposed); // store transposed wvp matrix in constant buffer
memcpy(cbvGPUAddress[frameIndex], &cbPerObject, sizeof(cbPerObject));
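One D3D12-specific detail to double-check around this memcpy: constant buffer views must start at 256-byte-aligned offsets. The question does not show how cbPerObject is declared; if it holds only the matrix, a padded sketch would be:

// CBVs must be aligned to D3D12_CONSTANT_BUFFER_DATA_PLACEMENT_ALIGNMENT (256).
struct ConstantBufferPerObject
{
    XMFLOAT4X4 wvpMat;   // 64 bytes
    float padding[48];   // pad the block up to 256 bytes
};
static_assert(sizeof(ConstantBufferPerObject) % 256 == 0,
    "per-object constant buffer data must be 256-byte aligned");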
VERTEX SHADER:
struct VS_INPUT
{
    float4 pos : POSITION;
    float2 tex : TEXCOORD;
    float3 normal : NORMAL;
};
struct VS_OUTPUT
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
    float3 normal : NORMAL;
};
cbuffer ConstantBuffer : register(b0)
{
    float4x4 wvpMat;
};
VS_OUTPUT main(VS_INPUT input)
{
    VS_OUTPUT output;
    output.pos = mul(input.pos, wvpMat);
    output.tex = input.tex;       // every member of the output struct must be
    output.normal = input.normal; // written, or the shader fails to compile
    return output;
}
I know it is a lot of code to read, but I don't understand what is going wrong here. I hope somebody can help me.
A few things to try/check:
Make your background clear color grey; that way, if you are drawing black triangles, you will see them.
Turn backface culling off in the rendering state, in case your triangles are wound back to front (see the state sketch after this list).
Turn the depth test off in the rendering state.
Turn off alpha blending.
You don't show your pixel shader, but try writing a constant color to see whether your lighting calculation is broken.
Use NVIDIA's Nsight tool or the Visual Studio graphics debugger to see what your graphics pipeline is doing.
Those are usually the things I try first...
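For the culling, depth, and blending suggestions above, the switches live in the pipeline state. A sketch, assuming psoDesc is the D3D12_GRAPHICS_PIPELINE_STATE_DESC used to create the PSO and the d3dx12.h helpers are available:

// Debug-friendly states: draw both windings, ignore depth, no blending.
psoDesc.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);
psoDesc.RasterizerState.CullMode = D3D12_CULL_MODE_NONE;  // show back faces too
psoDesc.DepthStencilState.DepthEnable = FALSE;            // rule out depth rejection
psoDesc.BlendState.RenderTarget[0].BlendEnable = FALSE;   // rule out blending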
Basically, I've created a plane and rotated it five times to form a cube, giving each side a different color, and added rotation driven by touch events.
Everything was good so far, but the cube turned out to be like this.
Please help, it's been driving me crazy!
My cube code:
public class PhotoCube {
    public ArrayList<FloatBuffer> vertexBufferList = new ArrayList<>();
    public int[][] lists = {
        {0, 1, 0, 0},    // front
        {1, 0, 0, -90},  // top
        {0, 1, 0, -90},  // left
        {0, 1, 0, 90},   //
        {1, 0, 0, 90},   // bottom
        {0, 1, 0, 180}   // right
    };
    // number of coordinates per vertex in this array
    static final int COORDS_PER_VERTEX = 3;
    static final float[] coords = { // in counterclockwise order:
        0.5f, 0.5f, 0.5f,    // top right
        -0.5f, 0.5f, 0.5f,   // top left
        -0.5f, -0.5f, 0.5f,  // bottom left
        0.5f, -0.5f, 0.5f    // bottom right
    };
    static final float[][] colorList = {
        {1f, 0f, 0f, 1f},
        {0f, 1f, 0f, 1f},
        {1f, 1f, 1f, 1f},
        {1f, 1f, 0f, 1f},
        {1f, 0f, 1f, 1f},
        {0f, 1f, 1f, 1f}
    };
    public PhotoCube() {
        final int maxVertices = 6;
        for (int[] list : lists) {
            float[] currentCoords = coords.clone();
            for (int i = 0; i < 12; i += 3) {
                float x = coords[i];
                float y = coords[i + 1];
                float z = coords[i + 2];
                double angle = Math.toRadians(list[3]);
                currentCoords[i] = (float) ((list[0] == 1) ? x : (list[1] == 1) ? x * Math.cos(angle) + z * Math.sin(angle) : x * Math.cos(angle) - y * Math.sin(angle));
                currentCoords[i + 1] = (float) ((list[0] == 1) ? y * Math.cos(angle) - z * Math.sin(angle) : (list[1] == 1) ? y : x * Math.sin(angle) + y * Math.cos(angle));
                currentCoords[i + 2] = (float) ((list[0] == 1) ? z * Math.cos(angle) + y * Math.sin(angle) : (list[1] == 1) ? z * Math.cos(angle) - x * Math.sin(angle) : z);
            }
            // (number of coordinate values * 4 bytes per float)
            ByteBuffer bb = ByteBuffer.allocateDirect(currentCoords.length * 4);
            // use the device hardware's native byte order
            bb.order(ByteOrder.nativeOrder());
            // create a floating point buffer from the ByteBuffer
            FloatBuffer vertexBuffer = bb.asFloatBuffer();
            // add the coordinates to the FloatBuffer
            vertexBuffer.put(currentCoords);
            // set the buffer to read the first coordinate
            vertexBuffer.position(0);
            if (vertexBufferList.size() == maxVertices) {
                vertexBufferList.remove(0);
            }
            vertexBufferList.add(vertexBuffer);
            // (# of coordinate values * 2 bytes per short)
            ByteBuffer dlb = ByteBuffer.allocateDirect(drawOrder.length * 2);
            dlb.order(ByteOrder.nativeOrder());
            drawListBuffer = dlb.asShortBuffer();
            drawListBuffer.put(drawOrder);
            drawListBuffer.position(0);
            createProgram();
            // creates OpenGL ES program executables
            GLES20.glLinkProgram(mProgram);
        }
    }
    public void createProgram() {
        // create empty OpenGL ES Program
        mProgram = GLES20.glCreateProgram();
        int vertexShader = MyGLRenderer.loadShader(GLES20.GL_VERTEX_SHADER, vertexShaderCode);
        int fragmentShader = MyGLRenderer.loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentShaderCode);
        // add the vertex shader to program
        GLES20.glAttachShader(mProgram, vertexShader);
        // add the fragment shader to program
        GLES20.glAttachShader(mProgram, fragmentShader);
    }
    public void draw(int order) {
        final int vertexStride = COORDS_PER_VERTEX * 4; // 4 bytes per float
        // Add program to OpenGL ES environment
        GLES20.glUseProgram(mProgram);
        // get handle to vertex shader's vPosition member
        int positionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
        // Enable a handle to the triangle vertices
        GLES20.glEnableVertexAttribArray(positionHandle);
        // Prepare the triangle coordinate data
        GLES20.glVertexAttribPointer(positionHandle, COORDS_PER_VERTEX,
            GLES20.GL_FLOAT, true,
            vertexStride, vertexBufferList.get(order));
        // get handle to fragment shader's vColor member
        int colorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
        // Set color for drawing the triangle
        GLES20.glUniform4fv(colorHandle, 1, colorList[order], 0);
        GLES20.glDrawElements(
            GLES20.GL_TRIANGLES,
            drawOrder.length,
            GLES20.GL_UNSIGNED_SHORT,
            drawListBuffer);
        // Disable vertex array
        //GLES20.glDisableVertexAttribArray(positionHandle);
    }
}
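As an aside, the hand-rolled trigonometry in the constructor's inner loop can be replaced with android.opengl.Matrix calls. A sketch, reading each lists row as {xAxis, yAxis, unused, angleInDegrees} and using the loop's x, y, z and currentCoords:

// Build the per-face rotation, then rotate each base-face vertex with it.
float[] rot = new float[16];
Matrix.setRotateM(rot, 0, list[3], list[0], list[1], 0f); // angle in degrees
float[] vecIn = {x, y, z, 1f};
float[] vecOut = new float[4];
Matrix.multiplyMV(vecOut, 0, rot, 0, vecIn, 0);
currentCoords[i] = vecOut[0];
currentCoords[i + 1] = vecOut[1];
currentCoords[i + 2] = vecOut[2];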
My Renderer code:
public class MyGLRenderer implements GLSurfaceView.Renderer {
    private PhotoCube mPhotoCube;
    public final float[] vPMatrix = new float[16];
    private final float[] projectionMatrix = new float[16];
    private final float[] viewMatrix = new float[16];
    private int vPMatrixHandle = -1;
    private volatile float mAngleX = 0;
    private volatile float mAngleY = 0;
    private float[] rotationMX = new float[16];
    private float[] rotationMY = new float[16];
    private float[] scratch = new float[16];
    public MyGLRenderer(Context context) {
    }
    public static int loadShader(int type, String shaderCode) {
        int shader = GLES20.glCreateShader(type);
        // add the source code to the shader and compile it
        GLES20.glShaderSource(shader, shaderCode);
        GLES20.glCompileShader(shader);
        return shader;
    }
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        // Set the background frame color
        GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        mPhotoCube = new PhotoCube();
        vPMatrixHandle = GLES20.glGetUniformLocation(mPhotoCube.mProgram, "uVPMatrix");
        onDrawFrame(unused);
    }
    public void onDrawFrame(GL10 unused) {
        // Redraw background color
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        GLES20.glEnable(GLES20.GL_BLEND);
        GLES20.glBlendFuncSeparate(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA, GLES20.GL_ZERO, GLES20.GL_ONE);
        Matrix.setRotateM(rotationMX, 0, -mAngleX, 0, 1, 0);
        Matrix.setLookAtM(viewMatrix, 0, 0, 0, 5, 0f, 0f, -5f, 0f, 1.0f, 0f);
        // Calculate the projection and view transformation
        Matrix.multiplyMM(vPMatrix, 0, projectionMatrix, 0, viewMatrix, 0);
        Matrix.multiplyMM(scratch, 0, vPMatrix, 0, rotationMX, 0);
        Matrix.setRotateM(rotationMY, 0, -mAngleY, scratch[0], scratch[4], scratch[8]);
        Matrix.multiplyMM(scratch, 0, scratch, 0, rotationMY, 0);
        vPMatrixHandle = GLES20.glGetUniformLocation(mPhotoCube.mProgram, "uMVPMatrix");
        GLES20.glUniformMatrix4fv(vPMatrixHandle, 1, false, scratch, 0);
        for (int i = 0; i < mPhotoCube.lists.length; i++) {
            mPhotoCube.draw(i);
        }
    }
    public void onSurfaceChanged(GL10 unused, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
        float ratio = (float) width / height;
        // this projection matrix is applied to object coordinates
        // in the onDrawFrame() method
        Matrix.frustumM(projectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
    }
}
Here's my shader code
final String vertexShaderCode =
"uniform mat4 uMVPMatrix;" +
"attribute vec4 vPosition;" +
"void main() {" +
" gl_Position = uMVPMatrix * vPosition;" +
"}";
final String fragmentShaderCode =
"precision mediump float;" +
"uniform vec4 vColor;" +
"void main() {" +
" gl_FragColor = vColor;" +
"}";
Thank you in advance.
You have to enable the depth test. The depth test ensures that fragments that lie behind other fragments are discarded:
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
When you enable the depth test you must also clear the depth buffer:
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
However, if you have transparent objects and want to use blending, the depth test must be disabled and you must draw the triangles of the meshes in sorted order, from back to front. Also see OpenGL depth sorting.
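Wired into the renderer from the question, that looks like this (a sketch; GLSurfaceView's default EGL configuration already includes a depth buffer):

public void onSurfaceCreated(GL10 unused, EGLConfig config) {
    GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    GLES20.glEnable(GLES20.GL_DEPTH_TEST); // discard occluded fragments
    mPhotoCube = new PhotoCube();
}
public void onDrawFrame(GL10 unused) {
    // clear the depth buffer together with the color buffer every frame
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    // ... draw the faces as before ...
}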
For an experiment I want to pass all corners of a triangle to the pixel shader and manually calculate the pixel position from the triangle corners and barycentric coordinates.
In order to do this I wrote a geometry shader that passes all corners of my triangle to the pixel shader:
struct GeoIn
{
    float4 projPos : SV_POSITION;
    float3 position : POSITION;
};
struct GeoOut
{
    float4 projPos : SV_POSITION;
    float3 position : POSITION;
    float3 p[3] : TRIPOS;
};
[maxvertexcount(3)]
void main(triangle GeoIn i[3], inout TriangleStream<GeoOut> OutputStream)
{
    GeoOut o;
    // add the triangle corner data shared by all three output vertices
    for (uint idx = 0; idx < 3; ++idx)
    {
        o.p[idx] = i[idx].position;
    }
    // generate vertices
    for (uint idx = 0; idx < 3; ++idx)
    {
        o.projPos = i[idx].projPos;
        o.position = i[idx].position;
        OutputStream.Append(o);
    }
    OutputStream.RestartStrip();
}
The pixel shader outputs the manually reconstructed position:
struct PixelIn
{
    float4 projPos : SV_POSITION;
    float3 position : POSITION;
    float3 p[3] : TRIPOS;
    float3 bayr : SV_Barycentrics;
};
float4 main(PixelIn i) : SV_TARGET
{
    float3 pos = i.bayr.x * i.p[0] + i.bayr.y * i.p[1] + i.bayr.z * i.p[2];
    return float4(abs(pos), 1.0);
}
And I get the following (expected) result:
However, when I modify my PixelIn struct by adding nointerpolation to p[3]:
struct PixelIn
{
    ...
    nointerpolation float3 p[3] : TRIPOS;
};
I get:
I did not expect a different result, because I am not changing the values of p[] for a single triangle in the geometry shader. I tried debugging it by changing the output to float4(abs(i.p[0]), 1.0); with and without interpolation. Without nointerpolation, the values of p[] do not vary within a triangle (which makes sense, because all three vertices carry the same value). With nointerpolation, the values of p[] do change slightly. Why is that the case? I thought nointerpolation was supposed to prevent any interpolation.
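For reference, this is the debug pixel shader described above, reconstructed from that description:

float4 main(PixelIn i) : SV_TARGET
{
    // Visualize only the first triangle corner; with nointerpolation on p[]
    // this is expected to be perfectly flat across each triangle.
    return float4(abs(i.p[0]), 1.0);
}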
Edit:
This is the wireframe of my geometry:
I'm having massive difficulties with setting up a perspective projection with DirectX.
I've been stuck on this for weeks; any help would be appreciated.
As far as I can tell, my pipeline setup is fine, in so far as the shader is getting exactly the data I am packing into it, so I'll forgo including that code unless asked for it.
I am using glm for maths (therefore column matrices) on the host side, set to use a left-handed coordinate system. However, I couldn't seem to get its projection matrix to work (I get a blank screen if I use the matrix I get from it), so I wrote this, which I hope is only temporary:
namespace
{
glm::mat4
perspective()
{
DirectX::XMMATRIX dx = DirectX::XMMatrixPerspectiveFovLH(glm::radians(60.0f), 1, 0.01, 1000.0f);
glm::mat4 glfromdx;
for (int c = 0; c < 4; c++) // all four columns; copying only the 3x3 part drops the projection terms
{
for (int r = 0; r < 4; r++)
{
glfromdx[c][r] = DirectX::XMVectorGetByIndex(dx.r[c], r);
}
}
return glfromdx;
}
}
This does hand me a projection matrix that works, but I don't really think it is working correctly. I'm not certain whether it's an actual projection, and my clip space is minuscule. If I move my camera around at all, it clips. Photos included.
photo 1: (straight on)
photo 2: (cam tilted up)
photo 3: (cam tilted left)
Here is my vertex shader
https://gist.github.com/anonymous/31726881a5476e6c8513573909bf4dc6
cbuffer offsets
{
    float4x4 model; //<------ currently just identity
    float4x4 view;
    float4x4 proj;
}
struct idata
{
    float3 position : POSITION;
    float4 colour : COLOR;
};
struct odata
{
    float4 position : SV_POSITION;
    float4 colour : COLOR;
};
void main(in idata dat, out odata odat)
{
    odat.colour = dat.colour;
    odat.position = float4(dat.position, 1.0f);
    // point' = point * model * view * projection
    float4x4 mvp = mul(model, mul(view, proj));
    // column matrices.
    odat.position = mul(mvp, odat.position);
}
And here is my fragment shader
https://gist.github.com/anonymous/342a707e603b9034deb65eb2c2101e91
struct idata
{
    float4 position : SV_POSITION;
    float4 colour : COLOR;
};
float4 main(in idata data) : SV_TARGET
{
    return float4(data.colour[0], data.colour[1], data.colour[2], 1.0f);
}
The full camera implementation is here:
https://gist.github.com/anonymous/b15de44f08852cbf4fa4d6b9fafa0d43
https://gist.github.com/anonymous/8702429212c411ddb04f5163b1d885b9
Finally, these are the vertices being passed in (3 position floats, 4 colour floats per vertex):
(0.0f, 0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f),
(0.45f, -0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f),
(-0.45f, -0.5f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f)
Edit:
I changed the MVP * position calculation in the shader to this:
odat.position = mul(mul(mul(proj, view), model), odat.position);
I am still having the same issue.
Edit 2:
Hmm, the above changes combined with a change to how the perspective is generated, now purely using glm:
glm::mat4 GLtoDX_NDC(glm::translate(glm::vec3(0.0, 0.0, 0.5)) *
                     glm::scale(glm::vec3(1.0, 1.0, 0.5)));
glm::mat4 gl = GLtoDX_NDC * glm::perspectiveFovLH(60.0f, 500.0f, 500.0f, 100.0f, 0.1f);
This works, in so far as objects no longer cull at about a micrometre and are projected at roughly their expected sizes. However, now everything is upside down, and rotating causes a shear...
I have a pixel shader that should simply pass the input color through, but instead I am getting a constant result. I think my syntax might be the problem. Here is the shader:
struct PixelShaderInput
{
    float3 color : COLOR;
};
struct PixelShaderOutput
{
    float4 color : SV_TARGET0;
};
PixelShaderOutput main(PixelShaderInput input)
{
    PixelShaderOutput output;
    output.color.rgba = float4(input.color, 1.0f); // input.color is 0.5, 0.5, 0.5; output is black
    // output.color.rgba = float4(0.5f, 0.5f, 0.5f, 1); // output is gray
    return output;
}
For testing, I have the vertex shader that precedes this in the pipeline passing a COLOR parameter of 0.5, 0.5, 0.5. Stepping through the pixel shader in Visual Studio, input.color has the correct values, and these are being assigned to output.color correctly. However, when rendered, the vertices that use this shader are all black.
Here is the vertex shader element description:
const D3D11_INPUT_ELEMENT_DESC vertexDesc[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
I'm not sure if it matters that the vertex shader takes colors as RGB and outputs the same, while the pixel shader outputs RGBA. The alpha channel is working correctly, at least.
If I comment out that first assignment, the one using input.color, and uncomment the other assignment, with the explicit values, then the rendered pixels are gray (as expected).
Any ideas on what I'm doing wrong here?
I'm using shader model 4 level 9_1, with optimizations disabled and debug info enabled.
output.color.rgba = float4(input.color, 1.0f);
If your input.color is actually a float4 (matching what the vertex shader outputs), you are passing a float4 plus an extra component into another float4. I think this should work:
output.color.rgba = float4(input.color.rgb, 1.0f);
This is all you need to pass it through simply:
return input.color;
If you want to change the colour to red, then do something like:
input.color = float4(1.0f, 0.0f, 0.0f, 1.0f);
return input.color;
Are you sure that your vertices are in the place they are supposed to be? You are starting to make me doubt my D3D knowledge. :P
I believe your problem is that you are only passing a color. Both stages of the shader need a position in order to work. Your PixelShaderInput layout should be:
struct PixelShaderInput
{
    float4 position : SV_POSITION;
    float3 color : COLOR;
};
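For the signatures to line up, the vertex shader's output would mirror that struct. A sketch of the VS side under the same assumption:

struct VertexShaderOutput
{
    float4 position : SV_POSITION; // consumed by the rasterizer
    float3 color : COLOR;          // interpolated and fed to the pixel shader
};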
Could you maybe try this as your pixel shader?
float4 main(float3 color : COLOR) : SV_TARGET
{
    return float4(color, 1.0f);
}
I have never seen this kind of constructor:
float4(input.color, 1.0f);
This might be the problem, but I could be wrong. Try passing the float values one by one, like this:
float4(input.color[0], input.color[1], input.color[2], 1.0f);
Edit:
Actually, you might have to use float4 as the type for COLOR (http://msdn.microsoft.com/en-us/library/windows/desktop/bb509647(v=vs.85).aspx).
For an academic project I'm trying to write code for drawing billboards from scratch; now I'm at the point of making them translucent. I've managed to make them look good against the background, but they can still cover each other with their should-be-transparent corners, like in this picture:
I'm not sure what I'm doing wrong. This is the effect file I'm using to draw the billboards. I've omitted the parts related to the vertex shader, which I think are irrelevant right now.
//cut
texture Texture;
texture MaskTexture;
sampler Sampler = sampler_state
{
    Texture = (Texture);
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Point;
    AddressU = Clamp;
    AddressV = Clamp;
};
sampler MaskSampler = sampler_state
{
    Texture = (MaskTexture);
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Point;
    AddressU = Clamp;
    AddressV = Clamp;
};
//cut
struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 Color : COLOR;
    float2 TexCoord : TEXCOORD0;
};
//cut
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    float4 result = tex2D(Sampler, input.TexCoord) * input.Color;
    float4 mask = tex2D(MaskSampler, input.TexCoord);
    float alpha = mask.r;
    result.rgb *= alpha;
    result.a = alpha;
    return result;
}
technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
        AlphaBlendEnable = true;
        SrcBlend = SrcAlpha;
        DestBlend = InvSrcAlpha;
    }
}
I've got two textures, named Texture and MaskTexture, the latter being grayscale. The billboards are most likely in the same vertex buffer and are drawn with a single call to GraphicsDevice.DrawIndexedPrimitives() from XNA.
I've got a feeling I'm not doing the whole thing right.
You have to draw them in order, from farthest to closest.
I have found a solution. The shaders are fine; the problem turned out to be in the XNA code, so sorry for drawing your attention to the wrong thing.
The solution is to set a read-only depth stencil state before drawing the billboards:
device.DepthStencilState = DepthStencilState.DepthRead;
It can be reset to the default afterwards:
device.DepthStencilState = DepthStencilState.Default;
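Putting both answers together, a draw pass would look roughly like this (a sketch in XNA terms; DrawBillboards is a hypothetical stand-in for the actual DrawIndexedPrimitives call, with the billboards sorted back to front as the other answer suggests):

device.DepthStencilState = DepthStencilState.DepthRead; // test depth, but don't write it
device.BlendState = BlendState.AlphaBlend;
DrawBillboards(); // hypothetical helper issuing the sorted draw call
device.DepthStencilState = DepthStencilState.Default;   // restore for opaque geometry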