Vertex attribute not consumed by shader for my Vulkan program

I'm working through a tutorial to help me learn Vulkan in C++, and I'm stuck trying to change my vertex positions and colors. The program always shows a triangle with three colors interpolated between the vertices (one red, one blue, one green), and neither the positions of the vertices nor the colors change, even when I edit them manually. The result looks like this. If I try to change the color of one of the vertices, or change its position on the screen, nothing happens, and the triangle in the image I linked is displayed.
In my program I have my vertices set as:
const std::vector<Vertex> vertices = {
{{0.0f, -0.5f}, {0.0f, 1.0f, 0.0f}}, // green at the top
{{-0.5f, 0.5f}, {0.0f, 0.0f, 1.0f}}, // blue in the bottom left
{{0.5f, 0.5f}, {1.0f, 0.0f, 0.0f}}, // red in the bottom right
};
However, in the image that I provided, the red vertex is at the top while the blue and green ones are on the bottom... which confuses the heck out of me.
Additionally, a message in my console says that the information cannot be passed along to the vertex shader (at locations 0 and 1, which is where the positions and colors are passed, respectively). I've looked for solutions to this error and still cannot figure out for the life of me what the problem is.
The exact error is: validation error: Validation Performance Warning: [ UNASSIGNED-CoreValidation-Shader-OutputNotConsumed ] Object 0: handle = 0xec4bec0000000000b, type = VK_OBJECT_TYPE_SHADER_MODULE; | MessageID = 0x609a13b | Vertex attribute at location 0 not consumed by vertex shader. Location 0 is supposed to be the input for the position on the screen as a vec2, while location 1 is the color as a vec3.
The vertex shader code (GLSL) is as follows:
#version 450
layout(location = 0) in vec2 inPosition;
layout(location = 1) in vec3 inColor;
layout(location = 0) out vec3 fragColor;
void main() {
gl_Position = vec4(inPosition, 0.0, 1.0);
fragColor = inColor;
}
Meanwhile, the C++ code where I create the vertex buffer is:
void createVertexBuffer() {
VkBufferCreateInfo bufferInfo{};
bufferInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
bufferInfo.size = sizeof(vertices[0]) * vertices.size();
bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT;
bufferInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
if (vkCreateBuffer(device, &bufferInfo, nullptr, &vertexBuffer) != VK_SUCCESS) {
throw std::runtime_error("failed to create vertex buffer!");
}
}
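The tutorial follows this with an allocation-and-copy step, which I also have; roughly this (a sketch assuming the tutorial's findMemoryType helper and a vertexBufferMemory member):
VkMemoryRequirements memRequirements;
vkGetBufferMemoryRequirements(device, vertexBuffer, &memRequirements);
VkMemoryAllocateInfo allocInfo{};
allocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
allocInfo.allocationSize = memRequirements.size;
allocInfo.memoryTypeIndex = findMemoryType(memRequirements.memoryTypeBits,
    VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT);
if (vkAllocateMemory(device, &allocInfo, nullptr, &vertexBufferMemory) != VK_SUCCESS) {
    throw std::runtime_error("failed to allocate vertex buffer memory!");
}
vkBindBufferMemory(device, vertexBuffer, vertexBufferMemory, 0);
// copy the vertex data into the host-visible allocation
void* data;
vkMapMemory(device, vertexBufferMemory, 0, bufferInfo.size, 0, &data);
memcpy(data, vertices.data(), (size_t)bufferInfo.size);
vkUnmapMemory(device, vertexBufferMemory);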
Finally, the C++ code where I bind the vertices to the appropriate buffer is:
void RTXApp::recordCommandBuffer(VkCommandBuffer commandBuffer, uint32_t imageIndex) {
VkCommandBufferBeginInfo beginInfo{};
beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
beginInfo.flags = 0;
beginInfo.pInheritanceInfo = nullptr;
// error catching
if (vkBeginCommandBuffer(commandBuffer, &beginInfo) != VK_SUCCESS) {
throw std::runtime_error("failed to begin recording command buffer!");
}
VkRenderPassBeginInfo renderPassInfo{};
renderPassInfo.sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
renderPassInfo.renderPass = renderPass;
renderPassInfo.framebuffer = swapChainFramebuffers[imageIndex];
renderPassInfo.renderArea.offset = { 0, 0 };
renderPassInfo.renderArea.extent = swapChainExtent;
VkClearValue clearColor = { {{0.0f, 0.0f, 0.0f, 1.0f}} }; // set "default" color to black
renderPassInfo.clearValueCount = 1;
renderPassInfo.pClearValues = &clearColor;
// begin render pass
vkCmdBeginRenderPass(commandBuffer, &renderPassInfo, VK_SUBPASS_CONTENTS_INLINE);
// bind graphics pipeline object to command buffer
vkCmdBindPipeline(commandBuffer, VK_PIPELINE_BIND_POINT_GRAPHICS, graphicsPipeline);
// bind the vertex buffer to draw from
VkBuffer vertexBuffers[] = { vertexBuffer };
VkDeviceSize offsets[] = { 0 };
vkCmdBindVertexBuffers(commandBuffer, 0, 1, vertexBuffers, offsets);
// draw from the vertex buffer
vkCmdDraw(commandBuffer, static_cast<uint32_t>(vertices.size()), 1, 0, 0);
// end render pass
vkCmdEndRenderPass(commandBuffer);
// error catching
if (vkEndCommandBuffer(commandBuffer) != VK_SUCCESS) {
throw std::runtime_error("failed to record command buffer!");
}
}
I've tried both my version of the code and the code found at the tutorial that I'm using (found here), both seem to have the same effect. I'm not sure what I'm doing wrong, since I've followed the entire thing to the letter, trying very carefully to make sure all of the results are as I want them before moving on. I'm honestly not sure what to do anymore, since I've looked for solutions for this specific problem and I've found nothing.
Apologies if this isn't exactly the clearest, the code is super long, I'm not sure why this error is happening, and I'm a bit new to stack overflow in general. Any help would be appreciated, I'm losing my sanity trying to figure out what's wrong.
The code at the bottom of this page is basically the long version of what I have, and is what I'm trying to recreate with the code that I'm using. I'm on Windows 11, using MSVS 2022 and the Windows 10 Vulkan SDK, if that matters at all - my project settings are supposedly all good, and match the Windows version of this setup.
Edit: here is the part of the code with the VkPipelineVertexInputStateCreateInfo, as requested. I still cannot figure out what's going on.
auto vertShaderCode = readFile("shaders/vert.spv");
auto fragShaderCode = readFile("shaders/frag.spv");
VkShaderModule vertShaderModule = createShaderModule(vertShaderCode);
VkShaderModule fragShaderModule = createShaderModule(fragShaderCode);
// create the vertex shader stage of the pipeline
VkPipelineShaderStageCreateInfo vertShaderStageInfo{};
vertShaderStageInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
vertShaderStageInfo.stage = VK_SHADER_STAGE_VERTEX_BIT;
vertShaderStageInfo.module = vertShaderModule;
vertShaderStageInfo.pName = "main";
// create the fragment shader stages of the pipeline
VkPipelineShaderStageCreateInfo fragShaderStageInfo{};
fragShaderStageInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
fragShaderStageInfo.stage = VK_SHADER_STAGE_FRAGMENT_BIT;
fragShaderStageInfo.module = fragShaderModule;
fragShaderStageInfo.pName = "main";
// store steps of shader stages in an array (vertex first, then fragment)
VkPipelineShaderStageCreateInfo shaderStages[] = { vertShaderStageInfo, fragShaderStageInfo };
VkPipelineVertexInputStateCreateInfo vertexInputInfo{};
vertexInputInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO;
auto bindingDescription = Vertex::getBindingDescription();
auto attributeDescriptions = Vertex::getAttributeDescriptions();
vertexInputInfo.vertexBindingDescriptionCount = 1;
vertexInputInfo.pVertexBindingDescriptions = &bindingDescription;
vertexInputInfo.vertexAttributeDescriptionCount = static_cast<uint32_t>(attributeDescriptions.size());
vertexInputInfo.pVertexAttributeDescriptions = attributeDescriptions.data();
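For reference, my Vertex struct follows the tutorial, so the helpers referenced above look roughly like this (a sketch of the tutorial's vec2 position + vec3 color layout):
struct Vertex {
    glm::vec2 pos;
    glm::vec3 color;
    static VkVertexInputBindingDescription getBindingDescription() {
        VkVertexInputBindingDescription bindingDescription{};
        bindingDescription.binding = 0;
        bindingDescription.stride = sizeof(Vertex);
        bindingDescription.inputRate = VK_VERTEX_INPUT_RATE_VERTEX;
        return bindingDescription;
    }
    static std::array<VkVertexInputAttributeDescription, 2> getAttributeDescriptions() {
        std::array<VkVertexInputAttributeDescription, 2> attributeDescriptions{};
        attributeDescriptions[0].binding = 0;
        attributeDescriptions[0].location = 0; // layout(location = 0) in vec2 inPosition
        attributeDescriptions[0].format = VK_FORMAT_R32G32_SFLOAT;
        attributeDescriptions[0].offset = offsetof(Vertex, pos);
        attributeDescriptions[1].binding = 0;
        attributeDescriptions[1].location = 1; // layout(location = 1) in vec3 inColor
        attributeDescriptions[1].format = VK_FORMAT_R32G32B32_SFLOAT;
        attributeDescriptions[1].offset = offsetof(Vertex, color);
        return attributeDescriptions;
    }
};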

Related

Why isn't my OpenGL "hello world" rendering?

I've been hitting my head against the wall for two days on this. I'm trying to distill the simplest possible OpenGL Core ~2.0-3.2 drawing sequence so that I can build code off of it and really understand the API. The problem I'm running into is that the tutorials never seem to come with a helpful tag for what version of OpenGL they're using, and it's purely by luck I happened across documentation on how to even request a particular version from my context.
I'm certain that I have 3.2 core enabled now, as immediate mode drawing throws errors (that's a good thing! I want to leave immediate mode behind!), and I've tried to strip out anything fancy like coordinate transforms or triangle winding that might screw up my display. The problem is, I can't get anything to appear on-screen.
In prior iterations of this program, I did manage to get a white triangle on-screen sometimes, using random coordinates, but it seems to me like the vertices aren't getting set properly, and strange bit combinations produce strange results. Sign did not matter in where the triangles appeared - therefore my theory is that either the vertex information is not being transferred properly to the vertex shader, or the shader is mangling it. The problem is, I'm checking all the results and logs I can find, and the shader compiles and links beautifully.
I will provide links and code below, but in addition to just getting the triangle on-screen I'm wondering, can I get the shader program to spit text and/or diagnostic values out to its shaderInfoLog? That would simplify the debugging process immensely.
The various tutorials I'm consulting are...
http://arcsynthesis.org/gltut/Basics/Tutorial%2001.html
http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Introduction
https://en.wikipedia.org/wiki/Vertex_Buffer_Object
http://www.opengl.org/wiki/Tutorial2:_VAOs,_VBOs,_Vertex_and_Fragment_Shaders_(C_/_SDL)
http://antongerdelan.net/opengl/hellotriangle.html
http://lwjgl.org/wiki/index.php?title=The_Quad_with_DrawArrays
http://lwjgl.org/wiki/index.php?title=Using_Vertex_Buffer_Objects_(VBO)
http://www.opengl.org/wiki/Vertex_Rendering
http://www.opengl.org/wiki/Layout_Qualifier_(GLSL) (not present in provided code, but something I tried was #version 420 with layout qualifiers 0 (in_Position) and 1 (in_Color))
Code (LWJGL + Groovy)
package com.thoughtcomplex.gwdg.core
import org.lwjgl.input.Keyboard
import org.lwjgl.opengl.ContextAttribs
import org.lwjgl.opengl.Display
import org.lwjgl.opengl.GL11
import org.lwjgl.opengl.GL15
import org.lwjgl.opengl.GL20
import org.lwjgl.opengl.GL30
import org.lwjgl.opengl.PixelFormat
import org.lwjgl.util.glu.GLU
import java.nio.ByteBuffer
import java.nio.FloatBuffer
/**
* Created by Falkreon on 5/21/2014.
*/
class GwDG {
static final String vertexShader = """
#version 150
in vec2 in_Position;
in vec3 in_Color;
smooth out vec3 ex_Color;
void main(void) {
gl_Position = vec4(in_Position,0.0,1.0);
ex_Color = in_Color;
}
""";
static final String fragmentShader = """
#version 150
smooth in vec3 ex_Color;
out vec4 fragColor;
void main(void) {
//fragColor = vec4(ex_Color, 1.0);
fragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
""";
static int vaoHandle = -1;
static int vboHandle = -1;
protected static int colorHandle = -1;
static int vertexShaderHandle = -1;
static int fragmentShaderHandle = -1;
static int shaderProgram = -1;
protected static FloatBuffer vboBuffer = ByteBuffer.allocateDirect(6*4).asFloatBuffer();
protected static FloatBuffer colorBuffer = ByteBuffer.allocateDirect(9*4).asFloatBuffer();
public static void main(String[] args) {
//Quick and dirty hack to get something on the screen; this *works* for immediate mode drawing
System.setProperty("org.lwjgl.librarypath", "C:\\Users\\Falkreon\\IdeaProjects\\GwDG\\native\\windows");
Display.setTitle("Test");
ContextAttribs attribs = new ContextAttribs();
attribs.profileCompatibility = false;
attribs.profileCore = true;
attribs.majorVersion = 3;
attribs.minorVersion = 2;
Display.create( new PixelFormat().withDepthBits(24).withSamples(4).withSRGB(true), attribs );
//Kill any possible winding error
GL11.glDisable(GL11.GL_CULL_FACE);
vaoHandle = GL30.glGenVertexArrays();
GL30.glBindVertexArray(vaoHandle);
reportErrors("VERTEX_ARRAY");
vboHandle = GL15.glGenBuffers();
colorHandle = GL15.glGenBuffers();
vertexShaderHandle = GL20.glCreateShader(GL20.GL_VERTEX_SHADER);
fragmentShaderHandle = GL20.glCreateShader(GL20.GL_FRAGMENT_SHADER);
reportErrors("CREATE_SHADER");
GL20.glShaderSource( vertexShaderHandle, vertexShader );
GL20.glShaderSource( fragmentShaderHandle, fragmentShader );
GL20.glCompileShader( vertexShaderHandle );
String vertexResult = GL20.glGetShaderInfoLog( vertexShaderHandle, 700 );
if (!vertexResult.isEmpty()) System.out.println("Vertex result: "+vertexResult);
GL20.glCompileShader( fragmentShaderHandle );
String fragmentResult = GL20.glGetShaderInfoLog( fragmentShaderHandle, 700 );
if (!fragmentResult.isEmpty()) System.out.println("Fragment result: "+fragmentResult);
shaderProgram = GL20.glCreateProgram();
reportErrors("CREATE_PROGRAM");
GL20.glAttachShader( shaderProgram, vertexShaderHandle );
GL20.glAttachShader( shaderProgram, fragmentShaderHandle );
GL20.glLinkProgram(shaderProgram);
int result = GL20.glGetProgrami( shaderProgram, GL20.GL_LINK_STATUS );
if (result!=1) System.out.println("LINK STATUS: "+result);
reportErrors("SHADER_LINK");
//Attribs
int vertexParamID = GL20.glGetAttribLocation(shaderProgram, "in_Position");
int colorParamID = GL20.glGetAttribLocation(shaderProgram, "in_Color");
while (!Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) {
//Intentional flicker so I can see if something I did freezes or lags the program
GL11.glClearColor(Math.random()/6 as Float, Math.random()/8 as Float, (Math.random()/8)+0.4 as Float, 1.0f);
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT );
float[] coords = [
0.0f, 0.8f,
-0.8f, -0.8f,
0.8f, -0.8f
];
vboBuffer.clear();
coords.each {
vboBuffer.put it;
}
vboBuffer.flip();
float[] colors = [
1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f
];
colorBuffer.clear();
colors.each {
colorBuffer.put it;
}
colorBuffer.flip();
//System.out.println(dump(vboBuffer));
reportErrors("SETUP_TRIANGLE_DATA");
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboHandle);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vboBuffer, GL15.GL_STATIC_DRAW);
reportErrors("BIND_VBO_AND_FILL_DATA");
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, colorHandle);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, colorBuffer, GL15.GL_STATIC_DRAW);
reportErrors("BIND_COLOR_BUFFER_AND_FILL_DATA");
GL20.glEnableVertexAttribArray( vertexParamID );
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboHandle);
GL20.glVertexAttribPointer(
vertexParamID, 2, GL11.GL_FLOAT, false, 0, 0
);
GL20.glEnableVertexAttribArray( colorParamID );
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, colorHandle);
GL20.glVertexAttribPointer(
colorParamID, 3, GL11.GL_FLOAT, false, 0, 0
);
reportErrors("VERTEX_ATTRIB_POINTERS");
GL20.glUseProgram( shaderProgram );
GL11.glDrawArrays( GL11.GL_TRIANGLES, 0, 3 );
reportErrors("POST_RENDER");
Display.update(true);
Thread.sleep(12);
Keyboard.poll();
}
Display.destroy();
}
private static String dump(FloatBuffer f) {
String result = "[ ";
f.position(0);
//f.rewind();
for(it in 0..<f.limit()) {
result+= f.get(it);
if (it!=f.limit()-1) result+=", ";
}
result +=" ]";
f.position(0);
result;
}
private static void reportErrors(String prefix) {
int err = GL11.glGetError();
if (err!=0) System.out.println("["+prefix + "]: "+GLU.gluErrorString(err)+" ("+err+")");
}
}
Not that it matters, but the card is an ATI Radeon HD 8550G (part of an A8 APU) with support for GL4.
I'll update with more information at request, I just don't know what else might be helpful in diagnosing this.
Edit: I've updated the code above to reflect changes suggested by Reto Koradi. I've also got a variant of the code running with an alternate vertex declaration:
float[] coords = [
0.0f, 0.8f,
-0.8f, -0.8f,
Math.random(), Math.random(),
//0.8f, -0.8f,
];
This does actually produce something rasterized on the screen, but it is not at all what I would expect. Rather than simply relocating the bottom-right (top-right?) point, it flips between nothing, completely white, and the following two shapes:
If I replace the second or third vertex, this happens. If I replace the first vertex, nothing appears on-screen. So, to check my assumptions about which vertex is actually appearing in the center of the window, I tried the following:
static final String vertexShader = """
#version 150
in vec2 in_Position;
in vec3 in_Color;
smooth out vec3 ex_Color;
void main(void) {
gl_Position = vec4(in_Position,0.0,1.0);
ex_Color = vec3(1.0, 1.0, 1.0);
if (gl_VertexID==0) ex_Color = vec3(1.0, 0.0, 0.0);
if (gl_VertexID==1) ex_Color = vec3(0.0, 1.0, 0.0);
if (gl_VertexID==2) ex_Color = vec3(0.0, 0.0, 1.0);
//ex_Color = in_Color;
}
""";
static final String fragmentShader = """
#version 150
smooth in vec3 ex_Color;
out vec4 fragColor;
void main(void) {
fragColor = vec4(ex_Color, 1.0);
//fragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
""";
Simple, right? The vertex in the middle should probably be the first, "red" vertex, since it's the non-optional vertex without which I can't seem to draw anything on the screen. That is not actually the case. The half-screen blocks are always red as expected, but the left-facing triangle shape is always the color of whatever vertex I replace - replacing the second vertex makes it green, replacing the third vertex makes it blue. It definitely seems like both "-0.8, -0.8" and "0.8, -0.8" are so far off-screen that the triangle sections visible are effectively an infinitely thin line. But I don't think this is due to a transform - this behaves more like an alignment problem, with its arbitrary threshold around 0.9 that sends coordinates shooting off into the farlands. Like perhaps the significand of a value in the vertex buffer is winding up in the exponent of in_Position values.
Just to keep drilling down, I increased the amount of hardcoded GLSL to ignore the buffers completely -
static final String vertexShader = """
#version 150
in vec2 in_Position;
in vec3 in_Color;
smooth out vec3 ex_Color;
void main(void) {
gl_Position = vec4(in_Position,0.0,1.0);
ex_Color = vec3(1.0, 1.0, 1.0);
if (gl_VertexID==0) {
ex_Color = vec3(1.0, 0.0, 0.0);
gl_Position = vec4(0.0, 0.8, 0.0, 1.0);
}
if (gl_VertexID==1) {
ex_Color = vec3(0.0, 1.0, 0.0);
gl_Position = vec4(-0.8, -0.8, 0.0, 1.0);
}
if (gl_VertexID==2) {
ex_Color = vec3(0.0, 0.0, 1.0);
gl_Position = vec4(0.8, -0.8, 0.0, 1.0);
}
//ex_Color = in_Color;
}
""";
This produces the desired result, a nice big triangle with a different color on each vertex. Obviously I want to get this same triangle out of the vertex buffers but it's a really good start - with two vertices I can tell at least what direction the final vertex is shooting off in. In the case of the first vertex, it's definitely down.
I also figured out how to enable debug mode in the profile, and it's spitting color buffer errors at me. Good! That's a start. Now why isn't it throwing massive amounts of VBO errors?
Your code is not compatible with the core profile, so your reportErrors() should actually fire. In the core profile you have to use a vertex array object (VAO) for your vertex setup: generate one with glGenVertexArrays() and bind it with glBindVertexArray() before setting up your vertex state.
The use of gl_FragColor in the fragment shader is also deprecated. In the core profile you need to declare your own out variable, for example:
out vec4 FragColor;
...
FragColor = ...
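In plain C-style GL the required ordering looks roughly like this sketch (vbo and positionLoc stand in for your handles; the LWJGL calls above map to it one-to-one):
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao); // must be bound before any attribute setup
glBindBuffer(GL_ARRAY_BUFFER, vbo); // this state is now recorded into the VAO
glVertexAttribPointer(positionLoc, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(positionLoc);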
The answer finally came in a flash of inspiration from
http://lwjgl.org/forum/index.php?topic=5171.0
That post is fairly wrong in its details, so let me explain what finally went right. My dev environment is Java on an Intel chip. Java's ByteBuffers default to big-endian, regardless of the platform's native byte order. I was mystified that the exponent of my floats seemed to be winding up in the significand, but seeing that post it finally struck me: the easiest way that happens is if the byte order is flipped! A FloatBuffer created without setting the order is still going to be big-endian, while the likelihood that OpenGL expects anything besides native byte order is pretty much zero. Either none of the lwjgl resources I consulted mentioned this quirk, or I missed them.
The correct initializer for the ByteBuffers in this program is:
protected static FloatBuffer vboBuffer = ByteBuffer.allocateDirect(6*4).order( ByteOrder.nativeOrder( ) ).asFloatBuffer();
protected static FloatBuffer colorBuffer = ByteBuffer.allocateDirect(9*4).order( ByteOrder.nativeOrder( ) ).asFloatBuffer();
The important part is the ByteOrder.nativeOrder() call.
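You can see why flipped bytes scatter vertices so wildly with a quick check like the following (C++ just for illustration; the values are what a wrong-endian write of 0.8f produces):
#include <cstdint>
#include <cstdio>
#include <cstring>
int main() {
    float f = 0.8f; // a typical vertex coordinate, bit pattern 0x3F4CCCCD
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);
    // reverse the four bytes, as a wrong-endian write would
    std::uint32_t swapped = (bits >> 24) | ((bits >> 8) & 0x0000FF00u)
                          | ((bits << 8) & 0x00FF0000u) | (bits << 24);
    float g;
    std::memcpy(&g, &swapped, sizeof g);
    // mantissa bytes land in the exponent: prints roughly 0.8 -> -4.28e+08
    std::printf("%g -> %g\n", f, g);
}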

How to remove the effect of light / shadow on my model in XNA?

I am developing a small game, and I want to draw a ground plane (terrain) with a repeated texture. My problem is the rendered result: everything around my cube looks as if it is under a light shadow.
Is it possible to standardize the light or remove the shadow effect in my drawing function?
Sorry for my bad English.
Here is a screenshot to better understand my problem.
Here is my draw function (instancing a model with a vertex buffer):
// Draw Function (instancing model - vertexbuffer)
public void DrawModelHardwareInstancing(Model model,Texture2D texture, Matrix[] modelBones,
Matrix[] instances, Matrix view, Matrix projection)
{
if (instances.Length == 0)
return;
// If we have more instances than room in our vertex buffer, grow it to the necessary size.
if ((instanceVertexBuffer == null) ||
(instances.Length > instanceVertexBuffer.VertexCount))
{
if (instanceVertexBuffer != null)
instanceVertexBuffer.Dispose();
instanceVertexBuffer = new DynamicVertexBuffer(Game.GraphicsDevice, instanceVertexDeclaration,
instances.Length, BufferUsage.WriteOnly);
}
// Transfer the latest instance transform matrices into the instanceVertexBuffer.
instanceVertexBuffer.SetData(instances, 0, instances.Length, SetDataOptions.Discard);
foreach (ModelMesh mesh in model.Meshes)
{
foreach (ModelMeshPart meshPart in mesh.MeshParts)
{
// Tell the GPU to read from both the model vertex buffer plus our instanceVertexBuffer.
Game.GraphicsDevice.SetVertexBuffers(
new VertexBufferBinding(meshPart.VertexBuffer, meshPart.VertexOffset, 0),
new VertexBufferBinding(instanceVertexBuffer, 0, 1)
);
Game.GraphicsDevice.Indices = meshPart.IndexBuffer;
// Set up the instance rendering effect.
Effect effect = meshPart.Effect;
//effect.CurrentTechnique = effect.Techniques["HardwareInstancing"];
effect.Parameters["World"].SetValue(modelBones[mesh.ParentBone.Index]);
effect.Parameters["View"].SetValue(view);
effect.Parameters["Projection"].SetValue(projection);
effect.Parameters["Texture"].SetValue(texture);
// Draw all the instance copies in a single call.
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
pass.Apply();
Game.GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0,
meshPart.NumVertices, meshPart.StartIndex,
meshPart.PrimitiveCount, instances.Length);
}
}
}
}
// ### END FUNCTION DrawModelHardwareInstancing
The problem is the cube mesh you are using. The normals are averaged, but I guess you want them to be orthogonal to the faces of the cubes.
You will have to use a total of 24 vertices (4 for each side) instead of 8. Each corner will have 3 vertices with the same position but different normals, one for each adjacent face.
If the FBX exporter cannot be configured to export the normals correctly, simply create your own cube mesh:
var vertices = new VertexPositionNormalTexture[24];
// Initialize the vertices, set position and texture coordinates
// ...
// Set normals
// front face
vertices[0].Normal = new Vector3(1, 0, 0);
vertices[1].Normal = new Vector3(1, 0, 0);
vertices[2].Normal = new Vector3(1, 0, 0);
vertices[3].Normal = new Vector3(1, 0, 0);
// back face
vertices[4].Normal = new Vector3(-1, 0, 0);
vertices[5].Normal = new Vector3(-1, 0, 0);
vertices[6].Normal = new Vector3(-1, 0, 0);
vertices[7].Normal = new Vector3(-1, 0, 0);
// ...
It looks like you've got improperly calculated / no normals.
Look at this example, specifically part 3.
A normal is a vector that describes the direction a surface faces at a particular vertex or polygon; lighting calculations use it to work out how light reflects off the surface.
I like this picture as a demonstration: the blue lines are the normal direction at each particular point on the curve.
In XNA, you can calculate the normal of a polygon with vertices vert1, vert2, and vert3 like so:
Vector3 dir = Vector3.Cross(vert2 - vert1, vert3 - vert1);
Vector3 norm = Vector3.Normalize(dir);
In a lot of cases this is done automatically by modelling software so the calculation is unnecessary. You probably do need to perform that calculation if you're creating your cubes in code though.

DirectX 11: text output, using your own font texture

I'm learning DirectX, using the book "Sherrod A., Jones W. - Beginning DirectX 11 Game Programming - 2011" Now I'm exploring the 4th chapter about drawing text.
Please help me fix the function I'm using to draw a string on the screen. I've already loaded the font texture, and in the function I create sprites for the letters and define texture coordinates for them. This compiles correctly, but doesn't draw anything. What's wrong?
bool DirectXSpriteGame :: DrawString(char* StringToDraw, float StartX, float StartY)
{
//VAR
HRESULT D3DResult; //The result of D3D functions
int i; //Counters
const int IndexA = static_cast<char>('A'); //ASCII index of letter A
const int IndexZ = static_cast<char>('Z'); //ASCII index of letter Z
int StringLenth = strlen(StringToDraw); //Length of the string to draw
float ScreenCharWidth = static_cast<float>(LETTER_WIDTH) / static_cast<float>(SCREEN_WIDTH); //Width of the single char on the screen(in %)
float ScreenCharHeight = static_cast<float>(LETTER_HEIGHT) / static_cast<float>(SCREEN_HEIGHT); //Height of the single char on the screen(in %)
float TexelCharWidth = 1.0f / static_cast<float>(LETTERS_NUM); //Width of the char texel(in the texture %)
float ThisStartX; //The start x of the current letter being drawn
float ThisStartY; //The start y of the current letter being drawn
float ThisEndX; //The end x of the current letter being drawn
float ThisEndY; //The end y of the current letter being drawn
int LetterNum; //Letter number in the loaded font
int ThisLetter; //The current letter
D3D11_MAPPED_SUBRESOURCE MapResource; //Map resource
VertexPos* ThisSprite; //Vertecies of the current sprite, drawing
//VAR
//Clamping string, if too long
if(StringLenth > LETTERS_NUM)
{
StringLenth = LETTERS_NUM;
}
//Mapping resource
D3DResult = _DeviceContext -> Map(_vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &MapResource);
if(FAILED(D3DResult))
{
throw("Failed to map resource");
}
ThisSprite = (VertexPos*)MapResource.pData;
for(i = 0; i < StringLenth; i++)
{
//Creating geometry for the letter sprite
ThisStartX = StartX + ScreenCharWidth * static_cast<float>(i);
ThisStartY = StartY;
ThisEndX = ThisStartX + ScreenCharWidth;
ThisEndY = StartY + ScreenCharHeight;
ThisSprite[0].Position = XMFLOAT3(ThisEndX, ThisEndY, 1.0f);
ThisSprite[1].Position = XMFLOAT3(ThisEndX, ThisStartY, 1.0f);
ThisSprite[2].Position = XMFLOAT3(ThisStartX, ThisStartY, 1.0f);
ThisSprite[3].Position = XMFLOAT3(ThisStartX, ThisStartY, 1.0f);
ThisSprite[4].Position = XMFLOAT3(ThisStartX, ThisEndY, 1.0f);
ThisSprite[5].Position = XMFLOAT3(ThisEndX, ThisEndY, 1.0f);
ThisLetter = static_cast<char>(StringToDraw[i]);
//Defining the letter place(number) in the font
if(ThisLetter < IndexA || ThisLetter > IndexZ)
{
//Invalid character, the last character in the font, loaded
LetterNum = IndexZ - IndexA + 1;
}
else
{
LetterNum = ThisLetter - IndexA;
}
//Unwraping texture on the geometry
ThisStartX = TexelCharWidth * static_cast<float>(LetterNum);
ThisStartY = 0.0f;
ThisEndY = 1.0f;
ThisEndX = ThisStartX + TexelCharWidth;
ThisSprite[0].TextureCoords = XMFLOAT2(ThisEndX, ThisEndY);
ThisSprite[1].TextureCoords = XMFLOAT2(ThisEndX, ThisStartY);
ThisSprite[2].TextureCoords = XMFLOAT2(ThisStartX, ThisStartY);
ThisSprite[3].TextureCoords = XMFLOAT2(ThisStartX, ThisStartY);
ThisSprite[4].TextureCoords = XMFLOAT2(ThisStartX, ThisEndY);
ThisSprite[5].TextureCoords = XMFLOAT2(ThisEndX, ThisEndY);
ThisSprite += VERTEX_IN_RECT_NUM;
}
//Rewind ThisSprite back to the start of the mapped buffer (note the empty loop body)
for(i = 0; i < StringLenth; i++, ThisSprite -= VERTEX_IN_RECT_NUM);
_DeviceContext -> Unmap(_vertexBuffer, 0);
_DeviceContext -> Draw(VERTEX_IN_RECT_NUM * StringLenth, 0);
return true;
}
Although the piece of code constructing the vertex array seems correct to me at first glance, it looks like you are trying to draw your vertices with a shader that has not been set yet!
It is difficult to answer precisely without seeing the whole code, but I can guess that you will need to do something like this:
1) Create the vertex and pixel shaders, compiling them first from their respective buffers.
2) Create the input layout description, which describes the input buffers that will be read by the input assembler stage. It has to match your VertexPos structure and your shader's input structure (see the sketch after the snippet below).
3) Set the shader parameters.
4) Only now can you set the shader rendering state: set the input layout, as well as the vertex and pixel shaders that will be used to render your triangles, with something like:
_DeviceContext -> Unmap(_vertexBuffer, 0);
_DeviceContext->IASetInputLayout(myInputLayout);
_DeviceContext->VSSetShader(myVertexShader, NULL, 0); // Set Vertex shader
_DeviceContext->PSSetShader(myPixelShader, NULL, 0); // Set Pixel shader
_DeviceContext -> Draw(VERTEX_IN_RECT_NUM * StringLenth, 0);
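For step 2, a rough sketch of the input layout creation, assuming VertexPos holds a position plus one set of texture coordinates and that myDevice and vertexShaderBlob are your device and compiled vertex shader bytecode:
D3D11_INPUT_ELEMENT_DESC layoutDesc[] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
HRESULT hr = myDevice->CreateInputLayout(
    layoutDesc, ARRAYSIZE(layoutDesc), // must match the vertex shader input signature
    vertexShaderBlob->GetBufferPointer(), vertexShaderBlob->GetBufferSize(),
    &myInputLayout);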
This link should help you achieve what you want to do : http://www.rastertek.com/dx11tut12.html
Also, I recommend you set an index buffer and use the DrawIndexed method to render your triangles, for performance reasons: it allows the graphics adapter to store vertices in a vertex cache, so recently used vertices can be fetched from the cache instead of being read again from the vertex buffer.
More about this concern can be found on MSDN : http://msdn.microsoft.com/en-us/library/windows/desktop/bb147325(v=vs.85).aspx
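A minimal sketch of the indexed version, assuming an _indexBuffer with 16-bit indices was created alongside the vertex buffer (INDICES_PER_RECT would be 6, two triangles per letter):
_DeviceContext -> IASetIndexBuffer(_indexBuffer, DXGI_FORMAT_R16_UINT, 0);
_DeviceContext -> DrawIndexed(INDICES_PER_RECT * StringLenth, 0, 0);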
Hope this helps!
P.S : Also, don't forget to release the resources after using them by calling Release().

BlackBerry - image 3D transform

I know how to rotate an image by any angle with drawTexturedPath:
int displayWidth = Display.getWidth();
int displayHeight = Display.getHeight();
int[] x = new int[] { 0, displayWidth, displayWidth, 0 };
int[] y = new int[] { 0, 0, displayHeight, displayHeight };
int angle = Fixed32.toFP( 45 );
int dux = Fixed32.cosd(angle );
int dvx = -Fixed32.sind( angle );
int duy = Fixed32.sind( angle );
int dvy = Fixed32.cosd( angle );
graphics.drawTexturedPath( x, y, null, null, 0, 0, dvx, dux, dvy, duy, image);
but what I need is a 3D projection of a simple image with a 3D transformation (something like this)
Can you please advise me how to do this with drawTexturedPath (I'm almost sure it's possible)?
Are there any alternatives?
The method used by this function (two walk vectors) is the same as the oldskool coding trick used for the famous 'rotozoomer' effect. rotozoomer example video
This method is a very fast way to rotate, zoom, and skew an image. The rotation is done simply by rotating the walk vectors; the zooming, by scaling them. The skewing is done by rotating the walk vectors with respect to one another (so they no longer make a 90 degree angle).
Nintendo built hardware into the SNES to apply the same effect to any of the sprites and/or backgrounds, which made way for some very cool effects.
One big shortcoming of this technique is that you cannot perspectively warp a texture. To do that, the walk vectors have to change slightly on every new horizontal line (hard to explain without a drawing).
On the SNES they overcame this by altering the walk vectors on every scanline (in those days you could set an interrupt when the monitor was drawing any scanline). This mode was later referred to as MODE 7, since it behaved like a new virtual graphics mode. The most famous games using it were Mario Kart and F-Zero.
So to get this working on the BlackBerry, you'll have to draw your image "displayHeight" times, i.e. one scanline of the image at a time. This is the only way to achieve the desired effect. (It will undoubtedly cost you a performance hit, since you are now calling drawTexturedPath many times with new values instead of just once.)
I guess with a bit of googling you can find some formulas (or even an implementation) for calculating the varying walk vectors. With a bit of paper (given you're not too bad at math) you might deduce it yourself too. I've done it myself when I was making games for the Game Boy Advance, so I know it can be done.
Be sure to precalculate everything! Speed is everything (especially on slow machines like phones).
EDIT: did some googling for you. Here's a detailed explanation of how to create the Mode 7 effect; it will help you achieve the same with the BlackBerry function: Mode 7 implementation
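To make the per-scanline idea concrete, here is a rough C-style sketch of how the walk vector scale varies per scanline for a flat floor (camHeight, focalLength, and horizonY are illustrative names, not BlackBerry API):
// each scanline below the horizon sees the floor at a different depth,
// so the walk vectors must be rescaled per line
for (int y = horizonY + 1; y < screenHeight; ++y) {
    float z = camHeight * focalLength / (float)(y - horizonY); // depth of this floor row
    float step = z / focalLength; // texels covered by one screen pixel
    // scale the walk vectors by `step`, then draw this single scanline
    // (e.g. one drawTexturedPath call per line)
}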
With the following code you can skew your image and get a perspective-like effect:
int displayWidth = Display.getWidth();
int displayHeight = Display.getHeight();
int[] x = new int[] { 0, displayWidth, displayWidth, 0 };
int[] y = new int[] { 0, 0, displayHeight, displayHeight };
int dux = Fixed32.toFP(-1);
int dvx = Fixed32.toFP(1);
int duy = Fixed32.toFP(1);
int dvy = Fixed32.toFP(0);
graphics.drawTexturedPath( x, y, null, null, 0, 0, dvx, dux, dvy, duy, image);
This will skew your image at a 45° angle; if you want a specific angle, you just need some trigonometry to determine the lengths of your vectors.
Thanks for the answers and guidance, +1 to you all.
MODE 7 was the way I chose to implement the 3D transformation, but unfortunately I couldn't make drawTexturedPath resize my scanlines... so I came down to simple drawImage.
Assuming you have a Bitmap inBmp (input texture), create a new Bitmap outBmp (output texture).
Bitmap mInBmp = Bitmap.getBitmapResource("map.png");
int inHeight = mInBmp.getHeight();
int inWidth = mInBmp.getWidth();
int outHeight = 0;
int outWidth = 0;
int outDrawX = 0;
int outDrawY = 0;
Bitmap mOutBmp = null;
public Scr() {
super();
mOutBmp = getMode7YTransform();
outWidth = mOutBmp.getWidth();
outHeight = mOutBmp.getHeight();
outDrawX = (Display.getWidth() - outWidth) / 2;
outDrawY = Display.getHeight() - outHeight;
}
Somewhere in the code, create a Graphics outBmpGraphics for outBmp.
Then do the following, iterating from the starting y up to (texture height) × (y transform factor):
1. Create a Bitmap lineBmp = new Bitmap(width, 1) for one line.
2. Create a Graphics lineBmpGraphics from lineBmp.
3. Paint line i from the texture to lineBmpGraphics.
4. Encode lineBmp to an EncodedImage img.
5. Scale img according to MODE 7.
6. Paint img to outBmpGraphics.
Note: Richard Puckett's PNGEncoder BB port is used in my code.
private Bitmap getMode7YTransform() {
Bitmap outBmp = new Bitmap(inWidth, inHeight / 2);
Graphics outBmpGraphics = new Graphics(outBmp);
for (int i = 0; i < inHeight / 2; i++) {
Bitmap lineBmp = new Bitmap(inWidth, 1);
Graphics lineBmpGraphics = new Graphics(lineBmp);
lineBmpGraphics.drawBitmap(0, 0, inWidth, 1, mInBmp, 0, 2 * i);
PNGEncoder encoder = new PNGEncoder(lineBmp, true);
byte[] data = null;
try {
data = encoder.encode(true);
} catch (IOException e) {
e.printStackTrace();
}
EncodedImage img = PNGEncodedImage.createEncodedImage(data,
0, -1);
float xScaleFactor = ((float) (inHeight / 2 + i))
/ (float) inHeight;
img = scaleImage(img, xScaleFactor, 1);
int startX = (inWidth - img.getScaledWidth()) / 2;
int imgHeight = img.getScaledHeight();
int imgWidth = img.getScaledWidth();
outBmpGraphics.drawImage(startX, i, imgWidth, imgHeight, img,
0, 0, 0);
}
return outBmp;
}
Then just draw it in paint()
protected void paint(Graphics graphics) {
graphics.drawBitmap(outDrawX, outDrawY, outWidth, outHeight, mOutBmp,
0, 0);
}
To scale, I do something similar to the method described in Resizing a Bitmap, using .scaleImage32 instead of .setScale:
private EncodedImage scaleImage(EncodedImage image, float ratioX,
float ratioY) {
int currentWidthFixed32 = Fixed32.toFP(image.getWidth());
int currentHeightFixed32 = Fixed32.toFP(image.getHeight());
double w = (double) image.getWidth() * ratioX;
double h = (double) image.getHeight() * ratioY;
int width = (int) w;
int height = (int) h;
int requiredWidthFixed32 = Fixed32.toFP(width);
int requiredHeightFixed32 = Fixed32.toFP(height);
int scaleXFixed32 = Fixed32.div(currentWidthFixed32,
requiredWidthFixed32);
int scaleYFixed32 = Fixed32.div(currentHeightFixed32,
requiredHeightFixed32);
EncodedImage result = image.scaleImage32(scaleXFixed32, scaleYFixed32);
return result;
}
See also
J2ME Mode 7 Floor Renderer - something much more detailed & exciting if you're writing a 3D game!
You want to do texture mapping, and that function won't cut it. Maybe you can kludge your way around it but the better option is to use a texture mapping algorithm.
This involves, for each row of pixels, determining the edges of the shape and where on the shape those screen pixels map to (the texture pixels). It's not so hard actually but may take a bit of work. And you'll be drawing the pic only once.
GameDev has a bunch of articles with sourcecode here:
http://www.gamedev.net/reference/list.asp?categoryid=40#212
Wikipedia also has a nice article:
http://en.wikipedia.org/wiki/Texture_mapping
Another site with 3d tutorials:
http://tfpsly.free.fr/Docs/TomHammersley/index.html
In your place I'd seek out a simple demo program that does something close to what you want and use its source as a base to develop my own - or even find a portable source library; I'm sure there must be a few.

Transform a Direct3D Mesh

I tried to write a TransformMesh function. The function accepts a Mesh object and a Matrix object. The idea is to transform the mesh using the matrix. To do this, I locked the vertex buffer, and called Vector3::TransformCoordinate on each vertex. It did not produce expected results. The resulting mesh was unrecognizable.
What am I doing wrong?
// C++/CLI code. My apologies.
int n = verts->Length;
for(int i = 0; i < n; i++){
verts[i].Position = DX::Vector3::TransformCoordinate(verts[i].Position, matrix);
}
Without contextual code around what you are doing, it might be hard to know the exact problem. How is the mesh created? How is verts[] read, and how is it written? Are you trying to read from a write-only vertex buffer?
My recommendation would be to try with a very simple translation matrix first and debug the code and see the vertex input and output. See if you receive good data and if it's transformed correctly. If so, the problem is in vertex stride, stream declaration or something else deeper in the DirectX pipeline.
As I said, more code would be needed to pinpoint the origin of the problem.
I totally agree with Coincoin, contextual code would help.
And if you just want to draw the transformed mesh to the screen, you don't need to transform the mesh this way. You can simply change one of the world, view, and projection matrices. This produces the expected result, as in the following sample code.
// Clear the backbuffer to a Blue color.
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.Blue, 1.0f, 0);
// Begin the scene.
device.BeginScene();
device.Lights[0].Enabled = true;
// Setup the world, view, and projection matrices.
Matrix m = new Matrix();
if( destination.Y != 0 )
    y += DXUtil.Timer(DirectXTimer.GetElapsedTime) * (destination.Y * 25);
if( destination.X != 0 )
    x += DXUtil.Timer(DirectXTimer.GetElapsedTime) * (destination.X * 25);
m = Matrix.RotationY(y);
m *= Matrix.RotationX(x);
device.Transform.World = m;
device.Transform.View = Matrix.LookAtLH(
new Vector3( 0.0f, 3.0f,-5.0f ),
new Vector3( 0.0f, 0.0f, 0.0f ),
new Vector3( 0.0f, 1.0f, 0.0f ) );
device.Transform.Projection = Matrix.PerspectiveFovLH(
(float)Math.PI / 4, 1.0f, 1.0f, 100.0f );
// Render the teapot.
teapot.DrawSubset(0);
// End the scene.
device.EndScene();
This sample is taken from here.
I recommend using the function D3DXConcatenateMeshes. Pass one mesh and one matrix, and the result will be a transformed mesh. It's quite easy.
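A rough sketch of the call, assuming D3DX9 with an existing device pDevice, source mesh pMesh, and transform matWorld:
ID3DXMesh* meshes[] = { pMesh };
D3DXMATRIX transforms[] = { matWorld };
ID3DXMesh* pTransformed = NULL;
HRESULT hr = D3DXConcatenateMeshes(
    meshes, 1,        // one input mesh
    D3DXMESH_MANAGED, // options for the output mesh
    transforms,       // one geometry transform per input mesh
    NULL,             // no texture coordinate transforms
    NULL,             // keep the input vertex declaration
    pDevice,
    &pTransformed);   // receives the transformed copy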
