Transform a Direct3D Mesh

I tried to write a TransformMesh function. The function accepts a Mesh object and a Matrix object, and the idea is to transform the mesh using the matrix. To do this, I locked the vertex buffer and called Vector3::TransformCoordinate on each vertex. It did not produce the expected results: the resulting mesh was unrecognizable.
What am I doing wrong?
// C++/CLI code. My apologies.
int n = verts->Length;
for(int i = 0; i < n; i++){
    verts[i].Position = DX::Vector3::TransformCoordinate(verts[i].Position, matrix);
}

Without contextual code around what you are doing, it is hard to know the exact problem. How is the mesh created? How is verts[] read, and how is it written back? Are you trying to read from a write-only vertex buffer?
My recommendation would be to try a very simple translation matrix first, debug the code, and look at the vertex input and output. See whether you receive good data and whether it is transformed correctly. If so, the problem is in the vertex stride, the stream declaration, or something else deeper in the DirectX pipeline.
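For instance, a minimal sanity check along these lines (C++/CLI, assuming the same DX namespace alias as your snippet, and that the managed Matrix type's Translation helper is available) makes it easy to see in the debugger whether the transform call itself behaves:
// Sketch: transform one known point with a pure translation and inspect the result.
DX::Matrix translation = DX::Matrix::Translation(10.0f, 0.0f, 0.0f);
DX::Vector3 input(1.0f, 2.0f, 3.0f);
DX::Vector3 output = DX::Vector3::TransformCoordinate(input, translation);
// Expected: output == (11, 2, 3). If that is correct, the problem is more likely
// in how the vertex buffer is locked, read, and written back.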
As I said, more code would be needed to pinpoint the origin of the problem.

I totally agree with Coincoin; contextual code would help.
If you just want to draw the transformed mesh to the screen, you don't need to transform the mesh this way. You can simply change one of the world, view, or projection matrices, which produces the expected result, as in the following sample code.
// Clear the backbuffer to a Blue color.
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.Blue, 1.0f, 0);
// Begin the scene.
device.BeginScene();
device.Lights[0].Enabled = true;
// Setup the world, view, and projection matrices.
Matrix m = new Matrix();
if( destination.Y != 0 )
    y += DXUtil.Timer(DirectXTimer.GetElapsedTime) * (destination.Y * 25);
if( destination.X != 0 )
    x += DXUtil.Timer(DirectXTimer.GetElapsedTime) * (destination.X * 25);
m = Matrix.RotationY(y);
m *= Matrix.RotationX(x);
device.Transform.World = m;
device.Transform.View = Matrix.LookAtLH(
    new Vector3( 0.0f, 3.0f, -5.0f ),
    new Vector3( 0.0f, 0.0f, 0.0f ),
    new Vector3( 0.0f, 1.0f, 0.0f ) );
device.Transform.Projection = Matrix.PerspectiveFovLH(
    (float)Math.PI / 4, 1.0f, 1.0f, 100.0f );
// Render the teapot.
teapot.DrawSubset(0);
// End the scene.
device.EndScene();
This sample is taken from here.

I recommend using the D3DXConcatenateMeshes function: pass it a single mesh and a single matrix, and the result will be the transformed mesh. It's quite easy.
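For illustration, a minimal native D3DX sketch could look like the following; the mesh, matrix, and device variables are assumed to exist already, and error handling is omitted:
// Sketch: transform a single mesh with D3DXConcatenateMeshes (native D3DX9).
// 'mesh' (LPD3DXMESH), 'matrix' (D3DXMATRIX) and 'device' (LPDIRECT3DDEVICE9)
// are assumed; check the returned HRESULT in real code.
LPD3DXMESH inputMeshes[] = { mesh };
LPD3DXMESH transformedMesh = NULL;
HRESULT hr = D3DXConcatenateMeshes(
    inputMeshes,        // array of input meshes
    1,                  // number of input meshes
    D3DXMESH_MANAGED,   // options for the output mesh
    &matrix,            // one geometry transform per input mesh
    NULL,               // no texture-coordinate transforms
    NULL,               // use the declaration of the first mesh
    device,             // device that will own the output mesh
    &transformedMesh);  // receives the transformed mesh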

Related

Vertex attribute not consumed by shader for my Vulkan program

I'm working through a tutorial to help learn Vulkan in C++, and I'm stuck trying to change my vertices and colors. It always shows a triangle with three colors interpolated between the vertices (one red, one blue, one green), with neither the positions of the vertices nor the colors changing, even when I manually edit them. The result looks like this. If I try to change the color of one of the vertices, or change its position on the screen, nothing changes, and the triangle in the image I linked is displayed.
In my program I have my vertices set as:
const std::vector<Vertex> vertices = {
    {{0.0f, -0.5f}, {0.0f, 1.0f, 0.0f}},  // green at the top
    {{-0.5f, 0.5f}, {0.0f, 0.0f, 1.0f}},  // blue in the bottom left
    {{0.5f, 0.5f}, {1.0f, 0.0f, 0.0f}},   // red in the bottom right
};
However, in the image that I provided, the RED is at the top, while the blue and green are on the bottom... which confuses the heck out of me.
Additionally, a note comes up in my console saying that the information cannot be passed along to the vertex shader (at locations 0 and 1, which are where the vertices and colors are passed, respectively). I've looked for solutions regarding this error, and I still cannot figure out for the life of me what the problem is.
The exact error is: validation error: Validation Performance Warning: [ UNASSIGNED-CoreValidation-Shader-OutputNotConsumed ] Object 0: handle = 0xec4bec0000000000b, type = VK_OBJECT_TYPE_SHADER_MODULE; | MessageID = 0x609a13b | Vertex attribute at location 0 not consumed by vertex shader. Location 0 is supposed to be the input for the position on the screen as a vec2, while location 1 is the color as a vec3.
The vertex shader code (GLSL) is as follows:
#version 450

layout(location = 0) in vec2 inPosition;
layout(location = 1) in vec3 inColor;

layout(location = 0) out vec3 fragColor;

void main() {
    gl_Position = vec4(inPosition, 0.0, 1.0);
    fragColor = inColor;
}
Meanwhile, the C++ code where I create the vertex buffer is:
void createVertexBuffer() {
    VkBufferCreateInfo bufferInfo{};
    bufferInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
    bufferInfo.size = sizeof(vertices[0]) * vertices.size();
    bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT;
    bufferInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;

    if (vkCreateBuffer(device, &bufferInfo, nullptr, &vertexBuffer) != VK_SUCCESS) {
        throw std::runtime_error("failed to create vertex buffer!");
    }
}
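(For completeness: right after the vkCreateBuffer call, the buffer memory is allocated, bound, and filled with the vertex data, following the tutorial. Paraphrased roughly from the tutorial, so it may differ slightly from my actual code; findMemoryType and vertexBufferMemory are the tutorial's helper and member.)
    // Paraphrased from the tutorial: allocate host-visible memory for the buffer...
    VkMemoryRequirements memRequirements;
    vkGetBufferMemoryRequirements(device, vertexBuffer, &memRequirements);

    VkMemoryAllocateInfo allocInfo{};
    allocInfo.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    allocInfo.allocationSize = memRequirements.size;
    allocInfo.memoryTypeIndex = findMemoryType(memRequirements.memoryTypeBits,
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT);

    if (vkAllocateMemory(device, &allocInfo, nullptr, &vertexBufferMemory) != VK_SUCCESS) {
        throw std::runtime_error("failed to allocate vertex buffer memory!");
    }
    vkBindBufferMemory(device, vertexBuffer, vertexBufferMemory, 0);

    // ...and copy the vertex data into it.
    void* data;
    vkMapMemory(device, vertexBufferMemory, 0, bufferInfo.size, 0, &data);
    memcpy(data, vertices.data(), (size_t)bufferInfo.size);
    vkUnmapMemory(device, vertexBufferMemory);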
Finally, the C++ code where I bind the vertices to the appropriate buffer is:
void RTXApp::recordCommandBuffer(VkCommandBuffer commandBuffer, uint32_t imageIndex) {
    VkCommandBufferBeginInfo beginInfo{};
    beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    beginInfo.flags = 0;
    beginInfo.pInheritanceInfo = nullptr;

    // error catching
    if (vkBeginCommandBuffer(commandBuffer, &beginInfo) != VK_SUCCESS) {
        throw std::runtime_error("failed to begin recording command buffer!");
    }

    VkRenderPassBeginInfo renderPassInfo{};
    renderPassInfo.sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
    renderPassInfo.renderPass = renderPass;
    renderPassInfo.framebuffer = swapChainFramebuffers[imageIndex];
    renderPassInfo.renderArea.offset = { 0, 0 };
    renderPassInfo.renderArea.extent = swapChainExtent;

    VkClearValue clearColor = { {{0.0f, 0.0f, 0.0f, 1.0f}} }; // set "default" color to black
    renderPassInfo.clearValueCount = 1;
    renderPassInfo.pClearValues = &clearColor;

    // begin render pass
    vkCmdBeginRenderPass(commandBuffer, &renderPassInfo, VK_SUBPASS_CONTENTS_INLINE);
    // bind graphics pipeline object to command buffer
    vkCmdBindPipeline(commandBuffer, VK_PIPELINE_BIND_POINT_GRAPHICS, graphicsPipeline);

    // bind the vertex buffer to draw from
    VkBuffer vertexBuffers[] = { vertexBuffer };
    VkDeviceSize offsets[] = { 0 };
    vkCmdBindVertexBuffers(commandBuffer, 0, 1, vertexBuffers, offsets);

    // draw from the vertex buffer
    vkCmdDraw(commandBuffer, static_cast<uint32_t>(vertices.size()), 1, 0, 0);
    // end render pass
    vkCmdEndRenderPass(commandBuffer);

    // error catching
    if (vkEndCommandBuffer(commandBuffer) != VK_SUCCESS) {
        throw std::runtime_error("failed to record command buffer!");
    }
}
I've tried both my version of the code and the code found in the tutorial that I'm using (found here); both seem to have the same effect. I'm not sure what I'm doing wrong, since I've followed the entire thing to the letter, trying very carefully to make sure all of the results are as I want them before moving on. I'm honestly not sure what to do anymore, since I've looked for solutions to this specific problem and found nothing.
Apologies if this isn't exactly the clearest; the code is super long, I'm not sure why this error is happening, and I'm a bit new to Stack Overflow in general. Any help would be appreciated; I'm losing my sanity trying to figure out what's wrong.
The code at the bottom of this page is basically the long version of what I have, and is what I'm trying to recreate with the code that I'm using. I'm on Windows 11, using MSVS 2022 and the Windows 10 Vulkan SDK, if that matters at all - my project settings are supposedly all good and match the Windows version of that setup.
Edit: here is the part of the code with the VkPipelineVertexInputStateCreateInfo, as requested. Still cannot figure out what's going on.
auto vertShaderCode = readFile("shaders/vert.spv");
auto fragShaderCode = readFile("shaders/frag.spv");
VkShaderModule vertShaderModule = createShaderModule(vertShaderCode);
VkShaderModule fragShaderModule = createShaderModule(fragShaderCode);
// create the vertex shader stage of the pipeline
VkPipelineShaderStageCreateInfo vertShaderStageInfo{};
vertShaderStageInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
vertShaderStageInfo.stage = VK_SHADER_STAGE_VERTEX_BIT;
vertShaderStageInfo.module = vertShaderModule;
vertShaderStageInfo.pName = "main";
// create the fragment shader stage of the pipeline
VkPipelineShaderStageCreateInfo fragShaderStageInfo{};
fragShaderStageInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_SHADER_STAGE_CREATE_INFO;
fragShaderStageInfo.stage = VK_SHADER_STAGE_FRAGMENT_BIT;
fragShaderStageInfo.module = fragShaderModule;
fragShaderStageInfo.pName = "main";
// store the shader stages in an array (vertex first, then fragment)
VkPipelineShaderStageCreateInfo shaderStages[] = { vertShaderStageInfo, fragShaderStageInfo };
VkPipelineVertexInputStateCreateInfo vertexInputInfo{};
vertexInputInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO;
auto bindingDescription = Vertex::getBindingDescription();
auto attributeDescriptions = Vertex::getAttributeDescriptions();
vertexInputInfo.vertexBindingDescriptionCount = 1;
vertexInputInfo.pVertexBindingDescriptions = &bindingDescription;
vertexInputInfo.vertexAttributeDescriptionCount = static_cast<uint32_t>(attributeDescriptions.size());
vertexInputInfo.pVertexAttributeDescriptions = attributeDescriptions.data();
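For reference, my Vertex struct follows the tutorial's. Paraphrased roughly (so details may differ slightly from my actual code), the binding and attribute descriptions look like this:
#include <array>
#include <glm/glm.hpp>

struct Vertex {
    glm::vec2 pos;
    glm::vec3 color;

    static VkVertexInputBindingDescription getBindingDescription() {
        VkVertexInputBindingDescription bindingDescription{};
        bindingDescription.binding = 0;
        bindingDescription.stride = sizeof(Vertex);
        bindingDescription.inputRate = VK_VERTEX_INPUT_RATE_VERTEX;
        return bindingDescription;
    }

    static std::array<VkVertexInputAttributeDescription, 2> getAttributeDescriptions() {
        std::array<VkVertexInputAttributeDescription, 2> attributeDescriptions{};
        attributeDescriptions[0].binding = 0;
        attributeDescriptions[0].location = 0;  // layout(location = 0) in vec2 inPosition
        attributeDescriptions[0].format = VK_FORMAT_R32G32_SFLOAT;
        attributeDescriptions[0].offset = offsetof(Vertex, pos);
        attributeDescriptions[1].binding = 0;
        attributeDescriptions[1].location = 1;  // layout(location = 1) in vec3 inColor
        attributeDescriptions[1].format = VK_FORMAT_R32G32B32_SFLOAT;
        attributeDescriptions[1].offset = offsetof(Vertex, color);
        return attributeDescriptions;
    }
};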

Is it possible to test if an arbitrary pixel is modifiable by the shader?

I am writing a spatial shader in Godot to pixelate an object.
Previously, I tried to write outside of an object; however, that is only possible in CanvasItem shaders, and I am now going back to 3D shaders due to rendering annoyances (I am unable to selectively hide items without using the culling mask, which, being limited to 20 layers, is not an extensible solution).
My naive approach:
Define a pixel "cell" resolution (i.e. 3x3 real pixels)
For each fragment:
If the entire "cell" of real pixels is within the model's draw bounds, color the current pixel from the cell's lower-left pixel (the pixel whose coordinates are a multiple of the cell resolution).
If any pixel of the current "cell" is out of the draw bounds, set alpha to 1 to erase the entire cell.
Pseudo-code, for people asking for code, of the likely non-existent functionality that I am seeking:
int cell_size = 3;

fragment {
    // check within a cell to see if all pixels are part of the object being drawn to
    int erase_pixel = 0;
    for (int y = 0; y < cell_size; y++) {
        for (int x = 0; x < cell_size; x++) {
            if (uv_in_model(vec2(FRAGCOORD.x - mod(FRAGCOORD.x, float(cell_size)) + float(x),
                                 FRAGCOORD.y - mod(FRAGCOORD.y, float(cell_size)) + float(y))) == false) {
                erase_pixel = 1;
            }
        }
    }
    albedo.a = erase_pixel;
}
tl;dr, is it possible to know if any given point will be called by the fragment function?
On your object's material there should be a property called Next Pass. Add a new Spatial Material in this section, open up flags and check transparent and unshaded, and then right-click it to bring up the option to convert it to a Shader Material.
Now, open up the new Shader Material's Shader. The last process should have created a Shader formatted with a fragment() function containing the line vec4 albedo_tex = texture(texture_albedo, base_uv);
In this line, you can replace "texture_albedo" with "SCREEN_TEXTURE" and "base_uv" with "SCREEN_UV". This should make the new shader look like nothing has changed, because the next pass material is just sampling the screen from the last pass.
Above that, make a variable called something along the lines of "pixelated" and set it to the following expression:
vec2 pixelated = floor(SCREEN_UV * scale) / scale; where scale is a float or vec2 containing the pixel size. Finally replace SCREEN_UV in the albedo_tex definition with pixelated.
After this, you can have a float depth which samples DEPTH_TEXTURE with pixelated like this:
float depth = texture(DEPTH_TEXTURE, pixelated).r;
This depth value will be very large for pixels that are just trying to render the background onto your object. So, add a conditional statement:
if (depth > 100000.0f) { ALPHA = 0.0f; }
As long as the flags on this new next pass shader were set correctly (transparent and unshaded) you should have a quick-and-dirty pixelator. I say this because it has some minor artifacts around the edges, but you can make scale a uniform variable and set it from the editor and scripts, so I think it works nicely.
"Testing if a pixel is modifiable" in your case means testing if the object should be rendering it at all with that depth conditional.
Here's the full shader with my modifications from the comments
// NOTE: Shader automatically converted from Godot Engine 3.4.stable's SpatialMaterial.
shader_type spatial;
render_mode blend_mix,depth_draw_opaque,cull_back,unshaded;
//the size of pixelated blocks on the screen relative to pixels
uniform int scale;
void vertex() {
}
//vec2 representation of one used for calculation
const vec2 one = vec2(1.0f, 1.0f);
void fragment() {
    //scale SCREEN_UV up to the size of the viewport over the pixelation scale
    //assure scale is a multiple of 2 to avoid artefacts
    vec2 pixel_scale = VIEWPORT_SIZE / float(scale * 2);
    vec2 pixelated = SCREEN_UV * pixel_scale;
    //truncate the decimal place from the pixelated uvs and then shift them over by half a pixel
    pixelated = pixelated - mod(pixelated, one) + one / 2.0f;
    //scale the pixelated uvs back down to the screen
    pixelated /= pixel_scale;
    vec4 albedo_tex = texture(SCREEN_TEXTURE, pixelated);
    ALBEDO = albedo_tex.rgb;
    ALPHA = 1.0f;
    float depth = texture(DEPTH_TEXTURE, pixelated).r;
    if (depth > 10000.0f)
    {
        ALPHA = 0.0f;
    }
}

3D vessel surface reconstruction

I have a 3D vascular free-hand ultrasound volume containing one vessel, and I am trying to reconstruct the surface of the vessel. The 3D volume is constructed from a stack of 2D images/B-scans, and the contour of the vessel in each B-scan has been segmented; that is, I have an ellipse representing the contour of the vessel in each B-scan in the volume. I have tried to reconstruct the contour of the vessel by following the VTK example of 'GenerateModelsFromLabels.cxx' (http://www.vtk.org/Wiki/VTK/Examples/Cxx/Medical/GenerateModelsFromLabels). However, the result is not a smooth surface from one frame to another as I would have hoped for it to be. It is discontinuous and irregular, and the surface doesn't connect the vessel contours between two adjacent frames in the volume if the displacement between the ellipses is large. In my approach, I basically used DiscreteMarchingCubes -> WindowedSincPolyDataFilter -> GeometryFilter.
I played around with the passBand, smoothingIterations, and featureAngle parameters, and the best result I was able to obtain is the following:
As you can see, it is not a smooth, continuous surface; there are a lot of uninterpolated "holes" between adjacent frames, but it is all right. Can it be made better? I also tried a 3D Delaunay triangulation, but it only gave me the convex hull, which is not the output I expected. I would like to know if there is a better approach to reconstructing a surface that closely follows the contour of the vessel from one B-scan to the next in the volume.
A minimal working example is shown below:
vtkSmartPointer<vtkImageData> vesselVolume =
    vtkSmartPointer<vtkImageData>::New();
int totalImages = 210;

for (int z = 0; z < totalImages; z++)
{
    std::string strFile = "E:/datasets/vasc/rendering/contour/" + std::to_string(z + 1) + ".png";
    cv::Mat im = cv::imread(strFile, CV_LOAD_IMAGE_GRAYSCALE);

    if (z == 0)
    {
        vesselVolume->SetExtent(0, im.cols, 0, im.rows, 0, totalImages - 1);
        vesselVolume->SetSpacing(1, 1, 1);
        vesselVolume->SetOrigin(0, 0, 0);
        vesselVolume->AllocateScalars(VTK_UNSIGNED_CHAR, 0);
    }

    std::vector<cv::Point2i> locations; // output, locations of non-zero pixels
    cv::findNonZero(im, locations);

    for (int nzi = 0; nzi < locations.size(); nzi++)
    {
        unsigned char* pixel = static_cast<unsigned char*>(vesselVolume->GetScalarPointer(locations[nzi].x, locations[nzi].y, z));
        pixel[0] = 255;
    }
}

vtkSmartPointer<vtkDiscreteMarchingCubes> discreteCubes =
    vtkSmartPointer<vtkDiscreteMarchingCubes>::New();
discreteCubes->SetInputData(vesselVolume);
discreteCubes->GenerateValues(1, 255, 255);
discreteCubes->ComputeNormalsOn();

vtkSmartPointer<vtkWindowedSincPolyDataFilter> smoother =
    vtkSmartPointer<vtkWindowedSincPolyDataFilter>::New();
unsigned int smoothingIterations = 10;
double passBand = 2;
double featureAngle = 360.0;
smoother->SetInputConnection(discreteCubes->GetOutputPort());
smoother->SetNumberOfIterations(smoothingIterations);
smoother->BoundarySmoothingOff();
//smoother->FeatureEdgeSmoothingOff();
smoother->FeatureEdgeSmoothingOn();
smoother->SetFeatureAngle(featureAngle);
smoother->SetPassBand(passBand);
smoother->NonManifoldSmoothingOn();
smoother->BoundarySmoothingOn();
smoother->NormalizeCoordinatesOn();
smoother->Update();

vtkSmartPointer<vtkThreshold> selector =
    vtkSmartPointer<vtkThreshold>::New();
selector->SetInputConnection(smoother->GetOutputPort());
selector->SetInputArrayToProcess(0, 0, 0,
    vtkDataObject::FIELD_ASSOCIATION_CELLS,
    vtkDataSetAttributes::SCALARS);

vtkSmartPointer<vtkMaskFields> scalarsOff =
    vtkSmartPointer<vtkMaskFields>::New();
// Strip the scalars from the output
scalarsOff->SetInputConnection(selector->GetOutputPort());
scalarsOff->CopyAttributeOff(vtkMaskFields::POINT_DATA,
    vtkDataSetAttributes::SCALARS);
scalarsOff->CopyAttributeOff(vtkMaskFields::CELL_DATA,
    vtkDataSetAttributes::SCALARS);

vtkSmartPointer<vtkGeometryFilter> geometry =
    vtkSmartPointer<vtkGeometryFilter>::New();
geometry->SetInputConnection(scalarsOff->GetOutputPort());
geometry->Update();

vtkSmartPointer<vtkPolyDataMapper> mapper =
    vtkSmartPointer<vtkPolyDataMapper>::New();
mapper->SetInputConnection(geometry->GetOutputPort());
mapper->ScalarVisibilityOff();
mapper->Update();

vtkSmartPointer<vtkRenderWindow> renderWindow =
    vtkSmartPointer<vtkRenderWindow>::New();
vtkSmartPointer<vtkRenderWindowInteractor> renderWindowInteractor =
    vtkSmartPointer<vtkRenderWindowInteractor>::New();
renderWindowInteractor->SetRenderWindow(renderWindow);

vtkSmartPointer<vtkRenderer> renderer =
    vtkSmartPointer<vtkRenderer>::New();
renderWindow->AddRenderer(renderer);
renderer->SetBackground(.2, .3, .4);

vtkSmartPointer<vtkActor> actor =
    vtkSmartPointer<vtkActor>::New();
actor->SetMapper(mapper);
renderer->AddActor(actor);
renderer->ResetCamera();

renderWindow->Render();
renderWindowInteractor->Start();
Assuming that your problem is hand shake between slices, one possible way to improve your result is to apply slice-to-slice registration. It should be easy to try using ImageJ. Use the transforms between slices to also transform your labeled images, then run the transformed label images through your current pipeline.
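If you would rather stay in C++, a rough, untested sketch of that idea with OpenCV could look like the following; the file names are placeholders, and using the original B-scans (rather than the label images) for the registration is my assumption:
// Estimate a rigid transform between consecutive B-scans with ECC alignment,
// then apply the same transform to the segmented label image of the slice
// before writing it into vesselVolume.
cv::Mat prevScan = cv::imread("bscan_prev.png", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat currScan = cv::imread("bscan_curr.png", CV_LOAD_IMAGE_GRAYSCALE);

// 2x3 warp, initialized to identity; ECC refines it (rotation + translation).
cv::Mat warp = cv::Mat::eye(2, 3, CV_32F);
cv::findTransformECC(prevScan, currScan, warp, cv::MOTION_EUCLIDEAN);

cv::Mat label = cv::imread("contour_curr.png", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat alignedLabel;
cv::warpAffine(label, alignedLabel, warp, label.size(),
               cv::INTER_NEAREST | cv::WARP_INVERSE_MAP);
// alignedLabel then replaces 'im' when filling vesselVolume; in practice the
// warps should be accumulated from slice to slice so everything ends up in
// one common reference frame.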

How to remove the effect of light / shadow on my model in XNA?

I am developing a small game, and I want to draw a ground field (land) with a repeated texture. My problem is the rendered result: everything around my cubes looks as if it were in a light shadow.
Is it possible to make the lighting uniform, or remove the shadow effect, in my drawing function?
Sorry for my bad English.
Here is a screenshot to better understand my problem.
Here is my draw function (model instancing with a vertex buffer):
// Draw function (instancing model - vertex buffer)
public void DrawModelHardwareInstancing(Model model, Texture2D texture, Matrix[] modelBones,
                                        Matrix[] instances, Matrix view, Matrix projection)
{
    if (instances.Length == 0)
        return;

    // If we have more instances than room in our vertex buffer, grow it to the necessary size.
    if ((instanceVertexBuffer == null) ||
        (instances.Length > instanceVertexBuffer.VertexCount))
    {
        if (instanceVertexBuffer != null)
            instanceVertexBuffer.Dispose();

        instanceVertexBuffer = new DynamicVertexBuffer(Game.GraphicsDevice, instanceVertexDeclaration,
                                                       instances.Length, BufferUsage.WriteOnly);
    }

    // Transfer the latest instance transform matrices into the instanceVertexBuffer.
    instanceVertexBuffer.SetData(instances, 0, instances.Length, SetDataOptions.Discard);

    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            // Tell the GPU to read from both the model vertex buffer plus our instanceVertexBuffer.
            Game.GraphicsDevice.SetVertexBuffers(
                new VertexBufferBinding(meshPart.VertexBuffer, meshPart.VertexOffset, 0),
                new VertexBufferBinding(instanceVertexBuffer, 0, 1)
            );
            Game.GraphicsDevice.Indices = meshPart.IndexBuffer;

            // Set up the instance rendering effect.
            Effect effect = meshPart.Effect;
            //effect.CurrentTechnique = effect.Techniques["HardwareInstancing"];
            effect.Parameters["World"].SetValue(modelBones[mesh.ParentBone.Index]);
            effect.Parameters["View"].SetValue(view);
            effect.Parameters["Projection"].SetValue(projection);
            effect.Parameters["Texture"].SetValue(texture);

            // Draw all the instance copies in a single call.
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                Game.GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0,
                                                            meshPart.NumVertices, meshPart.StartIndex,
                                                            meshPart.PrimitiveCount, instances.Length);
            }
        }
    }
}
// ### END FUNCTION DrawModelHardwareInstancing
The problem is the cube mesh you are using. The normals are averaged, but I guess you want them to be orthogonal to the faces of the cubes.
You will have to use a total of 24 vertices (4 for each side) instead of 8 vertices. Each corner will have 3 vertices with the same position but different normals, one for each adjacent face:
If the FBX exporter cannot be configured to correctly export the normals, simply create your own cube mesh:
var vertices = new VertexPositionNormalTexture[24];
// Initialize the vertices, set position and texture coordinates
// ...
// Set normals
// front face
vertices[0].Normal = new Vector3(1, 0, 0);
vertices[1].Normal = new Vector3(1, 0, 0);
vertices[2].Normal = new Vector3(1, 0, 0);
vertices[3].Normal = new Vector3(1, 0, 0);
// back face
vertices[4].Normal = new Vector3(-1, 0, 0);
vertices[5].Normal = new Vector3(-1, 0, 0);
vertices[6].Normal = new Vector3(-1, 0, 0);
vertices[7].Normal = new Vector3(-1, 0, 0);
// ...
It looks like you've got improperly calculated / no normals.
Look at this example, specifically part 3.
A normal is a vector that describes the direction a vertex/polygon is facing, which is what determines how light reflects off it.
I like this picture as a demonstration: the blue lines are the normal direction at each particular point on the curve.
In XNA, you can calculate the normal of a polygon with vertices vert1,vert2,and vert3 like so:
Vector3 dir = Vector3.Cross(vert2 - vert1, vert3 - vert1);
Vector3 norm = Vector3.Normalize(dir);
In a lot of cases this is done automatically by modelling software so the calculation is unnecessary. You probably do need to perform that calculation if you're creating your cubes in code though.

DirectX 11: text output, using your own font texture

I'm learning DirectX using the book "Sherrod A., Jones W. - Beginning DirectX 11 Game Programming - 2011". Now I'm exploring the 4th chapter, about drawing text.
Please help me fix the function that I'm using to draw a string on the screen. I've already loaded the font texture, and in the function I create some sprites with letters and define texture coordinates for them. This compiles correctly, but doesn't draw anything. What's wrong?
bool DirectXSpriteGame::DrawString(char* StringToDraw, float StartX, float StartY)
{
    //VAR
    HRESULT D3DResult;                          //The result of D3D functions
    int i;                                      //Counter
    const int IndexA = static_cast<char>('A');  //ASCII index of letter A
    const int IndexZ = static_cast<char>('Z');  //ASCII index of letter Z
    int StringLenth = strlen(StringToDraw);     //Length of the string to draw
    float ScreenCharWidth = static_cast<float>(LETTER_WIDTH) / static_cast<float>(SCREEN_WIDTH);    //Width of a single char on the screen (in %)
    float ScreenCharHeight = static_cast<float>(LETTER_HEIGHT) / static_cast<float>(SCREEN_HEIGHT); //Height of a single char on the screen (in %)
    float TexelCharWidth = 1.0f / static_cast<float>(LETTERS_NUM);                                  //Width of a char texel (in texture %)
    float ThisStartX;                           //The start x of the current letter being drawn
    float ThisStartY;                           //The start y of the current letter being drawn
    float ThisEndX;                             //The end x of the current letter being drawn
    float ThisEndY;                             //The end y of the current letter being drawn
    int LetterNum;                              //Letter number in the loaded font
    int ThisLetter;                             //The current letter
    D3D11_MAPPED_SUBRESOURCE MapResource;       //Map resource
    VertexPos* ThisSprite;                      //Vertices of the current sprite being drawn
    //VAR

    //Clamping string, if too long
    if(StringLenth > LETTERS_NUM)
    {
        StringLenth = LETTERS_NUM;
    }

    //Mapping resource
    D3DResult = _DeviceContext->Map(_vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &MapResource);
    if(FAILED(D3DResult))
    {
        throw("Failed to map resource");
    }
    ThisSprite = (VertexPos*)MapResource.pData;

    for(i = 0; i < StringLenth; i++)
    {
        //Creating geometry for the letter sprite
        ThisStartX = StartX + ScreenCharWidth * static_cast<float>(i);
        ThisStartY = StartY;
        ThisEndX = ThisStartX + ScreenCharWidth;
        ThisEndY = StartY + ScreenCharHeight;

        ThisSprite[0].Position = XMFLOAT3(ThisEndX, ThisEndY, 1.0f);
        ThisSprite[1].Position = XMFLOAT3(ThisEndX, ThisStartY, 1.0f);
        ThisSprite[2].Position = XMFLOAT3(ThisStartX, ThisStartY, 1.0f);
        ThisSprite[3].Position = XMFLOAT3(ThisStartX, ThisStartY, 1.0f);
        ThisSprite[4].Position = XMFLOAT3(ThisStartX, ThisEndY, 1.0f);
        ThisSprite[5].Position = XMFLOAT3(ThisEndX, ThisEndY, 1.0f);

        ThisLetter = static_cast<char>(StringToDraw[i]);

        //Defining the letter place (number) in the font
        if(ThisLetter < IndexA || ThisLetter > IndexZ)
        {
            //Invalid character, use the last character in the loaded font
            LetterNum = IndexZ - IndexA + 1;
        }
        else
        {
            LetterNum = ThisLetter - IndexA;
        }

        //Unwrapping texture on the geometry
        ThisStartX = TexelCharWidth * static_cast<float>(LetterNum);
        ThisStartY = 0.0f;
        ThisEndY = 1.0f;
        ThisEndX = ThisStartX + TexelCharWidth;

        ThisSprite[0].TextureCoords = XMFLOAT2(ThisEndX, ThisEndY);
        ThisSprite[1].TextureCoords = XMFLOAT2(ThisEndX, ThisStartY);
        ThisSprite[2].TextureCoords = XMFLOAT2(ThisStartX, ThisStartY);
        ThisSprite[3].TextureCoords = XMFLOAT2(ThisStartX, ThisStartY);
        ThisSprite[4].TextureCoords = XMFLOAT2(ThisStartX, ThisEndY);
        ThisSprite[5].TextureCoords = XMFLOAT2(ThisEndX, ThisEndY);

        ThisSprite += VERTEX_IN_RECT_NUM;
    }
    for(i = 0; i < StringLenth; i++, ThisSprite -= VERTEX_IN_RECT_NUM);

    _DeviceContext->Unmap(_vertexBuffer, 0);
    _DeviceContext->Draw(VERTEX_IN_RECT_NUM * StringLenth, 0);

    return true;
}
Although the piece of code constructing the vertex array seems correct to me at first glance, it looks like you are trying to draw your vertices with a shader that has not been set yet!
It is difficult to answer you precisely without seeing the whole code, but I can guess that you will need to do something like this:
1) Create the vertex and pixel shaders by compiling them first from their respective buffers.
2) Create the input layout description, which describes the input buffers that will be read by the Input Assembler stage. It has to match your VertexPos structure and your shader's input structure (a sketch follows below).
3) Set the shader parameters.
4) Only now can you set the shader rendering state: set the input layout, as well as the vertex and pixel shaders that will be used to render your triangles, with something like:
_DeviceContext -> Unmap(_vertexBuffer, 0);
_DeviceContext->IASetInputLayout(myInputLayout);
_DeviceContext->VSSetShader(myVertexShader, NULL, 0); // Set Vertex shader
_DeviceContext->PSSetShader(myPixelShader, NULL, 0); // Set Pixel shader
_DeviceContext -> Draw(VERTEX_IN_RECT_NUM * StringLenth, 0);
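For step 2, with a VertexPos made of a position and a texture coordinate (which is what your DrawString code suggests), the input layout could be described roughly like this; '_Device' and 'vertexShaderBlob' are assumed names for the ID3D11Device and the compiled vertex shader bytecode:
// Rough sketch of an input layout matching an XMFLOAT3 position + XMFLOAT2 texcoord.
// The semantic names must match the input semantics declared in your vertex shader.
D3D11_INPUT_ELEMENT_DESC layoutDesc[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

ID3D11InputLayout* myInputLayout = nullptr;
_Device->CreateInputLayout(layoutDesc, ARRAYSIZE(layoutDesc),
                           vertexShaderBlob->GetBufferPointer(),
                           vertexShaderBlob->GetBufferSize(),
                           &myInputLayout);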
This link should help you achieve what you want to do: http://www.rastertek.com/dx11tut12.html
Also, I recommend you set up an index buffer and use the DrawIndexed method to render your triangles, for performance reasons: it allows the graphics adapter to store vertices in a vertex cache, so recently used vertices can be fetched from the cache instead of being read again from the vertex buffer.
More about this concern can be found on MSDN : http://msdn.microsoft.com/en-us/library/windows/desktop/bb147325(v=vs.85).aspx
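As a rough sketch (not your exact code; '_Device' is assumed, and the mapped vertex data would then be written as 4 vertices per character instead of 6), the index data for the character quads could be built once like this:
// Two triangles per character quad: 0,1,2 and 2,3,0 (winding must match your rasterizer state).
std::vector<WORD> indices;
for(WORD c = 0; c < LETTERS_NUM; c++)
{
    WORD base = c * 4;
    WORD quad[6] = { base, WORD(base + 1), WORD(base + 2),
                     WORD(base + 2), WORD(base + 3), base };
    indices.insert(indices.end(), quad, quad + 6);
}

D3D11_BUFFER_DESC indexDesc = {};
indexDesc.Usage = D3D11_USAGE_DEFAULT;
indexDesc.ByteWidth = static_cast<UINT>(indices.size() * sizeof(WORD));
indexDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;

D3D11_SUBRESOURCE_DATA indexData = {};
indexData.pSysMem = indices.data();

ID3D11Buffer* indexBuffer = nullptr;  // created once at startup, reused every frame
_Device->CreateBuffer(&indexDesc, &indexData, &indexBuffer);

// At draw time, instead of the plain Draw call:
_DeviceContext->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R16_UINT, 0);
_DeviceContext->DrawIndexed(6 * StringLenth, 0, 0);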
Hope this helps!
P.S.: Also, don't forget to release the resources after using them by calling Release().
