Unexpected behavior of geometry shader using line adjacency input - graphics

I am trying to write a simple shader that draws a 3D line with thickness, just to learn geometry shaders in Unity. However, I am getting unexpected output from the shader when the geometry shader's input is set to the lineadj topology, which I suspect has something to do with the "weird" values of the third and fourth vertices taken in by the geometry shader.
This is how I generate my mesh from a C# script:
public static GameObject DrawShaderLine( Vector3[] posCont, float thickness, Material mat)
{
    GameObject line = CreateObject ("Line", mat);
    line.GetComponent<Renderer>().material.SetFloat("_Thickness", thickness);

    int posContLen = posCont.Length;
    int newVerticeLen = posContLen + 2;

    Vector3[] newVertices = new Vector3[newVerticeLen];
    newVertices[0] = posCont[0] + (posCont[0] - posCont[1]);
    for(int i = 0; i < posContLen; ++i)
    {
        newVertices[i+1] = posCont[i];
    }
    newVertices[newVerticeLen-1] = posCont[posContLen-1] + (posCont[posContLen-1] - posCont[posContLen-2]);

    List<int> newIndices = new List<int>();
    for(int i = 1; i < newVerticeLen-2; ++i)
    {
        newIndices.Add(i-1);
        newIndices.Add(i);
        newIndices.Add(i+1);
        newIndices.Add(i+2);
    }

    Mesh mesh = (line.GetComponent (typeof(MeshFilter)) as MeshFilter).mesh;
    mesh.Clear ();
    mesh.vertices = newVertices;
    //mesh.triangles = newTriangles;
    mesh.SetIndices(newIndices.ToArray(), MeshTopology.LineStrip, 0);
    return line;
}
And this is the geometry shader that runs in the shader program:
v2g vert(appdata_base v)
{
    v2g OUT;
    OUT.pos = v.vertex;
    return OUT;
}

[maxvertexcount(2)]
void geom(lineadj v2g p[4], inout LineStream<g2f> triStream)
{
    float4x4 vp = mul(UNITY_MATRIX_MVP, _World2Object);
    g2f OUT;

    float4 pos0 = mul(vp, p[0].pos);
    float4 pos1 = mul(vp, p[1].pos);
    float4 pos2 = mul(vp, p[2].pos);
    float4 pos3 = mul(vp, p[3].pos);

    OUT.pos = pos1;
    OUT.c = half4(1,0,0,1);
    triStream.Append(OUT);

    OUT.pos = pos2;
    OUT.c = half4(0,1,0,1);
    triStream.Append(OUT);

    triStream.RestartStrip();
}
From my understanding, lineadj takes in 4 vertices, with vertex[0] and vertex[3] being the adjacent vertices. So by outputting vertex 1 and vertex 2 I am supposed to get my line drawn. However, this is the output I get:
The input vertex positions are (-20,0,0) and (0,-20,0), marked by the two center squares. The top-left and bottom-right cubes mark the positions of the adjacent vertices generated by the C# function. As you can see, the line seems to connect to position (0,0,0), and the lines flicker rapidly, which makes me suspect that the vertices in the GS are corrupted. The start of the line is colored red and the end is colored green.
If I edit the GS to output pos0 and pos1 instead of pos1 and pos2, I get this,
with no flickering lines.
And if I plot pos2 and pos3, the result is far worse (pos2 and pos3 seem to contain garbage values).
I have been trying to debug this for the whole day with no progress, so I need some help here. Thanks in advance!

Related

Screen-space shadows producing white result

I've been trying to learn screen-space techniques, specifically ray-marching ones, but I have been struggling to get a single working example to learn from. I'm implementing screen-space shadows following this article, but my result is just a white image and I cannot understand why. The code makes sense to me, yet the result does not look right. I can't see where I might have gone wrong with this screen-space ray-marching technique and would appreciate any insight that will help me continue learning.
Using Vulkan + GLSL
Full shader: screen_space_shadows.glsl
// calculate screen space shadows
float computeScreenSpaceShadow()
{
    vec3 FragPos = texture(gPosition, uvCoords).rgb;
    vec4 ViewSpaceLightPosition = camera.view * light.LightPosition;
    vec3 LightDirection = ViewSpaceLightPosition.xyz - FragPos.xyz;

    // Ray position and direction in view-space.
    vec3 RayPos = texture(gPosition, uvCoords).xyz; // ray start position
    vec3 RayDirection = normalize(-LightDirection.xyz);

    // Save original depth of the position
    float DepthOriginal = RayPos.z;

    // Ray step
    vec3 RayStep = RayDirection * STEP_LENGTH;

    float occlusion = 0.0;
    for(uint i = 0; i < MAX_STEPS; i++)
    {
        RayPos += RayStep;
        vec2 Ray_UV = ViewToScreen(RayPos);

        // Make sure the UV is inside screen-space
        if(!ValidRay(Ray_UV)){
            return 1.0;
        }

        // Compute difference between ray and cameras depth
        float DepthZ = linearize_depth(texture(depthMap, Ray_UV).x);
        float DepthDelta = RayPos.z - DepthZ;

        // Check if camera cannot see the ray. Ray depth must be larger than camera depth = positive delta
        bool canCameraSeeRay = (DepthDelta > 0.0) && (DepthDelta < THICKNESS);
        bool occludedByOriginalPixel = abs(RayPos.z - DepthOriginal) < MAX_DELTA_FROM_ORIGINAL_DEPTH;
        if(canCameraSeeRay && occludedByOriginalPixel)
        {
            // Mark as occluded
            occlusion = 1.0;
            break;
        }
    }
    return 1.0 - occlusion;
}
Output

Flipped normals after loaded in my raytracer

I'm working on a path/ray tracer in C++ and I'm now working on loading OBJ files, but some objects have flipped normals after being loaded in. I can't figure out where this behaviour comes from or how to fix it.
See image for better understanding of the issue.
image showing current behaviour:
Link to full GitHub page
At first I thought it was an issue with the normals being behind the surface, but after rendering the color based on the surface normal, it's obvious that the normals are flipped in some cases.
Here is my very basic code for loading the model.
//OBJ Loader object.
bool OBJLoader::loadMesh (std::string filePath){
// If the file is not an .obj file return false
if (filePath.substr(filePath.size() - 4, 4) != ".obj"){
std::cout << "No .obj file found at given file location: "<<filePath << std::endl;
}
//Open file stream
std::ifstream file(filePath);
//check if file is open.
if (!file.is_open()){
std::cout << "File was not opened!" << std::endl;
return false;
}
//Do file loading.
std::cout << "Parsing obj-file: "<<filePath << std::endl;
//construct mesh data.
bool smoothShading = false;
std::string obj_name;
std::vector<Vertex> vertices;
std::vector<Vect3> Positions;
std::vector<Vect3> Normals;
std::vector<Vect2> UVs;
std::vector<unsigned int> V_indices;
//the current line
std::string currentLine;
//loop over each line and parse the needed data.
while(std::getline(file, currentLine)){
//for now we just print the line
//std::cout << currentLine << std::endl;
if(algorithm::startsWith(currentLine, "s ")){
std::vector<std::string> line_split = algorithm::split(currentLine,' ');
if( line_split[1] == std::string("off")){
smoothShading = false;
}else if(line_split[1] == std::string("1")){
//enable smooth shading;
smoothShading = true;
}
}
//check if the line starts with o -> object name.
if(algorithm::startsWith(currentLine, "o ")){
//read the object name.
std::vector<std::string> line_split = algorithm::split(currentLine,' ');
obj_name = line_split[1];
}
//check if the line starts with v -> vertex.
if(algorithm::startsWith(currentLine, "v ")){
//construct new vertex position.
std::vector<std::string> line_split = algorithm::split(currentLine,' ');
float x = std::stof(line_split[1]);
float y = std::stof(line_split[2]);
float z = std::stof(line_split[3]);
Vect3 pos = Vect3(x,y,z);
Positions.push_back(pos);
}
//check if the line starts with vt -> vertex uv.
if(algorithm::startsWith(currentLine, "vt ")){
//construct new vertex uv.
std::vector<std::string> line_split = algorithm::split(currentLine,' ');
float u = std::stof(line_split[1]);
float v = std::stof(line_split[2]);
Vect2 uv = Vect2(u,v);
UVs.push_back(uv);
}
//check if the line starts with vn -> vertex normals.
if(algorithm::startsWith(currentLine, "vn ")){
//construct new vertex normal.
std::vector<std::string> line_split = algorithm::split(currentLine,' ');
float x = std::stof(line_split[1]);
float y = std::stof(line_split[2]);
float z = std::stof(line_split[3]);
Vect3 normal = Vect3(x,y,z);
Normals.push_back(normal);
}
//check if the line starts with f -> construct faces.
if(algorithm::startsWith(currentLine, "f ")){
//split the face definition.
std::vector<std::string> line_split = algorithm::split(currentLine,' ');
//#NOTE: this only works when mesh is already triangulated.
//Parse all vertices.
std::vector<std::string> vertex1 = algorithm::split(line_split[1],'/');
std::vector<std::string> vertex2 = algorithm::split(line_split[2],'/');
std::vector<std::string> vertex3 = algorithm::split(line_split[3],'/');
if(vertex1.size() <= 1){
//VERTEX 1
Vect3 position = Positions[std::stoi(vertex1[0])-1];
Vertex v1(position);
vertices.push_back(v1);
//VERTEX 2
position = Positions[std::stoi(vertex2[0])-1];
Vertex v2(position);
vertices.push_back(v2);
//VERTEX 3
position = Positions[std::stoi(vertex3[0])-1];
Vertex v3(position);
vertices.push_back(v3);
//Add to Indices array.
//calculate the index number
//The 3 comes from 3 vertices per face.
unsigned int index = vertices.size() - 3;
V_indices.push_back(index);
V_indices.push_back(index+1);
V_indices.push_back(index+2);
}
//T (UV) index is empty -> V//N format, no UV.
else if(vertex1[1] == ""){
//No UV
//V -> index in the positions array.
//N -> index in the normals array.
//VERTEX 1
Vect3 position = Positions[std::stoi(vertex1[0])-1];
Vect3 normal = Normals[std::stoi(vertex1[2])-1];
Vertex v1(position,normal);
vertices.push_back(v1);
//VERTEX 2
position = Positions[std::stoi(vertex2[0])-1];
normal = Normals[std::stoi(vertex2[2])-1];
Vertex v2(position,normal);
vertices.push_back(v2);
//VERTEX 3
position = Positions[std::stoi(vertex3[0])-1];
normal = Normals[std::stoi(vertex3[2])-1];
Vertex v3(position,normal);
vertices.push_back(v3);
//Add to Indices array.
//calculate the index number
//The 3 comes from 3 vertices per face.
unsigned int index = vertices.size() - 3;
V_indices.push_back(index);
V_indices.push_back(index+1);
V_indices.push_back(index+2);
}else if (vertex1[1] != ""){
//We have UV
//V -> index in the positions array.
//T -> index of UV
//N -> index in the normals array.
//VERTEX 1
Vect3 position = Positions[std::stoi(vertex1[0])-1];
Vect2 uv = UVs[std::stoi(vertex1[1])-1];
Vect3 normal = Normals[std::stoi(vertex1[2])-1];
Vertex v1(position,normal,uv);
vertices.push_back(v1);
//VERTEX 2
position = Positions[std::stoi(vertex2[0])-1];
uv = UVs[std::stoi(vertex2[1])-1];
normal = Normals[std::stoi(vertex2[2])-1];
Vertex v2(position,normal,uv);
vertices.push_back(v2);
//VERTEX 3
position = Positions[std::stoi(vertex3[0])-1];
uv = UVs[std::stoi(vertex3[1])-1];
normal = Normals[std::stoi(vertex3[2])-1];
Vertex v3(position,normal,uv);
vertices.push_back(v3);
//Add to Indices array.
//calculate the index number
//The 3 comes from 3 vertices per face.
unsigned int index = vertices.size() - 3;
V_indices.push_back(index);
V_indices.push_back(index+1);
V_indices.push_back(index+2);
}
//We can check here in which format. V/T/N, V//N, V//, ...
//For now we ignore this and use V//N.
}
}
//close stream
file.close();
Positions.clear();
Normals.clear();
UVs.clear();
//reorder the arrays so the corresponding index matches the position, uv and normal.
for (Vertex v: vertices) {
Positions.push_back(v.getPosition());
Normals.push_back(v.getNormal());
UVs.push_back(v.getUV());
}
//Load mesh data.
_mesh = Mesh(smoothShading,obj_name, Positions, Normals, UVs, V_indices);
//return true, success.
return true;
}
After this the model is inserted in a grid structure for faster intersection tests.
for(int i= 0;i<mesh._indices.size();i=i+3){
Triangle* tri;
if(mesh.smoothShading){
tri = new SmoothTriangle(Point3(mesh._positions[mesh._indices[i]]),
Point3(mesh._positions[mesh._indices[i+1]]),
Point3(mesh._positions[mesh._indices[i+2]]),
Normal(mesh._normals[mesh._indices[i]]),
Normal(mesh._normals[mesh._indices[i+1]]),
Normal(mesh._normals[mesh._indices[i+2]]),material);
}else{
tri = new Triangle(Point3(mesh._positions[mesh._indices[i]]),
Point3(mesh._positions[mesh._indices[i+1]]),Point3(mesh._positions[mesh._indices[i+2]]),Normal(mesh._normals[mesh._indices[i]]),material);
}
add_object(tri);
}
constructCells();
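For context, a uniform grid like this typically maps each primitive's bounding box to one or more cells and only tests the triangles in the cells a ray passes through. A minimal sketch of the point-to-cell mapping, where the GridInfo type and field names are illustrative assumptions rather than the actual Grid/constructCells implementation:
#include <algorithm>
#include <cmath>

// Illustrative grid description; the real loader's grid class is not shown here.
struct GridInfo {
    double minX, minY, minZ; // bounding-box minimum of the mesh
    double cellSize;         // edge length of one cubic cell
    int nx, ny, nz;          // number of cells along each axis
};

// Clamp so points on the max boundary still land in the last cell.
inline int cellCoord(double p, double minP, double cellSize, int n)
{
    int c = static_cast<int>(std::floor((p - minP) / cellSize));
    return std::min(std::max(c, 0), n - 1);
}

inline int cellIndex(const GridInfo& g, double x, double y, double z)
{
    int ix = cellCoord(x, g.minX, g.cellSize, g.nx);
    int iy = cellCoord(y, g.minY, g.cellSize, g.ny);
    int iz = cellCoord(z, g.minZ, g.cellSize, g.nz);
    return ix + g.nx * (iy + g.ny * iz); // flatten 3D cell coords to a 1D index
}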
It may also be worth adding the code for interpolating the normals:
Normal SmoothTriangle::calculate_normal(double gamma, double beta){
return (Normal((1 - beta - gamma) * n0 + beta * n1 + gamma * n2)).normalize();
}
FIXED
I fixed the issue. It was not in my OBJ loader: the model was exported from Blender with all modifiers applied, and the Solidify modifier caused some back faces to clip with the front faces after exporting to the .obj file. After removing this modifier, everything was back to "normal" (just a funny pun to finish this answer).
There might be nothing wrong in your code; I assume the OBJ itself is corrupted, as some OBJ models simply have flipped normals...
The Wavefront OBJ format does not specify the normal direction at all. I have even seen models with no consistency, where some normals point out and others in, and you cannot even be sure the faces share a single winding rule. So it is safer to use bidirectional normals, i.e. using
|dot(normal,light)|
instead of
dot(normal,light)
with no face culling, or to recompute the normals (and even the winding rule) on your own after loading.
Bidirectional normals/lighting are sometimes set by different material settings for each side of a face (FRONT and BACK, FRONT_AND_BACK, DOUBLE_SIDED, etc., or their configuration); just look in your gfx API for such options. To turn off face culling, look for things like CULL_FACE.
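As a minimal sketch of the bidirectional idea in a C++ shading routine (the vector type and function names below are illustrative assumptions, not the asker's actual Vect3/Normal API):
#include <algorithm>
#include <cmath>

// Illustrative stand-in vector type; the asker's Vect3/Normal would be used instead.
struct V3 { double x, y, z; };

double dot(const V3& a, const V3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One-sided diffuse: faces whose normal points away from the light go black.
double oneSidedDiffuse(const V3& n, const V3& l) { return std::max(0.0, dot(n, l)); }

// Two-sided ("bidirectional") diffuse: the absolute value shades a face the
// same way no matter which side its imported normal happens to point to.
double twoSidedDiffuse(const V3& n, const V3& l) { return std::fabs(dot(n, l)); }
With the two-sided version (and face culling disabled), inconsistently flipped normals in the OBJ no longer show up as dark or inverted shading.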

Using 2D metaballs to draw an outline with a constant thickness

I'm applying the concept of metaballs to a game I'm making in order to show that the player has selected a few ships, like so: http://prntscr.com/klgktf
However, my goal is to keep a constant thickness of this outline, and that's not what I'm getting with the current code.
I'm using a GLSL shader to do this, and I pass the fragment shader a uniform array of positions for the ships (u_metaballs).
Vertex shader:
#version 120
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment shader:
#version 120
uniform vec2 u_metaballs[128];

void main() {
    float intensity = 0;
    for(int i = 0; i < 128 && u_metaballs[i].x != 0; i++){
        float r = length(u_metaballs[i] - gl_FragCoord.xy);
        intensity += 1 / r;
    }

    gl_FragColor = vec4(0, 0, 0, 0);
    if(intensity > .2 && intensity < .21)
        gl_FragColor = vec4(.5, 1, .7, .2);
}
I've tried playing around with the intensity ranges, and even changing 1 / r to 10000 / (r ^ 4) which (although it makes no sense) helps a bit, though it does not fix the problem.
Any help or suggestions would be greatly appreciated.
After some more thought, it is doable even in a single pass ... you just compute the distance to the nearest metaball, and if it is within the boundary thickness you render the fragment, otherwise you discard it ... Here is an example (assuming a single quad <-1,+1> is rendered covering the whole screen):
Vertex:
// Vertex
varying vec2 pos; // fragment position in world space
void main()
{
    pos=gl_Vertex.xy;
    gl_Position=ftransform();
}
Fragment:
// Fragment
#version 120
varying vec2 pos;

const float r=0.3;  // metaball radius
const float w=0.02; // border line thickness

uniform vec2 u_metaballs[5]=
{
    vec2(-0.25,-0.25),
    vec2(+0.25,-0.25),
    vec2( 0.00,+0.05),
    vec2(+0.30,+0.35),
    vec2(-1000.1,-1000.1), // end of metaballs
};

void main()
{
    int i;
    float d;
    // d = min distance to any metaball
    for (d=r+r+w+w,i=0;u_metaballs[i].x>-1000.0;i++)
        d=min(d,length(pos-u_metaballs[i].xy));
    // if outside range ignore fragment
    if ((d<r)||(d>r+w)) discard;
    // otherwise render it
    gl_FragColor=vec4(1.0,1.0,1.0,1.0);
}
Preview:

How can I handle drawing a circle, having that circle break, and begin drawing elsewhere?

Working in Processing, I am trying to build my first generative patch. What I want to happen is to start drawing a circle somewhere on screen (a point following the path of a circle); then, after a random amount of time, the circle breaks, the line goes off in a random direction for a random amount of time, and begins drawing a new circle elsewhere.
Right now I have the circle being drawn, and I have a toggle mechanism that turns on and off after a random period of time. I can't figure out how to get it to "break" from the original circle, let alone get it to start a new circle elsewhere. Would anybody have some advice on how to accomplish this? I think it could have an interesting visual effect.
Rotor r;
float timer = 0;
boolean freeze = false;
void setup() {
size(1000,600);
smooth();
noFill();
frameRate(60);
background(255);
timeLimit();
r = new Rotor(random(width),random(height),random(40,100));
}
void draw() {
float t = frameCount / 100.0;
timer = timer + frameRate/1000;
r.drawRotor(t);
if(timer > timeLimit()){
timer = 0;
timeLimit();
if(freeze == true){
freeze = false;
}else{
freeze = true;
}
background(255);
}
}
float timeLimit(){
float timeLimit = random(200);
return timeLimit;
}
Rotor Class:
class Rotor {
color c;
int thickness;
float xPoint;
float yPoint;
float radius;
float angle = 0;
float centerX;
float centerY;
Rotor(float cX, float cY, float rad) {
c = color(0);
thickness = 1;
centerX = cX;
centerY = cY;
radius = rad;
}
void drawRotor(float t) {
stroke(c);
strokeWeight(thickness);
angle = angle + frameRate/1000;
xPoint = centerX + cos(angle) * radius;
yPoint = centerY + sin(angle) * radius;
ellipse(xPoint, yPoint,thickness,thickness);
}
}
First, to answer your question about "breaking" the circle: you need to create a new Rotor instance or just change its properties, such as the center and radius. If I understood your idea correctly you only need one Rotor instance, so just change these values:
r.centerX = newX;
r.centerY = newY;
r.radius = random(40,100); //as you have in setup
But how do you calculate the new position? It could be random, but since you want to create a path you need to calculate it, and here comes the tricky part: how do you make the connecting line and start a new circle?
First you will need two modes: the first draws the circle, the second draws the line. The simplest way to achieve that is by updating the Rotor's draw method [you can pass the mode variable as a parameter of the drawRotor function or as a global variable]:
if(mode == 1){
angle += frameRate/1000;
}else{
radius += 2;
}
As you can see, I just switch between increasing the angle to draw the circle and increasing the radius to draw the line (not in a random direction, but directly away from the center). Then we need to calculate the new position of the circle's center. To do this we simply calculate how it would continue according to the angle and substitute the new radius, so the whole part looks like this:
if(mode != 1){
float newR = random(40,100);
float newX = r.centerX + cos(r.angle) * (r.radius - newR);
float newY = r.centerY + sin(r.angle) * (r.radius - newR);
r.newPos(newX, newY);
r.radius = newR; //we cant change it earlier because we need also old value
}
This should happen inside your "time handler" function, and only when you change the mode back to drawing the circle. The mode can simply be toggled within the handler:
mode *= -1; //but it needs to be initialized to 1 inside setup()
If you want the path to always stay visible, just delete the background() call; but if you want a cool fading effect, add this at the beginning of draw():
noStroke(); //No stroke needed, and you turn it on again in drawRotor()
fill( 255,255,255, 10 ); //This will set transparency to 10%
rect(0,0,width,height); //You put a translucent layer over everything after each "point" you draw
noFill(); //This will restore the fill settings you had before
Here I paste the whole code just for demonstration; you should modify it according to your own purpose. It is better to code your own version.
The call to background() usually comes as the first thing in draw(). That's because draw() only renders at the end of each loop (frame), so calling background() at the beginning clears everything drawn in the last frame. If you need the drawing to persist across frames, you can either remove the call to background(), redraw your stuff every frame, or draw your stuff into a PGraphics and display that.
The other thing is that each time the Rotor stops, you should give it new random coordinates.
If you go with removing the background() call, this will do the trick:
Rotor r;
float timer = 0;
boolean freeze = false;
void setup() {
size(1000,600);
smooth();
noFill();
frameRate(60);
background(255);
timeLimit();
r = new Rotor(random(width),random(height),random(40,100));
}
void draw() {
float t = frameCount / 100.0;
timer = timer + frameRate/1000;
r.drawRotor(t);
if(timer > timeLimit()){
timer = 0;
timeLimit();
//***** here new coordinates!!
r = new Rotor(random(width),random(height),random(40,100));
//*****
if(freeze == true){
freeze = false;
}else{
freeze = true;
}
//***** no background()
// background(255);
}
}
float timeLimit(){
float timeLimit = random(200);
return timeLimit;
}
class Rotor {
color c;
int thickness;
float xPoint;
float yPoint;
float radius;
float angle = 0;
float centerX;
float centerY;
Rotor(float cX, float cY, float rad) {
c = color(0);
thickness = 1;
centerX = cX;
centerY = cY;
radius = rad;
}
void drawRotor(float t) {
stroke(c);
strokeWeight(thickness);
angle = angle + frameRate/1000;
xPoint = centerX + cos(angle) * radius;
yPoint = centerY + sin(angle) * radius;
ellipse(xPoint, yPoint,thickness,thickness);
}
}
Now, if you need to clear the screen, you can make a List (ArrayList?) and add a new Rotor to it when the previous one is done. But you need to manage each Rotor so it can also display itself without animating: a newly created Rotor would animate, and the old ones would just display their arcs without animating. Or make a PGraphics with no background() call and display it in the main canvas, which can have a background() call...
As a side note, be aware that relying on frameRate to time things makes the sketch dependent on system performance. You can do the same thing using millis() to avoid that. It is not an issue so far, as this is still very light, but it may become one if the project grows further.

DirectX 11: text output, using your own font texture

I'm learning DirectX using the book "Sherrod A., Jones W. - Beginning DirectX 11 Game Programming - 2011". Now I'm exploring the 4th chapter, about drawing text.
Please help me fix the function I'm using to draw a string on the screen. I've already loaded the font texture, and in the function I create some sprites with letters and define texture coordinates for them. This compiles correctly, but doesn't draw anything. What's wrong?
bool DirectXSpriteGame :: DrawString(char* StringToDraw, float StartX, float StartY)
{
//VAR
HRESULT D3DResult; //The result of D3D functions
int i; //Counters
const int IndexA = static_cast<char>('A'); //ASCII index of letter A
const int IndexZ = static_cast<char>('Z'); //ASCII index of letter Z
int StringLenth = strlen(StringToDraw); //Length of the string being drawn
float ScreenCharWidth = static_cast<float>(LETTER_WIDTH) / static_cast<float>(SCREEN_WIDTH); //Width of the single char on the screen(in %)
float ScreenCharHeight = static_cast<float>(LETTER_HEIGHT) / static_cast<float>(SCREEN_HEIGHT); //Height of the single char on the screen(in %)
float TexelCharWidth = 1.0f / static_cast<float>(LETTERS_NUM); //Width of the char texel(in the texture %)
float ThisStartX; //The start x of the current letter being drawn
float ThisStartY; //The start y of the current letter being drawn
float ThisEndX; //The end x of the current letter being drawn
float ThisEndY; //The end y of the current letter being drawn
int LetterNum; //Letter number in the loaded font
int ThisLetter; //The current letter
D3D11_MAPPED_SUBRESOURCE MapResource; //Map resource
VertexPos* ThisSprite; //Vertices of the current sprite being drawn
//VAR
//Clamping string, if too long
if(StringLenth > LETTERS_NUM)
{
StringLenth = LETTERS_NUM;
}
//Mapping resource
D3DResult = _DeviceContext -> Map(_vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &MapResource);
if(FAILED(D3DResult))
{
throw("Failed to map resource");
}
ThisSprite = (VertexPos*)MapResource.pData;
for(i = 0; i < StringLenth; i++)
{
//Creating geometry for the letter sprite
ThisStartX = StartX + ScreenCharWidth * static_cast<float>(i);
ThisStartY = StartY;
ThisEndX = ThisStartX + ScreenCharWidth;
ThisEndY = StartY + ScreenCharHeight;
ThisSprite[0].Position = XMFLOAT3(ThisEndX, ThisEndY, 1.0f);
ThisSprite[1].Position = XMFLOAT3(ThisEndX, ThisStartY, 1.0f);
ThisSprite[2].Position = XMFLOAT3(ThisStartX, ThisStartY, 1.0f);
ThisSprite[3].Position = XMFLOAT3(ThisStartX, ThisStartY, 1.0f);
ThisSprite[4].Position = XMFLOAT3(ThisStartX, ThisEndY, 1.0f);
ThisSprite[5].Position = XMFLOAT3(ThisEndX, ThisEndY, 1.0f);
ThisLetter = static_cast<char>(StringToDraw[i]);
//Defining the letter place(number) in the font
if(ThisLetter < IndexA || ThisLetter > IndexZ)
{
//Invalid character: use the last character in the loaded font
LetterNum = IndexZ - IndexA + 1;
}
else
{
LetterNum = ThisLetter - IndexA;
}
//Unwrapping the texture onto the geometry
ThisStartX = TexelCharWidth * static_cast<float>(LetterNum);
ThisStartY = 0.0f;
ThisEndY = 1.0f;
ThisEndX = ThisStartX + TexelCharWidth;
ThisSprite[0].TextureCoords = XMFLOAT2(ThisEndX, ThisEndY);
ThisSprite[1].TextureCoords = XMFLOAT2(ThisEndX, ThisStartY);
ThisSprite[2].TextureCoords = XMFLOAT2(ThisStartX, ThisStartY);
ThisSprite[3].TextureCoords = XMFLOAT2(ThisStartX, ThisStartY);
ThisSprite[4].TextureCoords = XMFLOAT2(ThisStartX, ThisEndY);
ThisSprite[5].TextureCoords = XMFLOAT2(ThisEndX, ThisEndY);
ThisSprite += VERTEX_IN_RECT_NUM;
}
for(i = 0; i < StringLenth; i++, ThisSprite -= VERTEX_IN_RECT_NUM);
_DeviceContext -> Unmap(_vertexBuffer, 0);
_DeviceContext -> Draw(VERTEX_IN_RECT_NUM * StringLenth, 0);
return true;
}
Although the piece of code constructing the vertex array seems correct to me at first glance, it looks like you are trying to draw your vertices with a shader that has not been set yet!
It is difficult to answer you precisely without seeing the whole code, but I can guess that you will need to do something like this:
1) Create the Vertex and Pixel Shaders by compiling them first from their respective buffers.
2) Create the Input Layout description, which describes the input buffers that will be read by the Input Assembler stage. It will have to match your VertexPos structure and your shader's input structure (a sketch is given after the snippet below).
3) Set the shader parameters.
4) Only now can you set the shader rendering state: set the InputLayout, as well as the Vertex and Pixel Shaders that will be used to render your triangles, with something like:
_DeviceContext -> Unmap(_vertexBuffer, 0);
_DeviceContext->IASetInputLayout(myInputLayout);
_DeviceContext->VSSetShader(myVertexShader, NULL, 0); // Set Vertex shader
_DeviceContext->PSSetShader(myPixelShader, NULL, 0); // Set Pixel shader
_DeviceContext -> Draw(VERTEX_IN_RECT_NUM * StringLenth, 0);
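For step 2, here is a rough sketch of an input layout matching a position + texture-coordinate vertex. The _Device, vsBlob and myInputLayout names are assumptions about your surrounding code, and the semantics/formats must match your real VertexPos struct and your vertex shader's input signature:
// Sketch only: adjust to your actual VertexPos (XMFLOAT3 Position,
// XMFLOAT2 TextureCoords) and the compiled vertex shader blob (vsBlob).
D3D11_INPUT_ELEMENT_DESC layoutDesc[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

HRESULT hr = _Device->CreateInputLayout(layoutDesc, ARRAYSIZE(layoutDesc),
                                        vsBlob->GetBufferPointer(),
                                        vsBlob->GetBufferSize(),
                                        &myInputLayout);
The input layout is created once (right after compiling the vertex shader) and then bound with IASetInputLayout as in the snippet above.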
This link should help you achieve what you want to do : http://www.rastertek.com/dx11tut12.html
Also, I recommend setting an index buffer and using DrawIndexed to render your triangles, for performance reasons: it allows the graphics adapter to store vertices in a vertex cache, so recently used vertices can be fetched from the cache instead of being read again from the vertex buffer.
More about this concern can be found on MSDN : http://msdn.microsoft.com/en-us/library/windows/desktop/bb147325(v=vs.85).aspx
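As a rough illustration of that idea (the _Device, _DeviceContext and quadCount names are assumptions, not your code), each character quad can be described by 4 unique vertices plus 6 indices, built once and reused:
// Sketch only (requires <vector>): 6 indices per character quad referencing
// 4 unique vertices; the winding must match your rasterizer state.
std::vector<UINT> indices;
for (UINT q = 0; q < quadCount; ++q)
{
    UINT base = q * 4;
    UINT quad[6] = { base, base + 1, base + 2, base + 2, base + 3, base };
    indices.insert(indices.end(), quad, quad + 6);
}

D3D11_BUFFER_DESC ibDesc = {};
ibDesc.Usage = D3D11_USAGE_DEFAULT;
ibDesc.ByteWidth = static_cast<UINT>(sizeof(UINT) * indices.size());
ibDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;

D3D11_SUBRESOURCE_DATA ibData = {};
ibData.pSysMem = indices.data();

ID3D11Buffer* indexBuffer = nullptr;
_Device->CreateBuffer(&ibDesc, &ibData, &indexBuffer);

// At draw time, bind the index buffer and call DrawIndexed instead of Draw:
_DeviceContext->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R32_UINT, 0);
_DeviceContext->DrawIndexed(6 * quadCount, 0, 0);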
Hope this helps!
P.S : Also, don't forget to release the resources after using them by calling Release().
