DirectX 11: text output using your own font texture

I'm learning DirectX using the book "Sherrod A., Jones W. - Beginning DirectX 11 Game Programming - 2011". Now I'm exploring the 4th chapter, about drawing text.
Please help me fix the function that I'm using to draw a string on the screen. I've already loaded the font texture, and in the function I create some sprites with letters and define texture coordinates for them. This compiles correctly, but doesn't draw anything. What's wrong?
bool DirectXSpriteGame::DrawString(char* StringToDraw, float StartX, float StartY)
{
    //VAR
    HRESULT D3DResult;                              //The result of D3D functions
    int i;                                          //Counter
    const int IndexA = static_cast<char>('A');      //ASCII index of letter A
    const int IndexZ = static_cast<char>('Z');      //ASCII index of letter Z
    int StringLenth = strlen(StringToDraw);         //Length of the string to draw
    float ScreenCharWidth = static_cast<float>(LETTER_WIDTH) / static_cast<float>(SCREEN_WIDTH);    //Width of a single char on the screen (in %)
    float ScreenCharHeight = static_cast<float>(LETTER_HEIGHT) / static_cast<float>(SCREEN_HEIGHT); //Height of a single char on the screen (in %)
    float TexelCharWidth = 1.0f / static_cast<float>(LETTERS_NUM);    //Width of one char in the texture (in texture %)
    float ThisStartX;                               //The start x of the current letter being drawn
    float ThisStartY;                               //The start y of the current letter being drawn
    float ThisEndX;                                 //The end x of the current letter being drawn
    float ThisEndY;                                 //The end y of the current letter being drawn
    int LetterNum;                                  //Letter number in the loaded font
    int ThisLetter;                                 //The current letter
    D3D11_MAPPED_SUBRESOURCE MapResource;           //Map resource
    VertexPos* ThisSprite;                          //Vertices of the current sprite being drawn
    //VAR

    //Clamp the string if it is too long
    if(StringLenth > LETTERS_NUM)
    {
        StringLenth = LETTERS_NUM;
    }
    //Map the resource
    D3DResult = _DeviceContext->Map(_vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &MapResource);
    if(FAILED(D3DResult))
    {
        throw("Failed to map resource");
    }
    ThisSprite = (VertexPos*)MapResource.pData;
    for(i = 0; i < StringLenth; i++)
    {
        //Create geometry for the letter sprite
        ThisStartX = StartX + ScreenCharWidth * static_cast<float>(i);
        ThisStartY = StartY;
        ThisEndX = ThisStartX + ScreenCharWidth;
        ThisEndY = StartY + ScreenCharHeight;
        ThisSprite[0].Position = XMFLOAT3(ThisEndX, ThisEndY, 1.0f);
        ThisSprite[1].Position = XMFLOAT3(ThisEndX, ThisStartY, 1.0f);
        ThisSprite[2].Position = XMFLOAT3(ThisStartX, ThisStartY, 1.0f);
        ThisSprite[3].Position = XMFLOAT3(ThisStartX, ThisStartY, 1.0f);
        ThisSprite[4].Position = XMFLOAT3(ThisStartX, ThisEndY, 1.0f);
        ThisSprite[5].Position = XMFLOAT3(ThisEndX, ThisEndY, 1.0f);
        ThisLetter = static_cast<char>(StringToDraw[i]);
        //Determine the letter's place (number) in the font
        if(ThisLetter < IndexA || ThisLetter > IndexZ)
        {
            //Invalid character: use the last character in the loaded font
            LetterNum = IndexZ - IndexA + 1;
        }
        else
        {
            LetterNum = ThisLetter - IndexA;
        }
        //Unwrap the texture onto the geometry
        ThisStartX = TexelCharWidth * static_cast<float>(LetterNum);
        ThisStartY = 0.0f;
        ThisEndY = 1.0f;
        ThisEndX = ThisStartX + TexelCharWidth;
        ThisSprite[0].TextureCoords = XMFLOAT2(ThisEndX, ThisEndY);
        ThisSprite[1].TextureCoords = XMFLOAT2(ThisEndX, ThisStartY);
        ThisSprite[2].TextureCoords = XMFLOAT2(ThisStartX, ThisStartY);
        ThisSprite[3].TextureCoords = XMFLOAT2(ThisStartX, ThisStartY);
        ThisSprite[4].TextureCoords = XMFLOAT2(ThisStartX, ThisEndY);
        ThisSprite[5].TextureCoords = XMFLOAT2(ThisEndX, ThisEndY);
        ThisSprite += VERTEX_IN_RECT_NUM;
    }
    //Rewind the sprite pointer (note: this loop has an empty body)
    for(i = 0; i < StringLenth; i++, ThisSprite -= VERTEX_IN_RECT_NUM);
    _DeviceContext->Unmap(_vertexBuffer, 0);
    _DeviceContext->Draw(VERTEX_IN_RECT_NUM * StringLenth, 0);
    return true;
}

Although the piece of code constructing the vertex array seems correct to me at first glance, it looks like you are trying to Draw your vertices with a shader that has not been set yet!
It is difficult to answer precisely without seeing the whole code, but my guess is that you will need to do something like this:
1) Create Vertex and Pixel Shaders by compiling them first from their respective buffers
2) Create the Input Layout description, which describes the Input Buffers that will be read by the Input Assembler stage. It will have to match your VertexPos structure and your shader structure.
3) Set the Shader parameters.
4) Only now can you set the shader rendering state: set the input layout, as well as the Vertex and Pixel Shaders that will be used to render your triangles, with something like:
_DeviceContext -> Unmap(_vertexBuffer, 0);
_DeviceContext->IASetInputLayout(myInputLayout);
_DeviceContext->VSSetShader(myVertexShader, NULL, 0); // Set Vertex shader
_DeviceContext->PSSetShader(myPixelShader, NULL, 0); // Set Pixel shader
_DeviceContext -> Draw(VERTEX_IN_RECT_NUM * StringLenth, 0);
This link should help you achieve what you want to do: http://www.rastertek.com/dx11tut12.html
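For reference, here is a minimal sketch of steps 1) and 2), assuming _Device is the ID3D11Device your buffers were created with, the HLSL entry points are called VS_Main / PS_Main, and VertexPos holds a float3 position plus a float2 texture coordinate (these names are illustrative, not taken from your framework):
#include <d3dcompiler.h>   // link with d3dcompiler.lib

// 1) Compile and create the shaders (error handling omitted for brevity).
ID3DBlob* vsBlob = nullptr;
ID3DBlob* psBlob = nullptr;
D3DCompileFromFile(L"TextShader.fx", nullptr, nullptr, "VS_Main", "vs_4_0", 0, 0, &vsBlob, nullptr);
D3DCompileFromFile(L"TextShader.fx", nullptr, nullptr, "PS_Main", "ps_4_0", 0, 0, &psBlob, nullptr);
_Device->CreateVertexShader(vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(), nullptr, &myVertexShader);
_Device->CreatePixelShader(psBlob->GetBufferPointer(), psBlob->GetBufferSize(), nullptr, &myPixelShader);

// 2) Input layout matching VertexPos: float3 position + float2 texture coords.
//    The semantic names must match the vertex shader's input structure.
D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,                            D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
_Device->CreateInputLayout(layout, ARRAYSIZE(layout), vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(), &myInputLayout);
Also make sure the vertex buffer itself is bound with IASetVertexBuffers and a primitive topology is set (e.g. D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST) before calling Draw, otherwise nothing will be rendered either.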
Also, I recommend setting up an index buffer and using DrawIndexed to render your triangles, for performance reasons: it allows the graphics adapter to store vertices in a vertex cache, so recently used vertices can be fetched from the cache instead of being read from the vertex buffer again.
More about this concern can be found on MSDN : http://msdn.microsoft.com/en-us/library/windows/desktop/bb147325(v=vs.85).aspx
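A sketch of what that could look like for the character quads, assuming the vertex buffer is rewritten to hold 4 vertices per character instead of 6 and that an ID3D11Buffer* _indexBuffer member exists (again, names beyond those in the question are illustrative):
// Two triangles per character quad, reusing 4 vertices via 6 indices.
// The exact index order depends on how you lay out the quad's vertices
// and on the cull mode. LETTERS_NUM is what the vertex buffer is sized for.
std::vector<WORD> indices;
for (int c = 0; c < LETTERS_NUM; ++c)
{
    WORD base = static_cast<WORD>(c * 4);
    WORD quad[6] = { base, base + 1, base + 2, base + 2, base + 3, base };
    indices.insert(indices.end(), quad, quad + 6);
}

D3D11_BUFFER_DESC ibDesc = {};
ibDesc.Usage = D3D11_USAGE_DEFAULT;
ibDesc.ByteWidth = static_cast<UINT>(sizeof(WORD) * indices.size());
ibDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;

D3D11_SUBRESOURCE_DATA ibData = {};
ibData.pSysMem = indices.data();
_Device->CreateBuffer(&ibDesc, &ibData, &_indexBuffer);

// At draw time:
_DeviceContext->IASetIndexBuffer(_indexBuffer, DXGI_FORMAT_R16_UINT, 0);
_DeviceContext->DrawIndexed(6 * StringLenth, 0, 0);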
Hope this helps!
P.S.: Also, don't forget to release the resources after using them by calling Release().

Flipped normals after loading models in my raytracer

I'm working on a path/ray tracer in C++ and am now working on loading OBJ files. But some objects have flipped normals after being loaded, and I can't figure out where this behaviour comes from or how to fix it.
See the image for a better understanding of the issue (image showing the current behaviour).
Link to full GitHub page
At first I thought it was an issue with the normals behind the surface, but after rendering the colour based on the surface normal, it's obvious that the normals are flipped in some cases.
Here is my very basic code for loading the model:
//OBJ Loader object.
//OBJ Loader object.
bool OBJLoader::loadMesh(std::string filePath){
    // If the file is not an .obj file return false
    if (filePath.substr(filePath.size() - 4, 4) != ".obj"){
        std::cout << "No .obj file found at given file location: " << filePath << std::endl;
        return false;
    }
    //Open file stream
    std::ifstream file(filePath);
    //check if the file is open.
    if (!file.is_open()){
        std::cout << "File was not opened!" << std::endl;
        return false;
    }
    //Do file loading.
    std::cout << "Parsing obj-file: " << filePath << std::endl;
    //construct mesh data.
    bool smoothShading = false;
    std::string obj_name;
    std::vector<Vertex> vertices;
    std::vector<Vect3> Positions;
    std::vector<Vect3> Normals;
    std::vector<Vect2> UVs;
    std::vector<unsigned int> V_indices;
    //the current line
    std::string currentLine;
    //loop over each line and parse the needed data.
    while(std::getline(file, currentLine)){
        //for now we just print the line
        //std::cout << currentLine << std::endl;
        if(algorithm::startsWith(currentLine, "s ")){
            std::vector<std::string> line_split = algorithm::split(currentLine, ' ');
            if(line_split[1] == std::string("off")){
                smoothShading = false;
            }else if(line_split[1] == std::string("1")){
                //enable smooth shading;
                smoothShading = true;
            }
        }
        //check if the line starts with o -> object name.
        if(algorithm::startsWith(currentLine, "o ")){
            //store the object name.
            std::vector<std::string> line_split = algorithm::split(currentLine, ' ');
            obj_name = line_split[1];
        }
        //check if the line starts with v -> vertex position.
        if(algorithm::startsWith(currentLine, "v ")){
            //construct a new vertex position.
            std::vector<std::string> line_split = algorithm::split(currentLine, ' ');
            float x = std::stof(line_split[1]);
            float y = std::stof(line_split[2]);
            float z = std::stof(line_split[3]);
            Vect3 pos = Vect3(x, y, z);
            Positions.push_back(pos);
        }
        //check if the line starts with vt -> vertex uv.
        if(algorithm::startsWith(currentLine, "vt ")){
            //construct a new vertex uv.
            std::vector<std::string> line_split = algorithm::split(currentLine, ' ');
            float u = std::stof(line_split[1]);
            float v = std::stof(line_split[2]);
            Vect2 uv = Vect2(u, v);
            UVs.push_back(uv);
        }
        //check if the line starts with vn -> vertex normal.
        if(algorithm::startsWith(currentLine, "vn ")){
            //construct a new vertex normal.
            std::vector<std::string> line_split = algorithm::split(currentLine, ' ');
            float x = std::stof(line_split[1]);
            float y = std::stof(line_split[2]);
            float z = std::stof(line_split[3]);
            Vect3 normal = Vect3(x, y, z);
            Normals.push_back(normal);
        }
        //check if the line starts with f -> construct faces.
        if(algorithm::startsWith(currentLine, "f ")){
            //construct a new face.
            std::vector<std::string> line_split = algorithm::split(currentLine, ' ');
            //#NOTE: this only works when the mesh is already triangulated.
            //Parse all vertices.
            std::vector<std::string> vertex1 = algorithm::split(line_split[1], '/');
            std::vector<std::string> vertex2 = algorithm::split(line_split[2], '/');
            std::vector<std::string> vertex3 = algorithm::split(line_split[3], '/');
            if(vertex1.size() <= 1){
                //Position only (V).
                //VERTEX 1
                Vect3 position = Positions[std::stoi(vertex1[0]) - 1];
                Vertex v1(position);
                vertices.push_back(v1);
                //VERTEX 2
                position = Positions[std::stoi(vertex2[0]) - 1];
                Vertex v2(position);
                vertices.push_back(v2);
                //VERTEX 3
                position = Positions[std::stoi(vertex3[0]) - 1];
                Vertex v3(position);
                vertices.push_back(v3);
                //Add to the indices array.
                //Calculate the index number.
                //The 3 comes from 3 vertices per face.
                unsigned int index = vertices.size() - 3;
                V_indices.push_back(index);
                V_indices.push_back(index + 1);
                V_indices.push_back(index + 2);
            }
            //check if T exists.
            else if(vertex1[1] == ""){
                //No UV (V//N).
                //V -> index in the positions array.
                //N -> index in the normals array.
                //VERTEX 1
                Vect3 position = Positions[std::stoi(vertex1[0]) - 1];
                Vect3 normal = Normals[std::stoi(vertex1[2]) - 1];
                Vertex v1(position, normal);
                vertices.push_back(v1);
                //VERTEX 2
                position = Positions[std::stoi(vertex2[0]) - 1];
                normal = Normals[std::stoi(vertex2[2]) - 1];
                Vertex v2(position, normal);
                vertices.push_back(v2);
                //VERTEX 3
                position = Positions[std::stoi(vertex3[0]) - 1];
                normal = Normals[std::stoi(vertex3[2]) - 1];
                Vertex v3(position, normal);
                vertices.push_back(v3);
                //Add to the indices array.
                //Calculate the index number.
                //The 3 comes from 3 vertices per face.
                unsigned int index = vertices.size() - 3;
                V_indices.push_back(index);
                V_indices.push_back(index + 1);
                V_indices.push_back(index + 2);
            }else if(vertex1[1] != ""){
                //We have UV (V/T/N).
                //V -> index in the positions array.
                //T -> index in the UVs array.
                //N -> index in the normals array.
                //VERTEX 1
                Vect3 position = Positions[std::stoi(vertex1[0]) - 1];
                Vect2 uv = UVs[std::stoi(vertex1[1]) - 1];
                Vect3 normal = Normals[std::stoi(vertex1[2]) - 1];
                Vertex v1(position, normal, uv);
                vertices.push_back(v1);
                //VERTEX 2
                position = Positions[std::stoi(vertex2[0]) - 1];
                uv = UVs[std::stoi(vertex2[1]) - 1];
                normal = Normals[std::stoi(vertex2[2]) - 1];
                Vertex v2(position, normal, uv);
                vertices.push_back(v2);
                //VERTEX 3
                position = Positions[std::stoi(vertex3[0]) - 1];
                uv = UVs[std::stoi(vertex3[1]) - 1];
                normal = Normals[std::stoi(vertex3[2]) - 1];
                Vertex v3(position, normal, uv);
                vertices.push_back(v3);
                //Add to the indices array.
                //Calculate the index number.
                //The 3 comes from 3 vertices per face.
                unsigned int index = vertices.size() - 3;
                V_indices.push_back(index);
                V_indices.push_back(index + 1);
                V_indices.push_back(index + 2);
            }
            //We could check here which format is used: V/T/N, V//N, V//, ...
            //For now we ignore this and use V//N.
        }
    }
    //close the stream
    file.close();
    Positions.clear();
    Normals.clear();
    UVs.clear();
    //reorder the arrays so the corresponding indices match the positions, uvs and normals.
    for (Vertex v : vertices) {
        Positions.push_back(v.getPosition());
        Normals.push_back(v.getNormal());
        UVs.push_back(v.getUV());
    }
    //Load the mesh data.
    _mesh = Mesh(smoothShading, obj_name, Positions, Normals, UVs, V_indices);
    //return true, success.
    return true;
}
After this the model is inserted in a grid structure for faster intersection tests.
for(int i = 0; i < mesh._indices.size(); i = i + 3){
    Triangle* tri;
    if(mesh.smoothShading){
        tri = new SmoothTriangle(Point3(mesh._positions[mesh._indices[i]]),
                                 Point3(mesh._positions[mesh._indices[i+1]]),
                                 Point3(mesh._positions[mesh._indices[i+2]]),
                                 Normal(mesh._normals[mesh._indices[i]]),
                                 Normal(mesh._normals[mesh._indices[i+1]]),
                                 Normal(mesh._normals[mesh._indices[i+2]]), material);
    }else{
        tri = new Triangle(Point3(mesh._positions[mesh._indices[i]]),
                           Point3(mesh._positions[mesh._indices[i+1]]),
                           Point3(mesh._positions[mesh._indices[i+2]]),
                           Normal(mesh._normals[mesh._indices[i]]), material);
    }
    add_object(tri);
}
constructCells();
It may also be useful to include the code for interpolating the normals:
Normal SmoothTriangle::calculate_normal(double gamma, double beta){
    return (Normal((1 - beta - gamma) * n0 + beta * n1 + gamma * n2)).normalize();
}
FIXED
I fixed the issue. It was not in my OBJ loader: the model was exported from Blender, and when exporting, all modifiers were applied. The Solidify modifier caused some back faces to clip through the front faces after exporting to the .obj file. After removing this modifier everything was back to "normal" (just a funny pun to finish this answer).
There might be nothing wrong in your code; I assume the OBJ itself is inconsistent, as some OBJ models have flipped normals...
The Wavefront OBJ format does not specify the normal direction at all. I have even seen models without any consistency, so some normals point out and others in. You cannot even be sure the faces follow a single winding rule. So it is safer to use bidirectional normals, i.e. use
|dot(normal,light)|
instead of
dot(normal,light)
with no face culling, or recompute the normals (and even the winding rule) on your own after loading.
Bidirectional normals/lighting are sometimes enabled via different material settings for each side of a face: FRONT and BACK, FRONT_AND_BACK, DOUBLE_SIDED, etc., or a similar configuration; just look in your gfx API for such options. To turn off face culling, look for things like CULL_FACE.
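To make the two options concrete, here is a small C++ sketch; Vect3 is assumed to have the usual dot/cross/normalize helpers and arithmetic operators (these names are mine, not from the question's code):
// Option 1: bidirectional ("double-sided") lighting. A flipped normal gives
// the same diffuse term, so inconsistent OBJ normals stop mattering.
double diffuse = std::fabs(dot(normal, lightDir));   // instead of std::max(0.0, dot(normal, lightDir))

// Option 2: repair the normals after loading, e.g. make each vertex normal
// agree with the geometric normal of its face (this only helps if the
// winding itself is consistent; otherwise fix the winding first).
Vect3 faceNormal = normalize(cross(p1 - p0, p2 - p0));
if (dot(faceNormal, vertexNormal) < 0.0)
    vertexNormal = -1.0 * vertexNormal;               // flip to agree with the face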

What is the type and range of values vtkCamera focal point expects?

I am using the PCL viewer (which uses VTK) for visualizing a 3D point cloud generated by a SLAM algorithm. I am trying to render the view of the point cloud as seen by the robot at a given pose (position and orientation). I am able to set the position and ViewUp vector of the camera, but I am unable to set the focal point of the camera to the heading of the robot. Currently I am using sliders to set the focal point, but I want to set it programmatically based on the heading.
I am trying to understand the type (angle in rad / distance in m) and range of values the vtkCamera focal point expects, and how that relates to the heading.
This is the function where I am updating the camera:
void Widget::setcamView(){
    //transform the position
    Eigen::Vector3d position = this->transformpose(Eigen::Vector3d(image_pose.at(pose_ittr).position[0], image_pose.at(pose_ittr).position[1], image_pose.at(pose_ittr).position[2]));
    posx = position(0);
    posy = position(1);
    posz = position(2);
    //transform the pose
    Eigen::Vector3d attitude = this->transformpose(Eigen::Vector3d(image_pose.at(pose_ittr).orientation[0], image_pose.at(pose_ittr).orientation[1], image_pose.at(pose_ittr).orientation[2]));
    roll = attitude(0);
    pitch = attitude(1);
    yaw = attitude(2);
    viewx = ui->viewxhSlider->value();// * std::pow(10,-3);
    viewy = ui->viewyhSlider->value();// * std::pow(10,-3);
    viewz = ui->viewzhSlider->value();// * std::pow(10,-3);
    // debug
    std::cout << "Position: " << posx << "\t" << posy << "\t" << posz << std::endl <<
                 "View: " << viewx << "\t" << viewy << "\t" << viewz << std::endl <<
                 "Orientation: " << roll << "\t" << pitch << "\t" << yaw << std::endl;
    point_cutoffy = ui->ptcutoffhSlider->value();
    if(yaw <= 0)
        yaw = yaw * -1;
    viewer->setCameraPosition(posx, posy, posz + 1,
                              viewz, viewy, viewz,
                              0, 0, 1, 0);
    viewer->setCameraFieldOfView(1);
    viewer->setCameraClipDistances(point_cutoffx, point_cutoffy, 0);
    ui->qvtkWidget->update();
    count++;
}
Any help is greatly appreciated. Thanks!
P.S.: the PCL viewer's set-camera implementation (which uses VTK) is:
void pcl::visualization::PCLVisualizer::setCameraPosition (
    double pos_x, double pos_y, double pos_z,
    double view_x, double view_y, double view_z,
    double up_x, double up_y, double up_z,
    int viewport)
{
  rens_->InitTraversal ();
  vtkRenderer* renderer = NULL;
  int i = 0;
  while ((renderer = rens_->GetNextItem ()) != NULL)
  {
    // Modify all renderer's cameras
    if (viewport == 0 || viewport == i)
    {
      vtkSmartPointer<vtkCamera> cam = renderer->GetActiveCamera ();
      cam->SetPosition (pos_x, pos_y, pos_z);
      cam->SetFocalPoint (view_x, view_y, view_z);
      cam->SetViewUp (up_x, up_y, up_z);
      renderer->ResetCameraClippingRange ();
    }
    ++i;
  }
  win_->Render ();
}
I'm working on a very similar problem via OpenCV Viz, which also uses VTK. Regarding your question, I think you can find an answer HERE.
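To add a concrete sketch: the focal point is just a world-space point (same units as the camera position) that the camera looks at; its distance along the viewing ray doesn't matter. So one way to derive it from the robot pose is to step one unit ahead along the heading. Assuming yaw and pitch are in radians, Z is up, and yaw is measured about Z from the +X axis (adjust to your own convention):
// Unit heading direction from yaw/pitch, then place the focal point 1 m ahead.
Eigen::Vector3d dir(std::cos(pitch) * std::cos(yaw),
                    std::cos(pitch) * std::sin(yaw),
                    std::sin(pitch));
Eigen::Vector3d focal = position + dir;

viewer->setCameraPosition(position(0), position(1), position(2),
                          focal(0), focal(1), focal(2),
                          0.0, 0.0, 1.0, 0);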

Can I do random writes from a kernel without worrying about synchronization issues?

Consider a simple depth-of-field filter (my actual use case is similar). It loops over the image and scatters every pixel over a circular neighborhood of its own. The radius of the neighborhood depends on the depth of the pixel: the closer it is to the focal plane, the smaller the radius.
Note that I said "scatters" and not "gathers". In simpler image processing applications, you normally use the "gather" technique to perform a uniform Gaussian blur. IOW, you loop over the neighborhood of each pixel and "gather" the nearby values into a weighted average. This works fine in that case, but if you make the blur kernel vary between pixels while still using "gathering", you get a somewhat unrealistic effect. Such "space-variant filtering" scenarios are where "scattering" differs from "gathering".
To be clear: the scatter algo is something like this:
init resultImage to black
loop over sourceImage
    var c = fetch current pixel from sourceImage
    var toAdd = c * weight // weight < 1
    loop over circular neighbourhood of current sourcepixel
        add toAdd to current neighbor from resultImage
My question is: if I do a direct translation of this pseudocode to OpenCL, will there be synchronization issues due to different work-items simultaneously writing to the same output pixel?
Does the answer vary depending on whether I'm using Buffers or Images?
The course I'm reading suggests that there will be synchronization issues. But OTOH I read the source of Mandelbulber 1.21-2, which does a straightforward OpenCL DOF just like my above pseudocode, and it seems to work fine.
(the relevant code is in mandelbulber-opencl-1.21-2.orig/usr/share/cl/cl_DOF.cl and it's as follows)
//*********************************************************
// MANDELBULBER
// kernel for DOF effect
//
//
// author: Krzysztof Marczak
// contact: buddhi1980#gmail.com
// licence: GNU GPL v3.0
//
//*********************************************************
typedef struct
{
    int width;
    int height;
    float focus;
    float radius;
} sParamsDOF;

typedef struct
{
    float z;
    int i;
} sSortZ;

//------------------ MAIN RENDER FUNCTION --------------------
kernel void DOF(__global ushort4 *in_image, __global ushort4 *out_image, __global sSortZ *zBuffer, sParamsDOF p)
{
    const unsigned int i = get_global_id(0);
    uint index = p.height * p.width - i - 1;
    int ii = zBuffer[index].i;
    int2 scr = (int2){ii % p.width, ii / p.width};
    float z = zBuffer[index].z;
    float blur = fabs(z - p.focus) / z * p.radius;
    blur = min(blur, 500.0f);
    float4 center = convert_float4(in_image[scr.x + scr.y * p.width]);
    float factor = blur * blur * sqrt(blur) * M_PI_F / 3.0f;
    int blurInt = (int)blur;
    int2 scr2;
    int2 start = (int2){scr.x - blurInt, scr.y - blurInt};
    start = max(start, 0);
    int2 end = (int2){scr.x + blurInt, scr.y + blurInt};
    end = min(end, (int2){p.width - 1, p.height - 1});
    for (scr2.y = start.y; scr2.y <= end.y; scr2.y++)
    {
        for (scr2.x = start.x; scr2.x <= end.x; scr2.x++)
        {
            float2 d = scr - scr2;
            float r = length(d);
            float op = (blur - r) / factor;
            op = clamp(op, 0.0f, 1.0f);
            float opN = 1.0f - op;
            uint address = scr2.x + scr2.y * p.width;
            float4 old = convert_float4(out_image[address]);
            out_image[address] = convert_ushort4(opN * old + op * center);
        }
    }
}
No, you can't do this without worrying about synchronization. If two work items scatter to the same location without synchronization, you have a race condition and won't get correct results. The same applies to both buffers and images. With buffers you could use atomics, but they can slow down your code, especially when there is contention (and even when there isn't). AFAIK, read/write images don't have atomic operations.
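To make the race concrete, here is a small CPU-side C++ illustration of the same scatter pattern (not OpenCL, just to show the effect): many threads add into one shared location, and the unsynchronized version typically loses updates while the std::atomic version does not.
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

int main()
{
    const int threads = 8;
    const int addsPerThread = 100000;
    long long plain = 0;                  // unsynchronized "output pixel"
    std::atomic<long long> atomicSum{0};  // synchronized version

    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t)
        pool.emplace_back([&] {
            for (int i = 0; i < addsPerThread; ++i) {
                plain += 1;                                         // data race: updates can be lost
                atomicSum.fetch_add(1, std::memory_order_relaxed);  // correct, but contended writes serialize
            }
        });
    for (auto& th : pool) th.join();

    // Expected 800000 for both; the plain counter usually comes out smaller
    // (and the unsynchronized access is formally undefined behaviour).
    std::cout << "plain: " << plain << "  atomic: " << atomicSum.load() << "\n";
    return 0;
}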

Unexpected behavior of geometry shader using line adjacency input

I am trying to write a simple shader that draws a 3D line with thickness, just to learn geometry shaders in Unity. However, I am facing a problem with the output from the shader when setting the input of the geometry shader to the lineadj topology, which I suspect has something to do with the "weird" values of the third and fourth vertices taken in by the geometry shader.
This is how I generate my mesh from a C# script:
public static GameObject DrawShaderLine(Vector3[] posCont, float thickness, Material mat)
{
    GameObject line = CreateObject("Line", mat);
    line.GetComponent<Renderer>().material.SetFloat("_Thickness", thickness);
    int posContLen = posCont.Length;
    int newVerticeLen = posContLen + 2;
    Vector3[] newVertices = new Vector3[newVerticeLen];
    newVertices[0] = posCont[0] + (posCont[0] - posCont[1]);
    for(int i = 0; i < posContLen; ++i)
    {
        newVertices[i + 1] = posCont[i];
    }
    newVertices[newVerticeLen - 1] = posCont[posContLen - 1] + (posCont[posContLen - 1] - posCont[posContLen - 2]);
    List<int> newIndices = new List<int>();
    for(int i = 1; i < newVerticeLen - 2; ++i)
    {
        newIndices.Add(i - 1);
        newIndices.Add(i);
        newIndices.Add(i + 1);
        newIndices.Add(i + 2);
    }
    Mesh mesh = (line.GetComponent(typeof(MeshFilter)) as MeshFilter).mesh;
    mesh.Clear();
    mesh.vertices = newVertices;
    //mesh.triangles = newTriangles;
    mesh.SetIndices(newIndices.ToArray(), MeshTopology.LineStrip, 0);
    return line;
}
And this is the geometry shader that runs in the shader program:
v2g vert(appdata_base v)
{
    v2g OUT;
    OUT.pos = v.vertex;
    return OUT;
}

[maxvertexcount(2)]
void geom(lineadj v2g p[4], inout LineStream<g2f> triStream)
{
    float4x4 vp = mul(UNITY_MATRIX_MVP, _World2Object);
    g2f OUT;
    float4 pos0 = mul(vp, p[0].pos);
    float4 pos1 = mul(vp, p[1].pos);
    float4 pos2 = mul(vp, p[2].pos);
    float4 pos3 = mul(vp, p[3].pos);

    OUT.pos = pos1;
    OUT.c = half4(1, 0, 0, 1);
    triStream.Append(OUT);

    OUT.pos = pos2;
    OUT.c = half4(0, 1, 0, 1);
    triStream.Append(OUT);

    triStream.RestartStrip();
}
From my understanding, lineadj takes in 4 vertices, with vertex[0] and vertex[3] being the adjacent vertices. So by drawing vertex 1 and vertex 2 I should get my line drawn. However, this is the output I get:
The input vertex positions are (-20,0,0) and (0,-20,0), which are marked by the two center squares. The top-left and bottom-right cubes are the positions of the adjacent vertices generated by the C# function. As you can see, the line seems to connect to position (0,0,0), and the lines are flickering rapidly, which makes me suspect that the vertex data in the GS is corrupted. The start of the line is colored red and the end of the line is colored green.
If I edit the GS to output pos0 and pos1 instead of pos1 and pos2, I get this instead,
with no flickering lines.
And if I plot pos2 and pos3, the result is far crazier (pos2 and pos3 seem to contain rubbish values).
I have been trying to debug this for the whole day with no progress, so I need some help here! Thanks in advance.

BlackBerry - image 3D transform

I know how to rotate an image by any angle with drawTexturedPath:
int displayWidth = Display.getWidth();
int displayHeight = Display.getHeight();
int[] x = new int[] { 0, displayWidth, displayWidth, 0 };
int[] y = new int[] { 0, 0, displayHeight, displayHeight };
int angle = Fixed32.toFP( 45 );
int dux = Fixed32.cosd(angle );
int dvx = -Fixed32.sind( angle );
int duy = Fixed32.sind( angle );
int dvy = Fixed32.cosd( angle );
graphics.drawTexturedPath( x, y, null, null, 0, 0, dvx, dux, dvy, duy, image);
but what I need is a 3D projection of a simple image with a 3D transformation (something like this).
Can you please advise me how to do this with drawTexturedPath (I'm almost sure it's possible)?
Are there any alternatives?
The method used by this function (two walk vectors) is the same as the old-school coding trick used for the famous 'rotozoomer' effect. rotozoomer example video
This method is a very fast way to rotate, zoom, and skew an image. The rotation is done simply by rotating the walk vectors. The zooming is done simply by scaling the walk vectors. The skewing is done by rotating the walk vectors with respect to one another (e.g. so they no longer make a 90-degree angle).
Nintendo built hardware into their SNES to apply the same effect to any of the sprites and/or backgrounds. This made way for some very cool effects.
One big shortcoming of this technique is that you cannot perspectively warp a texture. To do that, the walk vectors have to change slightly on every new horizontal line (hard to explain without a drawing).
On the SNES they overcame this by altering the walk vectors on every scanline (in those days you could set an interrupt when the monitor was drawing any scanline). This mode was later referred to as MODE 7 (since it behaved like a new virtual kind of graphics mode). The most famous games using this mode were Mario Kart and F-Zero.
So to get this working on the BlackBerry, you'll have to draw your image "displayHeight" times (i.e. each time one scanline of the image). This is the only way to achieve the desired effect. (This will undoubtedly cost you a performance hit, since you are now calling the drawTexturedPath function many times with new values instead of just once.)
I guess with a bit of googling you can find some formulas (or even an implementation) for how to calculate the varying walk vectors. With a bit of paper (given you're not too bad at math) you might deduce it yourself too. I've done it myself when I was making games for the Game Boy Advance, so I know it can be done.
Be sure to precalculate everything! Speed is everything (especially on slow machines like phones).
EDIT: did some googling for you. Here's a detailed explanation of how to create the Mode 7 effect. This will help you achieve the same with the BlackBerry function. Mode 7 implementation
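To give a rough idea of the per-scanline maths (a sketch of the simplest case only: flat floor, horizontal view direction, camera height h above the floor, focal length d in pixels, horizon at scanline y_h; the symbols are mine, not from the linked article):
floor distance at scanline y: z(y) = h * d / (y - y_h)
walk-vector scale at scanline y: s(y) = z(y) / d = h / (y - y_h)
Each scanline below the horizon then uses the same rotated direction vectors, just multiplied by s(y): rows near the bottom of the screen step through the texture in small increments (the floor looks close), while rows near the horizon use very large steps (the floor is compressed into the distance).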
With the following code you can skew your image and get a perspective-like effect:
int displayWidth = Display.getWidth();
int displayHeight = Display.getHeight();
int[] x = new int[] { 0, displayWidth, displayWidth, 0 };
int[] y = new int[] { 0, 0, displayHeight, displayHeight };
int dux = Fixed32.toFP(-1);
int dvx = Fixed32.toFP(1);
int duy = Fixed32.toFP(1);
int dvy = Fixed32.toFP(0);
graphics.drawTexturedPath( x, y, null, null, 0, 0, dvx, dux, dvy, duy, image);
This will skew your image at a 45° angle; if you want a specific angle, you just need to use some trigonometry to determine the lengths of your vectors.
Thanks for the answers and guidance, +1 to you all.
MODE 7 was the way I chose to implement the 3D transformation, but unfortunately I couldn't make drawTexturedPath resize my scanlines... so I fell back to a simple drawImage.
Assuming you have a Bitmap inBmp (input texture), create a new Bitmap outBmp (output texture).
Bitmap mInBmp = Bitmap.getBitmapResource("map.png");
int inHeight = mInBmp.getHeight();
int inWidth = mInBmp.getWidth();
int outHeight = 0;
int outWidth = 0;
int outDrawX = 0;
int outDrawY = 0;
Bitmap mOutBmp = null;

public Scr() {
    super();
    mOutBmp = getMode7YTransform();
    outWidth = mOutBmp.getWidth();
    outHeight = mOutBmp.getHeight();
    outDrawX = (Display.getWidth() - outWidth) / 2;
    outDrawY = Display.getHeight() - outHeight;
}
Somewhere in the code, create a Graphics outBmpGraphics for outBmp.
Then do the following in an iteration from the start y to (texture height) * (y transform factor):
1. create a Bitmap lineBmp = new Bitmap(width, 1) for one line
2. create a Graphics lineBmpGraphics from lineBmp
3. paint line i from the texture to lineBmpGraphics
4. encode lineBmp to an EncodedImage img
5. scale img according to MODE 7
6. paint img to outBmpGraphics
Note: Richard Puckett's PNGEncoder BB port is used in my code.
private Bitmap getMode7YTransform() {
    Bitmap outBmp = new Bitmap(inWidth, inHeight / 2);
    Graphics outBmpGraphics = new Graphics(outBmp);
    for (int i = 0; i < inHeight / 2; i++) {
        Bitmap lineBmp = new Bitmap(inWidth, 1);
        Graphics lineBmpGraphics = new Graphics(lineBmp);
        lineBmpGraphics.drawBitmap(0, 0, inWidth, 1, mInBmp, 0, 2 * i);
        PNGEncoder encoder = new PNGEncoder(lineBmp, true);
        byte[] data = null;
        try {
            data = encoder.encode(true);
        } catch (IOException e) {
            e.printStackTrace();
        }
        EncodedImage img = PNGEncodedImage.createEncodedImage(data, 0, -1);
        float xScaleFactor = ((float) (inHeight / 2 + i)) / (float) inHeight;
        img = scaleImage(img, xScaleFactor, 1);
        int startX = (inWidth - img.getScaledWidth()) / 2;
        int imgHeight = img.getScaledHeight();
        int imgWidth = img.getScaledWidth();
        outBmpGraphics.drawImage(startX, i, imgWidth, imgHeight, img, 0, 0, 0);
    }
    return outBmp;
}
Then just draw it in paint()
protected void paint(Graphics graphics) {
    graphics.drawBitmap(outDrawX, outDrawY, outWidth, outHeight, mOutBmp, 0, 0);
}
To scale, I do something similar to the method described in Resizing a Bitmap, using .scaleImage32 instead of .setScale:
private EncodedImage scaleImage(EncodedImage image, float ratioX, float ratioY) {
    int currentWidthFixed32 = Fixed32.toFP(image.getWidth());
    int currentHeightFixed32 = Fixed32.toFP(image.getHeight());
    double w = (double) image.getWidth() * ratioX;
    double h = (double) image.getHeight() * ratioY;
    int width = (int) w;
    int height = (int) h;
    int requiredWidthFixed32 = Fixed32.toFP(width);
    int requiredHeightFixed32 = Fixed32.toFP(height);
    int scaleXFixed32 = Fixed32.div(currentWidthFixed32, requiredWidthFixed32);
    int scaleYFixed32 = Fixed32.div(currentHeightFixed32, requiredHeightFixed32);
    EncodedImage result = image.scaleImage32(scaleXFixed32, scaleYFixed32);
    return result;
}
See also
J2ME Mode 7 Floor Renderer - something much more detailed & exciting if you're writing a 3D game!
You want to do texture mapping, and that function won't cut it. Maybe you can kludge your way around it but the better option is to use a texture mapping algorithm.
This involves, for each row of pixels, determining the edges of the shape and where on the shape those screen pixels map to (the texture pixels). It's not so hard actually but may take a bit of work. And you'll be drawing the pic only once.
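To make that concrete, here is a minimal C++ sketch of the per-row idea for the simple symmetric-trapezoid case (the "floor" shape in the question). putPixel and getTexel are hypothetical placeholders for whatever pixel access your platform provides, and the vertical texture coordinate is interpolated linearly here rather than perspective-correctly:
// Draw a texture onto a symmetric trapezoid, one scanline at a time.
// For each screen row: interpolate the left/right edge and the source row,
// then step across the row sampling the texture. The image is drawn only once.
// Assumes yBottom > yTop.
void drawTrapezoid(int screenW, int yTop, int yBottom,
                   int topHalfWidth, int bottomHalfWidth,
                   int texW, int texH)
{
    for (int y = yTop; y <= yBottom; ++y) {
        float t = float(y - yTop) / float(yBottom - yTop);   // 0 at the top row, 1 at the bottom
        int half = int(topHalfWidth + t * (bottomHalfWidth - topHalfWidth));
        int xL = screenW / 2 - half;
        int xR = screenW / 2 + half;
        int v  = int(t * (texH - 1));                        // source texture row
        if (xR <= xL) continue;                              // degenerate row
        for (int x = xL; x <= xR; ++x) {
            float s = float(x - xL) / float(xR - xL);        // 0..1 across the row
            int u = int(s * (texW - 1));                     // source texture column
            putPixel(x, y, getTexel(u, v));                  // hypothetical helpers
        }
    }
}
For true perspective you would interpolate v non-linearly across the rows (as in the Mode 7 formulas above), but the row-by-row structure stays the same.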
GameDev has a bunch of articles with sourcecode here:
http://www.gamedev.net/reference/list.asp?categoryid=40#212
Wikipedia also has a nice article:
http://en.wikipedia.org/wiki/Texture_mapping
Another site with 3d tutorials:
http://tfpsly.free.fr/Docs/TomHammersley/index.html
In your place I'd seek out a simple demo program that does something close to what you want and use its sources as a base to develop my own - or even find a portable source library; I'm sure there must be a few.
