Lack of perspective/distance in 3d projection. What am I doing wrong?

transform3Dpoint2D(var/px, var/py, var/pz)
    // perform the rotations around each axis
    // rotation around x
    var/xy = cx*py - sx*pz
    var/xz = sx*py + cx*pz
    // rotation around y
    var/yz = cy*xz - sy*px
    var/yx = sy*xz + cy*px
    // rotation around z
    var/zx = cz*yx - sz*xy
    var/zy = sz*yx + cz*xy
    // return values: x, y, and how close the point is (for sorting)
    var/scaleRatio = 300/(300 + yz)
    return list(zx*scaleRatio, zy*scaleRatio, yz)
The projection variables are set up like this (cameraxr, camerayr, and camerazr are pitch, yaw, and roll, respectively):
//Setup projection variables
sx = sin(cameraxr)
cx = cos(cameraxr)
sy = sin(camerayr)
cy = cos(camerayr)
sz = sin(camerazr)
cz = cos(camerazr)
When I use this code to project 3D coordinates into 2D, it works, but there seems to be no perspective at all: objects farther back should be smaller, and objects in front should be bigger. The top should also become visible when the object is moved down. What am I missing? It looks like this when used:
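For reference, the usual way to get perspective is to scale x and y by a factor that shrinks with depth, which is what the scaleRatio above attempts. Here is a minimal pinhole-style sketch in TypeScript (the names are made up for illustration, and it assumes the point is already rotated into camera space, with z as the depth from the camera; focal plays the same role as the 300 above):

// Minimal perspective divide: x and y shrink as depth grows.
function project(x: number, y: number, z: number, focal: number) {
  const scale = focal / (focal + z); // points farther away get a smaller scale
  return { sx: x * scale, sy: y * scale, depth: z };
}

// Example: the same x/y shrink as depth grows.
console.log(project(100, 50, 0, 300));   // { sx: 100, sy: 50, depth: 0 }
console.log(project(100, 50, 300, 300)); // { sx: 50, sy: 25, depth: 300 }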

Related

Find intersection point ray/triangle in a right-hand coordinate system

I would like to get the intersection point of a line (defined by a vector and origin) on a triangle.
My engine uses a right-handed coordinate system, with X pointing forward, Y pointing left, and Z pointing up.
---- Edit ----
With Antares's help, I convert my points to engine space with:
p0.x = -pt0.y;
p0.y = pt0.z;
p0.z = pt0.x;
But I don't know how to do the same with the direction vector.
I use the function from this stackoverflow question; the original poster uses this tutorial.
First we look for the distance t from origin to intersection point, in order to find its coordinates.
But I get a negative t, and the code returns true when the ray is outside the triangle (I placed it outside visually).
It also sometimes returns false when the ray is inside the triangle.
Here is the function I use to get the intersection point. I have already checked that it works with 'classic' values, as in the original post.
const float kEpsilon = 0.000001;

V3f crossProduct(V3f point1, V3f point2){
    V3f vector;
    vector.x = point1.y * point2.z - point2.y * point1.z;
    vector.y = point2.x * point1.z - point1.x * point2.z;
    vector.z = point1.x * point2.y - point1.y * point2.x;
    return vector;
}

float dotProduct(V3f dot1, V3f dot2){
    float dot = dot1.x * dot2.x + dot1.y * dot2.y + dot1.z * dot2.z;
    return dot;
}

// orig: ray origin, dir: ray direction, triangle vertices: p0, p1, p2.
bool rayTriangleIntersect(V3f orig, V3f dir, V3f p0, V3f p1, V3f p2){
    // compute the plane's normal
    V3f p0p1, p0p2;
    p0p1.x = p1.x - p0.x;
    p0p1.y = p1.y - p0.y;
    p0p1.z = p1.z - p0.z;
    p0p2.x = p2.x - p0.x;
    p0p2.y = p2.y - p0.y;
    p0p2.z = p2.z - p0.z;
    // no need to normalize
    V3f N = crossProduct(p0p1, p0p2); // N

    // Step 1: finding P
    // check if the ray and plane are parallel
    float NdotRayDirection = dotProduct(N, dir); // if the result is 0, the function will return false (no intersection)
    if (fabs(NdotRayDirection) < kEpsilon){ // almost 0
        return false; // they are parallel so they don't intersect
    }
    // compute the d parameter using equation 2
    float d = dotProduct(N, p0);
    // compute t (equation P = O + tR, with P the intersection point, O the ray origin and R its direction)
    float t = -((dotProduct(N, orig) - d) / NdotRayDirection);
    // check if the triangle is behind the ray
    //if (t < 0){ return false; } // the triangle is behind
    // compute the intersection point using the equation
    V3f P;
    P.x = orig.x + t * dir.x;
    P.y = orig.y + t * dir.y;
    P.z = orig.z + t * dir.z;

    // Step 2: inside-outside test
    V3f C; // vector perpendicular to the triangle's plane
    // edge 0
    V3f edge0;
    edge0.x = p1.x - p0.x;
    edge0.y = p1.y - p0.y;
    edge0.z = p1.z - p0.z;
    V3f vp0;
    vp0.x = P.x - p0.x;
    vp0.y = P.y - p0.y;
    vp0.z = P.z - p0.z;
    C = crossProduct(edge0, vp0);
    if (dotProduct(N, C) < 0) { return false; } // P is on the right side
    // edge 1
    V3f edge1;
    edge1.x = p2.x - p1.x;
    edge1.y = p2.y - p1.y;
    edge1.z = p2.z - p1.z;
    V3f vp1;
    vp1.x = P.x - p1.x;
    vp1.y = P.y - p1.y;
    vp1.z = P.z - p1.z;
    C = crossProduct(edge1, vp1);
    if (dotProduct(N, C) < 0) { return false; } // P is on the right side
    // edge 2
    V3f edge2;
    edge2.x = p0.x - p2.x;
    edge2.y = p0.y - p2.y;
    edge2.z = p0.z - p2.z;
    V3f vp2;
    vp2.x = P.x - p2.x;
    vp2.y = P.y - p2.y;
    vp2.z = P.z - p2.z;
    C = crossProduct(edge2, vp2);
    if (dotProduct(N, C) < 0) { return false; } // P is on the right side
    return true; // this ray hits the triangle
}
My problem is that I get t: -52.603783
intersection point P: [-1143.477295, -1053.412842, 49.525799]
This gives me, relative to a 640x480 texture, the UV point: [-658, 41].
Probably because my engine uses Z pointing up?
My engine uses a right-handed coordinate system, with X pointing forward, Y pointing left, and Z pointing up.
You have a slightly incorrect idea of a right-handed coordinate system... please check https://en.wikipedia.org/wiki/Cartesian_coordinate_system#In_three_dimensions.
As the name suggests, X is pointing right (the right hand's thumb to the right), Y is pointing up (the straight index finger), and Z (the straight middle finger) is pointing "forward" (actually -Z is forward, and Z is backward in the camera coordinate system).
Actually... your coordinate components are right-handed, but interpreting X as forward etc. is unusual.
If you suspect the problem could be with the coordinate system of your engine (OGRE maybe? plain OpenGL? Or something self-made?), then you need to transform your point and direction coordinates into the coordinate system of your algorithm. The algorithm you presented works in the camera coordinate system, if I am not mistaken. Of course, you then need to transform the resulting intersection point back to the interpretation you use in the engine.
To flip the direction of a single vector component (e.g. the Z coordinate), you can multiply it by -1.
Edit:
One more thing: I realized that the algorithm uses directional vectors as well, not just points. The rearranging of components only works for points, not directions, if I recall correctly. You may have to do a matrix multiplication with the camera-view transformation matrix (or its inverse M^-1, or was it the transpose M^T? I am not sure). I can't help you there; I hope you can figure it out or just use trial and error.
My problem is that I get t: -52.603783
intersection point P: [-1143.477295, -1053.412842, 49.525799] This gives me, relative to a 640x480 texture, the UV point: [-658, 41]
I reckon you think your values are incorrect. Which values do you expect to get for t and the UV coordinates? Which ones would be "correct" for your input?
Hope this gets you started. GL, HF with your project! :)
#GUNNM: Concerning your feedback that you do not know how to handle the direction vector, here are some ideas that might be useful to you.
As I said, there should be a matrix multiplication way. Look for key words like "transforming directional vector with a matrix" or "transforming normals (normal vectors) with a matrix". This should yield something like: "use the transpose of the used transformation matrix" or "the inverse of the matrix" or something like that.
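As a hedged sketch of that idea (plain TypeScript with made-up names, assuming the transform is a rotation plus a translation): points take the full transform, while directions only take the rotational part, since translation has no meaning for a direction.

type V3 = { x: number; y: number; z: number };
type Mat3 = number[]; // 9 entries, row-major rotation matrix

function mulMat3(m: Mat3, v: V3): V3 {
  return {
    x: m[0] * v.x + m[1] * v.y + m[2] * v.z,
    y: m[3] * v.x + m[4] * v.y + m[5] * v.z,
    z: m[6] * v.x + m[7] * v.y + m[8] * v.z,
  };
}

// Points: rotate, then translate.
function transformPoint(rot: Mat3, trans: V3, p: V3): V3 {
  const r = mulMat3(rot, p);
  return { x: r.x + trans.x, y: r.y + trans.y, z: r.z + trans.z };
}

// Directions: rotate only; dropping the translation is the whole difference.
function transformDirection(rot: Mat3, d: V3): V3 {
  return mulMat3(rot, d);
}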
A workaround could be: you can "convert" a directional vector to a point by thinking of a direction as "two points" forming a vector: a starting point, and a second point which lies in the direction you want to point.
You already have the starting point of your ray. Now you need to make sure that your directional vector is interpreted as a "second point", not as a directional vector.
If your engine handles a ray as in the first case, you would have:
here is my starting point (0,0,0) and here is my directional vector (5,6,-7) (I made those numbers up and took the origin as the starting point to keep the example simple). So this is just the usual "start + gaze direction" case.
In the second case you would have:
here is my start at (0,0,0), and my second point is a point on my directional vector (5,6,-7), e.g. any t*direction. For t=1 this is exactly the point the directional vector points to when it is considered a vector (with the start point being the origin (0,0,0)).
Now you need to check how your algorithm handles that direction. If it does ray = startpoint + direction somewhere, then it interprets it as point + vector, shifting the starting point while keeping the orientation and direction of the vector.
If it does ray = startpoint - direction, then it interprets it as two points from which a directional vector is formed by subtraction.
To make a directional vector from two points, you usually just subtract them. This gives a "pure direction", though, without a defined orientation (which can be +t or -t). So if you need this direction to be fixed, you may, for example, take the absolute value of your "vector sliding value" t in later computations (which may not be the best/fastest way of doing it).
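A sketch of that workaround in TypeScript; toAlgoSpace is the point conversion from the question, and the other helpers are made up for illustration:

type V3 = { x: number; y: number; z: number };
const add = (a: V3, b: V3): V3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const sub = (a: V3, b: V3): V3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
// engine (X forward, Y left, Z up) -> algorithm space, as in the question
const toAlgoSpace = (p: V3): V3 => ({ x: -p.y, y: p.z, z: p.x });

function convertRay(orig: V3, dir: V3): { orig: V3; dir: V3 } {
  const second = add(orig, dir);      // a second point lying on the ray
  const o = toAlgoSpace(orig);        // convert both as points
  const s = toAlgoSpace(second);
  return { orig: o, dir: sub(s, o) }; // re-derive the direction by subtracting
}

Note that because this particular conversion is a pure axis permutation with a sign flip (no translation), converting the direction's components directly would give the same result; the two-point route matters when the transform also translates.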

Fill a closed path in easeljs

Is there a way to fill a closed drawn path in easeljs? I have a long string of mt(x_t,y_t).lt(x_(t+1),y_(t+1)) calls that draws a wacky shape. The shape closes off, but I can't find a way to actually fill in the closed area. Any ideas?
T is the number of coordinates to connect, [round.X, round.Y] is the Tx2 array of coordinate pairs, and ghf is the graphics object. xline.y is just the lowest y value.
for(var i = 0; i < T; i++){
    x0 = round.X[i];
    y0 = round.Y[i];
    // scale for drawing
    px0 = Math.round(xscale * x0);
    py0 = Math.round(yscale * y0) + xline.y;
    if(x0 > gp.xmin){ // if not the first point ...
        ghf.mt(prevx, prevy).lt(px0, py0); // draw line from the previous point to this point
    }
    // set this point as the previous point
    prevx = px0;
    prevy = py0;
}
// close off the shape and fill
ghf.mt(prevx, prevy).lt(px0, xline.y);
ghf.mt(px0, xline.y).lt(0, xline.y);
x0 = round.X[0];
y0 = round.Y[0];
px0 = Math.round(xscale * x0);
py0 = Math.round(yscale * y0) + xline.y;
ghf.mt(0, xline.y).lt(px0, py0);
ghf.f('red');
Your code is not very helpful, but I think what you need is the beginFill method. See link.
You can use it like this:
var ball = new createjs.Shape();
ball.graphics.setStrokeStyle(5, 'round', 'round');
ball.graphics.beginStroke(('#000000'));
ball.graphics.beginFill("#FF0000").drawCircle(0,0,50);
ball.graphics.endStroke();
ball.graphics.endFill();
ball.graphics.setStrokeStyle(1, 'round', 'round');
ball.graphics.beginStroke(('#000000'));
ball.graphics.moveTo(0,0);
ball.graphics.lineTo(0,50);
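Applied to the original loop, the likely catch is that every mt(...).lt(...) pair starts a new two-point subpath, which encloses no area, so there is nothing to fill. Begin the fill first and draw one continuous path. A sketch in TypeScript, assuming the EaselJS library is loaded (the pts array is made up):

declare const createjs: any; // assumes EaselJS is loaded on the page
const pts = [{ x: 10, y: 10 }, { x: 120, y: 40 }, { x: 80, y: 130 }]; // made-up shape
const shape = new createjs.Shape();
const g = shape.graphics;
g.beginFill("red").moveTo(pts[0].x, pts[0].y); // start the fill before the path commands
for (let i = 1; i < pts.length; i++) {
  g.lineTo(pts[i].x, pts[i].y); // no moveTo between segments: keep one subpath
}
g.closePath().endFill(); // close the outline and commit the fill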

Get coordinates of svg group on drag with snap.svg

I'm brand new to svg and thought I would try out snap svg. I have a group of circles that I am dragging around, and am looking to get the coordinates of the group. I am using getBBox() to do this, but it isn't working as I would expect. I would expect getBBox() to update its x and y coordinates but it does not seem to do that. It seems simple but I think I am missing something. Here's the code
var lx = 0,
    ly = 0,
    ox = 0,
    oy = 0;

moveFnc = function(dx, dy, x, y) {
    var thisBox = this.getBBox();
    console.log(thisBox.x, thisBox.y, thisBox);
    lx = dx + ox;
    ly = dy + oy;
    this.transform('t' + lx + ',' + ly);
};

startFnc = function(x, y, e) { };

endFnc = function() {
    ox = lx;
    oy = ly;
    console.log(this.getBBox());
};

var s = Snap("#svg");
var tgroup = s.group();
tgroup.add(s.circle(100, 150, 70), s.circle(200, 150, 70));
tgroup.drag(moveFnc, startFnc, endFnc);
The jsfiddle is at http://jsfiddle.net/STpGe/2/
What am I missing? How would I get the coordinates of the group? Thanks.
As Robert says, it won't change. However, getBoundingClientRect may help.
this.node.getBoundingClientRect(); //from Raphael
Here is a jsfiddle showing the difference: http://jsfiddle.net/STpGe/3/.
Edit: Actually, I'd be tempted to go here first; I found this very useful: Get bounding box of element accounting for its transform.
Per the SVG specification, getBBox returns the bounding box in the coordinate system established by the element's own transform attribute, so within that coordinate system the position never changes.
Imagine you drew the shape on graph paper. Setting a transform moves the whole sheet of graph paper, but when you look at the shape's position on the graph paper, it hasn't changed; it's the graph paper that moved, and you're not measuring that.
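If you need the moved position, one option (a sketch; Snap's Matrix object exposes x() and y() helpers for mapping a point) is to push the local bbox origin through the element's current matrix:

declare const tgroup: any; // the dragged Snap group from the question
const bbox = tgroup.getBBox();            // box in local (untransformed) space
const m = tgroup.transform().localMatrix; // the element's current matrix
const movedX = m.x(bbox.x, bbox.y);       // map the box origin through it
const movedY = m.y(bbox.x, bbox.y);
console.log(movedX, movedY);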
Try using the group's matrix object to get the x and y coordinates of the group object.
moveFnc = function(dx, dy, x, y, event) {
    lx = this.matrix.e;
    ly = this.matrix.f;
    this.transform('translate(' + lx + ',' + ly + ')');
};

Running cv::warpPerspective on points

I'm running the cv::warpPerspective() function on an image, and I want to find the position in the source image of some points of the result image. Here is how far I've come:
int main(){
    cv::Point2f srcQuad[4], dstQuad[4];
    cv::Mat warpMatrix;
    cv::Mat src, dst, src2;
    src = cv::imread("card.jpg", 1);

    srcQuad[0].x = 0; // src top left
    srcQuad[0].y = 0;
    srcQuad[1].x = src.cols - 1; // src top right
    srcQuad[1].y = 0;
    srcQuad[2].x = 0; // src bottom left
    srcQuad[2].y = src.rows - 1;
    srcQuad[3].x = src.cols - 1; // src bottom right
    srcQuad[3].y = src.rows - 1;

    dstQuad[0].x = src.cols * 0.05; // dst top left
    dstQuad[0].y = src.rows * 0.33;
    dstQuad[1].x = src.cols * 0.9; // dst top right
    dstQuad[1].y = src.rows * 0.25;
    dstQuad[2].x = src.cols * 0.2; // dst bottom left
    dstQuad[2].y = src.rows * 0.7;
    dstQuad[3].x = src.cols * 0.8; // dst bottom right
    dstQuad[3].y = src.rows * 0.9;

    warpMatrix = cv::getPerspectiveTransform(srcQuad, dstQuad);
    cv::warpPerspective(src, dst, warpMatrix, src.size());
    cv::imshow("source", src);
    cv::imshow("destination", dst);

    cv::warpPerspective(dst, src2, warpMatrix, dst.size(), CV_WARP_INVERSE_MAP);
    cv::imshow("source 2", src2);
    cv::waitKey();
    return 0;
}
My problem is: if I select a point from dst, how can I get its coordinates in src or src2, since the cv::warpPerspective function doesn't take a cv::Point as parameter?
You want perspectiveTransform (which works on a vector of Points) rather than warpPerspective.
Take the inverse of warpMatrix; you may have to tweak the final column.
vector<Point2f> dstPoints, srcPoints;
dstPoints.push_back(Point2f(1,1));
cv::perspectiveTransform(dstPoints,srcPoints,warpMatrix.inv());
A perspective transform relates two points in the following manner:

[w*x']   [m00 m01 m02]   [x]
[w*y'] = [m10 m11 m12] * [y]
[ w  ]   [m20 m21 m22]   [1]

Where (x, y) are the original 2D point coordinates and (x', y') are the transformed coordinates, obtained after dividing by the homogeneous coordinate w.
In your case, you know (x', y') and want to know (x, y). This can be achieved by multiplying the known point by the inverse of the transformation matrix:
cv::Matx33f warp = warpMatrix; // cv::Matx is much more useful for math
cv::Point2f warped_point = dstQuad[3]; // I just use dstQuad as an example
cv::Point3f homogeneous = warp.inv() * cv::Point3f(warped_point.x, warped_point.y, 1.f);
cv::Point2f result(homogeneous.x / homogeneous.z, homogeneous.y / homogeneous.z); // divide by w to get out of homogeneous coordinates
// now, result == srcQuad[3], which is what you wanted

Need Algorithm for Tie Dye Pattern

I am looking for an algorithm or help developing one for creating a tie-dye pattern in a 2-dimensional canvas. I will be using HTML Canvas (via fabric.js) or SVG and JavaScript, but I'm open to examples in any 2D graphics package, like Processing.
I would draw concentric rings of different colors, and then go around radially and offset them. Here's some pseudo-code for drawing concentric rings:
const kRingWidth = 10;
const centerX = maxX / 2;
const centerY = maxY / 2;
for (y = 0; y < maxY; y++)
{
    for (x = 0; x < maxX; x++)
    {
        // Get the color of a concentric ring - assume rings are 10 pixels wide
        deltaX = x - centerX;
        deltaY = y - centerY;
        distance = sqrt(deltaX * deltaX + deltaY * deltaY);
        whichRing = int(distance / kRingWidth);
        setPixel(x, y, myColorTable[whichRing]); // set the pixel based on a color look-up table
    }
}
Now, to get the offsets, you can perturb the distance based on the angle of (x, y) to the x axis. I'd generate a random noise table with, say 360 entries (one per degree - you could try more or fewer to see how it looks). So after calculating the distance, try something like this:
angle = atan2(deltaY, deltaX); // angle around the ring center; atan2 handles deltaX == 0 safely, unlike atan(y/x)
if (angle < 0) angle += 2.0 * PI; // make it always positive
angle = int(angle * 180 / PI); // convert from radians to degrees and to an integer
distance += noiseTable[angle]; // every pixel at this angle gets offset by the same amount
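Putting the two pieces together, here is a hedged TypeScript/HTML Canvas sketch of the same idea; the ring width, color table, and noise amplitude are arbitrary choices:

// Usage (assuming a 400x400 <canvas id="c"> on the page):
//   const ctx = (document.getElementById("c") as HTMLCanvasElement).getContext("2d")!;
//   drawTieDye(ctx, 400, 400);
const kRingWidth = 10;
const colors = ["#e63946", "#f4a261", "#e9c46a", "#2a9d8f", "#264653"];

function drawTieDye(ctx: CanvasRenderingContext2D, maxX: number, maxY: number): void {
  const centerX = maxX / 2;
  const centerY = maxY / 2;
  // One random radial offset per degree, up to half a ring width.
  const noiseTable = Array.from({ length: 360 }, () => Math.random() * kRingWidth * 0.5);
  for (let y = 0; y < maxY; y++) {
    for (let x = 0; x < maxX; x++) {
      const deltaX = x - centerX;
      const deltaY = y - centerY;
      let distance = Math.sqrt(deltaX * deltaX + deltaY * deltaY);
      let angle = Math.atan2(deltaY, deltaX); // angle around the ring center
      if (angle < 0) angle += 2 * Math.PI;    // keep it in [0, 2*pi)
      distance += noiseTable[Math.floor(angle * 180 / Math.PI) % 360];
      const whichRing = Math.floor(distance / kRingWidth);
      ctx.fillStyle = colors[whichRing % colors.length];
      ctx.fillRect(x, y, 1, 1); // stand-in for setPixel
    }
  }
}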
