What is the equation for the ray and its origin when using parallel projection, and how is it derived?
In traditional raytracing, you use a ray that starts at your eye point. For each pixel you calculate where it is on a virtual screen in front of the camera and shoot a ray through that pixel.
Let pO be the eye point, d the direction of the camera, r a vector pointing to the right and u a vector pointing up. Let w be the number of pixels in the screen horizontally and h the number of pixels vertically.
The parametric equation for a ray going through any pixel x, y is then:
ray = pO + t * normalize(d + (x - 0.5w)/0.5w * r + (y - 0.5h)/0.5h * u)
where t is the parameter.
For a parallel projection, move the virtual screen to the eye point, let the point on the screen at x, y become the origin of the ray, and use the same direction d for every ray:
ray = (pO + (x - 0.5w)/0.5w * r + (y - 0.5h)/0.5h * u) + t*d
For a perspective projection, you have an eye origin, direction, right and up vectors. You then run a vector from the eye origin to each pixel in a virtual screen by scaling the right and up vectors.
In a parallel projection, you do the same calculation for the point on the screen, but your origin becomes that point and you use the same direction for each ray.
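As a minimal sketch of both generators (in Python for brevity; pO, d, r, u, w, h follow the formulas above, and the small vector helpers are purely illustrative):

import math

def norm(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def perspective_ray(pO, d, r, u, x, y, w, h):
    # Origin is the eye point; direction goes through pixel (x, y).
    sx = (x - 0.5 * w) / (0.5 * w)  # horizontal screen offset in [-1, 1)
    sy = (y - 0.5 * h) / (0.5 * h)  # vertical screen offset in [-1, 1)
    return pO, norm(add(d, add(scale(r, sx), scale(u, sy))))

def parallel_ray(pO, d, r, u, x, y, w, h):
    # Origin slides across the screen plane; every ray shares direction d.
    sx = (x - 0.5 * w) / (0.5 * w)
    sy = (y - 0.5 * h) / (0.5 * h)
    return add(pO, add(scale(r, sx), scale(u, sy))), norm(d)

Note how the perspective rays share an origin and vary in direction, while the parallel rays share a direction and vary in origin.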
I have cone->p (the vertex of the cone), cone->orient (the axis vector), cone->k (the half-angle tangent), and cone->minm and cone->maxm (two height values, for the cone caps). I also have a point intersection that lies on the cone. How do I find the normal vector of the cone's side surface at the intersection point using only these parameters?
Came up with a simpler method:
Find the distance Dis from the intersection point I to the apex P.
Make a vector along the axis of length
D = Dis * sqrt(1+k^2)
and make the point on the axis at this distance:
A = P + Normalized(Orient) * D
Now
Normal = I - A
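A minimal sketch of these steps, assuming P is the apex, Orient the axis vector and k the half-angle tangent from the question (the helpers are illustrative):

import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def length(v):
    return math.sqrt(sum(x * x for x in v))

def cone_side_normal(P, Orient, k, I):
    Dis = length(sub(I, P))          # distance from the apex P to I
    D = Dis * math.sqrt(1 + k * k)   # distance along the axis to the foot A of the normal
    axis = tuple(x / length(Orient) for x in Orient)
    A = tuple(p + a * D for p, a in zip(P, axis))
    N = sub(I, A)                    # Normal = I - A
    return tuple(x / length(N) for x in N)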
Old answer:
Make the orthogonal projection of point I (the intersection) onto the cone axis, using the vector IP = I - P and the scalar (dot) product:
AxProj = P + Orient * dot(IP, Orient) / dot(Orient, Orient)
Vector from AxProj to I (perpendicular to the axis):
AxPerp = I - AxProj
Vector tangent to the cone surface, using the vector (cross) product:
T = IP x AxPerp
Vector normal to the cone surface:
N = T x IP
If I is the intersection point on the cone's surface and you know its coordinates, and P is the vertex of the cone, whose coordinates you also know, then this is enough:
Normal = (axis x PI) x PI
Normal = Normal / norm(Normal)
where axis is the vector aligned with the axis of the cone and PI = I - P.
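A minimal sketch of this one-liner, with PI = I - P as above (names are illustrative):

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def cone_normal(P, axis, I):
    PI = tuple(i - p for i, p in zip(I, P))  # PI = I - P
    N = cross(cross(axis, PI), PI)           # Normal = (axis x PI) x PI
    n = sum(x * x for x in N) ** 0.5
    return tuple(x / n for x in N)           # Normal / norm(Normal)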
Here are some examples of twisted triangle prisms.
I want to know if a moving triangle will hit a certain point; that's why I need to solve this problem.
The idea is that a triangle with random coordinates becomes another random triangle, with all of its vertices moving between the two positions.
related: How to determine point/time of intersection for ray hitting a moving triangle?
One of my students made this little animation in Mathematica.
It shows the twisting of a prism into the Schönhardt polyhedron.
See the Wikipedia page for its significance.
It would be easy to determine if a particular point is inside the polyhedron.
But whether it is inside a particular smooth twisting, as in your image, depends on the details (the rate) of the twisting.
Let the bottom triangle lie in the plane z = 0 with rotation angle 0, and let the top triangle have rotation angle Fi. The height of the twisted prism is Hgt.
The rotation angle depends linearly on height, so the layer at height h has rotation angle
a(h) = Fi * h / Hgt
If the point's coordinates are (x, y, z), shift the point to z = 0 and rotate its (x, y) coordinates about the rotation axis (rx, ry) by the angle -a(z):
t = -a(z) = - Fi * z / Hgt
xn = rx + (x-rx) * Cos(t) - (y-ry) * Sin(t)
yn = ry + (x-rx) * Sin(t) + (y-ry) * Cos(t)
Then check whether (xn, yn) lies inside the bottom triangle.
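A minimal sketch of the whole test, assuming the bottom triangle and the rotation center (rx, ry) are given; the sign-based point-in-triangle test is one common choice, not something the answer prescribes:

import math

def point_in_triangle(p, tri):
    # Same-side sign test; boundary points count as inside.
    (x0, y0), (x1, y1), (x2, y2) = tri
    px, py = p
    d0 = (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0)
    d1 = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
    d2 = (x0 - x2) * (py - y2) - (y0 - y2) * (px - x2)
    has_neg = d0 < 0 or d1 < 0 or d2 < 0
    has_pos = d0 > 0 or d1 > 0 or d2 > 0
    return not (has_neg and has_pos)

def point_in_twisted_prism(p, bottom_tri, rx, ry, Fi, Hgt):
    x, y, z = p
    if z < 0 or z > Hgt:
        return False
    t = -Fi * z / Hgt  # untwist by the layer's rotation angle
    xn = rx + (x - rx) * math.cos(t) - (y - ry) * math.sin(t)
    yn = ry + (x - rx) * math.sin(t) + (y - ry) * math.cos(t)
    return point_in_triangle((xn, yn), bottom_tri)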
I'm completely stuck with the camera in ray tracing. Please take a look at my calculations and point out where the error is. I'm using a left-handed coordinate system.
x, y // range [0..S) x [0..S) // pixel coordinates
Now, let's transform pixel coordinates to parametric coordinates of the camera plane:
xp = x/S * 2 - 1;
yp = y/S * 2 - 1;
xp, yp // range [-1..1] x [-1..1]
calculation of camera basis:
//eye - camera position
//up - camera up vector
//look_at - camera target point
vec3 w = normalize(look_at-eye);
vec3 u = cross(up,w);
vec3 v = cross(w,u);
so the ray direction should have the following coordinates:
vec3 dir = look_at - eye + xp*u + yp*v;
ray3 ray = {eye, normalize(dir)};
I think the mistake is here:
vec3 dir = look_at - eye + xp*u + yp*v;
The image plane should have normal vector w and lie either between the eye and the look-at point (the more common way in ray tracers) or behind the eye (which more closely models an actual pinhole camera). So let's create a scalar zoom_factor: a positive value puts the plane in front of the eye, and a negative one puts it behind the eye (and flips the image).
The center of the image plane is thus:
eye + zoom_factor*w
A point (xp, yp) on the image plane is thus:
eye + zoom_factor*w + xp*u + yp*v
Now you want the direction to be from the eye to this point on this image plane:
vec3 dir = eye + zoom_factor*w + xp*u + yp*v - eye;
The eyes cancel, so it simplifies to:
vec3 dir = zoom_factor*w + xp*u + yp*v
This assumes xp and yp are each in a range like (-0.5, 0.5). Note that (0, 0) is the middle of the image plane with this arrangement.
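Putting it together, a minimal sketch of the corrected ray setup, mirroring the vec3 snippets above in Python (normalizing u is an extra safeguard I've added for the case where up is not perpendicular to w):

import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def camera_ray(eye, look_at, up, zoom_factor, xp, yp):
    # Left-handed basis as in the question: w forward, u right, v up.
    w = normalize(tuple(b - a for a, b in zip(eye, look_at)))
    u = normalize(cross(up, w))  # normalized to guard against a tilted up vector
    v = cross(w, u)
    # The corrected direction: dir = zoom_factor*w + xp*u + yp*v
    d = tuple(zoom_factor * wc + xp * uc + yp * vc for wc, uc, vc in zip(w, u, v))
    return eye, normalize(d)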
If I have a point (x, y, z), how do I project it onto the surface of a sphere (x0, y0, z0, radius)?
My input is the coordinates of the point and of the sphere.
The output should be the coordinates of the projected point on the sphere.
Just convert from Cartesian to spherical coordinates?
For the simplest projection (along the line connecting the point to the center of the sphere):
Write the point in a coordinate system centered at the center of the sphere (x0,y0,z0):
P = (x',y',z') = (x - x0, y - y0, z - z0)
Compute the length of this vector:
|P| = sqrt(x'^2 + y'^2 + z'^2)
Scale the vector so that it has length equal to the radius of the sphere:
Q = (radius/|P|)*P
And change back to your original coordinate system to get the projection:
R = Q + (x0,y0,z0)
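A minimal sketch of these four steps (it is the same closed form as the line-intersection answer below):

import math

def project_onto_sphere(p, center, radius):
    # 1-2. shift to sphere-centered coordinates and measure |P| (assumes p != center)
    d = tuple(pi - ci for pi, ci in zip(p, center))
    n = math.sqrt(sum(di * di for di in d))
    # 3-4. rescale to the radius and shift back to the original coordinate system
    return tuple(ci + (radius / n) * di for ci, di in zip(center, d))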
Basically you want to construct a line going through the sphere's centre and the point. Then you intersect this line with the sphere and you have your projection point.
In greater detail:
Let p be the point, s the sphere's centre and r the radius; then x = s + r*(p-s)/norm(p-s), where x is the point you are looking for. The implementation is left to you.
I agree that the spherical-coordinate approach will work as well, but it is computationally more demanding. In the formula above, the only non-trivial operation is the square root for the norm.
It works if you set the center of the sphere (x0, y0, z0) as the origin of the system. Then you have the coordinates of the point relative to that origin (Xp', Yp', Zp'); converting those coordinates to polar form, you discard the radius (the distance between the center of the sphere and the point), and the angles define the projection.
I am rendering textured quads from an orthographic perspective and would like to simulate 'depth' by modifying UVs and the vertex positions of the quads four points (top left, top right, bottom left, bottom right).
I've found that if I give the top-left and bottom-right corners the same y position, I don't get a linear 'skew' but rather a warped one, where the texture covering the top triangle (which makes up the quad) seems to get squashed while the bottom triangle's texture looks normal.
I can change UVs, any of the four points on the quad (but only in 2D space, it's orthographic projection anyway so 3D space won't matter much). So basically I'm trying to simulate perspective on a two dimensional quad in orthographic projection, any ideas? Is it even mathematically possible/feasible?
Ideally what I'd like is a situation where I can set an x/y rotation as well as a virtual z 'position' (which simulates z depth) through a function and have it internally calculate the positions/UVs to create the 3D effect. It seems like this should all be mathematical, where a set of 2D transforms is applied to each corner of the quad to simulate depth; I just don't know how to make it happen. I'd guess it requires trigonometry or something; I'm trying to crunch the math but not making much progress.
here's what I mean:
Top left is just the card, center is the card with a y rotation of X degrees, and rightmost is a card with x and y rotations of different degrees.
To compute the 2D coordinates of the corners, just choose the coordinates in 3D and apply the 3D perspective equations:
Original card corner (x,y,z)
Apply a rotation (by matrix multiplication); you get (x', y', z')
Apply a perspective projection ( choose some camera origin, direction and field of view )
For the simplest case it's:
x'' = x' / z'
y'' = y' / z'
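A minimal sketch of these two steps for one corner, assuming a rotation about the y axis, a camera at the origin looking down +z, and an arbitrary focal length f and z offset (all illustrative choices, not fixed by the answer):

import math

def project_corner(corner, angle_y, f=1.0, z_offset=4.0):
    # 1. rotate the 3D corner about the y axis by angle_y
    x, y, z = corner
    c, s = math.cos(angle_y), math.sin(angle_y)
    xr, yr, zr = x * c + z * s, y, -x * s + z * c
    # 2. push the card in front of the camera, then perspective-divide
    zr += z_offset
    return (f * xr / zr, f * yr / zr)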
The bigger problem now is texturing: getting the texture coordinates from the pixel coordinates.
The correct way for you is to use a homographic transformation of the form:
U(x,y) = ( ax + cy + e ) / (gx + hy + 1)
V(x,y) = ( bx + dy + f ) / (gx + hy + 1)
which in fact is the result of the perspective equations applied to a plane.
a, b, c, d, e, f, g, h are computed so that (with U, V in [0..1]):
(U,V) at the projected top-left corner = (0,0)
(U,V) at the projected top-right corner = (0,1)
(U,V) at the projected bottom-left corner = (1,0)
(U,V) at the projected bottom-right corner = (1,1)
But your 2D rendering framework probably uses a bilinear interpolation instead:
U( x , y ) = a + b * x + c * y + d * ( x * y )
V( x , y ) = e + f * x + g * y + h * ( x * y )
In that case you get a bad-looking result.
And it is even worse if the renderer splits the quad into two triangles!
So I see only two options :
use a 3D renderer
compute the texturing yourself, if you only need a few images rather than a real-time animation (a sketch of this follows below).
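For the second option, a minimal sketch of computing a..h from the four corner correspondences and evaluating (U, V) at a pixel, using numpy to solve the resulting 8x8 linear system (the constraint layout follows the equations above; function names are illustrative):

import numpy as np

def homography_coeffs(corners_2d, corners_uv):
    # corners_2d: the four projected corner positions (x'', y'')
    # corners_uv: their target (U, V) values, e.g. (0,0), (0,1), (1,0), (1,1)
    # Solves for a..h in U = (a*x + c*y + e) / (g*x + h*y + 1),
    #                    V = (b*x + d*y + f) / (g*x + h*y + 1).
    A, rhs = [], []
    for (x, y), (U, V) in zip(corners_2d, corners_uv):
        A.append([x, 0, y, 0, 1, 0, -U * x, -U * y]); rhs.append(U)
        A.append([0, x, 0, y, 0, 1, -V * x, -V * y]); rhs.append(V)
    return np.linalg.solve(np.array(A, float), np.array(rhs, float))

def uv_at(coeffs, x, y):
    a, b, c, d, e, f, g, h = coeffs
    w = g * x + h * y + 1.0
    return ((a * x + c * y + e) / w, (b * x + d * y + f) / w)

With the coefficients in hand, you would loop over the pixels covered by the projected quad and sample the texture at uv_at(coeffs, x, y).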