Perspective projection can be described either by the distance from the viewing plane and an angle (FOV), or by the distance from the image plane and the extents of the image plane (left, right, top, bottom). My question is: given the extents of the viewing (image) plane and the distance from the image plane, how do I calculate the corresponding FOV?
The OpenGL FAQ has a section (9.085) that describes this:
fov*0.5 = arctan ((top-bottom)*0.5 / near)
or
fov = 2.0 * arctan ((top-bottom)*0.5 / near)
Note that the result will be in radians rather than degrees.
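As a quick sanity check, here is a minimal C++ sketch of that conversion (the function name is mine, and the extents are assumed symmetric about the view axis):

#include <cmath>

// Vertical FOV (radians) from image-plane extents at distance 'nearDist'.
double fovFromExtents(double top, double bottom, double nearDist)
{
    return 2.0 * std::atan((top - bottom) * 0.5 / nearDist);
}
// Multiply the result by 180/pi if you need degrees.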
I have a 4x4 camera matrix comprised of right, up, forward and position vectors.
I raytrace the scene with the following code, which I found in a tutorial but don't entirely understand:
for (int i = 0; i < m_imageSize.width; ++i)
{
    for (int j = 0; j < m_imageSize.height; ++j)
    {
        u = (i + .5f) / (float)(m_imageSize.width - 1) - .5f;
        v = (m_imageSize.height - 1 - j + .5f) / (float)(m_imageSize.height - 1) - .5f;
        Ray ray(cameraPosition, normalize(u * cameraRight + v * cameraUp + 1 / tanf(m_verticalFovAngleRadian) * cameraForward));
I have a couple of questions:
How can I find the focal length of my raytracing camera?
Where is my image plane?
Why does cameraForward need to be multiplied by 1/tanf(m_verticalFovAngleRadian)?
Focal length is a property of lens systems. The camera model that this code uses, however, is a pinhole camera, which does not use lenses at all. So, strictly speaking, the camera does not really have a focal length. The corresponding optical properties are instead expressed as the field of view (the angle that the camera can observe; usually the vertical one). You could calculate the focal length of a camera that has an equivalent field of view with the following formula (see Wikipedia):
FOV = 2 * arctan(x / (2f))
FOV ... diagonal field of view
x   ... diagonal of the film; by convention 24x36 mm -> x = 43.266 mm
f   ... focal length
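For illustration, solving that formula for f gives the 35 mm-equivalent focal length; a minimal C++ sketch (the function name and the hard-coded film diagonal are mine):

#include <cmath>

// 35 mm-equivalent focal length (mm) for a given diagonal FOV (radians).
double equivalentFocalLength(double fovDiagonal)
{
    const double filmDiagonal = 43.266;   // diagonal of 24x36 mm film, in mm
    return filmDiagonal / (2.0 * std::tan(fovDiagonal / 2.0));
}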
There is no unique image plane. Any plane that is perpendicular to the view direction can be seen as the image plane. In fact, the projected images differ only in their scale.
For your last question, let's take a closer look at the code:
u = (i + .5f) / (float)(m_imageSize.width - 1) - .5f;
v = (m_imageSize.height - 1 - j + .5f) / (float)(m_imageSize.height - 1) - .5f;
These formulas calculate u/v coordinates between -0.5 and 0.5 for every pixel, assuming that the entire image fits in the box between -0.5 and 0.5.
u*cameraRight + v*cameraUp
... is just placing the x/y coordinates of the ray on the pixel.
... + 1 / tanf(m_verticalFovAngleRadian) *cameraForward
... is defining the depth component of the ray and ultimately the depth of the image plane you are using. Basically, this is making the ray steeper or shallower. Assume that you have a very small field of view, then 1/tan(fov) is a very large number. So, the image plane is very far away, which produces exactly this small field of view (when keeping the size of the image plane constant since you already set the x/y components). On the other hand, if the field of view is large, the image plane moves closer. Note that this notion of image plane is only conceptual. As I said, all other image planes are equally valid and would produce the same image. Another way (and maybe a more intuitive one) to specify the ray would be
Ray ray(cameraPosition, normalize(u * tanf(m_verticalFovAngleRadian) * cameraRight
                                + v * tanf(m_verticalFovAngleRadian) * cameraUp
                                + 1 * cameraForward));
As you can see, this is exactly the same ray (just scaled). The idea here is to set the conceptual image plane to a depth of 1 and scale the x/y components to adapt the size of the image plane. tan(fov) (with fov being half the field of view) is exactly the half-height of the image plane at a depth of 1. Just draw a triangle to verify that. Note that this code can only produce square image planes. If you want to allow rectangular ones, you need to take the ratio of the side lengths into account.
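For example, a rectangular image plane could be handled by scaling the horizontal component with the aspect ratio; a sketch building on the variant above (aspect is my own name, defined as width over height):

float aspect = (float)m_imageSize.width / (float)m_imageSize.height;
Ray ray(cameraPosition, normalize(u * aspect * tanf(m_verticalFovAngleRadian) * cameraRight
                                + v * tanf(m_verticalFovAngleRadian) * cameraUp
                                + cameraForward));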
Here are some examples of twisted triangle prisms.
I want to know if a moving triangle will hit a certain point. That's why I need to solve this problem.
The idea is that a triangle with random coordinates becomes another random triangle, with all of its vertices moving between the two.
related: How to determine point/time of intersection for ray hitting a moving triangle?
One of my students made this little animation in Mathematica.
It shows the twisting of a prism into the Schönhardt polyhedron.
See the Wikipedia page for its significance.
It would be easy to determine if a particular point is inside the polyhedron.
But whether it is inside a particular smooth twisting, as in your image, depends on the details (the rate) of the twisting.
Let the bottom triangle lie in the plane z = 0 with rotation angle 0, and let the top triangle have rotation angle Fi. The height of the twisted prism is Hgt.
The rotation angle depends linearly on height, so the layer at height h has rotation angle
a(h) = Fi * h / Hgt
If the point coordinates are (x, y, z), then shift the point to z = 0 and rotate its (x, y) coordinates about the rotation axis (rx, ry) by the angle -a(z):
t = -a(z) = - Fi * z / Hgt
xn = rx + (x-rx) * Cos(t) - (y-ry) * Sin(t)
yn = ry + (x-rx) * Sin(t) + (y-ry) * Cos(t)
Then check whether (xn, yn) lies inside the bottom triangle.
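A minimal C++ sketch of the whole test (all names are mine; the point-in-triangle helper uses the usual same-side-of-all-edges method):

#include <cmath>

struct Vec2 { double x, y; };

// Sign of the cross product (p - a) x (b - a): which side of edge a->b the point p is on.
static double side(Vec2 p, Vec2 a, Vec2 b)
{
    return (p.x - a.x) * (b.y - a.y) - (p.y - a.y) * (b.x - a.x);
}

static bool pointInTriangle(Vec2 p, Vec2 t0, Vec2 t1, Vec2 t2)
{
    double d0 = side(p, t0, t1), d1 = side(p, t1, t2), d2 = side(p, t2, t0);
    bool hasNeg = (d0 < 0) || (d1 < 0) || (d2 < 0);
    bool hasPos = (d0 > 0) || (d1 > 0) || (d2 > 0);
    return !(hasNeg && hasPos);              // all on the same side (or on an edge)
}

// Is (x, y, z) inside the twisted prism? Bottom triangle t0, t1, t2 lies in z = 0,
// the twist axis passes through (rx, ry), and the total twist is Fi over height Hgt.
bool insideTwistedPrism(double x, double y, double z,
                        Vec2 t0, Vec2 t1, Vec2 t2,
                        double rx, double ry, double Fi, double Hgt)
{
    if (z < 0 || z > Hgt) return false;
    double t = -Fi * z / Hgt;                // un-twist by the rotation of this layer
    double xn = rx + (x - rx) * std::cos(t) - (y - ry) * std::sin(t);
    double yn = ry + (x - rx) * std::sin(t) + (y - ry) * std::cos(t);
    return pointInTriangle({xn, yn}, t0, t1, t2);
}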
I'm trying to generate a mesh from a sphere of radius r. My goal is to create a UV sphere such that every point on the polyhedron has distance from the sphere smaller than tol.
The following code creates a grid of points on the sphere. How can I compute parallels_count and meridians_count so that all the points of the mesh are within tolerance?
for j in range(parallels_count):
    parallel = PI * (j + 1) / parallels_count
    for i in range(meridians_count):
        meridian = 2.0 * PI * i / meridians_count
        yield spherical_to_cartesian(meridian, parallel)
The code comes from here, and this is a picture of the UV sphere:
The distance between each face of the mesh and the sphere will be maximum around the center of the face.
So, for the distance between a face and the sphere to be smaller than tol it is not sufficient that the distances between the edges of the face and the corresponding circumferences are smaller than tol.
This picture is out of context but helps me explain what I mean.
The biggest distance between points is on the equator, so use the circle circumference to obtain the angular step. If I am not mistaken, it should be...
dangle = tol/r; //[rad]
where r is the sphere radius in the same units as tol. You can use a smaller step to be sure, like dangle *= 0.75; use this for both the parallel and meridian angles.
If you want your counts instead, then try:
meridians_count = (2.0*PI*r/tol)+1; // ceil or +1 just to be sure
parallels_count = 0.5*meridians_count;
It is still early here so I hope I did not make any silly math mistake (the easiest stuff is the worst for silly bugs).
Also take a look at a few related QAs of mine:
Applying map of the earth texture a Sphere
Make a sphere with equidistant vertices
Sphere triangulation
[Edit1] Well, your new definition of tol changes everything.
I see it like this: the chord between two neighbouring points is closest to the sphere at its midpoint, which lies at distance r*cos(da/2) from the center, so
cos(da/2) = (r-tol)/r
da = 2.0*acos((r-tol)/r)
If you convert this to the spherical surface, the max difference is in the center of a uv grid cell, which corresponds to a sqrt(2)*da diagonal, so try to use:
da = sqrt(2.0)*acos((r-tol)/r)
so your angle step should be a bit smaller than that ...
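Putting the corrected step together, a minimal C++ sketch (the rounding-up and the helper name are my own reading of the above):

#include <cmath>

// Counts for a UV sphere of radius r so that no point of a face is farther than tol
// from the sphere (assumes 0 < tol < r). Includes the sqrt(2) factor for the cell diagonal.
void sphereGridCounts(double r, double tol, int &meridians_count, int &parallels_count)
{
    const double PI = 3.14159265358979323846;
    double da = std::sqrt(2.0) * std::acos((r - tol) / r);   // safe angular step [rad]
    meridians_count = (int)std::ceil(2.0 * PI / da);          // full circle along the equator
    parallels_count = (int)std::ceil(PI / da);                // half circle from pole to pole
}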
I'm working on a 3D mapping application, and I need to figure out the visible region of a sphere (the Earth) from a given point in space, for things like clipping mapped regions.
Several things get easier if I can project the outline of Earth into screen space, clip polygons there, and then project back to the surface of the Earth (lat/lon), but I'm lost as to how to do that.
Is there a reasonable way to compute the outline of a sphere after perspective projection, and then a reasonable way to project things back onto the sphere?
You can clip the polygons in 3D. The silhouette of the sphere - back-projected into 3D - will always be a circle on a plane. Perspective projection does not change that. Thus, you can clip all polygons at the plane.
Calculating the plane is not too hard. If you consider the sphere's center the origin, then the plane could be represented in normal form as:
dot(n, x) = d
n is the normal. This one is easy. It is just the unit direction vector from the sphere center to the observer.
d is the distance from the sphere center. This is a bit harder but not too hard. If l is the distance of the observer to the sphere center and r is the sphere radius, then
d = r^2 / l
This is the plane which you can use to clip your polygons in 3D. If you need the radius of the circle on it, you can use the following formula:
r_c = sqrt(r^2 - d^2) = r * sqrt(1 - r^2/l^2)
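A minimal C++ sketch of those quantities (the small vector type and all names are mine; the plane is expressed relative to the sphere center, i.e. dot(n, x - sphereCenter) = d):

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double length(Vec3 v)    { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Plane of the sphere's silhouette as seen from 'eye', plus the silhouette circle radius.
void silhouettePlane(Vec3 sphereCenter, double r, Vec3 eye,
                     Vec3 &n, double &d, double &circleRadius)
{
    Vec3 toEye = sub(eye, sphereCenter);
    double l = length(toEye);                        // distance from sphere center to observer
    n = {toEye.x / l, toEye.y / l, toEye.z / l};     // unit direction from center to observer
    d = r * r / l;                                   // distance of the plane from the center
    circleRadius = std::sqrt(r * r - d * d);         // radius of the silhouette circle
}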
Let us take a point on the unit sphere in spherical coordinates (cos(u)sin(v), sin(u)sin(v), cos(v)) and an arbitrary projection center (x, y, z).
We express that a projecting line is tangent to the sphere by the perpendicularity condition of the direction of the line and the vector from the origin of the sphere:
(x - cos(u)sin(v)) cos(u)sin(v) + (y - sin(u)sin(v)) sin(u)sin(v) + (z - cos(v)) cos(v) = 0
This simplifies to
x cos(u)sin(v) + y sin(u)sin(v) + z cos(v) = 1
which is a curve in the longitude/latitude coordinates. You can solve u as a function of v or conversely.
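One way to do that: for a fixed v, the equation has the form A cos(u) + B sin(u) = C with A = x sin(v), B = y sin(v), and C = 1 - z cos(v). Solutions exist when A^2 + B^2 >= C^2, and then u = atan2(B, A) +/- acos(C / sqrt(A^2 + B^2)).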
I have a question regarding the projection of an image over a set of 3D points. The image is given to me as a JPG, together with position and attitude information of the camera relative to a cartesian coordinate system (Xc,Yc,Zc and yaw, pitch, roll), as well as the horizontal and vertical field of view (in degrees).
Points are given solely by their 3D position in the same coordinate system (Xp, Yp, Zp).
In my coordinate system, Z is up. To project the image onto the points, I
compute the vector from camera to each point
Vector3 c2p = (Xp,Yp,Zp)-(Xc,Yc,Zc);
rotate c2p according to my camera's attitude (quaternion):
Vector3 c2pCamFrame = getCamQuaternion().conjugate().rotate(c2p);
compute azimuth and elevation from the camera's "center ray" to the point:
float azimuth = atan2(c2pCamFrame.x(), c2pCamFrame.y());
float elevation = atan2(c2pCamFrame.z(),sqrt(pow(c2pCamFrame.x(),2)+pow(c2pCamFrame.y(),2)));
if azimuth and elevation are within the field of view, I assign the color of the corresponding pixel to the point.
This works almost perfectly, and the "almost" motivates my question. Let me show you:
I cannot figure out why the elevation of the projection is distorted. In the bottom right of the image, you can see that points outside the frustum (exceeding the elevation) actually become colored - and this distortion is null at an azimuth of 0 degrees and peaks at the left and right edges of the image, creating the pillow distortion.
Why does this distortion appear? I'd love to understand this problem both in geometrical as well as mathematical terms. Thank you!
The field of view angles are only valid along the principal axes. But you can do it the other way around, i.e. calculate the x/y bounds from the angles:
maxX = tan(horizontal_fov / 2)
maxY = tan(vertical_fov / 2)
And check
if(abs(c2pCamFrame.x() / c2pCamFrame.z()) <= maxX
&& abs(c2pCamFrame.y() / c2pCamFrame.z()) <= maxY)
Additionally, you might want to check if the points are in front of the camera:
... && c2pCamFrame.z() > 0
This assumes a left-handed coordinate system.
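Put together, a minimal sketch of the whole test (following this answer's convention that the camera looks along +z in its own frame; horizontal_fov_radians and vertical_fov_radians are my own names for the given FOV angles):

float maxX = tanf(horizontal_fov_radians / 2.0f);
float maxY = tanf(vertical_fov_radians / 2.0f);

bool insideFrustum =
       c2pCamFrame.z() > 0                                  // in front of the camera
    && fabsf(c2pCamFrame.x() / c2pCamFrame.z()) <= maxX     // within horizontal extent
    && fabsf(c2pCamFrame.y() / c2pCamFrame.z()) <= maxY;    // within vertical extent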