Say we are rendering an image for an IKEA catalog that includes a mug with a smooth, mirror-like surface.
The mug will be illuminated by an environment map of a room interior with a window, a
directional light, and an ambient component.
The environment map is represented in spherical coordinates using φ and θ
(e.g. point (1, 0, 0) is (φ = 90°, θ = 90°); point (-1, 0, 0) is (φ = 90°, θ = -90°)).
The camera is positioned at (0, 0, 20), viewing in direction (0, 0, -1) with up direction (0, 1, 0). The mug is centered at the origin, with height 10 and radius 5. The mug's axis is aligned
with the y-axis, and the whole mug is captured in the image.
For a nice product photo we'd like to see the window reflected in the side of the mug. Where
in the environment map can the window be placed so that it will be reflected in the side of
the cylindrical mug? Compute the (φ, θ) coordinates of the corners of that region, i.e. the highest and lowest φ and θ that will be reflected in the mug.
How do I approach this problem? Is there a specific equation I should be utilizing? Thanks in advance.
You can solve that by casting rays from the viewer to the mug and reflecting them onto the map: say, one ray per corner of the desired reflected quadrilateral on the mug.
Reflection is computed simply by the law of reflection: the surface normal is the bisector of the incident and reflected rays.
First compute the incident ray from the viewer to one of the chosen corners. Then compute the normal at that point (it is perpendicular to the mug's axis of rotation and points along the radius through that point). From the incident vector and the normal you can find the direction of the reflected vector.
Converting this vector to spherical coordinates gives you one corner of the quadrilateral in the environment map.
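As a rough C++ sketch of those steps (the vector helpers are illustrative, and the spherical conversion assumes φ is measured from the +y axis and θ from +z toward +x, which reproduces the convention stated in the question):

#include <cmath>

struct Vec3 { double x, y, z; };

const double PI = 3.14159265358979323846;

Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Law of reflection in vector form: r = d - 2 (d . n) n,
// where d is the incident direction and n the unit surface normal.
Vec3 reflect(const Vec3& d, const Vec3& n) {
    double k = 2.0 * dot(d, n);
    return { d.x - k * n.x, d.y - k * n.y, d.z - k * n.z };
}

// On the side of the mug (a cylinder around the y-axis centered at the
// origin), the outward normal at point p is the horizontal radius direction.
Vec3 cylinderNormal(const Vec3& p) { return normalize({ p.x, 0.0, p.z }); }

// Spherical conversion consistent with (1,0,0) -> (90, 90) and
// (-1,0,0) -> (90, -90), in degrees.
void toSpherical(const Vec3& v, double& phiDeg, double& thetaDeg) {
    Vec3 u = normalize(v);
    phiDeg = std::acos(u.y) * 180.0 / PI;
    thetaDeg = std::atan2(u.x, u.z) * 180.0 / PI;
}

For each chosen corner p, the map direction would then be reflect(normalize(p - viewer), cylinderNormal(p)), converted with toSpherical.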
I have a point on a sphere that needs to be rotated. I have 3 different degrees of rotation (roll, pitch, yaw). Are there any formulas I could use to calculate where the point would end up after applying each rotation? For simplicity's sake, the sphere can be centered on the origin if that helps.
I've tried looking at different ways of rotation, but nothing quite matches what I am looking for. If I needed to just rotate the sphere, I could do that, but I need to know the position of a point based on the rotation of the sphere.
Using Unity as an example (this is outside of Unity, in a separate project, so using their library is not possible):
If the original point is at (1, 0, 0)
And the sphere then gets rotated by [45, 30, 15]:
What is the new (x, y, z) of the point?
If you have a given rotation as a Quaternion q, then you can rotate your point (Vector3) p like this:
Vector3 pRotated = q * p;
And if you have your rotation in Euler Angles then you can always convert it to a Quaternion like this (where x, y and z are the rotations in degrees around those axes):
Quaternion q = Quaternion.Euler(x,y,z);
Note that Unity's Euler angles are defined so that the object is first rotated around the z-axis, then around the x-axis and finally around the y-axis - and that these axes are all in the space of the parent transform, if any (not the object's local axes, which would move with each rotation).
So I suppose that the z-axis would be roll, the x-axis would be pitch and the y-axis would be yaw. You might have to switch the signs on some axes to match the expected result - for example, a positive x rotation will tilt the object downwards (assuming that the object's notion of forward is its positive z direction and that up is its positive y direction).
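Since this is outside of Unity, here is a minimal standalone C++ sketch of that rotation order (z, then x, then y); the rotations below are the standard right-handed ones, so some signs may still need flipping to match Unity's left-handed conventions, as noted above:

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// Rotate v around the z-axis (roll), then the x-axis (pitch), then the
// y-axis (yaw), with angles in degrees.
Vec3 rotateEulerZXY(Vec3 v, double xDeg, double yDeg, double zDeg) {
    const double d2r = 3.14159265358979323846 / 180.0;
    double cx = std::cos(xDeg * d2r), sx = std::sin(xDeg * d2r);
    double cy = std::cos(yDeg * d2r), sy = std::sin(yDeg * d2r);
    double cz = std::cos(zDeg * d2r), sz = std::sin(zDeg * d2r);
    v = { v.x * cz - v.y * sz, v.x * sz + v.y * cz, v.z };   // roll (z)
    v = { v.x, v.y * cx - v.z * sx, v.y * sx + v.z * cx };   // pitch (x)
    v = { v.x * cy + v.z * sy, v.y, -v.x * sy + v.z * cy };  // yaw (y)
    return v;
}

int main() {
    // The example from the question: point (1, 0, 0) rotated by [45, 30, 15].
    Vec3 p = rotateEulerZXY({ 1, 0, 0 }, 45, 30, 15);
    std::printf("(%f, %f, %f)\n", p.x, p.y, p.z);
}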
I am trying to understand what the up vector is in ray tracing, but I'm not sure if I get it correctly. Is the up vector used to get the image plane? I can get the "right vector" by taking the cross product of the up vector and the forward vector (= target - origin). Does it have any other purpose? A good example could help me understand it.
Let's review how you construct these vectors:
First of all, your forward vector is very straightforward: vec3 forward = vec3(camPos - target), i.e. the opposite of the direction your camera is facing, where target is a point in 3D space and camPos is the current position of your camera.
Now, you can't define a Cartesian coordinate system with only one vector, so we need two more to describe a coordinate system for the camera / the view of the ray tracer. So let's find a vector that is perpendicular to the forward vector. Such a vector can be found with vec3 v = cross(anyVec, forward). In fact, you could use almost any vector (anything except the null vector or a vector parallel to forward) to get the desired second direction, but this is not convenient. We want that, when looking along the z-axis (0, 0, 1), "right" is described as (1, 0, 0). This is true for vec3 right = cross(yAxis, forward) with vec3 yAxis = (0, 1, 0). If you changed your forward vector so that you were no longer looking along the z-axis, your right vector would change too, similar to how your "right" changes when you change your orientation.
So now only one vector is left to describe the orientation of our camera. This vector has to be perpendicular to both the right and forward vectors. Such a vector can be found with vec3 up = cross(forward, right). This vector describes what "up" is for the camera.
Please note that the forward-, right- and up-vectors need to be normalized.
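Putting the three steps together, a minimal self-contained sketch (C++ here rather than shader code, with illustrative helper names):

#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Build the orthonormal camera basis described above.
void cameraBasis(const Vec3& camPos, const Vec3& target,
                 Vec3& forward, Vec3& right, Vec3& up) {
    forward = normalize(camPos - target);            // opposite of the view direction
    right = normalize(cross({ 0, 1, 0 }, forward));  // world y-axis as the helper vector
    up = cross(forward, right);                      // unit length by construction
}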
If you did a handstand, your up vector would be (0, -1, 0), describing that you see everything upside down, while the other two vectors would stay exactly the same. If you look straight down at the floor at a 90° angle, your up vector would be (0, 0, 1), your forward vector (0, 1, 0) and your right vector would stay at (1, 0, 0). (Note that in this last pose forward is parallel to yAxis, so right can no longer be derived from cross(yAxis, forward); it has to be kept from before the rotation.) So the function or "motivation" of the up vector is that we need it to describe the orientation of your camera: you need it to "nod" the camera, i.e. adjust its pitch.
Is the up vector used to get the image plane?
The up vector is used to describe the orientation of your camera, or your view into the scene. I assume you are using a view matrix produced with the "look at" method. When applying the view matrix you are not rotating your camera; you are rotating all the objects/vertices. The camera always faces the same direction (normally along the negative z-axis). For more information you can visit: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-generating-camera-rays/generating-camera-rays
The matrix that transforms world-space coordinates to view space is called the view matrix. A look-at matrix is a view matrix constructed from an eye point, a look-at direction or target point, and an up vector.
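As a sketch of how the basis becomes a look-at view matrix (row-major, right-handed, camera looking down its negative z-axis; Vec3, cross, normalize and cameraBasis as in the snippet above):

double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// 4x4 look-at view matrix: transforms world-space points into view space.
// The last column moves the eye point to the origin.
void lookAt(const Vec3& eye, const Vec3& target, double m[16]) {
    Vec3 f, r, u;
    cameraBasis(eye, target, f, r, u);
    double v[16] = {
        r.x, r.y, r.z, -dot(r, eye),
        u.x, u.y, u.z, -dot(u, eye),
        f.x, f.y, f.z, -dot(f, eye),
        0,   0,   0,   1
    };
    for (int i = 0; i < 16; ++i) m[i] = v[i];
}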
I have a polyhedron which has one face missing. I have a library function which can help close the border after sending in an array of points in CCW order:
class Mesh {
public:
    // Points need to be in CCW order when seen from the outside of the polyhedron.
    void addAFace(std::vector<Point> points);
};
I have found the vertices on the border and have put them in an array one after the other. How can I know if the order of vertices in this array is counter-clockwise or clockwise when seen from the outside of the polyhedron?
For example, the vertices should be in the order of 0, 1, 2, 3
The polyhedron may be non-convex.
For a convex polyhedron: take any point Px of the polyhedron not belonging to that face (for example, vertex 4, 5, 6 or 7 in your example) and check the sign of the triple product - it should be negative for CCW order:
((P1 - P0) .cross. (P2 - P1)) .dot. (Px - P1)
If there is no guarantee that P0-P3 are properly ordered, then you need to check the signs of the triple products for all triplets of face vertices.
If the polyhedron is concave, you need some point near the given face but lying inside the polyhedron. For example, choose any triplet of consecutive vertices with an acute internal angle, take the center of that triangle and shift the center along the normal by a small distance. Check that the shifted point is inside; otherwise reflect it to the other side of the face.
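A minimal C++ sketch of the convex-case test (Point and the function name are illustrative):

struct Point { double x, y, z; };

Point sub(const Point& a, const Point& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
double dot(const Point& a, const Point& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Point cross(const Point& a, const Point& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// True if p0, p1, p2 wind counter-clockwise when seen from outside the
// polyhedron, given a reference point px strictly inside it.
bool isCCWFromOutside(const Point& p0, const Point& p1,
                      const Point& p2, const Point& px) {
    Point n = cross(sub(p1, p0), sub(p2, p1));  // normal implied by the winding
    return dot(n, sub(px, p1)) < 0;             // negative triple product => CCW
}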
I have a question regarding the projection of an image onto a set of 3D points. The image is given to me as a JPG, together with the position and attitude of the camera relative to a Cartesian coordinate system (Xc, Yc, Zc and yaw, pitch, roll), as well as the horizontal and vertical fields of view (in degrees).
Points are given solely by their 3D position in the same coordinate system (Xp, Yp, Zp).
In my coordinate system, Z is up. To project the image onto the points, I
compute the vector from camera to each point
Vector3 c2p = (Xp,Yp,Zp)-(Xc,Yc,Zc);
rotate c2p according to my camera's attitude (quaternion):
Vector3 c2pCamFrame = getCamQuaternion().conjugate().rotate(c2p);
compute azimuth and elevation from the camera's "center ray" to the point:
float azimuth = atan2(c2pCamFrame.x(), c2pCamFrame.y());
float elevation = atan2(c2pCamFrame.z(),sqrt(pow(c2pCamFrame.x(),2)+pow(c2pCamFrame.y(),2)));
if azimuth and elevation are within the field of view, I assign the color of the corresponding pixel to the point.
This works almost perfectly, and the "almost" motivates my question.
I cannot figure out why the elevation of the projection is distorted. In the bottom right of the image, you can see that points outside the frustum (exceeding the elevation limit) actually get colored. This distortion is zero at an azimuth of 0 degrees and peaks at the left and right edges of the image, creating a pincushion distortion.
Why does this distortion appear? I'd love to understand this problem in both geometrical and mathematical terms. Thank you!
The field of view angles are only valid on the principal axes: comparing the elevation angle against half the vertical field of view effectively tests against a cone rather than against the flat top and bottom planes of the frustum, which is exactly the bulge you observe away from azimuth 0. But you can do it the other way around, i.e. calculate the x/y bounds from the angles:
maxX = tan(horizontal_fov / 2)
maxY = tan(vertical_fov / 2)
And check
if(abs(c2pCamFrame.x() / c2pCamFrame.z()) <= maxX
&& abs(c2pCamFrame.y() / c2pCamFrame.z()) <= maxY)
Additionally, you might want to check if the points are in front of the camera:
... && c2pCamFrame.z() > 0
This assumes a left-handed coordinate system.
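Putting the corrected check together, a small sketch (field of view angles in radians; the Vector3 accessors are replaced by plain members for brevity):

#include <cmath>

struct Vec3 { double x, y, z; };

// Frustum containment test in the camera frame, assuming the camera looks
// along +z (the left-handed convention noted above).
bool insideFrustum(const Vec3& c2pCamFrame, double hFov, double vFov) {
    if (c2pCamFrame.z <= 0) return false;  // behind the camera
    double maxX = std::tan(hFov / 2);      // half-width of the image plane at z = 1
    double maxY = std::tan(vFov / 2);      // half-height of the image plane at z = 1
    return std::fabs(c2pCamFrame.x / c2pCamFrame.z) <= maxX
        && std::fabs(c2pCamFrame.y / c2pCamFrame.z) <= maxY;
}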
I have gone through as many study resources on the internet as I could find, which are in the form of simple equations, vectors, or trigonometric identities.
I couldn't find a way to do the following:
Assuming Y is up in a 3D world.
I need to draw two orthogonal 2D views (not axis-aligned projections) of a 3D trajectory: a side view of the trajectory in a plane aligned with the trajectory itself (analogous to the XY-plane) and a top view of the same (analogous to the XZ-plane).
I have all the 3D points of the 3D trajectory and the initial velocity; both angles can be calculated by vector mathematics.
How should I proceed further?
For reference, below is the curve seen from different angles; it can lose its significance if simply projected onto the XY-plane. All I want is to convert the red curve along itself, the green curve along the green curve, and so on. And further, how would I map the side view to a plane? The top view is comparatively easy and is done just by taking the X and Z ordinates of each point.
That is the requirement. :)
I don't think I understand the question, but I'll answer my interpretation anyway.
You have a 3D trajectory described by a sequence of points p0, ..., pN. We are given an angle v for a plane P parallel to the Y-axis, and wish to compute the 2D coordinates (di, hi) of the points pi projected onto that plane, where hi is the height coordinate in the direction Y and di is the distance coordinate in the direction v. Assume p0 = (0, 0, 0) or else subtract p0 from all vectors.
Let pi = (xi, yi, zi). The height coordinate is hi = yi. Assume the angle v is given relative to the Z-axis. The direction vector for v is then r = (sin(v), 0, cos(v)), and the distance coordinate becomes di = dot(pi, r).
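A short C++ sketch of that projection (names are illustrative; v is in radians, measured from the Z-axis):

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct Vec2 { double d, h; };  // distance along the plane, height

// Project trajectory points onto the vertical plane whose horizontal
// direction r = (sin v, 0, cos v) makes angle v with the Z-axis.
std::vector<Vec2> sideView(const std::vector<Vec3>& pts, double v) {
    std::vector<Vec2> out;
    if (pts.empty()) return out;
    const double rx = std::sin(v), rz = std::cos(v);
    for (const Vec3& p : pts) {
        // Subtract p0 so the first point maps to (0, 0).
        double x = p.x - pts[0].x, y = p.y - pts[0].y, z = p.z - pts[0].z;
        out.push_back({ x * rx + z * rz, y });  // d = dot(p - p0, r), h = y
    }
    return out;
}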