I have the orientation of an object stored as a unit quaternion, and I want to see what angle the object's local x axis makes with the global y axis. What's the easiest way to do that?
Thanks!
I was overthinking it... rotate the vector (1, 0, 0), the local x axis, into the global frame, dot it with the global y vector, and take the arccos of the result. Since I didn't care about the object being upside down, I took
acos(abs(dot(rotateVector(myQuat, vector(1, 0, 0)), upVector)))
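For reference, the same computation as a rough Python sketch using scipy (assuming the quaternion is stored scalar-last, (x, y, z, w), as scipy expects; the function name is just illustrative):

import numpy as np
from scipy.spatial.transform import Rotation as R

def angle_local_x_to_global_y(quat_xyzw):
    # rotate the local x axis (1, 0, 0) into the global frame
    local_x_in_world = R.from_quat(quat_xyzw).apply([1.0, 0.0, 0.0])
    global_y = np.array([0.0, 1.0, 0.0])
    # abs() ignores whether the object is upside down, as in the answer above
    return np.arccos(abs(np.dot(local_x_in_world, global_y)))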
I have a point on a sphere that needs to be rotated. I have 3 different degrees of rotation (roll, pitch, yaw). Are there any formulas I could use to calculate where the point would end up after applying each rotation? For simplicity's sake, the sphere can be centered on the origin if that helps.
I've tried looking at different ways of rotation, but nothing quite matches what I am looking for. If I needed to just rotate the sphere, I could do that, but I need to know the position of a point based on the rotation of the sphere.
Using Unity as an example (this is outside of Unity in a separate project, so using their library is not possible):
If the original point is at (1, 0, 0)
And the sphere then gets rotated by [45, 30, 15]:
What is the new (x, y, z) of the point?
If you have a given rotation as a Quaternion q, then you can rotate your point (Vector3) p like this:
Vector3 pRotated = q * p;
And if you have your rotation in Euler Angles then you can always convert it to a Quaternion like this (where x, y and z are the rotations in degrees around those axes):
Quaternion q = Quaternion.Euler(x,y,z);
Note that Unity's Euler angles are defined so that first the object is rotated around the z axis, then around the x axis and finally around the y axis - and that these axes are all in the space of the parent transform, if any (not the object's local axes, which would move with each rotation).
So I suppose that the z axis would be roll, the x axis would be pitch and the y axis would be yaw. You might have to switch the signs on some axes to match the expected result - for example, a positive x rotation will tilt the object downwards (assuming that the object's notion of forward is its positive z direction and that up is its positive y direction).
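Since the asker can't use Unity's library, here is a sketch of the same rotation outside Unity, in Python with scipy. The extrinsic 'zxy' order mirrors the z-then-x-then-y convention described above; Unity is left-handed, though, so some signs may need flipping to reproduce Unity's numbers exactly:

import numpy as np
from scipy.spatial.transform import Rotation as R

def rotate_point(point, euler_deg):
    # euler_deg = (x, y, z) rotations in degrees, applied z -> x -> y about fixed (parent) axes
    x, y, z = euler_deg
    rot = R.from_euler('zxy', [z, x, y], degrees=True)  # lowercase = extrinsic (fixed axes)
    return rot.apply(point)

# the example from the question: point (1, 0, 0) rotated by [45, 30, 15]
print(rotate_point(np.array([1.0, 0.0, 0.0]), (45.0, 30.0, 15.0)))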
I am trying to understand what the up vector is in ray tracing, but I'm not sure if I get it correctly. Is the up vector used to get the image plane? I get the "right vector" by taking the cross product of the up vector and the forward vector (= target - origin). Does it have any other purpose? A good example could help me understand it.
Let's revise how you construct these vectors:
First of all, your forward vector is very straightforward: vec3 forward = vec3(camPos - target), i.e. the opposite of the direction your camera is facing, where target is a point in 3D space and camPos is the current position of your camera.
Now you can't define a Cartesian coordinate system with only one vector, so we need two more to describe a coordinate system for the camera / the view of the ray tracer. So let's find a vector that is perpendicular to the forward vector. Such a vector can be found with: vec3 v = cross(anyVec, forward). In fact, you could use almost any vector (as long as it is neither parallel to forward nor the null vector) to get the desired second direction, but this is not convenient. We want that, when looking along the z axis (0, 0, 1), "right" is described as (1, 0, 0). This is true for vec3 right = cross(yAxis, forward) with vec3 yAxis = (0, 1, 0). If you change your forward vector so you aren't looking along the z axis, your right vector changes too, similar to how your "right" changes when you change your orientation.
So now only one vector is left to describe the orientation of our camera. It has to be perpendicular to both the right and forward vectors. Such a vector can be found with: vec3 up = cross(forward, right). This vector describes what "up" is for the camera.
Please note that the forward-, right- and up-vectors need to be normalized.
If you did a handstand, your up vector would be (0, -1, 0) and would therefore describe that you see everything upside down, while the other two vectors would stay exactly the same. If you looked at the floor at a 90° angle, your up vector would be (0, 0, 1), your forward vector (0, 1, 0) and your right vector would stay at (1, 0, 0). So the function or "motivation" of the up vector is that we need it to describe the orientation of your camera. You need it to "nod" the camera, i.e. adjust its pitch.
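If it helps, here is a minimal sketch of that construction in Python/numpy (names are illustrative; it assumes the world up is the y axis and the camera isn't looking straight along it):

import numpy as np

def camera_basis(cam_pos, target, world_up=np.array([0.0, 1.0, 0.0])):
    forward = cam_pos - target                    # opposite of the viewing direction
    forward = forward / np.linalg.norm(forward)
    right = np.cross(world_up, forward)           # "right" relative to world up and forward
    right = right / np.linalg.norm(right)
    up = np.cross(forward, right)                 # what "up" is for the camera
    return forward, right, up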
Is the up vector used to get the image plane?
The up-vector is used to describe the orientation of your camera or view into your scene. I assume you are using a view-matrix produced with the "look at method". When applying the view-matrix you are not rotating your camera. You are rotating all the objects/vertices. The camera is always facing the same direction (normally along the negative z-axis). For more information you can visit: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-generating-camera-rays/generating-camera-rays
The matrix that transforms world space coordinates to view space is called the view matrix. A look at matrix is a view matrix that is constructed from an eye point, a look at direction or target point and an up vector.
More details here
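For completeness, a sketch of how those vectors are usually assembled into a look-at view matrix (assuming a right-handed, column-vector convention similar to gluLookAt; an illustration, not the exact code of any particular library):

import numpy as np

def look_at(eye, target, world_up=np.array([0.0, 1.0, 0.0])):
    forward = eye - target
    forward = forward / np.linalg.norm(forward)
    right = np.cross(world_up, forward)
    right = right / np.linalg.norm(right)
    up = np.cross(forward, right)

    view = np.identity(4)
    view[0, :3] = right          # rows are the camera's basis vectors
    view[1, :3] = up
    view[2, :3] = forward
    view[:3, 3] = -view[:3, :3] @ eye   # move the eye to the origin
    return view                  # world-space point -> camera-space point (camera looks down -z)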
I have a catmull-rom curve defined with a couple of control points as shown here:
I would like to animate an object moving along the curve, but be able to define the velocity of the object.
When iterating over the curve's points using the getPoint method, the object moves chordally (in the image, at u=0 we are at p1, at u=0.25 we are at p2, etc.). Using the getPointAt method, the object moves with uniform speed along the curve.
However, what I would like to do is to have greater control over the animation, so that I can specify that the movement from p1 to p2 should take 0.5, from p2 to p3 0.3, and from p3 to p4 0.2. Is this possible?
Thanks for the suggestions. The way I finally implemented this was to create a custom mapping between my time variable and the u variable for three.js's getPoint function.
I created a piecewise linear function using a JavaScript library called everpolate. This way I could map t to u such that:
At t = 0, u = 0, resulting in p1
At t = 0.5, u = 1/3, resulting in p2
At t = 0.8, u = 2/3, resulting in p3
At t = 1, u = 1, resulting in p4
T to U map picture
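For reference, the same t-to-u mapping can be built with any piecewise linear interpolation routine; here is a sketch using numpy.interp instead of everpolate, with the breakpoints listed above:

import numpy as np

t_knots = [0.0, 0.5, 0.8, 1.0]              # animation time
u_knots = [0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0]  # curve parameter passed to getPoint

def t_to_u(t):
    # piecewise linear map from t in [0, 1] to u in [0, 1]
    return np.interp(t, t_knots, u_knots)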
However, what I would like to do is to have greater control over the animation, so that I can specify that the movement from p1 to p2 should take 0.5, from p2 to p3 0.3, and from p3 to p4 0.2. Is this possible?
You can achieve this by using an animation library like tween.js. In this way, you can specify the start and end position of your object and the desired duration. It's also possible to customize the type of transition by using easing functions.
You have multiple options; I will describe the theory and then one possible implementation.
Theory
You want to arc-length parametrize your curve, which means that an increment of 1 in the parameter results in a movement of distance 1 along the curve.
This parametrization will allow you to fully control the movement of your object at any speed you want, be it constant, linear, non-linear, piecewise...
Possible implementation
There are many numerical integration techniques that will allow you to arc-length parametrize the curve.
A possible one is to precompute the values and put them in a table. Pick a small epsilon and, starting at the first parameter value x_0, evaluate the function at x_0, x_0 + epsilon, x_0 + 2*epsilon...
As you do this, take the linear distance between consecutive samples and add it to an accumulator, i.e. travelled_distance += length(sample[x], sample[x+1]).
Store each (parameter value, accumulated distance) pair in a table.
Now, when you are at parameter x and want to move y units along the curve, round x to the nearest tabulated x_n, take its accumulated distance, linearly search for the first entry whose accumulated distance is greater than that value plus y, and return that entry's parameter.
This algorithm is not the most efficient, but it is easy to understand and to code, so at least it can get you started.
If you need a more optimized version, look for arc length parametrization algorithms.
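If it helps to get started, here is a rough sketch of that table-based approach in Python; curve(u) is a stand-in for whatever point-evaluation function you have (e.g. three.js's getPoint):

import math

def build_arclength_table(curve, epsilon=0.001):
    # list of (parameter, accumulated distance) pairs, sampled every epsilon
    table = [(0.0, 0.0)]
    travelled = 0.0
    prev = curve(0.0)
    u = 0.0
    while u < 1.0:
        u = min(u + epsilon, 1.0)
        point = curve(u)
        travelled += math.dist(prev, point)   # chord length between consecutive samples
        table.append((u, travelled))
        prev = point
    return table

def param_at_distance(table, distance):
    # first tabulated parameter whose accumulated distance reaches `distance`
    for u, d in table:
        if d >= distance:
            return u
    return table[-1][0]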
Say we are rendering an image for an IKEA catalog that includes a mug with a smooth, mirror-like surface.
The mug will be illuminated by an environment map of a room interior with a window, a
directional light, and an ambient component.
The environment map is represented in spherical coordinates using φ and θ
(e.g. point (1, 0, 0) is (φ = 90°, θ = 90°); point (-1, 0, 0) is (φ = 90°, θ = −90°)).
The camera is positioned at (0, 0, 20), viewing in direction (0, 0, -1) with up direction (0, 1, 0). The mug is centered at the coordinates origin, with height 10 and radius 5. The mug’s axis is aligned
with the y axis. And the whole mug can be captured in the image.
For a nice product photo we’d like to see the window reflected in the side of the mug. Where
can the window be placed in the environment map where it will be reflected in the side of
the cylindrical mug? Compute the (φ, θ) coordinates of the corners of the region and the highest and lowest φ and θ that will be reflected in the mug.
How do I approach this problem? Is there a specific equation I should be utilizing? Thanks in advance.
You can solve that by casting rays from the viewer to the mug and reflecting them onto the map. Say one ray per corner of the desired reflected quadrilateral on the mug.
Reflection is computed simply by the reflection law: the normal to the surface is the bisector of the incident and reflected rays.
First compute the incident ray from the viewer to one of the chosen corners. Then compute the normal at that point (it's perpendicular to the axis of rotation of the mug, in the direction of the radius to the point). From the incident vector and the normal, you will find the direction of the reflected vector.
Turning this vector to spherical coordinates will give you a corner of the quadrilateral in the environment map.
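A small sketch of that computation in Python/numpy; the chosen surface point is hypothetical, and the spherical conversion follows the convention implied by the two example points in the question, so double-check it against your definition of φ and θ:

import numpy as np

def reflect(incident, normal):
    # reflection law: mirror the incident direction about the unit surface normal
    return incident - 2.0 * np.dot(incident, normal) * normal

def to_spherical_degrees(v):
    x, y, z = v / np.linalg.norm(v)
    phi = np.degrees(np.arccos(y))        # angle measured from the +y axis
    theta = np.degrees(np.arctan2(x, z))  # azimuth around y, measured from +z
    return phi, theta

eye = np.array([0.0, 0.0, 20.0])                # camera position from the question
surface_point = np.array([0.0, 2.0, 5.0])       # hypothetical point on the mug's side
normal = np.array([0.0, 0.0, 1.0])              # radial direction at that point
incident = surface_point - eye
incident = incident / np.linalg.norm(incident)
print(to_spherical_degrees(reflect(incident, normal)))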
So I need to map a texture to a sphere from within a pixel/fragment shader in Cg.
What I have as "input" in every pass are the Cartesian coordinates x, y, z for the point on the sphere where I want the texture to be sampled. I then transform those coordinates into Spherical coordinates and use the angles Phi and Theta as U and V coordinates, respectively, like this:
u = atan2(y, z)
v = acos(x/sqrt(x*x + y*y + z*z))
I know that this simple mapping will produce seams at the poles of the sphere but at the moment, my problem is that the texture repeats several times across the sphere. What I want and need is that the whole texture gets wrapped around the sphere exactly once.
I've fiddled with the shader and searched around for hours but I can't find a solution. I think I need to apply some sort of scaling somewhere but where? Or maybe I'm totally on the wrong track, I'm very new to Cg and shader programming in general... Thanks for any help!
Since the results of inverse trigonometric functions are angles, they will be in [-Pi, Pi] for u and [0, Pi] for v (this follows from basic trigonometry). So you just have to scale them appropriately: u /= 2*Pi and v /= Pi should do, assuming you have GL_REPEAT (or the D3D equivalent) as the texture coordinate wrapping mode (which your description sounds like).
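In code, the whole mapping looks roughly like this (written as plain Python rather than Cg, just to show the arithmetic):

import math

def sphere_uv(x, y, z):
    # same formulas as in the question, scaled so the texture wraps exactly once
    u = math.atan2(y, z) / (2.0 * math.pi)                          # [-0.5, 0.5], fine with a REPEAT wrap mode
    v = math.acos(x / math.sqrt(x * x + y * y + z * z)) / math.pi   # [0, 1]
    return u, v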