Getting the normal at a point on a cone - graphics

I'm trying to get cone primitives working in my ray tracer. I got intersections of a cone and ray working. However, I do not know how to get the normals of the cone from the way the cone is defined.
I define my cone with the following:
pos -- The vertex of the cone
size -- Height of the cone
direction -- A unit vector that defines the direction of the cone
angle -- The angle of the cone
(For more info, I used Intersection of line and Cone as a reference for how the cone is defined.)
From what I gather, I can take two tangent vectors at a point using the parametric equation and get the normal from their cross product. However, I don't know how to derive the parametric equation, or its two tangents, from the way I defined my cone.
If someone has another method of finding the normals, that would be great too.

I ended up applying the gradient operator to the implicit cone equation (x*a + y*b + z*c)^2 - (a^2 + b^2 + c^2)(x^2 + y^2 + z^2)cos^2(t) where
{x,y,z} = the 3D point where the normal is wanted (taken relative to the cone's vertex)
{a,b,c} = the direction vector
t = the angle of the cone
Then, using Wolfram Alpha, this gives
normal.x = 2 a (a x + b y + c z) - 2 (a^2 + b^2 + c^2) x cos^2(t)
normal.y = 2 b (a x + b y + c z) - 2 (a^2 + b^2 + c^2) y cos^2(t)
normal.z = 2 c (a x + b y + c z) - 2 (a^2 + b^2 + c^2) z cos^2(t)
and normal = {normal.x, normal.y, normal.z} (normalize it before use).
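For reference, here is a minimal numerical sketch of this gradient approach using NumPy. The function name and the outward-flip step at the end are my own additions; the point is measured relative to the cone's vertex, matching the implicit equation above:

```python
import numpy as np

def cone_normal_gradient(point, apex, axis, half_angle):
    """Normal via the gradient of the implicit cone equation
    F(p) = dot(p, axis)^2 - dot(axis, axis) * dot(p, p) * cos(half_angle)^2,
    with p measured relative to the apex."""
    p = np.asarray(point, float) - np.asarray(apex, float)
    a = np.asarray(axis, float)
    c2 = np.cos(half_angle) ** 2
    # Gradient components, exactly as derived above
    grad = 2.0 * a * np.dot(a, p) - 2.0 * np.dot(a, a) * p * c2
    n = grad / np.linalg.norm(grad)
    # The raw gradient may point into the cone; flip it so it points
    # away from the axis (outward-facing)
    radial = p - a * np.dot(p, a) / np.dot(a, a)
    if np.dot(n, radial) < 0.0:
        n = -n
    return n
```

For a 45-degree cone with apex at the origin and axis (0, 0, 1), the point (1, 0, 1) lies on the surface, and the function returns the outward normal (1, 0, -1)/sqrt(2), which is perpendicular to the slant line through the point.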

Related

Cone normal vector

I have cone->p (vertex of the cone), cone->orient (axis vector), cone->k (half-angle tangent), cone->minm and cone->maxm (2 height values, for cone caps). Also I have point intersection which is on the cone. How do I find the cone (side surface) normal vector at intersection point using only these parameters?
Came up with a simpler method:
Find the distance Dis from the intersection point I to the vertex P.
Scale the axis direction to length
D = Dis * sqrt(1+k^2)
and make a point on the axis at this distance:
A = P + Normalized(Orient) * D
Now
Normal = I - A
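A sketch of this simpler method in Python with NumPy (the function name is mine; it assumes I really lies on the cone's side surface):

```python
import numpy as np

def cone_side_normal(I, P, orient, k):
    """Normal on the cone's side surface at point I: the foot of the
    normal lies on the axis at distance D = |I - P| * sqrt(1 + k^2)
    from the vertex P, where k is the half-angle tangent."""
    I, P = np.asarray(I, float), np.asarray(P, float)
    u = np.asarray(orient, float)
    u = u / np.linalg.norm(u)          # Normalized(Orient)
    dis = np.linalg.norm(I - P)        # Dis
    A = P + u * dis * np.sqrt(1.0 + k * k)
    n = I - A
    return n / np.linalg.norm(n)
```

For a cone with vertex at the origin, axis (0, 0, 1), and k = 1 (a 45-degree half-angle), the surface point (1, 0, 1) gives the outward normal (1, 0, -1)/sqrt(2).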
Old answer:
Make an orthogonal projection of point I (the intersection) onto the cone axis, using the vector IP = I - P and the scalar (dot) product:
AxProj = P + Orient * dot(IP, Orient) / dot(Orient, Orient)
Vector from AxProj to I (perpendicular to the axis):
AxPerp = I - AxProj
Vector, tangent to cone surface, using vector product:
T = IP x AxPerp
Vector, normal to cone surface:
N = T x IP
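The projection and cross-product steps above, sketched in Python with NumPy (names are illustrative):

```python
import numpy as np

def cone_normal_cross(I, P, orient):
    """Normal via two cross products: project I onto the axis, build a
    tangent T = IP x AxPerp, then N = T x IP, as in the steps above."""
    I, P = np.asarray(I, float), np.asarray(P, float)
    o = np.asarray(orient, float)
    IP = I - P
    AxProj = P + o * np.dot(IP, o) / np.dot(o, o)
    AxPerp = I - AxProj               # perpendicular to the axis
    T = np.cross(IP, AxPerp)          # tangent to the cone surface
    N = np.cross(T, IP)               # normal to the cone surface
    return N / np.linalg.norm(N)
```

With vertex at the origin, axis (0, 0, 1), and the 45-degree surface point (1, 0, 1), this also yields (1, 0, -1)/sqrt(2), agreeing with the distance-based method.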
If I is the intersection point on the cone's surface and you know its coordinates, and P is the vertex of the cone, whose coordinates you also know, then this is enough:
Normal = (axis x PI) x PI
Normal = Normal / norm(Normal)
where axis is the vector aligned with the axis of the cone and PI = I - P.

How to translate and rotate coordinates?

I have two 3D points (x,y,z), namely A and B and a bunch of other 3D points. Point A is at (0,0,0).
I would like to set point B to (0,0,0) so that all other points including A and B are translated and rotated in a way that is appropriate (so that A is no longer at (0,0,0)).
I know that there are some translations and rotations involved, but nothing more than that.
UPDATE:
Point B is also constrained by three vectors: x', y', z' that represent x, y, and z axis of B's coordinate system. I think these should be somehow considered for the rotation part.
As you have given two points, one (A) at the origin and one (B) somewhere else, and you want to shift (translate) B to the origin, I don't see the necessity for any rotation.
If you don't have any other constraints, just shift all coordinates by subtracting the initial coordinates of B.
You can construct a transformation matrix as described at, e.g., https://en.wikipedia.org/wiki/Transformation_matrix#Affine_transformations for 2D, but if you simply translate, R' = R + T, where R' is the vector after the transformation, R the vector before, and T the translation vector.
For more general transformations including rotations, you have to specify the rotation angle and axis. Then, you can come up with more general transformation, see above link.
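Under the updated constraints (B's axes x', y', z' given as orthonormal world-space vectors), one common approach is to translate by -B and then rotate by the transpose of the matrix whose columns are B's axes. A sketch with hypothetical names, assuming the axes are orthonormal:

```python
import numpy as np

def to_frame_of_B(points, B, x_axis, y_axis, z_axis):
    """Express `points` (N x 3) in the coordinate frame anchored at B,
    whose axes are the orthonormal vectors x_axis, y_axis, z_axis.
    Translation first (shift B to the origin), then rotation:
    p' = R^T (p - B), where the columns of R are B's axes in world space."""
    R = np.column_stack([x_axis, y_axis, z_axis])   # world-from-B rotation
    pts = np.asarray(points, float) - np.asarray(B, float)
    return pts @ R                                  # row-vector form of R^T p
```

After this transform, B lands at the origin and A moves to -B expressed in B's axes. For example, with B = (1, 2, 3) and B's frame rotated 90 degrees about z (x' = (0,1,0), y' = (-1,0,0), z' = (0,0,1)), the world point B + x' = (1, 3, 3) maps to (1, 0, 0).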

Numerically finding the projected area of a bullet

Suppose I have a bullet as shown below where the measurements are in units of bullet diameters (this thing is 3 dimensional, so imagine rotating it about the x axis here)
If this bullet were to be tilted upwards by an angle θ, how could I numerically find its projected area?
I'm trying to find the area that such a bullet would present to the air as it moves through it and so if it is not tilted away from the direction of motion this area is simply a circle. I know for small tilts, it will simply present the projected area of a cylinder but I am unsure about how to deal with tilts large enough that one needs to care about the tip of the bullet for purposes of finding the area. Anyone have ideas about how to deal with this?
Hint:
The boundary curves of the bullet are the apparent outline of the inner surface of a self-intersecting torus. They can be found by expressing that the normal vector is parallel to the projection plane.
With z being the axis of the bullet, the parametric equation of the surface is
x = (R + r sin φ) cos Θ
y = (R + r sin φ) sin Θ
z = r cos φ
and the normal is obtained by setting R = 0:
x = r sin φ cos Θ
y = r sin φ sin Θ
z = r cos φ
Now for some projection plane with a normal in direction (cosα, 0, sinα), the outline is such that
r sinφ cosΘ cosα + r cosφ sinα = 0.
From this you can draw Θ as a function of φ or conversely and construct points along the curve.
When α increases, the tip of the bullet starts entering the ellipse resulting from the projection of the base of the cylinder. This ellipse corresponds to the angle φ such that z = 0.
The surface is known as a lemon shape: http://mathworld.wolfram.com/Lemon.html
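The outline condition above can be solved for cos Θ = -tan α / tan φ, which makes it easy to sample points along the boundary curve numerically. A sketch (function name and sampling scheme are my own; the area itself would then come from integrating or polygon-clipping these curves together with the projected base ellipse):

```python
import numpy as np

def outline_points(R, r, alpha, n=200):
    """Points on the apparent outline of the self-intersecting torus
    surface, viewed along the direction (cos(alpha), 0, sin(alpha)).
    Solves r*sin(phi)*cos(theta)*cos(alpha) + r*cos(phi)*sin(alpha) = 0
    for theta as a function of phi."""
    phi = np.linspace(1e-3, np.pi - 1e-3, n)
    c = -np.tan(alpha) / np.tan(phi)     # cos(theta) on the outline
    mask = np.abs(c) <= 1.0              # keep phi where a solution exists
    phi, c = phi[mask], c[mask]
    theta = np.arccos(c)
    x = (R + r * np.sin(phi)) * np.cos(theta)
    y = (R + r * np.sin(phi)) * np.sin(theta)
    z = r * np.cos(phi)
    return np.column_stack([x, y, z])
```

By construction every returned point lies on the torus surface, and its surface normal (the R = 0 vector above) is perpendicular to the viewing direction.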

Perspective Projection: Proving that 1/z is Linear?

In 3D rendering (or geometry for that matter), in the rasterization algorithm, when you project the vertices of a triangle onto the screen and then find if a pixel overlaps the 2D triangle, you often need to find the depth or the z-coordinate of the triangle that the pixel overlaps. Generally, the method consists of computing the barycentric coordinates of the pixel in the 2D "projected" image of the triangle, and then use these coordinates to interpolate the triangle original vertices z-coordinates (before the vertices got projected).
Now it's written in all textbooks that you can't interpolate the z-coordinates of the vertices directly; instead you need to do this:
(sorry can't get Latex to work?)
1/z = w0 * 1/v0.z + w1 * 1/v1.z + w2 * 1/v2.z
Where w0, w1, and w2 are the barycentric coordinates of the "pixel" on the triangle.
Now, what I am looking after, are two things:
what would be the formal proof to show that interpolating z doesn't work?
what would be the formal proof to show that 1/z does the right thing?
To show this is not homework ;-) and that I have done some work on my own, here is the explanation I found for question 2.
Basically a triangle can be defined by a plane equation. Thus you can write:
Ax + By + Cz = D.
Then you isolate z to get z = (D - Ax - By)/C
Then you divide this formula by z, as you would with a perspective divide, and if you expand and regroup you get:
1/z = C/D + (A/D)(x/z) + (B/D)(y/z).
Then, naming A' = A/D, B' = B/D, and C' = C/D, you get:
1/z = A'(x/z) + B'(y/z) + C'
It says that x/z and y/z are just the coordinates of the points on the triangle once projected on the screen, and that the right-hand side is an "affine" function, therefore 1/z is a linear function???
That doesn't seem like a demonstration to me. Or maybe it's the right idea, but I can't really say how you can tell, just by looking at the equation, that this is an affine function. If you multiply all the terms by z you just get:
A'x + B'y + C'z = 1.
Which is basically just our original equation (you only need to replace A', B', and C' with the proper terms).
Not sure what you are trying to ask here, but if you look at:
1/z = A'(x/z) + B'(y/z) + C'
and rewrite it as:
1/z = A'u + B'v + C'
where (u,v) are the screen coordinates of the triangle after perspective projection, you can see that the depth z of a point on the triangle is not linearly related to (u,v), but 1/depth is; that is what the textbooks are trying to teach you.
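This can also be checked numerically (helper names are mine): interpolating 1/z with screen-space barycentric weights recovers the true depth of a point on the triangle, while interpolating z directly does not:

```python
import numpy as np

def screen_barycentrics(q, p0, p1, p2):
    """Barycentric weights of 2D point q w.r.t. the projected triangle."""
    M = np.array([[p0[0], p1[0], p2[0]],
                  [p0[1], p1[1], p2[1]],
                  [1.0,   1.0,   1.0 ]])
    return np.linalg.solve(M, np.array([q[0], q[1], 1.0]))

def interpolated_depths(V, b):
    """V: three camera-space vertices; b: camera-space barycentrics of a
    point on the triangle. Returns (true z, z recovered by interpolating
    1/z in screen space, z from naive screen-space interpolation)."""
    V = np.asarray(V, float)
    P = b @ V                          # point on the triangle, camera space
    q = P[:2] / P[2]                   # its screen projection (u, v)
    proj = [v[:2] / v[2] for v in V]   # projected triangle vertices
    w = screen_barycentrics(q, *proj)
    z = V[:, 2]
    return P[2], 1.0 / (w @ (1.0 / z)), w @ z
```

For the triangle (-1,-1,2), (1,-1,4), (0,1,3) and camera-space weights (0.2, 0.3, 0.5), the true depth is 3.1; interpolating 1/z reproduces it exactly, whereas interpolating z gives a noticeably different value.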

Texturing a sphere in a Cg shader

So I need to map a texture to a sphere from within a pixel/fragment shader in Cg.
What I have as "input" in every pass are the Cartesian coordinates x, y, z for the point on the sphere where I want the texture to be sampled. I then transform those coordinates into Spherical coordinates and use the angles Phi and Theta as U and V coordinates, respectively, like this:
u = atan2(y, z)
v = acos(x/sqrt(x*x + y*y + z*z))
I know that this simple mapping will produce seams at the poles of the sphere but at the moment, my problem is that the texture repeats several times across the sphere. What I want and need is that the whole texture gets wrapped around the sphere exactly once.
I've fiddled with the shader and searched around for hours but I can't find a solution. I think I need to apply some sort of scaling somewhere but where? Or maybe I'm totally on the wrong track, I'm very new to Cg and shader programming in general... Thanks for any help!
Since the results of inverse trigonometric functions are angles, u will be in [-Pi, Pi] and v in [0, Pi]. So you just have to scale them appropriately: u /= 2*Pi and v /= Pi should do, assuming you have GL_REPEAT (or the D3D equivalent) as the texture-coordinate wrapping mode (which is what your description sounds like).
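A sketch of the scaled mapping in Python, using the same axis conventions as the question. The +0.5 shift on u is an optional addition of mine that moves u into [0, 1] for cases where the wrap mode is not set to repeat:

```python
import math

def sphere_uv(x, y, z):
    """UV for a point on the sphere, scaled so the texture wraps once.
    u comes from atan2(y, z) in [-pi, pi]; v from the angle against the
    x axis, acos(...) in [0, pi]."""
    u = math.atan2(y, z)                              # [-pi, pi]
    v = math.acos(x / math.sqrt(x*x + y*y + z*z))     # [0, pi]
    return u / (2.0 * math.pi) + 0.5, v / math.pi     # both in [0, 1]
```

For example, the point (0, 1, 0) maps to (0.75, 0.5) and (1, 0, 0) maps to (0.5, 0.0), so the whole [0, 1] x [0, 1] texture covers the sphere exactly once.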
