I am calculating the spherical coordinates of a 3D geometry. I used the formulas below to find the coordinates.
import numpy as np

r = np.sqrt(x*x + y*y + z*z)              # radial distance
theta = np.arctan2(y, x) * 180 / np.pi    # theta: angle in the xy-plane, converted to degrees
phi = np.arccos(z / r) * 180 / np.pi      # phi: angle from the z-axis, converted to degrees
I got a list of theta values like the one below.
theta = [-145.75, -164.54, -155.10, -124.70, 146.31, 109.80, 101.56, 77.20, 56.61, 40.76, 11.69, -24.15, -47.82, -72.65, -105.71, -131.19, -52.43, 30.96, 68.20, -145.75, -164.54, -155.10, -124.70, 146.31, 109.80, 101.56, 77.20, 56.61, 40.76, 11.69, -24.15, -47.82, -72.65, -105.71, -131.19, -52.43, 30.96, 68.20]
From these values I can see that they lie between -180 and +180 degrees. Some angles that are actually more than 180 degrees come out negative, for example -145.75.
Is there any need to convert the range and values of theta? If so, how can I do this?
I also have the list of phi angles. Is there any need to convert their range as well?
phi = [99.49, 101.02, 107.10, 121.00, 131.29, 109.67, 101.94, 101.89, 100.22, 103.93, 107.79, 106.64, 102.41, 105.82, 105.95, 102.59, 120.08, 129.17, 109.43, 80.51, 78.98, 72.90, 59.00, 48.71, 70.33, 78.06, 78.11, 79.78, 76.07, 72.21, 73.36, 77.59, 74.18, 74.05, 77.41, 59.92, 50.83, 70.57]
What do I want to do, and what is the motive behind this question?
I want to project the nodes (coordinates) of a geometry onto a spherical surface created around that geometry, so I am calculating the angles of each coordinate in the geometry.
On the spherical surface I have created equally spaced points, and I will calculate the angles for each of those points as well. My aim is to project the geometry nodes onto the spherical surface and then count the number of nodes that fall in each part of the surface. In order to compare the angles of the geometry with those of the points on the spherical surface, I want the polar angles in a consistent range.
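(If the comparison needs all angles in one range, a minimal sketch of how the arctan2 output could be wrapped from (-180, 180] into [0, 360) with NumPy; the sample values here are just a few of the theta values listed above.)

import numpy as np

# assumed sample of the theta values above, in degrees
theta = np.array([-145.75, -164.54, 146.31, 11.69, -24.15])

# wrap into [0, 360): negative angles gain 360 degrees
theta_wrapped = np.mod(theta, 360.0)

print(theta_wrapped)  # e.g. -145.75 becomes 214.25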
I have a quadrotor which flies around and knows its x, y, z position and its angular displacement about the x, y, z axes. It captures a constant stream of images which are converted into depth maps (we can estimate the distance between each pixel and the camera).
How can one program an algorithm which converts this information into a 3D model of the environment? That is, how can we generate a virtual 3D map from this information?
Example: below is a picture that illustrates what the quadrotor captures (top) and what the image is converted into to feed into a 3D mapping algorithm (bottom).
Let's suppose this image was taken from a camera with x, y, z coordinates (10, 5, 1) in some units and angular displacement of 90, 0, 0 degrees about the x, y, z axes. What I want to do is take a bunch of these photo-coordinate tuples and convert them into a single 3D map of the area.
Edit 1 on 7/30: One obvious solution is to use the quadrotor's angles with respect to the x, y, and z axes together with the distance map to figure out the Cartesian coordinates of any obstructions with trigonometry. I figure I could probably write an algorithm which uses this approach with a probabilistic method to build a crude 3D map, possibly vectorizing it to make it faster.
However, I would like to know if there is any fundamentally different and hopefully faster approach to solving this?
Simply convert your data to Cartesian coordinates and store the result. Since you know the topology (the spatial relation between data points) of the input data, you can map directly to a mesh/surface instead of to a point cloud (which would require triangulation or a convex hull, etc.).
Your images suggest you have known topology (neighboring pixels are also neighbors in 3D), so you can construct the 3D mesh surface directly:
Align both RGB and depth 2D maps.
In case this is not already done, see:
Align already captured rgb and depth images
Convert to a Cartesian coordinate system.
First we compute the position of each pixel in camera local space:
For each pixel (x,y) in the RGB map we look up the depth (distance to the camera focal point) and compute the 3D position relative to the camera focal point. For that we can use triangle similarity:
x = camera_focus.x + (pixel.x-camera_focus.x)*depth(pixel.x,pixel.y)/focal_length
y = camera_focus.y + (pixel.y-camera_focus.y)*depth(pixel.x,pixel.y)/focal_length
z = camera_focus.z + depth(pixel.x,pixel.y)
where pixel is the pixel's 2D position, depth(x,y) is the corresponding depth, and focal_length=znear is the fixed camera parameter (determining the FOV). camera_focus is the camera focal point position. Usually the camera focal point is in the middle of the camera image and znear distant from the image (projection plane).
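As an illustration of this step, here is a minimal NumPy sketch that converts a depth map into camera-space points (relative to the focal point) using the triangle-similarity formulas above; the image size, focal length, and principal point used here are made-up placeholder values:

import numpy as np

def depth_to_camera_points(depth, focal_length, cx, cy):
    # depth        - (H, W) array of distances along the optical axis (placeholder data)
    # focal_length - distance from the focal point to the projection plane (znear)
    # cx, cy       - pixel coordinates of the optical axis (image centre)
    h, w = depth.shape
    px, py = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates

    x = (px - cx) * depth / focal_length               # triangle similarity in x
    y = (py - cy) * depth / focal_length               # triangle similarity in y
    z = depth                                          # depth along the optical axis
    return np.dstack((x, y, z))                        # (H, W, 3) camera-space points

# hypothetical 480x640 depth map with every pixel 2 units away
points = depth_to_camera_points(np.full((480, 640), 2.0), focal_length=500.0, cx=320.0, cy=240.0)
print(points.shape)   # (480, 640, 3)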
As this is taken from a moving device, you need to convert this into some global coordinate system (using your camera position and orientation in space). For that the best reference is:
Understanding 4x4 homogenous transform matrices
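To give an idea of that step, here is a small sketch that builds a 4x4 camera-to-world matrix from an assumed yaw-pitch-roll orientation plus a position and applies it to points; the rotation order used here is just one plausible convention, not necessarily the one your platform reports:

import numpy as np

def camera_to_world_matrix(position, yaw, pitch, roll):
    # position         - (x, y, z) of the camera in world coordinates
    # yaw, pitch, roll - rotations (radians) about the z, y and x axes (assumed order)
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)

    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])

    m = np.eye(4)
    m[:3, :3] = rz @ ry @ rx          # combined rotation
    m[:3, 3] = position               # translation
    return m

def transform_points(matrix, points):
    # apply a 4x4 homogeneous transform to an (N, 3) array of points
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ matrix.T)[:, :3]

# example: camera at (10, 5, 1), rotated 90 degrees about the x axis
m = camera_to_world_matrix((10.0, 5.0, 1.0), yaw=0.0, pitch=0.0, roll=np.radians(90))
print(transform_points(m, np.array([[0.0, 0.0, 2.0]])))   # a point 2 units in front of the camera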
Construct the mesh.
As your input data are already spatially sorted, we can construct a QUAD grid directly. Simply take the neighbors of each pixel and form QUADs. So if the 2D position (x,y) in your data is converted into 3D (x,y,z) with the approach described in the previous bullet, we can write it as a function that returns the 3D position:
(x,y,z) = 3D(x,y)
Then we can form QUADs like this:
QUAD( 3D(x,y),3D(x+1,y),3D(x+1,y+1),3D(x,y+1) )
We can use for loops:
for (x=0;x<xs-1;x++)
for (y=0;y<ys-1;y++)
QUAD( 3D(x,y),3D(x+1,y),3D(x+1,y+1),3D(x,y+1) )
where xs,ys are the resolution of your maps.
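A hedged Python sketch of that loop, assuming `points` is a (ys, xs, 3) array of converted 3D positions from the earlier step; each quad is stored as four indices into the flattened vertex list:

import numpy as np

def build_quad_grid(points):
    # points: (ys, xs, 3) grid of 3D positions
    # returns (vertices, quads): vertices is (ys*xs, 3), each quad is four vertex indices
    ys, xs, _ = points.shape
    vertices = points.reshape(-1, 3)

    quads = []
    for y in range(ys - 1):
        for x in range(xs - 1):
            i = y * xs + x
            # (x,y), (x+1,y), (x+1,y+1), (x,y+1) as in the pseudocode above
            quads.append((i, i + 1, i + xs + 1, i + xs))
    return vertices, np.array(quads)

# tiny 3x3 example grid of dummy points
verts, faces = build_quad_grid(np.zeros((3, 3, 3)))
print(faces.shape)   # (4, 4): four quads, four vertex indices each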
In case you do not know the camera properties, you can set focal_length to any reasonable constant (resulting in fisheye effects and/or scaled output) or infer it from the input data, as in:
Transformation of 3D objects related to vanishing points and horizon line
I'm working on a 3D mapping application, and I need to do things like figure out the visible region of a sphere (the Earth) from a given point in space, for purposes such as clipping mapped regions.
Several things get easier if I can project the outline of Earth into screen space, clip polygons there, and then project back to the surface of the Earth (lat/lon), but I'm lost as to how to do that.
Is there a reasonable way to compute the outline of a sphere after perspective projection, and then a reasonable way to project things back onto the sphere?
You can clip the polygons in 3D. The silhouette of the sphere - back-projected into 3D - will always be a circle on a plane. Perspective projection does not change that. Thus, you can clip all polygons at the plane.
Calculating the plane is not too hard. If you consider the sphere's center the origin, then the plane could be represented in normal form as:
dot(n, x) = d
n is the normal. This one is easy. It is just the unit direction vector from the sphere center to the observer.
d is the distance from the sphere center. This is a bit harder but not too hard. If l is the distance of the observer to the sphere center and r is the sphere radius, then
d = r^2 / l
This is the plane which you can use to clip your polygons in 3D. If you need the radius of the silhouette circle on that plane (the circle in which the plane cuts the sphere), you can use:
r_c = sqrt(r^2 - d^2) = r * sqrt(1 - r^2/l^2)
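A small sketch of these formulas in Python, assuming the sphere is centred at the origin with radius r and the observer sits at an arbitrary position (any consistent units):

import numpy as np

def sphere_silhouette_plane(observer, r):
    # sphere assumed centred at the origin with radius r
    # returns (n, d, r_c): plane normal, plane offset (dot(n, x) = d) and circle radius
    observer = np.asarray(observer, dtype=float)
    l = np.linalg.norm(observer)        # distance from observer to sphere centre
    n = observer / l                    # unit direction towards the observer
    d = r * r / l                       # distance of the silhouette plane from the centre
    r_c = np.sqrt(r * r - d * d)        # radius of the silhouette circle on that plane
    return n, d, r_c

# e.g. an Earth-sized sphere (radius ~6371 km) seen from ~400 km altitude
n, d, r_c = sphere_silhouette_plane((0.0, 0.0, 6771.0), 6371.0)
print(d, r_c)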
Let us take a point on a unit sphere in spherical coordinates (cos(u)sin(v), sin(u)sin(v), cos(v)) and an arbitrary projection center (x,y,z).
We express that a projecting line is tangent to the sphere by the perpendicularity condition of the direction of the line and the vector from the origin of the sphere:
(x-cos(u)sin(v))cos(u)sin(v) + (y-sin(u)sin(v))sin(u)sin(v) + (z-cos(v))cos(v) = 0
This simplifies to
x cos(u)sin(v) + y sin(u)sin(v) + z cos(v) = 1
which is a curve in the longitude/latitude coordinates (u, v). You can solve for u as a function of v or conversely.
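As an illustration, a small NumPy sketch that solves this equation for v as a function of u by writing A sin(v) + z cos(v) = 1 with A = x cos(u) + y sin(u) as a single sinusoid; the projection centre used is an arbitrary example:

import numpy as np

def silhouette_v(u, center):
    # solve x*cos(u)*sin(v) + y*sin(u)*sin(v) + z*cos(v) = 1 for v (unit sphere)
    # center - (x, y, z), the projection centre; returns one branch of v per u,
    # NaN where no tangent point exists for that u
    x, y, z = center
    a = x * np.cos(u) + y * np.sin(u)          # coefficient of sin(v)
    r = np.hypot(a, z)                          # amplitude of a*sin(v) + z*cos(v)
    phase = np.arctan2(z, a)
    with np.errstate(invalid="ignore"):
        return np.arcsin(1.0 / r) - phase       # so that a*sin(v) + z*cos(v) = 1

u = np.linspace(0.0, 2.0 * np.pi, 360)
v = silhouette_v(u, center=(3.0, 1.0, 2.0))     # made-up projection centre
print(np.nanmin(v), np.nanmax(v))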
I'm reading Shaders for Game Programming and Artists. In Chapter 13, "Building Materials from Scratch", the author introduces some rendering techniques to simulate complex materials such as marble or wood using Perlin noise. But I'm puzzled by the wood rendering.
To simulate wood, we need a function that gives a circular value along a specific plane so that we can create the rings in the wood. This is what the author says: "take the dot product of two axes along a plane, creating the circular value on that plane".
Circle = dot(noisetxr.xy, noisetxr.xy);
noisetxr is a float3; it's a texture coordinate used to sample the noise texture. I can't understand why the dot product gives a circular value.
Here is the complete code (pixel shader in HLSL):
float persistance;
float4 wood_color; //a predefined value
sampler Texture0; // noise texture
float4 ps_main(float3 txr: TEXCOORD0) : COLOR
{
// Determine two set of coordinates, one for the noise
// and one for the wood rings
float3 noisetxr = txr;
txr = txr/8;
// Combine 3 octaves of noise together.
float final_noise = 0;
for(int i=0;i<2;i++)
final_noise += ((1.0/pow(persistance,i))*
((tex3D(Texture0, txr*pow(2,i))*2)-1));
// The wood is defined by a set of concentric rings in the XY
// plane. Those rings are perturbed by the computed noise.
final_noise = abs(final_noise);
float grain = cos(dot(noisetxr.xy,noisetxr.xy) + final_noise*4);//what is this ??
return wood_color - pow(grain,8)/2; //raising the cosine to higher power
}
I know that raising the cosine function to a higher power creates sharper rings, but what does the dot product mean? Why does it create a circular value?
A dot product of a vector with itself is simply the squared length of the vector. So for each point in the xy-plane, dot(noisetxr.xy, noisetxr.xy) returns the squared distance of that point to the origin. You then apply a cosine function to this distance, so all points on the plane that have the same distance to the origin get the same output value, i.e. a circle of equal values around the origin.
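A quick NumPy sketch that illustrates this (with the noise term omitted): evaluating the cosine of the squared distance over a grid produces exactly those concentric rings; the grid size and scaling here are arbitrary:

import numpy as np

# arbitrary grid over the xy-plane
x, y = np.meshgrid(np.linspace(-4, 4, 256), np.linspace(-4, 4, 256))

# dot(p, p) for p = (x, y): the squared distance to the origin
squared_distance = x * x + y * y

# same cosine as in the shader (noise omitted): equal values on circles around the origin
grain = np.cos(squared_distance)

# sharpen the rings as in the shader by raising to a higher (even) power
rings = np.power(grain, 8)
print(rings.shape)   # (256, 256) image of concentric rings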
I have gone through as many of the study resources available on the internet as I could, which are in the form of simple equations, vectors, or trigonometric equations.
I couldn't find a way of doing the following:
Assuming Y is up in a 3D world.
I need to draw two 2D trajectories orthogonally (not as projections) for a 3D trajectory: say, an XY-plane view for the side view of the trajectory with respect to the trajectory itself, and an XZ-plane view for the top view of the same.
I have all the 3D points of the trajectory and the initial velocity; both angles can be calculated with vector mathematics.
How should I proceed further?
For reference, below is a curve shown at different angles, which can lose its significance if simply projected onto the XY-plane. All I want is to convert the red curve along itself, the green curve along the green curve, and so on. Further, how would I map the side view to a plane? The top view is comparatively easy and is done just by taking the X and Z ordinates of each point.
That is the requirement. :)
I don't think I understand the question, but I'll answer my interpretation anyway.
You have a 3D trajectory described by a sequence of points p0, ..., pN. We are given an angle v for a plane P parallel to the Y-axis, and wish to compute the 2D coordinates (di, hi) of the points pi projected onto that plane, where hi is the height coordinate in the direction Y and di is the distance coordinate in the direction v. Assume p0 = (0, 0, 0) or else subtract p0 from all vectors.
Let pi = (xi, yi, zi). The height coordinate is hi = yi. Assume the angle v is given relative to the Z-axis. The direction vector for v is then r = (sin(v), 0, cos(v)), and the distance coordinate becomes di = dot(pi, r).
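A brief NumPy sketch of that projection, with a made-up example trajectory and plane angle v (measured from the Z-axis, Y up as in the question):

import numpy as np

def project_to_vertical_plane(points, v):
    # points - (N, 3) array of (x, y, z) trajectory points
    # v      - plane direction angle in radians, measured from the Z-axis
    # returns (N, 2) array of (d, h): distance along the plane direction and height (y)
    points = np.asarray(points, dtype=float)
    points = points - points[0]                   # make p0 the origin
    r = np.array([np.sin(v), 0.0, np.cos(v)])     # in-plane horizontal direction
    d = points @ r                                # distance coordinate d_i = dot(p_i, r)
    h = points[:, 1]                              # height coordinate h_i = y_i
    return np.column_stack((d, h))

# made-up trajectory points
trajectory = np.array([[0.0, 0.0, 0.0], [1.0, 1.5, 2.0], [2.0, 2.5, 4.5]])
print(project_to_vertical_plane(trajectory, v=np.radians(30)))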