How to calculate the mass of an inhomogeneous sphere?

I want to calculate the mass of a sphere based on a three-dimensional, discrete, inhomogeneous density distribution. Let's say a set of 3x3x3 cubes of different densities is inscribed by a sphere. What is the fastest way to sum up the partitioned masses using Python?
I tried to calculate the volume under the mathematical equation for a sphere, x^2 + y^2 + z^2 = R^2, over the range of one of the cubes using scipy.integrate.dblquad.
However, the result is only valid if the boundaries are smaller than the radius of the sphere, and repeating the calculation for, say, 50,000 spheres with 27 cubes each would be quite slow.
On the other hand, the usual equation for CoM calculations couldn't be used, in my opinion, due to the rather coarse and discrete mass distribution.

Timing Experiment
You didn't specify your timing constraints, so I've done a little experiment with a nice integration package.
Without optimization, each integral in spherical coordinates can be evaluated in 0.005 secs on a standard laptop if the cube densities are straightforward functions.
Just as a reference, this is the program in Mathematica:
Clear[f];
(* Define a cuboid as density function *)
iP = IntegerPart;
f[{x_, y_, z_}, {lx_, ly_, lz_}] := iP[x - lx] + iP[y - ly] + iP[z - lz] /;
   lx <= x <= lx + 3 && ly <= y <= ly + 3 && lz <= z <= lz + 3;
f[{x_, y_, z_}, {lx_, ly_, lz_}] := Break[] /; True;
Timing[Table[s = RandomReal[{0, 3}, 3]; (* sphere center random *)
  sphereRadius = Min[Union[s, 3 - s]]; (* max radius inside cuboid *)
  NIntegrate[(f[{x, y, z} - s, -s] /. (* integrate in spherical coords *)
      {x -> r Cos[th] Sin[phi],
       y -> r Sin[th] Sin[phi],
       z -> r Cos[phi]}) r^2 Sin[phi],
   {r, 0, sphereRadius}, {th, 0, 2 Pi}, {phi, 0, Pi}],
  {10000}]][[1]]
The result is 52 secs for 10^4 iterations.
So perhaps you don't need to optimize a lot ...

I cannot get your exact meaning of "inscribed by a sphere". Also, I haven't tried scipy.integrate. However, here are some thoughts:
Set the 3x3x3 cubes to unit density. Then do the integration for each cube respectively, giving you the intersection volume V_ijk of each cube with the sphere. Now, for each sphere, you can get its mass by summing V_ijk * D_ijk, where D_ijk is the density of cube (i, j, k).
It should be much faster because you do not need to repeat the integration for every sphere.
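A minimal NumPy sketch of this idea; the grid layout ([0, 3]^3 with an inscribed sphere of radius 1.5), the Monte Carlo estimate of V_ijk, and all names are illustrative assumptions, not from the question:

import numpy as np

# Estimate V[i,j,k], the unit-density intersection volume of cube (i,j,k)
# with the inscribed sphere, by Monte Carlo sampling (done only once).
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 3.0, size=(200_000, 3))
inside = np.linalg.norm(pts - 1.5, axis=1) <= 1.5
idx = np.floor(pts[inside]).astype(int)
V = np.zeros((3, 3, 3))
np.add.at(V, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
V *= 27.0 / len(pts)  # convert hit counts to volumes

# With per-cube densities D for many spheres, each mass is now a single
# tensor contraction; no further integration is needed.
D = rng.random((50_000, 3, 3, 3))       # hypothetical density data
masses = np.einsum('ijk,nijk->n', V, D)

Because the V_ijk are computed once, the per-sphere cost is one 27-element weighted sum instead of 27 numerical integrals.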

You can obtain an analytic formula for the intersecting volume between a cube (or rectangular prism) and a sphere. It won't be easy, but it should be possible. I have done it for an arbitrary triangle and circle in 2D. The basic idea is to decompose the intersection into simpler pieces, like tetrahedra and volumetric spherical triangle sectors, for which relatively simple volume formulas are known. The main difficulty is in considering all the possible cases of intersection. Luckily both objects are convex, so you are guaranteed a single convex intersection volume.
An approximate method might be simply to subdivide the cubes until your approximate numerical integration algorithm does work; this should still be relatively fast (see the sketch below). Do you know about Pick's Theorem? That only works in 2D, but there are, I believe, 3D generalizations.
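A minimal sketch of that subdivision idea, classifying subcell centers against the sphere; the function name, arguments, and the resolution parameter n are illustrative assumptions:

import numpy as np

def overlap_volume(cube_min, cube_size, center, radius, n=8):
    # Subcell midpoints: n samples per axis across the cube.
    cube_min = np.asarray(cube_min, dtype=float)
    t = cube_min[:, None] + cube_size * (np.arange(n) + 0.5) / n
    gx, gy, gz = np.meshgrid(t[0], t[1], t[2], indexing='ij')
    pts = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    # Each center inside the sphere stands for one whole subcell.
    inside = np.linalg.norm(pts - np.asarray(center), axis=1) <= radius
    return cube_size ** 3 * inside.mean()

Increasing n improves the approximation, at a cost cubic in n.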

Related

Determine angle of triangle

Given the following problem: find the signed angle between two 2D vectors P0 and P1.
I have 2 solutions:
The first is to calculate the difference of the absolute angles, then renormalize the angle. Bad idea: 2 x atan2() is slow, and the renormalization is inefficient.
angle = clamp_to_range( atan2(P1.y, P1.x) - atan2(P0.y, P0.x));
The second is to calculate the dot product, normalize, and take arccos(). Also a bad idea, because the sign of the angle will be incorrect.
angle = acos( dot(P0, P1) / sqrt( dot(P0,P0) * dot(P1, P1) ) );
I feel that there should be some better formula. How can I solve the given problem efficiently?
It is possible to use only one atan2 call, using both the cross product and the scalar product of the vectors:
angle = atan2(Cross(P0, P1), Dot(P0, P1));
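For concreteness, a small Python version of this one-call formula, assuming 2D points given as (x, y) tuples:

import math

def signed_angle(p0, p1):
    cross = p0[0] * p1[1] - p0[1] * p1[0]  # z-component of the 3D cross product
    dot = p0[0] * p1[0] + p0[1] * p1[1]
    return math.atan2(cross, dot)          # signed angle in (-pi, pi]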
Do you really need the angle in radians / degrees, instead of as a unit vector or rotation matrix?
An xy unit vector can represent an angle instead of an absolute direction; the angle is then the angle between the vertical (or horizontal) axis and the unit vector. Trig functions are very slow compared to simple multiply / add / subtract operations, and still slow compared to div / sqrt, so representing angles as vectors is usually a good thing.
You can calculate its components using Cross(P0, P1) and Dot(P0, P1), but then normalize them into an xy unit vector instead of calling atan2 on them, as sketched below.
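A minimal sketch of that vector representation, again assuming (x, y) tuples; in bulk this is where the SIMD normalization discussed below pays off:

import math

def rotation_as_unit_vector(p0, p1):
    cross = p0[0] * p1[1] - p0[1] * p1[0]    # |p0||p1| * sin(angle)
    dot = p0[0] * p1[0] + p0[1] * p1[1]      # |p0||p1| * cos(angle)
    inv_len = 1.0 / math.hypot(dot, cross)   # normalize instead of atan2
    return (dot * inv_len, cross * inv_len)  # (cos(angle), sin(angle))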
See also Rotate Object Towards Direction in 2D on gamedev.SE, and Is it better to track rotation with a vector or a float?
This is easy to vectorize with SIMD, much more so than a SIMD atan2. rsqrtps exists mostly to speed up x *= 1.0 / sqrt(foo) (and reusing the same multiplier for a SIMD vector of y values) for normalization. But rsqrtps is very low accuracy, so you often need a Newton-Raphson iteration to refine. The most recent CPUs (Skylake) have good FP sqrt / div throughput, so you could just normalize the naive way with _mm_sqrt_ps and leave optimization for later. See Fast vectorized rsqrt and reciprocal with SSE/AVX depending on precision.

What is the fastest way to find the center of an irregular convex polygon?

I'm interested in a fast way to calculate the rotation-independent center of a simple, convex (non-intersecting) 2D polygon.
The example below (on the left) shows the mean center (sum of all points divided by their count), and the desired result on the right.
Some options I've already considered:
Bounding-box center (depends on rotation, and ignores points based on their relation to the axis).
Straight skeleton (too slow to calculate).
I've found a way which works reasonably well (weight the points by their edge lengths), but this means a square-root call for every edge, which I'd like to avoid. (I'll post it as an answer, even though I'm not entirely satisfied with it.)
Note, I'm aware of this question's similarity with: What is the fastest way to find the "visual" center of an irregularly shaped polygon?
However, having to handle concave polygons increases the complexity of that problem significantly.
The points of the polygon can be weighted by their edge lengths, which compensates for an uneven point distribution.
This works for concave polygons too, but in that case the center point isn't guaranteed to be inside the polygon.
Pseudo-code:
def poly_center(poly):
    sum_center = (0, 0)
    sum_weight = 0.0
    for point in poly:
        weight = ((point - point.next).length +
                  (point - point.prev).length)
        sum_center += point * weight
        sum_weight += weight
    return sum_center / sum_weight
Note, we can pre-calculate all edge lengths to halve the number of length calculations (or reuse the previous edge length, needing only half + 1 of the calculations); this is just written as an example to show the logic.
Including this answer for completeness, since it's the best method I've found so far.
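A runnable NumPy version of the pseudo-code above, computing each edge length once (N square roots in total); it assumes poly is an (N, 2) array of vertices in order:

import numpy as np

def poly_center(poly):
    edges = np.roll(poly, -1, axis=0) - poly  # edge i runs from vertex i to i+1
    lengths = np.linalg.norm(edges, axis=1)   # one sqrt per edge
    weights = lengths + np.roll(lengths, 1)   # |incoming edge| + |outgoing edge|
    return (poly * weights[:, None]).sum(axis=0) / weights.sum()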
There is not much better to do than accumulating the coordinates weighted by edge length, which indeed takes N square roots.
If you accept an approximation, it is possible to skip some of the vertices by curve simplification, as follows (a code sketch is given after this list):
decide on a deviation tolerance;
start from vertex 0 and jump to vertex M (say M=N/2);
check if the deviation along the polyline from 0 to M exceeds the tolerance (for this, compute the height of the triangle formed by the vertices 0, M/2, M);
if the deviation is exceeded, repeat recursively with 0, M/4, M/2 and M/2, 3M/4, M;
if the deviation is not exceeded, assume that the shape is straight between 0 and M.
continue until the end of the polygon.
Where the points are dense (like the left edge on your example), you should get some speedup.
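A minimal recursive sketch of that deviation test, in the spirit of Douglas-Peucker; it assumes poly is an (N, 2) NumPy array, tol is the chosen tolerance, and it collects the indices of the vertices that must be kept:

import numpy as np

def straight_runs(poly, lo, hi, tol, keep):
    if hi - lo < 2:
        keep.update((lo, hi))
        return
    mid = (lo + hi) // 2
    a, b, m = poly[lo], poly[hi], poly[mid]
    chord = b - a
    # Height of triangle (lo, mid, hi) above the chord lo-hi.
    cross = chord[0] * (m[1] - a[1]) - chord[1] * (m[0] - a[0])
    height = abs(cross) / (np.hypot(chord[0], chord[1]) + 1e-12)
    if height > tol:
        straight_runs(poly, lo, mid, tol, keep)  # deviation exceeded: split
        straight_runs(poly, mid, hi, tol, keep)
    else:
        keep.update((lo, hi))  # treat the run lo..hi as straight

Typical use: keep = set(); straight_runs(poly, 0, len(poly) - 1, tol, keep); then apply the edge-length weighting to sorted(keep) only.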
I think it's easiest to do something with the centers of mass of the Delaunay triangulation of the polygon points, i.e.
from scipy import spatial
import numpy as np

def _centroid_poly(poly):
    # poly: (N, 2) array of polygon points.
    T = spatial.Delaunay(poly).simplices
    n = T.shape[0]
    W = np.zeros(n)
    C = np.zeros(poly.shape[1])
    for m in range(n):
        sp = poly[T[m, :], :]
        W[m] = spatial.ConvexHull(sp).volume  # area of triangle m (2D "volume")
        C += W[m] * np.mean(sp, axis=0)       # area-weighted triangle centroid
    return C / np.sum(W)
This works well for me!

Finding the value of a point between measured points on a 2D plane

I'm trying to find the best way to calculate this. On a 2D plane I have fixed points, all with an instantaneous measurement value. The coordinates of these points are known. I want to predict the value of a movable point between these fixed points. The movable point's coordinates will be known, so the distance between the points is known as well.
This could be comparable to temperature readings or elevation in topography. In this case I want to predict the ionospheric TEC of the mobile point from the fixed-point measurements. The fixed-point measurements are smoothed over time; however, I do not want to have to store previous values of the mobile-point estimate in RAM.
Would some sort of gradient function be the way to go here?
This is the same algorithm as interpolating the height of a point inside a triangle.
In your case you don't have z values for heights but some other float value at each triangle vertex; it's the same concept, though: treat the values as the z of 3D points.
Where you have 3D triangle points p, q, r and a test point pt, the pseudo-code is something like this:
Vector3 v1 = q - p;
Vector3 v2 = r - p;
Vector3 n = v1.CrossProduct(v2);
if (n.z != 0)
    return ((n.x * (pt.x - p.x) + n.y * (pt.y - p.y)) / -n.z) + p.z;
As you indicate in your comment to @Phpdevpad, you do have 3 fixed points, so this will work.
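A small Python version of the pseudo-code above, for illustration; it assumes each of p, q, r is a NumPy array (x, y, value), with the measured value standing in for height:

import numpy as np

def interpolate_in_triangle(p, q, r, px, py):
    # Normal of the plane through the three (x, y, value) points.
    n = np.cross(q - p, r - p)
    if n[2] == 0:
        raise ValueError("triangle is degenerate in the xy plane")
    # Intersect the vertical line at (px, py) with that plane.
    return (n[0] * (px - p[0]) + n[1] * (py - p[1])) / -n[2] + p[2]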
You can try contour plots, especially contour lines. Simply use a Delaunay triangulation of the points and linear interpolation along the edges. You can try my PHP implementation, https://contourplot.codeplex.com, for geographic maps. Another algorithm is the CONREC algorithm from Paul Bourke.
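For reference, this linear-over-Delaunay interpolation is also available off the shelf in SciPy; the coordinates and readings below are made-up example values:

import numpy as np
from scipy.interpolate import LinearNDInterpolator

fixed_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
readings = np.array([4.2, 5.0, 3.8, 4.6])   # hypothetical smoothed TEC values
predict = LinearNDInterpolator(fixed_xy, readings)
print(predict(3.0, 4.0))                    # estimate at the mobile point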

Texturing a sphere in a Cg shader

So I need to map a texture to a sphere from within a pixel/fragment shader in Cg.
What I have as "input" in every pass are the Cartesian coordinates x, y, z of the point on the sphere where I want the texture to be sampled. I then transform those coordinates into spherical coordinates and use the angles Phi and Theta as the U and V coordinates, respectively, like this:
u = atan2(y, z)
v = acos(x/sqrt(x*x + y*y + z*z))
I know that this simple mapping will produce seams at the poles of the sphere but at the moment, my problem is that the texture repeats several times across the sphere. What I want and need is that the whole texture gets wrapped around the sphere exactly once.
I've fiddled with the shader and searched around for hours but I can't find a solution. I think I need to apply some sort of scaling somewhere but where? Or maybe I'm totally on the wrong track, I'm very new to Cg and shader programming in general... Thanks for any help!
Since the results of inverse trigonometric functions are angles, they will be in [-Pi, Pi] for u and in [0, Pi] for v. So you just have to scale them appropriately: u /= 2*Pi and v /= Pi should do, assuming you have GL_REPEAT (or the D3D equivalent) as the texture-coordinate wrapping mode (which your description sounds like).
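A quick CPU-side sanity check of that scaling, written in Python for illustration (the Cg version is just the same two divisions); the +0.5 shift only moves the seam and is optional under GL_REPEAT:

import math

def sphere_uv(x, y, z):
    u = math.atan2(y, z) / (2 * math.pi) + 0.5               # [-pi, pi] -> [0, 1]
    v = math.acos(x / math.sqrt(x*x + y*y + z*z)) / math.pi  # [0, pi]  -> [0, 1]
    return u, v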

Rotating 3D cube perspective problem

Since I was 13 and playing around with AMOS 3D I've been wanting to learn how to code 3D graphics. Now, 10 years later, I finally think I have accumulated enough maths to give it a go.
I have followed various tutorials, and defined screenX (and screenY, equivalently) as
screenX = (pointX * cameraX) / distance
(Plus offsets and scaling.)
My problem is with what the distance variable actually refers to. I have seen distance defined as the difference in z between the camera and the point. However, that cannot be completely right, since x and y affect the actual distance from the camera to the point just as z does. I implemented distance as the actual Euclidean distance, but the result gives a somewhat skewed perspective, as if it had "too much" perspective.
My "actual distance" implementation was along the lines of:
distance = new Vector(pointX, pointY, cameraZ - pointZ).magnitude()
Playing around with the code, I added an extra variable to my equation, a perspectiveCoefficient as follows:
distance = new Vector(pointX * perspectiveCoefficient,
                      pointY * perspectiveCoefficient,
                      cameraZ - pointZ).magnitude()
For some reason that is beyond me, I tend to get the best result setting the perspectiveCoefficient to 1/sqrt(2).
My 3D test cube is at http://vega.soi.city.ac.uk/~abdv866/3dcubetest/3dtest.svg. (Tested in Safari and FF.) It prompts you for a perspectiveCoefficient, where 0 gives a perspective without taking x/y distance into consideration, and 1 gives you a perspective where x, y and z distance is equally considered. It defaults to 1/sqrt(2). The cube can be rotated about x and y using the arrow keys. (For anyone interested, the relevant code is in update() in the View.js file.)
Grateful for any ideas on this.
Usually, projection is done onto the Z=0 plane from an eye position behind this plane. The projected point is the intersection of the line (Pt, Eye) with the Z=0 plane. In the end you get something like:
screenX = scaling * pointX / (1 + pointZ/eyeDist)
screenY = scaling * pointY / (1 + pointZ/eyeDist)
I assume here the camera is at (0,0,0) and eye at (0,0,-eyeDist). If eyeDist becomes infinite, you obtain a parallel projection.
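A minimal sketch of that projection, assuming the screen plane is z = 0 and the eye sits at (0, 0, -eyeDist), mirroring the formulas above:

def project(point_x, point_y, point_z, eye_dist, scaling=1.0):
    # Perspective divide: points farther behind the screen shrink toward 0.
    w = 1.0 + point_z / eye_dist
    return scaling * point_x / w, scaling * point_y / w

As eye_dist grows, w tends to 1 and the result approaches a parallel projection, matching the remark above.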
