How can I compute a surface integral from data? - python-3.x

Suppose I have an N×3 array, where N is very large (the number of points describing a surface) and 3 stands for the three dimensions of space. The data describes a smooth, well-behaved surface with no discontinuities or singularities. How can I compute the area of this surface from the data?
(Keep in mind that although the data provides a high-resolution surface, the spacing between consecutive points is not regular.)

That is not possible from the points alone, because a set of points doesn't uniquely describe a surface; in 2D, for example, many different curves can pass through the same points.
You will need the surface cells, i.e. the mesh connectivity.
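If you can build that connectivity (for example with a surface-reconstruction or meshing tool, or scipy.spatial.ConvexHull when the surface happens to be convex), the area is just the sum of the areas of the triangles. A minimal sketch, assuming hypothetical arrays vertices (N×3 floats) and faces (M×3 vertex indices):

import numpy as np

def mesh_area(vertices, faces):
    """Total area of a triangle mesh.

    vertices: (N, 3) float array of point coordinates.
    faces:    (M, 3) int array; each row indexes one triangle.
    """
    tri = vertices[faces]                  # (M, 3, 3) triangle corners
    a = tri[:, 1] - tri[:, 0]              # first edge of each triangle
    b = tri[:, 2] - tri[:, 0]              # second edge of each triangle
    # A triangle's area is half the norm of the cross product of two edges.
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()

Irregular point spacing is not a problem here; it only affects how the mesh was built, not the area sum.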

Related

How to approximate a low-res 3D density map with smooth models?

3D density maps can of course be plotted as heatmaps, but suppose the data itself is homogeneous (near 0) except for a small part (take a 2D cross section, for example):
This should give a letter 'E' shape as the 2D "model". The original data is not saved as a point cloud, however.
A naive approach would be to take the pixels above a certain threshold and then smooth the border. However, this does not take into account that border pixels carry only partial signal.
Another would be to use one of the point-cloud-based algorithms that come with modeling software, but then the point cloud's probability function would still be discontinuous at pixel borders, and would not take into account that only one side has signal.
Is there any tested solution to this (the example is 2D; the actual case is many 2D slices that compose a low-res 3D density map)? I was thinking of making border pixels have an area proportional to their signal, with the border defined from the gradient. Any suggestions?
I was thinking of model visualization results similar to this (which seems to be based on an established point-cloud algorithm).
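One tested route (a suggestion, not something from the post) is an isosurface extractor such as marching cubes: it places the boundary inside each voxel by linear interpolation of the values, so a weak border voxel shifts the surface in proportion to its signal, which is essentially the "area proportional to signal" idea. A sketch with scikit-image, where the file name and threshold are assumptions:

import numpy as np
from skimage import measure

density = np.load("density_map.npy")   # hypothetical 3D density array
level = 0.5 * density.max()            # threshold; tune for your data

# verts: (V, 3) surface points; faces: (F, 3) triangle indices.
# The per-voxel linear interpolation is what smooths the border.
verts, faces, normals, values = measure.marching_cubes(density, level=level)

Pre-smoothing the volume (e.g., with scipy.ndimage.gaussian_filter) before extraction further reduces the discontinuity at pixel borders.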

Fitting a transition + circle + transition curve to a set of measured points

I am dealing with a reverse-engineering problem regarding road geometry and estimation of design conditions.
Suppose you have a set of points obtained from the measurement of positions of a road. This road has straight sections as well as curve sections. Straight sections are, of course, represented by lines, and curves are represented by circles of unknown center and radius. There are, as well, transition sections, which may be clothoids / Euler spirals or any other usual track transition curve. A representation of the track may look like this:
We know in advance that the road / track was designed taking this transition + circle + transition principle into account for every curve, yet we only have the measurement points, and the goal is to find the parameters describing every curve on the track, that is, the transition parameters as well as each circle's center and radius.
I have written some code using a nonlinear optimization algorithm, where a user can select start and end points and fit a circle to the arc section between them, as shown in the next figure:
However, I can't find a suitable way to take the transitions into account. After giving it some thought, I came to think that this is because, given a set of discrete points (with their measurement error) representing a full curve, it is not entirely clear where to consider that it "begins" and where it "ends", and it is even less clear where the entry transition, the circle proper, and the exit transition each begin and end.
Is there any work on this subject that I may have missed? Is there a proper way to fit the whole transition + circle + transition structure to the set of points?
As far as I know, there's no standard method to fit a clothoid-circle-clothoid sequence to a given set of points.
The basic facts are that two points define a straight line, and three points define a unique circle.
The clothoid is far more complex, because you need: the parameter A, the final radius Rf, an initial point (px, py), the radius Ri at that point, and the tangent T (the angle with the X-axis) at that point.
These are five values you can use to find the solution.
Because clothoid coordinates are calculated from series expansions of the Fresnel integrals (see https://math.stackexchange.com/a/3359006/688039 for a short explanation), followed by a translation and a rotation, there is no easy way to fit this spiral to a given set of points.
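To illustrate the Fresnel relation (a sketch of the forward construction only, not a fitting method): a clothoid with parameter A has curvature k(s) = s/A², and its points follow from SciPy's Fresnel integrals after a rotation and translation:

import numpy as np
from scipy.special import fresnel

def clothoid_points(A, s, x0=0.0, y0=0.0, theta0=0.0):
    """Points of a clothoid with curvature k(s) = s / A**2.

    A:      clothoid parameter.
    s:      array of arc lengths along the spiral.
    x0, y0: initial point; theta0: initial tangent angle.
    """
    scale = A * np.sqrt(np.pi)
    S, C = fresnel(s / scale)        # SciPy returns (S, C), in that order
    x, y = scale * C, scale * S      # spiral in its canonical frame
    # Rotate by the initial tangent angle, translate to the initial point.
    ct, st = np.cos(theta0), np.sin(theta0)
    return x0 + ct * x - st * y, y0 + st * x + ct * y

Fitting then means searching for A, theta0, and the endpoints so that these generated points match the measurements, which is exactly the iteration described below.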
When I've had to deal with this issue, what I've done is:
Calculate the radius of the circle through each triplet of consecutive points: p1p2p3, p2p3p4, p3p4p5, etc. (a sketch follows after these steps).
Observe the sequence of radii. Similar values mean a circle; steadily increasing/decreasing values mean a clothoid; very large values mean a straight line.
For each basic element (line, circle), find the most probable characteristics (angles, vertices, radii) by hand or by some regression method. Often common sense works best.
For a spiral you may start with approximate values taken from the adjacent elements. These may well be the initial angle and point and the initial and final radii. Then you need to iterate, playing with the Fresnel integrals and the change of coordinates, until you find a "good" parameter A. Then repeat with small variations of the other values, the ones you took from the adjacent elements.
Make whatever adjustments seem sensible. For example, many values (A, radii) tend to be integers, without decimals, simply because that was easier for the designer to type.
If you can write a small applet to do these steps, that's enough. Typical road-design software helps, but it doesn't spare you the iteration process.
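A minimal sketch of the first two steps (2D points assumed, names hypothetical):

import numpy as np

def circumradius(p1, p2, p3):
    """Radius of the circle through three 2D points (inf if collinear)."""
    a = np.linalg.norm(p2 - p1)
    b = np.linalg.norm(p3 - p2)
    c = np.linalg.norm(p1 - p3)
    # Cross product = twice the signed area of the triangle p1 p2 p3.
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    if abs(cross) < 1e-12:
        return np.inf                        # collinear: a straight section
    return a * b * c / (2.0 * abs(cross))    # R = abc / (4 * area)

def triplet_radii(points):
    """Radii for consecutive triplets: plateaus suggest circles, ramps
    suggest clothoids, very large values suggest straight lines."""
    return np.array([circumradius(points[i], points[i + 1], points[i + 2])
                     for i in range(len(points) - 2)])

With noisy measurements, smoothing this radius sequence (or using wider, overlapping windows) makes the plateaus and ramps much easier to see.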
If the points are dense compared to the effective radii of curvature, estimate the local curvature by least-squares fitting a circle to a small window of points, keeping in mind that the curvature is zero most of the time.
You will obtain a plot of constant values connected by ramps. You can use an estimate of the slope at the inflection points to locate the transition points.
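One standard way to do that windowed fit is the algebraic (Kåsa) least-squares circle, which reduces to a single linear solve per window; the curvature is then 1/R. A sketch, assuming an (M, 2) array of points:

import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit to (M, 2) points.

    Solves x^2 + y^2 = 2*a*x + 2*b*y + c for the center (a, b);
    the radius is sqrt(c + a^2 + b^2).
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return (a, b), np.sqrt(c + a**2 + b**2)

Sliding this over windows of, say, 7 to 15 points produces the curvature plot of plateaus and ramps described above.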

Uniform spatial bins on surface of a sphere

Is there a spatial lookup grid or binning system that works on the surface of a (3D) sphere? I have the requirements that
The bins must be uniform (so you can look up in constant time whether there exists a point within distance r of any spot on the sphere, for a fixed r).†
The number of bins must be at most linear with the surface area of the sphere. (Alternatively, increasing the surface resolution of the grid shouldn’t make it grow faster than the area it maps.)
I’ve already considered
Spherical coordinates: not good, because the cells created are extremely nonuniform, which makes them useless for proximity testing.
Cube meshes: less distortion than spherical coordinates, but it is still very difficult to determine which cells to search for a given query.
3D voxel binning: wastes the entire interior volume of the sphere on empty bins that will never be used (as well as the empty bins in the 8 corner regions of the bounding cube). Space requirements grow as O(n√n) with increasing sphere surface area.
kd-trees: perform poorly in 3D, and are technically logarithmic, not constant, per query.
My best idea for a solution involves the 3D voxel binning method, but with the voxels that the sphere can never intersect somehow excluded. However, I have no idea how to determine which voxels to exclude, nor how to compute an index into such a structure given a query location on the sphere.
† For what it’s worth the points have a minimum spacing so a good grid really would guarantee constant lookup.
My suggestion would be a variant of spherical coordinates in which the polar angle φ is not sampled uniformly; instead its cosine (equivalently, the sine of the latitude) is sampled uniformly. This keeps the element of area sin φ dφ dθ constant, leading to tiles of equal area (though of variable aspect ratio).
At the poles, merge all tiles into a single disk-like polygon.
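A sketch of the resulting lookup (bin counts hypothetical), using the fact that on a unit sphere the area element is dz dφ, so bands uniform in z = cos(polar angle) give equal-area cells:

import numpy as np

def sphere_bin(p, n_z=64, n_phi=128):
    """Equal-area bin index for a point p = (x, y, z) on the unit sphere."""
    x, y, z = p
    iz = min(int((z + 1.0) / 2.0 * n_z), n_z - 1)          # uniform in z
    phi = np.arctan2(y, x) % (2.0 * np.pi)                 # azimuth in [0, 2*pi)
    iphi = min(int(phi / (2.0 * np.pi) * n_phi), n_phi - 1)
    return iz, iphi

The top and bottom bands still need the polar-cap merging mentioned above, since their cells degenerate near the poles.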
Another possibility is to project a regular icosahedron onto the sphere and triangulate the spherical triangles so obtained. This takes a little spherical trigonometry.
I had a similar problem and used "sparse" 3D voxel binning. Basically, my spatial index is a hash map from (x, y, z) coordinates to bins.
Because I also had a minimum distance constraint on my points, I chose the bin size such that a bin can contain at most one point. This is accomplished if the edge of the (cubic) bins is at most d / sqrt(3), where d is the minimum separation of two points on the sphere. The advantage is that you can represent a full bin as a single point, and an empty bin can just be absent from the hash map.
My only query was for points within a radius d (the same d), which then requires scanning the surrounding 125 bins (a 5×5×5 cube). You could technically leave off the 8 corners to get this down to 117, but I didn't bother.
An alternative for the bin size is to optimize it for queries rather than storage size and simplicity, and choose it such that you always have to scan at most 27 bins (a 3×3×3 cube). That would require a bin edge length of d. I think (but haven't thought hard about it) that a bin could contain up to 4 points in that case. You could represent these with a fixed-size array to save one pointer indirection.
In either case, the memory usage of your spatial index will be O(n) for n points, so it doesn't get any better than that.
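A minimal sketch of that scheme (class and method names hypothetical):

from math import floor, sqrt, dist

class SphereGrid:
    """Sparse voxel hash for points with minimum separation d."""

    def __init__(self, d):
        self.d = d
        self.cell = d / sqrt(3)      # guarantees at most one point per bin
        self.bins = {}               # (ix, iy, iz) -> the single point inside

    def _key(self, p):
        return tuple(floor(c / self.cell) for c in p)

    def insert(self, p):
        self.bins[self._key(p)] = p

    def near(self, p):
        """All stored points within distance d of p (scans the 5x5x5 block)."""
        kx, ky, kz = self._key(p)
        found = []
        for dx in range(-2, 3):
            for dy in range(-2, 3):
                for dz in range(-2, 3):
                    q = self.bins.get((kx + dx, ky + dy, kz + dz))
                    if q is not None and dist(p, q) <= self.d:
                        found.append(q)
        return found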

Why is a quadrilateral insufficient to determine projection/rotation/etc.?

Suppose I have a photograph, and four pixel coordinates representing the corners of a rectangular sheet of paper. My goal is to determine the rotation, translation, and projection which maps from the 3D scene containing the sheet of paper on a plane to the 2D image.
I understand there are augmented reality libraries for this, like ARToolkit. However, they all require additional information, namely the parameters of the camera used to take the photograph. My question is, how come having the rectangle's four corner points (in addition to knowing the rectangle's real-world dimensions) is insufficient information to extrapolate 3D information?
It makes sense mathematically since there are so many more unknown variables that bring us from 3D coordinates to 2D screen space, but I'm having a hard time grounding that concept in what I see.
Thanks!
Does it help for you to count degrees of freedom?
There are 3 degrees of freedom in deciding where in space to put the camera, 3 more in deciding how to turn it, 1 in how much the picture it took has been enlarged, and finally 2 in fixing where on the resulting flat image we're looking.
That makes 9 degrees of freedom in total. However, knowing the locations of four points in the final cropped image gives us only 8 continuously varying values. Therefore there must be a way to slide the camera, zoom, and translation parameters around such that those four points stay in the same place on the screen (while everything else distorts subtly).
If we know even one of these nine parameters, such as the camera's focal length (in pixels!), then there's some hope of getting an unambiguous answer.
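To make that last point concrete: once you assume the intrinsics (a focal length in pixels and the principal point at the image center), four coplanar points do pin down the pose. A sketch with OpenCV, where the focal length, image size, and pixel coordinates are all made-up example values:

import numpy as np
import cv2

f, w, h = 1000.0, 1920, 1080                  # assumed focal length and image size
K = np.array([[f, 0, w / 2],
              [0, f, h / 2],
              [0, 0, 1]], dtype=np.float64)   # intrinsics, centered principal point

# Corners of an A4 sheet (meters) in its own plane, z = 0.
obj = np.array([[0, 0, 0], [0.297, 0, 0],
                [0.297, 0.210, 0], [0, 0.210, 0]], dtype=np.float64)
# Where those corners were clicked in the photo (example pixels).
img = np.array([[410, 300], [1500, 340],
                [1460, 900], [380, 860]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
# rvec, tvec: rotation and translation of the sheet in camera coordinates.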

What does 'Polygon' mean in terms of 3D Graphics?

An old Direct3D book says: "...you can achieve an acceptable frame rate with hardware acceleration while displaying between 2000 and 4000 polygons per frame..."
What is one polygon in Direct3D? Do they mean one primitive (indexed or otherwise) or one triangle?
That book means triangles. Otherwise, what if I wanted 1000-sided polygons? Could I still achieve 2000-4000 such shapes per frame?
In practice, the only thing you'll want a polygon to be is a triangle, because any polygon that isn't a triangle is generally tessellated into triangles anyway (e.g., a quad consists of two triangles, et cetera). A basic triangulation (tessellation) algorithm for a convex polygon is really simple: you loop through the vertices, fanning out from the first one and turning each consecutive pair of the others into a triangle with it.
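A minimal sketch of that fan triangulation (valid for convex polygons):

def fan_triangulate(indices):
    """Split a convex polygon's vertex indices into triangles,
    all sharing the first vertex: (v0, v1, v2), (v0, v2, v3), ..."""
    return [(indices[0], indices[i], indices[i + 1])
            for i in range(1, len(indices) - 1)]

# A quad [0, 1, 2, 3] becomes two triangles:
assert fan_triangulate([0, 1, 2, 3]) == [(0, 1, 2), (0, 2, 3)]

An n-sided polygon always yields n - 2 triangles, which is why a "1000-sided polygon" budget question really reduces to a triangle budget.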
Here, a "polygon" refers to a triangle. However, as you point out, there are many more variables than just the number of triangles that determine performance.
Key issues that matter are:
The format of storage (indexed or not; list, fan, or strip)
The location of storage (host-memory vertex arrays, host-memory vertex buffers, or GPU-memory vertex buffers)
The mode of rendering (is the draw primitive command issued fully from the host, or via instancing)
Triangle size
Together, those variables can produce far more than a 2x variation in performance.
Similarly, the hardware on which the application is running may vary 10x or more in performance in the real world: a GPU (or integrated graphics processor) that was low-end in 2005 will perform 10-100x slower in any meaningful metric than a current top-of-the-line GPU.
All told, any recommendation that you use 2,000-4,000 triangles is so ridiculously outdated that it should be entirely ignored today. Even low-end hardware today can easily push 100,000 triangles in a frame under reasonable conditions. Further, most visually interesting applications today are dominated by pixel shading performance, not triangle count.
General rules of thumb for achieving good triangle throughput today:
Use [indexed] triangle (or quad) lists
Store data in GPU-memory vertex buffers
Draw large batches with each draw primitives call (thousands of primitives)
Use triangles mostly >= 16 pixels on screen
Don't use the Geometry Shader (especially for geometry amplification)
Do all of those things, and any machine today should be able to render tens or hundreds of thousands of triangles with ease.
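To illustrate the first two rules, here is the data layout they imply: shared vertices are stored once and referenced by index, so the GPU's post-transform cache can reuse work (a sketch; the actual buffer upload call depends on your API):

import numpy as np

# Four unique vertices of a quad, stored once...
vertices = np.array([[0, 0, 0], [1, 0, 0],
                     [1, 1, 0], [0, 1, 0]], dtype=np.float32)
# ...drawn as two triangles that share the diagonal 0-2.
indices = np.array([0, 1, 2, 0, 2, 3], dtype=np.uint16)

# Unindexed, the same quad would need 6 full vertices instead of
# 4 vertices plus 6 two-byte indices; the saving grows quickly
# with mesh connectivity.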
According to this page, a polygon is n-sided in Direct3D.
In C#:
public static Mesh Polygon(
    Device device,   // the Direct3D device to associate with the mesh
    float length,    // length of each side of the polygon
    int sides        // number of sides (n)
)
As others have already said, "polygons" here means triangles.
The main advantage of triangles is that, since 3 points define a plane, a triangle is coplanar by definition. This means that every point within the triangle is exactly defined as a linear combination of its vertices. Four or more vertices aren't necessarily coplanar, and they don't define a unique surface between them.
An advantage that matters more in mechanical modeling than in graphics is that triangles are also rigid: their shape is fixed by their edge lengths.
