What is the maximum number of vanishing points for perspective projection? - graphics

I am trying to better understand the following regarding vanishing points for perspective projections:
Is it possible to have 4 vanishing points or more?
Is it possible to have infinitely many vanishing points?
Why is there a maximum at all?

Related

Performance considerations of ECEF vs. Polar coordinates in a modern Earth scale simulation

I am sketching out a new simulation that will involve thousands of ships moving around on Earth's oceans and interacting over long periods of time. So, lots of "intersection detection" for sensor and communications ranges, as well as region detection for various environmental conditions. We'll assume a spherical earth, not WGS84. This is an event-step simulation that spits out metrics, not a real time game or anything like that.
The question is whether to use Cartesian coordinates (Earth-Centered, Earth-Fixed) or geodetic/polar coordinates. With polar coordinates, a ship's track would be internally represented as a series of lat/lon waypoints with times and great-circle paths between them. With a Cartesian representation, the waypoints would be connected with polyline renderings of the great circle between them.
The reason I ask is that I suspect sticking to a Cartesian data model makes it possible to use various performance-tuned geometry libraries, some of which even offer SIMD/GPU advantages. Polar coordinates would probably be the more natural way to proceed if writing everything from scratch, but I suspect that keeping things Cartesian gives me greater access to better and faster libraries. Is this an invalid line of thought? Another consideration is that I know polar-coordinate calculations tend to get really screwy near the poles.
Just curious if somebody with experience could save me a whole lot of time prototyping some scenarios both ways.
It often works well to represent directions as unit vectors instead of angles. Rotation of a vector by another angle becomes a 2x2 or 3x3 matmul (efficient with SIMD, but still more expensive than an FP add of two numbers in radians), but you very rarely need sin/cos.
You may occasionally want atan2 to get an angle, but usually not inside tight loops.
Intersection-detection can be very efficient (with SIMD) for XYZ coordinates given another XYZ + range. I'm not sure how efficiently you could check which lat/lon pairs were within range of a given point, not a problem I've looked at.
IDK what kind of stuff you'd find in existing libraries, or what you'd want to do with it.
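To make the unit-vector approach above concrete, here is a minimal NumPy sketch (the function names and the spherical-Earth constant are my own, purely illustrative): ship positions are stored as unit vectors, and a range query for a whole fleet against one sensor reduces to a single vectorized dot product against a precomputed cosine threshold.

```python
import numpy as np

EARTH_RADIUS_M = 6_371_000.0  # spherical earth, as assumed in the question

def latlon_to_unit(lat_deg, lon_deg):
    """Convert latitude/longitude in degrees to unit vectors on the sphere."""
    lat = np.radians(lat_deg)
    lon = np.radians(lon_deg)
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)

def within_range(ship_units, sensor_unit, range_m):
    """Boolean mask of ships whose great-circle distance to the sensor is <= range_m.

    The cosine of the central angle between two unit vectors is their dot
    product, so the whole fleet is checked with one vectorized matmul and
    no per-ship sin/cos or acos calls.
    """
    cos_threshold = np.cos(range_m / EARTH_RADIUS_M)
    return ship_units @ sensor_unit >= cos_threshold

# Example: ships roughly 14 km, 120 km, and very far from a sensor at (40N, 70W)
ships = latlon_to_unit(np.array([39.9, 41.0, 52.0]), np.array([-70.1, -69.5, 4.0]))
sensor = latlon_to_unit(40.0, -70.0)
print(within_range(ships, sensor, 50_000.0))  # [ True False False]
```

Angles (via acos or atan2) only need to be recovered when you actually want to report a distance, not inside the range-check loop.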

Orthographic projection of point to line in WGS84

I need to find points (from a rather small dataset) which are close enough to a polyline. All coordinates are WGS84.
I think of some r-tree thing to reduce the data to just a few candidates which then have to be checked in more detail.
While I managed to do this using "great circle" arithmetic, I am sure it is more precise than necessary, for the following reasons:
The segmentation of those polylines is quite high. A single segment of a polyline can be considered to be no longer than 10 km.
The points in question are not more than a few hundred meters away from segments.
The area in question is Europe, so the algorithm does not need to be valid for extreme (near-pole) conditions. Again: points don't need to be checked against the whole polyline (which could be hundreds of kilometers); only the "nearby" segments need to be considered.
Do I need to transform the WGS84 coordinates to
some local Cartesian reference system, or
to a Mercator system?
Or can I even just calculate with "angle differences"? I know that this is just a matter of accuracy: I can accept an error below ~50 meters.
I highly appreciate your suggestions!
On how to measure distance from point to polyline:
You have to measure the distances from all your points to all segments of the polyline.
See Distance from a point to a polygon
You can do without converting coordinates to Cartesian (especially if the area is rather small, you don't mind a 50-meter error, and you don't need exact distances, just relative ones). See https://en.wikipedia.org/wiki/Decimal_degrees.
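As a sketch of the "angle differences" route (my assumption: an equirectangular approximation with longitude scaled by cos(latitude), which stays well inside a 50 m tolerance for segments under ~10 km at European latitudes; the names are illustrative):

```python
import math

METERS_PER_DEG_LAT = 111_320.0  # approximate, fine for a ~50 m tolerance

def local_xy(lat, lon, lat0):
    """Map lat/lon (degrees) to a local flat plane in meters around latitude lat0."""
    return (lon * math.cos(math.radians(lat0)) * METERS_PER_DEG_LAT,
            lat * METERS_PER_DEG_LAT)

def point_segment_distance_m(p, a, b):
    """Approximate distance in meters from point p to segment a-b, all (lat, lon) in degrees."""
    lat0 = p[0]  # use the point's latitude as the local reference
    px, py = local_xy(*p, lat0)
    ax, ay = local_xy(*a, lat0)
    bx, by = local_xy(*b, lat0)
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        t = 0.0
    else:
        # parameter of the orthogonal projection of p onto the line, clamped to the segment
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

# Example: a point about 200 m north of a short east-west segment near Munich
print(point_segment_distance_m((48.1390, 11.575), (48.1372, 11.57), (48.1372, 11.58)))
```

The r-tree idea still applies on top of this: filter candidate segments by bounding box first, then run the check above only on the survivors.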

Choosing step for Bezier curve drawing

A Bezier curve is a parametric curve, meaning that there is a parameter t at which one can evaluate the polynomials in order to find the positions of points lying on the curve.
Polynomials for some common cases can be found at en.wikipedia.org/wiki/B%C3%A9zier_curve#Specific_cases
To draw a Bezier curve on screen, one could evaluate the polynomials from 0 to 1 at ever-increasing t in tiny steps. However, that would be very wasteful because, in general, the parameter "space" does not correspond to screen "space", i.e. several small steps may fall onto just one pixel.
My question is: how do I find the smallest step which increases the Cartesian distance from the previous point by at least 1 pixel?
To put it another way: I would like to draw a Bezier curve on screen. How do I choose the (uniform) step by which t should grow so that I never draw at one pixel more than once? I don't mind the "holes" when t grows too quickly; I just don't want to redraw already-drawn pixels.
Edit
By "how to find" I mean O(1). Yes, I could use De Casteljau's algorithm but I was hoping there is a way to "guess" the optimal t step quickly.
The comment above (jozxyqk) gives you a hint. I would give it a try with a recursive binary subdivision of the spline drawing.
Let's say you start with a coarse resolution of the parameter space (delta_t = 0.1); that gives you 11 points on the spline curve s: s(t=0), s(t=0.1), ..., s(t=0.9), s(t=1).
Calculate the distance between s(t_i) and s(t_i+1). If it is greater than 1, make a binary subdivision between those two points. And so on...
But honestly, I guess it is faster to calculate all points at a higher resolution without any recursive loops or subdivisions, especially if you are using multithreaded programming.
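A minimal NumPy sketch of the recursive binary subdivision described above, for a cubic Bezier (the names and the 1-pixel threshold handling are my own):

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier at parameter t (scalar or array)."""
    t = np.asarray(t, dtype=float)[..., None]
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

def bezier_samples(p0, p1, p2, p3, coarse_steps=10, max_gap_px=1.0):
    """Start from a coarse parameter grid and recursively halve any interval
    whose endpoints are more than max_gap_px apart on screen."""
    pts = []

    def subdivide(t0, t1, q0, q1):
        if np.linalg.norm(q1 - q0) <= max_gap_px:
            pts.append(q0)
            return
        tm = 0.5 * (t0 + t1)
        qm = cubic_bezier(p0, p1, p2, p3, tm)
        subdivide(t0, tm, q0, qm)
        subdivide(tm, t1, qm, q1)

    ts = np.linspace(0.0, 1.0, coarse_steps + 1)
    qs = cubic_bezier(p0, p1, p2, p3, ts)
    for i in range(coarse_steps):
        subdivide(ts[i], ts[i + 1], qs[i], qs[i + 1])
    pts.append(qs[-1])
    return np.array(pts)

# Example: a curve spanning a few hundred pixels
p0, p1, p2, p3 = map(np.array, ([0, 0], [100, 300], [300, -100], [400, 200]))
print(len(bezier_samples(p0, p1, p2, p3)))  # roughly one sample per pixel of arc length
```

To avoid redrawing pixels, additionally skip any sample whose rounded coordinates equal those of the previously drawn one.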

Average and Measure of Spread of 3D Rotations

I've seen several similar questions, and have some ideas of what I might try, but I don't remember seeing anything about spread.
So: I am working on a measurement system, ultimately computer vision based.
I take N captures, and process them using a library which outputs pose estimations in the form of 4x4 affine transformation matrices of translation and rotation.
There's some noise in these pose estimations. The standard deviation in Euler angles for each axis of rotation is less than 2.5 degrees, so all orientations are pretty close to each other (for a case where all Euler angles are close to 0 or 180). Standard errors of less than 0.25 degrees are important to me. But I have already run into the problems endemic to Euler angles.
I want to average all these pretty-close-together pose estimates to get a single final pose estimate. And I also want to find some measure of spread so that I can estimate accuracy.
I'm aware that "average" isn't actually well defined for rotations.
(For the record, my code is in Numpy-heavy Python.)
I also may want to weight this average, since some captures (and some axes) are known to be more accurate than others.
My impression is that I can just take the mean and standard deviation of the translation vector, and that for the rotation I can convert to quaternions, take the mean, and re-normalize with OK accuracy since these quaternions are pretty close together.
I've also heard mentions of least-squares across all the quaternions, but most of my research into how this would be implemented has been a dismal failure.
Is this workable? Is there a reasonably well-defined measure of spread in this context?
Without more info about your geometry setup it is hard to answer. Anyway, for rotations I would:
create 3 unit vectors
x=(1,0,0),y=(0,1,0),z=(0,0,1)
and apply the rotation on them and call the output
x(i),y(i),z(i)
this is just applying matrix(i) with the position set to (0,0,0)
do this for all measurements you have
now average all vectors
X=avg(x(1),x(2),...x(n))
Y=avg(y(1),y(2),...y(n))
Z=avg(z(1),z(2),...z(n))
correct the vector values
So make each of X, Y, Z a unit vector again and take the axis which is closest to the rotation axis as the main axis. It stays as is; recompute the remaining two axes as cross products of the main axis and the other vector to ensure orthogonality. Beware of the multiplication order (the wrong order of operands will negate the output).
construct averaged transform matrix
see transform matrix anatomy; as the origin you can use the averaged origin of the measurement matrices
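A rough NumPy sketch of the basis-vector averaging described above; for simplicity it always keeps the averaged X as the main axis instead of picking the axis closest to the rotation axis:

```python
import numpy as np

def average_rotation_axes(rot_mats):
    """Average a stack of (N, 3, 3) rotation matrices by averaging their rotated
    basis vectors and re-orthonormalizing with cross products."""
    R = np.asarray(rot_mats, dtype=float)
    # The columns of each matrix are the rotated x, y, z basis vectors.
    X = R[:, :, 0].mean(axis=0)
    Y = R[:, :, 1].mean(axis=0)
    # Re-orthonormalize: keep X as the main axis, rebuild Z and Y via cross products.
    X /= np.linalg.norm(X)
    Z = np.cross(X, Y)
    Z /= np.linalg.norm(Z)
    Y = np.cross(Z, X)  # unit length by construction; operand order fixes handedness
    return np.column_stack([X, Y, Z])

# Example: rotations about z by -2, 0 and +2 degrees average to ~identity
def rot_z(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

print(average_rotation_axes([rot_z(-2), rot_z(0), rot_z(2)]))
```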
Moakher wrote a paper explaining that there are basically two ways to take an average of rotation matrices. The first is a weighted average followed by a projection back onto SO(3) using the SVD. The second is the Riemannian center of mass. The latter is a closer notion to the geometric mean, and it is more complicated to compute.
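Since the question already considers going through quaternions, here is a minimal sketch of that route using SciPy's Rotation class, whose mean() is the chordal (projected) mean, i.e. the first of the two approaches Moakher describes. Spread is reported as the geodesic angle of each sample from the mean. The function name and the assumption that the poses arrive as an (N, 4, 4) array are mine:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def mean_pose_and_spread(pose_mats, weights=None):
    """pose_mats: (N, 4, 4) affine pose matrices.  Returns the averaged 4x4 pose,
    the per-sample rotation error in degrees, and the translation std-dev per axis."""
    pose_mats = np.asarray(pose_mats, dtype=float)
    t = pose_mats[:, :3, 3]
    rots = Rotation.from_matrix(pose_mats[:, :3, :3])

    mean_rot = rots.mean(weights=weights)   # chordal mean; handles the q/-q sign ambiguity
    mean_t = np.average(t, axis=0, weights=weights)

    # Spread: geodesic angle between each sample and the mean rotation.
    ang_err_deg = np.degrees((mean_rot.inv() * rots).magnitude())

    mean_pose = np.eye(4)
    mean_pose[:3, :3] = mean_rot.as_matrix()
    mean_pose[:3, 3] = mean_t
    return mean_pose, ang_err_deg, t.std(axis=0)
```

The standard deviation (or RMS) of ang_err_deg is a reasonable scalar measure of orientation spread, and it avoids the wrap-around problems of per-axis Euler statistics.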

How do I check if a set of plane polygons forms a watertight polyhedron

I am currently wondering if there is a common algorithm to check whether a set of plane polygons, not necessarily triangles, constructs a watertight polyhedron. Each polygon has an orientation (normal vector). A simple solution would just say yes or no; a more advanced version would point out the edges where the polyhedron is "open". I am not really interested in how to close the polyhedron.
I would like to point out that my "holes" are not necessarily small, e.g., one face of a cube might be missing. Thus, the "undersampling correction" algorithms don't seem to be the correct approach. Furthermore, I am talking about 100 - 1000 polygons, not 1,000,000, so computation time should not really be a problem.
Any hints or tips?
I believe you can use a simple topological test -- count the number of times each edge appears in the full list of polygons.
If the set of polygons defines the surface of a closed volume, each edge should have count >= 2, indicating that each edge is shared by (at least) two adjacent polygons. If the surface is manifold, count == 2 exactly.
Edges with count==1 indicate open regions of the surface.
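A minimal Python sketch of this edge-count test; the face representation (lists of vertex indices, or any hashable vertex keys) is an assumption:

```python
from collections import Counter

def check_watertight(polygons):
    """Return (is_closed, open_edges): every edge must appear in at least two faces."""
    edge_count = Counter()
    for face in polygons:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            edge_count[frozenset((a, b))] += 1  # undirected edge, winding ignored
    open_edges = [tuple(e) for e, c in edge_count.items() if c == 1]
    return len(open_edges) == 0, open_edges

# Example: a cube with one face missing reports the 4 boundary edges of the hole
cube_faces = [
    [0, 1, 2, 3],      # bottom
    [4, 5, 6, 7],      # top
    [0, 1, 5, 4],      # four sides...
    [1, 2, 6, 5],
    [2, 3, 7, 6],
    # [3, 0, 4, 7] deliberately missing
]
print(check_watertight(cube_faces))
```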
The above answer does not cover many cases. A more correct (though not necessarily complete; I wouldn't know) check is to ensure that every edge of every polygon (or of the mesh/polyhedron) is connected to an even number of faces. Consider the following mesh:
The segment (line) between the closest vertex and the one below it is attached to 3 faces (one of the outer triangle and two of the inner triangle), which is more than two faces and so passes the count >= 2 test above. However, this mesh is clearly not closed.
