Getting edge points from a 3D model (mesh) - graphics

I want to find the sharp edges of any given mesh. For reference, here is a mesh of a chair:
It's clear that the border of the back of the chair is an edge, as are the four curves that outline the arms and legs, etc.
I would like to sample points along these edges. Is there a known algorithm for doing this?
A couple of approaches I thought of:
Triangle edge detection
Consider every pair of connected vertices in the mesh. Each of these segments should be shared by two triangles. If the angle between the surface normals of the two triangles is wide enough, that segment should be considered an edge.
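For example, a minimal numpy sketch of that test (assuming the mesh is available as a vertex array plus a triangle index array, e.g. what trimesh or open3d expose; the 30-degree threshold is arbitrary):

import numpy as np

def sharp_edges(vertices, faces, angle_deg=30.0):
    """Return edges (vertex-index pairs) whose adjacent face normals differ
    by more than angle_deg.  vertices: (N, 3) floats, faces: (M, 3) ints."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)

    # per-face unit normals
    n = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    n /= np.linalg.norm(n, axis=1, keepdims=True) + 1e-12

    # map each undirected edge to the faces that contain it
    edge_faces = {}
    for fi, (a, b, c) in enumerate(f):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)

    cos_thresh = np.cos(np.radians(angle_deg))
    sharp = []
    for edge, adj in edge_faces.items():
        if len(adj) == 2 and np.dot(n[adj[0]], n[adj[1]]) < cos_thresh:
            sharp.append(edge)          # dihedral angle wider than the threshold
        elif len(adj) == 1:
            sharp.append(edge)          # open boundary edge (e.g. the chair back's border)
    return sharp

Sampling points along the detected edges is then just linear interpolation between each sharp edge's two endpoint vertices.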
Point cloud edge detection
With open3d, I can easily convert the mesh into a point cloud, where each point has a surface normal. I could potentially search the point cloud for sudden changes in the surface normals. Though I think that could get fairly complex, as I'd have to find the nearest neighbors of every point.
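For comparison, a rough sketch of the point-cloud variant, assuming the cloud is given as numpy arrays of positions and matching unit normals, and using scipy's KD-tree for the neighbor search (neighborhood size and angle threshold are arbitrary):

import numpy as np
from scipy.spatial import cKDTree

def edge_points(points, normals, k=16, angle_deg=30.0):
    """Flag points whose neighborhood contains a sharply different normal.
    points, normals: (N, 3) arrays, normals assumed unit length."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)          # k neighbors plus the point itself
    cos_thresh = np.cos(np.radians(angle_deg))

    # smallest dot product between each point's normal and its neighbors' normals;
    # a small value means at least one neighbor's normal deviates strongly
    neighbor_normals = normals[idx[:, 1:]]        # shape (N, k, 3)
    dots = np.einsum('ij,ikj->ik', normals, neighbor_normals)
    return points[dots.min(axis=1) < cos_thresh]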

Related

Algorithm for regularizing or normalizing a polygon which is supposed to be rectangular

My app captures the shape of a room by having the user point a camera at floor corners, and then doing a bunch of math, eventually ending up with a polygon.
The assumption is that the walls are straight (not curved). The majority of the corners are formed by walls at right angles to each other, but in some cases might not be.
Depending on how accurately the user points the camera, the (x,y) coordinates I derive for a corner might be beyond the actual corner, in front of the actual corner, or, less likely, to the left or right of it. Obviously, in this case, when I connect the dots, I get weird parallelogram or rhomboid shapes. See example.
I am looking for a program or algorithm to normalize or regularize these shapes, provided we know which corners are supposed to be right angles.
My initial attempt involved finding segments whose angles were "close" to each other, adjusting them all to the same angle, and then recalculating the vertices. However, this algorithm proved to be unstable.
My current thinking is to find the angles which are most obtuse (as would be caused by a point mistakenly placed beyond the actual corner), or most acute (as would be caused by a point mistakenly placed in front of the actual corner), and find the corner point which would make it a right angle. The problem, however, is that such an adjustment could have side effects on other corners, such as pushing them even further away from right angles. I sense I need some kind of algorithm which takes all the information and optimizes/solves it at once--is this a kind of linear programming problem?--but I am stuck.
There is not a unique solution.
For example, take the perpendicular from the middle point of an edge to the two neighboring edges. This will give you two new corners.
Or take the perpendicular from the end point of an edge to other edges.
Or compute the average of angles in the end points of an edge. Use this average and the middle point of the edge to compute new corners.
Or...
To get the most faithful result, capture (or calculate) the distances from each corner to the other three. Build triangles with those distances. Then use the average of the coordinates you compute for a corner from the 2 or 3 triangles.
Resulting angles will not be exactly 90 degrees, but the polygon will represent the room fairly.
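One hedged way to use those distances numerically (a least-squares fit rather than explicit triangle construction, so a variation on the above; the function and argument names are illustrative):

import numpy as np
from scipy.optimize import least_squares

def fit_corners(captured_xy, distances):
    """captured_xy: (N, 2) corner estimates from the camera capture.
    distances: (N, N) symmetric matrix of measured corner-to-corner distances.
    Returns corner coordinates whose pairwise distances best match 'distances',
    starting the optimization from the captured corners."""
    n = len(captured_xy)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

    def residuals(flat):
        pts = flat.reshape(n, 2)
        return [np.linalg.norm(pts[i] - pts[j]) - distances[i, j] for i, j in pairs]

    result = least_squares(residuals, np.asarray(captured_xy, dtype=float).ravel())
    return result.x.reshape(n, 2)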

Algorithm to calculate and display a ribbon on a 3D triangle mesh

I am looking for an algorithm for the following problem:
Given:
A 3D triangle mesh. The mesh represents a part of the surface of the earth.
A polyline (a connected series of line segments) whose vertices are always on an edge or on a vertex of a triangle of the mesh. The polyline represents the centerline of a road on the surface of the earth.
I need to calculate and display the road, i.e. add half of the road's width on each side of the centerline, calculate the resulting vertices in the corresponding triangles of the mesh, fill the area of the road, and outline the sides of the road.
What is the simplest and/or most effective strategy to do this? How do I store the data of the road most efficiently?
I see 2 options here:
render thick polyline with road texture
While rendering the polyline you need a TBN matrix, so use:
polyline tangent as tangent
surface normal as normal
binormal=tangent x normal
shift the actual point p position to
p0 = p + d*binormal
p1 = p - d*binormal
and render the textured line (p0, p1). This approach is not a precise match to the surface mesh, so you need to disable the depth test or use some sort of blending. Also, on sharp turns it could miss some parts of the curve (in that case you can render a rectangle or disc instead of a line).
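For illustration, a small numpy sketch of those two shifted points, assuming you already have one surface normal per polyline point (names are illustrative; d from above is called half_width here):

import numpy as np

def ribbon_vertices(polyline, surface_normals, half_width):
    """For each polyline point, return the two points p0, p1 shifted sideways
    along the binormal.  polyline, surface_normals: (N, 3) arrays."""
    p = np.asarray(polyline, dtype=float)
    n = np.asarray(surface_normals, dtype=float)

    # tangent: forward difference, the last point reuses the previous tangent
    t = np.diff(p, axis=0)
    t = np.vstack([t, t[-1]])
    t /= np.linalg.norm(t, axis=1, keepdims=True) + 1e-12

    b = np.cross(t, n)                                    # binormal = tangent x normal
    b /= np.linalg.norm(b, axis=1, keepdims=True) + 1e-12

    return p + half_width * b, p - half_width * b         # p0, p1 per point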
create the road mesh by shifting the polyline to the sides by half the road width
This produces a mesh-accurate road fit, but due to your constraints the shape of the road can be very distorted in some cases without re-triangulating the mesh. I see it like this:
for each segment of the road, cast 2 lines shifted by half of the road width (green, brown)
find their intersections (aqua dots) with the shared mesh edge containing the current road control point (red dot)
obtain the average point (magenta dot) of the intersections and use that as the road mesh vertex. If one of the points is outside the shared edge, ignore it. If both intersections are outside the shared edge, find the closest intersection with a different edge.
As you can see, this can lead to serious road-thickness distortions in some cases (big differences between the intersection points, or one of the intersection points falling outside the surface mesh edge).
If you need accurate road thickness then use the intersection of the cast lines as the road control point instead. To make this possible, either use blending or disable the depth test while rendering, or add this point to the surface mesh by re-triangulating it. Of course such an action will also affect the road mesh, so you need to iterate a few times ...
Another way is to use a blended texture for the road (like sprites) and compute the texture coordinates for the control points. If the road is too thick then thin it by shifting the texture coordinate ... To make this work you need to select the farthest intersection point instead of the average ... Compute the real half-width of the road and from that compute the texture coordinate.
If you get rid of the limitation (for the road mesh) that road vertices must lie on surface mesh edges or vertices, then you can simply use the intersection of the shifted lines alone. That will get rid of the thickness artifacts and simplify things a lot.
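A sketch of that simplified approach in 2D (for example in the surface's tangent plane or a map projection; names are illustrative): offset each segment sideways by half the road width and intersect consecutive offset lines to get the road-side vertices directly.

import numpy as np

def offset_polyline(points, half_width, side=+1.0):
    """Offset an open 2D polyline by half_width to one side (+1 or -1),
    using the intersection of consecutive shifted segment lines (miter join)."""
    p = np.asarray(points, dtype=float)
    d = np.diff(p, axis=0)
    d /= np.linalg.norm(d, axis=1, keepdims=True) + 1e-12
    normals = side * np.column_stack([-d[:, 1], d[:, 0]])   # sideways of each segment

    out = [p[0] + half_width * normals[0]]                   # first point: plain shift
    for i in range(len(d) - 1):
        # shifted lines of segments i and i+1: a + s*d[i]  and  b + t*d[i+1]
        a = p[i]     + half_width * normals[i]
        b = p[i + 1] + half_width * normals[i + 1]
        m = np.column_stack([d[i], -d[i + 1]])
        if abs(np.linalg.det(m)) < 1e-9:                     # nearly collinear segments
            out.append(b)
        else:
            s, _ = np.linalg.solve(m, b - a)
            out.append(a + s * d[i])
    out.append(p[-1] + half_width * normals[-1])             # last point: plain shift
    return np.array(out)

# left and right road sides from the centerline:
# left_side  = offset_polyline(centerline_xy, road_width / 2, side=+1.0)
# right_side = offset_polyline(centerline_xy, road_width / 2, side=-1.0)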

Which stage of pipeline should I do culling and clipping and How should I reconstruct triangles after clipping

I'm trying to implement graphic pipeline in software level. I have some problems with clipping and culling now.
Basically, there are two main concerns:
When should back-face culling take place? In eye coordinates, clip coordinates, or window coordinates? I initially did the culling in eye coordinates, thinking this could relieve the burden of the clipping process, since many back-facing vertices would already have been discarded. But later I realized that in this way vertices need two matrix multiplications, namely left-multiply by the model-view matrix --> cull --> left-multiply by the perspective matrix, which increases the overhead to some extent.
How do I do clipping and reconstruct triangles? As far as I know, clipping happens in clip coordinates (after the perspective transformation), in other words in homogeneous coordinates, where every vertex is tested for whether or not it should be discarded by comparing its x, y, z components with its w component. So far so good, right? But after that I need to reconstruct those triangles which have had one or two vertices discarded. I googled and found that the Liang-Barsky algorithm would be helpful in this case, but in clip coordinates which clipping planes should I use? Should I just record the clipped triangles and reconstruct them in NDC?
Any idea will be helpful. Thanks.
(1)
Back-face culling can occur wherever you want.
On the 3dfx hardware, and probably the other cards that rasterised only, it was implemented in window coordinates. As you say that leaves you processing some vertices you don't ever use but you need to weigh that up against your other costs.
You can also cull in world coordinates; you know the location of the camera so you know a vector from the camera to the face — just go to any of the edge vertices. So you can test the dot product of that against the normal.
When I was implementing a software rasteriser for a z80-based micro I went a step beyond that and transformed the camera into model space. So you get the inverse of the model matrix (which was cheap in this case because they were guaranteed to be orthonormal, so the transpose would do), apply that to the camera and then cull from there. It's still a vector difference and a dot product but if you're using the surface normals only for culling then it saves having to transform each and every one of them for the benefit of the camera. For that particular renderer I was then able to work forward from which faces are visible to determine which vertices are visible and transform only those to window coordinates.
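A small numpy sketch of that model-space test, assuming the model matrix is a rigid transform (rotation R plus translation t), so its inverse is just R transposed applied after subtracting t:

import numpy as np

def visible_faces(camera_world, rotation, translation, face_points, face_normals):
    """Back-face cull in model space: bring the camera into model space once,
    then test each face with a single dot product (no normal transforms needed).
    rotation: 3x3 orthonormal model rotation; translation: model position in world.
    face_points: (M, 3) one vertex per face; face_normals: (M, 3); both in model space."""
    # inverse of a rigid transform: R^T * (x - t)
    camera_model = rotation.T @ (np.asarray(camera_world, dtype=float) - translation)

    to_camera = camera_model - face_points            # vector from each face to the camera
    facing = np.einsum('ij,ij->i', to_camera, face_normals)
    return facing > 0.0                                # True where the face is front-facing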
(2)
A variant on Sutherland-Hodgman is the thing I remember seeing most often. You'd do a forward scan around the outside of the polygon, checking each edge in turn against the clipping plane and adjusting appropriately.
So e.g. you start with the convex polygon between points (V1, V2, V3). For each clipping plane in turn you'd do something like:
for(Vn in input vertices)
{
    if(Vn is on the good side of the plane)
        add Vn to output vertices
    if(edge from Vn to Vn+1 intersects plane) // or from Vn to V1 if this is the last edge
    {
        find point of intersection, I
        add I to output vertices
    }
}
And repeat for each plane. If you're worried about repeated costs then you either need to adopt a structure with an extra level of indirection between faces and edges or just keep a cache. You'd probably do something like dash round the vertices once marking them as in or out, then cache the point of intersection per edge, looked up via the key (v1, v2). If you've set yourself up with the extra level of indirection then store the result in the edge object.
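A runnable Python sketch of that per-plane scan, applied directly in homogeneous clip coordinates: each plane is a coefficient vector c with "inside" meaning dot(c, v) >= 0, which encodes the usual -w <= x, y, z <= w tests mentioned in the question. A minimal sketch, not production clipping code:

import numpy as np

CLIP_PLANES = [
    ( 1, 0, 0, 1),   # x >= -w
    (-1, 0, 0, 1),   # x <=  w
    ( 0, 1, 0, 1),   # y >= -w
    ( 0, -1, 0, 1),  # y <=  w
    ( 0, 0, 1, 1),   # z >= -w
    ( 0, 0, -1, 1),  # z <=  w
]

def clip_polygon(vertices):
    """Clip a convex polygon (list of 4-component clip-space vertices) against the frustum."""
    poly = [np.asarray(v, dtype=float) for v in vertices]
    for plane in CLIP_PLANES:
        c = np.asarray(plane, dtype=float)
        out = []
        for i, v in enumerate(poly):
            v_next = poly[(i + 1) % len(poly)]
            d, d_next = np.dot(c, v), np.dot(c, v_next)
            if d >= 0:                        # Vn is on the good side of the plane
                out.append(v)
            if (d >= 0) != (d_next >= 0):     # edge Vn -> Vn+1 crosses the plane
                t = d / (d - d_next)          # parametric intersection point I
                out.append(v + t * (v_next - v))
        poly = out
        if not poly:
            break                             # the polygon is entirely outside
    return poly

# Example: a triangle poking out past the right plane becomes a quad after clipping.
triangle = [(0.0, 0.0, 0.0, 1.0), (2.0, 0.5, 0.0, 1.0), (0.5, 1.0, 0.0, 1.0)]
print(clip_polygon(triangle))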

Detecting arbitrary shapes

Greetings,
We have a set of points which represent an intersection of a 3d body and a horizontal plane. We would like to detect the 2D shapes that represent the cross sections of the body. There can be one or more such shapes. We found articles that discuss how to operate on images using Hough Transform, but we may have thousands of such points, so converting to an image is very wasteful. Is there a simpler way to do this?
Thank you
In converting your 3D model to a set of points, you have thrown away the information required to find the intersection shapes. Walk the edge-face connectivity graph of your 3D model to find the edge-plane intersection points in order.
Assuming you have, or can construct, the 3D model topology (some number of vertices, edges between vertices, faces bounded by edges):
Iterate through the edge list until you find one that intersects the test plane, add it to a list
Pick one of the faces that share this edge
Iterate through the other edges of that face to find the next intersection, add it to the list
Repeat for the other face that shares that edge until you arrive back at the starting edge
You've built an ordered list of edges that intersect the plane - it's trivial to linearly interpolate each edge to find the intersection points, in order, that form the intersection shape. Note that this process assumes that the face polygons are convex, which in your case they are.
If your volume is concave you'll have multiple discrete intersection shapes, and so you need to repeat this process until all edges have been examined.
There's some Java code that does this here.
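For reference, a Python sketch of that walk for a closed triangle mesh (it assumes a 2-manifold mesh and that no vertex lies exactly on the plane; see the caveat in the next answer):

import numpy as np

def plane_cross_sections(vertices, faces, plane_point, plane_normal):
    """Walk the edge-face graph of a closed triangle mesh and return the
    ordered intersection loops with a plane."""
    v = np.asarray(vertices, dtype=float)
    d = (v - np.asarray(plane_point, dtype=float)) @ np.asarray(plane_normal, dtype=float)

    # undirected edge -> the two faces that share it
    edge_faces = {}
    for fi, tri in enumerate(faces):
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            edge_faces.setdefault(tuple(sorted((a, b))), []).append(fi)

    def crosses(edge):
        a, b = edge
        return (d[a] > 0) != (d[b] > 0)

    def cut_point(edge):
        a, b = edge
        t = d[a] / (d[a] - d[b])                 # linear interpolation to distance 0
        return v[a] + t * (v[b] - v[a])

    visited, loops = set(), []
    for start in edge_faces:
        if start in visited or not crosses(start):
            continue
        loop, edge, face = [], start, edge_faces[start][0]
        while True:
            visited.add(edge)
            loop.append(cut_point(edge))
            fa, fb, fc = faces[face]
            # the current face has exactly one other edge crossing the plane
            edge = next(e for e in (tuple(sorted((fa, fb))),
                                    tuple(sorted((fb, fc))),
                                    tuple(sorted((fc, fa))))
                        if e != edge and crosses(e))
            if edge == start:
                break
            f1, f2 = edge_faces[edge]            # step over to the neighboring face
            face = f2 if f1 == face else f1
        loops.append(np.array(loop))
    return loops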
The algorithm / code from the accepted answer does not work in certain special cases, namely when the plane intersects some vertices of a concave surface. In this case, greedily "walking" the edge-face connectivity graph can close some of the polygons prematurely.
What happens is that, because the plane intersects a vertex, at some point while walking the graph there are two possibilities for the next edge, and it matters which one is chosen.
A possible solution is to implement a graph traversal algorithm (for instance depth-first search), and choose the longest loop which contains the starting edge.
It looks like you wanted to combine the intersection points back into connected figures using some kind of detection or a Hough Transform.
A much simpler and more robust way is to immediately get not just intersection points, but the contours of the 3D body where the plane cuts it.
To construct contours on a body given by a triangular mesh, define the value at each mesh vertex as the signed distance from the plane (positive on one side of the plane and negative on the other side). The marching squares algorithm for isovalue = 0 can then be applied to extract the segments of the contours.
This algorithm works well even when the plane passes through a vertex or an edge of the mesh.
To better understand what the result of a plane section looks like, please take a look at this short video. Following the links there, one can find the implementation as well.
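A minimal sketch of that signed-distance contouring on a triangle mesh (the segments come out unordered here; chaining them into closed loops is a separate, simple step):

import numpy as np

def contour_segments(vertices, faces, plane_point, plane_normal):
    """For each triangle whose vertices straddle the plane, emit the segment
    where the signed-distance field crosses zero (isovalue = 0)."""
    v = np.asarray(vertices, dtype=float)
    d = (v - np.asarray(plane_point, dtype=float)) @ np.asarray(plane_normal, dtype=float)

    segments = []
    for a, b, c in faces:
        crossings = []
        for i, j in ((a, b), (b, c), (c, a)):
            if (d[i] > 0) != (d[j] > 0):                    # this edge crosses the plane
                t = d[i] / (d[i] - d[j])
                crossings.append(v[i] + t * (v[j] - v[i]))
        # skip triangles that only touch the plane in a single (degenerate) point
        if len(crossings) == 2 and not np.allclose(crossings[0], crossings[1]):
            segments.append((crossings[0], crossings[1]))
    return segments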

Minimize Polygon Vertices

What is a good algorithm for reducing the number of vertices in a polygon without changing the way it looks very much?
Input: A polygon, represented as a list of points, with way too many vertices: raw input from the mouse, for example.
Output: A polygon with far fewer vertices that still looks a lot like the original: something usable for collision detection, for example (not necessarily convex).
Edit: The solution to this would be similar to finding a multi-segmented line of best fit on a graph. It's called Segmented Least Squares in my algorithms book.
Edit2: The Douglas-Peucker algorithm is what I really want.
Edit: Oh look, Simplifying Polygons
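Since the edits settle on Douglas-Peucker, here is a compact recursive sketch of it for 2D points (tolerance is the maximum allowed deviation of any dropped vertex from the simplified outline):

import numpy as np

def douglas_peucker(points, tolerance):
    """Simplify an open 2D polyline: keep the endpoints, recursively keep the
    point farthest from the chord whenever it deviates more than tolerance."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts

    start, end = pts[0], pts[-1]
    chord = end - start
    chord_len = np.linalg.norm(chord)
    if chord_len < 1e-12:            # degenerate chord (e.g. a closed ring): radial distance
        dists = np.linalg.norm(pts - start, axis=1)
    else:                            # perpendicular distance to the line start-end
        dists = np.abs(chord[0] * (pts[:, 1] - start[1])
                       - chord[1] * (pts[:, 0] - start[0])) / chord_len

    i = int(np.argmax(dists))
    if dists[i] <= tolerance:
        return np.array([start, end])                 # everything in between is close enough
    left = douglas_peucker(pts[: i + 1], tolerance)
    right = douglas_peucker(pts[i:], tolerance)
    return np.vstack([left[:-1], right])              # drop the duplicated split point

For a closed polygon you would typically split it at two far-apart vertices and simplify the two halves separately.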
You mentioned collision detection. You could go really simple and calculate a bounding convex hull around it.
If you care about the concave areas, you can calculate a concave hull by taking the centroid of your polygon, and choosing a point to start. From the starting point rotate around the centroid, finding each vertex you want to keep, and assigning that as the next vertex in the bounding hull. The complexity of the algorithm would come in how you determined which vertices to keep, but I'm sure you thought of that already. You can throw all your vertices into buckets based on their location relative to the centroid. When a bucket gets more than an arbitrary number of vertices full, you can split it. Then take the mean of the vertices in that bucket as the vertex to use in your bounding hull. Or, forget the buckets, and when you're moving around the centroid, only choose a point if it's more than a given distance from the last point.
Actually, you could probably just use all the vertices in your polygon as "cloud of points" and calculate the concave hull around that. I'll look for an algorithm link. Worst case on this would be a completely convex polygon.
Another alternative is to start with a bounding rectangle. For each vertex on the rectangle, find the distance from the point to the polygon. For the farthest vertex, split it into two more vertices and move them in some. Repeat until some proportion of either vertices or area is met. I'd have to think about the details of this one a little more.
If you care about the polygon actually looking similar, even in the case of a self-intersecting polygon, then another approach would be required, but it doesn't sound like that's necessary since you asked about collision detection.
This post has some details about the convex hull part.
There's a lot of material out there. Just google for things like "mesh reduction", "mesh simplification", "mesh optimization", etc.
