Binary space partition tree for 3D map - graphics

I have a project that takes a picture of a topographic map and turns it into a 3D object.
When I draw the 3D rectangles of the object, rendering is very slow. I read about BSP trees and didn't really understand them. Can someone please explain how to use a BSP tree in 3D (maybe with an example), and how to use it in my case, where some mountains in the map cover other parts, so I need to order the rectangles to draw them correctly?

In n-D, a BSP tree is a spatial partitioning data structure that recursively splits space into cells using splitting (n-1)-D hyperplanes (or even curved hypersurfaces).
In 2D, the whole space is recursively split by lines (into possibly unbounded convex polygons).
In 3D, the whole space is recursively split by planes (into possibly unbounded convex polytopes).
How to build a BSP tree in 3D (from a model)
The model is made of a list of primitives (triangles or quads, which I believe is what you call rectangles).
Start with an initial root node in the BSP tree that represents a cell covering the whole 3D space and initially holding all the primitives of your model.
Compute an optimal splitting plane for the considered primitives.
The goal of this step is to find a plane that will split the primitives into two groups of primitives of approximately the same size (either the same spatial extents or the same count of primitives).
A simple splitting strategy could be to choose a direction at random (which will be the normal of your plane). Then sort all the primitives spatially along this axis, and traverse the sorted list to find the position that splits the primitives into two groups of roughly equal size (i.e., the median position of the primitives along this axis). The direction and this position together define the splitting plane.
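A minimal sketch of the median split, assuming the primitive centroids have already been projected onto the chosen axis (a full sort works too; std::nth_element is enough to find just the median):

#include <algorithm>
#include <vector>

// Returns the plane offset along the chosen axis: the median of the
// primitive centroid projections (dot(axis, centroid) for each primitive).
double medianSplitPosition(std::vector<double> projections)
{
    size_t mid = projections.size() / 2;
    std::nth_element(projections.begin(), projections.begin() + mid, projections.end());
    return projections[mid]; // the splitting plane is: dot(axis, p) == this value
}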
A more commonly used splitting strategy, however, is the following:
Compute the centroid of all the considered primitives.
Compute the covariance matrix of all the considered primitives.
The centroid gives the position of the splitting plane.
The eigenvector for the largest eigenvalue of the covariance matrix gives the normal of the splitting plane, which is the direction where the primitives are the most spread (and where the current cell should be split).
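A minimal sketch of this strategy, assuming hypothetical Vec3/Plane types and using the primitive centroids as a stand-in for the primitives; the dominant eigenvector is found by power iteration rather than a full eigensolver:

#include <cmath>
#include <vector>

struct Vec3  { double x, y, z; };
struct Plane { Vec3 normal; double d; };   // plane equation: dot(normal, p) == d

// Dominant eigenvector of a symmetric 3x3 matrix by power iteration.
// (A starting vector orthogonal to the answer would stall; fine for a sketch.)
static Vec3 dominantEigenvector(const double m[3][3])
{
    Vec3 v = { 1, 1, 1 };
    for (int it = 0; it < 50; ++it) {
        Vec3 w = { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z,
                   m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z,
                   m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z };
        double len = std::sqrt(w.x*w.x + w.y*w.y + w.z*w.z);
        v = { w.x/len, w.y/len, w.z/len };
    }
    return v;
}

Plane chooseSplittingPlane(const std::vector<Vec3>& centroids)
{
    Vec3 c = { 0, 0, 0 };                                // centroid of the centroids
    for (const Vec3& p : centroids) { c.x += p.x; c.y += p.y; c.z += p.z; }
    double n = (double)centroids.size();
    c = { c.x/n, c.y/n, c.z/n };

    double cov[3][3] = {};                               // 3x3 covariance matrix
    for (const Vec3& p : centroids) {
        double d[3] = { p.x - c.x, p.y - c.y, p.z - c.z };
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                cov[i][j] += d[i] * d[j];
    }
    Vec3 normal = dominantEigenvector(cov);              // direction of largest spread
    return { normal, normal.x*c.x + normal.y*c.y + normal.z*c.z };
}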
Split the current node, create two child nodes and assign primitives to each of them or to the current node.
Having found a suitable splitting plane in step 1, the 3D space can now be divided into two half-spaces: one positive (pointed to by the plane normal) and one negative (on the other side of the splitting plane). The goal of this step is to partition the considered primitives by assigning each one to the half-space where it belongs.
Test each primitive of the current node against the splitting plane and assign it to either the left or right child node, depending on whether it is in the positive or the negative half-space.
Some primitives may intersect the splitting plane. They can be clipped by the plane into smaller primitives (and maybe also triangulated) so that these smaller primitives are fully inside one of the half-spaces and only belong to one of the cells corresponding to the child nodes. Another option is to simply attach the overlapping primitives to the current node.
Apply this splitting strategy recursively to the created child nodes (and their respective child nodes) until some criterion to stop splitting is met (typically, having too few primitives in the current node).
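Putting the steps together, a minimal recursive build sketch (Triangle is a hypothetical primitive type; Vec3, Plane, and chooseSplittingPlane come from the splitting-plane sketch above; straddling primitives are kept at the node rather than clipped):

#include <utility>
#include <vector>

struct Triangle { Vec3 v[3]; };

enum Side { FRONT, BACK, STRADDLING };

static Side classify(const Triangle& t, const Plane& pl)
{
    int pos = 0, neg = 0;
    for (const Vec3& p : t.v) {
        double s = pl.normal.x*p.x + pl.normal.y*p.y + pl.normal.z*p.z - pl.d;
        (s >= 0 ? pos : neg)++;
    }
    if (neg == 0) return FRONT;
    if (pos == 0) return BACK;
    return STRADDLING;
}

struct BspNode {
    Plane plane;                       // splitting plane of this node's cell
    std::vector<Triangle> primitives;  // primitives kept at this node (straddlers)
    BspNode* front = nullptr;          // child for the positive half-space
    BspNode* back  = nullptr;          // child for the negative half-space
};

BspNode* build(std::vector<Triangle> prims)
{
    BspNode* node = new BspNode;
    if (prims.size() <= 8) {                             // stop criterion: few primitives
        node->primitives = std::move(prims);
        return node;
    }
    std::vector<Vec3> centroids;
    for (const Triangle& t : prims)
        centroids.push_back({ (t.v[0].x + t.v[1].x + t.v[2].x) / 3,
                              (t.v[0].y + t.v[1].y + t.v[2].y) / 3,
                              (t.v[0].z + t.v[1].z + t.v[2].z) / 3 });
    node->plane = chooseSplittingPlane(centroids);

    std::vector<Triangle> front, back;
    for (const Triangle& t : prims) {
        switch (classify(t, node->plane)) {
            case FRONT: front.push_back(t); break;
            case BACK:  back.push_back(t);  break;
            default:    node->primitives.push_back(t); break; // or clip t by the plane
        }
    }
    if (front.size() == prims.size() || back.size() == prims.size()) {
        // Degenerate split (everything landed on one side): stop recursing.
        for (const Triangle& t : front) node->primitives.push_back(t);
        for (const Triangle& t : back)  node->primitives.push_back(t);
        return node;
    }
    node->front = build(std::move(front));
    node->back  = build(std::move(back));
    return node;
}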
How to use a BSP tree in 3D
In all use cases, the hierarchical structure of the BSP tree is used to discard irrelevant parts of the model for the query.
Locating a point
Traverse the BSP tree with your query point. At each node, go left or right depending on where the query point is located w.r.t. the splitting plane of the node.
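As a sketch on the BspNode from the build example above (where internal nodes always have both children):

const BspNode* locate(const BspNode* node, const Vec3& p)
{
    while (node->front != nullptr) {                     // internal node: descend one side
        double s = node->plane.normal.x * p.x
                 + node->plane.normal.y * p.y
                 + node->plane.normal.z * p.z - node->plane.d;
        node = (s >= 0) ? node->front : node->back;
    }
    return node;                                         // the leaf cell containing p
}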
Compute a ray / model intersection
To find all the triangles of your model intersecting a ray (you may need this for picking on your map), do something similar to 1. Traverse the BSP tree with your query ray. At each node, compute the intersection of the ray with the splitting plane. Also check the primitives stored at the node (if any) and report the ones that intersect the ray. Continue traversing the children of this node whose cells intersect your ray.
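A traversal sketch along those lines, again using the BspNode above; reportIfHit is a placeholder for your actual ray/triangle test (e.g. Möller-Trumbore):

#include <cmath>

void reportIfHit(const Triangle& tri, const Vec3& o, const Vec3& dir); // your ray/triangle test

static double sd(const Plane& pl, const Vec3& p)         // signed distance to the plane
{
    return pl.normal.x*p.x + pl.normal.y*p.y + pl.normal.z*p.z - pl.d;
}

void raycast(const BspNode* node, const Vec3& o, const Vec3& dir, double tmin, double tmax)
{
    if (node == nullptr || tmin > tmax) return;
    for (const Triangle& tri : node->primitives)
        reportIfHit(tri, o, dir);                        // primitives stored at this node
    if (node->front == nullptr) return;                  // leaf: nothing below
    double denom  = node->plane.normal.x*dir.x + node->plane.normal.y*dir.y
                  + node->plane.normal.z*dir.z;
    double sEntry = sd(node->plane, o) + tmin * denom;   // which side the interval starts on
    const BspNode* nearSide = (sEntry >= 0) ? node->front : node->back;
    const BspNode* farSide  = (sEntry >= 0) ? node->back  : node->front;
    if (std::fabs(denom) < 1e-12) {                      // ray parallel to the plane
        raycast(nearSide, o, dir, tmin, tmax);
        return;
    }
    double t = -sd(node->plane, o) / denom;              // ray/plane intersection parameter
    if (t > tmin && t < tmax) {                          // crossing inside [tmin, tmax]: both sides
        raycast(nearSide, o, dir, tmin, t);
        raycast(farSide,  o, dir, t, tmax);
    } else {
        raycast(nearSide, o, dir, tmin, tmax);           // the interval stays on one side
    }
}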
Discarding invisible data
Another possible use is to discard pieces of your model that lie outside the view frustum of your camera (that's probably what you are interested in here). The view frustum is bounded by exactly six planes and has six quad faces. As in 1. and 2., you can traverse the BSP tree, check recursively which cells overlap the view frustum, and completely discard the ones (and the corresponding pieces of your model) that don't. For the plane / view frustum intersection test, you could check whether any of the six quads of the view frustum intersects the plane, or you could conservatively approximate the view frustum with a bounding volume (a sphere, an axis-aligned bounding box, or an oriented bounding box), or even combine both.
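For the conservative bounding-volume variant, a plane-vs-AABB classification sketch (sd() is the signed-distance helper from the ray sketch; a node's cell can be culled as soon as its box lies behind any one of the six frustum planes):

struct Aabb { Vec3 mn, mx; };

int sideOfPlane(const Aabb& b, const Plane& pl)          // +1 in front, -1 behind, 0 straddling
{
    // The box corners farthest along and against the plane normal.
    Vec3 pos = { pl.normal.x >= 0 ? b.mx.x : b.mn.x,
                 pl.normal.y >= 0 ? b.mx.y : b.mn.y,
                 pl.normal.z >= 0 ? b.mx.z : b.mn.z };
    Vec3 neg = { pl.normal.x >= 0 ? b.mn.x : b.mx.x,
                 pl.normal.y >= 0 ? b.mn.y : b.mx.y,
                 pl.normal.z >= 0 ? b.mn.z : b.mx.z };
    if (sd(pl, neg) >= 0) return +1;                     // even the most-negative corner is in front
    if (sd(pl, pos) <  0) return -1;                     // even the most-positive corner is behind
    return 0;
}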
That being said, the solution to your slow rendering problem might lie elsewhere (you may not be able to discard much data with a 3D BSP tree for your model):
62K squares is not that big: if you're using OpenGL, you should however not draw these squares individually or continuously stream the geometry to the GPU. You can put all the vertices in a single static vertex buffer and draw the quads by preparing a static index buffer containing the list of indices for the squares, with either triangle or (better) triangle-strip primitives, to draw the corresponding squares in a single draw call.
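A sketch of this setup in classic OpenGL; vertices, indices, and their counts stand for your application's data, and shader setup and error handling are omitted:

// #include your GL loader of choice (e.g. GLEW) before this.
GLuint vbo, ibo;

// One-time setup: upload everything once, since the terrain never changes.
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(float),
             vertices, GL_STATIC_DRAW);                  // x,y,z per grid vertex

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(GLuint),
             indices, GL_STATIC_DRAW);                   // e.g. two triangles per square

// Every frame: a single indexed draw call for the whole grid,
// instead of one call per square.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, (void*)0);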
Your data is highly structured (a regular grid with elevation). If you happen to have much larger data sets (that no longer fit in memory), then you need not only spatial partitioning (one that exploits the 2.5D structure and regularity of your data, like a quadtree) but perhaps LOD techniques as well (to replace pieces of your data with a cheaper representation instead of simply discarding them). You should then investigate LOD techniques for terrain rendering. This page lists a few resources (papers and implementations). A simplified Chunked LOD could be used as a starting point.

Related

At which stage of the pipeline should I do culling and clipping, and how should I reconstruct triangles after clipping?

I'm trying to implement a graphics pipeline in software. I have some problems with clipping and culling now.
Basically, there are two main concerns:
When should back-face culling take place: in eye coordinates, clip coordinates, or window coordinates? I initially performed culling in eye coordinates, thinking this would relieve the burden on the clipping process, since many back-facing vertices would already have been discarded. But later I realized that this way each vertex needs two matrix multiplications: multiply by the model-view matrix --> cull --> multiply by the perspective matrix, which increases the overhead to some extent.
How do I do clipping and reconstruct the triangles? As far as I know, clipping happens in clip coordinates (after the perspective transformation), in other words in homogeneous coordinates, where each vertex is tested for rejection by comparing its x, y, and z components with its w component. So far so good, right? But after that I need to reconstruct the triangles that have had one or two vertices discarded. I found that the Liang-Barsky algorithm could be helpful here, but in clip coordinates what clipping planes should I use? Should I just record the clipped triangles and reconstruct them in NDC?
Any idea will be helpful. Thanks.
(1)
Back-face culling can occur wherever you want.
On 3dfx hardware, and probably on other cards that only rasterised, it was implemented in window coordinates. As you say, that leaves you processing some vertices you never use, but you need to weigh that against your other costs.
You can also cull in world coordinates: you know the location of the camera, so you can form a vector from the camera to the face (just use any of its edge vertices) and test its dot product against the face normal.
When I was implementing a software rasteriser for a Z80-based micro, I went a step beyond that and transformed the camera into model space. You take the inverse of the model matrix (which was cheap in this case because the matrices were guaranteed to be orthonormal, so the transpose would do), apply it to the camera, and cull from there. It's still a vector difference and a dot product, but if you're using the surface normals only for culling, it saves transforming every one of them for the benefit of the camera. For that particular renderer I was then able to work forward from the visible faces to determine the visible vertices, and transform only those to window coordinates.
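A sketch of that model-space test (Vec3 is a hypothetical vector type; camPosModel is the camera position after applying the inverse model matrix):

struct Vec3 { double x, y, z; };

// True when the face's normal points away from the viewer, i.e. the face can be culled.
bool isBackFacing(const Vec3& faceVertex, const Vec3& faceNormal, const Vec3& camPosModel)
{
    Vec3 toFace = { faceVertex.x - camPosModel.x,        // camera-to-face vector
                    faceVertex.y - camPosModel.y,
                    faceVertex.z - camPosModel.z };
    double d = toFace.x*faceNormal.x + toFace.y*faceNormal.y + toFace.z*faceNormal.z;
    return d >= 0;
}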
(2)
A variant on Sutherland-Hodgman is the thing I remember seeing most often. You do a forward scan around the outside of the polygon, checking each edge in turn and adjusting appropriately.
So e.g. you start with the convex polygon between points (V1, V2, V3). For each clipping plane in turn you'd do something like:
for (size_t n = 0; n < input.size(); ++n)
{
    const Vertex& vn    = input[n];
    const Vertex& vnext = input[(n + 1) % input.size()]; // wraps to vertex 0 on the last edge
    if (distanceToPlane(vn) >= 0)                        // vn is on the good side of the plane
        output.push_back(vn);
    if (distanceToPlane(vn) * distanceToPlane(vnext) < 0) // the edge from vn to vnext crosses the plane
    {
        // find the point of intersection, I, and add it to the output
        output.push_back(intersectPlane(vn, vnext));
    }
}
And repeat for each plane. If you're worried about repeated costs, then you either need to adopt a structure with an extra level of indirection between faces and edges, or just keep a cache. You'd probably dash round the vertices once, marking them as in or out, then cache the point of intersection per edge, looked up via the key (v1, v2). If you've set yourself up with the extra level of indirection, then store the result in the edge object.

Given an irregular polygon's vertex list, how to create internal triangles to build a flat 3D mesh efficiently?

I'm using Unity, but the solution should be generic.
I will get user input from mouse clicks, which define the vertex list of a closed irregular polygon.
Those vertices will define the outer edges of a flat 3D mesh.
To procedurally generate a mesh in Unity, I have to specify all the vertices and how they are connected to form triangles.
So, for convex polygons it's trivial: I'd just make triangles with vertices 1,2,3 then 1,3,4, etc., forming something like a peacock tail.
But for concave polygons it's not so simple.
Is there an efficient algorithm to find the internal triangles?
You could make use of a constrained Delaunay triangulation (which is not trivial to implement!). Good library implementations are available in Triangle and CGAL, both running in O(n log n).
If the vertex set is small, the ear-clipping algorithm is also a possibility, although it won't necessarily give you a Delaunay triangulation (it will typically produce sub-optimal triangles) and runs in O(n^2). It is pretty easy to implement yourself, though.
Since the input vertices exist on a flat plane in 3d space, you could obtain a 2d problem by projecting onto the plane, computing the triangulation in 2d and then applying the same mesh topology to your 3d vertex set.
I've implemented the ear clipping algorithm as follows:
Iterate over the vertices until a convex vertex, v is found
Check whether any other vertex of the polygon lies within the triangle (v-1, v, v+1). If any does, you need to partition the polygon along the vertices v and the contained point that is farthest away from the line (v-1, v+1). Recursively evaluate both partitions.
If the triangle around vertex v contains no other vertices, add the triangle to your output list and remove vertex v, repeat until done.
Notes:
This is inherently a 2D operation even when working on 3D faces. To consider the problem in 2D, simply drop the coordinate in which the face's normal has the largest absolute component. (This is how you "project" the 3D face into 2D coordinates.) For example, if the face has normal (0,1,0), you would ignore the y coordinate and work in the x,z plane.
To determine which vertices are convex, you first need to know the polygon's winding. You can determine this by finding the leftmost (smallest x coordinate) vertex in the polygon (break ties by finding the smallest y). Such a vertex is always convex, so the winding of this vertex gives you the winding of the polygon.
You determine winding and/or convexity with the signed triangle area equation. See: http://softsurfer.com/Archive/algorithm_0101/algorithm_0101.htm. Depending on your polygon's winding, all convex vertices will form triangles with either positive area (counterclockwise winding) or negative area (clockwise winding).
The point-in-triangle formula is constructed from the signed-triangle-area formula. See: How to determine if a point is in a 2D triangle? (A compact sketch of both helpers follows these notes.)
In step 2, where you need to determine which vertex (v) is farthest away from the line, you can do so by forming the triangles (L0, v, L1) and checking which one has the largest area (in absolute value, unless you're assuming a specific winding direction).
This algorithm is not well defined for self-intersecting polygons, and due to the nature of floating-point precision you will likely encounter such a case. Some safeguards can be implemented for stability: a point should not be considered to be inside your triangle unless it is a concave point (such a case indicates self-intersection, and you should not partition your set along this vertex); and you may encounter a situation where a partition is entirely concave (i.e., it is wound differently from the original polygon's winding), in which case the partition should be discarded.
Because the algorithm is cyclic and involves partitioning the sets, it is highly efficient to use a doubly linked list structure with an array for storage. You can then partition the sets in O(1); however, the algorithm still has an average O(n^2) runtime. The best-case running time is actually a set where you need to partition many times, as this rapidly reduces the number of comparisons.
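As a minimal illustration of notes 3 and 4, one common way to write the two helpers in C++ (in 2D, i.e. after the projection from note 1; the types and names here are just for the sketch):

struct Vec2 { double x, y; };

// Twice the signed area of triangle (a, b, c): positive for counterclockwise
// winding, negative for clockwise.
double signedArea2(const Vec2& a, const Vec2& b, const Vec2& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Point-in-triangle from three signed-area tests: p is inside iff it lies on
// the same side of all three edges.
bool pointInTriangle(const Vec2& p, const Vec2& a, const Vec2& b, const Vec2& c)
{
    double s1 = signedArea2(a, b, p);
    double s2 = signedArea2(b, c, p);
    double s3 = signedArea2(c, a, p);
    return (s1 >= 0 && s2 >= 0 && s3 >= 0) || (s1 <= 0 && s2 <= 0 && s3 <= 0);
}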
There is a community script for triangulating concave polygons but I've not personally used it. The author claims it works on 3D points as well as 2D.
One hack I've used in the past if I want to constrain the problem to 2D is to use principal component analysis to find the 2 axes of greatest change in my 3D data and making these my "X" and "Y".

Polygons from BSP

I have a 3D volume given by a binary space partitioning tree. Usually these are made from polygon models, and the split polygons are already stored inside the tree nodes.
But mine is not, so I have no polygons. Every node has nothing but its cutting plane (given, for example, by a normal and a distance from the origin). The tree still represents a solid 3D volume, defined by all the cuts made. However, for visualisation I need a polygonal mesh of this volume. How can that be reconstructed efficiently?
The crude method would be to convert the infinite half-spaces of the leaves to large enough polyhedra (e.g. cubes) and push every single one of them up the tree, cutting it by every node's plane it passes. That seems extremely costly, as the tree may be unbalanced (e.g. if naively built from a convex polyhedron). Is there any classic solution?
In order to recover the polygonal surface you need to intersect the planes: each vertex of a polygon is generated by the intersection of three planes, and each edge by the intersection of two planes. But making this efficient and numerically stable is no trivial task. So I propose to use qhalf, which is part of qhull. Documentation of the input and output of qhalf can be found here. Of course, you can use qhull (and the functionality from qhalf) as a library.

Detecting arbitrary shapes

Greetings,
We have a set of points which represents the intersection of a 3D body and a horizontal plane. We would like to detect the 2D shapes that represent the cross sections of the body. There can be one or more such shapes. We found articles that discuss how to operate on images using the Hough Transform, but we may have thousands of such points, so converting to an image is very wasteful. Is there a simpler way to do this?
Thank you
In converting your 3D model to a set of points, you have thrown away the information required to find the intersection shapes. Walk the edge-face connectivity graph of your 3D model to find the edge-plane intersection points in order.
Assuming you have, or can construct, the 3D model topology (some number of vertices, edges between vertices, faces bound by edges):
Iterate through the edge list until you find one that intersects the test plane, add it to a list
Pick one of the faces that share this edge
Iterate through the other edges of that face to find the next intersection, add it to the list
Repeat for the other face that shares that edge until you arrive back at the starting edge
You've built an ordered list of edges that intersect the plane - it's trivial to linearly interpolate each edge to find the intersection points, in order, that form the intersection shape. Note that this process assumes that the face polygons are convex, which in your case they are.
If your volume is concave you may have multiple discrete intersection shapes, so you need to repeat this process until all edges have been examined.
There's some Java code that does this here.
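A sketch of that walk, assuming a hypothetical mesh representation where each edge knows its two endpoints and its two adjacent faces, each face lists its edges, and dist holds the signed distance of every vertex to the cutting plane:

#include <vector>

struct Edge { int v0, v1; int face[2]; };                // two endpoints, two adjacent faces
struct Face { std::vector<int> edges; };

static bool crossesPlane(const Edge& e, const std::vector<double>& dist)
{
    return dist[e.v0] * dist[e.v1] < 0;                  // endpoints on opposite sides
}

// Returns the ordered loop of edge indices cut by the plane, starting from startEdge.
std::vector<int> traceSection(const std::vector<Edge>& edges,
                              const std::vector<Face>& faces,
                              const std::vector<double>& dist, int startEdge)
{
    std::vector<int> loop;
    int e = startEdge;
    int f = edges[e].face[0];                            // picking face[1] walks the other way
    do {
        loop.push_back(e);
        int next = -1;
        for (int candidate : faces[f].edges)             // scan the other edges of face f
            if (candidate != e && crossesPlane(edges[candidate], dist)) {
                next = candidate;
                break;
            }
        if (next < 0) break;                             // open surface or degenerate cut
        // Step across the found edge into its other adjacent face.
        f = (edges[next].face[0] == f) ? edges[next].face[1] : edges[next].face[0];
        e = next;
    } while (e != startEdge);
    return loop;   // interpolate along each edge by dist to get the points, in order
}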
The algorithm / code from the accepted answer does not work for complex special cases, where the plane intersects some vertices of a concave surface. In this case, "walking" the edge-face connectivity graph greedily can close some of the polygons prematurely.
What happens is that, because the plane intersects a vertex, at some point while walking the graph there are two possibilities for the next edge, and it matters which one is chosen.
A possible solution is to implement a graph traversal algorithm (for instance depth-first search), and choose the longest loop which contains the starting edge.
It looks like you want to combine the intersection points back into connected figures using some detection scheme or a Hough Transform.
A much simpler and more robust way is to immediately get not just the intersection points but the contours of the 3D body where the plane cuts it.
To construct contours on a body given by a triangular mesh, define a value at each mesh vertex equal to the signed distance from the plane (positive on one side, negative on the other). The marching squares algorithm for isovalue = 0 can then be applied to extract the segments of the contours.
This algorithm works well even when the plane passes through a vertex or an edge of the mesh.
To better understand the result of a plane section, take a look at this short video. Following the links there, one can find an implementation as well.
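A sketch of the per-triangle step of that idea (types are assumptions; this minimal version also ignores the exactly-on-plane vertex cases, which a robust marching implementation must handle explicitly, as the answer notes):

#include <vector>

struct Vec3    { double x, y, z; };
struct Segment { Vec3 a, b; };

// Point on segment (p, q) where the signed distance interpolates to zero.
static Vec3 lerpZero(const Vec3& p, const Vec3& q, double dp, double dq)
{
    double t = dp / (dp - dq);
    return { p.x + t*(q.x - p.x), p.y + t*(q.y - p.y), p.z + t*(q.z - p.z) };
}

// v: the triangle's vertices; d: their signed distances to the cutting plane.
// Emits one contour segment when the triangle straddles the plane.
void emitSegment(const Vec3 v[3], const double d[3], std::vector<Segment>& out)
{
    Vec3 pts[2];
    int n = 0;
    for (int i = 0; i < 3 && n < 2; ++i) {
        int j = (i + 1) % 3;
        if (d[i] * d[j] < 0)                             // edge (i, j) crosses the plane
            pts[n++] = lerpZero(v[i], v[j], d[i], d[j]);
    }
    if (n == 2) out.push_back({ pts[0], pts[1] });
}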

What does 'Polygon' mean in terms of 3D Graphics?

An old Direct3D book says
"...you can achieve an acceptable frame
rate with hardware acceleration while
displaying between 2000 and 4000
polygons per frame..."
What is one polygon in Direct3D? Do they mean one primitive (indexed or otherwise) or one triangle?
That book means triangles. Otherwise, what if I wanted 1000-sided polygons? Could I still achieve 2000-4000 such shapes per frame?
In practice, the only thing you'll want it to be is a triangle, because if a polygon is not a triangle it's generally tessellated into triangles anyway (e.g., a quad consists of two triangles, et cetera). A basic triangulation (tessellation) algorithm for a convex polygon is really simple: you fan out from one vertex, turning it plus each consecutive pair of the remaining vertices into a triangle.
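That fan loop as a short sketch: for a convex polygon with n vertices, the index triples (0, i, i+1) produce the n - 2 triangles:

#include <vector>

std::vector<int> fanTriangulate(int n)                   // n = number of polygon vertices
{
    std::vector<int> indices;
    for (int i = 1; i + 1 < n; ++i) {
        indices.push_back(0);                            // fan apex
        indices.push_back(i);
        indices.push_back(i + 1);
    }
    return indices;                                      // e.g. n = 5 -> (0,1,2) (0,2,3) (0,3,4)
}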
Here, a "polygon" refers to a triangle. All . However, as you point out, there are many more variables than just the number of triangles which determine performance.
Key issues that matter are:
The format of storage (indexed or not; list, fan, or strip)
The location of storage (host-memory vertex arrays, host-memory vertex buffers, or GPU-memory vertex buffers)
The mode of rendering (is the draw primitive command issued fully from the host, or via instancing)
Triangle size
Together, those variables can create much greater than a 2x variation in performance.
Similarly, the hardware on which the application is running may vary 10x or more in performance in the real world: a GPU (or integrated graphics processor) that was low-end in 2005 will perform 10-100x slower in any meaningful metric than a current top-of-the-line GPU.
All told, any recommendation that you use 2000-4000 triangles is so ridiculously outdated that it should be entirely ignored today. Even low-end hardware today can easily push 100,000 triangles in a frame under reasonable conditions. Further, most visually interesting applications today are dominated by pixel shading performance, not triangle count.
General rules of thumb for achieving good triangle throughput today:
Use [indexed] triangle (or quad) lists
Store data in GPU-memory vertex buffers
Draw large batches with each draw primitives call (thousands of primitives)
Use triangles mostly >= 16 pixels on screen
Don't use the Geometry Shader (especially for geometry amplification)
Do all of those things, and any machine today should be able to render tens or hundreds of thousands of triangles with ease.
According to this page, a polygon is n-sided in Direct3D.
In C#:
public static Mesh Polygon(
Device device,
float length,
int sides
)
As others already said, polygons here means triangles.
The main advantage of triangles is that, since 3 points define a plane, a triangle is planar by definition. This means that every point within the triangle is exactly defined as a linear combination of its vertices. Four or more vertices aren't necessarily coplanar, and they don't define a unique curved surface.
An advantage more relevant to mechanical modeling than to graphics is that triangles are also rigid (undeformable).
