Finding vertex arrays for common objects - graphics

Where can I find 3D vertex data for some common objects? Consider the teapot, for example. If possible, models with vertex colors are preferred.
This question does not strictly belong here. If necessary, move it.

Related

Does an algorithm exist to calculate the union of two watertight meshes?

I have two watertight models (meshes). I would like to generate a mesh that represents the intersection of these two models.
Does an algorithm exist for calculating the mesh that represents the intersection of two models? If so, can you provide (high level) details of the algorithm or a reference?
See this answer to a related problem.
For each mesh, an oracle function can be constructed that determines whether a query line segment intersects the surface (and where), as well as whether the segment endpoints lie inside or outside the solid. The two oracle functions can then be combined to construct an oracle function for the intersection of the two solids bounded by the meshes. This new oracle function can then be fed to surface meshing algorithms such as Marching Cubes variants or Delaunay-based approaches (see 3D Surface Mesh Generation in the CGAL documentation) to reconstruct a mesh representation of the intersection.
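As a minimal sketch of the oracle-combination idea, assuming each solid already exposes an inside/outside point predicate (the names and types below are illustrative, not part of any particular library):

```cpp
#include <functional>

// Hypothetical point type and oracle signature: an oracle reports whether a
// query point lies inside the solid bounded by a watertight mesh.
struct Point3 { double x, y, z; };
using InsideOracle = std::function<bool(const Point3&)>;

// The oracle for the boolean intersection of two solids is simply the
// conjunction of the two per-solid oracles: a point belongs to A ∩ B
// exactly when it is inside both A and B.
InsideOracle make_intersection_oracle(InsideOracle insideA, InsideOracle insideB) {
    return [=](const Point3& p) { return insideA(p) && insideB(p); };
}
```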

Algorithm for cutting a mesh using another mesh

I am looking for an algorithm that given two meshes could clip one using another.
The simplest form of this is clipping a mesh using a plane. I've already implemented that by following something similar to what is described here.
What it does is basically inspect all mesh vertices and triangles with respect to the plane (the plane's normal and a point on it are given). If a triangle is completely above the plane, it is left untouched. If it falls completely below the plane, it is discarded. If some of the triangle's edges intersect the plane, the intersection points with the plane are computed and added as new vertices. Finally, a cap is generated to close the hole where the mesh was cut.
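For reference, a minimal sketch of the classification step of that plane-clipping pass (the types and tolerance are illustrative):

```cpp
#include <array>

// Minimal vector type; the plane is given by a point on it and a normal,
// as in the description above.
struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Signed distance of a point to the plane through 'planePoint' with normal 'n'
// (positive on the side the normal points to).
static double signedDistance(const Vec3& p, const Vec3& planePoint, const Vec3& n) {
    return dot({p.x - planePoint.x, p.y - planePoint.y, p.z - planePoint.z}, n);
}

enum class TriangleSide { Above, Below, Straddling };

// Classify one triangle against the plane: fully above (kept), fully below
// (discarded), or straddling (its edges must be intersected with the plane
// and the new vertices inserted, as described above).
TriangleSide classify(const std::array<Vec3, 3>& tri,
                      const Vec3& planePoint, const Vec3& normal,
                      double eps = 1e-9) {
    int above = 0, below = 0;
    for (const Vec3& v : tri) {
        double d = signedDistance(v, planePoint, normal);
        if (d > eps)       ++above;
        else if (d < -eps) ++below;
    }
    if (below == 0) return TriangleSide::Above;
    if (above == 0) return TriangleSide::Below;
    return TriangleSide::Straddling;
}
```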
The problem is that the algorithm assumes that the plane is unlimited, therefore whatever is in its path is clipped. In the simplest form, I need an extension of this without the assumption of a plane of "infinite" size.
To clarify, imagine that we have a 3D model of a desk with 2 boxes on it. The boxes are adjacent (but not touching or stacked). The user will define a cutting plane of a limited width and height underneath the first box and performs the cut. We end up with a desk model (mesh) with a box on it and another box (mesh) that can be freely moved around/manipulated.
In the general form, I'd like the user to be able to define a bounding box for the box he/she wants to separate from the desk model and perform the cut using that bounding box.
If I could extend the algorithm I already have to an algorithm with limited-sized planes, that would be great for now.
What you're looking for are constructive solid geometry/boolean algorithms with arbitrary meshes. It's considerably more complex than slicing meshes by an infinite plane.
Among the earliest and simplest research in this area, and a good starting point, is Constructive Solid Geometry for Polyhedral Objects by Laidlaw, Trumbore, and Hughes.
http://cs.brown.edu/~jfh/papers/Laidlaw-CSG-1986/main.htm
More elaborate solutions extend upon this subject with a variety of data structures.
The real complexity of the operation lies in the algorithm that slices one triangle against another. The nightmare of implementing robust CSG is numerical precision: with objects even slightly more complex than a cube, it's easy to run into cases where a slice lands just barely next to a vertex (at which point you face the tough decision of whether to merge the new split vertex before carrying out more splits), where polygons are coplanar (or nearly so), and so on.
So I suggest initially erring on the side of very high-precision floating-point numbers, possibly even higher than double precision, and focusing on getting something working correctly and robustly. You can optimize later (the first pass should be to add an accelerator like an octree/kd-tree/BVH), but you'll avoid many headaches this way in your first iteration.
This is vastly simpler to implement at render time if you're writing a raytracer rather than modeling software. With raytracers, all you have to do for this kind of arbitrary clipping is treat the polygons of the object being subtracted as flipped during the culling process. The problem is easy to solve robustly at the ray level, but quite a bit harder at the geometric level.
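One common way a raytracer evaluates such booleans along a ray is to combine the parametric entry/exit intervals of each solid; here is a small sketch of the subtraction case (the Span type and names are illustrative, and both input lists are assumed sorted and non-overlapping):

```cpp
#include <algorithm>
#include <vector>

// One ray/solid overlap span: the parametric interval [tEnter, tExit] along
// the ray where the ray is inside the solid.
struct Span { double tEnter, tExit; };

// Subtract solid B from solid A along a single ray: keep the parts of A's
// spans that are not covered by any span of B.
std::vector<Span> subtractSpans(const std::vector<Span>& a, const std::vector<Span>& b) {
    std::vector<Span> result;
    for (const Span& s : a) {
        double start = s.tEnter;
        for (const Span& cut : b) {
            if (cut.tExit <= start || cut.tEnter >= s.tExit) continue;  // no overlap
            if (cut.tEnter > start)
                result.push_back({start, cut.tEnter});  // piece before the cut survives
            start = std::max(start, cut.tExit);         // continue after the cut
            if (start >= s.tExit) break;
        }
        if (start < s.tExit)
            result.push_back({start, s.tExit});
    }
    return result;
}
```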
Another thing you can do to make your life much easier, if you can afford it, is to voxelize your objects, compute the subtraction/addition/union on the voxels, and then translate the voxels back into a mesh. This is far easier to make robust, but harder to do efficiently, and the voxel-to-polygon conversion can get quite involved if you want better results than what marching cubes provides.
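The per-voxel boolean logic itself is trivial; a sketch on a dense occupancy grid follows (the VoxelGrid layout is illustrative, and the hard parts, voxelization and converting the result back to a mesh, are not shown):

```cpp
#include <cstddef>
#include <vector>

// Dense occupancy grid: true = the voxel lies inside the solid.
// Both grids must share the same dimensions and origin.
struct VoxelGrid {
    int nx = 0, ny = 0, nz = 0;
    std::vector<bool> occupied;  // size nx * ny * nz
};

// Per-voxel boolean operations between two voxelized solids.
VoxelGrid voxelUnion(const VoxelGrid& a, const VoxelGrid& b) {
    VoxelGrid out = a;
    for (std::size_t v = 0; v < out.occupied.size(); ++v)
        out.occupied[v] = a.occupied[v] || b.occupied[v];
    return out;
}

VoxelGrid voxelIntersection(const VoxelGrid& a, const VoxelGrid& b) {
    VoxelGrid out = a;
    for (std::size_t v = 0; v < out.occupied.size(); ++v)
        out.occupied[v] = a.occupied[v] && b.occupied[v];
    return out;
}

VoxelGrid voxelSubtraction(const VoxelGrid& a, const VoxelGrid& b) {
    VoxelGrid out = a;
    for (std::size_t v = 0; v < out.occupied.size(); ++v)
        out.occupied[v] = a.occupied[v] && !b.occupied[v];
    return out;
}
```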
It's a really tough area to do extremely well and it requires perseverance, which is why dedicated libraries exist for it, e.g. http://carve-csg.com/about.
If anyone is interested, there is now a solution to this problem in the CGAL library. It allows clipping one triangular mesh using another mesh as the bounding volume. A usage example can be found here.

Binary space partition tree for 3D map

I have a project which takes a picture of a topographic map and turns it into a 3D object.
When I draw the 3D rectangles of the object, it renders very slowly. I read about BSP trees but didn't really understand them. Can someone please explain how to use a BSP tree in 3D (maybe with an example), and how to use it in my case, where some mountains in the map occlude other parts, so I need to order the rectangles to draw them correctly?
In n dimensions, a BSP tree is a spatial partitioning data structure that recursively splits space into cells using splitting hyperplanes (or even hypersurfaces).
In 2D, the whole space is recursively split with 2D lines (into (possibly infinite) convex polygons).
In 3D, the whole space is recursively split with 3D planes (into (possibly infinite) convex polytopes).
How to build a BSP tree in 3D (from a model)
The model is made of a list of primitives (triangles or quads, which I believe is what you call rectangles).
Start with an initial root node in the BSP tree that represents a cell covering the whole 3D space and initially holding all the primitives of your model.
Compute an optimal splitting plane for the considered primitives.
The goal of this step is to find a plane that will split the primitives into two groups of primitives of approximately the same size (either the same spatial extents or the same count of primitives).
A simple splitting strategy could be to choose a direction at random (which will be the normal of your splitting plane), sort all the primitives spatially along this axis, and then traverse the sorted list to find the position that splits the primitives into two groups of roughly equal size (i.e. the median position of the primitives along this axis). Together, this direction and this position define the splitting plane.
One typically used splitting strategy is however:
Compute the centroid of all the considered primitives.
Compute the covariance matrix of all the considered primitives.
The centroid gives the position of the splitting plane.
The eigenvector for the largest eigenvalue of the covariance matrix gives the normal of the splitting plane, which is the direction where the primitives are the most spread (and where the current cell should be split).
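A small sketch of that covariance-based choice, using the Eigen library for the eigen-decomposition (the representative points, e.g. primitive centroids, and the names are illustrative):

```cpp
#include <Eigen/Dense>
#include <vector>

struct SplitPlane { Eigen::Vector3d point; Eigen::Vector3d normal; };

// The plane passes through the centroid of the representative points, and its
// normal is the eigenvector of the covariance matrix with the largest
// eigenvalue (the direction of greatest spread).
SplitPlane covarianceSplitPlane(const std::vector<Eigen::Vector3d>& points) {
    Eigen::Vector3d centroid = Eigen::Vector3d::Zero();
    for (const auto& p : points) centroid += p;
    centroid /= double(points.size());

    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
    for (const auto& p : points) {
        Eigen::Vector3d d = p - centroid;
        cov += d * d.transpose();
    }
    cov /= double(points.size());

    // SelfAdjointEigenSolver returns eigenvalues in increasing order,
    // so the last column holds the eigenvector of the largest eigenvalue.
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> solver(cov);
    SplitPlane plane;
    plane.point = centroid;
    plane.normal = solver.eigenvectors().col(2);
    return plane;
}
```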
Split the current node, create two child nodes and assign primitives to each of them or to the current node.
Having found a suitable splitting plane in 1., the 3D space can now be divided into two half-spaces: one positive, on the side the plane normal points to, and one negative, on the other side of the splitting plane. The goal of this step is to partition the considered primitives by assigning them to the half-space where they belong.
Test each primitive of the current node against the splitting plane and assign it to either the left or right child node depending on whether it lies in the positive or in the negative half-space.
Some primitives may intersect the splitting plane. They can be clipped by the plane into smaller primitives (and maybe also triangulated) so that these smaller primitives are fully inside one of the half-spaces and only belong to one of the cells corresponding to the child nodes. Another option is to simply attach the overlapping primitives to the current node.
Apply this splitting strategy recursively to the created child nodes (and their respective children), until some criterion to stop splitting is met (typically, not having enough primitives in the current node).
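Putting the steps together, here is a compact, illustrative build sketch; for brevity it uses a simple median split along the x axis and assigns straddling primitives by centroid instead of clipping them (all names and types are made up for this example):

```cpp
#include <algorithm>
#include <cstddef>
#include <memory>
#include <vector>

struct Vec3 { double x, y, z; };
struct Primitive { Vec3 centroid; /* plus the triangle/quad vertex data */ };
struct Plane { Vec3 point, normal; };

static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

struct BspNode {
    Plane splitPlane{};
    std::vector<Primitive> primitives;   // contents of a leaf
    std::unique_ptr<BspNode> front, back;
};

std::unique_ptr<BspNode> build(std::vector<Primitive> prims, std::size_t minPrims = 16) {
    auto node = std::make_unique<BspNode>();
    if (prims.size() <= minPrims) {      // step 3: stop splitting
        node->primitives = std::move(prims);
        return node;
    }
    // Step 1 (simplified): median split along the x axis of the primitive centroids.
    std::sort(prims.begin(), prims.end(),
              [](const Primitive& a, const Primitive& b) { return a.centroid.x < b.centroid.x; });
    node->splitPlane = {prims[prims.size() / 2].centroid, {1.0, 0.0, 0.0}};

    // Step 2 (simplified): assign each primitive by the side its centroid falls on.
    std::vector<Primitive> frontPrims, backPrims;
    for (Primitive& p : prims) {
        double d = dot(sub(p.centroid, node->splitPlane.point), node->splitPlane.normal);
        (d >= 0.0 ? frontPrims : backPrims).push_back(std::move(p));
    }
    if (frontPrims.empty() || backPrims.empty()) {
        // Degenerate split (e.g. many identical centroids): make this node a leaf.
        node->primitives = std::move(frontPrims.empty() ? backPrims : frontPrims);
        return node;
    }
    node->front = build(std::move(frontPrims), minPrims);
    node->back  = build(std::move(backPrims), minPrims);
    return node;
}
```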
How to use a BSP tree in 3D
In all use cases, the hierarchical structure of the BSP tree is used to discard the parts of the model that are irrelevant for the query.
Locating a point
Traverse the BSP tree with your query point. At each node, go left or right depending on where the query point is located w.r.t. the splitting plane of the node.
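Reusing the types and helpers from the build sketch above, point location is a simple walk down the tree:

```cpp
// Descend into the child on the same side of each splitting plane as the
// query point, until a leaf cell is reached.
const BspNode* locate(const BspNode* node, const Vec3& query) {
    while (node && (node->front || node->back)) {
        double d = dot(sub(query, node->splitPlane.point), node->splitPlane.normal);
        node = (d >= 0.0) ? node->front.get() : node->back.get();
    }
    return node;  // the leaf whose cell contains the query point
}
```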
Compute a ray / model intersection
To find all the triangles of your model intersecting a ray (you may need this for picking on your map), do something similar to 1. Traverse the BSP tree with your query ray. At each node, compute the intersection of the ray with the splitting plane. Also check the primitives stored at the node (if any) and report the ones that intersect the ray. Continue traversing the children of this node whose cells intersect your ray.
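A corresponding traversal sketch, again reusing the types and helpers from the build sketch (the query ray is represented here as a segment from p0 to p1, and the exact segment/triangle test on the collected candidates is left out):

```cpp
#include <vector>

// Visit the children whose half-space the segment [p0, p1] overlaps, and
// collect the primitives stored at each visited node as candidates.
void traverseSegment(const BspNode* node, const Vec3& p0, const Vec3& p1,
                     std::vector<const Primitive*>& candidates) {
    if (!node) return;
    for (const Primitive& prim : node->primitives)
        candidates.push_back(&prim);          // exact intersection test done later
    if (!node->front && !node->back) return;  // leaf

    double d0 = dot(sub(p0, node->splitPlane.point), node->splitPlane.normal);
    double d1 = dot(sub(p1, node->splitPlane.point), node->splitPlane.normal);
    if (d0 >= 0.0 || d1 >= 0.0) traverseSegment(node->front.get(), p0, p1, candidates);
    if (d0 <= 0.0 || d1 <= 0.0) traverseSegment(node->back.get(),  p0, p1, candidates);
}
```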
Discarding invisible data
Another possible use is to discard pieces of your model that lie outside the view frustum of your camera (that's probably what you are interested in here). The view frustum is bounded by exactly six planes and has six quad faces. As in 1. and 2., you can traverse the BSP tree, recursively check which cells overlap the view frustum, and completely discard the ones (and the corresponding pieces of your model) that don't. For the plane / view frustum intersection test, you could check whether any of the six quads of the view frustum intersect the plane, or you could conservatively approximate the view frustum with a bounding volume (sphere / axis-aligned bounding box / oriented bounding box), or even do a combination of both.
That being said, the solution to your slow rendering problem might be elsewhere (you may not be able to discard a lot of data with a 3D BSP tree for your model):
62K squares is not that big: if you're using OpenGL, you should, however, not draw these squares individually or continuously stream the geometry to the GPU. Put all the vertices in a single static vertex buffer and prepare a static index buffer listing the indices of the squares (using either triangles or, better, triangle strips), so that the corresponding squares are drawn in a single draw call.
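A sketch of that setup in core-profile OpenGL; it assumes a valid GL context, a function loader (GLEW here), and a bound shader program with a vec3 position attribute at location 0:

```cpp
#include <GL/glew.h>
#include <vector>

struct GridMesh { GLuint vao = 0, vbo = 0, ibo = 0; GLsizei indexCount = 0; };

// Upload once (static data), then reuse every frame.
GridMesh uploadGrid(const std::vector<float>& positions,       // x, y, z per vertex
                    const std::vector<unsigned int>& indices)  // 3 indices per triangle
{
    GridMesh mesh;
    mesh.indexCount = static_cast<GLsizei>(indices.size());

    glGenVertexArrays(1, &mesh.vao);
    glBindVertexArray(mesh.vao);

    glGenBuffers(1, &mesh.vbo);
    glBindBuffer(GL_ARRAY_BUFFER, mesh.vbo);
    glBufferData(GL_ARRAY_BUFFER, positions.size() * sizeof(float),
                 positions.data(), GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), nullptr);

    glGenBuffers(1, &mesh.ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh.ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int),
                 indices.data(), GL_STATIC_DRAW);
    return mesh;
}

// One draw call for the whole grid, every frame.
void drawGrid(const GridMesh& mesh)
{
    glBindVertexArray(mesh.vao);
    glDrawElements(GL_TRIANGLES, mesh.indexCount, GL_UNSIGNED_INT, nullptr);
}
```

The upload happens once; the per-frame cost is then a single glDrawElements call.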
Your data is highly structured (a regular grid with elevation). If you happen to have much larger data sets (that don't even fit in memory anymore), then you need not only spatial partitioning (that exploits the 2.5D structure of your data and its regularity, like a quadtree) but perhaps LOD techniques as well (to replace pieces of your data by a cheaper representation instead of simply discarding the data). You should then investigate LOD techniques for terrain rendering. This page lists a few resources (papers + implementations). A simplified Chunked LOD could be used as a starting point.

To train a Decision Tree model, what is the best way to deal with attributes represented by a vector?

In most material discussing Decision Trees, each attribute is represented by a single value, and these values are concatenated into a feature vector. That makes sense, since normally the attributes are independent of each other.
However, in practice, some attributes can only be represented as a vector or matrix, for example a GPS coordinate (x, y) on a 2D map. If x and y are correlated (e.g. with a nonlinear dependence), simply concatenating them with the other attributes is not a good solution. I wonder whether there are better techniques to deal with them?
thanks

Graphics-Related Question: Mesh and Geometry

What's the difference between mesh and geometry? Aren't they the same? i.e. collection of vertices that form triangles?
A point is geometry, but it is not a mesh. A curve is geometry, but it is not a mesh. An iso-surface is geometry, but it is not... well, you get the point by now.
Meshes are geometry, not the other way around.
Geometry in the context of computing is far more limited than geometry as a branch of mathematics. There are only a few types of geometry typically used in computer graphics. Sprites are used when rendering points (particles), line segments are used when rendering curves, and meshes are used when rendering surface-like geometry.
A mesh is typically a collection of polygons/geometric objects. For instance triangles, quads or a mixture of various polygons. A mesh is simply a more complex shape.
From Wikipedia:
Geometry is a part of mathematics concerned with questions of size, shape, and relative position of figures and with properties of space.
IMO a mesh falls under that definition.
In the context implied by your question:
A mesh is a collection of polygons arranged in such a way that each polygon shares at least one vertex with another polygon in that collection. You can reach any polygon in a mesh from any other polygon in that mesh by traversing the edges and vertices that define those polygons.
Geometry refers to any object in space whose properties may be described according to the principles of the branch of mathematics known as geometry.
Note that the term "geometry" has different meanings in mathematics and in rendering. In rendering it usually denotes what is static in a scene (walls, etc.). What is widely called a "mesh" is a group of geometric objects (basically triangles) that describe or form an "object" in the scene - pretty much as envalid said, but usually a mesh forms a single object or entity in a scene. Very often that is how rendering engines use the term: the geometrical data of each scene element (object, entity) composes that element's mesh.
Although this is tagged in "graphics", I think the answer connects with the interpretation from computational physics. There, we usually think of the geometry as an abstraction of the system that is to be represented/simulated, while the mesh is an approximation of the geometry - a compromise we usually have to make to be able to represent the spatial domain within the finite memory of the machine.
You can think of them basically as regular or unstructured sets of points "sprayed" on a surface or within a volume in space.
To be able to do visualization/simulation, it is also necessary to determine the neighbors of each point - for example using Delaunay triangulation which allows you to group sets of points into elements (for which you can solve algebraic versions of the equations describing your system).
In the context of surface representation in computer graphics, I think all major APIs (e.g. OpenGL) have functions which can display these primitives (which can be triangles as given by Delaunay, quads or maybe some other elements).
