I am currently trying to construct the area covered by a device over an operating period.
The first step in this process appears to be constructing a polygon of the covered area.
Since the coverage pattern is not a convex shape, a convex hull overstates the covered area by spanning across its concave regions.
I have found a paper that appears to cover the concept of non-convex hull generation, but no discussion of how to implement it in a high-level language.
http://www.geosensor.net/papers/duckham08.PR.pdf
Has anyone seen a straightforward algorithm for constructing a non-convex (concave) hull, or perhaps any Python code that achieves the same result?
I have tried convex hulls (mainly qhull) with a limited edge size, with only limited success.
I have also noticed some libraries whose licenses would not allow redistribution, so unfortunately they are off the table.
Any better ideas or cookbooks?
You might try looking into Alpha Shapes. The CGAL library can compute them.
Edit: I see that the paper you linked references alpha shapes, and also includes an algorithm listing. Is that not high-level enough for you? Since you tagged the question Python, I'm sure there are Delaunay triangulation libraries available in Python, and the triangulation is, I think, the hardest part of implementing the algorithm; you just need to make sure you can modify the resulting triangulation output. The boundary query functions can probably be implemented with associative arrays.
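For concreteness, here's a rough Python sketch of the alpha-shape idea using scipy's Delaunay triangulation: keep only triangles whose circumradius is below a threshold alpha (a tuning parameter you'd pick for your data), and take the edges used by exactly one surviving triangle as the boundary. This is not the paper's exact algorithm, just a minimal approximation assuming numpy/scipy are available.

    import numpy as np
    from scipy.spatial import Delaunay

    def concave_hull_edges(points, alpha):
        """Return boundary edges (index pairs) of an alpha-shape-like hull."""
        points = np.asarray(points)
        tri = Delaunay(points)
        edge_count = {}
        for ia, ib, ic in tri.simplices:
            a, b, c = points[ia], points[ib], points[ic]
            # Circumradius from the side lengths (Heron's formula for the area).
            la = np.linalg.norm(b - c)
            lb = np.linalg.norm(a - c)
            lc = np.linalg.norm(a - b)
            s = (la + lb + lc) / 2.0
            area = max(s * (s - la) * (s - lb) * (s - lc), 1e-12) ** 0.5
            if la * lb * lc / (4.0 * area) > alpha:
                continue  # drop "long, thin" triangles that span concavities
            for i, j in ((ia, ib), (ib, ic), (ic, ia)):
                key = tuple(sorted((i, j)))
                edge_count[key] = edge_count.get(key, 0) + 1
        # Edges used by exactly one kept triangle lie on the hull boundary.
        return [e for e, n in edge_count.items() if n == 1]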
I am looking for an algorithm that given two meshes could clip one using another.
The simplest form of this is clipping a mesh using a plane. I've already implemented that by following something similar to what is described here.
What it does, basically, is classify every mesh vertex and triangle with respect to the plane (the plane's normal and a point on it are given). If a triangle lies completely above the plane, it is left untouched. If it falls completely below the plane, it is discarded. If some of the triangle's edges intersect the plane, the intersection points with the plane are calculated and added as new vertices. Finally, a cap is generated to close the hole where the mesh was cut.
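To make the classification step concrete, here is a rough Python sketch of the per-triangle test against the plane (given by a point and a normal). The actual splitting and cap generation described above are omitted, and the helper names are illustrative only.

    import numpy as np

    def signed_distance(v, p0, n):
        """Signed distance of vertex v from the plane through p0 with normal n."""
        return np.dot(v - p0, n)

    def classify_triangle(tri, p0, n, eps=1e-9):
        """tri: 3x3 array of vertices. Returns 'above', 'below' or 'split'."""
        d = [signed_distance(v, p0, n) for v in tri]
        if all(di >= -eps for di in d):
            return "above"          # keep untouched
        if all(di <= eps for di in d):
            return "below"          # discard
        return "split"              # edges cross the plane; compute intersections

    def edge_plane_intersection(a, b, p0, n):
        """Point where segment a-b crosses the plane (assumes it does cross)."""
        da, db = signed_distance(a, p0, n), signed_distance(b, p0, n)
        t = da / (da - db)
        return a + t * (b - a)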
The problem is that the algorithm assumes that the plane is unlimited, therefore whatever is in its path is clipped. In the simplest form, I need an extension of this without the assumption of a plane of "infinite" size.
To clarify, imagine that we have a 3D model of a desk with 2 boxes on it. The boxes are adjacent (but not touching or stacked). The user will define a cutting plane of a limited width and height underneath the first box and performs the cut. We end up with a desk model (mesh) with a box on it and another box (mesh) that can be freely moved around/manipulated.
In the general form, I'd like the user to be able to define a bounding box for the box he/she wants to separate from the desk model and perform the cut using that bounding box.
If I could extend the algorithm I already have to an algorithm with limited-sized planes, that would be great for now.
What you're looking for are constructive solid geometry/boolean algorithms with arbitrary meshes. It's considerably more complex than slicing meshes by an infinite plane.
Among the earliest and simplest research in this area, and a good starting point, is Constructive Solid Geometry for Polyhedral Objects by Laidlaw, Trumbore, and Hughes.
http://cs.brown.edu/~jfh/papers/Laidlaw-CSG-1986/main.htm
More elaborate solutions extend upon this subject with a variety of data structures.
The real complexity of the operation lies in the slicing algorithm that slices one triangle against another. The nightmare of implementing robust CSG lies in numerical precision. Once you involve objects more complex than a cube, it's easy to run into cases where a slice lands just barely next to a vertex (at which point you face the tough decision of whether to merge the new split vertex before carrying out more splits), where polygons are coplanar (or nearly so), and so on.
So I suggest initially erring on the side of very high-precision floating-point numbers, possibly even higher than double precision, and focusing on getting something working correctly and robustly. You can optimize later (a first pass should add an accelerator such as an octree, kd-tree, or BVH); you'll avoid many headaches this way in your first iteration.
This is vastly simpler to implement at render time if you're writing a raytracer rather than modeling software. With a raytracer, all you have to do for this kind of arbitrary clipping is treat the object being subtracted as having its polygons flipped during the culling process. It's easy to solve robustly at the ray level, but quite a bit harder to do robustly at the geometric level.
Another thing you can do to make your life much easier, if you can afford it, is to voxelize your objects, compute the subtractions/additions/unions on the voxels, and then translate the voxels back into a mesh. This is much easier to make robust, but harder to do efficiently, and the voxel-to-polygon conversion can get quite involved if you want better results than what marching cubes provides.
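As a toy illustration of the voxel route (leaving out the voxelization and mesh reconstruction steps entirely), the boolean operations themselves become simple logical operations on occupancy grids; assuming numpy, something like:

    import numpy as np

    def csg_union(a, b):        return a | b
    def csg_intersection(a, b): return a & b
    def csg_subtract(a, b):     return a & ~b

    # Example: subtract a sphere from a box on a 64^3 boolean occupancy grid.
    n = 64
    idx = np.indices((n, n, n))
    box = np.ones((n, n, n), dtype=bool)
    sphere = ((idx - n // 2) ** 2).sum(axis=0) < (n // 4) ** 2
    result = csg_subtract(box, sphere)
    # "result" would then go back through marching cubes (or similar) to get a mesh.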
It's a really tough area to do extremely well and requires perseverance, hence the existence of things like this: http://carve-csg.com/about.
If anyone is interested, there is now a solution for this problem in the CGAL library. It allows clipping one triangle mesh using another mesh as a bounding volume. A usage example can be found here.
Currently I am looking for an efficient algorithm to compute the intersection of two triangle meshes. I have searched the internet but haven't found valuable material. The book Real-Time Collision Detection is helpful, but too complex for my task. I also found the post Triangle to triangle collision detection in 3D. However, I am hoping to find a detailed description of the algorithm.
Regards
Jogging
Well, it depends on the mesh size: testing every triangle in one mesh against every triangle in the other is only viable for small meshes, since it has O(n^2) complexity.
To work around that, most algorithms first use spatial partitioning to subdivide the space into smaller regions, and then tackle each region separately. For the partitioning, most algorithms use octrees or BSP trees; however, if you don't want to complicate things, you can just subdivide the space into n boxes and then check triangle-triangle intersections within each box.
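As a rough sketch of the "subdivide the space into boxes" idea (the grid layout and names are mine): bucket each triangle by the cells its bounding box overlaps, then only run the exact triangle-triangle test on pairs that share at least one cell.

    import numpy as np
    from collections import defaultdict
    from itertools import product

    def build_grid(triangles, cell_size):
        """triangles: list of 3x3 vertex arrays. Returns {cell: [triangle indices]}."""
        grid = defaultdict(list)
        for t_idx, tri in enumerate(triangles):
            tri = np.asarray(tri)
            lo = np.floor(tri.min(axis=0) / cell_size).astype(int)
            hi = np.floor(tri.max(axis=0) / cell_size).astype(int)
            for cell in product(*(range(l, h + 1) for l, h in zip(lo, hi))):
                grid[cell].append(t_idx)
        return grid

    def candidate_pairs(tris_a, tris_b, cell_size):
        """Pairs (i, j) whose bounding boxes share at least one grid cell."""
        grid_a = build_grid(tris_a, cell_size)
        grid_b = build_grid(tris_b, cell_size)
        pairs = set()
        for cell, ids_a in grid_a.items():
            for i in ids_a:
                for j in grid_b.get(cell, ()):
                    pairs.add((i, j))   # still needs an exact triangle-triangle test
        return pairs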
A naive approach is to find, for each edge in the polygon, the point on that edge closest to the given point, and then take the one that's closest. Is there a faster algorithm? My goal is to implement a 2D Super Mario Galaxy-style platformer.
Apparently this can be done with Voronoi regions, as in this video: http://www.youtube.com/watch?v=Ldh2YKobuWo
However, I can't find any Voronoi algorithms that deal with edges as well as points. Ideas?
Calculate the point-to-segment distance for each of the edges, then pick the shortest one. There is no shortcut. This site has a good explanation and even implementations in various languages.
However, finding "the point on that edge closest to the given point" is a computationally unnecessary intermediate result.
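Still, for reference, here is a small Python sketch of the per-edge test (helper names are mine): project the point onto each segment, clamp the projection, and keep the smallest squared distance.

    def point_segment_dist_sq(px, py, ax, ay, bx, by):
        """Squared distance from point (px, py) to segment (ax, ay)-(bx, by)."""
        abx, aby = bx - ax, by - ay
        apx, apy = px - ax, py - ay
        denom = abx * abx + aby * aby
        # Degenerate segment: fall back to point-point distance.
        t = 0.0 if denom == 0 else max(0.0, min(1.0, (apx * abx + apy * aby) / denom))
        cx, cy = ax + t * abx, ay + t * aby     # closest point on the segment
        return (px - cx) ** 2 + (py - cy) ** 2

    def closest_edge(polygon, px, py):
        """polygon: list of (x, y) vertices in order. Returns index of nearest edge."""
        best_i, best_d = 0, float("inf")
        for i in range(len(polygon)):
            ax, ay = polygon[i]
            bx, by = polygon[(i + 1) % len(polygon)]
            d = point_segment_dist_sq(px, py, ax, ay, bx, by)
            if d < best_d:
                best_i, best_d = i, d
        return best_i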
If the polygon is convex, then the overhead of the Voronoi calculation far exceeds that of the naive approach.
If this is run many times, and the point changes only slightly each time, you only need to check 3 segments (think about it: as you move around, assuming many checks, the closest edge can only change to an adjacent edge).
What is the correct way to sort polygons so that they blend properly? The basic concept, I think, is to render polygons in order from furthest to closest. But what about cases of intersecting polygons?
What is the correct way to sort polygons so that they blend properly?
Sort them back to front.
But what about cases of intersecting polygons?
Simply ... don't do it. If you have no choice then you will have to split the polygons along their intersection as you go.
Edit: It's worth bearing in mind that finding the intersections and splitting will be very slow. You could use some sort of acceleration structure to help with that.
It's quite common to use a BSP tree to sort and split static transparent polys.
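As a minimal illustration of the basic back-to-front sort mentioned above (the polygon representation and camera position here are assumptions of mine), you can sort by distance from the camera to each polygon's centroid; as noted, this breaks down for intersecting or cyclically overlapping polygons.

    import numpy as np

    def sort_back_to_front(polygons, camera_pos):
        """polygons: list of (n, 3) vertex arrays. Returns a new list, furthest first."""
        camera_pos = np.asarray(camera_pos, dtype=float)
        def depth(poly):
            centroid = np.asarray(poly, dtype=float).mean(axis=0)
            return np.linalg.norm(centroid - camera_pos)
        return sorted(polygons, key=depth, reverse=True)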
Frankly, I would go for depth peeling with compositing passes. I have seen some implementations of this algorithm, and most of the time peeling 2 or 3 layers is "good enough".
This will also save you from having to manage different data structures for storing your meshes. One drawback is that it can be somewhat performance-intensive, for example if you have a lot of transparent meshes.
Rasterisation (triangles) and ray tracing are the only methods I've ever come across to render a 3D scene. Are there any others? Also, I'd love to know of any other really "out there" ways of doing 3D, such as not using polygons.
Aagh! These answers are very uninformed!
Of course, it doesn't help that the question is imprecise.
OK, "rendering" is a really wide topic. One issue within rendering is camera visibility or "hidden surface algorithms" -- figuring out what objects are seen in each pixel. There are various categorizations of visibility algorithms. That's probably what the poster was asking about (given that they thought of it as a dichotomy between "rasterization" and "ray tracing").
A classic (though now somewhat dated) categorization reference is Sutherland et al., "A Characterization of Ten Hidden-Surface Algorithms", ACM Computing Surveys, 1974. It's very outdated, but it's still excellent for providing a framework for thinking about how to categorize such algorithms.
One class of hidden surface algorithms involves "ray casting", which is computing the intersection of the line from the camera through each pixel with objects (which can have various representations, including triangles, algebraic surfaces, NURBS, etc.).
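To make the "line from the camera through each pixel" idea concrete, here's a tiny Python sketch of ray casting against a single sphere, standing in for whatever object representation you actually use:

    import numpy as np

    def ray_sphere(origin, direction, center, radius):
        """Return the nearest positive hit distance, or None (direction normalized)."""
        oc = origin - center
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        t = (-b - disc ** 0.5) / 2.0
        return t if t > 0 else None

    def render(width=64, height=64):
        camera = np.array([0.0, 0.0, 0.0])
        image = np.zeros((height, width))
        for y in range(height):
            for x in range(width):
                # Map the pixel to a direction through a simple image plane at z = -1.
                d = np.array([x / width - 0.5, y / height - 0.5, -1.0])
                d /= np.linalg.norm(d)
                hit = ray_sphere(camera, d, np.array([0.0, 0.0, -3.0]), 1.0)
                image[y, x] = 1.0 if hit is not None else 0.0
        return image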
Other classes of hidden surface algorithms include "z-buffer", "scanline techniques", "list priority algorithms", and so on. They were pretty darned creative with algorithms back in the days when there weren't many compute cycles and not enough memory to store a z-buffer.
These days, both compute and memory are cheap, and so three techniques have pretty much won out: (1) dicing everything into triangles and using a z-buffer; (2) ray casting; (3) Reyes-like algorithms that use an extended z-buffer to handle transparency and the like. Modern graphics cards do #1; high-end software rendering usually does #2 or #3 or a combination. Various ray tracing hardware has been proposed, and sometimes built, but it never caught on; modern GPUs are now programmable enough to actually ray trace, though at a severe speed disadvantage compared with their hard-coded rasterization techniques. Other, more exotic algorithms have mostly fallen by the wayside over the years. (Although various sorting/splatting algorithms can be used for volume rendering or other special purposes.)
"Rasterizing" really just means "figuring out which pixels an object lies on." Convention dictates that it excludes ray tracing, but this is shaky. I suppose you could justify that rasterization answers "which pixels does this shape overlap" whereas ray tracing answers "which object is behind this pixel", if you see the difference.
Now then, hidden surface removal is not the only problem to be solved in the field of "rendering." Knowing what object is visible in each pixel is only a start; you also need to know what color it is, which means having some method of computing how light propagates around the scene. There are a whole bunch of techniques, usually broken down into dealing with shadows, reflections, and "global illumination" (that which bounces between objects, as opposed to coming directly from lights).
"Ray tracing" means applying the ray casting technique to also determine visibility for shadows, reflections, global illumination, etc. It's possible to use ray tracing for everything, or to use various rasterization methods for camera visibility and ray tracing for shadows, reflections, and GI. "Photon mapping" and "path tracing" are techniques for calculating certain kinds of light propagation (using ray tracing, so it's just wrong to say they are somehow fundamentally a different rendering technique). There are also global illumination techniques that don't use ray tracing, such as "radiosity" methods (which is a finite element approach to solving global light propagation, but in most parts of the field have fallen out of favor lately). But using radiosity or photon mapping for light propagation STILL requires you to make a final picture somehow, generally with one of the standard techniques (ray casting, z buffer/rasterization, etc.).
People who mention specific shape representations (NURBS, volumes, triangles) are also a little confused. This is an orthogonal problem to ray tracing vs. rasterization. For example, you can ray trace NURBS directly, or you can dice the NURBS into triangles and trace those. You can rasterize triangles directly into a z-buffer, but you can also rasterize high-order parametric surfaces directly in scanline order (cf. Lane, Carpenter, et al., CACM 1980).
There's a technique called photon mapping that is actually quite similar to ray tracing, but provides various advantages in complex scenes. In fact, it's the only method (at least that I know of) that provides truly realistic (i.e. all the laws of optics are obeyed) rendering if done properly. It's a technique that's used sparingly as far as I know, since its performance is hugely worse than even ray tracing (given that it effectively does the opposite and simulates the paths taken by photons from the light sources to the camera) - yet this is its only disadvantage. It's certainly an interesting algorithm, though you're not going to see it in widescale use until well after ray tracing (if ever).
The Rendering article on Wikipedia covers various techniques.
Intro paragraph:
Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image.
Tracing every ray of light in a scene is impractical and would take an enormous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted.
Therefore, four loose families of more-efficient light transport modelling techniques have emerged: rasterisation, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects; ray casting considers the scene as observed from a specific point-of-view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to reduce artifacts; radiosity uses finite element mathematics to simulate diffuse spreading of light from surfaces; and ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results at a speed that is often orders of magnitude slower.
Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost.
Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually fewer objects in a scene than pixels.
From those descriptions, only radiosity seems different in concept to me.