Merging little triangles into bigger ones in Kinect scan - graphics

I'm not sure if this problem has a solution, but I'll ask anyway. I'd also be glad for some literature to study and some keywords to search for.
Let's say I have a 3D scan made using a Kinect.
The scan contains only a single wall with a door in it. The output from the Kinect is composed of hundreds of little triangles.
What I want to achieve is to recognize where the wall is and where the door is, then merge the wall triangles into, let's say, a few large ones, and do the same with the door triangles.

You want to look at mesh simplification algorithms, but before you do, you should probably familiarise yourself with the fundamental concepts of 3D graphics, like vertices, matrices and meshes. GDAL and PCL are two libraries that can help you achieve what you want.
There's also a program called MeshLab, which implements many of the relevant algorithms and could be of some help to you.
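If you go the PCL route, a common first step is to find the dominant planes (the wall and the door panel) with RANSAC, and then re-mesh each planar region with a handful of large triangles. Here's a minimal sketch of the plane-finding step, assuming the scan is already loaded as a pcl::PointCloud<pcl::PointXYZ> (the function name is mine):

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/segmentation/sac_segmentation.h>

// Fit the dominant plane (e.g. the wall) in a Kinect cloud with RANSAC.
// Returns the plane coefficients; `inliers` receives the on-plane points.
pcl::ModelCoefficients::Ptr
findDominantPlane(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud,
                  const pcl::PointIndices::Ptr& inliers)
{
    pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setOptimizeCoefficients(true);
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.01); // 1 cm tolerance; tune for Kinect noise
    seg.setInputCloud(cloud);
    seg.segment(*inliers, *coeffs);
    return coeffs;
}
```

Each call peels off one plane; removing the inliers and running it again finds the next one (the door), and projecting each region's boundary onto its plane lets you re-triangulate it with just a few triangles.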

Related

How do I create a real-time rendering window from scratch?

I've been studying 3D graphics on my own for a while now and I want to get a better understanding of just how everything works. What I would like to do is to create a simple game without using DirectX or OpenGL. I believe I understand most of the math, but the problem I keep running up against is that I do not know how to get control of the pixels being displayed in a window.
How do I specify what color I want each pixel in my window to be?
I understand I will probably run into issues with buffering and image tearing, and probably terrible efficiency problems, but I want to create my own program so that I can see, from the very lowest level of a high-level language, how the rendering process works. I really have no idea where to start, though. I've figured out how to output BMPs, but I would like to have a running program spitting out 20+ frames per second. How do I accomplish this?
You could pick an environment that allows you to fill an array with pixel values and display it as a bitmap. That way you come closest to poking RGB values into video memory. WPF, Silverlight, and HTML5/JavaScript can do this. If you do not need full screen, these technologies should suffice for now.
In WPF and Silverlight, use the WriteableBitmap.
In HTML5, use the canvas element.
Then it is up to you to implement the logic to draw lines, circles, Bézier curves, and 3D projections.
This is a lot of fun and you will learn a lot.
I'm reading between the lines that you're more interested in having full control over the rendering process from a low level, rather than having a specific interest in how to achieve that on one specific platform.
If that's the case then you will probably get a good bang for your buck looking at a library like SDL, which provides you with a frame buffer that you can render to directly but abstracts away a lot of the platform-specific issues. It has been around for quite a while and there are some good tutorials to give you an idea of whether it's the kind of thing you're looking for - see this tutorial and the subsequent one in the same series, which should be enough to get you up and running.
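To make that concrete, here's a minimal sketch of the SDL2 streaming-texture approach: a plain CPU-side pixel array that your software renderer writes into, blitted to the window every frame (error handling omitted):

```cpp
#include <SDL.h>
#include <vector>
#include <cstdint>

// Push a CPU-side pixel array to the screen every frame - the software
// renderer writes colors into `pixels`, SDL just blits them.
int main(int, char**) {
    const int W = 640, H = 480;
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window*   win = SDL_CreateWindow("soft renderer", SDL_WINDOWPOS_CENTERED,
                                         SDL_WINDOWPOS_CENTERED, W, H, 0);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, 0);
    SDL_Texture*  tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                          SDL_TEXTUREACCESS_STREAMING, W, H);
    std::vector<uint32_t> pixels(W * H);

    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        // Your rasterizer goes here: write 0xAARRGGBB values into `pixels`.
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                pixels[y * W + x] = 0xFF000000 | (x ^ y); // placeholder pattern

        SDL_UpdateTexture(tex, nullptr, pixels.data(), W * sizeof(uint32_t));
        SDL_RenderCopy(ren, tex, nullptr, nullptr);
        SDL_RenderPresent(ren);
    }
    SDL_Quit();
    return 0;
}
```

SDL_UpdateTexture copies the array to the GPU each frame, so the loop stays simple at the cost of one full-frame upload per frame.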
You say you want to create some kind of rendering engine, meaning designing your own pipeline and matrix classes, which you then use to transform 3D coordinates into 2D points.
Once you have the 2D points you've been looking for, you can, for instance on Windows, select a brush and draw your triangles, coloring them at the same time.
I do not know why you would need bitmaps, but if you want to practice texturing, you can do that yourself as well, although of course on a weak computer this might cut your frames per second significantly.
If your aim is to understand how rendering works at the lowest level, this is without a doubt good practice.
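To illustrate the 3D-to-2D step at the heart of such a pipeline, here's a minimal sketch of a pinhole perspective projection (just the essential math, no matrix classes; the struct and function names are mine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Project a camera-space point onto the screen with a pinhole model.
// fovY is the vertical field of view in radians; z must be > 0 (in front
// of the camera). A full pipeline would first apply a world->camera matrix.
Vec2 project(const Vec3& p, float fovY, int screenW, int screenH)
{
    float f = 1.0f / std::tan(fovY * 0.5f);        // focal-length factor
    float aspect = float(screenW) / float(screenH);
    float ndcX = (p.x * f / aspect) / p.z;          // normalized device coords,
    float ndcY = (p.y * f) / p.z;                   // in [-1, 1] when visible
    return { (ndcX * 0.5f + 0.5f) * screenW,        // map to pixel coordinates
             (1.0f - (ndcY * 0.5f + 0.5f)) * screenH }; // flip y for screen space
}
```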

Texture coordinates: how to calculate them?

Is there a formula that I can use to calculate texture coordinates for a complex object, i.e. not something simple like a cube or a sphere?
The texture coordinates are usually set manually by whoever creates the model, using the modelling package.
There are ways of automating the whole process to a great extent. The results may not be of much use if somebody is going to draw the texture based on the UV coordinates, and if you ask the impossible (e.g., mapping a sphere exactly, with no distortion and no seams) then you may not get perfect results -- but for processes such as light mapping this is a common approach.
Lévy's LSCM (least-squares conformal maps) is one approach, as used in Blender, for example. See http://alice.loria.fr/index.php/publications.html?Paper=lscm#2002
Direct3D9 has a UV unwrap tool in its D3DX library; I'm not sure what algorithm it uses, and the documentation isn't amazing, but it does work. See
http://msdn.microsoft.com/en-us/library/bb206321(VS.85).aspx
(Most 3D modelling packages have some kind of automated UV unwrap, too, but in general they never seem to have had too much time spent on them. Presumably the expectation is that somebody will want to go through and fix it up by hand afterwards anyway.)
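For intuition, here is the simplest automated mapping - the classic spherical projection, which derives each vertex's UV from its direction from the object's centre. It also shows exactly where the seam and pole distortion mentioned above come from (a sketch; struct names are mine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct UV   { float u, v; };

// Equirectangular (spherical) projection: maps a unit direction to [0,1]^2.
// Note the seam where atan2 wraps from +pi to -pi, and the stretching at
// the poles - exactly the artifacts that make hand-authored or LSCM-style
// unwraps preferable for complex objects.
UV sphericalUV(const Vec3& dir) // dir must be normalized
{
    const float PI = 3.14159265358979f;
    float u = 0.5f + std::atan2(dir.z, dir.x) / (2.0f * PI);
    float v = 0.5f - std::asin(dir.y) / PI;
    return { u, v };
}
```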

Graphics/Vision Interesting Topics

I would like to do an interesting project for a computer graphics course. I know that there is a lot of literature out there (e.g. SIGGRAPH conference papers). I have a very wide range of interests with regard to computer graphics (image processing, 3D modeling, rendering, animation). However, I've only taken computer vision/graphics for two semesters, so I don't have much background experience beyond the class projects I had to do.
I've been looking through SIGGRAPH papers trying to see if there is anything that will be of interest to me but the literature is extremely vast. I was wondering if anyone has any topic suggestions, anything interesting that you ran across that you could recommend. I would prefer to do something fun yet slightly challenging (not really interested in making a shooter game).
If this question does not belong here, I apologize and please let me know where I should move it.
Thanks!
Image Drawing animator. (the name is kind of misleading, but I didn't care much about it)
Anyway, the software does the following:
Take an image, say a JPEG or BMP, as input.
Extract the lines from the image (I used Matlab and Laplacian filtering).
Convert the static lines to vector paths.
Simulate drawing the image using the extracted paths.
In summary: you give it an image, for example a cityscape; the program extracts all the lines and starts drawing the buildings, streets and sunset lines, then finally adds the colors one by one, until the full image is done.
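The original was done in Matlab, but the line-extraction step translates naturally to OpenCV. A sketch, with Canny standing in for the Laplacian-based edge filter (the function name is mine):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Extract drawable vector paths from an image: detect edges, trace them
// as contours, then simplify each contour into a short polyline "stroke".
std::vector<std::vector<cv::Point>> extractPaths(const cv::Mat& bgr)
{
    cv::Mat gray, edges;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 50, 150);          // binary edge map

    std::vector<std::vector<cv::Point>> contours, paths;
    cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours) {
        std::vector<cv::Point> path;
        cv::approxPolyDP(c, path, 2.0, false); // simplify to a vector path
        if (path.size() > 1) paths.push_back(path);
    }
    return paths; // replay these point by point to "draw" the image
}
```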
Real-time hand(s) detector.
You'll have plenty of interesting and fun applications with this.

Collision detection with hardware generated primitives

There's a lot of literature on collision detection, and I've read at least a big enough portion of it to be fairly familiar with most techniques. However, there's something that has eluded me for a while, and I figured, since StackOverflow provides access to a large group of brilliant minds at once, I'd ask here first before digging around in the bookshelf.
In this day and age, more and more work is being delegated to the GPU rather than the CPU, and in a lot of cases this is a good thing. For example, there are geometry shaders to create new geometry, or (slightly less new, but still quite fascinating) vertex shaders which you can throw a bunch of vertices at, and something elegant will come out the other end. What I was wondering, though: as these primitives exist only on the graphics hardware, how would you perform reliable collision detection with them? Let's assume I have some kind of extremely simplified mesh which is displaced in a vertex shader (I don't have a concrete problem, I'm more playing with the idea, and I haven't gotten very deep into geometry shaders yet).
What I've considered so far is separate 'rendering' passes from suitable angles with shading more or less turned off, and perhaps a lower-resolution mesh: rendering the inside (with faces flipped inward) of my second primitive along with the mesh I want to test against, and executing an occlusion query for the mesh. If the mesh is completely occluded, there'd be no intersection. This would of course require that my second primitive is convex.
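For reference, the query part of that idea takes only a few OpenGL calls. A sketch, where drawProxyInterior() and drawTestMesh() are hypothetical stand-ins for the application's draw calls:

```cpp
#include <GL/glew.h>

// Hypothetical draw calls supplied by the application.
void drawProxyInterior(); // the convex proxy volume, faces flipped inward
void drawTestMesh();      // the (vertex-shader-displaced) mesh to test

// Occlusion-query test as described above: fill the depth buffer with the
// proxy's interior, then count how many samples of the test mesh survive.
// Zero passing samples = mesh completely occluded = no intersection.
bool anyMeshSampleVisible()
{
    GLuint q = 0;
    glGenQueries(1, &q);

    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // depth-only pass
    glClear(GL_DEPTH_BUFFER_BIT);
    drawProxyInterior();

    glDepthMask(GL_FALSE); // test against the proxy's depth, don't overwrite it
    glBeginQuery(GL_SAMPLES_PASSED, q);
    drawTestMesh();
    glEndQuery(GL_SAMPLES_PASSED);
    glDepthMask(GL_TRUE);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    GLuint samples = 0;
    glGetQueryObjectuiv(q, GL_QUERY_RESULT, &samples); // blocks until ready
    glDeleteQueries(1, &q);
    return samples > 0;
}
```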
Somehow I get the feeling that this kind of test will be extremely expensive as the number of primitives increases (even if a large portion can be culled directly). Does anyone have another idea or technique? I'm more familiar with OpenGL and Cg than DirectX, but if you have some examples in DirectX, I guess I'll be able to figure out the OpenGL counterparts.
All ideas are appreciated, so please brainstorm. :)
It sounds like Dan Horn's article "Stream Reduction Operations for GPGPU Applications" in GPU Gems 2 is exactly what you want. Like all of its chapters, it's freely available online.

Polygon Triangulation with Holes

I am looking for an algorithm or library (better) to break down a polygon into triangles. I will be using these triangles in a Direct3D application. What are the best available options?
Here is what I have found so far:
Ben Discoe's notes
FIST: Fast Industrial-Strength Triangulation of Polygons
I know that CGAL provides triangulation but am not sure if it supports holes.
I would really appreciate some opinions from people with prior experience in this area.
Edit: This is a 2D polygon.
To give you some more choices of libraries out there:
Polyboolean. I never tried this one, but it looks promising: http://www.complex-a5.ru/polyboolean/index.html
General Polygon Clipper. This one works very well in practice and does triangulation as well as clipping, and it handles holes: http://www.cs.man.ac.uk/~toby/alan/software/
My personal recommendation: use the tessellation from the GLU (OpenGL Utility Library). The code is rock solid, faster than GPC, and generates fewer triangles. You don't need an initialized OpenGL context or anything like that to use the library.
If you don't like the idea of including OpenGL system libs in a DirectX application, there is a solution as well: just download the SGI OpenGL reference implementation code and lift the triangulator from it. It only uses the OpenGL typedef names and a handful of enums - that's it. You can extract the code and make a stand-alone lib in an hour or two.
In general my advice would be to use something that already works and not to write your own triangulation.
It is tempting to roll your own if you have read about the ear-clipping or sweep-line algorithms, but the fact is that computational geometry algorithms are incredibly hard to write in a way that is stable, never crashes and always returns a meaningful result. Numerical round-off errors will accumulate and kill you in the end.
I wrote a triangulation algorithm in C for the company I work with. Getting the core algorithm working took two days. Getting it to work with all kinds of degenerate inputs took another two years (I wasn't working full-time on it, but trust me - I spent more time on it than I should have).
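To show how little code the GLU route needs, here's a sketch of tessellating one polygon with one hole. Registering an edge-flag callback forces plain-triangle output; the combine callback (needed for self-intersecting input) and the Windows APIENTRY calling-convention decoration are omitted for brevity:

```cpp
#include <GL/glu.h>
#include <vector>

// Collected triangle corners; GLU hands back the pointer given to gluTessVertex.
static std::vector<GLdouble*> out;

static void onVertex(void* data) { out.push_back(static_cast<GLdouble*>(data)); }
static void onBegin(GLenum) {}
static void onEnd() {}
// Registering an edge-flag callback (even an empty one) makes GLU emit
// plain triangles instead of fans and strips.
static void onEdgeFlag(GLboolean) {}

// outer / hole: flat arrays of x,y,z triples. A hole is simply a second contour.
void triangulate(std::vector<GLdouble>& outer, std::vector<GLdouble>& hole)
{
    GLUtesselator* tess = gluNewTess();
    gluTessCallback(tess, GLU_TESS_BEGIN,     (void (*)()) onBegin);
    gluTessCallback(tess, GLU_TESS_VERTEX,    (void (*)()) onVertex);
    gluTessCallback(tess, GLU_TESS_END,       (void (*)()) onEnd);
    gluTessCallback(tess, GLU_TESS_EDGE_FLAG, (void (*)()) onEdgeFlag);

    gluTessBeginPolygon(tess, nullptr);
    for (std::vector<GLdouble>* c : { &outer, &hole }) {
        gluTessBeginContour(tess);
        for (size_t i = 0; i + 2 < c->size(); i += 3)
            gluTessVertex(tess, c->data() + i, c->data() + i);
        gluTessEndContour(tess);
    }
    gluTessEndPolygon(tess); // callbacks fire here; `out` holds 3 entries per triangle
    gluDeleteTess(tess);
}
```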
Jonathan Shewchuk's Triangle library is phenomenal; I've used it for automating triangulation in the past. You can ask it to attempt to avoid small/narrow triangles, etc., so you come up with "good" triangulations instead of just any triangulation.
CGAL has the tool you need:
Constrained Triangulations
You can simply provide the boundaries of your polygon (including the boundaries of the holes) as constraints (the best way is to insert all the vertices, and then specify the constraints as pairs of Vertex_handles).
You can then tag the triangles of the triangulation using any traversal algorithm: start with a triangle incident to the infinite vertex and tag it as outside; each time you cross a constraint, switch to the opposite tag (inside if you were previously tagging triangles as outside, outside if you were previously tagging them as inside).
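A sketch of that workflow - constraint insertion plus the inside/outside flood fill just described (the std::map-based bookkeeping is mine; CGAL's own examples use a per-face info field instead):

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>
#include <map>
#include <queue>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Constrained_Delaunay_triangulation_2<K>       CDT;
typedef K::Point_2                                          Point;

int main()
{
    CDT cdt;
    // Outer square as constraints...
    Point outer[4] = { Point(0,0), Point(4,0), Point(4,4), Point(0,4) };
    for (int i = 0; i < 4; ++i)
        cdt.insert_constraint(outer[i], outer[(i + 1) % 4]);
    // ...and a square hole, also just constraints.
    Point hole[4] = { Point(1,1), Point(3,1), Point(3,3), Point(1,3) };
    for (int i = 0; i < 4; ++i)
        cdt.insert_constraint(hole[i], hole[(i + 1) % 4]);

    // Flood fill from the infinite face, flipping the tag at each constraint.
    std::map<CDT::Face_handle, bool> inside;
    std::queue<CDT::Face_handle> q;
    inside[cdt.infinite_face()] = false;
    q.push(cdt.infinite_face());
    while (!q.empty()) {
        CDT::Face_handle f = q.front(); q.pop();
        for (int i = 0; i < 3; ++i) {
            CDT::Face_handle n = f->neighbor(i);
            if (inside.count(n)) continue;
            inside[n] = f->is_constrained(i) ? !inside[f] : inside[f];
            q.push(n);
        }
    }
    // Faces with inside[f] == true are the triangles of the polygon-with-hole.
    return 0;
}
```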
I have found the poly2tri library to be exactly what I needed for triangulation. It produces a much cleaner mesh than other libraries I've tried (including libtess), and it does support holes as well. It's been converted to a bunch of languages. The license is New BSD, so you can use it in any project.
Poly2tri library on Google Code
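Minimal usage looks like this (the polylines must be duplicate-free and the hole must lie strictly inside the outer boundary; point cleanup is omitted in this sketch):

```cpp
#include <poly2tri/poly2tri.h>
#include <vector>

int main()
{
    // Outer boundary, counter-clockwise.
    std::vector<p2t::Point*> outer = {
        new p2t::Point(0, 0), new p2t::Point(4, 0),
        new p2t::Point(4, 4), new p2t::Point(0, 4)
    };
    p2t::CDT cdt(outer);

    // A hole is just another polyline.
    std::vector<p2t::Point*> hole = {
        new p2t::Point(1, 1), new p2t::Point(3, 1),
        new p2t::Point(3, 3), new p2t::Point(1, 3)
    };
    cdt.AddHole(hole);

    cdt.Triangulate();
    std::vector<p2t::Triangle*> tris = cdt.GetTriangles();
    // Each p2t::Triangle exposes its corners via GetPoint(0..2).
    for (p2t::Triangle* t : tris) { (void)t; }
    return 0;
}
```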
Try libtess2:
https://code.google.com/p/libtess2/downloads/list
It's based on the original SGI GLU tessellator (with liberal licensing) and solves some memory-management issues around lots of small mallocs.
You can add the holes relatively easily yourself. Basically, triangulate to the convex hull of the input points, as per CGAL, and then delete any triangle whose incentre lies inside any of the hole polygons (or outside any of the external boundaries). When dealing with lots of holes in a large dataset, masking techniques may be used to speed this process up significantly.
edit: A common extension to this technique is to weed out weak triangles on the hull, where the longest edge exceeds a given length or the smallest internal angle falls below a given angle. This will form a better concave hull.
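The per-triangle test is short - compute the incentre and run a standard point-in-polygon check against each hole. A sketch (the Point struct is mine):

```cpp
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Incentre of a triangle: the side-length-weighted average of its corners.
// It is the centre of the inscribed circle, so it sits comfortably away
// from all three edges - robust even for slivers along hole boundaries.
Point incentre(const Point& a, const Point& b, const Point& c)
{
    double la = std::hypot(b.x - c.x, b.y - c.y); // length of side opposite a
    double lb = std::hypot(a.x - c.x, a.y - c.y);
    double lc = std::hypot(a.x - b.x, a.y - b.y);
    double s = la + lb + lc;
    return { (la * a.x + lb * b.x + lc * c.x) / s,
             (la * a.y + lb * b.y + lc * c.y) / s };
}

// Standard even-odd ray-casting point-in-polygon test.
bool insidePolygon(const Point& p, const std::vector<Point>& poly)
{
    bool in = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        if ((poly[i].y > p.y) != (poly[j].y > p.y) &&
            p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                  (poly[j].y - poly[i].y) + poly[i].x)
            in = !in;
    }
    return in; // delete the triangle if this is true for any hole polygon
}
```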
I have implemented a 3D polygon triangulator in C# using the ear-clipping method. It is easy to use, supports holes, is numerically robust, and handles arbitrary (non-self-intersecting) convex or non-convex polygons.
This is a common problem in finite element analysis. It's called "automatic mesh generation". Google found this site with links to commercial and open source software. They usually presume some kind of CAD representation of the geometry to start.
Another option (with a very flexible license) is to port the algorithm from VTK:
vtkDelaunay2D
This algorithm works fairly well. Using it directly is possible but requires linking against VTK, which may be more overhead than you want (although VTK has many other nice features as well).
It supports constraints (holes/boundaries/etc), as well as triangulating a surface that isn't necessarily in the XY plane. It also supports some features I haven't seen elsewhere (see the notes on Alpha values).
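Basic usage is short. A sketch (the commented SetSourceData line is where boundary and hole constraints would go):

```cpp
#include <vtkSmartPointer.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vtkDelaunay2D.h>

int main()
{
    // Points to triangulate; z is ignored by the default XY-plane projection.
    vtkSmartPointer<vtkPoints> pts = vtkSmartPointer<vtkPoints>::New();
    pts->InsertNextPoint(0.0, 0.0, 0.0);
    pts->InsertNextPoint(4.0, 0.0, 0.0);
    pts->InsertNextPoint(4.0, 4.0, 0.0);
    pts->InsertNextPoint(0.0, 4.0, 0.0);
    pts->InsertNextPoint(2.0, 2.0, 0.0);

    vtkSmartPointer<vtkPolyData> input = vtkSmartPointer<vtkPolyData>::New();
    input->SetPoints(pts);

    vtkSmartPointer<vtkDelaunay2D> del = vtkSmartPointer<vtkDelaunay2D>::New();
    del->SetInputData(input);
    // del->SetSourceData(constraints); // polylines/polygons marking edges & holes
    del->Update();

    vtkPolyData* mesh = del->GetOutput(); // the triangulated result
    (void)mesh;
    return 0;
}
```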

Resources