How to detect edge loops from a set of faces - graphics

Given a set of selected faces (triangular only) from a single mesh, how can we detect the following:
a) the edge loops that the selected faces create;
b) a fast grouping of the faces based on the edge loop region they are contained in.
This method is needed for face extrusion when more than one face is selected and you want to extrude multiple regions along their average normals instead of extruding the individual faces.

Here's a paper that discusses detecting edge loops in a mesh, and partitioning it:
Punctuated Simplification of Man-Made Objects
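In case a concrete starting point helps, here is a minimal sketch of the usual counting approach (this is not the paper's algorithm, and all names are illustrative): an edge used by exactly one selected face lies on a boundary loop, and faces group into regions by flood-filling across shared edges.

type Face = [number, number, number]; // vertex indices of one triangle

const key = (u: number, v: number) => (u < v ? `${u},${v}` : `${v},${u}`);

// a) Edges used by exactly one selected face form the boundary loops of the
//    selected regions; chain them into ordered loops by walking shared vertices.
function boundaryEdges(selected: Face[]): Array<[number, number]> {
  const uses = new Map<string, { edge: [number, number]; n: number }>();
  for (const [a, b, c] of selected) {
    for (const edge of [[a, b], [b, c], [c, a]] as Array<[number, number]>) {
      const k = key(edge[0], edge[1]);
      const rec = uses.get(k) ?? { edge, n: 0 };
      rec.n++;
      uses.set(k, rec);
    }
  }
  // Edges shared by two selected faces are interior; n === 1 means boundary.
  return [...uses.values()].filter((r) => r.n === 1).map((r) => r.edge);
}

// b) Group faces into connected regions across shared edges (flood fill);
//    each region is then extruded as one unit along its average normal.
function groupFaces(selected: Face[]): number[][] {
  const facesByEdge = new Map<string, number[]>();
  selected.forEach(([a, b, c], i) => {
    for (const [u, v] of [[a, b], [b, c], [c, a]]) {
      const k = key(u, v);
      if (!facesByEdge.has(k)) facesByEdge.set(k, []);
      facesByEdge.get(k)!.push(i);
    }
  });
  const groupOf = new Array<number>(selected.length).fill(-1);
  const groups: number[][] = [];
  for (let i = 0; i < selected.length; i++) {
    if (groupOf[i] !== -1) continue;
    const region: number[] = [];
    const stack = [i];
    groupOf[i] = groups.length;
    while (stack.length > 0) {
      const f = stack.pop()!;
      region.push(f);
      const [a, b, c] = selected[f];
      for (const [u, v] of [[a, b], [b, c], [c, a]]) {
        for (const neighbour of facesByEdge.get(key(u, v))!) {
          if (groupOf[neighbour] === -1) {
            groupOf[neighbour] = groups.length;
            stack.push(neighbour);
          }
        }
      }
    }
    groups.push(region);
  }
  return groups;
}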
Hope this helps.

Related

How to collapse a face in a mesh in Computer Graphics (how to deal with the surrounding vertices)?

I can collapse an edge, but I do not know how to collapse a face. How do I deal with the surrounding vertices?
Removing a face does not affect your vertices in the general case. By removing a face you just remove the information that 3 vertices form a face. These vertices can still take part in other faces.
If you actually want to remove a face from your mesh, a hole will be generated in your mesh. Consider the following mesh patch consisting of 3 faces.
If you decide to remove F2 then the final patch will consist of 2 faces as seen below.
Note that you do not have to rename your faces. I did that to emphasize that there will be 2 faces left.
With the above established, the way to implement this depends on how you have represented your mesh, but in general you would do something like this:
//the following is pseudocode; the exact calls depend on your mesh representation
if(faceToRemove.isBorderFace()){
    //if two of its edges get removed, one vertex needs to be removed as well
    completelyRemoveEdgesThatOnlyBelongedToThisFace();
}
setTheRemainingEdgesAsBorderEdges();
removeFaceFromFaceList();
If your mesh does not have holes prior to removing the face, then you will not need to remove any vertices from your mesh.
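For the simplest case, a plain indexed face list, the whole operation can look like this (a minimal sketch; the Mesh shape and names are illustrative, not any particular library's API):

// Vertices plus index triples; removing a face only drops the connectivity
// record, the vertices themselves stay in place.
interface Mesh {
  vertices: Array<[number, number, number]>;
  faces: Array<[number, number, number]>; // indices into `vertices`
}

function removeFace(mesh: Mesh, faceIndex: number): void {
  mesh.faces.splice(faceIndex, 1);
  // The three vertices are intentionally left alone: they may still be
  // referenced by neighbouring faces. Only a vertex that ends up referenced
  // by no face at all would be garbage-collected (with re-indexing).
}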
Also take a look at CGAL's graphical explanation.

How to increase polygons in 3ds Max

I am a beginner in modelling. Can I increase an object's polygon count in 3ds Max? I want a smooth object that does not have a low polygon count.
You can increase the polygon count of your model in several ways:
Use the Subdivide modifier.
Use the TurboSmooth modifier.
Use the Tessellate modifier.
You can use the TurboSmooth modifier for this purpose, but before using it you should make sure your model has enough edges and that the edge flow is correct. For example, make sure a vertex does not connect to an odd number of edges (like 3 or 5); always try to keep the count even. Four edges is the standard; check the image below:
Both of these vertices use odd numbers (3 and 5), which is not correct.
Try to add more edges to your model and chamfer them if necessary before applying TurboSmooth or another smoothing modifier.
You can smooth it by adding a Smooth or TurboSmooth modifier, then convert it to an Editable Poly again - or collapse the stack from the modify tab - which will redistribute polygons evenly over the object's faces.
Alternatively, you can select a ring or loop of faces and then use the Connect tool to add more resolution (polygons) to those selected polygons only.
As for the fish model in your image, you can select one of the height faces, then, in the Editable Poly section of the modify panel on the right, click Ring; this will select all the height faces around the fish. Then scroll down in the modify panel, click Connect, and set the number of segments to add more resolution.
UPDATE:
Please take into consideration that there is no such thing as infinite segments; even spheres (balls) and cylinders in 3D just have a higher number of segments (usually 32+). You can double or triple that, but not much more; increasing an object's segment count to very high values can bring your computer to its knees while modeling or rendering.
Newer versions of 3ds Max (2015, 2016) have a new subdivision modifier called OpenSubdiv. It can be used to subdivide your model and give it more polygons. There are also the TurboSmooth, MeshSmooth, Tessellate, and Subdivide modifiers. All of these add more geometry to your model, based on different algorithms.

How to create holes in objects without modifying the mesh structure in WebGL?

I'm new to WebGL, and for an assignment I'm trying to write a function which takes an object as an argument, let's say "objectA". ObjectA will not be rendered, but if it overlaps with another object in the scene, let's say "objectB", the part of objectB which is inside objectA will disappear. The effect is a hole in objectB without modifying its mesh structure.
I've managed to let it work on my own render engine, based on ray tracing, which gives the following effect:
Image: the initial scene.
Image: the scene with objectA removed.
In the first image, the green sphere is "objectA" and the blue cube is "objectB".
So now I'm trying to program it in WebGL, but I'm a bit stuck. Because WebGL is based on rasterization rather than ray tracing, it has to be calculated in another way. A possibility could be to modify the Z-buffer algorithm so that fragments with a z-value lying inside objectA are ignored.
The algorithm that I have in mind works as follows: normally only the fragment with the smallest z-value is stored at a particular pixel, containing the colour and z-value. The first modification is that a list of all fragments belonging to a particular pixel is maintained; no fragments are discarded. Secondly, an extra parameter is stored per fragment, identifying the object it belongs to. Next, the fragments are sorted in increasing order of z-value.
Then, if the first fragment belongs to objectA, it is ignored. If the next one belongs to objectB, it is ignored as well. If the third belongs to objectA and the fourth to objectB, the fourth is chosen, because it lies outside objectA.
So the first fragment belonging to objectB is chosen, with the constraint that the number of preceding fragments belonging to objectA is even. If it is odd, the fragment lies inside objectA and is ignored.
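To make the rule concrete, here is a CPU-side sketch of the per-pixel selection I have in mind (illustrative only; a real WebGL implementation would need depth peeling or per-pixel fragment lists, since the fixed-function depth buffer keeps a single value per pixel):

interface Fragment { z: number; object: "A" | "B"; color: number }

// Pick the first objectB fragment preceded by an even number of objectA
// fragments; even parity means we are outside objectA's volume at that depth.
function resolvePixel(fragments: Fragment[]): Fragment | undefined {
  const sorted = [...fragments].sort((a, b) => a.z - b.z);
  let aCrossings = 0;
  for (const f of sorted) {
    if (f.object === "A") {
      aCrossings++; // each A fragment is a crossing of objectA's surface
    } else if (aCrossings % 2 === 0) {
      return f; // outside objectA: this B fragment is visible
    }
  }
  return undefined; // every B fragment lies inside objectA
}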
Is this somehow possible in WebGL? I've also tried to implement it via a stencil buffer, based on this blog:
WebGL : How do make part of an object transparent?
But that is written for OpenGL. I translated the code instructions to WebGL, but it didn't work at all, and I'm not sure whether it would work with a 3D object instead of a 2D triangle.
Thanks a lot in advance!
Why not write a raytracer inside the fragment shader (aka pixel shader)?
You would need to render a fullscreen quad (two triangles), and the fragment shader would then be responsible for the raytracing. There are plenty of resources to read and learn from.
These links might be useful:
Distance functions - by iq
How shadertoy works
Simple webgl raytracer
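As a rough illustration of the setup (a minimal sketch in TypeScript against the raw WebGL API; the fragment shader body is just a placeholder for the actual ray marching):

const canvas = document.querySelector("canvas")!;
const gl = canvas.getContext("webgl")!;

const vsSource = `
  attribute vec2 aPos;
  varying vec2 vUv;
  void main() {
    vUv = aPos * 0.5 + 0.5;
    gl_Position = vec4(aPos, 0.0, 1.0);
  }`;
const fsSource = `
  precision mediump float;
  varying vec2 vUv;
  void main() {
    // Cast a ray per pixel here and combine objectA/objectB as SDFs.
    gl_FragColor = vec4(vUv, 0.0, 1.0); // placeholder output
  }`;

function compile(type: number, src: string): WebGLShader {
  const s = gl.createShader(type)!;
  gl.shaderSource(s, src);
  gl.compileShader(s);
  return s;
}

const prog = gl.createProgram()!;
gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(prog);
gl.useProgram(prog);

// Two triangles covering all of clip space.
const quad = new Float32Array([-1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, 1]);
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER, quad, gl.STATIC_DRAW);
const loc = gl.getAttribLocation(prog, "aPos");
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLES, 0, 6);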
EDIT:
Raytracing with SDFs (signed distance functions, the usual building block for constructive solid geometry (CSG)) is a good way to handle what you need, and is how such intersections are generally achieved. Intersections, and boolean operations in general, on mesh geometry (i.e. made of polygons) are not done during rendering; rather, special algorithms do all the processing ahead of rendering, so the resulting mesh actually exists in memory with its topology fully computed, and is then simply rendered.
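For intuition, the CSG operators on signed distances are one-liners (sketched here in TypeScript for readability; in practice they live in the shader, and the two sdObject functions below are illustrative stand-ins):

type Vec3 = [number, number, number];
const len = (p: Vec3) => Math.hypot(p[0], p[1], p[2]);

// Signed distance to a sphere of radius r centred at the origin.
const sdSphere = (p: Vec3, r: number) => len(p) - r;

// CSG on signed distances:
const opUnion     = (d1: number, d2: number) => Math.min(d1, d2);
const opIntersect = (d1: number, d2: number) => Math.max(d1, d2);
const opSubtract  = (d1: number, d2: number) => Math.max(d1, -d2); // d1 minus d2

// Illustrative stand-ins for the two objects in the question:
const sdObjectA = (p: Vec3) => sdSphere(p, 1.0); // the "cutter"
const sdObjectB = (p: Vec3) => sdSphere([p[0] - 1.5, p[1], p[2]], 1.0);

// "objectB with a hole where objectA overlaps it" is then simply:
const holed = (p: Vec3) => opSubtract(sdObjectB(p), sdObjectA(p));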
Depending on the specific scenario that you have, you might be able to achieve the effect under some requirements and restrictions.
There are a few important things to take into account: depth peeling (i.e. storing depth values of multiple fragments per pixel), triangle winding (CW or CCW), and polygon face orientation (front-facing or back-facing).
Say, for example, that both of your meshes are convex; then rendering the back-facing polygons of objectA, then those of objectB, then the front-facing polygons of A, then those of B, might achieve the desired effect (I'm not including full calculations for all the overlap cases that can exist).
Under some other sets of restrictions you might be able to achieve the effect.
In your specific example, the first image shows front-facing faces of the cube, while in the second image you can see the back face of the cube. That already implies that you store at least two depth values per pixel somehow.
There is also a distinction between intersecting in screen space, on volumes, or on faces. Your example works with faces, which is the hardest case. There are two sub-cases: the one you've shown, where the fragments of one mesh that lie inside the other are simply discarded (i.e. you drill a hole in the surface), and the case where you perform a true boolean operation on the volumes, which never leaves an open hole in the surface; the latter is usually done with an algorithm that computes an output mesh. SDFs are great for volumes. Screen-space intersection is achieved simply by using the depth test to discard some fragments.
Again, there are too many scenarios; it depends on what you're trying to achieve and on the constraints you're working with.

How do I check if a set of planar polygons constructs a watertight polyhedron

I am currently wondering if there is a common algorithm to check whether a set of planar polygons, not necessarily triangles, constructs a watertight polyhedron. Each polygon has an orientation (normal vector). A simple solution would just say yes or no. A more advanced version would point out the edges where the polyhedron is "open". I am not really interested in how to close the polyhedron.
I would like to point out that my "holes" are not necessarily small; e.g., a whole face of a cube might be missing. Thus, the "undersampling correction" algorithms don't seem to be the correct approach. Furthermore, I am talking about 100 - 1000 polygons, not 1,000,000, so computation time should not really be a problem.
Any hints or tips?
Kind regards,
curator
I believe you can use a simple topological test -- count the number of times each edge appears in the full list of polygons.
If the set of polygons defines the surface of a closed volume, each edge should have count >= 2, indicating that each edge is shared by (at least) two adjacent polygons. If the surface is manifold, count == 2 exactly.
Edges with count==1 indicate open regions of the surface.
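A minimal sketch of that counting test (illustrative names; polygons given as ordered arrays of vertex indices):

type Polygon = number[]; // vertex indices, in order around the polygon

function openEdges(polygons: Polygon[]): Array<[number, number]> {
  const count = new Map<string, { edge: [number, number]; n: number }>();
  for (const poly of polygons) {
    for (let i = 0; i < poly.length; i++) {
      const u = poly[i];
      const v = poly[(i + 1) % poly.length];
      const k = u < v ? `${u},${v}` : `${v},${u}`;
      const rec = count.get(k) ?? { edge: [u, v] as [number, number], n: 0 };
      rec.n++;
      count.set(k, rec);
    }
  }
  // Edges used by exactly one polygon bound an open region of the surface.
  return [...count.values()].filter((r) => r.n === 1).map((r) => r.edge);
}

// Watertight and manifold: openEdges(...) is empty and every count equals 2.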
The above answer does not cover many cases. A more correct (but not necessarily complete; I wouldn't know) test is to ensure that every edge of every polygon (or of the mesh/polyhedron) has an even number of faces connected to it. Consider the following mesh:
The segment (line) between the closest vertex and the one below it is attached to 3 faces (one of the outer triangle and two of the inner triangle), which is greater than two. However, this mesh is clearly not closed.

Three.js ParticleSystem flickering with large data

Back story: I'm creating a Three.js-based 3D graphing library, similar to sigma.js but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js, with a single particle representing a single node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (each with X, Y, Z coordinates), determine the optimal camera position (X, Y, Z) from which all the points in the graph are visible.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then rescaling the points so that they fit in a sphere of radius 5 around the point (0, 0, 0). Since the points are then guaranteed to always fall in that area, I can set a static camera position (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified or duplicating all the points, neither of which is great.
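For reference, Solution 1's normalization can be sketched like this (an illustrative sketch in plain TypeScript, not the graphosaurus source; the bounding sphere is centred on the bounding box for brevity):

type Point = { x: number; y: number; z: number };

function normalizePoints(points: Point[], targetRadius = 5): Point[] {
  const min = { x: Infinity, y: Infinity, z: Infinity };
  const max = { x: -Infinity, y: -Infinity, z: -Infinity };
  for (const p of points) {
    min.x = Math.min(min.x, p.x); max.x = Math.max(max.x, p.x);
    min.y = Math.min(min.y, p.y); max.y = Math.max(max.y, p.y);
    min.z = Math.min(min.z, p.z); max.z = Math.max(max.z, p.z);
  }
  const c = { x: (min.x + max.x) / 2, y: (min.y + max.y) / 2, z: (min.z + max.z) / 2 };
  let r = 0;
  for (const p of points) {
    r = Math.max(r, Math.hypot(p.x - c.x, p.y - c.y, p.z - c.z));
  }
  const s = r > 0 ? targetRadius / r : 1;
  // Note: this duplicates the points, the very drawback mentioned above.
  return points.map((p) => ({
    x: (p.x - c.x) * s,
    y: (p.y - c.y) * s,
    z: (p.z - c.z) * s,
  }));
}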
My new solution (which we'll call Solution 2) involves not touching the coordinates of the input data, and instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that the near value of my camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values to fit my dataset, the problem went away. Since my dataset is user-dependent, I need to determine an algorithm to generate these values dynamically.
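One plausible way to derive them (a sketch, not graphosaurus code): fit the frustum tightly around the data's bounding sphere, keeping near as large as possible because depth precision is concentrated close to the camera.

function nearFarForData(
  cameraPos: { x: number; y: number; z: number },
  center: { x: number; y: number; z: number },
  radius: number
): { near: number; far: number } {
  const d = Math.hypot(
    cameraPos.x - center.x,
    cameraPos.y - center.y,
    cameraPos.z - center.z
  );
  // Clamp near to a small positive value in case the camera enters the sphere.
  const near = Math.max(d - radius, 0.1);
  const far = d + radius;
  return { near, far };
}

// e.g. with Three.js: camera.near = near; camera.far = far;
//      camera.updateProjectionMatrix();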
I noticed that in Solution 2 the flickering only occurs while the camera is moving. One possible reason is that, when the camera position changes rapidly, different transforms get applied to different particles: if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the cause. That means you should apply the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.
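A sketch of that separation (assuming Three.js-style camera, scene, and renderer objects already exist; computeCameraFromMouse is a hypothetical helper):

declare const camera: any;   // e.g. THREE.PerspectiveCamera, assumed to exist
declare const renderer: any; // e.g. THREE.WebGLRenderer
declare const scene: any;    // e.g. THREE.Scene
declare function computeCameraFromMouse(e: MouseEvent): { x: number; y: number; z: number };

let desiredPosition = { x: 0, y: 0, z: 10 };

document.addEventListener("mousemove", (event) => {
  // Input handlers only record the desired state; they never move the camera.
  desiredPosition = computeCameraFromMouse(event);
});

function animate() {
  requestAnimationFrame(animate);
  // One consistent transform is applied to everything, then a single draw.
  camera.position.set(desiredPosition.x, desiredPosition.y, desiredPosition.z);
  renderer.render(scene, camera);
}
animate();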
