How to select a certain edge in Geometry Nodes (Blender 3.4.0 alpha)

Two questions about Geometry Nodes.
To retopologize a certain mesh, I'd like to subdivide every edge whose two neighboring faces are both triangles.
Questions (see https://i.stack.imgur.com/kG3Us.png):
1. How can I find such an edge with Geometry Nodes?
2. Is there a general strategy for finding specific elements (the index of a vertex, edge, or face)?
History:
I want to retopologize a shape (call it A) that comes out of a Convex Hull node, because its topology is messy.
To do so, I chose the approach of shrinking a simple shape onto A.
The node order is Bounding Box > Subdivide > Set Position, but large flat areas still remain.
To fit the shape more precisely, I am trying to subdivide additionally only in those areas and then apply a final Set Position node to fit to the original messy A.
After trying some ideas (below), I am now extruding the faces, scaling the top selection to zero, merging them, and then snapping these new vertices onto the messy A.
And I find that the edges between the faces remain. 🤣
That is my question above: how can I fit these edges onto A?
Ideas I have tried:
Separate the large areas > Subdivide > Join Geometry > Set Position makes holes.
Separate the large areas > Subdivide > Convex Hull > Mesh Boolean makes messy topology.
Not subdividing the large areas at all, e.g. scaling the bounding box up enough to make them disappear, results in overstretching elsewhere in the mesh, which looks even harder to solve, so I would rather fix the large flat areas if I can.
(https://i.stack.imgur.com/QzHKa.jpg)
In short, I want to do retopology: fit a new shape that has clean topology onto the original messy shape.
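For reference, the selection in question 1 is easy to express outside Geometry Nodes with Blender's bmesh Python API. This is a minimal sketch, not the node-based answer itself, and it assumes the active object is a mesh in Edit Mode:

```python
# Minimal bmesh sketch: select every edge whose two adjacent faces
# are both triangles. Assumes the active object is a mesh in Edit Mode.
import bpy
import bmesh

obj = bpy.context.active_object
bm = bmesh.from_edit_mesh(obj.data)

for edge in bm.edges:
    faces = edge.link_faces
    # An edge qualifies when it borders exactly two faces and each
    # of those faces has three vertices (i.e. is a triangle).
    edge.select = (len(faces) == 2 and
                   all(len(f.verts) == 3 for f in faces))

bmesh.update_edit_mesh(obj.data)
```

The same predicate hints at an answer to question 2: element indices are rarely stable across modifiers, so it is usually more robust to select elements by a topological property (neighbor counts, face vertex counts) than by a hard-coded index.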

Related

In computer graphics, faces of a polygon

In computer graphics, why do we need to distinguish between the back face and the front face of a polygon?
There are several reasons why a triangle's face might be important.
Face Culling
If you draw a cube, you can only ever see at most three of its sides. The front three sides will block your view of the back three sides. And while depth testing will prevent drawing the fragments corresponding to the back sides... why bother? In order to do depth testing, you have to rasterize those triangles. That's a lot of work for triangles that won't be seen.
Therefore, we have a way to cull triangles based on their facing, before performing rasterization on them. While vertex processing will still be done on those triangles, they will be discarded before doing heavy-weight operations like rasterization.
Through face culling, you can eliminate approximately half of the triangles in a closed mesh. That's a pretty decent performance savings.
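For concreteness, the facing test itself is cheap: under OpenGL's default glFrontFace(GL_CCW) convention, a triangle is front-facing when its window-space vertices wind counter-clockwise, which the sign of its doubled signed area gives directly. A small sketch (the function name is mine):

```python
# Facing test used by back-face culling: the sign of the doubled signed
# area of the window-space triangle. CCW winding => front-facing
# under OpenGL's default glFrontFace(GL_CCW) convention.
def is_front_facing(p0, p1, p2):
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    signed_area2 = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    return signed_area2 > 0

print(is_front_facing((0, 0), (1, 0), (0, 1)))  # True: CCW, kept
print(is_front_facing((0, 0), (0, 1), (1, 0)))  # False: CW, culled
```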
Two-Sided Rendering
A leaf is a thin object, so you might render it as one flat polygon, without face culling. However, a leaf does not look the same on both sides. The top side is usually quite a bit darker than the bottom side.
You can achieve this effect by sending two colors when rendering the leaf; one meant for the top side and one for the bottom. In your fragment shader, you can detect which side of the polygon that fragment was generated from, by looking at the built-in variable gl_FrontFacing. That boolean can be used to select which color to use.
It could even be used to select which texture to sample from, if you want to do more complex two-sided rendering.
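As a sketch of what that looks like in practice, here is a minimal fragment shader using gl_FrontFacing, embedded as a Python source string the way GL wrappers usually carry shaders (only gl_FrontFacing is built in; the uniform and output names are illustrative):

```python
# Hypothetical two-sided leaf shader, carried as a source string.
LEAF_FRAGMENT_SHADER = """
#version 330 core
uniform vec3 top_color;     // darker top side of the leaf
uniform vec3 bottom_color;  // paler underside
out vec4 frag_color;

void main() {
    // gl_FrontFacing is true for fragments generated from the
    // front-facing side of the polygon.
    frag_color = vec4(gl_FrontFacing ? top_color : bottom_color, 1.0);
}
"""
```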

Efficient data structure for nearest neighbour search in a tiled context

I am looking for a data structure to store irregular elevation data {x_i, y_i, z_i} that facilitates fast look-up of points within an xy range.
From what I gather, a k-d tree should be suitable for this, and also fairly simple to implement?
However the number of points in the elevation dataset may be enormous. It may therefore not be possible to process all points in one go. Instead I aim to divide the xy region into tiles and process each tile separately:
The points within the green rectangle are those needed for tile 1. When I move on to tile 2, I will need the points within a green rectangle centered around tile 2. The two rightmost points in the green rectangle around tile 1 will still be needed; the other points could be swapped out of memory if necessary. In addition, four more points will be needed for tile 2.
A k-d tree may therefore not be optimal, since this would require me to rebuild the complete tree for each new tile? Would an R-tree be a better choice?
The points themselves should be stored on disk in some clever format and read into memory just before they are needed. Before I start processing tile 1, I could tell the data structure maintaining the points that I will need tile 2 next, and it could then begin reading the necessary points from disk in a separate thread.
I was considering using smaller tiles for loading points into the data structure. For instance, the points in the figure could be divided into 16x16 tiles.
Are there any libraries in C/C++ that implement this functionality?
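Not C/C++, but for illustration here is the tile-plus-margin idea sketched in Python with SciPy's cKDTree (which wraps a C implementation); the tile size and the stand-in tile reader are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
TILE = 1000.0  # tile edge length in xy units (an assumption)

def load_tile_points(tx, ty, n=100):
    # Stand-in for reading one disk tile: returns n random {x, y, z}
    # points inside tile (tx, ty). Replace with your on-disk format.
    lo = np.array([tx * TILE, ty * TILE, 0.0])
    return lo + rng.random((n, 3)) * np.array([TILE, TILE, 50.0])

def tree_for_tile(tx, ty):
    # Gather the tile itself plus its eight neighbours (the "green
    # rectangle" margin), so range queries near the border still work.
    pts = np.vstack([load_tile_points(ax, ay)
                     for ax in (tx - 1, tx, tx + 1)
                     for ay in (ty - 1, ty, ty + 1)])
    return pts, cKDTree(pts[:, :2])      # index on (x, y) only

pts, tree = tree_for_tile(0, 0)
idx = tree.query_ball_point([500.0, 500.0], r=50.0)  # points in an xy range
print(pts[idx])                                      # matching {x, y, z} rows
```

On the C++ side, an R-tree library such as libspatialindex or Boost.Geometry's rtree offers the same kind of windowed queries if rebuilding a per-tile k-d tree turns out to be too costly.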

polygons from BSP

I have a 3D volume given by a binary space partitioning tree. Usually these are built from polygon models, with the split polygons already stored inside the tree nodes.
But mine is not, so I have no polygons. Every node has nothing but its cut plane (given, for example, by a normal and a distance from the origin). The tree still represents a solid 3D volume, defined by all the cuts made. However, for visualization I need a polygonal mesh of this volume. How can that be reconstructed efficiently?
The crude method would be to convert the infinite half-spaces of the leaves into large enough polyhedra (e.g. cubes) and push every single one of them up the tree, cutting it by every node plane it passes. That seems extremely costly, as the tree may be unbalanced (e.g. if naively built from a convex polyhedron). Is there any classic solution?
In order to recover the polygonal surface you need to intersect the planes: each vertex of a polygon is generated by the intersection of three planes, and each edge by the intersection of two planes. But making this efficient and numerically stable is no trivial task, so I propose using qhalf, which is part of Qhull. Documentation of qhalf's input and output can be found here. Of course, you can also use Qhull (including the qhalf functionality) as a library.
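For a feel of the halfspace route, SciPy exposes the same Qhull machinery as qhalf through scipy.spatial.HalfspaceIntersection. A toy sketch, with a unit cube standing in for the planes collected from one BSP leaf (the shapes and interior point are assumptions for the demo):

```python
import numpy as np
from scipy.spatial import ConvexHull, HalfspaceIntersection

# Each row [a, b, c, d] encodes the halfspace a*x + b*y + c*z + d <= 0.
halfspaces = np.array([
    [-1.0,  0.0,  0.0,  0.0],   # x >= 0
    [ 1.0,  0.0,  0.0, -1.0],   # x <= 1
    [ 0.0, -1.0,  0.0,  0.0],   # y >= 0
    [ 0.0,  1.0,  0.0, -1.0],   # y <= 1
    [ 0.0,  0.0, -1.0,  0.0],   # z >= 0
    [ 0.0,  0.0,  1.0, -1.0],   # z <= 1
])
interior = np.array([0.5, 0.5, 0.5])        # must lie strictly inside
hs = HalfspaceIntersection(halfspaces, interior)
mesh = ConvexHull(hs.intersections)          # mesh of the recovered volume
print(mesh.simplices)                        # triangles of the boundary
```

The numerically delicate three-plane intersections are thereby handled by Qhull rather than by a hand-rolled 3x3 solve.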

circle drawing algorithm for n-pixel border

I know the Bresenham and related algorithms, and I found a good algorithm to draw a circle with a 1-pixel-wide border. Is there any 'standard' algorithm to draw a circle with an n-pixel-wide border, without resorting to drawing n circles?
Drawing each pixel along with its n^2 surrounding pixels might be a solution, but it draws many more pixels than needed.
I am writing a graphics library for an embedded system, so I am not looking for a way to do this using an existing library, although an open-source library that implements this function might be a lead.
Compute the points for a single octant for both radii at the same time and replicate them eight ways, which is how Bresenham circles are usually drawn anyway. To avoid overdrawing (e.g. for XOR drawing), the second octant should be constrained to draw outside the first octant's x-extents.
Note that this approach breaks down if the line is very thick compared to the radius.
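The eight-way replication mentioned above is the standard symmetry trick; a small helper, with set_pixel standing in for the library's pixel write:

```python
# Mirror one computed octant point into all eight octants of the circle.
# set_pixel is a placeholder for the target library's pixel write.
def plot8(cx, cy, x, y, set_pixel):
    for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                   (x, -y), (y, -x), (-x, -y), (-y, -x)):
        set_pixel(cx + px, cy + py)
```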
Treat it as a rasterization problem:
Take the bounding box of your annulus.
Consider the image rows falling in the bounding box.
For each row, compute the intersection with the two circles (i.e. solve x^2 + y^2 = r^2 for x, giving x = ±sqrt(r^2 - y^2) for each radius, with x, y relative to the circle centre).
Fill in the spans. Repeat for the next row.
This approach generalizes to all sorts of shapes, can produce sub-pixel coordinates useful for anti-aliasing and scales better with increasing resolution than hacky solutions involving multiple shifted draws.
If the sqrt looks scary for an embedded system, bear in mind there are fast approximate algorithms which would probably be good enough, especially if you're rounding off to the nearest pixel.
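A minimal sketch of that per-row span fill, using integer square roots so the only per-row cost is two isqrt calls; the boundary convention (inner circle excluded, outer circle included) and the set_pixel callback are choices of mine:

```python
import math

# Fill the annulus r_inner < sqrt(x^2 + y^2) <= r_outer, row by row.
# set_pixel is a placeholder for the target library's pixel write.
def fill_annulus(cx, cy, r_outer, r_inner, set_pixel):
    for y in range(-r_outer, r_outer + 1):
        xo = math.isqrt(r_outer * r_outer - y * y)      # outer half-width
        if abs(y) <= r_inner:
            xi = math.isqrt(r_inner * r_inner - y * y)  # hole half-width
            spans = ((-xo, -xi - 1), (xi + 1, xo))      # two spans beside the hole
        else:
            spans = ((-xo, xo),)                        # row misses the hole
        for x0, x1 in spans:
            for x in range(x0, x1 + 1):
                set_pixel(cx + x, cy + y)

pixels = set()
fill_annulus(0, 0, 10, 7, lambda x, y: pixels.add((x, y)))
print(len(pixels))   # pixel count of a 3-pixel-wide ring
```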

Smooth transitions between two intersecting polygons (interesting problem)

I have an interesting problem that I've been trying to solve for a while. There is no "right" solution to this, as there is no strict criteria for success. What I want to accomplish is a smooth transition between two simple polygons, from polygon A to polygon B. Polygon A is completely contained within polygon B.
My criteria for this transition are:
The transition is continuous in time and space
The area that is being "filled" from polygon A into polygon B should be filled in as if there was a liquid in A that was pouring out into the shape of B
It is important that this animation can be calculated either on the fly, or be defined by a set of parameters that require little space, say less than a few KB.
Cheating is perfectly fine, any way to solve this so that it looks good is a possible solution.
Solutions I've considered, and mostly ruled out:
Pairing up vertices in A and B and simply interpolating. This will not look good and does not work in the case of concave polygons.
Dividing the area B-A into convex polygons, perhaps a Voronoi diagram, and calculate the discrete states of the polygon by doing a BFS on the smaller convex polygons. Then I interpolate between the discrete states. Note: If polygon B-A is convex, the transition is fairly trivial. I didn't go with this solution because dividing B-A into equally sized small convex polygons was surprisingly difficult
Simulation: Subdivide polygon A. Move each vertex along the polygon line normal (outwards) in discrete but small steps. For each step, check if vertex is still inside B. If not, then move back to previous position. Repeat until A equals B. I don't like this solution because the check to see whether a vertex is inside a polygon is slow.
Does anybody have any different ideas?
If you want to keep this simple and somewhat fast, you could go ahead with your last idea, where you scale polygon A so that it gradually fills polygon B. You don't necessarily have to check whether the scaled-outward vertices are still inside polygon B. Depending on what your code environment and API are like, you could mask the pixels of the expanding polygon A with the outline of polygon B.
In modern OpenGL, you could do this inside a fragment shader. You would render polygon B to a texture, send that texture to the shader, and then use it to look up whether the current fragment being rendered maps to a texture value that was set by polygon B. If it does not, the fragment gets discarded. You would need the texture to be as large as the screen; if it is not, you would need to include some camera calculations in your shaders so you can "render" the fragment-to-test into the texture in the same way you rendered polygon B into it.
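A CPU analogue of that mask-the-expansion idea can be sketched with Shapely (my choice, not part of the answer): grow A by an increasing offset and clip against B, yielding one polygon per animation frame.

```python
from shapely.geometry import Polygon

# Demo shapes: A strictly inside B (both are assumptions for the sketch).
A = Polygon([(2, 2), (4, 2), (3, 4)])
B = Polygon([(0, 0), (6, 0), (6, 6), (0, 6)])

def frame(t, step=0.5):
    # buffer() offsets A outward along its boundary; intersection()
    # plays the role of the polygon-B mask in the shader approach.
    return A.buffer(t * step).intersection(B)

for t in range(6):
    print(round(frame(t).area, 2))  # area grows monotonically toward B's
```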
