Suppose we have drawn a large graph in which many edges intersect. Now we apply a graph algorithm that removes some edges. We would like to visualize the algorithm by erasing the edges from the screen as the algorithm removes them.
Is there a graphics technique that allows erasing an edge without re-drawing all the edges intersected by that edge (such re-drawing might slow down the visualization a lot if many edges are intersected)?
I have two questions about Geometry Nodes.
To retopologize a certain mesh, I'd like to subdivide the edges whose two neighboring faces are both triangles.
(image: https://i.stack.imgur.com/kG3Us.png)
Questions:
1. How can I find such an edge with Geometry Nodes? (A bmesh sketch of the selection criterion follows below.)
2. Is there a general strategy for finding specific elements (the index of a vertex, edge, or face)?
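I don't know a pure-nodes answer, but as a point of comparison, here is a minimal bmesh sketch of the selection criterion in question 1: it selects every edge whose two linked faces are both triangles. It assumes the active object is a mesh currently in Edit Mode; everything used is the standard bpy/bmesh API.

import bpy
import bmesh

# Select every edge whose two adjacent faces are both triangles.
# Assumes the active object is a mesh currently in Edit Mode.
bm = bmesh.from_edit_mesh(bpy.context.active_object.data)
for edge in bm.edges:
    faces = edge.link_faces
    edge.select = len(faces) == 2 and all(len(f.verts) == 3 for f in faces)
bmesh.update_edit_mesh(bpy.context.active_object.data)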
History:
I want to retopologize the shape (call it A) produced by the Convex Hull node, because it is messy.
To do so, I chose the approach of shrinking a simple shape onto A.
The node order is Bounding Box > Subdivide > Set Position, but large flat areas still remain.
To fit the shape more precisely, I am trying to subdivide additionally only in those areas and then finally use the Set Position node again to fit to the original messy A.
After trying some ideas (below), I am now attempting to extrude the faces, scale the top selection to zero, merge them, and then set these new vertices onto the messy A.
And I find that the edges between the faces remain. 🤣
This is my question above: how can I fit these edges onto A?
Ideas I have tried:
Separate the large areas > Subdivide > Join Geometry > Set Position makes holes.
Separate the large areas > Subdivide > Convex Hull > Mesh Boolean makes messy topology.
Approaches that avoid subdividing the large areas, such as scaling the bounding box up enough that the large flat areas disappear, overstretch the rest of the mesh, which looks even harder to solve, so I would rather fix the large flat areas if I can.
(image: https://i.stack.imgur.com/QzHKa.jpg)
In short, I want to do retopology: fit a new shape that has clean topology onto the original messy shape.
I want to find the sharp edges of any given mesh. For reference, here is a mesh of a chair:
It's clear that the border of the back of the chair is an edge, as are the four curves that outline the arms and legs, etc.
I would like to sample points along these edges. Is there a known algorithm for doing this?
A couple of approaches I thought of:
Triangle edge detection
Consider every pair of connected points in the mesh. Each of these segments should be part of two triangles. If the angle between the surface normals of the two triangles is wide enough, that segment should be considered an edge.
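A minimal numpy sketch of this test, assuming an (N, 3) vertex array and an (M, 3) triangle index array; the 30-degree threshold is an arbitrary choice:

import numpy as np
from collections import defaultdict

def sharp_edges(vertices, faces, angle_deg=30.0):
    # Unit normal of every triangle.
    tri = vertices[faces]
    normals = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    # Map each undirected edge to the triangles that share it.
    edge_faces = defaultdict(list)
    for fi, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_faces[tuple(sorted(e))].append(fi)
    # An edge is sharp when its two faces' normals differ by more than the threshold.
    cos_thresh = np.cos(np.radians(angle_deg))
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and np.dot(normals[fs[0]], normals[fs[1]]) < cos_thresh]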
Point cloud edge detection
With open3d, I can easily convert the mesh into a point cloud, where each point has a surface normal. I could potentially search the point cloud for sudden changes in the surface normals. Though I think that could get fairly complex, as I'd have to find the nearest neighbors of every point.
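A rough open3d sketch of that idea; the file name, sample count, 16-neighbour search, and 40-degree threshold are all illustrative assumptions:

import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("chair.ply")          # hypothetical input file
pcd = mesh.sample_points_uniformly(number_of_points=20000)
pcd.estimate_normals()

tree = o3d.geometry.KDTreeFlann(pcd)
points = np.asarray(pcd.points)
normals = np.asarray(pcd.normals)

edge_points = []
for i in range(len(points)):
    _, idx, _ = tree.search_knn_vector_3d(points[i], 16)
    nbrs = np.asarray(idx)[1:]                         # drop the query point itself
    # Flag the point when some neighbour's normal deviates sharply from its own
    # (abs() because sampled normals may be inconsistently oriented).
    if np.min(np.abs(normals[nbrs] @ normals[i])) < np.cos(np.radians(40)):
        edge_points.append(i)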
I'm trying to write a 3D renderer for a scientific application which is based on vectors rather than pixels. The idea is to be able to output to vector formats, such as SVG, so I would like to keep everything as vector objects, rather than pixelizing. The objects will often have transparency.
At the moment I decompose everything into 3D triangles and line segments and split them where they overlap. The scene is then projected and painted with depth sorting (painter's algorithm). I sort by the minimum depth of each triangle, secondarily by the maximum depth to break ties (sketched below). This fails when long thin triangles behind bigger triangles rise too high in the order.
The scene is only drawn once for the same set of objects. I obviously can't use z-buffering because of the vectorization and transparency. Is there a robust and reasonably fast method of drawing the triangles in the correct z order? Typically there could be a few thousand triangles.
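For concreteness, the sort key described above looks something like this (coordinates invented; depth grows away from the viewer, so back-to-front means descending):

# Sort back-to-front by each triangle's nearest depth, tie-breaking on its farthest.
triangles = [
    [(0.0, 0.0, 5.0), (1.0, 0.0, 5.0), (0.0, 1.0, 5.0)],
    [(0.0, 0.0, 2.0), (9.0, 0.0, 2.0), (0.0, 0.1, 9.0)],  # long thin triangle spanning many depths
]
triangles.sort(key=lambda t: (min(v[2] for v in t), max(v[2] for v in t)), reverse=True)
# The thin triangle sorts in front despite reaching z = 9, which is exactly the failure described.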
I'm trying to implement a graphics pipeline in software. I have some problems with clipping and culling now.
Basically, there are two main concerns:
When should back-face culling take place: in eye coordinates, clip coordinates, or window coordinates? I initially did culling in eye coordinates, thinking this would relieve the burden of the clipping stage, since many back-facing vertices would already have been discarded. But later I realized that this way each vertex needs two matrix multiplications, namely multiply by the model-view matrix --> cull --> multiply by the perspective matrix, which increases the overhead to some extent.
How do I do clipping and reconstruct the triangles? As far as I know, clipping happens in clip coordinates (after the perspective transformation), in other words in homogeneous coordinates, where each vertex is tested for whether it should be discarded by comparing its x, y, z components against its w component. So far so good, right? But after that I need to reconstruct the triangles that have had one or two vertices discarded. I googled that the Liang-Barsky algorithm would be helpful in this case, but in clip coordinates which clipping planes should I use? Should I just record the clipped triangles and reconstruct them in NDC?
Any idea will be helpful. Thanks.
(1)
Back-face culling can occur wherever you want.
On the 3dfx hardware, and probably the other cards that only rasterised, it was implemented in window coordinates. As you say, that leaves you processing some vertices you never use, but you need to weigh that up against your other costs.
You can also cull in world coordinates; you know the location of the camera so you know a vector from the camera to the face — just go to any of the edge vertices. So you can test the dot product of that against the normal.
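A minimal sketch of that world-space test (names assumed; camera_pos, tri, and normal are numpy arrays in world coordinates):

import numpy as np

def is_back_facing(camera_pos, tri, normal):
    # View vector from the camera to any vertex of the face; the face is
    # back-facing when it points the same way as the outward normal.
    to_face = tri[0] - camera_pos
    return np.dot(to_face, normal) > 0.0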
When I was implementing a software rasteriser for a z80-based micro I went a step beyond that and transformed the camera into model space. So you get the inverse of the model matrix (which was cheap in this case because they were guaranteed to be orthonormal, so the transpose would do), apply that to the camera and then cull from there. It's still a vector difference and a dot product but if you're using the surface normals only for culling then it saves having to transform each and every one of them for the benefit of the camera. For that particular renderer I was then able to work forward from which faces are visible to determine which vertices are visible and transform only those to window coordinates.
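And a sketch of the model-space variant (rot is the model's 3x3 orthonormal rotation, trans its translation; all names are assumptions):

import numpy as np

def is_back_facing_model_space(rot, trans, camera_pos, vert, normal):
    # The inverse of an orthonormal rotation is its transpose, so the camera is
    # brought into model space once; the stored model-space normals are used as-is.
    cam_local = rot.T @ (camera_pos - trans)
    return np.dot(vert - cam_local, normal) > 0.0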
(2)
A variant on Sutherland-Hodgman is the thing I remember seeing most often. You'd do a forward scan around the outside of the polygon, checking each edge in turn and adjusting appropriately.
So e.g. you start with the convex polygon between points (V1, V2, V3). For each clipping plane in turn you'd do something like the following (written out as runnable Python, with the plane given by a normal n and offset d):
def clip_against_plane(verts, n, d):
    # Keep the half of space where dot(n, v) + d >= 0 ("the good side of the plane").
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    out = []
    for i, v in enumerate(verts):
        w = verts[(i + 1) % len(verts)]     # next vertex, wrapping to 0 on the last edge
        dv, dw = dot(n, v) + d, dot(n, w) + d
        if dv >= 0:
            out.append(v)                   # v is on the good side of the plane
        if (dv >= 0) != (dw >= 0):          # the edge from v to w crosses the plane
            t = dv / (dv - dw)
            out.append(tuple(a + t * (b - a) for a, b in zip(v, w)))   # point of intersection, I
    return out
And repeat for each plane. If you're worried about repeated costs then you either need to adopt a structure with an extra level of indirection between faces and edges or just keep a cache. You'd probably do something like dash round the vertices once marking them as in or out, then cache the point of intersection per edge, looked up via the key (v1, v2). If you've set yourself up with the extra level of indirection then store the result in the edge object.
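For completeness, running the sketch above against a couple of illustrative planes (the plane constants are made up):

# Clip against every plane in turn, feeding each pass's output into the next.
poly = [(0.0, 0.0, 0.5), (4.0, 0.0, 5.0), (0.0, 4.0, 5.0)]
for n, d in [((0.0, 0.0, 1.0), -1.0),     # near plane: keep z >= 1
             ((0.0, 0.0, -1.0), 100.0)]:  # far plane: keep z <= 100
    poly = clip_against_plane(poly, n, d)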
I have a 3D volume given by a binary space partition tree. Usually these are made from polygon models, and the split polygons are already stored inside the tree nodes.
But mine is not, so I have no polygons. Every node has nothing but its cut plane (given, for example, by a normal and a distance from the origin). The tree still represents a solid 3D volume, defined by all the cuts made. However, for visualization I need a polygonal mesh of this volume. How can that be reconstructed efficiently?
The crude method would be to convert the infinite half-spaces of the leaves into large enough polyhedra (e.g. cubes) and push every single one of them up the tree, cutting it by every node's plane it passes. That seems extremely costly, as the tree may be unbalanced (e.g. if naively built from a convex polyhedron). Is there any classic solution?
In order to recover the polygonal surface you need to intersect the planes: each vertex of a polygon is generated by the intersection of three planes, and each edge by the intersection of two planes. But making this efficient and numerically stable is no trivial task, so I propose using qhalf, which is part of qhull. Documentation of the input and output of qhalf can be found here. Of course you can also use qhull (and the functionality of qhalf) as a library.
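If you are working from Python, scipy exposes the same qhull halfspace-intersection machinery; a minimal sketch for a single convex cell (the unit-cube planes and the interior point are illustrative):

import numpy as np
from scipy.spatial import ConvexHull, HalfspaceIntersection

# Each row encodes a halfspace A.x + b <= 0 as [nx, ny, nz, b].
halfspaces = np.array([
    [-1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, -1.0],   # 0 <= x <= 1
    [0.0, -1.0, 0.0, 0.0], [0.0, 1.0, 0.0, -1.0],   # 0 <= y <= 1
    [0.0, 0.0, -1.0, 0.0], [0.0, 0.0, 1.0, -1.0],   # 0 <= z <= 1
])
interior = np.array([0.5, 0.5, 0.5])        # any point strictly inside the cell
hs = HalfspaceIntersection(halfspaces, interior)
cell = ConvexHull(hs.intersections)         # vertices plus triangulated faces of the cell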