Finding a graph's centroid in D3.js - svg

I am trying to allow an entire graph (such as a collapsible dendrogram, http://mbostock.github.com/d3/talk/20111018/tree.html) to be moved within the visualization by using the centroid of the entire graph, but I do not know the best way to treat the graph as a discrete node and use its centroid coordinates for manipulation. How would the centroid of a graph be calculated so that it can be recalculated whenever the graph is dynamically resized by the addition of new nodes?

It sounds like in your case an easier course of action would be to get the container of the graph (i.e. the SVG) and calculate its center point. Assuming that the margins of the graph you're drawing are not too big and/or unbalanced, the behaviour would be very similar to computing the centroid of the graph, but much easier to achieve.
The code would look something like this.
var bbox = svg.getBBox(),
    centerx = bbox.x + bbox.width / 2,
    centery = bbox.y + bbox.height / 2;

Perhaps this is helpful: http://graphstream-project.org/doc/Algorithms/Centroid_1.0/. I know you are asking in the context of D3.js, but there may be something to leverage from the centroid implementation in GraphStream.

Related

Consistent normal calculation of a point cloud

Is there a library in Python or C++ that is capable of estimating normals of point clouds in a consistent way?
By "in a consistent way" I mean that the orientation of the normals is globally preserved over the surface.
For example, when I use the Python open3d package:
downpcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=4, max_nn=300))
I get inconsistent results, where some of the normals point inside while the rest point outside.
Many thanks.
UPDATE: GOOD NEWS!
The tangent plane algorithm is now implemented in Open3D!
The source code and the documentation.
You can just call pcd.orient_normals_consistent_tangent_plane(k=15), where k is the knn graph parameter.
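For reference, a minimal end-to-end sketch (assuming a recent Open3D build; cloud.ply is just a placeholder for your own data):

import open3d as o3d

# Load a point cloud (replace cloud.ply with your own file).
pcd = o3d.io.read_point_cloud("cloud.ply")

# Estimate raw normals; their signs will generally be inconsistent.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=4, max_nn=300))

# Flip normals so that neighbouring points agree, using the tangent plane /
# minimum spanning tree propagation (k = size of the knn graph).
pcd.orient_normals_consistent_tangent_plane(k=15)

o3d.visualization.draw_geometries([pcd], point_show_normal=True)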
Original answer:
Like Mark said, if your point cloud comes from multiple depth images, then you can call open3d.geometry.orient_normals_towards_camera_location(pcd, camera_loc) before concatenating them together (assuming you're using python version of Open3D).
However, if you don't have that information, you can use the tangent plane algorithm:
1. Build the knn graph for your point cloud. The graph nodes are the points; two points are connected if one is among the other's k nearest neighbours.
2. Assign weights to the edges in the graph. The weight associated with edge (i, j) is 1 - |ni ⋅ nj|, where ni and nj are the (unoriented) normals at points i and j.
3. Generate the minimum spanning tree of the resulting graph.
4. Root the tree at an initial node, then traverse the tree in depth-first order, assigning each node an orientation that is consistent with that of its parent.
The above algorithm comes from Section 3.3 of Hoppe et al.'s 1992 SIGGRAPH paper, Surface Reconstruction from Unorganized Points. The algorithm is also open sourced.
AFAIK the algorithm does not guarantee a perfect orientation, but it should be good enough.
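A rough NumPy/SciPy sketch of those four steps (not the Open3D implementation; orient_normals is a hypothetical helper, and points and normals are assumed to be (N, 3) arrays):

import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, depth_first_order

def orient_normals(points, normals, k=15):
    # 1. knn graph: connect each point to its k nearest neighbours.
    n = len(points)
    _, idx = cKDTree(points).query(points, k=k + 1)   # column 0 is the point itself
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    # 2. Edge weight 1 - |ni . nj|: small when neighbouring normals are nearly parallel.
    #    A tiny epsilon keeps zero-weight edges from being dropped by the sparse MST.
    w = 1.0 - np.abs(np.einsum('ij,ij->i', normals[rows], normals[cols])) + 1e-8
    graph = coo_matrix((w, (rows, cols)), shape=(n, n)).tocsr()
    graph = graph.maximum(graph.T)                    # make the graph undirected
    # 3. Minimum spanning tree of the weighted graph.
    mst = minimum_spanning_tree(graph)
    # 4. Depth-first traversal from an arbitrary root, flipping every normal that
    #    disagrees with its parent's. (Only the root's connected component is visited,
    #    so a disconnected knn graph would need one traversal per component.)
    order, parents = depth_first_order(mst, i_start=0, directed=False)
    for node in order[1:]:
        if np.dot(normals[node], normals[parents[node]]) < 0:
            normals[node] *= -1
    return normals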
If you know the viewpoint from which each point was captured, it can be used to orient the normals.
I assume that this is not the case, so given your data, which seems rather watertight and uniformly sampled, mesh reconstruction is a promising route.
The PCL library offers many alternatives in its surface module. For the purpose of normal estimation, I would start with either:
ConcaveHull
Greedy projection triangulation
Although simple, they should be enough to produce a single coherent mesh.
Once you have a mesh, each triangle defines a normal (the cross product of two of its edges). It is important to note that a mesh isn't just a collection of independent faces. The faces are connected, and this connectivity enforces a coherent orientation across the mesh.
pcl::PolygonMesh is a "half-edge data structure". This means that every triangle face is defined by an ordered set of vertices, which defines the orientation:
order of vertices => direction of the cross product => well-defined, unambiguous normals
You can either use the normals from the mesh (nearest neighbor), or calculate a low resolution mesh and just use it to orient the cloud.
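As a small illustration of that winding-order point (a generic sketch, not PCL code): the order in which a triangle's vertices are listed fixes the direction of its normal, and swapping any two vertices flips it.

import numpy as np

def triangle_normal(a, b, c):
    # Counter-clockwise winding (a -> b -> c) yields one normal direction;
    # reversing the winding flips it.
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

a, b, c = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
print(triangle_normal(a, b, c))   # [0. 0. 1.]
print(triangle_normal(a, c, b))   # [0. 0. -1.]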

3d Graphing Application Questions

For one of my classes, I made a 3D graphing application (using Visual Basic). It takes in a string (z=f(x,y)) as input, parses it into RPN notation, then evaluates and graphs the equation. While it did work, it took about 20 seconds to graph. I would have liked to add slide bars to rotate the graph vertically and horizontally, but it was definitely too slow to allow that.
Does anyone know what programming languages would be best for this type of thing? Ideally, I will be able to smoothly rotate the function once it is graphed.
Also, I'm trying to find a better way to rotate the function. Right now, I evaluate it at a bunch of points and then plot those points to the screen. Every time the graph is rotated, the function must be re-evaluated and all the new points plotted, which takes just as long as the original graphing process because it is effectively treated as a completely new function.
Lastly, I need a better way to display the graph. Currently (using VB with Visual Studio) I plot 200,000 points to a chart, but this does not look great by any means. Eventually, I would like to be able to change the color based on height and apply other graphical effects to make it look better.
To be clear, I am not asking for someone to do any of this for me, but rather the means to go about coding this in an efficient way. I will greatly appreciate any advice anyone can give to help with any of these three concerns.
So I will explain how I would go about it using C++ and OpenGL. This doesn't mean those are the tools that you must use, it's just those are standard graphics tools.
Your function's surface is essentially a 2D manifold, which has the nice property of an intuitive mapping to a 2D space, commonly referred to as UV mapping.
What you should do is pick the ranges for the rectangular domain you want to display (minimum x, maximum x, minimum y, maximum y) and write two nested for loops of the form:
// Pseudocode
for (x = xmin; x < xmax; x++)
    for (y = ymin; y < ymax; y++)
        3D point = (x, y, f(x, y))
Store all of these points in a container (std::vector works fine for C++) and this will be your "mesh".
This is done once, prior to rendering. You then render those points using, for example, GL_POINTS, and rotate your graph mesh using rotations on the GPU.
This will only show scattered points, not a surface.
If you also wish to show the surface of your function, and not just the points, you can triangulate that set of points fairly easily.
Group each set of 4 contiguous vertices (i.e. the vertices at indices <x,y>, <x+1,y>, <x,y+1>, <x+1,y+1>) and create the 2 triangles:
(<x,y>, <x+1,y>, <x,y+1>), (<x+1,y>, <x+1,y+1>, <x,y+1>)
This will triangulate and fill the surface of your mesh.
Essentially you only need to build your mesh once, and this way rendering should run at 60 fps for something with 20,000 vertices, regardless of whether you render only points or triangles as well. See the sketch below for the mesh-building step.
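To make that concrete (the answer assumes C++/OpenGL; this is just a language-agnostic NumPy sketch of the same idea, with a hypothetical build_surface_mesh helper):

import numpy as np

def build_surface_mesh(f, xmin, xmax, ymin, ymax, nx=200, ny=200):
    # Sample the function once on a regular grid.
    xs = np.linspace(xmin, xmax, nx)
    ys = np.linspace(ymin, ymax, ny)
    X, Y = np.meshgrid(xs, ys, indexing='ij')
    vertices = np.column_stack([X.ravel(), Y.ravel(), f(X, Y).ravel()])

    # Two triangles per grid cell: (i,j)(i+1,j)(i,j+1) and (i+1,j)(i+1,j+1)(i,j+1).
    tris = []
    for i in range(nx - 1):
        for j in range(ny - 1):
            a = i * ny + j               # vertex (i, j)
            b = (i + 1) * ny + j         # vertex (i+1, j)
            c = i * ny + (j + 1)         # vertex (i, j+1)
            d = (i + 1) * ny + (j + 1)   # vertex (i+1, j+1)
            tris.append((a, b, c))
            tris.append((b, d, c))
    return vertices, np.array(tris, dtype=np.uint32)

# Example: z = sin(sqrt(x^2 + y^2)); the arrays would be uploaded to the GPU once.
verts, tris = build_surface_mesh(lambda x, y: np.sin(np.sqrt(x**2 + y**2)),
                                 -5, 5, -5, 5)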
Programming language is mostly not relevant, so VB itself is probably not the issue. You can have the same issues in Python, C#, C++, etc. Of course you must master the programming language you choose.
One key aspect is using the right algorithms and data structures. Proper use of memory allocation and a memory layout that maximizes CPU (and GPU) cache utilization are also key. Then you must take advantage of the platform and hardware capabilities (GPU and multithreading). For the last point you definitely need to use a graphics library such as OpenGL or Vulkan.

Untangle graph nodes: python-igraph

My question is similar to this question. I am using the python-igraph library to create an undirected graph. What I want to achieve is to untangle it as much as possible so that the minimum number of edge crossings is achieved. Then I want to convert this clean layout to a 2D plane where I can read the coordinates of each vertex and no vertex overlaps any other vertex.
For my current graph I have generated the layout based on the Fruchterman-Reingold force-directed algorithm (as shown in the image).
Can anyone give me some hints on how to achieve that? Or can this not be solved in polynomial time, since finding a placement of vertices with the minimum number of crossings is an NP-hard problem?
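For what it's worth, a minimal python-igraph sketch of computing a layout and reading back per-vertex coordinates; this only reduces tangling heuristically and does not guarantee a minimum-crossing drawing:

import igraph as ig

g = ig.Graph.Erdos_Renyi(n=30, m=45)   # placeholder graph; use your own

# Force-directed layouts; "kk" (Kamada-Kawai) is often a bit less tangled
# than "fr" (Fruchterman-Reingold) on small graphs.
layout = g.layout("kk")

# layout.coords is a list of [x, y] pairs, one per vertex; distinct vertices
# get distinct positions in general, though nothing prevents near-coincidence.
for v, (x, y) in zip(g.vs, layout.coords):
    print(v.index, x, y)

# Optional: render the layout (requires a cairo-based plotting backend).
ig.plot(g, layout=layout, target="graph.png")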

Algorithm for cutting a mesh using another mesh

I am looking for an algorithm that, given two meshes, can clip one using the other.
The simplest form of this is clipping a mesh using a plane. I've already implemented that by following something similar to what is described here.
What it does is basically inspect all mesh vertices and triangles with respect to the plane (the plane's normal and a point on it are given). If a triangle is completely above the plane, it is left untouched. If it falls completely below the plane, it is discarded. If some of the edges of the triangle intersect the plane, the intersection points with the plane are calculated and added as new vertices. Finally, a cap is generated for the hole where the mesh was cut.
The problem is that the algorithm assumes the plane is unbounded, so whatever is in its path is clipped. In the simplest form, I need an extension of this without the assumption of a plane of "infinite" size.
To clarify, imagine that we have a 3D model of a desk with 2 boxes on it. The boxes are adjacent (but not touching or stacked). The user will define a cutting plane of a limited width and height underneath the first box and performs the cut. We end up with a desk model (mesh) with a box on it and another box (mesh) that can be freely moved around/manipulated.
In the general form, I'd like the user to be able to define a bounding box for the box he/she wants to separate from the desk model and perform the cut using that bounding box.
If I could extend the algorithm I already have to an algorithm with limited-sized planes, that would be great for now.
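For context, here is a rough sketch of the per-triangle classification step described above (plane given by a point and a normal; classify_triangle is a hypothetical helper). Making the split respect a finite plane or bounding box is the part that still needs extending:

import numpy as np

def classify_triangle(tri, plane_point, plane_normal, eps=1e-9):
    # Signed distance of each vertex to the plane.
    d = (tri - plane_point) @ plane_normal
    if np.all(d >= -eps):
        return "above"      # keep untouched
    if np.all(d <= eps):
        return "below"      # discard
    return "straddles"      # split: intersection points become new vertices

tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
print(classify_triangle(tri, plane_point=np.array([0.0, 0.0, 0.5]),
                        plane_normal=np.array([0.0, 0.0, 1.0])))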
What you're looking for are constructive solid geometry/boolean algorithms with arbitrary meshes. It's considerably more complex than slicing meshes by an infinite plane.
Among the earliest and simplest research in this area, and a good starting point, is Constructive Solid Geometry for Polyhedral Objects by Laidlaw, Trumbore, and Hughes.
http://cs.brown.edu/~jfh/papers/Laidlaw-CSG-1986/main.htm
More elaborate solutions extend upon this subject with a variety of data structures.
The real complexity of the operation lies in the slicing algorithm that slices one triangle against another. The nightmare of implementing robust CSG lies in numerical precision. It's easy, when you involve objects far more complex than a cube, to run into cases where a slice is made just barely next to a vertex (at which point you have the tough decision of whether to merge the new split vertex or not before carrying out more splits), where polygons are coplanar (or almost coplanar), etc.
So I suggest initially erring on the side of using very high-precision floating-point numbers, possibly even higher than double precision, to focus on getting something working correctly and robustly. You can optimize later (the first pass should be to use an accelerator like an octree/kd-tree/BVH), but you'll avoid many headaches this way in your first iteration.
This is vastly simpler to implement at render time if you're focusing on, say, a raytracer rather than modeling software. With raytracers, all you have to do for this kind of arbitrary clipping is treat an object used to subtract from another as having its polygons flipped during the culling process. It's easy to solve robustly at the ray level, but quite a bit harder to do robustly at the geometric level.
Another thing you can do to make your life much easier, if you can afford it, is to voxelize your objects, compute subtractions/additions/unions on the voxels, and then translate the voxels back into a mesh. This is much easier to make robust, but harder to do efficiently, and the voxel-to-polygon conversion can get quite involved if you want better results than what marching cubes provide.
It's a really tough area to do extremely well and requires perseverance, hence the existence of libraries like this: http://carve-csg.com/about.
If anyone is interested, there is currently a solution to this problem in the CGAL library. It allows clipping one triangular mesh using another mesh as the bounding volume. A usage example can be found here.

Three.js ParticleSystem flickering with large data

Back story: I'm creating a Three.js based 3D graphing library. Similar to sigma.js, but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js, with a single particle representing each node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling the points so that they fit within a sphere of radius 5 centred at (0, 0, 0). Since the points are then guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) involves not touching the coordinates of the input data, but instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values for my dataset, the problem went away. Since my dataset is user-dependent, I need to come up with an algorithm to generate these values dynamically.
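One way to derive those planes dynamically (a rough, library-agnostic sketch of the idea rather than Three.js code; near_far_from_points is a hypothetical helper) is to compute them from the camera's distance to the data's bounding sphere:

import numpy as np

def near_far_from_points(points, camera_pos, margin=1.05):
    # Approximate bounding sphere: centroid plus maximum radius.
    center = points.mean(axis=0)
    radius = np.linalg.norm(points - center, axis=1).max()

    dist = np.linalg.norm(camera_pos - center)
    # Keep near as large as possible and far as small as possible to preserve
    # depth-buffer precision and avoid z-fighting.
    near = max((dist - radius) / margin, 1e-3)
    far = (dist + radius) * margin
    return near, far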
I noticed that in Solution 2 the flickering only occurs when the camera is moving. One possible reason is that, when the camera position is changing rapidly, different transforms get applied to different particles. So if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means you should apply the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.
