Three.js ParticleSystem flickering with large data - graphics

Back story: I'm creating a Three.js-based 3D graphing library, similar to sigma.js but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js, with a single particle representing each node in the graph.
The first task I had to deal with was this: given an arbitrary set of points (each with X, Y, Z coordinates), determine the optimal camera position (X, Y, Z) from which all the points in the graph are visible.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling the points so they fit inside a sphere of radius 5 centered at (0, 0, 0). Since the points are then guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it requires either changing the point coordinates the user specified or duplicating all the points, neither of which is great.
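For illustration, here is a minimal sketch of the Solution 1 normalization using the current three.js API (the function name and the copy-on-normalize behaviour are my own, not graphosaurus's actual code):

import * as THREE from 'three';

// Sketch: fit arbitrary user points into a radius-5 sphere centred at the origin.
// Returns copies, which is exactly the duplication cost mentioned above.
function normalizePoints(points: THREE.Vector3[]): THREE.Vector3[] {
  const sphere = new THREE.Sphere().setFromPoints(points);  // bounding sphere of the input
  const scale = 5 / sphere.radius;                          // map its radius onto 5 units
  return points.map(p => p.clone().sub(sphere.center).multiplyScalar(scale));
}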
My new solution (which we'll call Solution 2) involves not touching the coordinates of the input data and instead positioning the camera to match the data. I encountered a problem with this solution: for some reason, with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!

It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing this range to fit my dataset, the problem went away. Since my dataset is user dependent, I need to come up with an algorithm to generate these values dynamically.
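For what it's worth, a rough sketch of one way to derive these values dynamically, by fitting the near/far range to the data's bounding sphere (the function name and the 0.9/1.1 padding factors are my own choices, not graphosaurus code):

import * as THREE from 'three';

// Sketch: spend the depth-buffer precision only where the particles actually are.
function fitNearFar(camera: THREE.PerspectiveCamera, points: THREE.Vector3[]): void {
  const sphere = new THREE.Sphere().setFromPoints(points);
  const distance = camera.position.distanceTo(sphere.center);

  // Nearest/farthest a particle can be along the view direction, with a little
  // padding; clamp near so it stays positive.
  camera.near = Math.max((distance - sphere.radius) * 0.9, 0.01);
  camera.far = (distance + sphere.radius) * 1.1;
  camera.updateProjectionMatrix();
}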

I noticed that in Solution 2 the flickering only occurs while the camera is moving. One possible reason is that, when the camera position is changing rapidly, different transforms get applied to different particles. So if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means you should apply the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.
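If that is indeed the cause, a minimal sketch of the idea: the input handlers only buffer a delta, and the camera is moved exactly once per rendered frame (all names below are placeholders, not graphosaurus's actual event handling):

import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, innerWidth / innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer();

// User interaction only accumulates a pending movement; it never touches the camera.
const pendingDelta = new THREE.Vector3();
function onUserInput(delta: THREE.Vector3): void {  // call this from mouse/keyboard handlers
  pendingDelta.add(delta);
}

// The render loop applies the buffered movement once, so every particle and edge
// is drawn with the same camera transform for that frame.
function animate(): void {
  requestAnimationFrame(animate);
  camera.position.add(pendingDelta);
  pendingDelta.set(0, 0, 0);
  renderer.render(scene, camera);
}
animate();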

Related

Triangulate camera position and orientation in regards to known objects

I made an object tracker that calculates the position of an object recorded in a live camera feed using stereoscopic cameras. The math was simple once I knew the camera distance and orientation. However, now I would like to be able to quickly extract all these parameters, so that when I change my setup or cameras I can quickly calibrate the system again.
To calculate the object position I made some simplifications/assumptions, which made the math easier: the cameras are in the same YZ plane, so there is only a distance in x between them. Their tilt is also just in the XY plane.
To reverse the triangulation, I thought a test pattern of 4 points (a square) whose distances to each other I know would suffice. Ideally I would like to get the cameras' positions (distances to the test pattern and to each other), their rotation about X (and maybe Y and Z if applicable/possible), as well as their view angle (to translate pixel positions to real-world distances; that should be a camera constant, but in case I change cameras it is quite tedious to determine accurately).
I started with the same trigonometric calculations, but I always end up missing parameters. I am wondering if there is an existing solution or a solid approach. If I need to add parameters (like distances; they are easy enough to measure), that's no problem, though my calculations didn't give me any simple equations with that possibility.
I also read about homography in OpenCV, but it seems to apply to 2D space only, or does it?
Any help is appreciated!

insertObservation method in COccupancyGridMap2D is producing warped results

The function insertObservation in COccupancyGridMap2D takes two parameters, a CPose3D and a CObservation2DRangeScan. Even though both of these values are accurate, with no noise, the grid produces warped boundaries. The only thing I can think of is that the scan.aperture setting might be producing this effect, but it is correct with a range of 2*PI, and other visual aids for point clouds show no warping at all. Below is an illustration of this.
On the right, the occupancy grid is warped compared to the ground-truth square boundary. The points on the left look fine and use the same aperture and loadFromVectors settings.
Here is example code to verify the warp effect yourself:
COccupancyGridMap2D gridmap;
gridmap.setSize(-4.0,4.0,-4.0,4.0,0.025f);   // 8 m x 8 m map at 2.5 cm resolution
#define SCANS_SIZE 100
char SCAN_VALID[] = {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1};
CPose3D transform = CPose3D(0,0,0,0,0,0);    // sensor at the origin, no rotation
CObservation2DRangeScan read_scan;
read_scan.aperture = 2*M_PIf;                // full 360-degree scan
read_scan.rightToLeft = true;
vector<float> landmark = {2.9f,2.906f,2.924f,2.953f,2.996f,3.052f,3.124f,3.212f,3.319f,3.447f,3.601f,3.786f,4.007f,3.948f,3.736f,3.560f,3.413f,3.290f,3.188f,3.104f,3.037f,2.984f,2.945f,2.918f,2.903f,2.900f,2.909f,2.930f,2.963f,3.009f,3.069f,3.144f,3.237f,3.349f,3.483f,3.644f,3.837f,4.069f,3.891f,3.689f,3.521f,3.380f,3.263f,3.166f,3.086f,3.022f,2.973f,2.937f,2.913f,2.901f,2.901f,2.913f,2.937f,2.973f,3.022f,3.086f,3.166f,3.263f,3.380f,3.521f,3.689f,3.891f,4.069f,3.837f,3.644f,3.483f,3.349f,3.237f,3.144f,3.069f,3.009f,2.963f,2.930f,2.909f,2.900f,2.903f,2.918f,2.945f,2.984f,3.037f,3.104f,3.188f,3.290f,3.413f,3.560f,3.736f,3.948f,4.007f,3.786f,3.601f,3.447f,3.319f,3.212f,3.124f,3.052f,2.996f,2.953f,2.924f,2.906f,2.900f};
float *SCAN_RANGES = &landmark[0];
read_scan.loadFromVectors(SCANS_SIZE, SCAN_RANGES,SCAN_VALID);
gridmap.insertObservation(&read_scan,&transform);
CSimplePointsMap m3;
m3.insertObservation(&read_scan);
std::vector<float> map_xs, map_ys, map_zs;   // output coordinates of the point map
m3.getAllPoints(map_xs,map_ys,map_zs);
Here is an image of the CSimplePointsMap plot (red points) vs. the occupancy grid.
The angles being cast from the occupancy grid look correct, with a consistent interval, but they are still offset from the simple points map; the lengths look OK, and it seems each ray could be rotated to match one of the red points. Possibly this is a mapping issue: since we turn the angles into discrete horizontal and vertical steps, that could cause the misalignment. I've tried increasing the resolution, but it does not help, which I suppose makes sense, since scaling a horizontal/vertical ratio still results in the same ratio and mismatch. I might be missing something, though. What else could be causing this distortion? Is this expected and the best we can do? Thank you for any help.
It seems to me that the problem lies in the assumption about the angles of each scan "ray".
Take a look at the class mrpt::obs::CSinCosLookUpTableFor2DScans, generate one such sin/cos LUT for your specific scan object, and double check if the sin/cos values coincide with yours, as used to generate the scan.
By the way, COccupancyGridMap2D has a method to simulate a 2D scan from a gridmap image; give it a try, and if that one generates warped results, please file a bug report (!) ;-)
Cheers.
I just realized what was going on: CSimplePointsMap and COccupancyGridMap2D use two slightly different conventions for point angles. CSimplePointsMap expects an overlap between the first and last point, while COccupancyGridMap2D does not. The simple fix is to read in one less scan for COccupancyGridMap2D, and then everything lines up. This applies if your angles are defined as below, which is fine for CSimplePointsMap.
for (int i = 0; i < Raysize; i++)
{
    float angle = -angle_range / 2 + i * angle_range / (Raysize - 1);
    // note: the first and last rays coincide when angle_range is 2*PI
}
Here is the fix for the COccupancyGridMap2D insertObservation call, using SCANS_SIZE-1 instead; CSimplePointsMap can still use SCANS_SIZE.
read_scan.loadFromVectors(SCANS_SIZE-1, SCAN_RANGES,SCAN_VALID);
gridmap.insertObservation(&read_scan,&transform);

"Inverting" a concave polygon

I'm building a 2D game where the player can only see things that are not blocked by other objects. Consider this example of how it looks now:
I've implemented a raytracing algorithm for this and it seems to work just fine (I've pulled in the boundaries for the demo to make all edges visible).
As you can see, the lighter area is built from a bunch of triangles, each of which has a common point at the player's position. Each pair of neighbours shares two common points.
However, I want to calculate the bounds of the external part of the polygon so I can fill it with black triangles "hiding" what the player cannot see.
One way to do it is to "mask" a black rectangle with the current polygon, but I'm afraid that's very inefficient.
Any ideas about an effective algorithm to achieve this?
Thanks!
A non-analytical, rough solution (sketched in code after the caveats below):
Cast rays with gradually increasing polar angle
Record when a ray first hits an object (and the point where it hits)
Keep going until it no longer hits the same object (and record where it previously hit)
Using the two recorded points, construct a trapezoid that extends to infinity (or wherever)
Caveats:
Doesn't work too well with concavities - you need to include all points in between as well. May need Delaunay triangulation, etc... messy!
May need extra states to account for objects tucked in behind each other.
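A rough sketch of that sweep, assuming a hypothetical castRay(angle) helper that returns the first object hit and the hit point (the resulting "shadow" quads are what you would fill with black):

// castRay() is a stand-in for whatever ray-vs-scene test the game already uses.
interface Point { x: number; y: number }
interface Hit { objectId: number; point: Point }
declare function castRay(angle: number): Hit | null;

// Sweep the polar angle, group consecutive hits on the same object, and emit a
// shadow quad from the first/last hit points pushed out to `reach`.
function buildShadowQuads(player: Point, steps: number, reach: number): Point[][] {
  const quads: Point[][] = [];
  let first: Hit | null = null;
  let last: Hit | null = null;

  const pushQuad = () => {
    if (!first || !last) return;
    const extend = (p: Point): Point => {
      const dx = p.x - player.x, dy = p.y - player.y;
      const len = Math.hypot(dx, dy) || 1;
      return { x: p.x + (dx / len) * reach, y: p.y + (dy / len) * reach };
    };
    quads.push([first.point, last.point, extend(last.point), extend(first.point)]);
  };

  for (let i = 0; i <= steps; i++) {
    const hit = castRay((i / steps) * 2 * Math.PI);
    if (hit && first && hit.objectId === first.objectId) {
      last = hit;            // still sweeping across the same object
    } else {
      pushQuad();            // left the object (or hit nothing): close the current quad
      first = hit;
      last = hit;
    }
  }
  pushQuad();                // close the final run, if any
  return quads;
}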

Preventing pixelshader overdraw for a single ERG

Background
Using gluTess to build a triangle list in Direct3D9 from a GDI+ DrawString(..) path:
A pixel shader (v3.0) is then used to fill in the shape. When painting with opaque values, everything looks fine:
The problem
At certain font sizes, if the color has an alpha component (i.e. ARGB #55FFFFFF), we begin to see these nasty tessellation artifacts where triangles may overlap ever so slightly:
At larger font sizes the problem is sometimes not present:
Using Intel's excellent GPA Frame Analyzer Pixel History tool, we can see in areas where the artifacts occur, the pixel has been "touched" 3 times from the single Erg.
I'm trying to figure out how I can stop my pixel shader from touching the same pixel more than once.
Other solutions relating to overdraw prevention seem to be all about z-buffer strategies; however, this problem has more to do with the painting of a single 2D triangle list within a single pixel shader pass.
I'm at a bit of a loss trying to come up with a solution on this one. I was hoping that HLSL might have some sort of "touch each pixel only once" flag, but I've been unable to find anything like that. The closest I've found was to set the BLENDOP to MAX instead of ADD. But the output is not correct when blending over other colors in the scene.
I also have SRCBLEND = ONE, DSTBLEND = INVSRCALPHA, the only combination of flags which produces correct output (albeit with overdraw artifacts).
I have played with SEPARATEALPHABLENDENABLE in the GPA Frame Analyzer, which sounded like almost exactly what I need here -- set blending to MAX but only on the "alpha" channel -- however, from what I can determine, that setting (and the corresponding BLENDOPALPHA) affects nothing at all.
One final thing I thought of was to bake the text as opaque onto a texture, and then repaint that texture into the scene with the appropriate alpha value applied. However, this doesn't actually work in this project because I also support gradient brushes, where stop values may contain alpha, meaning either the artifacts would still be seen, or the final output would be just plain wrong if we stripped the alpha away from the stop values prior to baking to a texture. Also, the whole endeavor would be hideously expensive.
Any hints or pointers would be appreciated. Thanks for reading.
The problem you're seeing shouldn't happen.
If two of your triangles are overlapping, it's because you've placed the vertices in such a way that, when the adjacent triangles are drawn, they overlap. What's probably happening is that these two adjacent triangles share two vertices, but each triangle has its own copy of each vertex, calculated to be in a very, very slightly different position.
The solution isn't to try to make the pixel shader touch each pixel only once; it's to use an index buffer (if you aren't already) and have the shared vertices between each triangle actually share the same vertex, rather than using one that's ever-so-slightly not in the same place as the one used by the adjacent triangle.
If you aren't in control of the tessellation algorithm being used, you may have to run a pass over the vertex buffer after it's been generated to detect and merge vertices that are within some very small tolerance of one another. Even without an index buffer, a naive solution would be this (see the sketch after these steps):
For each vertex in the vertex buffer, compare its position to every other vertex in the rest of the vertex buffer.
If two vertices are within some small tolerance of one another, replace the second vertex's position with the position of the one you are comparing it against.
This should have the effect of pairing up the positions of two vertices if they are close enough that you deem them to be the same.
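As a rough sketch of that naive merge pass (written generically here; the tolerance and vertex layout are assumptions, and in practice you would want a spatial hash rather than the O(n²) loop):

// Weld vertices whose positions differ by less than `tolerance`, so adjacent
// triangles end up referencing bit-identical positions.
type Vec3 = { x: number; y: number; z: number };

function weldVertices(vertices: Vec3[], tolerance = 1e-5): void {
  const tol2 = tolerance * tolerance;
  for (let i = 0; i < vertices.length; i++) {
    for (let j = i + 1; j < vertices.length; j++) {
      const a = vertices[i], b = vertices[j];
      const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
      if (dx * dx + dy * dy + dz * dz < tol2) {
        vertices[j] = { ...vertices[i] };   // snap the later vertex onto the earlier one
      }
    }
  }
}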
You now shouldn't have any problem with overlapping triangles. In everyday rendering, two triangles share edges with each other all the time, and you won't ever get the effect where they appear to ever-so-slightly overlap. The hardware guarantees that a sample point is either on one side of the line or the other, but never both at the same time, no matter how close the point is to the line (even if it's mathematically on the line, it still falls on one side or the other).

How to construct ground surface of infinite size in a 3D CAD application?

I am trying to create an application similar in UI to SketchUp. As a first step, I need to display the ground surface stretching out in all directions. What is the best way to do this?
Options:
Create a sufficiently large regular polygon stretching out in all directions from the origin. Here there is a possibility of the user hitting the edges and falling off the surface of the earth.
Model the surface of the earth as a sphere/spheroid. Here I will be limiting my vertex coordinates to very large values prone to round-off errors (the radius of the earth is 6,371,000,000 millimeters).
Same as 1 but dynamically extend the ends of the earth as the user gets close to them.
What is the usual practice?
I guess you would do neither of these, but instead use a virtual ground.
So you just find out what portion of the ground is visible in the viewport and then create a plane large enough to fill that, with some reasonable maximum that simulates the end of the line of sight, a.k.a. the horizon as we know it.
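A rough sketch of that idea in three.js terms (the helper name and the clamp value are arbitrary; the same reasoning applies to any renderer):

import * as THREE from 'three';

// Size a ground plane to cover whatever part of y = 0 the camera can currently
// see, clamped to a "horizon" limit so the plane never has to be truly infinite.
function visibleGroundSize(camera: THREE.PerspectiveCamera, maxExtent = 10000): number {
  const ground = new THREE.Plane(new THREE.Vector3(0, 1, 0), 0);   // the y = 0 plane
  let extent = 0;

  // Shoot rays through the four corners of the viewport and see where they hit the ground.
  for (const [x, y] of [[-1, -1], [1, -1], [-1, 1], [1, 1]]) {
    const dir = new THREE.Vector3(x, y, 0.5).unproject(camera).sub(camera.position).normalize();
    const hit = new THREE.Ray(camera.position.clone(), dir).intersectPlane(ground, new THREE.Vector3());
    extent = Math.max(extent, hit ? hit.distanceTo(camera.position) : maxExtent);
  }
  return Math.min(extent, maxExtent);
}

// Rebuild or rescale a plane geometry of this size whenever the camera moves.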
