We need to display 5 million dots (or very simple graphics objects) on a screen at the same time, and we want to interact with each of the dots (e.g., change their colors or drag/drop them).
To achieve this, we usually run a for-loop through all 5 million items (worst case O(N)) to access and change the state of a dot based on the mouse coordinates (x, y). Due to the huge number of objects, this approach causes a lot of overhead (we have to run the five-million-iteration loop whenever a user selects a dot). I have already tested this approach, and it was almost impossible to make an interactive tool with it. Is there any way to access the dots rapidly and efficiently without running the million-iteration for-loop and causing this performance problem?
You really haven't given many details. These questions quickly come to mind:
Are dots the same size?
Are dots uniformly distributed on the canvas?
If one dot is “selected”, is only that one dot recolored or moved?
Why are you violating good data visualization rules by overwhelming the user? :)
With this lack of specificity in mind...
...Divide and conquer:
Divide your dot array into multiple parts.
Divide your dots onto multiple overlaying canvases.
Divide your dot array into multiple parts
This will allow you to examine far fewer array elements when searching for the one you need.
Create a container object with 1980 elements representing the 1980 “x” coordinates on the screen.
var container = {};
for (var x = 1; x <= 1980; x++) {
    container[x] = [];
}
Each container element is an array of dot objects with their dot centers on that x-coordinate.
Every dot object has enough info to locate and redraw itself.
A dot at x-coordinate == 125 might be defined like this:
{x:125,y:100,r:2,color:"red",canvas:1};
When you want to add a dot, push a dot object to the appropriate "x" element of the container object.
// add a dot with x screen coordinate == 952
container[952].push({x:952,y:100,r:2,color:"red",canvas:1});
Dots can be drawn based on the dot objects:
var PI2 = Math.PI * 2;

function drawDot(dot, context) {
    context.beginPath();
    context.fillStyle = dot.color;
    context.arc(dot.x, dot.y, dot.r, 0, PI2, false);
    context.closePath();
    context.fill();
}
When the user selects a dot, you can find it quickly by pulling the few container elements around the X where the user clicked:
function getDotsNearX(x, radius) {
    // pull arrays from "x" plus/minus "radius"
    var dotArrays = [];
    for (var i = x - radius; i <= x + radius; i++) {
        if (container[i]) {  // skip x values outside the screen
            dotArrays.push(container[i]);
        }
    }
    return dotArrays;
}
Now you can process the dots in these highly targeted arrays instead of all 5 million array elements.
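For example, a click handler built on getDotsNearX might look roughly like this (just a sketch: the canvas variable, the 3-pixel search radius, and the simple circle hit-test are my own assumptions):

// hypothetical click handler: find the dot under the mouse using only nearby buckets
canvas.addEventListener("click", function (e) {
    var rect = canvas.getBoundingClientRect();
    var mouseX = Math.round(e.clientX - rect.left);
    var mouseY = Math.round(e.clientY - rect.top);
    var nearArrays = getDotsNearX(mouseX, 3);   // only a handful of "x" buckets
    var hit = null;
    nearArrays.forEach(function (bucket) {
        bucket.forEach(function (dot) {
            var dx = dot.x - mouseX, dy = dot.y - mouseY;
            if (dx * dx + dy * dy <= dot.r * dot.r) { hit = dot; }
        });
    });
    if (hit) { hit.color = "blue"; }   // e.g. recolor the selected dot, then redraw it
});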
When the user moves a dot to a new position, just pull the dot object out of its current container element and push it into the appropriate new "x" container element.
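A minimal sketch of that move (the moveDot helper name and the splice-based removal are my own):

// hypothetical helper: move a dot to a new position and re-bucket it
function moveDot(dot, newX, newY) {
    var oldBucket = container[dot.x];
    oldBucket.splice(oldBucket.indexOf(dot), 1); // pull it out of its old "x" bucket
    dot.x = newX;
    dot.y = newY;
    container[newX].push(dot);                   // push it into the new "x" bucket
}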
Divide your dots onto multiple overlaying canvases
To improve drawing performance, you will want to distribute your dots across multiple canvases overlaid on each other.
The dot element includes a canvas property to identify on which canvas this dot will be drawn.
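One possible setup, assuming three stacked canvases and a 1980x1080 drawing area (both of these are assumptions, not from the question):

// hypothetical setup: stacked canvases sharing the same position, one 2D context each
var contexts = [];
for (var i = 0; i < 3; i++) {
    var c = document.createElement("canvas");
    c.width = 1980;
    c.height = 1080;                 // assumed screen height
    c.style.position = "absolute";   // stack the canvases on top of each other
    c.style.left = "0px";
    c.style.top = "0px";
    document.body.appendChild(c);
    contexts.push(c.getContext("2d"));
}

// draw each dot on the canvas named by its "canvas" property (1-based)
function redrawDot(dot) {
    drawDot(dot, contexts[dot.canvas - 1]);
}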
Have you already taken a look at the KineticJS framework? There is a very impressive stress test with exactly the same drag-and-drop functionality you're looking for. If you use KineticJS, you can access every single dot with the following event listener and, of course, change its color, size, etc.:
stage.on('mousedown', function(evt) {
    var circle = evt.targetNode;
});
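For example, something along these lines should recolor the clicked dot (an untested sketch; the stage/layer setup is omitted, and setFill/getLayer/draw are the KineticJS methods I believe apply here):

stage.on('mousedown', function(evt) {
    var circle = evt.targetNode;
    circle.setFill('blue');       // change the clicked dot's color
    circle.getLayer().draw();     // redraw only the layer that dot lives on
});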
I'm working on a system to automatically take 2D profiles of components and assemble them into 3D shapes.
Imagine given these pieces:
You want to make this shape:
I'm highlighting one of the components to show how they fit together.
I'm open to any suggestions on how to go about doing this but the current approach I'm attempting first finds joints that may fit together just by looking at the 2D profile.
How could I go about identifying the "tabs" from the polyline profile?
The same technique should also work on assemblies like this:
See How to compare two shapes?
So you are basically trying to find the "same" sequences in polylines encoded in polar increment format (turn angle, line length), and then just checking whether the relative positions of the matched sequences are the same in both shapes...
Beware that the locks might have some gap between the joined shapes to ensure assembly is possible... in some cases the gap might even be negative (an overlap), depending on material and function, so you need to compare the sequences with some margin...
Also, I would divide each shape into its sides to speed up the process, as a lock most likely does not cross sides...
You may define the "code" for a tab. For example:
3,C,5,C,3 would mean: three units of length, then turn 90º counter-clockwise, then five units of length, then turn 90º counter-clockwise, then three units of length.
Of course, more identifiers than C can be used, for different angles and so on.
A tab in another piece that fits the tab of the first piece has the same (or a very similar) 3,C,5,C,3 code.
So finding the same code in both pieces may indicate a fit. Check whether adjacent codes in both pieces also fit, and you're done.
Notice that pieces can be rotated. This doesn't change the code, but it may change the order of adjacent codes.
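A rough sketch of building such a code from a closed polyline (the unit size, the C/W letters, and the y-up orientation are my own assumptions):

// hypothetical: encode a closed polyline as alternating lengths and turn letters
// points: array of {x, y} vertices in order; unit: the grid unit length
function encodePolyline(points, unit) {
    var code = [];
    var n = points.length;
    for (var i = 0; i < n; i++) {
        var a = points[i], b = points[(i + 1) % n], c = points[(i + 2) % n];
        var len = Math.hypot(b.x - a.x, b.y - a.y);
        code.push(Math.round(len / unit));            // segment length in units
        var turn = Math.atan2(c.y - b.y, c.x - b.x) -
                   Math.atan2(b.y - a.y, b.x - a.x);
        while (turn <= -Math.PI) turn += 2 * Math.PI; // normalize to (-PI, PI]
        while (turn > Math.PI) turn -= 2 * Math.PI;
        code.push(turn > 0 ? "C" : "W");              // C = counter-clockwise, W = clockwise (y-up assumed)
    }
    return code.join(",");                            // e.g. "3,C,5,C,3,..."
}

Matching tabs then reduces to searching for the same (or nearly the same) subsequence in another piece's code, with the tolerance margin mentioned above.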
This is the scenario: I have an icosahedron, therefore I have 12 vertices and 20 faces.
From the point of view of each vertex, it is the center of an "extruded" pentagon whose triangles are faces of the icosahedron.
Let's say we want to name each of the vertices of each of these triangles from 1 to 3, always in a counterclockwise fashion, imagining that each vertex is not shared among different triangles.
(I can't upload the image here for some reason, sorry.)
https://ibb.co/FmYfRG4
Is there a way to arrange the naming of the vertices inside each triangle so that every pentagon yields the same pattern of numbers along the five triangles?
As you can see, by arranging the vertex names that way the first pentagon would read 1,1,1,1,1, but the pentagons around it couldn't have the same pattern.
EDIT: Following Andrew Morton's comment, I tried to write out a possible sequence.
I came up with two sequences of triangles: 1,2,3,1,3 for most pentagons, and 2,2,2,2,2 for the two caps.
I wonder if there is some additional optimization so that I only have one sequence instead of two, or maybe a mathematical demonstration that this is impossible.
I am running into problems when computing the relative risk estimate (relrisk.ppp) for two point patterns: one with four marks in a rectangular region and the other with two marks in a circular region.
For the first pattern, with four marks, I am able to get the relative risk, and the resulting object is a large imlist with 4 elements corresponding to each mark.
However, for the second pattern, it gives a list of 10 elements, of which the first matrix v is empty with NA entries. I am racking my brain over what could possibly be wrong when the created point pattern objects seem to be identical. Any help will be appreciated. Thanks.
For your first dataset, the result is a list of image objects (a list of four objects of class im). For your second dataset, the result of relrisk.ppp is a single image (object of class im). This is the default behaviour when there are only two possible types of points (two possible mark values). See help(relrisk.ppp).
In all cases, you should just be able to plot and print the resulting object. You don't need to examine the internal data of the image.
More explanation: when there are only two possible types of points, the default behaviour of relrisk.ppp is to treat them as case-control data, where the points belonging to the first type are treated as controls (e.g. non-infected people), and the points of the second type are treated as cases (e.g. infected people). The ratio of intensities (cases divided by controls) is estimated as an image.
If you don't want this to happen, set the argument casecontrol=FALSE and then relrisk.ppp will always return a list of images, with one image for each possible mark. Each image gives the spatially-varying probability of that type of point.
It's all explained in help(relrisk.ppp) or in the book.
I'm new to WebGL and for an assignment I'm trying to write a function which takes as argument an object, let's say "objectA". ObjectA will not be rendered but if it overlaps with another object in the scene, let’s say “objectB”, the part of objectB which is inside objectA will disappear. So the effect is that there is a hole in ObjectB without modifying its mesh structure.
I've managed to let it work on my own render engine, based on ray tracing, which gives the following effect:
Image: initial scene
Image: with objectA removed
In the first image, the green sphere is "objectA" and the blue cube is "objectB".
So now I'm trying to program it in WebGL, but I'm a bit stuck. Because WebGL is based on rasterization rather than ray tracing, it has to be calculated in another way. A possibility could be to modify the Z-buffer algorithm, where the fragments with a z-value lying inside objectA will be ignored.
The algorithm that I have in mind works as follows: normally, only the fragment with the smallest z-value is stored at a particular pixel, along with its colour and z-value. The first modification is that, at each pixel, a list of all fragments belonging to that pixel is maintained; no fragments are discarded. Secondly, for each fragment an extra parameter is stored identifying the object it belongs to. Next, the fragments are sorted in increasing order of their z-value.
Then, if the first fragment belongs to objectA, it will be ignored. If the next one belongs to objectB, it will be ignored as well. If the third one belongs to objectA and the fourth one to objectB, the fourth one will be chosen because it lies outside objectA.
So the first fragment belonging to objectB will be chosen with the constraint that the amount of previous fragments belonging to objectA is even. If it is uneven, the fragment will lie inside objectA and will be ignored.
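In plain JavaScript, the per-pixel selection rule you describe would look something like this (an illustration of the logic only, not a WebGL implementation; the fragment record layout is assumed):

// fragments: per-pixel list like [{z: 0.3, object: "A"}, {z: 0.5, object: "B"}, ...]
function pickVisibleFragment(fragments) {
    fragments.sort(function (a, b) { return a.z - b.z; });  // nearest first
    var aCount = 0;                                          // objectA fragments seen so far
    for (var i = 0; i < fragments.length; i++) {
        var f = fragments[i];
        if (f.object === "A") {
            aCount++;
        } else if (aCount % 2 === 0) {
            return f;   // first objectB fragment lying outside objectA
        }
    }
    return null;        // every objectB fragment lies inside objectA
}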
Is this somehow possible in WebGL? I've also tried to implement it via a stencil buffer, based on this blog:
WebGL : How do make part of an object transparent?
But that is written for OpenGL. I translated the code into WebGL instructions, but it didn't work at all, and I'm not sure whether it would work with a 3D object instead of a 2D triangle anyway.
Thanks a lot in advance!
Why wouldn't you write a raytracer inside the fragment shader (aka pixel shader)?
So you would need to render a fullscreen quad (two triangles) and then the fragment shader would be responsible for raytracing. There are plenty of resources to read/learn from.
These links might be useful:
Distance functions - by iq
How shadertoy works
Simple webgl raytracer
EDIT:
Raytracing with SDFs (signed distance functions, the basis of constructive solid geometry, CSG) is a good way to handle what you need, and it is how intersecting objects is generally achieved. Intersections, and boolean operations in general, on mesh geometry (i.e. geometry made of polygons) are not done during rendering; rather, special algorithms do all the processing ahead of rendering, so the resulting mesh actually exists in memory, its topology is actually calculated, and it is then just rendered.
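For what it's worth, the boolean subtraction in the SDF approach is just a max over the two distance fields. A CPU-side sketch of the idea (the sphere/box shapes and all names are placeholders; in a real WebGL renderer this logic would live in the fragment shader):

// signed distance to a sphere and to an axis-aligned box
function sdSphere(p, center, radius) {
    return Math.hypot(p.x - center.x, p.y - center.y, p.z - center.z) - radius;
}
function sdBox(p, center, half) {
    var dx = Math.abs(p.x - center.x) - half.x;
    var dy = Math.abs(p.y - center.y) - half.y;
    var dz = Math.abs(p.z - center.z) - half.z;
    return Math.hypot(Math.max(dx, 0), Math.max(dy, 0), Math.max(dz, 0)) +
           Math.min(Math.max(dx, dy, dz), 0);
}
// CSG subtraction: cube (objectB) minus sphere (objectA) = max(B, -A)
function sdCubeWithHole(p) {
    var cube = sdBox(p, {x: 0, y: 0, z: 0}, {x: 1, y: 1, z: 1});
    var hole = sdSphere(p, {x: 0, y: 0, z: 1}, 0.7);
    return Math.max(cube, -hole);
}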
Depending on the specific scenario that you have, you might be able to achieve the effect under some requirements and restrictions.
There are a few important things to take into account: depth peeling (i.e. storing depth values of multiple fragments per single pixel), triangle winding (CW or CCW), and polygon face orientation (front-facing or back-facing).
Say, for example, that both of your objects are convex; then rendering the back-facing polygons of objectA, then of objectB, then the front-facing polygons of A, then of B might achieve the desired effect (I'm not including full calculations for all the overlap cases that can exist).
Under some other sets of restrictions you might be able to achieve the effect.
In the specific example in your question, the first image shows the front-facing faces of the cube, and in the second image you can see the back face of the cube. That already implies you have at least two depth values per pixel stored somehow.
There is also a distinction between intersecting in screen space, intersecting volumes, and intersecting surfaces. Your example works with surfaces and is the hardest case (there are two sub-cases: the one you've shown, where mesh A's pixels that are inside mesh B are simply discarded, i.e. you drill a hole in its surface; and the one where you do a boolean operation that never puts a hole in the surface, only in the volume), and it is usually done with an algorithm that computes an output mesh. SDFs are great for volumes. Screen-space intersection is achieved simply by using the depth test to discard some fragments.
Again, too many scenarios and depends on what you're trying to achieve and what are the constraints that you're working with.
I have an image sequence (video). I would like to count the number of objects in the image sequence. But the main objective is to count each object only once, not in each and every frame, since an object may persist for several frames. My idea is to count the objects as they exit the screen, because there are fewer occlusions there. I am thinking of doing this by scanning the bottom part of the image for non-zero pixels.
I have a CV_FILLED binary image (from the rectangle function) where I want to do the scanning, then create an instance of an object if one is found. But this scanning will not examine each and every pixel along the horizontal line, just certain sections.
For example, we could do it over ranges, say certain columns, and then skip by a margin.
A sample binary image I have is attached. This is an image obtained from the feed. I do not want to count only the objects in this image, but also those that are still coming.
A full picture of detected objects is attached here. Your guidance or constructive criticism is welcome.
* I do not want to use CVBlob
If you don't want to use cvBlobLib, you could use the contour detection that is part of OpenCV.
There is a tutorial on the website.
The doc for the method is here. Your image seems pretty simple, but if you get blobs with occlusions and such, you will want to look at the CV_RETR_EXTERNAL constant to get only the outer contours.
That is what I usually use, even though it needs a bit more work to use the results of the method.
Hope this helps.
If the squares do not overlap at the bottom, I suggest the following:
Scan the very bottom row of the image and identify the connected runs of white pixels. Each white segment will correspond to one square. Save the center of each white segment and its length. In the next frame, do the same and associate the corresponding segments with the previous ones (same length and centers very close). When you can no longer find a corresponding segment, the square has moved out of the image, which means you can increase your square counter by one. Note that segments at the right and left ends of the row will have decreasing length with every frame.
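A rough sketch of the bottom-row scan (plain JavaScript over a flat binary pixel array; the array layout and the non-zero-means-white convention are assumptions):

// hypothetical: find white runs on the bottom row of a binary image
// pixels: flat array, one value per pixel, row-major; width/height in pixels
function bottomRowSegments(pixels, width, height) {
    var rowStart = (height - 1) * width;
    var segments = [];
    var runStart = -1;
    for (var x = 0; x <= width; x++) {
        var white = x < width && pixels[rowStart + x] > 0;
        if (white && runStart < 0) {
            runStart = x;                          // a run begins
        } else if (!white && runStart >= 0) {
            segments.push({                        // a run ends: record center and length
                center: (runStart + x - 1) / 2,
                length: x - runStart
            });
            runStart = -1;
        }
    }
    return segments;
}

Segments from consecutive frames can then be matched by center and length, and the counter incremented whenever a segment no longer has a match.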
Thanks guys. I managed to solve this already. I used small ROIs along the paths of the squares, and found countNonZero() within each ROI.
I kept checking with boolean variables to see whether the ROI still had white pixels; if not, I incremented the counter. It worked well, and I was able to count.
Thanks for your input...