I have an SVG path with 100 points (each point as "Lx y"). The path moves on mouse drag (no rotation or structure change, just translation and scale).
Does anybody know whether there is any performance difference between replacing the entire path string on each step and changing just the transform matrix?
It's just more convenient for the underlying code to change the entire path rather than the transform matrix, so I wonder if I should be worried.
The code is implemented with Raphaeljs, but I don't think that matters.
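Here is a minimal sketch of the two approaches I'm choosing between (the container id, the `points` array, and the handler names are just illustrative):

    // Sketch of both approaches in Raphael 2.x.
    var paper = Raphael("canvas", 400, 400);        // "canvas" is an assumed container id
    var el = paper.path(buildPath(points, 0, 0));   // the 100-point path

    function buildPath(points, dx, dy) {
      var d = "M" + (points[0].x + dx) + " " + (points[0].y + dy);
      for (var i = 1; i < points.length; i++) {
        d += "L" + (points[i].x + dx) + " " + (points[i].y + dy);
      }
      return d;
    }

    // Approach A: rebuild the entire path string on every drag step.
    // Raphael has to re-parse the string and the browser re-renders the geometry.
    function onMoveA(dx, dy) {
      el.attr({ path: buildPath(points, dx, dy) });
    }

    // Approach B: leave the path alone and set a transform instead.
    function onMoveB(dx, dy) {
      el.transform("t" + dx + "," + dy);            // "t" = translate; append "s2" to scale
    }

My understanding is that Approach B gives the renderer less work per step, since only a matrix changes rather than the whole geometry, but I don't know whether that matters at 100 points.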
Thanks in advance.
I am trying to build my own 3D engine from a 2D one. So far everything works fine, but it's very inefficient because the wireframe model has lines between every pair of points on the shape. I've been doing some research but haven't been able to find anything regarding what dictates where polygons go for the most optimal rendering.
Here is what a cube looks like in my program: [image omitted: a wireframe cube with a line drawn between every pair of vertices, so the faces are full of diagonals]
Is there some mathematical way to remove all the extra geometry?
Any advice really helps, thanks.
OK, so after longer than I'd like to admit, I figured out that you don't need to order faces by their z coordinate. Instead, take the surface normal of each face and render the face only if the normal is above a threshold value (most of the time 0); this is back-face culling. (Also, you'll want to use premade triangles from object files instead of assigning them to the faces yourself.)
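For illustration, a minimal sketch of that test in plain JavaScript, assuming each triangle is three {x, y} vertices that have already been projected to screen space:

    // Back-face culling: keep a triangle only if its normal faces the viewer.
    function isFrontFacing(a, b, c) {
      // Two edge vectors of the triangle.
      var ux = b.x - a.x, uy = b.y - a.y;
      var vx = c.x - a.x, vy = c.y - a.y;
      // z component of the cross product (u x v): the surface normal's
      // component along the view axis. Its sign encodes the winding order.
      var nz = ux * vy - uy * vx;
      return nz > 0;  // threshold 0, as in the answer above
    }

    // `triangles` is an assumed array of [a, b, c] vertex triples.
    var visibleFaces = triangles.filter(function (t) {
      return isFrontFacing(t[0], t[1], t[2]);
    });

Whether the test is nz > 0 or nz < 0 depends on your winding order and coordinate handedness, so flip the comparison if everything disappears.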
I would like to calculate the minimum size of a rectangle that contains all elements of a DXF file, but apparently neither ezdxf nor dxfgrabber has a function to do that.
Is iterating through all entities and calculating the points the only way to do it? If the drawing used only lines and boxes that would be easy, but with splines, arcs, and circles the process becomes tedious.
I know that the context of my answer does not relate to the libraries you refer to. But if you look here, it mentions:
Extmin and Extmax - Returns the extents of Model space.
Pextmin and Pextmax - Returns the extents of the current Paper space layout.
Are you able to access these variables using your libraries? If not, then you most likely have to do it the manual way.
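If those variables are reachable, very little parsing is needed: a DXF file is plain text made of alternating group-code/value line pairs, and the extents follow the 9/$EXTMIN and 9/$EXTMAX tags as 10/20/30 (x/y/z) pairs. A rough sketch that reads them straight from the file (assuming a well-formed ASCII DXF, no library involved):

    // Sketch: pull $EXTMIN / $EXTMAX out of a DXF header (Node.js).
    var fs = require("fs");

    function readExtents(path) {
      var lines = fs.readFileSync(path, "utf8").split(/\r?\n/);
      var extents = {};
      for (var i = 0; i + 7 < lines.length; i += 2) {
        var code = lines[i].trim(), value = lines[i + 1].trim();
        if (code === "9" && (value === "$EXTMIN" || value === "$EXTMAX")) {
          // The next three code/value pairs are 10/x, 20/y, 30/z.
          extents[value] = {
            x: parseFloat(lines[i + 3]),
            y: parseFloat(lines[i + 5]),
            z: parseFloat(lines[i + 7])
          };
        }
      }
      return extents; // e.g. { "$EXTMIN": {x, y, z}, "$EXTMAX": {x, y, z} }
    }

One caveat: these header variables record whatever the last editor saved, so they can be stale or looser than the actual geometry; iterating the entities is the only exact way.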
Below is an SVG path (the points) which, when given a thickness, displays the grey shape. I have an SVG with the outline of the grey shape, and I would like to generate an approximation of the original path (I'm assuming the exact original is impossible to get back). Ideally it would work with shapes whose strokes cross themselves, like a lowercase 'e'.
The opposite of this: svg: generate 'outline path'
This is a bit beyond the scope of an SO answer, but there is plenty of information out there on the web. Converting a bitmap to vectors is called "vectorization". The class of algorithms that attempts to recover the "skeleton" of a shape is known as "thinning". Google those three terms.
Most of these algorithms are designed to work with bitmaps, but they should be a useful starting point for your situation.
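To make "thinning" concrete, here is a sketch of one classic thinning algorithm (Zhang-Suen) in plain JavaScript. It assumes you have already rasterized the outline shape into a binary grid with a zero border (img[y][x] is 1 inside the shape); the surviving pixels approximate the centerline, which you could then trace back into a path:

    // Zhang-Suen thinning on a binary grid.
    function thin(img) {
      var h = img.length, w = img[0].length;

      function neighbors(x, y) {
        // p2..p9, clockwise starting from the pixel above.
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]];
      }

      function pass(step) {
        var toClear = [];
        for (var y = 1; y < h - 1; y++) {
          for (var x = 1; x < w - 1; x++) {
            if (!img[y][x]) continue;
            var p = neighbors(x, y);
            var b = p.reduce(function (s, v) { return s + v; }, 0);
            var a = 0; // number of 0 -> 1 transitions around the ring
            for (var k = 0; k < 8; k++) {
              if (!p[k] && p[(k + 1) % 8]) a++;
            }
            var c1 = step === 1 ? p[0] * p[2] * p[4] : p[0] * p[2] * p[6];
            var c2 = step === 1 ? p[2] * p[4] * p[6] : p[0] * p[4] * p[6];
            if (b >= 2 && b <= 6 && a === 1 && c1 === 0 && c2 === 0) {
              toClear.push([x, y]);
            }
          }
        }
        toClear.forEach(function (q) { img[q[1]][q[0]] = 0; });
        return toClear.length > 0;
      }

      var changed = true;
      while (changed) {
        changed = pass(1);
        changed = pass(2) || changed;
      }
      return img; // the remaining 1-pixels form the skeleton
    }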
Back story: I'm creating a Three.js based 3D graphing library, similar to sigma.js but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js, with a single particle representing each node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (each with X, Y, Z coordinates), determine the optimal camera position (X, Y, Z) from which all the points in the graph are visible.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling the points so they fit inside a sphere of radius 5 around the point (0, 0, 0). Since the points are then guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) leaves the coordinates of the input data untouched and instead positions the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting": the depth buffer spanned such a wide range that it lacked the precision to decide which particle was in front. By narrowing these values around my dataset, the problem went away. Since my dataset is user-dependent, I need to determine an algorithm to generate these values dynamically.
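One way to derive them dynamically is from the data's bounding sphere, so the depth range is only as wide as the data itself. A sketch in Three.js (assuming `points` is an array of THREE.Vector3 and `camera` is a THREE.PerspectiveCamera already positioned to see the data):

    // Tighten near/far around the data's bounding sphere to give the
    // depth buffer as much precision as possible and avoid z-fighting.
    var sphere = new THREE.Box3()
        .setFromPoints(points)
        .getBoundingSphere(new THREE.Sphere());

    var dist = camera.position.distanceTo(sphere.center);
    camera.near = Math.max(0.1, dist - sphere.radius); // just in front of the data
    camera.far  = dist + sphere.radius;                // just behind it
    camera.updateProjectionMatrix();                   // required after changing near/far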
I noticed that in Solution 2 the flickering only occurs when the camera is moving. One possible reason is that, when the camera position changes rapidly, different transforms get applied to different particles: if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is indeed the cause. That means applying the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.
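A sketch of that separation, assuming the usual Three.js scene/camera/renderer setup (the handler name and the drag-to-world mapping are illustrative): input events only record the desired camera state, and the render loop applies it exactly once per frame, so every particle in a frame sees the same transform.

    // Input handlers write to a pending state; they never touch the camera.
    var pending = { x: 0, y: 0, z: 50 };

    function onDrag(dx, dy) {
      pending.x += dx * 0.1;   // illustrative drag-to-world mapping
      pending.y += dy * 0.1;
    }

    function animate() {
      requestAnimationFrame(animate);
      // The camera is updated in exactly one place, once per frame.
      camera.position.set(pending.x, pending.y, pending.z);
      renderer.render(scene, camera);
    }
    animate();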
I implemented a multi-series line chart like the one given here by M. Bostock and ran into a curious issue that I cannot explain. When I choose linear interpolation and set up my scales and axes, everything is correct and the values are well aligned.
But when I change my interpolation to basis, without any modification of my axes or scales, the values no longer line up correctly between the lines and the axes.
What is happening here? With the monotone setting I can achieve pretty much the same effect as the basis interpolation but without the syncing problem between lines and axes. Still, I would like to understand what is happening.
The basis interpolation implements a B-spline (basis spline), which people like to use as an interpolation function precisely because it smooths out extreme peaks. This is useful when you are modeling something you expect to vary smoothly but only have sharp, infrequently sampled data. A consequence of this is that the resulting line does not pass through all the data points, which changes the apparent position of extreme values.
In your case, the sharp peaks are the interesting features, the exceptions to the typical baseline value of 0. When you use a spline interpolation, you are smoothing over exactly these peaks.
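In D3 v3 terms (which the Bostock example uses), the difference is just the interpolate setting on the line generator; monotone still passes through every data point, while basis does not. A sketch, where `x` and `y` stand for the chart's existing scales:

    // Same data and scales, different interpolation modes.
    var line = d3.svg.line()
        .x(function(d) { return x(d.date); })
        .y(function(d) { return y(d.value); });

    line.interpolate("linear");   // straight segments through every point
    line.interpolate("monotone"); // smooth, still passes through every point
    line.interpolate("basis");    // B-spline: smooth, but misses the points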
Here is a fun demo to play with the different types of line interpolation:
http://bl.ocks.org/mbostock/4342190
You can drag the data points around so they resemble a sharp peak like yours, and even click to add new points. Then switch to the basis interpolation and watch the peak get averaged out.