I have made an SVG in Adobe Illustrator, and it uses quite a lot of symbols (as illustrator calls them) which convert to use elements in an exported SVG.
(In the figure, the ⚫ marks the exported insertion point; the intersection of the lines is the desired insertion point.)
During that export, the insertion point is moved from the Illustrator location to the bottom-left corner. (The export also seems to rotate the symbol 180°, but that's undone again by an added transform. Very confusing.)
Is there a D3 way to move the insertion point back to where I'd like it to be? So far, I've got most of the way there*, but it's not quite working. My current approach is:
1. Move each instance to the original insertion point.
2. Move the viewBox of the symbol definition so it is centred on the new insertion point.
3. Move the internals of the symbol back into the right place.
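For what it's worth, the arithmetic behind those three steps can be sketched as a pure function. Everything below is hypothetical (the dimensions, the insertion point, and the helper names); the real code would feed the results into d3.select(...).attr(...) on the symbol and its use instances:

```javascript
// Sketch: given a symbol exported with viewBox [0, 0, w, h] and a desired
// insertion point (ix, iy) in the symbol's own coordinates, compute the
// re-centred viewBox and the offset to apply to each <use> instance.
function recenterSymbol(w, h, ix, iy) {
  // Centre the symbol's viewBox on the insertion point.
  const viewBox = [ix - w / 2, iy - h / 2, w, h].join(" ");
  // A <use> placed at (px, py) must be offset so the insertion point,
  // not the viewBox corner, lands on (px, py).
  const useOffset = (px, py) => [px - w / 2, py - h / 2];
  return { viewBox, useOffset };
}

// Example: a 100x100 symbol whose insertion point is at (30, 70).
const { viewBox, useOffset } = recenterSymbol(100, 100, 30, 70);
// viewBox === "-20 20 100 100"
// useOffset(200, 200) → [150, 150]
```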
This is a surprising amount of work, and it feels like the kind of thing that someone else has done before me. The reason I want to do the transform on the symbol is that it drastically simplifies all downstream work because the insertion point is the only geometry data I need to store from then on.
Ideally, there'd be a way to export from Illustrator with the correct, untransformed, origin, but I'm not holding out much hope.
I'm open to the idea of using something else (inkscape etc.) to draw the source file if it's a hard problem.
* once I get this code working, I'll put it in as an answer, but for the moment it's too ugly to make any sense.
I'm having trouble finding a way to solve this specific problem using MeshLab.
As you can see in the figure, the mesh I'm working with has cracks in certain areas, and I would like to close them. The "close holes" option does not seem to work: since these are technically cracks rather than holes, it appears unable to weld them.
I managed to get a good result with the "Screened Poisson Surface Reconstruction" option, but since that operation rebuilds the whole mesh topology, I would lose all the information about the mesh's UVs (and I cannot afford to lose them).
I would appreciate advice on the best way to weld these cracks without changing the vertices that are not along them, adding only the geometry needed to close the mesh (or, ideally, welding together the existing edges along the crack).
Thanks in advance!
As answered by A.Comer in a comment to the main question, I was able to get the desired result simply by playing a bit with the parameters of the "close holes" tool.
Just for the sake of completeness, here is a copy of the comment:
The close holes option should be able to handle this. Did you try changing the max size for that filter to a much larger number? Do filters >> selection >> select border and put the number of selected faces as the max size into that filter – A.Comer
I have a (circular) dial and would like to place text centred at specific angles on it. A simple analogy is a clock face with text above each hour mark. I can write text along an arc, placing it at the top (12 o'clock) position, and then rotate that path to a specific angle. The only solution I could think of was to create one layer for each text and rotate each layer appropriately (10°, 20°, 30°, etc.). That seems like a brute-force method, and I have not even been able to get it to work (yet). So, is there a better, standard way to do this?
TIA
ken
There is definitely no standard way, but there are scripts that can make it easier. They can be found here (some come with self-contained documentation; go directly to the download page).
dial-marks will generate marks (with rounded sides if required). It's easy to remove the excess points and keep only the arcs. You may find other uses for this one if you are into clocks.
ofn-path-edit (its break-apart function) is used to make one path per stroke.
text-along-path can then be used to add text (centered) on each of these paths.
Results of the three main steps
If you want to script this, plenty of code to borrow from in these scripts.
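If you do end up scripting it yourself, the core of the job is polar placement plus one rotate() per label, rather than one layer per label. A minimal sketch (all names and values below are hypothetical example choices, not from any of the scripts mentioned):

```javascript
// Place a label on a dial: the label sits at `radius` from the centre
// (cx, cy), rotated so its baseline stays tangent to the circle, like
// clock numerals. Angles are in degrees, clockwise from 12 o'clock.
function dialLabel(cx, cy, radius, angleDeg, text) {
  const rad = ((angleDeg - 90) * Math.PI) / 180; // 0° = top of the dial
  const x = cx + radius * Math.cos(rad);
  const y = cy + radius * Math.sin(rad);
  // One transform per label instead of one layer per label.
  return `<text x="${x.toFixed(2)}" y="${y.toFixed(2)}" ` +
         `transform="rotate(${angleDeg} ${x.toFixed(2)} ${y.toFixed(2)})" ` +
         `text-anchor="middle">${text}</text>`;
}

// Twelve hour marks, 30° apart, on a dial centred at (100, 100).
const labels = Array.from({ length: 12 }, (_, i) =>
  dialLabel(100, 100, 80, i * 30, String(i === 0 ? 12 : i)));
```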
I have 3 KMLs that do not draw at all and 2-3 that act sporadically depending on the zoom level. I checked the file limitations and I don't seem to be violating any of the limits. I went back to my original shapefiles to check for geometry errors. One of the files had geometry errors and I fixed them, yet that didn't fix the problem of the KML not rendering. I've also implemented zoom functionality with Google's Visualization API and the geoxml3 processor. Here are some interesting things that happen in my application:
- One of the KML files that does not draw will actually respond to the zoom functionality by zooming to its extent, but still won't draw the polygon; evidence that the KML is being parsed but not drawn.
- One of the KML files that does not draw will eventually draw if I click on the polygon next to it and am zoomed in close enough. It will not draw initially.
- Two of my KML files draw when zoomed out but 'disappear' when I zoom in.
My application is here and my fusion table is here. If anyone has had similar problems and was able to fix them I would really appreciate to know how it was accomplished because I'm stumped at this point.
Thanks
First of all: Fusion Tables are still experimental.
Some issues:
- South Nelson Elementary is missing in varID
- JV Humphries Secondary Polygons needs to be fixed
I thought I would post an update.
It turns out some of my data did have geometry errors; those were fixed and converted to KML.
The problem is my actual code. It was originally written simply to display polygons from an array and turn them on/off via checkboxes, so that the user could view the adjacent boundaries of the other polygons. I achieved this in my initial code, and the user zoomed into the area of interest via Google's built-in map controls.
Then I was asked to make the application zoom to the polygon in question when its checkbox was clicked. This works, but it depends on the order in which the checkboxes are clicked. I'm fairly certain it has to do with how the empty array is populated as checkboxes are toggled on/off.
I don't fully understand the logic by which the code decides which polygon to zoom to. All I know is that if I start with every checkbox unchecked and then check/uncheck them one at a time, the zoom functionality works.
If anyone has a suggestion on how to make each checkbox act independently, zooming correctly regardless of the order clicked, I would appreciate it.
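One way to make the checkboxes order-independent is to let each checkbox own its layer's bounds directly, instead of pushing into one shared array. A sketch with a stubbed map object (in the real app, `map` would be the google.maps.Map and the bounds would be google.maps.LatLngBounds objects; the names here are hypothetical):

```javascript
// Each checkbox toggles exactly one layer and, when checked, zooms to
// that layer's own stored bounds — no shared mutable array involved.
function makeCheckboxHandler(map, layers) {
  // layers: { name: { bounds, visible } }
  return function onToggle(name, checked) {
    const layer = layers[name];
    layer.visible = checked;
    if (checked) {
      map.fitBounds(layer.bounds); // zoom to this polygon only
    }
    return layer.visible;
  };
}

// Stub map recording the bounds it was asked to fit, in order.
const calls = [];
const map = { fitBounds: (b) => calls.push(b) };
const layers = {
  a: { bounds: "boundsA", visible: false },
  b: { bounds: "boundsB", visible: false },
};
const toggle = makeCheckboxHandler(map, layers);
toggle("b", true);  // zooms to b
toggle("a", true);  // zooms to a; b stays visible
toggle("b", false); // hides b, no zoom
```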
The problem I am facing is following.
I have a number of 3D head scans. Some were taken correctly (like the attached example), but in many it is easy to see that the scanned person's head was not exactly aligned with the machine's front, so one side of the texture (and depth map) appears "wider" (the exact reason is that one side was captured more from behind; you can see it easily if you look at the ears).
Fortunately, when I convert from cylindrical coordinates to Cartesian ones and render the face with XNA, the face is symmetrical.
Now, I would like the texture and depth maps of all my heads to be as nice and symmetrical as the correct one (because later I want to align them and perform PCA).
The idea I have at the moment is to interpolate the surface between all of the vertices and, from those interpolations, take new vertices that are equally spaced. This solution seems like a lot of work, and maybe it's overkill. Maybe there is some other way (like getting that interpolation data from DirectX/XNA, which has to calculate it at some point anyway).
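The equally-spaced-vertices idea amounts to arc-length resampling with linear interpolation. A sketch in 2D for a single polyline (a real head scan would apply this per row of the cylindrical grid; the function name and the assumption of piecewise-linear interpolation are mine):

```javascript
// Resample a polyline so the new vertices are equally spaced along its
// arc length, interpolating linearly between the original vertices.
function resampleEqual(points, n) {
  // Cumulative arc length at each original vertex.
  const cum = [0];
  for (let i = 1; i < points.length; i++) {
    const dx = points[i][0] - points[i - 1][0];
    const dy = points[i][1] - points[i - 1][1];
    cum.push(cum[i - 1] + Math.hypot(dx, dy));
  }
  const total = cum[cum.length - 1];
  const out = [];
  let seg = 0;
  for (let k = 0; k < n; k++) {
    const target = (total * k) / (n - 1); // equally spaced arc lengths
    while (seg < points.length - 2 && cum[seg + 1] < target) seg++;
    const t = (target - cum[seg]) / ((cum[seg + 1] - cum[seg]) || 1);
    out.push([
      points[seg][0] + t * (points[seg + 1][0] - points[seg][0]),
      points[seg][1] + t * (points[seg + 1][1] - points[seg][1]),
    ]);
  }
  return out;
}

// Example: 5 equally spaced points along an L-shaped polyline.
// resampleEqual([[0,0],[10,0],[10,10]], 5)
//   → [[0,0],[5,0],[10,0],[10,5],[10,10]]
```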
I will be most thankful for helpful answers.
The correct example:
http://i55.tinypic.com/332mio2.jpg
Incorrect example:
http://i54.tinypic.com/309ujvt.jpg
It's probably possible to salvage (some of) the bad scans to some degree using coordinate transformations, but you would have to guess the "incorrectness" of the alignment, and that is probably impossible to do automatically.
But unless the original subject is dead (or otherwise unavailable), it's probably a lot easier to redo the scans.
Making another scan is very likely to be quicker, and you won't lose quality the way transforming the bad scans probably will. The nose on the incorrect sample seems to be shadowing the side of the nose, and no fancy algorithm can ever fix the missing data.
I have a map that I converted from a raster graphic into an SVG file by converting the differently coloured areas into paths.
I know how to do a basic point-in-polygon check given an array of edges, but the svg:path elements represent multiple polygons as well as masks (to account for seas, etc.), and extracting that information by parsing the d attribute seems rather heavy-handed.
Is there a JS library that allows me to simplify that check? I basically want to create random points and then check whether they are on land (i.e. inside the polygons) or water (i.e. outside).
As SVG elements seem to allow for mouse event handling, I would think that this shouldn't be much of a problem (i.e. if you can tell whether the mouse pointer is on top of an element, you are already solving the point-in-polygon problem).
EDIT: Complicating the matter a bit, I should mention that the svg:path elements seem to be based on curves rather than lines, so just parsing the d attribute to create an array of edges doesn't seem to be an option.
As the elements can take a fill attribute, a quick-and-dirty approach of rendering the SVG on a canvas and then reading the colour value of the pixel at the given point could work, but that seems like a really, really awful way to do it.
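For reference, the "basic point-in-polygon check" mentioned above is usually the even-odd ray-casting test. It only handles straight edges, so the curves in the d attribute would indeed have to be flattened into line segments first:

```javascript
// Even-odd ray casting: cast a horizontal ray from the point and count
// how many polygon edges it crosses. An odd count means "inside".
// `vertices` is an array of [x, y] pairs describing a closed polygon.
function pointInPolygon(point, vertices) {
  const [px, py] = point;
  let inside = false;
  for (let i = 0, j = vertices.length - 1; i < vertices.length; j = i++) {
    const [xi, yi] = vertices[i];
    const [xj, yj] = vertices[j];
    const crosses =
      (yi > py) !== (yj > py) &&
      px < ((xj - xi) * (py - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}

// A unit square: (0.5, 0.5) is inside, (1.5, 0.5) is not.
const square = [[0, 0], [1, 0], [1, 1], [0, 1]];
// pointInPolygon([0.5, 0.5], square) → true
// pointInPolygon([1.5, 0.5], square) → false
```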
The answers on "Hit-testing SVG shapes?" may help you in this quest. There are issues with missing browser support, but you could perhaps use svgroot.checkIntersection to hit-test a small (perhaps even zero-width/height) rectangle within your polygon shape.
The approach I suggested as a last resort seems to be the easiest solution for this problem.
I found a nice JS library that makes it easy to render SVG on a canvas. With the SVG rendered, all it takes is a call to the 2D context's getImageData method for a 1x1 region at the point you want to check. If your SVG is more complex than the one I'm using, it helps to create a colour-coded copy of the SVG to make the check easier (you'll have to compare the RGBA value byte by byte).
This feels terribly hackish as you're actually inspecting the pixels of a raster image, but the performance seems to be decent enough and the colour checks can be written in a way that allows for impurities (e.g. near the edges).
I guess if you want relative coordinates you could try creating a 1-to-1 sized canvas and then divide the pixel coordinates by the canvas dimensions.
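Once the SVG is on the canvas, the whole pixel-reading check boils down to a few lines. A sketch with a stubbed context (the rendering step is omitted, and the colour thresholds are hypothetical; they depend entirely on your map's palette):

```javascript
// Hit-test a point by reading the pixel under it from a canvas on which
// the SVG map has already been rendered. `ctx` is any object exposing
// getImageData, so a stub works for testing outside the browser.
function isLand(ctx, x, y) {
  const [r, g, b, a] = ctx.getImageData(x, y, 1, 1).data;
  // Treat any sufficiently opaque, non-blue pixel as land; these
  // thresholds are made up for illustration.
  return a > 128 && b < Math.max(r, g);
}

// Stub context: green "land" at (1, 1), blue "water" elsewhere.
const stubCtx = {
  getImageData: (x, y) => ({
    data: x === 1 && y === 1 ? [40, 180, 60, 255] : [30, 60, 200, 255],
  }),
};
// isLand(stubCtx, 1, 1) → true; isLand(stubCtx, 0, 0) → false
```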
If somebody comes up with a better answer, I'll accept it instead. Until then, this one serves as a placeholder in case someone comes here with the same problem looking for an easy solution.