I have a map that I converted from a raster graphic into an SVG file by converting the differently coloured areas into paths.
I know how to do a basic point-in-polygon check given an array of edges, but the svg:path elements represent multiple polygons as well as masks (to account for seas, etc.), and extracting that information by parsing the d attribute seems rather heavy-handed.
Is there a JS library that allows me to simplify that check? I basically want to create random points and then check whether they are on land (i.e. inside the polygons) or water (i.e. outside).
As SVG elements seem to allow for mouse event handling, I would think that this shouldn't be much of a problem (i.e. if you can tell whether the mouse pointer is on top of an element, you are already solving the point-in-polygon problem).
EDIT: Complicating the matter a bit, I should mention that the svg:path elements seem to be based on curves rather than lines, so just parsing the d attribute to create an array of edges doesn't seem to be an option.
As the elements can take a fill attribute, a ghetto approach of rendering the SVG on a canvas and then finding the colour value of the pixel at the given point could work, but that seems like a really, really awful way to do it.
The answers on Hit-testing SVG shapes? may help you in this quest. There are issues with missing browser support, but you could perhaps use svgroot.checkIntersection to hit test a small (perhaps even 0 width/height would work?) rectangle within your polygon shape.
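Something like the following might work. This is only a rough, untested sketch: it assumes the map is an inline SVG with the id "map", that every land area is a path inside it, and that the browser actually implements SVGSVGElement.checkIntersection (support varies, and some implementations only test bounding boxes rather than the actual geometry).

// Sketch: test a 1x1 rectangle at (x, y) against every path in the SVG.
var svgRoot = document.getElementById("map"); // assumed id of the inline SVG
function isOnLand(x, y) {
  var rect = svgRoot.createSVGRect();
  rect.x = x;
  rect.y = y;
  rect.width = 1;  // a 0x0 rectangle may or may not register a hit
  rect.height = 1;
  var paths = svgRoot.querySelectorAll("path");
  for (var i = 0; i < paths.length; i++) {
    if (svgRoot.checkIntersection(paths[i], rect)) {
      return true;
    }
  }
  return false;
}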
The approach I suggested as a last resort seems to be the easiest solution for this problem.
I found a nice JS library that makes it easy to render SVG on a canvas. With the SVG rendered, all it takes is a call to the 2D context's getImageData method for a 1x1 region at the point you want to check. If your SVG is more complex than the one I'm using, it probably helps to create a colour-coded copy of it so the check stays simple (you'll have to compare the RGBA values byte by byte).
This feels terribly hackish as you're actually inspecting the pixels of a raster image, but the performance seems to be decent enough and the colour checks can be written in a way that allows for impurities (e.g. near the edges).
I guess if you want relative coordinates, you could create a canvas with the same dimensions as the SVG and then divide the pixel coordinates by the canvas dimensions.
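To make this concrete, here is roughly what the check looks like. It's only a sketch and makes a few assumptions of its own: the SVG is drawn natively via an Image rather than through the library I used, the file is called map.svg and is served from the same origin (otherwise getImageData is blocked by canvas tainting), and water is simply transparent in the rendered image.

// Sketch: render the SVG onto a canvas, then sample single pixels.
var canvas = document.createElement("canvas");
canvas.width = 1000;  // keep the SVG's aspect ratio
canvas.height = 500;
var ctx = canvas.getContext("2d");

var img = new Image();
img.onload = function () {
  ctx.drawImage(img, 0, 0, canvas.width, canvas.height);

  // rx/ry are relative coordinates in [0, 1)
  function isLand(rx, ry) {
    var x = Math.floor(rx * canvas.width);
    var y = Math.floor(ry * canvas.height);
    var p = ctx.getImageData(x, y, 1, 1).data; // [r, g, b, a]
    return p[3] > 0; // water is transparent here; adjust for your colours
  }

  console.log(isLand(Math.random(), Math.random()));
};
img.src = "map.svg"; // placeholder file name; a data/blob URL also works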
If somebody comes up with a better answer, I'll accept it instead. Until then, this one serves as a placeholder in case someone comes here with the same problem looking for an easy solution.
Question
Is it possible to tell OpenSceneGraph to use the Z-buffer but not update it when drawing semi-transparent objects?
Motivation
When drawing semitransparent objects, the order in which they are drawn matters, since surfaces that should be visible can end up occluded if they are drawn in the wrong order. In some cases OpenSceneGraph's own heuristic for ordering the objects fails: semitransparent surfaces become occluded by other semitransparent surfaces, and "popping" (if that word can be used in this way) may occur when OSG decides that the ordering of the objects' centre distances to the camera has changed and reorders them. It therefore becomes necessary to control the render order of semitransparent objects manually, by specifying the render bin for each object with the setRenderBinDetails method on its state set.
However, even this does not always work, since in the general case it is impossible to choose a render order for the objects (even if the individual triangles in the scene were ordered) such that all fragments are drawn correctly (see e.g. the painter's problem), so one might still get occlusion. An alternative is depth peeling or some other order-independent transparency method, but frankly I don't know how difficult that is to implement in OpenSceneGraph or how much it would slow the application down.
In my case, as a trade-off between aesthetics on the one hand and algorithmic complexity and speed on the other, I would ideally always want to draw a fragment of a semi-transparent surface, even if another fragment of another semi-transparent surface that (in that pixel) is closer to the camera has already been drawn. This would prevent both popping and occlusion of semi-transparent surfaces by other semi-transparent surfaces, and would effectively be achieved if, for every semi-transparent object that was rendered, the Z-buffer was used to test visibility but was not updated when the fragment was drawn.
You're totally on the right track.
Yes, it's possible to leave Z-test enabled but turn off Z-writes with setWriteMask() during drawing:
// Disable Z-writes for this node's subgraph
osg::ref_ptr<osg::Depth> depth = new osg::Depth;
depth->setWriteMask(false);
myNode->getOrCreateStateSet()->setAttributeAndModes(depth.get(), osg::StateAttribute::ON);

// Enable the Z-test (needs to be done after Z-writes are disabled, since the latter
// also seems to disable the Z-test)
myNode->getOrCreateStateSet()->setMode(GL_DEPTH_TEST, osg::StateAttribute::ON);
https://www.mail-archive.com/osg-users@openscenegraph.net/msg01119.html
http://public.vrac.iastate.edu/vancegroup/docs/OpenSceneGraphReferenceDocs-2.8/a00206.html#a2cef930c042c5d8cda32803e5e832dae
You may wish to check out the osgTransparencyTool nodekit we wrote for a CAD client a few years ago: https://github.com/XenonofArcticus/OSG-Transparency-Tool
It includes several transparency methods that you can test with your scenes and examine the source implementation of, including an Order Independent Transparency Depth Peeling implementation and a Delayed Blend method inspired by Open Inventor. Delayed Blend is a high performance single pass unsorted approximation that probably checks all the boxes you want if absolute transparency accuracy is not the most important criteria.
Here's a paper discussing the various approaches in excruciating detail, if you haven't read it:
http://lips.informatik.uni-leipzig.de/files/bathesis_cbluemel_digital_0.pdf
I am a little stuck with what might be a trivial question:
I have two Raphael canvases (papers) on my site. Following a certain event, I need to take the content of one canvas and add it (with a different scale and position) to the second canvas.
How do I even approach that task?
Just to make this question a wee bit useful: I have since succeeded.
I gathered up all elements from the one paper (with the forEach method) and pushed them into a set.
That set I cloned to the other paper using Shamasis' .cloneToPaper(targetPaper) method from the answer to this question.
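In code it looks roughly like this (a sketch only; sourcePaper and targetPaper stand for the two papers, and cloneToPaper is the extension from the linked answer, not a built-in Raphael method):

// Gather every element of the source paper into a set...
var everything = sourcePaper.set();
sourcePaper.forEach(function (el) {
  everything.push(el);
});

// ...and clone that set onto the other paper using the cloneToPaper
// extension from the linked answer; rescale/reposition the copies afterwards.
everything.cloneToPaper(targetPaper);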
So it turned out to be fairly simple if one enters the right search terms >.>
I am currently working on a 2D project that generates random black terrain over a loaded background. I have a sprite that is loaded and controlled, and I am trying to find out the best method for identifying the color behind the sprite so I can code some color-based collision. I have tried a bunch of tutorials on per-pixel and color collision, but they all seem dependent on a collision map being used, or on bounding boxes between two preloaded images (i.e. the sprite and the colliding object).
If anyone could point me in the right direction, it would be greatly appreciated.
Querying textures is a relatively expensive operation; I would strongly recommend that you avoid doing so in real time. Since you're generating your terrain information procedurally at runtime, why not just store it in an array and reference that?
If you need to composite textures or perform other rendering operations in order to create your terrain data, you can copy the resulting render target's data into an array in system memory using the following code:
// Copy the render target's pixels into an array in system memory
var data = new Color[width * height];
texture.GetData(data);
Just try to avoid doing it any more often than is necessary.
I think the right direction would be away from pixel-perfect collisions. Most people assume it's necessary, but the fact is, 99% of games don't use pixel-perfect collisions because they are slow, difficult to implement properly, and overkill for most practical games. Most games use AABBs, circles, or spheres. They are simple to detect collisions between, and are "good enough" for most games. The only game I can name that uses pixel-perfect collisions is the original Worms.
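For comparison, an axis-aligned bounding-box test is just a handful of comparisons. This is a generic sketch (shown here in JavaScript rather than your engine's language) and not tied to any particular framework:

// Two axis-aligned rectangles overlap iff they overlap on both axes.
function aabbIntersects(a, b) {
  return a.x < b.x + b.width &&
         b.x < a.x + a.width &&
         a.y < b.y + b.height &&
         b.y < a.y + a.height;
}

// e.g. a 10x10 sprite against a 5x5 obstacle:
aabbIntersects({ x: 0, y: 0, width: 10, height: 10 },
               { x: 8, y: 8, width: 5, height: 5 }); // true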
This video also does a good job of covering collision detection: http://pyvideo.org/video/615/introduction-to-game-development (Collision Detection #1:13:20)
Firstly, I use the Google Static Maps API to get an image to display on an HTML/WML page.
Then I want to get the GPS position of the point where the user pressed on the image.
Is there some way to get the GPS position if I have the coordinates of the point on the image?
The short answer is probably not. You can't be sure exactly what the static map's dimensions are (the server might change the location slightly to fit things better, etc.). If you're just asking for a map by center and zoom then you stand a slightly better chance, but it will still be tricky.
If you're trying to add dynamic behaviour to a static map, have you considered instead the Maps JavaScript API? Finding the coordinates of where a user clicks is trivial there. (Also, you can disable the zooming, panning, controls, etc. if you want so that it still feels like it's static).
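For example (a sketch using the Maps JavaScript API v3, assuming the API script is loaded and there is a div with id "map"):

// Create a locked-down map and log the lat/lng of every click.
var map = new google.maps.Map(document.getElementById("map"), {
  center: { lat: 0, lng: 0 },
  zoom: 2,
  disableDefaultUI: true, // hide the controls so it feels static
  draggable: false,
  scrollwheel: false
});

google.maps.event.addListener(map, "click", function (e) {
  console.log(e.latLng.lat(), e.latLng.lng());
});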
Just have a question for anyone out there who knows some sort of game engine pretty well. What I am trying to implement is some sort of script or code that will allow me to make a custom game character and textures mid-game. A few examples would be along the lines of changing facial expressions and body part positions in the game SecondLife. I don't really need a particular language, feel free to use your favorite, I'm just really looking for an example on how to go about this.
Also, I was wondering if there is any way to combine textures for optimization; for example, if I wanted to add a tattoo to a character mid-game, is there any code that could combine his body texture and the tattoo texture into one texture to use (that way I can simply render one texture per body)?
Any tips would be appreciated; sorry if the question is a wee bit too vague.
I think that "swappable tattoos" are typically done as a second render pass of the polygons. You could do some research into "detail maps" and see if they provide what you're looking for.
As for actually modifying the texture data at runtime, all you need to do is composite the textures into a new one. More than likely you can even use the rendering API to do it for you: render the textures you want to combine, in the order you want to combine them, into a new texture. Mind, doing this every frame would be counterproductive, since rendering two textures into one and then drawing the result is slower than just drawing the two sources one after the other.
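To illustrate the compositing step, here is a rough, engine-agnostic sketch using a 2D canvas; a 3D engine would do the same thing with a render target or framebuffer object, and the names below are placeholders:

// Bake a tattoo decal into a copy of the body texture, once, up front.
// bodyImage and tattooImage are assumed to be already-loaded images.
function bakeTattoo(bodyImage, tattooImage, tattooX, tattooY) {
  var canvas = document.createElement("canvas");
  canvas.width = bodyImage.width;
  canvas.height = bodyImage.height;

  var ctx = canvas.getContext("2d");
  ctx.drawImage(bodyImage, 0, 0);               // base layer
  ctx.drawImage(tattooImage, tattooX, tattooY); // decal on top

  return canvas; // use this as the character's single texture from then on
}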