Assuming a relatively modern, SVG-supporting desktop browser and an SVG consisting of hundreds of similar, simple nodes:
The document could be set up as many individual shape elements (<circle>, <line>, etc.) with their own attributes defined.
The document could be set up as a few <symbol> elements and many individual <use> instances that place them and size them appropriately (W3 spec).
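For concreteness, the two setups might be built like this (a TypeScript sketch with made-up geometry and ids; modern browsers accept href on <use> where older ones needed xlink:href):

// Setup 1: many individual shapes, each with its own attributes.
const manyShapes = `
  <svg width="200" height="200">
    <circle cx="10" cy="10" r="5"/>
    <circle cx="30" cy="10" r="5"/>
    <!-- ...hundreds more... -->
  </svg>`;

// Setup 2: one <symbol> template, many <use> instances placing and sizing it.
const symbolAndUse = `
  <svg width="200" height="200">
    <defs>
      <symbol id="dot" viewBox="0 0 10 10">
        <circle cx="5" cy="5" r="5"/>
      </symbol>
    </defs>
    <use href="#dot" x="5" y="5" width="10" height="10"/>
    <use href="#dot" x="25" y="5" width="10" height="10"/>
    <!-- ...hundreds more instances... -->
  </svg>`;

document.body.insertAdjacentHTML("beforeend", symbolAndUse);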
I understand the semantic and code-maintenance reasons to use <symbol>/<use>, but I'm not concerned with those now; I'm strictly trying to optimize rendering, transformation, and DOM-update performance. I could see <symbol> working similarly to reusing sprites in Flash, conserving memory and being generally a good practice. However, I'd be surprised if browser vendors have been thinking this way (and this isn't really the intent of the feature).
Edit: I am not expecting the base symbols to be changed or added to over the life-cycle of the SVG, only the instance locations, sizes, etc.
Are there any clear patterns to <symbol>/<use> performance?
How idiosyncratic is it to individual browser implementations?
Are there differences between reusing <symbol> vs <g> vs nested <svg>?
Rohit Kalkur compared the rendering speed of creating 5000 SVG symbols via use against directly creating the symbols' shapes, see here. It turned out that rendering SVG shapes via use was almost 50% slower. He reasons that:
The use element takes nodes from within the SVG document, and duplicates them in a non-exposed DOM
Given this, I assume that using SVG symbols is at best as performant as manually creating the symbol's shapes.
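For illustration, here is a rough sketch of how such a comparison could be reproduced (a hedged approximation in TypeScript, not Kalkur's exact benchmark; note that console.time only measures DOM construction, so actual paint time has to be checked separately, e.g. in the devtools timeline):

const SVG_NS = "http://www.w3.org/2000/svg";
const svg = document.createElementNS(SVG_NS, "svg");
document.body.appendChild(svg);

// Variant A: create 5000 shapes directly.
console.time("direct");
for (let i = 0; i < 5000; i++) {
  const c = document.createElementNS(SVG_NS, "circle");
  c.setAttribute("cx", String((i % 100) * 10));
  c.setAttribute("cy", String(Math.floor(i / 100) * 10));
  c.setAttribute("r", "4");
  svg.appendChild(c);
}
console.timeEnd("direct");

// Variant B: one <symbol> template referenced by 5000 <use> elements.
const symbol = document.createElementNS(SVG_NS, "symbol");
symbol.setAttribute("id", "dot");
const circle = document.createElementNS(SVG_NS, "circle");
circle.setAttribute("cx", "4");
circle.setAttribute("cy", "4");
circle.setAttribute("r", "4");
symbol.appendChild(circle);
svg.appendChild(symbol);

console.time("use");
for (let i = 0; i < 5000; i++) {
  const u = document.createElementNS(SVG_NS, "use");
  u.setAttribute("href", "#dot");
  u.setAttribute("x", String((i % 100) * 10));
  u.setAttribute("y", String(Math.floor(i / 100) * 10));
  svg.appendChild(u);
}
console.timeEnd("use");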
I would advise you not to nest <use> elements deeply; that is known to cause slowdowns in most browsers, see here and here.
In the general case, though, it should be fast, at least as long as the template itself isn't changed much. If you do change it, each of the instances needs to be updated too, and each of them can differ from the rest due to CSS inheritance.
Between <svg> and <symbol> there isn't that big a difference on a functional level: both let you define a coordinate system (via the viewBox attribute), while a <g> element does not. Note that <symbol> elements are invisible unless referenced by a <use>, whereas <svg> and <g> are both visible by default. In most cases, however, it's advisable to make the template a child of a <defs> element.
If you change the contents of a <g> or <svg> element, a UI can look at the area the old contents were drawn in and the area the update will be drawn to, and simply redraw those two areas (or even redraw only once if they are the same, e.g. when changing the colour of a shape).
If you update the contents of a <symbol>, then all the instances must be redrawn. Working out, for each instance, where the old and new parts to redraw are is harder, since those areas may be affected by transforms; it's simpler to just redraw all parts of all instances. Some browsers may do the former and some the latter.
In either case, a UI must at a minimum track the changes in the symbol and propagate those changes to all the instances. That's more than likely to have some overhead.
Of course, if you're just moving individual symbol instances and the contents are static then no tracking is required and performance is likely to be similar.
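In that scenario the per-frame work reduces to repositioning the <use> instances while the template stays untouched, e.g. (a sketch; `instances` is assumed to hold the <use> nodes):

// The <symbol> template never changes; only instance positions do,
// so no template-to-instance change tracking is needed.
function drift(instances: SVGUseElement[]): void {
  for (const u of instances) {
    const x = parseFloat(u.getAttribute("x") ?? "0") + (Math.random() - 0.5);
    const y = parseFloat(u.getAttribute("y") ?? "0") + (Math.random() - 0.5);
    u.setAttribute("x", String(x));
    u.setAttribute("y", String(y));
  }
  requestAnimationFrame(() => drift(instances));
}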
Related
In the Godot engine, what happens when objects/scenes leave the viewport? For example, I am trying to make a large map with lots of scenes/entities (such as multiple moving enemies, as well as resource nodes), and I am trying to figure out the best way to handle the entities that no longer need to be loaded in memory.
My initial thought was that, for every tile moved to, I would check the "map" array that holds all the tiles and load the new ones a little off-screen, and vice versa for the ones that will disappear. I assume this is horrible practice. I also thought of having "regions" that, once entered, load the upcoming sections, but that also gets super complicated.
I noticed that Godot already handles part of this problem. For example, when an object emitting particles leaves the viewport, it stops emitting particles.
Globally, performance-wise, having multiple instances shouldn't be a problem, but if you have a lot of entities, you may want to execute code only when they are in the viewport.
For instance, in GDScript (a sketch; it assumes a VisibilityNotifier2D child node, and update_ai() is a placeholder for your per-frame logic):

func _process(delta):
    if $VisibilityNotifier2D.is_on_screen():
        update_ai(delta)  # do everything
    # else: do nothing but exist
To that end, the VisibilityNotifier2D class may be useful.
For starters, I know what a lightmap is, what is to be gained by using it, and how to implement it. What I don't get is this: if you have dynamically moving objects, which can't have a lightmap generated for them, you still need a light source to project their shadows. So what is gained by lightmapping if we still need a light for the objects that won't get any lightmapping (i.e. dynamic objects)?
Thanks in advance.
If you don't use realtime shadows (leaving them off is an option, often taken on mobile), then you have more or less two approaches for dynamic objects:
Use lightmap data baked into probes to approximate per-vertex lighting (no need for a realtime light). It's an approximation, but it can work in some contexts.
Use real-time lights only on dynamic objects; this improves their look without sacrificing any performance on the static ones, which can use baked lighting alone.
If you need dynamic shadows cast by dynamic objects onto static baked ones, then you can still benefit from lightmaps for several reasons:
Even if additional lighting passes are necessary to project shadows onto static lightmapped objects, probably not all objects will be affected by the shadow, only the ones relatively close to the dynamic shadow caster. So you can still save a lot of GPU time.
Lightmaps (especially on the forward render path) allow you to produce complex lighting scenarios that would otherwise be impossible to achieve in realtime. Dynamic objects don't need to be affected by all the baked lights, but possibly only by the most important ones. This way you can have a limited number of draw calls for a really good-looking static environment, and a limited number of "important" lights affecting the dynamic objects.
if you have dynamically moving objects, which won't be able to generate a lightmap, you still need a light source to project their shadows.
That's true. But:
you save on shading the static lightmapped objects, because the realtime light won't affect them
as already said, the projected shadow will be cast on a limited set of objects
My understanding is that glBindAttribLocation allows you to custom set a handle to an attribute (before linking a shader program), which you can later use when rendering with glVertexAttribPointer.
But you don't have to use it, and may instead just rely on OpenGL assigning whatever handle it so chooses in its infinite wisdom. However, you would then need to query OpenGL to find out this handle by using glGetAttribLocation at some point before rendering with glVertexAttribPointer.
Now, you could call glGetAttribLocation each time you render, but that would seem wasteful since you can just call glGetAttribLocation once after building your program and store the handle.
So essentially, you can store this handle either by using glBindAttribLocation or by using glGetAttribLocation. Is there any difference performance-wise, and what are the pros and cons of one over the other?
I cannot speak much about the direct performance difference, but it should be irrelevant anyway: whether you use glBindAttribLocation or glGetAttribLocation, you do it at initialization time (and even then, calling glGetAttribLocation shouldn't hurt that much).
But the main difference, and the advantage of an explicit glBindAttribLocation over letting GL decide, is that it allows you to establish your own attribute semantics and keep them consistent across each and every shader.
Say you have a whole bunch of objects and a whole bunch of shaders. Each shader has some notion of a position attribute (and normal, color, ...); likewise, each object has attribute data for positions, normals, and so on. With glBindAttribLocation you can bind your position attribute to location 0 in each and every shader. So when drawing your objects with different shaders, they can all use a single vertex format (i.e. the same glVertexAttribPointer calls for the individual attributes, and the same enable calls).
On the other hand, glGetAttribLocation doesn't give you any guarantees about which attributes get which indices (maybe one shader has an additional attribute and the compiler thinks it's a good idea to reorder them, who knows). In that case you have a different vertex format (a different set of glVertexAttribPointer calls) for each combination of object and shader.
This is even more important when using Vertex Array Objects (which encapsulate all the attribute state, especially the glVertexAttribPointer and glEnableVertexAttribArray calls). In this case you usually don't need (and don't want) to call glVertexAttribPointer each time you draw an object with another shader.
So the bottom line is: always use glBindAttribLocation. At best (in a large application) it saves you many object- and shader-management issues and many unnecessary glVertexAttribPointer calls each frame (which can well be a performance gain), and at the least (in a very small application) it is good practice and keeps you open and flexible for extensions. As a side note, in desktop GL 3+ (or with the ARB_explicit_attrib_location extension) you can even assign attribute locations directly in the shader, without the need for any API call.
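A sketch of both approaches, written here against the WebGL API, which mirrors the desktop calls discussed above (the attribute names a_position and a_normal are assumptions):

// Preferred: pin attribute locations BEFORE linking, so every shader
// shares the same attribute semantics (position = 0, normal = 1, ...).
function linkWithFixedLocations(gl: WebGLRenderingContext,
                                vs: WebGLShader, fs: WebGLShader): WebGLProgram {
  const program = gl.createProgram()!;
  gl.attachShader(program, vs);
  gl.attachShader(program, fs);
  gl.bindAttribLocation(program, 0, "a_position");
  gl.bindAttribLocation(program, 1, "a_normal");
  gl.linkProgram(program);
  return program;
}

// Alternative: link first, then query once at initialization time and
// cache whatever locations the compiler happened to choose.
function queryLocations(gl: WebGLRenderingContext, program: WebGLProgram) {
  return {
    position: gl.getAttribLocation(program, "a_position"),
    normal: gl.getAttribLocation(program, "a_normal"),
  };
}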
I have some tables displayed side by side, but I am having great difficulty aligning them in the rendered view. When I export the report, it looks different again!
Are there any easier ways to align them correctly, apart from extreme trial and error?
The best way to control layout in Reporting Services is with Rectangles.
In your example, you can try moving your tables into a Rectangle object.
This affects report rendering: the report will render the objects within the Rectangle first, relative to each other, then render and place the other objects relative to the Rectangle, i.e. ignoring the objects within the Rectangle and treating them as one whole.
Every different export medium has its own challenges so it's no surprise that Reporting Services will sometimes give different results for the same report in different media; this method just reduces the decisions made when rendering.
I used the Properties window and set the Location and Height properties of each object that I wanted to align vertically.
I have a map that I converted from a raster graphic into an SVG file by converting the differently coloured areas into paths.
I know how to do a basic point-in-polygon check given an array of edges, but the svg:path elements represent multiple polygons as well as masks (to account for seas etc) and extracting that information by parsing the d attribute seems rather heavy-handed.
Is there a JS library that allows me to simplify that check? I basically want to create random points and then check whether they are on land (i.e. inside the polygons) or water (i.e. outside).
As SVG elements seem to allow for mouse event handling, I would think that this shouldn't be much of a problem (i.e. if you can tell whether the mouse pointer is on top of an element, you are already solving the point-in-polygon problem).
EDIT: Complicating the matter a bit, I should mention that the svg:path elements seem to be based on curves rather than lines, so just parsing the d attribute to create an array of edges doesn't seem to be an option.
As the elements can take a fill attribute, a ghetto approach of rendering the SVG on a canvas and then finding the colour value of the pixel at the given point could work, but that seems like a really, really awful way to do it.
The answers on Hit-testing SVG shapes? may help you in this quest. There are issues with missing browser support, but you could perhaps use svgroot.checkIntersection to hit-test a small rectangle (perhaps even zero width/height would work?) within your polygon shape.
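A sketch of that idea (hedged: checkIntersection support varies by browser, and some implementations test against bounding boxes rather than exact geometry; the selector is made up):

const svgRoot = document.querySelector("svg") as SVGSVGElement;

function hitTest(shape: SVGGraphicsElement, x: number, y: number): boolean {
  const rect = svgRoot.createSVGRect();  // a tiny rect standing in for the point
  rect.x = x;
  rect.y = y;
  rect.width = 1;
  rect.height = 1;
  return svgRoot.checkIntersection(shape, rect);
}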
The approach I suggested as a last resort seems to be the easiest solution for this problem.
I found a nice JS library that makes it easy to render SVG on a canvas. With the SVG rendered, all it takes is a call to the 2D context's getImageData method for a 1x1 region at the point you want to check. If your SVG is more complex than the one I'm using, it probably helps to create a copy of it with colour coding to make the check easier (you'll have to check the RGBA value byte by byte).
This feels terribly hackish, as you're actually inspecting the pixels of a raster image, but the performance seems decent enough, and the colour checks can be written in a way that tolerates impurities (e.g. near the edges).
I guess if you want relative coordinates, you could create a canvas sized 1:1 to the SVG and then divide the pixel coordinates by the canvas dimensions.
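Putting the pieces together, the check might look like this (a sketch; it assumes the SVG has already been drawn onto the canvas by a library such as canvg, and the land colour is an assumption):

const canvas = document.querySelector("canvas") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

// Hypothetical land colour; compare with a tolerance so anti-aliased
// pixels near coastlines still count as land.
const LAND = { r: 0, g: 128, b: 0 };
const TOLERANCE = 40;

function isLand(x: number, y: number): boolean {
  const [r, g, b, a] = ctx.getImageData(x, y, 1, 1).data;
  return a > 0
    && Math.abs(r - LAND.r) < TOLERANCE
    && Math.abs(g - LAND.g) < TOLERANCE
    && Math.abs(b - LAND.b) < TOLERANCE;
}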
If somebody comes up with a better answer, I'll accept it instead. Until then, this one serves as a placeholder in case someone comes here with the same problem looking for an easy solution.