I am implementing a gradient heat map chart with zooming functionality. I draw up to a 100x100 matrix, though the size can vary a lot. For smaller blocks zooming works perfectly fine, but it is dreadfully slow when I draw a big matrix.
Here is my fiddle: http://jsfiddle.net/6BEET/
Could someone please suggest ways to make zooming more efficient? I would like to implement native zooming rather than use other libraries such as SVGPan or raphael-zpd, as I need to support IE8.
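One possible direction, sketched below purely as an illustration and under the assumption that the chart is drawn with Raphael (the raphael-zpd mention suggests it; the container id and sizes are made up): change the paper's view box once per zoom step instead of transforming each of the 10,000 cell rects individually, which is usually far cheaper.

    // Hypothetical setup: a 600x600 Raphael paper holding the heat map cells.
    var paper = Raphael("chart", 600, 600);
    // ... draw the 100x100 grid of rects here ...

    function zoomTo(factor) {
        // A single setViewBox call rescales the whole drawing;
        // the individual cell rects are never touched.
        paper.setViewBox(0, 0, 600 / factor, 600 / factor, false);
    }

    // e.g. zoomTo(2) shows the top-left quarter of the matrix at twice the size.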
I am working on a 2D fantasy map displayed in the browser via WebGL.
It is procedurally generated, so you can move wherever you want, and you can also zoom in and out without losing quality. I would like to add assets in some places, especially mountains where the altitude is high. I have those assets as vector images (.svg) so that you can still zoom in without losing quality. The problem is that I have no idea how to draw them on screen. I think I would need to convert those vectors into triangle vertices, but I am wondering whether there is an automatic way to do it. I have heard of something called SVGLoader, but I believe that is only for three.js, and I am using WebGL alone. What would you advise me to do?
Edit: I just found https://github.com/MoeYc/svg-webgl-loader, which looks interesting.
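For what it's worth, a common alternative to triangulating the SVG is to let the browser rasterize it and upload the result as an ordinary WebGL texture, re-rasterizing at a higher resolution when the camera zooms in. The sketch below is not the svg-webgl-loader API, just a generic illustration; gl is assumed to be an existing WebGLRenderingContext and the asset path is a placeholder.

    // Rasterize an .svg into a canvas at the requested pixel size, then upload
    // that canvas as a texture. The browser's own (antialiased) SVG renderer
    // does the hard part.
    function svgToTexture(gl, url, widthPx, heightPx, onReady) {
        var img = new Image();
        img.onload = function () {
            var canvas = document.createElement("canvas");
            canvas.width = widthPx;
            canvas.height = heightPx;
            canvas.getContext("2d").drawImage(img, 0, 0, widthPx, heightPx);

            var tex = gl.createTexture();
            gl.bindTexture(gl.TEXTURE_2D, tex);
            gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas);
            // Settings that also work for non-power-of-two sizes.
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
            gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
            onReady(tex);
        };
        img.src = url; // e.g. "assets/mountain.svg" (placeholder path)
    }

The obvious trade-off is that you pay the rasterization cost again whenever you want more detail, whereas a triangulated mesh keeps its sharpness at any zoom level.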
Commonly, techniques such as supersampling or multisampling are used to produce high fidelity images.
I've been messing around with CSS3 3D on mobile devices lately, and this trick does a fantastic job of obtaining high-quality, non-aliased edges on quads.
The way the trick works is that the texture for the quad gains two extra pixels in each dimension, forming a transparent one-pixel-wide outline outside the border. Thanks to texture sampling interpolation, so long as the transformation does not put the camera too close to an edge, the effect is not unlike a pre-filtered antialiased rendering approach.
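As a rough sketch of that padding step (assuming the source image is already loaded), the transparent one-pixel frame can be produced with a canvas before the image is used as a texture or background:

    // Copy the source image into a canvas that is 2px larger in each dimension,
    // leaving a transparent 1px frame around it so that linear texture sampling
    // fades the edge out instead of cutting it off hard.
    function addTransparentBorder(img) {
        var canvas = document.createElement("canvas");
        canvas.width = img.width + 2;
        canvas.height = img.height + 2;
        // The canvas starts fully transparent; drawing at (1, 1) keeps the frame.
        canvas.getContext("2d").drawImage(img, 1, 1);
        return canvas; // usable as a texture source, or via toDataURL() in CSS
    }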
What are the conceptual and technical limitations of taking this sort of approach to render a 3D model, for example?
I think I already have one point that precludes using this kind of trick in the general case: whenever the geometry is not rectangular, it does nothing to reduce aliasing. The fact that the result with a transparent 1px outline border looks smooth for HTML5 with CSS3 depends on those elements being rectangular, so that they rasterize neatly into the pixel grid.
The trick you linked to doesn't seem to have anything to do with texture interpolation. The CSS adds a border that is drawn as a line; the browser's rasterizer draws polygons without antialiasing but draws lines with antialiasing.
To answer your question: the reason you wouldn't want to blend into transparency over a 1-pixel border is that transparency is very difficult to draw correctly and can lead to artifacts when polygons are not drawn from back to front. You either need to pre-sort your polygons by distance, or use opaque polygons whose occlusion you resolve with a depth buffer and multisampling.
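In WebGL terms, one hedged way to picture those two options (the canvas variable, drawOpaqueGeometry, transparentQuads and drawQuad below are all placeholders):

    // Opaque pass: a depth buffer (plus MSAA, where the browser grants it)
    // resolves occlusion regardless of draw order.
    var gl = canvas.getContext("webgl", { antialias: true });
    gl.enable(gl.DEPTH_TEST);
    drawOpaqueGeometry();

    // Transparent pass: must be sorted back to front, or blending produces
    // exactly the artifacts described above.
    transparentQuads.sort(function (a, b) {
        return b.distanceToCamera - a.distanceToCamera; // farthest first
    });
    gl.depthMask(false); // still test against depth, but don't write it
    gl.enable(gl.BLEND);
    gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
    transparentQuads.forEach(drawQuad);
    gl.depthMask(true);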
When using an feDisplacementMap SVG filter, my smooth SVG lines get all jagged. I could probably render it large and then shrink it down, but isn't SVG supposed to be able to anti-alias?
Okay, so I figured out the answer to my own question: the filterRes attribute: http://www.w3.org/TR/SVG/filters.html#FilterElementFilterResAttribute
In my testing, on Chrome, increasing the filterRes slows things down pretty dramatically.
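For reference, filterRes sits directly on the filter element; the values below are only a guess to show the syntax, and a finer internal grid costs proportionally more CPU, which matches the slowdown noted above.

    <!-- Hypothetical displacement filter computed on an 800x800 internal grid -->
    <filter id="displace" filterRes="800 800">
      <feTurbulence type="turbulence" baseFrequency="0.02" numOctaves="2" result="noise"/>
      <feDisplacementMap in="SourceGraphic" in2="noise" scale="10"/>
    </filter>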
SVG filters process inputs at the pixel level, not the vector level. As far as an SVG filter is concerned, it has been handed a big rectangle of RGBA pixels to work with. Results from a displacement map can look pixelated because the filter has no idea where the displaced edges are; it's all just pixels as far as it is concerned. (The old semi-transparent pixels that used to be the anti-aliasing have been displaced as well.) However, sometimes you can add another filter or two to solve any problem this creates. Creative ways to solve this problem:
Take the post-displacement graphic, blur it with a radius of a few pixels, then blend the blur back into the original graphic (see the filter sketch after these suggestions).
Take the post-displacement graphic, do a luminance to alpha conversion, then use that alpha map with a diffuse lighting effect to add a fake anti-alias lighting effect.
Use an feConvolveMatrix with edge-detection values to extract edges from the graphic, blur that result, and blend it back into the source graphic.
Depending on your graphic, you might be able to use an erode or dilate filter, but that tends to produce boxy highlights and might not work. And of course, you can always tweak your input in SVG (using stroke effects) to "pre-antialias" your source graphic so the result doesn't look so odd.
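A hedged sketch of the first suggestion (the turbulence settings, displacement scale and blur radius are guesses; tune them to your graphic):

    <!-- Displace the source, blur the displaced result slightly, then draw the
         sharp result back over the blur so the halo softens the jagged edges. -->
    <filter id="smoothedDisplace">
      <feTurbulence type="turbulence" baseFrequency="0.02" numOctaves="2" result="noise"/>
      <feDisplacementMap in="SourceGraphic" in2="noise" scale="10" result="displaced"/>
      <feGaussianBlur in="displaced" stdDeviation="1" result="soft"/>
      <feBlend in="displaced" in2="soft" mode="normal"/>
    </filter>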
SVG images are great for highly detailed graphics, but since they consist of a number of coordinates that must be calculated before rendering, are they potentially bad for performance, say compared to rendering a JPG, which simply draws an array of pre-calculated pixels?
I use context.drawImage, and I do not know whether the SVG graphics need to be recalculated for every frame drawn to the canvas or whether they are cached somehow. Or maybe I'm worrying about nothing?
The performance will depend on your specific application and the complexity of your graphic, but generally speaking the vector graphics are not going to have a significant impact. Your main bottleneck will typically be in manipulating the pixel data in the canvas; the larger your canvas, the more time it will take to draw.
Unless you are redrawing the canvas every frame, however, the only calculations performed are those made when you initially draw the image. When you are not modifying it, the canvas is effectively nothing more than a static bitmap.
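If you do end up redrawing every frame, a minimal sketch of caching the rasterization yourself (assuming svgImage is an already-loaded Image whose src points at the SVG, and ctx is the visible canvas's 2d context):

    // Rasterize the SVG once into an offscreen canvas...
    var cache = document.createElement("canvas");
    cache.width = 256;  // pick the size you actually draw at
    cache.height = 256;
    cache.getContext("2d").drawImage(svgImage, 0, 0, 256, 256);

    // ...then every frame is just a cheap bitmap copy, no SVG work at all.
    function render() {
        ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
        ctx.drawImage(cache, 0, 0);
        requestAnimationFrame(render);
    }
    render();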
Does anyone know of a graphics system which handles composition of multiple anti-aliased lines well?
I'm showing a dependency diagram and have a bunch of curves emanating from a point. These are drawn anti-aliased in the usual way, by blending partially covered pixels. So if two lines would occupy the same half of a pixel, the antialiasing blends it to 75% filled rather than 50% filled. With enough lines drawn on top of each other, the pixel blend clamps and you end up with aliased lines.
I know Anti-Grain Geometry has algorithms for calculating blends that cater for abutting lines, and that oversampling might work, but are there any other approaches?
Handling this form of line composition well is going to be slow (you have to consider all the lines that impinge upon each pixel using a deferred rendering approach). I doubt that there are many (if any) libraries out there that will do it for you.
The quickest and easiest method (and possibly the only realistic, cost-effective solution in your case), and one that will work with virtually any drawing library, is to supersample: draw to an offscreen bitmap at a much higher resolution (e.g. 4 times wider and taller, with lines 4 pixels wide; disable antialiasing when drawing this, as it will only slow things down) and then scale the result down with bilinear filtering. The main downside is that it uses a lot of memory for the offscreen bitmap.
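A minimal sketch of that idea with the 2D canvas API (target and drawLines are placeholders; note the 2D canvas cannot actually switch line antialiasing off, which only costs a little speed here, not correctness):

    var SCALE = 4;
    var off = document.createElement("canvas");
    off.width = target.width * SCALE;
    off.height = target.height * SCALE;

    var offCtx = off.getContext("2d");
    offCtx.scale(SCALE, SCALE); // keep drawing in the original coordinate system
    offCtx.lineWidth = 1;       // becomes a 4px-wide line in device pixels
    drawLines(offCtx);          // your existing line-drawing code

    // Downscale with filtering; coverage was accumulated at 4x4 subpixel
    // resolution, so the clamping artifact largely disappears.
    var ctx = target.getContext("2d");
    ctx.imageSmoothingEnabled = true;
    ctx.drawImage(off, 0, 0, target.width, target.height);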
If you need an existing system that gets antialiased lines "visually correct", you might try using one of several existing RenderMan-compliant 3D renderers. The REYES algorithm, which many of these renderers use, works by breaking up primitives into micropolygons, then sampling them at several random point locations within each pixel. So even if you have a million lines collectively obscuring 50% of a pixel, the resulting image value will show roughly 50% coverage. (This is, for example, how the millions of antialiased hairs are drawn on characters in many animated movies.)
Of course, using a full-blown 3D renderer to draw 2D lines is like driving nails with a sledgehammer. You'd need a fairly pathological scenario for the 3D renderer to be any more efficient than simply supersampling with a traditional 2D renderer.
It sounds like you want a premade drawing library, which I do not know of.
However, to answer your question about approaches that would work: you can consider a pixel to be a square. You can then approximate any shape you draw as a polygon that intersects that pixel's box. By clipping these polygons against the pixel's box and against each other, you can get a very good estimate of the area covered by each color within the pixel, giving accurate antialiasing. This is, of course, very slow to calculate and is not suitable for interactive drawing.
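A sketch of that analytic coverage idea: clip each polygon against the pixel's unit square with Sutherland-Hodgman and measure the surviving area with the shoelace formula. Clipping the colored polygons against one another (for correct compositing) is left out here.

    // Clip a polygon against one half-plane edge of the pixel box.
    function clipAgainstEdge(points, inside, intersect) {
        var out = [];
        for (var i = 0; i < points.length; i++) {
            var cur = points[i];
            var prev = points[(i + points.length - 1) % points.length];
            if (inside(cur)) {
                if (!inside(prev)) out.push(intersect(prev, cur));
                out.push(cur);
            } else if (inside(prev)) {
                out.push(intersect(prev, cur));
            }
        }
        return out;
    }

    // Fraction of the 1x1 pixel at (px, py) covered by poly ([{x, y}, ...]).
    function coverage(poly, px, py) {
        var edges = [
            ['x', px,     function (p) { return p.x >= px;     }],
            ['x', px + 1, function (p) { return p.x <= px + 1; }],
            ['y', py,     function (p) { return p.y >= py;     }],
            ['y', py + 1, function (p) { return p.y <= py + 1; }]
        ];
        edges.forEach(function (edge) {
            var axis = edge[0], value = edge[1];
            poly = clipAgainstEdge(poly, edge[2], function (a, b) {
                var t = (value - a[axis]) / (b[axis] - a[axis]);
                return { x: a.x + t * (b.x - a.x), y: a.y + t * (b.y - a.y) };
            });
        });
        var area = 0; // shoelace formula on the clipped polygon
        for (var i = 0; i < poly.length; i++) {
            var a = poly[i], b = poly[(i + 1) % poly.length];
            area += a.x * b.y - b.x * a.y;
        }
        return Math.abs(area) / 2; // 0..1 = fraction of the pixel covered
    }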