Processing.js performance with large data - SVG

My goal is to create an interactive web visualization of data from motion tracking experiments.
The trajectories of the moving objects are rendered as points connected by lines. The visualization allows the user to pan and zoom the data.
My current prototype uses Processing.js because I am familiar with Processing, but I have run into performance problems when drawing data with more than 10,000 vertices or lines. I pursued a couple of strategies for implementing the pan and zoom, but the current implementation, which I think is the best, is to save the data as an SVG image and use the PShape data type in Processing.js to load, draw, scale and translate the data. A cleaned version of the code:
/* @pjs preload="nanoparticle_trajs.svg"; */

PShape trajs;

// pan/zoom state, updated by the mouse-event functions (omitted here)
float centerX = 0, centerY = 0;
float imgW = 900, imgH = 600;

void setup()
{
  size(900, 600);
  trajs = loadShape("nanoparticle_trajs.svg");
}

// function that repeats and draws elements to the canvas
void draw()
{
  background(255);  // clear the previous frame so panning doesn't smear
  shape(trajs, centerX, centerY, imgW, imgH);
}

// ...additional functions that get mouse events
Perhaps I should not expect snappy performance with so many data points, but are there general strategies for optimizing the display of complex SVG elements with Processing.js? What would I do if I wanted to display 100,000 vertices and lines? Should I abandon Processing altogether?
Thanks
EDIT:
Upon reading the following answer, I thought an image would help convey the essence of the visualization:
It is essentially a scatter plot with >10,000 points and connecting lines. The user can pan and zoom the data and the scale bar in the upper-left dynamically updates according to the current zoom level.

Here's my pitch:
Zoom-level grouping: break the data down as the user focuses/zooms in
I would suggest you group together some of the data and present it as a simple node.
On zooming in to a particular node you can break the node down and release the group, thus showing its details.
This way you limit the amount of data you need to show in zoomed-out views (where all the nodes are shown), and you add detail as the user zooms in to a region - in which case not all nodes will be showing, since zooming in focuses on only one area of your graph.
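As a rough illustration, here is a minimal Processing-style sketch of the idea. The cluster centroids, the data arrays and the zoom threshold are all hypothetical placeholders, and the zoom variable is assumed to be updated by your mouse handlers:

  float zoom = 1.0;                      // assumed to be updated by the zoom handler
  float lodThreshold = 4.0;              // hypothetical cut-over point, tune to taste
  float[] clusterX = { 150, 450, 750 };  // hypothetical precomputed group centroids
  float[] clusterY = { 300, 200, 400 };
  float[] pointX = {};                   // full-resolution data would go here
  float[] pointY = {};

  void setup()
  {
    size(900, 600);
  }

  void draw()
  {
    background(255);
    if (zoom < lodThreshold) {
      // zoomed out: one marker per group instead of thousands of vertices
      for (int i = 0; i < clusterX.length; i++) {
        ellipse(clusterX[i] * zoom, clusterY[i] * zoom, 6, 6);
      }
    } else {
      // zoomed in: draw the full detail, ideally only for groups in view
      for (int i = 0; i < pointX.length; i++) {
        point(pointX[i] * zoom, pointY[i] * zoom);
      }
    }
  }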
Viewport limit
Detect what is in the current view area and draw just that. Avoid drawing the whole node graph structure if the user cannot see it in the viewport: show only what is necessary. I suspect Processing.js already does some of this, but I don't know whether your zooming functionality takes advantage of it.
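For instance, if you keep the raw vertices around instead of a single PShape, a per-segment bounds check is straightforward. A sketch, assuming panX/panY/zoom are maintained by your mouse-event functions:

  // assumed pan/zoom state maintained by the mouse-event functions
  float panX = 0, panY = 0, zoom = 1.0;

  void setup()
  {
    size(900, 600);
  }

  boolean onScreen(float wx, float wy)
  {
    // transform world coordinates to screen space and test against the canvas
    float sx = wx * zoom + panX;
    float sy = wy * zoom + panY;
    return sx >= 0 && sx <= width && sy >= 0 && sy <= height;
  }

  void drawSegment(float x1, float y1, float x2, float y2)
  {
    // skip segments with both endpoints outside the viewport
    if (!onScreen(x1, y1) && !onScreen(x2, y2)) return;
    line(x1 * zoom + panX, y1 * zoom + panY,
         x2 * zoom + panX, y2 * zoom + panY);
  }

Note that this simple endpoint test can wrongly cull a long segment that crosses the view; a proper line-rectangle clip would fix that.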
Consider bitmap caching if your nodes are interactive/clickable
If your elements are clickable/interactive, you might want to consider grouping data and showing them as bitmaps (large groups of data shown as a single image) until the user clicks on a bitmap, in which case the bitmap is removed and the original shape is redrawn in its place. This minimizes the number of points/lines the engine has to draw on each redraw cycle.
For bitmap caching see this link (this is Fabric.js, a canvas library with SVG support, but the concept/idea is the same), and also this answer I posted to one of my questions about interactive vector/bitmap caching.
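In Processing the same idea can be approximated with an off-screen PGraphics buffer (assuming createGraphics is available in your Processing.js build): render a group once, then blit it as a single image until the user clicks it. A sketch, where the group drawing is a stand-in:

  PGraphics cache;           // off-screen bitmap for one group of trajectories
  boolean exploded = false;  // true once the user clicks the group

  void setup()
  {
    size(900, 600);
    cache = createGraphics(300, 300);
    cache.beginDraw();
    cache.background(255);
    cache.line(10, 10, 290, 290);  // stand-in for drawing the whole group
    cache.endDraw();
  }

  void draw()
  {
    background(255);
    if (!exploded) {
      image(cache, 0, 0);          // cheap: a single image blit per frame
    } else {
      // draw the group's real points and lines here instead
    }
  }

  void mousePressed()
  {
    exploded = true;               // swap the bitmap for live geometry on click
  }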
As a side note:
Do you really need to use Processing?
If there's no interaction or animation happening and you just want to blit pixels (just draw it once) on a canvas, consider abandoning a vector-based library altogether. Plain old canvas just blits pixels and that's all. The initial drawing of the data might have some delay, but since there is no internal reference to the points/shapes/lines after they are drawn, there is nothing eating up your resources or clogging your memory.
So if this is the case - consider making the switch to plain Canvas.
However, data visualisations are all about animation and interactivity, so I doubt you'll want to give them up.

Related

SDL2 / Surface / Texture / Render

I'm trying to learn SDL2. The main difference (as far as I can see) between the old SDL and SDL2 is that the old SDL had a window represented by its surface, all pictures were surfaces, and all image operations and blits were surface to surface. In SDL2 we have surfaces and textures. If I got it right, surfaces are in RAM and textures are in graphics memory. Is that right?
My goal is to make an object-oriented wrapper for SDL2 because I had a similar thing for SDL. I want to have a window class and a picture class (with a private texture and surface). The window will have its contents represented by an instance of the picture class, and all blits will be picture-to-picture object blits. How should I organize these picture operations:
Should pixel manipulation be done at the surface level?
If I want to copy part of one picture to another without rendering it, should that be done at the surface level?
Should I blit a surface to a texture only when I want to render it on the screen?
Is it better to render everything to one surface and then render that to the window texture, or to render each picture to the window texture separately?
Generally, when should I use a surface and when should I use a texture?
Thank you for your time; all help and suggestions are welcome :)
First I need to clarify some misconceptions: texture-based rendering does not work the way the old surface rendering did. While you can use SDL_Surfaces as source or destination, SDL_Textures are meant to be used as the source for rendering, and the complementary SDL_Renderer is used as the destination. Generally you will have to choose between the old rendering framework, which runs entirely on the CPU, and the new one, which targets the GPU, but mixing is possible.
So for your questions:
Textures do not provide direct access to pixels, so pixel manipulation is better done on surfaces.
It depends. It does not hurt to copy between textures if it does not happen very often and you want accelerated rendering later.
When talking about textures you will always render to an SDL_Renderer, and it is always better to pre-load surfaces into textures.
As I explained in the first paragraph, there is no window texture. You can use either entirely surface-based rendering or entirely texture-based rendering. If you really need both (i.e. you want direct pixel access and accelerated rendering), it is better to do as you said: blit everything to one surface and then upload it to a texture.
Lastly, you should use textures whenever you can. Surfaces are the exception: use them when you either need intensive pixel manipulation or have to deal with legacy code.

Conservatively cover bitmap with small number of primitives?

I'm researching the possibility of performing occlusion culling in voxel/cube-based games like Minecraft, and I've come across a challenging sub-problem. I'll give the 2D version of it.
I have a bitmap in which pixels are infrequently added or removed.
Image Link
What I want to do is maintain some arbitrarily small set of geometry primitives that cover an arbitrarily large area, such that the area covered by all the primitives is within the colored part of the bitmap.
Image Link
Is there a smart way to maintain these sets? Please note that this is different from typical image tracing in that the primitives cannot go outside the lines. If it helps, I already have the bitmap organized into a quadtree.
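One simple conservative scheme that fits the quadtree already mentioned: emit one rectangle for every maximal node that is completely filled. The cover never leaves the colored region, and a pixel change only dirties the branch that contains it. A minimal Java sketch, where the Node layout is an assumption rather than the actual structure:

  import java.awt.Rectangle;
  import java.util.List;

  class Node
  {
    int x, y, size;     // square region covered by this node
    boolean full;       // true if every pixel inside is colored
    Node[] children;    // null for leaf nodes

    // collect rectangles for maximal fully-colored nodes
    void collectCover(List<Rectangle> out)
    {
      if (full) {
        out.add(new Rectangle(x, y, size, size));  // stays inside the bitmap
      } else if (children != null) {
        for (Node c : children) c.collectCover(out);
      }
      // partially filled leaves contribute nothing, keeping the cover conservative
    }
  }

This yields more primitives than an optimal cover, but merging adjacent sibling rectangles afterwards can shrink the set further.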

Draw an image in -drawRect: or load the same image from a file, which is more efficient?

Say I want to display a coordinate graph in a UIView which is going to be updated over time. Assume I am implementing -drawRect: of the UIView and other methods to update the graph.
Since the coordinate frame in the graph stays the same over time, would it be more efficient to have two UIViews: one (view1) loading the coordinate frame from an image file, so the frame need not be redrawn every time the graph updates, and the other (view2) implementing -drawRect: and added as a subview of view1? Or is it better to have only one UIView where the entire graph is drawn in -drawRect:?
The above is just a specific example. What I am wondering is whether it is a design pattern to separate static UI elements as much as possible from dynamic ones where -drawRect: is involved, and whether doing so substantially saves CPU (or GPU) resources.
Any insight would be appreciated. Thanks.
My experience is that it is often best to have an offscreen buffer where you draw to, and then copy some or all of the offscreen buffer to the screen. There are a number of ways you can do this, and it depends on the type of view you're using. For example, if you have an NSOpenGLView, you might draw to a texture-backed FBO, and then use that texture to draw a textured quad to the screen. If you are working with CGImages for the static part, you might draw to a CGBitmapContext in main memory, and then turn that bitmap context into a CGImage to draw to the view.
How you draw to your offscreen buffer will also depend on what you're drawing. For something like a graph, you might draw the background once to your offscreen buffer and then add points to a curve you're drawing over it as time progresses, and just copy the portion you need to screen. For something like a game where all the characters and objects in the scene are moving, you might need to draw everything in the scene on every frame.
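The pattern itself is independent of CoreGraphics, so here it is sketched with Java2D's BufferedImage purely to show its shape; the drawAxes and drawCurve helpers are hypothetical stand-ins for the static and dynamic drawing:

  import java.awt.Graphics2D;
  import java.awt.image.BufferedImage;

  class GraphRenderer
  {
    private final BufferedImage background;  // static part, drawn exactly once

    GraphRenderer(int w, int h)
    {
      background = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
      Graphics2D g = background.createGraphics();
      drawAxes(g);  // hypothetical: render the coordinate frame once
      g.dispose();
    }

    void render(Graphics2D screen)
    {
      screen.drawImage(background, 0, 0, null);  // cheap blit of the static part
      drawCurve(screen);                         // only the dynamic data is redrawn
    }

    private void drawAxes(Graphics2D g)  { g.drawLine(40, 0, 40, 600); }
    private void drawCurve(Graphics2D g) { /* plot the latest points here */ }
  }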

2d tile based game design, how do I draw a map with a viewport?

I've been struggling with this for a while.
Presently, I have a grid of 100 by 100 tiles, which belong to a Map.
The Map implements IDrawable. I call Draw() and it draws itself at 0,0 which is fine.
However, I want to expand this to draw essentially a viewport. The player will be drawn on the screen in the middle, and thus I want to display say, 10 tiles in each direction (rather than the entire map).
I'm having trouble thinking up the architecture for this one. I'm in the mindset that things should draw themselves, i.e. I say player1.Draw() and it draws itself. This would have worked before, when it drew the player at x,y on the screen, but with a viewport it will no longer know where to draw itself.
So should the viewport be told to draw, and examine every object in the game and draw those which are visible? Should the map tiles be objects that are subjected to this? Or should the viewport intelligently draw the map by coupling both together?
I'd love to know how typical scrolling tile games accomplish this.
If it matters, I'm using XNA
Edit to add: Can you do graphics manipulation such as trying the HTML rendering approach, where you tell things to draw, and they return a graphic of themselves, and then the parent places the graphic in the correct location? I'm thinking, if I had 2 viewports side by side for splitscreen, how would I stop them drawing outside the edges?
Possible design:
There's a 2D "world" that contains object instances.
"Object instance" is a sprite reference + its coordinates in the world.
When you draw the scene, you request the list of visible objects that exist in the given 2D area, THEN you draw them.
With such a design the world can be very large; a sketch of this follows.
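A minimal Java sketch of that design; the sprite reference is left as a plain Object, and the linear scan stands in for whatever spatial index you use:

  import java.util.ArrayList;
  import java.util.List;

  class ObjectInstance
  {
    float x, y;     // coordinates in the world
    Object sprite;  // reference to a shared sprite/graphic resource
  }

  class World
  {
    private final List<ObjectInstance> objects = new ArrayList<ObjectInstance>();

    // return only the instances inside the given viewport rectangle;
    // for big worlds a quadtree query would replace this linear scan
    List<ObjectInstance> visibleIn(float vx, float vy, float vw, float vh)
    {
      List<ObjectInstance> out = new ArrayList<ObjectInstance>();
      for (ObjectInstance o : objects) {
        if (o.x >= vx && o.x <= vx + vw && o.y >= vy && o.y <= vy + vh) {
          out.add(o);
        }
      }
      return out;
    }
  }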
I'm in the mindset that things should draw themselves, i.e. I say player1.Draw() and it draws itself.
Visible things should draw themselves. Objects outside of the viewport are not visible.
how would I stop them drawing outside the edges?
Not sure about XNA, but OpenGL has the scissor test / glViewport, and Direct3D 9 has a SetViewport method that lets you use part of the screen/window for rendering. There are also clip planes and the stencil buffer (using the stencil buffer for 2D clipping is overkill, though). You could also render to a texture and then render that texture. There are many ways to deal with this.
So should the viewport be told to draw, and examine every object in the game and draw those which are visible?
For a large world, you shouldn't examine every object, because it will be slow. You should be able to find visible objects without testing every one of them. For that you'll need some kind of space partitioning - quadtrees (because we are in 2D), k-d trees, etc. This way you should be able to handle a few thousand (or even hundreds of thousands) of objects, as long as you don't see them all at once.
Should the map tiles be objects that are subjected to this?
If you keep drawing invisible things, FPS will drop.
and they return a graphic of themselves
For a 2D game this may be very slow. Remember the KISS principle.
Some basic ideas, not specifically for XNA:
objects draw themselves to a "virtual screen" in world coordinates; they don't draw themselves to the screen directly
drawable objects get a "graphics context" object which offers a drawing API. The "graphics context" knows about the current viewport bounds and performs the coordinate transformation from world coordinates to screen coordinates (for every drawing operation). The graphics context also does the actual drawing to the screen (or to a background screen buffer, if you need double buffering).
when you have many objects outside the visible bounds of your viewport, then as a performance optimization, your drawing loop can do an up-front bounds check for your objects and test whether they are completely outside the visible area. If so, there is no need to let them draw themselves (a sketch of such a context follows).
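A Java sketch of such a graphics-context object; the actual draw target is reduced to a plain java.awt.Graphics for illustration:

  class GraphicsContext
  {
    // viewport position and zoom, in world coordinates
    float viewX, viewY, scale;
    int screenW, screenH;

    float toScreenX(float worldX) { return (worldX - viewX) * scale; }
    float toScreenY(float worldY) { return (worldY - viewY) * scale; }

    // every drawing operation goes through the world-to-screen transform
    void drawLine(java.awt.Graphics g, float x1, float y1, float x2, float y2)
    {
      g.drawLine((int) toScreenX(x1), (int) toScreenY(y1),
                 (int) toScreenX(x2), (int) toScreenY(y2));
    }

    // the up-front bounds check for the culling optimization mentioned above
    boolean isVisible(float x, float y, float w, float h)
    {
      float sx = toScreenX(x), sy = toScreenY(y);
      return sx + w * scale >= 0 && sx <= screenW
          && sy + h * scale >= 0 && sy <= screenH;
    }
  }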

Best way to move sprites in OpenGL - translate or alter vertices

I am creating an app for Android using OpenGL ES. I am trying to draw, in 2D, lots of moving sprites which bounce around the screen.
Let's say I have a ball at coordinates 100,100. The ball graphic is 10px wide, therefore I can create the vertices boundingBox = {100,110,0, 110,110,0, 100,100,0, 110,100,0} and perform the following on each loop of onDrawFrame() with the ball texture loaded.
// for each ball object
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(ball.boundingBox.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
FloatBuffer ballVertexBuffer = byteBuffer.asFloatBuffer();
ballVertexBuffer.put(ball.boundingBox);
ballVertexBuffer.position(0);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, ballVertexBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
I would then update the boundingBox array to move the balls around the screen.
Alternatively, I could leave the bounding box alone and instead glTranslatef() the ball before drawing the vertices:
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, ballVertexBuffer);
gl.glPushMatrix();
gl.glTranslatef(ball.posX, ball.posY, 0);  // move the static quad to the ball's position
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
gl.glPopMatrix();
What would be the best thing to do in this case in terms of efficiency and best practices?
OpenGL ES (as of 2.0) unfortunately does not support instancing. If it did, I would recommend drawing a 2-triangle sprite instanced N times, reading the x/y offsets of the center point, and possibly a scale value if you need differently sized sprites, from a vertex texture (which ES supports just fine). This would limit the amount of data you must push per frame to a minimum.
Assuming you can't do the simulation directly on the GPU (thus avoiding uploading the vertex data each frame), this basically leaves you with only one efficient option:
Generate 2 VBOs; map one and fill it while the other is used as the source of the draw call. You can also do this with a single buffer if you call glBufferData(..., 0) in between, which tells OpenGL to allocate a new data store and throw the old one away as soon as it's done reading from it.
Streaming vertices in every frame may not be super fast, but this does not matter as long as the latency can be well-hidden (e.g. by drawing from one buffer while filling another). Few draw calls, few state changes, and ideally no stalls should still make this fast.
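On Android this needs the GL11 interface (OpenGL ES 1.1), since GL10 has no buffer objects. A sketch of the ping-pong scheme under those assumptions; it presumes the vertex array client state is already enabled and that vertices is rebuilt from the ball positions each frame:

  import java.nio.FloatBuffer;
  import javax.microedition.khronos.opengles.GL10;
  import javax.microedition.khronos.opengles.GL11;

  class BallBatch
  {
    private final int[] vbo = new int[2];  // ping-pong buffer pair
    private int front = 0;

    void init(GL11 gl)
    {
      gl.glGenBuffers(2, vbo, 0);
    }

    // vertices holds 4 vertices * 3 floats per ball, byteCount bytes in total
    void drawFrame(GL11 gl, FloatBuffer vertices, int byteCount, int ballCount)
    {
      int back = 1 - front;

      // upload this frame's data into the back buffer; handing glBufferData a
      // fresh data store also orphans whatever the GPU may still be reading
      gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, vbo[back]);
      gl.glBufferData(GL11.GL_ARRAY_BUFFER, byteCount, vertices, GL11.GL_DYNAMIC_DRAW);

      // draw from the buffer that was filled on the previous frame
      gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, vbo[front]);
      gl.glVertexPointer(3, GL10.GL_FLOAT, 0, 0);  // byte offset into the bound VBO
      for (int i = 0; i < ballCount; i++) {
        // one 4-vertex strip per ball; an indexed GL_TRIANGLES draw could
        // collapse this loop into a single call
        gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, i * 4, 4);
      }

      gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
      front = back;
    }
  }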
Draw calls are much more expensive than altering the data. Also, glTranslate is not nearly as efficient as just adding a few numbers; after all, it has to go through a full 4×4 matrix multiplication, which is 64 scalar multiplies and 48 scalar additions.
Of course the best method is using some form of instancing.
