I'm currently working on a tool which requires displaying a large number of nodes (here ImageView) on a canvas pane. Most of them will contain the very same image. To save resources, I want to avoid adding the same ImageView multiple times to the scene graph. Is there a way to render the ImageView at different positions within my canvas at the same time?
Game frameworks usually use such a technique. I think it's called templating. Is there something similar in JavaFX?
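What I do today is create one ImageView per position, all sharing a single Image instance; it's this per-node duplication I'd like to avoid. A minimal sketch of that current setup (the image path is just a placeholder):

```java
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
import javafx.scene.layout.Pane;
import javafx.stage.Stage;

public class SharedImageDemo extends Application {
    @Override
    public void start(Stage stage) {
        // The Image (the pixel data) is loaded once and shared by all views.
        Image sprite = new Image("file:sprite.png");  // placeholder path

        Pane canvasPane = new Pane();
        for (int i = 0; i < 1000; i++) {
            // Each ImageView is a separate lightweight node referencing the same Image.
            ImageView view = new ImageView(sprite);
            view.setLayoutX((i % 40) * 20);
            view.setLayoutY((i / 40) * 20);
            canvasPane.getChildren().add(view);
        }

        stage.setScene(new Scene(canvasPane, 800, 600));
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```

So the pixel data exists only once, but I still end up with one node per drawn position, which is what I'm hoping to avoid.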
My goal is to show multiple (small) panes of video on-screen simultaneously.
I would prefer to use the hardware scaler. This is currently working well for a single video on a single surface. For multiple streams it appears multiple SurfaceViews are needed; I don't see a way to use the hardware scaler to blit multiple images into different parts of the same Surface. What's the best way to lock/blit image pixels to these surfaces?
ANativeWindow_unlockAndPost causes a wait-for-vsync + swap (I think?), so I can't call this per-SurfaceView in the same update cycle (well I can, but I get horrible jittering).
One alternative is to use a separate render thread per SurfaceView. Does this seem like a sane avenue to pursue? Are there any other ways to update multiple SurfaceViews with a single wait-for-vsync+swap?
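For reference, the per-SurfaceView render-thread structure I have in mind looks roughly like the sketch below. It is written at the Java level with lockCanvas()/unlockCanvasAndPost() just to show the threading; my real code locks and posts via ANativeWindow in native code, but the structure would be the same.

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

// One render thread per SurfaceView, so each surface's post/swap blocks
// only its own thread instead of stalling the others.
class SurfaceRenderThread extends Thread {
    private final SurfaceView surfaceView;
    private volatile boolean running = true;

    SurfaceRenderThread(SurfaceView surfaceView) {
        this.surfaceView = surfaceView;
    }

    void shutdown() {
        running = false;
    }

    @Override
    public void run() {
        SurfaceHolder holder = surfaceView.getHolder();
        while (running) {
            Canvas canvas = holder.lockCanvas();
            if (canvas == null) {
                // Surface not ready yet; back off briefly instead of spinning.
                try { Thread.sleep(10); } catch (InterruptedException e) { return; }
                continue;
            }
            try {
                canvas.drawColor(Color.BLACK);  // draw this stream's frame here
            } finally {
                // May block until the frame is consumed, but only this thread waits.
                holder.unlockCanvasAndPost(canvas);
            }
        }
    }
}
```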
I am writing an iOS/Android game and looking for the most performant way to render my vertex data with OpenGL ES 2.0. I have two different kinds of data: dynamic data that changes its attributes every frame, for example the player or animated background objects, and static data such as the static background or the terrain. I have googled a lot since yesterday, but I could not find a clear answer to the question of what the best way to render such data is.
There are basically three options for rendering such data (if I am not missing one; if so, feel free to correct me):
Vertex Arrays Only:
Just fill your vertex arrays every frame on the CPU (including the dynamic data).
Vertex Buffer Objects Only:
Allocate a VBO on the GPU with GL_DYNAMIC_DRAW, in which both the dynamic and the static data are stored. The dynamic data is then updated every frame via glBufferSubData.
Use both:
Static data is stored and rendered with a VBO, and the dynamic data is rendered with a vertex array. With this option, we need two rendering passes: one for rendering the VBO and one for rendering the vertex array.
Since the first option does not exploit the immutability of the static data and since the third option requires two rendering passes, my guess is that I should go with the second option. However, I am absolutely not sure about this and I hope you can clarify my confusion.
Allocate two vertex buffer objects: one with the hint GL_DYNAMIC_DRAW that will be updated frequently, and a second VBO for the immutable data with the hint GL_STATIC_DRAW. According to the API documentation, GL_STATIC_DRAW should be used for data that "will be modified once and used many times"; just what you need.
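A minimal sketch of that setup, using the Android Java bindings (android.opengl.GLES20); the buffer sizes, contents and class layout here are just placeholders:

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Two VBOs: one GL_STATIC_DRAW buffer uploaded once, and one
// GL_DYNAMIC_DRAW buffer whose contents are replaced every frame
// with glBufferSubData.
public class VertexBuffers {
    private static final int BYTES_PER_FLOAT = 4;
    private final int[] buffers = new int[2];

    public void create(float[] staticVertices, int dynamicVertexFloats) {
        GLES20.glGenBuffers(2, buffers, 0);

        // Static geometry: upload once with the GL_STATIC_DRAW hint.
        FloatBuffer staticData = ByteBuffer
                .allocateDirect(staticVertices.length * BYTES_PER_FLOAT)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer()
                .put(staticVertices);
        staticData.position(0);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER,
                staticVertices.length * BYTES_PER_FLOAT, staticData,
                GLES20.GL_STATIC_DRAW);

        // Dynamic geometry: allocate storage now, fill it each frame.
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[1]);
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER,
                dynamicVertexFloats * BYTES_PER_FLOAT, null,
                GLES20.GL_DYNAMIC_DRAW);
    }

    // Called every frame with the freshly animated vertices.
    public void updateDynamic(FloatBuffer frameData, int floatCount) {
        frameData.position(0);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[1]);
        GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0,
                floatCount * BYTES_PER_FLOAT, frameData);
    }
}
```

Your attribute pointers and draw calls then bind buffers[0] when drawing the static geometry and buffers[1] when drawing the dynamic geometry.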
Speaking of two rendering passes here is probably a misuse of the term: what you actually do is render your scene with two separate drawing commands. Since drawing commands run asynchronously, you should not experience any performance hit by doing so.
A second rendering pass, on the other hand, is when you render the entire scene twice (see for example here) with different settings, or when you do some image processing effects on outputs of previous rendering passes.
I need to display a Directed Acyclic Graph in a web page. I am not looking for an off-the-shelf library or solution. I am looking for suggestions, recommendations or a push in the right direction.
1. DAG Visualization
I am not sure how the nodes and relations will be represented. Viable solutions may be treemaps, the good old nodes & lines, or a combination of the two. I don't have a problem if one node appears more than once on the screen.
I don't need all the nodes to appear on the screen from the start. The user may expand a node by double clicking or zooming for example.
I am open to all suggestions and advice.
2. Technology
There are some functionalities that the implementation must have:
drag & drop
zoom
events on mouse interaction with nodes
From my point of view, I have three options (Flash is out of the question):
a. HTML5 Canvas
Disadvantages: no vectors, basically just an image; no implicit mouse events on nodes;
Advantages: speed; popularity; animations
b. SVG
Disadvantages: low speed when there are many nodes;
Advantages: vector graphics; elements are in the DOM so you can have events and so on;
c. A mix of HTML5 Canvas & SVG
Assuming you want to dynamically update your graph, you could probably use python on the server with the pydot GraphViz module.
I have not tried this, but it's something worth looking into.
Just have a question for anyone out there who knows some sort of game engine pretty well. What I am trying to implement is some sort of script or code that will allow me to make a custom game character and textures mid-game. A few examples would be along the lines of changing facial expressions and body part positions in the game SecondLife. I don't really need a particular language; feel free to use your favorite. I'm just really looking for an example of how to go about this.
Also, I was wondering if there is any way to combine textures for optimization; for example, if I wanted to add a tattoo to a character mid-game, is there any code that could combine his body texture and the tattoo texture into one texture to use? (This way I can simply render one texture per body.)
Any tips would be appreciated; sorry if the question is a wee bit too vague.
I think that "swappable tattoos" are typically done as a second render pass of the polygons. You could do some research into "detail maps" and see if they provide what you're looking for.
As for actually modifying the texture data at runtime, all you need to do is composite the textures into a new one. You could more than likely even use the rendering API to do it for you: render the textures you want to combine, in the order you want to combine them, into a new texture. Mind, doing this every frame would be counterproductive, since it'll be slower to render two textures into one and then draw the new one than it would be to just draw the two sources one after the other.
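To illustrate the compositing idea only (not any particular engine's API), here is a rough CPU-side sketch with plain Java2D; the file names are placeholders, and a real engine would more likely render both source textures into a render target on the GPU:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Composite the tattoo over the body texture once, cache the result,
// and render only the combined texture afterwards.
public class TextureCompositor {
    public static BufferedImage composite(BufferedImage body, BufferedImage tattoo,
                                          int x, int y) {
        BufferedImage combined = new BufferedImage(
                body.getWidth(), body.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = combined.createGraphics();
        g.drawImage(body, 0, 0, null);    // base skin texture
        g.drawImage(tattoo, x, y, null);  // alpha-blended decal on top
        g.dispose();
        return combined;
    }

    public static void main(String[] args) throws Exception {
        BufferedImage body = ImageIO.read(new File("body.png"));      // placeholder
        BufferedImage tattoo = ImageIO.read(new File("tattoo.png"));  // placeholder
        BufferedImage result = composite(body, tattoo, 128, 256);
        ImageIO.write(result, "png", new File("body_with_tattoo.png"));
    }
}
```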
What is the generic algorithm or process that is commonly used to dynamically render portions of a scrolling area?
For example, in Google Maps, when the user scrolls past the bounds of the currently rendered area, a grey checkerboard pattern is displayed within the not-yet-rendered portions while the application loads and renders those areas.
I'm looking specifically for the approach, or the mathematics, related to filling a graphics area in chunks based on what has just come into view.
If possible, I'm looking for anything relevant to the GDI+ process of doing so.
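To make the question concrete, here is the sort of tile bookkeeping I assume is involved, although I'm not sure it is the standard approach (Java sketch; the tile size and names are arbitrary):

```java
import java.util.ArrayList;
import java.util.List;

// Divide the world into fixed-size tiles and, whenever the viewport moves,
// compute which tile indices it overlaps. Tiles not yet rendered get the
// placeholder (e.g. a grey checkerboard) and are queued for rendering.
public class TileMath {
    static final int TILE_SIZE = 256;

    record Tile(int col, int row) {}

    // Viewport in world coordinates: top-left (scrollX, scrollY), size w x h.
    static List<Tile> visibleTiles(int scrollX, int scrollY, int w, int h) {
        int firstCol = Math.floorDiv(scrollX, TILE_SIZE);
        int firstRow = Math.floorDiv(scrollY, TILE_SIZE);
        int lastCol  = Math.floorDiv(scrollX + w - 1, TILE_SIZE);
        int lastRow  = Math.floorDiv(scrollY + h - 1, TILE_SIZE);

        List<Tile> tiles = new ArrayList<>();
        for (int row = firstRow; row <= lastRow; row++) {
            for (int col = firstCol; col <= lastCol; col++) {
                // This tile's world-space origin is (col * TILE_SIZE, row * TILE_SIZE);
                // subtract (scrollX, scrollY) to get where to draw it on screen.
                tiles.add(new Tile(col, row));
            }
        }
        return tiles;
    }
}
```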