Continuously refreshing fabric.js canvas - fabricjs

I am working on a software component that uses fabric.js 1.6.6. The code will be used by other programmers who might be adding/removing objects to/from the canvas and interacting with them.
To avoid delays caused by other programmers' refreshing routines, I decided to implement a global refreshing function that is transparent to users of my code. The refreshing routine looks like this:
(function render() {
  fabric.util.requestAnimFrame(render);
  canvas.renderAll();
})();
I know it might not be super efficient (as the canvas gets refreshed all the time), but it spares other programmers from having to deal with rendering the canvas on their own when implementing specialized objects whose appearance changes continuously and thus requires the canvas to update.
Recently, I tried to migrate my code to fabric 2.0.0 beta 4, but the canvas is not updated anymore. Any ideas on how to fix this?
Also, is there a better, more efficient way to have a continuous canvas-refreshing routine? What I did was inspired by the official fabric.js "Animating polygon points" demo, but my canvas hosts a significantly higher number of objects that change all the time, so this might not be the best strategy.
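For context, the demo's approach schedules a render from each animation step, roughly like this (a sketch; point stands for one of the assumed polygon points being animated):

fabric.util.animate({
  startValue: point.x,
  endValue: point.x + 100,
  duration: 1000,
  onChange: function (value) {
    point.x = value;
    canvas.renderAll(); // one render per animation step
  }
});

With many objects animating simultaneously, every step triggers its own render, which is why I went for a single global loop instead.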

My personal suggestion is to start looking at 2.0.0.
The new version implements a new method called requestRenderAll that makes proper use of requestAnimationFrame.
Consider that having the canvas auto-refresh every 16 ms may be overkill.
To me, the most elegant solution is to ask the other developers to use requestRenderAll and not renderAll.
Chained calls to requestRenderAll have no effect: while a render request is still pending, further calls do not schedule additional renders.
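For illustration, a minimal sketch of that pattern, assuming a fabric.Canvas instance named canvas with a rect object already on it:

// Mutate an object, then ask fabric to schedule a render.
rect.set({ left: rect.left + 10 });
canvas.requestRenderAll(); // schedules a renderAll via requestAnimationFrame
canvas.requestRenderAll(); // no-op: a render is already pending for this frame

Because pending requests coalesce, every caller can request a render after each change without piling up redundant renderAll calls.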

Related

PixiJS: having two different renderers for antialiasing

I have a PixiJS canvas rendered with WebGLRenderer and I need to find a way to make a decent-looking snapshot of one of my containers with Graphics elements in it. The problem is, rendering to texture and cacheAsBitmap don't use antialiasing in this render mode.
Is there a way to create a CanvasRenderer and use it to render my container into a texture? It seems the active WebGLRenderer does not allow anything else: no error is thrown, but the rendered texture has only eternal darkness in it. So maybe there is a way to disable it for a moment and then turn it back on again?
There are other ways to get almost what I want. Using renderer.generateTexture with resolution set to 2 helps a bit. Finally, I can create a really large clone of my container, make some snapshots with generateTexture and then scale them down. But it still looks a bit odd, and it feels like a dirty way to do things too, especially since cloning a container with a lot of content can be problematic in some cases.
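For reference, the resolution workaround mentioned above looks roughly like this (a sketch against the Pixi v4-era API; the exact generateTexture signature varies between versions):

// Supersample the container into a 2x-resolution texture,
// then display it scaled back down to its original size.
var texture = renderer.generateTexture(container, PIXI.SCALE_MODES.LINEAR, 2);
var sprite = new PIXI.Sprite(texture);
sprite.scale.set(0.5); // a 2x texture shown at 1x, so edges look smoother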

Render loop vs. explicitly calling update method

I am working on a 3D simulation program using OpenGL which uses a render loop with a fixed framerate to keep the screen updated as the world changes. Standard procedure really, and for a typical video game this is certainly the best approach (I originally took this code from an OpenGL game tutorial). But for me, the 3D scene will not be changing as rapidly and unpredictably as in a computer game. It will be possible for the 3D scene itself to change from time to time, but in general it won't change between render calls (it's more of a visualisation tool for geometric problems). The user will be able to control the position/orientation of the camera, but in general there will be times when the camera won't move for several seconds/minutes (potentially hundreds of render calls), and since the 3D scene is likely to be static for the majority of the time, I wonder if I really need a continuous render loop...?
My thinking is that I will remove the automatic render loop and instead explicitly call my update method when either:
The 3D scene changes (very rare)
The camera moves (somewhat rare)
As I will be using this largely for research purposes, the scene/camera is likely to stay in one state for several minutes at a time and it seems silly to be continuously updating the frame buffer when it's not changing.
My question then is, is this a good approach? All the online tutorials for 3D graphics rendering seem to deal with game design but that's not really my requirement. In other words, what are the pros and cons of using a render loop vs. manually calling "update()" whenever something changes?
Thanks
There's no problem with this approach; in fact, many 3D apps, like 3ds Max, use explicit rendering. You just pick what is better for your needs: in most games the scene changes each frame, so it's better to have an update loop, but if you were doing, say, a chess game without an animated UI, you could use explicit rendering only when the scene changes.
For apps with rare changes, like 3ds Max or Blender, it is better to render only on change. This way you save CPU/GPU time and power, and your PC doesn't heat up as much.
With explicit rendering you can also use some performance tricks, like drawing a simplified scene while the camera moves. Then, when the camera stops, you render the full scene once more in the background and replace the low-quality rendering with the new one.
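To make the pattern concrete, here is a minimal render-on-demand sketch, written in JavaScript/WebGL terms for illustration (drawScene and cameraDragInProgress are assumed functions; the same idea carries over to desktop OpenGL):

var renderPending = false;

// Callers invalidate the view instead of drawing directly.
function invalidate() {
  if (renderPending) return; // a redraw is already scheduled
  renderPending = true;
  requestAnimationFrame(function () {
    renderPending = false;
    drawScene(); // assumed: issues the actual draw calls
  });
}

// Scene edits and camera moves both funnel through invalidate():
canvas.addEventListener('mousemove', function (e) {
  if (cameraDragInProgress(e)) invalidate(); // assumed helper
});

Between invalidations nothing is drawn at all, which is exactly the saving described above.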

How to read Location of a view whilst it is being animated

I am animating a small space ship (derived from UIView) and periodically (whilst it is animating) send it a PointF to check whether this point is near the space ship's current position.
However, when reading out the Frame position of the view, it keeps returning the starting position from before the animation started.
I think this is by design, but it is causing me big problems, since the space ship(s) should move independently along paths and it is very tricky for me to do this by hand.
Is there another way - and/or has anyone some sample code?
Not sure of a workaround for your issue, but I have some suggestions on game development for iOS.
Your problem is one of the reasons why using GUI frameworks like UIKit/CoreGraphics for games isn't a good idea, both for performance reasons and because they aren't designed for it.
If you are looking for a simple framework for making games on iOS, have you looked at MonoGame? If you are doing lots of animations, we also use XNA Tweener along with MonoGame to get some lifelike animations.
PS - check out our game here.

When playing tours authored in KML, is it possible to dynamically control the camera?

Using the Google Earth plugin API, I want to play a tour authored in KML with the touring capability, but let the user modify the camera controls during playback.
Is it possible?
It depends on how much modification you want to allow.
Tour playback is designed to work with the user changing the orientation of the view (via dragging or the camera controls), but not the position. If the user stops changing the view for long enough, the camera will smoothly snap back to the default orientation for that point in the tour. The zoom and panning controls disappear during the tour, but if the user tries to change the camera position via other methods (like the keyboard), the tour will typically be paused.
The Earth API, however, allows you to absorb or change any of those event behaviors, since you can add a listener for mouse and keyboard events and prevent them from processing as usual or act on them in a completely different way.
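To illustrate the point about absorbing input, a hedged sketch using the Earth API's listener mechanism (ge is the plugin instance; what you do inside the handler is up to you):

// Intercept mouse input before the plugin's default tour handling.
google.earth.addEventListener(ge.getWindow(), 'mousedown', function (event) {
  event.preventDefault(); // absorb the event so the tour isn't paused...
  // ...or translate it into a custom camera adjustment instead.
});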
If you haven't tried it, there's a tour example in the Google Code Playground where you can see what happens with different interactions based on the default event responses.
Finally, if you want really custom tour behavior -- like allowing certain kinds of movement of the camera away from the tour path even as the tour continues -- you will most likely need to write your own camera movement code. Getting the basics of this working isn't too difficult, but getting the right intuitive feel for that kind of interaction is difficult, and probably dataset-dependent. To get started, you can parse the KML directly, find the tour and the tour primitives it contains, and then use the regular camera controls you cited to move between those primitives, adding offsets for any user-supplied movements.
edit: the Earth API tour page cited in the question has an example of getting started with parsing the KML file by getting the plugin to do it for you. You can use this to implement the above suggestion by using the KML DOM walking code to find all the tour primitives (instead of halting as soon as a Tour element is found).
This isn't always the most efficient approach (plugin function calls have overhead, and meanwhile browsers have built-in XML parsing capabilities), but it may be the most straightforward way to start. For many tours, this approach would be perfectly sufficient.
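As a sketch of that DOM-walking idea (hedged; kmlString is the assumed fetched tour KML, and the container check mirrors the samples' walking code):

// Walk the parsed KML DOM and collect every tour, instead of
// stopping at the first one.
var tours = [];
function collectTours(feature) {
  if (feature.getType() === 'KmlTour') {
    tours.push(feature);
  } else if (feature.getType() === 'KmlDocument' ||
             feature.getType() === 'KmlFolder') {
    var children = feature.getFeatures().getChildNodes();
    for (var i = 0; i < children.getLength(); i++) {
      collectTours(children.item(i));
    }
  }
}
collectTours(ge.parseKml(kmlString));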
It is possible, but pretty hard to implement and even harder to control well. I have been playing around with this for quite a while now. I have not had much success myself, but here are two examples by others who have made some progress.
Firstly, the underlying principle they are using is based upon the frameend "tick" - a simple example of it is here:
http://earth-api-samples.googlecode.com/svn/trunk/examples/event-frameend.html
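In outline, a frameend hook looks something like this (a hedged sketch; userHeadingOffset is an assumed variable tracking the user's input):

// Nudge the camera after each frame the tour renders.
google.earth.addEventListener(ge.getGlobe(), 'frameend', function () {
  var camera = ge.getView().copyAsCamera(ge.ALTITUDE_ABSOLUTE);
  camera.setHeading(camera.getHeading() + userHeadingOffset);
  ge.getView().setAbstractView(camera);
});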
The two examples are:
http://maps.myosotissp.com/
and
http://racemyrace.com/race.php
Also, here is an example that worked up until recently. I am not sure why it has stopped, but it appears you can still read the JS being used. It is made by the same person who created the racemyrace website:
http://www.thekmz.co.uk/GEPlugin/pathtour/v3/path_tour_v3.htm
If you happen to work something out, I would appreciate you creating a simple example page and sharing the link. It will probably take a while, so if you could look up my email via my profile and notify me, that would be even better.
Good Luck!

Draw from a separate thread with NSOpenGLLayer

I'm working on an app which needs to draw with OpenGL at a refresh rate at least equal to that of the monitor. And I need to perform the drawing in a separate thread so that drawing is never blocked by intense UI actions.
Currently I'm using an NSOpenGLView in combination with CVDisplayLink and I'm able to achieve 60-80 FPS without any problem.
Since I also need to display some Cocoa controls on top of this view, I tried to subclass NSOpenGLView and make it layer-backed, following Apple's LayerBackedOpenGLView example.
The result isn't satisfactory and I get a lot of artifacts.
Therefore I've solved the problem by using a separate NSWindow to host the Cocoa controls and adding this window as a child window of the main window containing the NSOpenGLView.
It works fine and I'm able to get almost the same FPS as with the initial implementation.
Since I consider this solution rather a dirty hack, I'm looking for an alternative, cleaner way of achieving what I need.
A few days ago I came across NSOpenGLLayer and thought that it could be a viable solution to my problem.
So finally, after all this preamble, here comes my question:
Is it possible to draw to an NSOpenGLLayer from a separate thread using the CVDisplayLink callback?
So far I've tried to implement this, but I'm not able to draw from the CVDisplayLink callback. I can only call -setNeedsDisplay:TRUE on the NSOpenGLLayer from the CVDisplayLink callback and then perform the drawing in -drawInOpenGLContext:pixelFormat:forLayerTime:displayTime: when it gets called automatically by Cocoa. But I suppose that this way I'm drawing from the main thread, am I not?
After googling for this I've even found this post in which the user claims that under Lion drawing can occur only inside -drawInOpenGLContext:pixelFormat:forLayerTime:displayTime:.
I'm on Snow Leopard at the moment but the app should run flawlessly even on Lion.
Am I missing something?
Yes, it is possible, though not recommended. Call display on the layer from within your CVDisplayLink callback. This will cause canDrawInContext:... to be called, and if it returns YES, drawInContext:... will be called, all on whatever thread called display. To make the rendered image visible on screen, you have to call [CATransaction flush]. This method has been suggested on the Apple mailing list, though it is not completely problem-free (the display methods of other views may get called on your background thread as well, and not all views support rendering from a background thread).
The recommended way is to make the layer asynchronous and render the OpenGL context on the main thread. If you cannot achieve a good framerate that way because your main thread is busy elsewhere, it is recommended to move everything else (pretty much your whole application logic) to other threads (e.g. using Grand Central Dispatch) and keep only user input and drawing code on the main thread. If your window is very big, you may still not get anything better than 30 FPS (one frame every two screen refreshes), yet that comes from the fact that CALayer composition is a rather expensive process and has been optimized for more or less static layers (e.g. layers containing a picture), not for layers updating themselves at 60 FPS.
For example, if you are writing a 3D game, you are advised not to mix CALayers with OpenGL content at all. If you need Cocoa UI elements, either keep them separate from your OpenGL content (e.g. split the window horizontally into a part that displays only OpenGL and a part that displays only controls) or draw all controls yourself (which is pretty common for games).
Last but not least, the two-window approach is not as exotic as you may think; that's how VLC (the video player) draws its controls over the video image (which is also rendered with OpenGL on the Mac).
