How to disable multitouch (multiple touches) - fabricjs

Is there a nice clean way to disable multiple touches on the canvas at all?
Thanks in advance.

Fabric.js uses Event.js for touch handling. I couldn't figure out which options could be passed into the canvas constructor, but you might try var canvas = new fabric.Canvas('c', {maxFingers:1}).
Another option might be to use event.preventDefault() inside the handlers for pinch and rotate to cancel two-finger gestures. Here are the pointer counts and the actions defined for them in that library:
1 : click, dblclick, dbltap
1+ : tap, longpress, drag, swipe
2+ : pinch, rotate
If neither of those approaches works for you, you will need to set up your own event listeners for touchstart, touchmove, etc. The MDN page describes how to listen for all touches; just call preventDefault for every touch after the first.
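For the listener approach, a rough sketch could look like this (assuming canvas is your fabric.Canvas instance and that your fabric version exposes wrapperEl, the div fabric wraps around its canvases):
var wrapperEl = canvas.wrapperEl; // the wrapper div around fabric's canvases
['touchstart', 'touchmove'].forEach(function (eventName) {
  wrapperEl.addEventListener(eventName, function (e) {
    // e.touches lists every finger currently on the element;
    // more than one means a multi-touch gesture is starting.
    if (e.touches && e.touches.length > 1) {
      e.preventDefault();   // cancel the gesture
      e.stopPropagation();  // keep it from reaching fabric's own handlers
    }
  }, true); // capture phase, so this runs before listeners on the inner canvas
});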

Related

Fabricjs detect hovered control box

I have a fabricjs canvas onto which I am adding an image. When I click the image, it goes into selected mode.
Is there a way to know if the mouse is over the grapples (the square boxes around the image) but not the one used to rotate?
And if it is, can I fire a JavaScript command from there?
The rotation square is called 'mtr' inside fabricjs.
You can hook a 'mousedown' or 'mouseover' event to the object in this way:
object.on('mouseover', myfunction.bind(object));
The function you pass will receive an options parameter that we will call opt, just for this example.
function myfunction(opt) {
  var mouseevent = opt.e; // the original DOM mouse event
  // __corner holds the result of the last findTargetCorner call
  if (this.__corner === 'mtr') {
    callMyOtherFunction(mouseevent);
  }
}
This will not stop the rotation handling during mousemove, nor the 'modified' event fired on mouseup.
this.__corner refers to the object (this) and holds the last findTargetCorner result. It is not documented and is not a public feature; it is mostly used for internal fabric logic, but it is there and you can use it.
You should propose on the fabric issue tracker making it an official feature, removing the double underscore in front and documenting it in the docs.

Particles generated behind sprite

I just started out with Phaser.
I have a simple sprite in the middle of the screen, and whenever I click the sprite, I emit a particle at the clicked x,y coordinates.
My problem is that the particles are generated behind the sprite. I have tried setting z on the sprite to 1 and the emitter to 1000 without luck.
What am I missing?
var emitter = game.add.emitter(game.world.centerX, game.world.centerY);
emitter.makeParticles('phaser');
var sprite = game.add.sprite(game.world.centerX, game.world.centerY, 'phaser');
sprite.scale.setTo(2, 2);
sprite.inputEnabled = true;
sprite.events.onInputDown.add(function(sender, pointer){
emitter.emitX = pointer.x;
emitter.emitY = pointer.y;
emitter.emitParticle();
}, this);
http://phaser.io/sandbox/cxBVeHrx
EDIT
My actual code is based on the Phaser-ES6-Boilerplate. Even though BdR's answer solves the issue in the sandbox code, I'm not able to apply it to my real code.
I have uploaded both the code and a running example. Hopefully someone can tell me where I have screwed things up...
Separate Phaser items don't have a z-order; instead, it just depends on the order in which you create and add them to the game. Each new sprite, emitter, group, etc. will be displayed on top of all previously added items.
So, simply changing your code to something like this should work.
// first the sprite
var sprite = game.add.sprite(game.world.centerX, game.world.centerY, 'phaser');
sprite.scale.setTo(2, 2);
// then the particles in front of sprite
var emitter = game.add.emitter(game.world.centerX, game.world.centerY);
emitter.makeParticles('phaser');
// then maybe text in front of particles and sprite
var mytest = game.add.bitmapText(10, 20, 'myfont', 'Level 1', 16);
// etc.
Btw, sprites do have a .z value, but it is only used when the sprite is part of a Phaser.Group; it is then used as the display z-order, but only within that group of sprites.
By default, Phaser will not sort objects that get added to a group; it just renders them in the order they are added. In your case, you can simply add the emitter to the group after you add the sprite (the group in this case is the 'game' object).
Of course, having to add objects in drawing order is not ideal, and if you need them sorted dynamically it is not even possible.
Alternatively, you can sort the objects within a group using its 'sort' function, giving it the name of a property to sort by, and sorting whenever you need to (in some cases, in the update callback).
Sorting every frame can be a performance hit, though, especially if you have a lot of objects. Another way you could go about this is by adding groups in draw order (think of them like layers) and then adding objects to those groups in any order. Any group that needs sorting within itself you can sort as well. This way you can choose to have, for example, a background layer that never needs sorting, while everything added to it stays behind every other layer; see the sketch below.
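A rough Phaser 2 sketch of that layer-group idea (the layer names and the 'y' sort key are just for illustration):
// groups render in the order they were added to the game
var backgroundLayer = game.add.group(); // bottom
var spriteLayer = game.add.group();
var particleLayer = game.add.group();   // top

var sprite = spriteLayer.create(game.world.centerX, game.world.centerY, 'phaser');
sprite.scale.setTo(2, 2);

var emitter = game.add.emitter(game.world.centerX, game.world.centerY);
emitter.makeParticles('phaser');
particleLayer.add(emitter); // move the emitter into the top layer

// if one layer needs dynamic ordering, sort only that layer when needed
spriteLayer.sort('y', Phaser.Group.SORT_ASCENDING);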
Good answers from everybody, but you are missing that every GameObject has a depth property, which serves exactly the z-index purpose. This way you do not need to rely on the order of object creation.
There is also an official example.
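That depth property is the Phaser 3 API; a minimal sketch (using the pre-3.60 particles API) might look like this:
// inside a Phaser 3 scene's create() method
var sprite = this.add.sprite(400, 300, 'phaser').setDepth(1);

var particles = this.add.particles('phaser');
particles.setDepth(1000); // higher depth renders on top, regardless of creation order
var emitter = particles.createEmitter({ speed: 100, lifespan: 500, on: false });

this.input.on('pointerdown', function (pointer) {
  emitter.explode(1, pointer.x, pointer.y);
});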
Hope this helps.

Resizable MKOverlay using MKOverlayRenderer

I want to have a custom MKOverlay that's a circle anchored to the user location annotation that the user can resize by pinching. I was able to successfully achieve this using MKOverlayPathRenderer and a custom MKOverlay object by overriding the createPath method and making an arc. The resizing and moving of the overlay was handled by using KVO on the radius and coordinate properties of my overlay. However the resizing was incredibly choppy and the boundingMapRect wasn't correctly calculated.
I've also tried using an image and subclassing just MKOverlayRenderer instead of MKOverlayPathRenderer, overriding - (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context, but when I resize, my CPU usage jumps to 160% (not great, yeah?) and the bounding rect is again drawn incorrectly.
I really think the way to do it is with MKOverlayPathRenderer and maybe having an atomic counter of some kind so that a redraw only gets called say every 5 or 10 times the pinch gesture is triggered.
Does anyone have any suggestions? I've also considered but haven't tried making a UIView and adding it as a subview to the map view and putting the pinch gesture on that but that seems hacky and dirty.
When you have computed a new boundingMapRect on the overlay, you must invoke invalidatePath on your renderer. After that, the system will invoke createPath for you when appropriate.
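For example, something along these lines in the pinch handler (a rough Objective-C sketch; radiusOverlay and the renderer lookup are placeholders for your own objects):
- (void)handlePinch:(UIPinchGestureRecognizer *)pinch {
    // update the model first so the overlay recomputes its boundingMapRect
    self.radiusOverlay.radius *= pinch.scale;
    pinch.scale = 1.0;

    MKOverlayPathRenderer *renderer =
        (MKOverlayPathRenderer *)[self.mapView rendererForOverlay:self.radiusOverlay];
    [renderer invalidatePath];   // discards the cached path; createPath will run again
    [renderer setNeedsDisplay];  // redraw the visible map tiles
}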

show popovers on uilabel touch in iphone

I have a number of UILabels on a single view of an iPhone application (iOS 4.3). How do I handle touch events for all these labels at one time? I want to show popovers on touch of a label. I know popovers are not available on iPhone, so I will be making my own custom ones.
What I did was create a UITapGestureRecognizer with the action @selector(labelTap:) and then call [label addGestureRecognizer:tapGestureRecognizerObject]. But when I use the same UITapGestureRecognizer for all my UILabels, only the last label it was added to responds to the tap.
I have set userInteractionEnabled to YES.
Can any one point me to the right direction?
You need to create a separate UITapGestureRecognizer for each UILabel you want to track; when a UIGestureRecognizer is added to multiple views, it only tracks events from the last view it was added to. To better understand why you need different instances, think of a gesture recognizer as a UIView that handles touch events but doesn't do any drawing.
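A minimal Objective-C sketch (self.labels is assumed to be an NSArray of your UILabels): give every label its own recognizer, all pointing at the same action method.
- (void)viewDidLoad {
    [super viewDidLoad];
    for (UILabel *label in self.labels) {
        label.userInteractionEnabled = YES;
        UITapGestureRecognizer *tap =
            [[UITapGestureRecognizer alloc] initWithTarget:self
                                                    action:@selector(labelTap:)];
        [label addGestureRecognizer:tap];
    }
}

- (void)labelTap:(UITapGestureRecognizer *)recognizer {
    UILabel *tappedLabel = (UILabel *)recognizer.view; // the label this recognizer is attached to
    // present your custom popover anchored to tappedLabel here
}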

Using GDI+ to draw tooltip

I'm new to GDI+ programming and am looking for some advice.
I am loading an image from a file and displaying it using the following functions (some pseudo code included):
Gdiplus::Image *i = new Gdiplus::Image(file, other parameters ... );
graphics.DrawImage(i, other parameters ... ); // graphics is a Gdiplus::Graphics object
I would like to associate a tooltip with the image. Is there any way that I can automatically set/attach a tooltip to the Gdiplus::Image object (or any other Gdiplus control that I wish to draw, for that matter)?
If not, how can such functionality be achieved? I have looked at CToolTipCtrl in WTL but don't know how to attach it to the Gdiplus::Image.
Thanks in advance.
After investigating this more, I've realised that this is not possible, so to speak. You must use GDI+ to draw your own tooltip by monitoring mouse events to see when the mouse is hovering over something, and then using the device context to do the drawing within the mouse-hover event handler.
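A rough Win32/GDI+ fragment of how that could look inside an existing window procedure (imageRect, tooltipVisible, and the image pointer i are assumed to live elsewhere in your code):
case WM_MOUSEMOVE: {
    POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };  // needs <windowsx.h>
    if (PtInRect(&imageRect, pt)) {
        TRACKMOUSEEVENT tme = { sizeof(tme), TME_HOVER, hWnd, HOVER_DEFAULT };
        TrackMouseEvent(&tme);             // ask Windows to send WM_MOUSEHOVER
    } else if (tooltipVisible) {
        tooltipVisible = false;
        InvalidateRect(hWnd, NULL, TRUE);  // hide the tooltip
    }
    break;
}
case WM_MOUSEHOVER:
    tooltipVisible = true;
    InvalidateRect(hWnd, NULL, TRUE);      // trigger WM_PAINT to draw the tooltip
    break;
case WM_PAINT: {
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hWnd, &ps);
    Gdiplus::Graphics g(hdc);
    g.DrawImage(i, imageRect.left, imageRect.top);
    if (tooltipVisible) {
        Gdiplus::SolidBrush back(Gdiplus::Color(255, 255, 255, 225));
        Gdiplus::SolidBrush textBrush(Gdiplus::Color(255, 0, 0, 0));
        Gdiplus::Font font(L"Segoe UI", 9.0f);
        g.FillRectangle(&back, 10.0f, 10.0f, 120.0f, 20.0f);
        g.DrawString(L"My tooltip text", -1, &font,
                     Gdiplus::PointF(14.0f, 12.0f), &textBrush);
    }
    EndPaint(hWnd, &ps);
    break;
}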
