In Phaser.js, capture touch events on the canvas

I'm trying to make a simple game that has a single input based on lessmilk.com's Flappy Bird tutorial. On the keyboard you hit space to jump, but on touch enabled devices I'd like to just have the users touch anywhere on the canvas to jump.
Looking at the input docs, it seems straightforward to capture touch input for a sprite, but I'd like the user to be able to click/touch anywhere.
What's the "Phaser way" to capture all touch events? Do I need to do something hacky like create an invisible sprite covering the entire canvas? Should I bypass Phaser and just attach DOM event handlers?

It turns out I had been misunderstanding the role of Phaser's input manager. I had been trying to use game.input.touch to hook those events, but I just needed to use the higher-level input.onDown event:
// click / touch to jump
game.input.onDown.add(this.jump, this);

Related

How can I configure MRTK to work with touch input in editor and on mobile devices?

I'm building an application that will run on both HoloLens and mobile devices (iOS/Android). I'd like to be able to use the same manipulation handlers on all devices with the goals:
Use ARFoundation for mobile device tracking and input
Use touch input with MRTK's ManipulationHandler, and otherwise handle touch input as normal (UI)
Simulate touch input in the editor (using a touch screen or mouse) but retain the keyboard/mouse controller for camera positioning.
So far I've tried/found:
MixedRealityPlayspace always parents the camera, so I added the ARSessionOrigin to that component, and all the default AR components to the camera (ARCameraManager, TrackedPoseDriver, ARRayCastManager, etc.)
Customizing the MRTK pointer profile to only contain MousePointer and TouchPointer.
Removing superfluous input data providers.
Disabling Hand Simulation in the InputSimulationService
Generally speaking, the method of adding the ARSessionOrigin to the MixedRealityPlayspace works as expected and ARFoundation is trivial to set up. However, I am struggling to understand how to get the ManipulationHandler to respond to touch input.
I've run into the following issues:
Dragging on a touch screen with a finger moves the camera (editor). Disabling the InputSimulationService fixes this, but then I'm unable to move the camera...
Even with the camera disabled, clicking and dragging does not affect the ManipulationHandler.
The debug rays are drawn in the correct direction, but the default touchpointer rays draw in strange positions.
I've attached a .gif explaining this. This is using touch input in the editor. The same effect is observed running on device (Android).
This also applies to Unity UI (a world-space canvas), where clicking on a UI element does not trigger it (on device or in the editor), which suggests to me that this is a pointer issue, not a handler issue.
I would appreciate some advice on how to correctly configure touch and mouse input both in the editor and on device, the goal being to create the pointer from a raycast through the touched screen point (using the camera's projection matrix), and to use two-finger touch in the same way that two hand rays are used.
Interacting with Unity UI in world space on a mobile phone is supposed to work in MRTK, but there are a few bugs in the input system preventing it from working. The issue is tracked here: https://github.com/microsoft/MixedRealityToolkit-Unity/issues/5390.
The fix has not been checked in, but you can apply a workaround for now (thanks largely to the work you yourself did, newske!). The workaround is posted in the issue. Please see https://gist.github.com/julenka/ccb662c2cf2655627c95ffc708cf5a69. Just replace each file in MRTK with the version in the gist.

How to make an overlay which captures no events

I would like to draw some sort of window on top of all the other windows, for example to display some debugging info (like Conky) or things like a timer.
The main thing is that I would like to be able to continue using the other windows while using it (the events should go through transparently).
I've tried doing it with PyGTK, PyQt and others but can't find a way to make it a real overlay with no event capture.
Is there some low-level x11 solution?
I think the Composite-extension-approach will not work when a compositing manager is running (and thus Composite's overlay window is already used).
Since you explicitly mention "no event capture":
The SHAPE extension allows you to set several different shapes for a window. Version 1.1 of this extension added the "input" shape; just setting this to an empty region should pretty much do what you want.
A concrete example of exactly what I think you are asking for can be found in Conky's source code: http://sources.debian.net/src/conky/1.10.3-1/src/x11.cc/?hl=769#L764-L781
Edit: Since you said that you didn't find anything in Gtk (well, PyGtk), here is the function that you need in Gtk: https://developer.gnome.org/gdk3/stable/gdk3-Windows.html#gdk-window-input-shape-combine-region
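For reference, here is a minimal C sketch of the same idea outside of Gtk, along the lines of what Conky does. The Display pointer dpy and the Window win are assumed to already exist, and the helper name is just illustrative; link with -lX11 -lXext:
#include <X11/Xlib.h>
#include <X11/extensions/shape.h>

/* Make 'win' invisible to input: set its ShapeInput shape to the empty
 * region so all pointer events pass through to whatever lies below. */
static void make_click_through(Display *dpy, Window win)
{
    int event_base, error_base;
    if (!XShapeQueryExtension(dpy, &event_base, &error_base))
        return;  /* SHAPE extension not available */

    /* Combining zero rectangles with ShapeSet yields an empty input shape. */
    XShapeCombineRectangles(dpy, win, ShapeInput, 0, 0,
                            NULL, 0, ShapeSet, Unsorted);
    XFlush(dpy);
}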
You might need Composite extension + GetOverlayWindow request:
Version 0.3 of the protocol adds the Composite Overlay Window, which provides compositing managers with a surface on which to draw without interference. This window is always above normal windows and is always below the screen saver window. It is an InputOutput window whose width and height are the screen dimensions. Its visual is the root visual and its border width is zero. Attempts to redirect it using the composite extension are ignored. This window does not appear in the reply of the QueryTree request. It is also an override redirect window. These last two features make it invisible to window managers and other X11 clients. The only way to access the XID of this window is via the CompositeGetOverlayWindow request. Initially, the Composite Overlay Window is unmapped.
CompositeGetOverlayWindow returns the XID of the Composite Overlay Window. If the window has not yet been mapped, it is mapped by this request. When all clients who have called this request have terminated their X11 connections the window is unmapped.
Composite managers may render directly to the Composite Overlay Window, or they may reparent other windows to be children of this window and render to these. Multiple clients may render to the Composite Overlay Window, create child windows of it, reshape it, and redefine its input region, but the specific arbitration rules followed by these clients is not defined by this specification; these policies should be defined by the clients themselves.
C API: XCompositeGetOverlayWindow
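A minimal sketch of that call in C might look like the following (assuming an open Display *dpy; link with -lXcomposite). Combined with the input-shape trick above, the overlay can also be made click-through:
#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>

/* Fetch (and map, if necessary) the Composite Overlay Window of the
 * default screen. Returns None if the Composite extension is missing.
 * Hand it back later with XCompositeReleaseOverlayWindow(dpy, root). */
static Window fetch_overlay_window(Display *dpy)
{
    int event_base, error_base;
    if (!XCompositeQueryExtension(dpy, &event_base, &error_base))
        return None;
    return XCompositeGetOverlayWindow(dpy, DefaultRootWindow(dpy));
}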
PyGTK Solution:
I think the Composite and Shape X extensions are sufficiently ubiquitous, and I shall assume here that they are active on your system. Here's PyGtk code for this:
import cairo

# 'win' is an existing Gtk.Window (GTK 3 / PyGObject)
# avoid title bar and standard window minimize, maximize, close buttons
win.set_decorated(False)
# make the window stick above all others (super button will still override it in the z-order, which is fine)
win.set_keep_above(True)
# make events pass through: keep only pixel (0,0) in the input shape
region = cairo.Region(cairo.RectangleInt(0, 0, 1, 1))
win.input_shape_combine_region(region)
win.show_all()
# set the entire window to be semi-transparent, if we like
win.set_opacity(0.2)
Basically, what this does is tell Gtk that, other than pixel (0,0), the entire window win should not be considered part of itself in terms of event propagation. That in turn, to my current understanding, means that when the pointer moves and clicks, the events go to the underlying window under the pointer position, as if win were not there.
Caveat:
This still allows your overlay window to become the focused window (due to user-initiated window switching, or just because it pops up and gets the focus when your application starts). That means, for example, that keyboard events will still undesirably go to it until the user clicks through it to make it lose focus in favor of whatever window is under the cursor. I would likely use the approach described here to iron out this aspect.
If there's a different and proper approach for making a portion of the screen "display stuff but not receive events", without building an oddball window like above over it, I'm happy to learn about it.
I assume that one's particular desktop environment (GNOME, Unity, etc. on Linux) may, on some occasions, interfere with this solution, depending on version and configuration.

iPhone SDK UIImagePicker Hide Controls

I know how to hide the camera controls (.showsCameraControls = NO), but if I do this I lose the button to switch from the rear to the front-facing camera, which I need. Is there a way to keep that button but lose the controls at the bottom?
I tried keeping all of the controls and overlaying a view on top of the bottom bar, but the bottom bar is always on top of the cameraOverlayView whatever I try. I think this used to work, but it doesn't in iOS 5.0.
I also realise you can add your own button to switch between the two cameras (.cameraDevice), but I want to keep it looking as much like the proper interface as possible.
Any pointers are really appreciated. The whole point of this is that I need to call .takePicture myself, but I want the interface to look exactly like it normally does, with all of the default buttons.

Magnifier-like feature inside a popup window... how to?

I need to create a magnifier-like feature in my app, like the "loupe" effect on the iPhone!
The problem is that I need to do it inside a popup window, and I can't figure out how to make it work!
The popup window displays a grid of colors that I generate and draw one by one using ShapeDrawables. What I want is to display a color bigger, zooming in on it when the user touches and moves a finger around the popup window (the color grid). The idea is to create a tracking/zooming effect on the colors so the user can see more clearly which color the finger is currently over.
Problems are :
I can't seem to create another popup window on top of this one (an Android limitation, I think?).
If I modify the current ShapeDrawable (resize it, change its bounds), the popup window needs to be re-displayed before the change takes effect (which is not acceptable, of course).
So, anyone knows of a way I could draw over that popup window ?
EDIT :
I've tried solving this issue using a custom Toast object, but it doesn't quite do the trick. It works, but the Toast appears slowly, so it is not at all in sync with the user's movement over the color grid.
I'm not sure if this will help you or not, but you might be able to accomplish this by using a second Activity... this second Activity would use Android's translucent theme if you include the following attribute in your manifest:
<activity android:theme="@android:style/Theme.Translucent">
This second activity will now only contain what you place in your layout. That is... the "real" activity you're running will still be visible behind it (anywhere you don't cover it up with views in the new layout).
You also might prefer Theme.Dialog if you really want to resemble a popup.
Something to keep in mind if you take this approach is that you will probably want to override onWindowFocusChanged() in the new activity and call finish() when it loses focus. Additionally, you'll need to figure out how to share your data between the two activities.

How to write an X11 app that follows the cursor

I'd like to write a Linux screen magnifier that's customized to my liking. Ideally, the magnified window would be a square about 150 pixels wide that follows the mouse cursor wherever it goes.
Is it possible to do this in X11? Would it be easier to have an application window that follows the mouse around, or would it be better (or possible) to forget about the window altogether and just make the mouse pointer a 150x150 square that magnifies whatever's underneath?
Look at the source to xeyes?
This actually already exists; it's called xmag (do a Google search for additional info). You might want to check out its source code if you want to know how it works.
EDIT: looks like I misread your question a little bit... if you want a magnified square to follow the mouse pointer around, I suppose it should be possible, but I don't know the technical details of how you'd do it. Regardless, xmag is probably the place to start.
I am unsure whether this can run as its own app or would have to be integrated into your window manager. Either way, you would need libX11 (the package might have a different name from distro to distro). Also, I would suggest taking a look at swarp. I know this is not even close to what you are talking about, but its source code is only 35 lines and it shows what can be done with libX11.
I would personally make it a frameless window that always stays on top, with a 1px hole in the middle. The events that the user generates (mouse clicks, keypresses, whatever) are passed to the window below.
And when the user moves the cursor, your window ought to be able to see that, and you just move the window over a bit. For the magnifying part, well, that is left as an exercise for the reader (because I do not know how to do that as of yet ;-).
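As a rough sketch of the "window that follows the pointer" part in plain Xlib: the program below just polls the pointer position and moves an empty 150x150 override-redirect window along with it. An actual magnifier would also have to grab and scale the screen contents, for example with XGetImage or the Composite/XRender extensions.
/* compile with: cc follow.c -o follow -lX11 */
#include <X11/Xlib.h>
#include <unistd.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;
    int screen = DefaultScreen(dpy);

    /* A plain 150x150 window; override_redirect keeps the window manager
     * from decorating or repositioning it. */
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     0, 0, 150, 150, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));
    XSetWindowAttributes attrs;
    attrs.override_redirect = True;
    XChangeWindowAttributes(dpy, win, CWOverrideRedirect, &attrs);
    XMapRaised(dpy, win);

    for (;;) {
        Window root_ret, child_ret;
        int root_x, root_y, win_x, win_y;
        unsigned int mask;
        XQueryPointer(dpy, RootWindow(dpy, screen), &root_ret, &child_ret,
                      &root_x, &root_y, &win_x, &win_y, &mask);

        /* Park the window just below and to the right of the cursor. */
        XMoveWindow(dpy, win, root_x + 16, root_y + 16);
        XFlush(dpy);
        usleep(20000); /* roughly 50 updates per second */
    }
}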
TeXworks comes with such a feature for inspecting the PDF resulting from typesetting a LaTeX source. You can also choose between a square or a circular magnifier. See https://www.tug.org/texworks/ for access to the code, which can serve as a launchpad.
