Adding a disabled game mode to a Phaser game

I'm trying to add a disabled mode to a Phaser game.
Basically this would not stop the game from moving forward, but would disable the user's input and display a gray overlay. I'm trying to make it work the way this.game.paused works.

Use game.input.enabled
game.input.enabled = false; //all input sources are ignored.
To disable just one type of input, for example the mouse, use
game.input.mouse.enabled = false;
A gray overlay has to be added manually.
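For example, here is a minimal sketch of such a toggle, assuming Phaser 2.x (the setDisabled helper and the game.disabledOverlay property are made-up names for illustration, not Phaser API):

function setDisabled(game, disabled) {
    // Ignore all input sources while disabled; the game keeps running.
    game.input.enabled = !disabled;
    if (disabled && !game.disabledOverlay) {
        // Draw a semi-transparent gray rectangle over the whole canvas.
        var overlay = game.add.graphics(0, 0);
        overlay.beginFill(0x808080, 0.6);
        overlay.drawRect(0, 0, game.width, game.height);
        overlay.endFill();
        overlay.fixedToCamera = true; // stay put if the camera scrolls
        game.disabledOverlay = overlay;
    } else if (!disabled && game.disabledOverlay) {
        game.disabledOverlay.destroy();
        game.disabledOverlay = null;
    }
}

Calling setDisabled(game, true) grays the screen out and mutes input while the update loop keeps running, which mirrors how this.game.paused feels without actually pausing.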

Related

How can I configure MRTK to work with touch input in editor and on mobile devices?

I'm building an application that will run on both HoloLens and mobile devices (iOS/Android). I'd like to be able to use the same manipulation handlers on all devices, with these goals:
Use ARFoundation for mobile device tracking and input
Use touch input with MRTK's ManipulationHandler, and otherwise handle touch input as normal (UI)
Simulate touch input in the editor (using a touch screen or mouse) but retain the keyboard/mouse controller for camera positioning.
So far I've tried/found:
MixedRealityPlayspace always parents the camera, so I added the ARSessionOrigin to that component, and added all the default AR components to the camera (ARCameraManager, TrackedPoseDriver, ARRaycastManager, etc.)
Customizing the MRTK pointer profile to only contain MousePointer and TouchPointer.
Removing superfluous input data providers.
Disabling Hand Simulation in the InputSimulationService
Generally speaking, the method of adding the ARSessionOrigin to the MixedRealityPlayspace works as expected and ARFoundation is trivial to set up. However, I am struggling to understand how to get the ManipulationHandler to respond to touch input.
I've run into the following issues:
Dragging on a touch screen with a finger moves the camera (editor). Disabling the InputSimulationService fixes this, but then I'm unable to move the camera...
Even with the camera disabled, clicking and dragging does not affect the ManipulationHandler.
The debug rays are drawn in the correct direction, but the default touchpointer rays draw in strange positions.
I've attached a .gif explaining this. This is using touch input in the editor. The same effect is observed running on device (Android).
This also applies to Unity UI (world-space canvas), where clicking on a UI element does not trigger it (on device or in the editor), which suggests to me that this is a pointer issue, not a handler issue.
I would appreciate some advice on how to correctly configure touch and mouse input both in the editor and on device, with the goal of raycasting from the screen point (via the camera's projection matrix) to create the pointer, and using two-finger touch in the same way that two hand rays are used.
Interacting with Unity UI in world space on a mobile phone is supposed to work in MRTK, but there are a few bugs in the input system preventing it from working. The issue is tracked here: https://github.com/microsoft/MixedRealityToolkit-Unity/issues/5390.
The fix has not been checked in, but you can apply a workaround for now (thanks largely to the work you yourself did, newske!). The workaround is posted in the issue. Please see https://gist.github.com/julenka/ccb662c2cf2655627c95ffc708cf5a69. Just replace each file in MRTK with the version in the gist.

In phaser.js capture touch events on the canvas

I'm trying to make a simple game that has a single input, based on lessmilk.com's Flappy Bird tutorial. On the keyboard you hit space to jump, but on touch-enabled devices I'd like the users to just touch anywhere on the canvas to jump.
Looking at the input docs, it seems straightforward to capture touch input for a sprite, but I'd like the user to be able to click/touch anywhere.
What's the "Phaser way" to capture all touch events? Do I need to do something hacky like create an invisible sprite covering the entire canvas? Should I bypass the phaser and just attach DOM event handlers?
It seems I'd been misunderstanding the role of Phaser's input manager. I'd been trying to use game.input.touch to hook those events, but I just needed to use the higher-level input.onDown event:
// click / touch to jump
game.input.onDown.add(this.jump, this);
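For instance, wiring both the spacebar and any click/touch on the canvas to the same handler could look like this (Phaser 2.x, with this.jump being the state's own jump method):

create: function() {
    // Spacebar to jump on desktop
    var spaceKey = this.game.input.keyboard.addKey(Phaser.Keyboard.SPACEBAR);
    spaceKey.onDown.add(this.jump, this);

    // Click or touch anywhere on the canvas to jump;
    // input.onDown fires for both mouse and touch pointers.
    this.game.input.onDown.add(this.jump, this);
}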

MergEXT MergZXing layer and barcode not reading

Just started working with this awesome external but have a couple of questions.
When the control is invoked, is it always the top layer, or can I have a transparent background image on top of it so I can frame the control nicely?
Also, my testing seems to read most barcodes, but when it comes down to reading the barcodes on hard drives, the control does not want to decode those. Is the barcode pattern too dense?
I am very impressed thus far with the ease of use of your externals. Makes me want to code more for mobile devices!
An overlaying transparent image is not possible, as far as I know. But couldn't you use
command mergZXingControlSetRect pLeft,pTop,pRight,pBottom
to define the rect of the scanner after creation, or
command mergZXingControlCreate pLeft,pTop,pRight,pBottom
to create the scanner control in the specified rect?
Set the rect smaller than the width and the height of the screen. You could then use an underlying image, displayed outside of the scanner rect, to show the frame around the scanner control. I did not test it myself, but I would assume that this should work.
Unfortunately the native controls in externals, and the ones the engine provides, are added as views on top of the LiveCode view. That means you can't intermingle LiveCode controls with them. One thing some users have done is add a web view with a transparent background and load a PNG image. If you create the barcode view first and the web view second, then the web view will be on top.

How can I create an X window/client that is on top of all other windows, not under WM control and has no input? (overlay, OSD)

I want to write applications (or use existing ones, which would be even more convenient) that behave like a hardware screen's OSD (on-screen display), only without input.
That is: a graphical output (e.g. from a GUI toolkit like Qt or GTK) is placed on a layer where it is above even fullscreen windows like Firefox in F11 mode or a video player in fullscreen mode. That includes being "above" the mouse cursor as well, so technically and graphically the mouse cursor would move below this widget.
I don't know about real fullscreen applications using SDL or OpenGL, though; they are not a requirement, but if you know the answer for those as well, please include it.
Real world applications are read-only overlays like a little webcam window, a TV-station like logo or premade annotations. So all in all this is meant for live presentations, streaming and recording of screencasts and tutorials with minimal post processing.
My own hacked, unsuccessful experiments showed at least that removing the window from WM control (I did this by choosing a GTK popup dialog instead of a real main window) lets you position it in absolute coordinates, and it will ignore things like virtual desktops and workspaces. That is good: you can switch between those and the overlay/HUD will stay in place.
Of course this cannot be done in software at the same Z-level as a hardware screen's OSD. So technically I am talking about a layer above all other windows but below the screensaver or lock-screen layer.
+1 internet for linking to docs and giving the right keywords.
+2 internet for a working code example, language, gui-toolkit etc. doesn't matter.
You probably need the Composite Overlay Window from the Composite extension; see section 3.2, "Composite Overlay Window", in the extension docs. (The cursor is above this window.)
Version 0.3 of the protocol adds the Composite Overlay Window, which
provides compositing managers with a surface on which to draw without
interference. This window is always above normal windows and is always
below the screen saver window. It is an InputOutput window whose width
and height are the screen dimensions. Its visual is the root visual
and its border width is zero. Attempts to redirect it using the
composite extension are ignored. This window does not appear in the
reply of the QueryTree request. It is also an override redirect
window. These last two features make it invisible to window managers
and other X11 clients. The only way to access the XID of this window
is via the CompositeGetOverlayWindow request. Initially, the Composite
Overlay Window is unmapped.
Example using node-x11:
var x11 = require('x11');

x11.createClient(function(err, display) {
    var X = display.client;
    var root = display.screen[0].root;
    X.require('composite', function(err, Composite) {
        Composite.GetOverlayWindow(root, function(err, overlay) {
            // already automatically mapped here:
            //
            // CompositeGetOverlayWindow returns the XID of the Composite Overlay
            // Window. If the window has not yet been mapped, it is mapped by this
            // request. When all clients who have called this request have terminated
            // their X11 connections the window is unmapped.
        });
    });
});
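Building on that, here is a minimal, untested sketch of putting your own content above everything by creating a child window inside the overlay. The CreateWindow argument list and the meaning of the callback's second argument follow node-x11's README conventions; treat both as assumptions to verify against your node-x11 version:

var x11 = require('x11');

x11.createClient(function(err, display) {
    var X = display.client;
    var root = display.screen[0].root;
    X.require('composite', function(err, Composite) {
        Composite.GetOverlayWindow(root, function(err, overlayWid) {
            // Assumption: the callback's second argument is the overlay's XID.
            // Create a small child window inside the overlay; once mapped it
            // sits above all normal windows, including fullscreen ones.
            var wid = X.AllocID();
            X.CreateWindow(
                wid, overlayWid,
                10, 10, 200, 100,   // x, y, width, height
                0, 0, 0, 0,         // border width, depth, class, visual
                { backgroundPixel: display.screen[0].white_pixel });
            X.MapWindow(wid);
        });
    });
});

Note the overlay is an InputOutput window, so for a true click-through OSD you would additionally have to clear its input shape (the XFixes/Shape route), which this sketch does not cover.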

Three.js First Person Controls moves the camera all the time

The game I'm designing currently requires a first person controller and luckily Three.js offers that class as well.
However, I can't stop the camera from flying around. I know that the mouse movement causes the flying, because it happens as soon as I move the mouse. But reading the JS code, I can't find the attribute which causes this movement. Here is how I initiate the controls:
controls = new THREE.FirstPersonControls(camera);
controls.movementSpeed = 0.1;
controls.lookSpeed = 0.001;
controls.lookVertical = true;
I do not want the view direction to change when I am not moving the mouse.
Any ideas?
Keep in mind that FPS-style mouse look in WebGL is really only usable in fullscreen mode. If an application runs in a standard windowed mode, the cursor is visible, and the application cannot detect cursor movements that cross the edge of the window. This makes it impossible to look around in the FPS style (the look movement stops when the cursor reaches the window edge).
This is probably the main reason why the PointerLockControls demo asks you to switch to fullscreen mode.
FirstPersonControls takes a different approach that does work in windowed mode: the look rotation is driven by the cursor's offset from the center of the window rather than by relative mouse movement, so the look movement continues even while the mouse is sitting still away from the center. That is exactly why your camera keeps turning when you are not moving the mouse.
You might want to use PointerLockControls instead.
See an example here:
https://github.com/mrdoob/three.js/blob/master/examples/misc_controls_pointerlock.html
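A minimal sketch of the swap, assuming a three.js build where THREE.PointerLockControls is available (examples/js), takes (camera, domElement), and exposes lock()/unlock(); older builds, like the linked example, request pointer lock on the DOM element manually:

var controls = new THREE.PointerLockControls(camera, document.body);

// Pointer lock may only be requested from a user gesture:
document.body.addEventListener('click', function () {
    controls.lock(); // hides the cursor; look is now driven by mouse movement only
});

controls.addEventListener('unlock', function () {
    // e.g. show a "click to resume" overlay here
});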
