Symbian, Java ME: determine if phone has touch screen

Is there a way to know from code whether the phone has a touch screen or not?
I need it for something like if (PhoneHasTouchScreen) enable_something()

Try calling Canvas.hasPointerEvents() (hasPointerMotionEvents() additionally tells you whether drag events are reported). Bear in mind that there are two types of touch screen in Java ME: one that merely translates touches into the usual key/softkey events, and one that reports pointer coordinates when you touch the screen. For all practical purposes in your app, consider the former to not be a touch screen at all.

Related

How can I configure MRTK to work with touch input in editor and on mobile devices?

I'm building an application that will run on both HoloLens and mobile devices (iOS/Android). I'd like to be able to use the same manipulation handlers on all devices with the goals:
Use ARFoundation for mobile device tracking and input
Use touch input with MRTK with ManipulationHandler and otherwise use touch input as normal (UI)
Simulate touch input in the editor (using a touch screen or mouse) but retain the keyboard/mouse controller for camera positioning.
So far I've tried/found:
MixedRealityPlayspace always parents the camera, so I added the ARSessionOrigin to that component, and all the default AR components to the camera (ARCameraManager, TrackedPoseDriver, ARRayCastManager, etc.)
Customizing the MRTK pointer profile to contain only MousePointer and TouchPointer.
Removing superfluous input data providers.
Disabling Hand Simulation in the InputSimulationService
Generally speaking, the method of adding the ARSessionOrigin to the MixedRealityPlayspace works as expected and ARFoundation is trivial to set up. However, I am struggling to understand how to get the ManipulationHandler to respond to touch input.
I've run into the following issues:
Dragging on a touch screen with a finger moves the camera (editor). Disabling the InputSimulationService fixes this, but then I'm unable to move the camera...
Even with the camera disabled, clicking and dragging does not affect the ManipulationHandler.
The debug rays are drawn in the correct direction, but the default touchpointer rays draw in strange positions.
I've attached a .gif explaining this. This is using touch input in the editor. The same effect is observed running on device (Android).
This also applies to Unity UI (world space canvas) whereby clicking on a UI element does not trigger (on device or in editor), which suggests to me that this is a pointer issue not a handler issue.
I would appreciate some advice on how to correctly configure the touch input and mouse input both in editor and on device, with the goal being a raycast from the screen point using the projection matrix to create the pointer, and use two-finger touch in the same way that two hand rays are used.
Interacting with Unity UI in world space on a mobile phone is supposed to work in MRTK, but there are a few bugs in the input system preventing it from working. The issue is tracked here: https://github.com/microsoft/MixedRealityToolkit-Unity/issues/5390.
The fix has not been checked in, but you can apply a workaround for now (thanks largely to the work you yourself did, newske!). The workaround is posted in the issue. Please see https://gist.github.com/julenka/ccb662c2cf2655627c95ffc708cf5a69. Just replace each file in MRTK with the version in the gist.

How to move window offscreen with wmctrl

I am trying to programmatically move a window so that it is only partially on screen. Doing it manually is no problem: for instance, clicking the VLC title bar and dragging it so that only half the window is visible works just fine.
wmctrl -lG then reports the expected negative x offset:
0x04a00011 0 -293 138 600 420 HEVM002 VLC media player
However, when I then move the window back on screen and try to replicate that position with wmctrl, it doesn't work; the window gets clamped to the screen edge instead:
wmctrl -r "VLC media player" -e 0,-200,0,800,600
I have tested on a couple of window managers, and it seems to work fine on xfwm but NOT on compiz. Is there a flag or something like that I can set to enable moving windows off-screen?
When running under a window manager, this is entirely up to the window manager. Whether there is a flag to force partial off-screen positions depends on which window manager it is.
The only window-manager-agnostic way of achieving this is making the window an override_redirect window. But, of course, this means the window is no longer managed at all. Turning it back into a normal window will cause the window manager to take it over again, which likely (again depending on the window manager) means forcing it back in-bounds.
That said, looking at wmctrl's source code, it uses _NET_MOVERESIZE_WINDOW if supported by the window manager and falls back to XMoveResizeWindow (or similar) otherwise. However, in the first case it casts the position values to unsigned long, which effectively means any negative values are lost. In the second case, negative values seem to signal "don't move", so no luck there either.
You could try using xdotool windowmove instead which will deal with negative values correctly. Maybe also consider filing a bug against wmctrl?
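For illustration, here is a minimal sketch in C (not from the original answer, with illustrative names) of sending the _NET_MOVERESIZE_WINDOW message yourself with properly signed coordinates, the exact step that wmctrl's unsigned cast breaks. Whether a negative position is honoured is, as noted above, still up to the window manager:

/* Build: cc move_offscreen.c -lX11 */
#include <X11/Xlib.h>

static void net_moveresize(Display *dpy, Window win, long x, long y, long w, long h)
{
    XEvent ev = {0};
    ev.xclient.type = ClientMessage;
    ev.xclient.window = win;
    ev.xclient.message_type = XInternAtom(dpy, "_NET_MOVERESIZE_WINDOW", False);
    ev.xclient.format = 32;
    /* low byte: StaticGravity (10); bits 8-11: x/y/w/h fields present;
       bits 12-13: source indication 2 ("pager/tool") */
    ev.xclient.data.l[0] = 10 | (0xF << 8) | (2 << 12);
    ev.xclient.data.l[1] = x;  /* stays signed, unlike wmctrl's unsigned cast */
    ev.xclient.data.l[2] = y;
    ev.xclient.data.l[3] = w;
    ev.xclient.data.l[4] = h;
    XSendEvent(dpy, DefaultRootWindow(dpy), False,
               SubstructureRedirectMask | SubstructureNotifyMask, &ev);
    XFlush(dpy);
}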

In phaser.js capture touch events on the canvas

I'm trying to make a simple game that has a single input based on lessmilk.com's Flappy Bird tutorial. On the keyboard you hit space to jump, but on touch enabled devices I'd like to just have the users touch anywhere on the canvas to jump.
Looking at the input docs, it seems straightforward to capture touch input for a sprite, but I'd like the user to be able to click/touch anywhere.
What's the "Phaser way" to capture all touch events? Do I need to do something hacky like create an invisible sprite covering the entire canvas? Should I bypass the phaser and just attach DOM event handlers?
Seems like I'd been misunderstanding the role of phaser's input manager. I'd been trying to use game.input.touch to hook those events but I just needed to be using the higher level input.onDown event:
// click / touch to jump
game.input.onDown.add(this.jump, this);

Xlib center window

I am writing an Xlib app where I want the window to be centered. I have used XMoveWindow with (desktopWidth - width) / 2, (desktopHeight - height) / 2 and it ends up roughly in the right place.
However the problem is that width and height are the client area, not the total area. Is there any way for me to get the total area of the window?
I need to use Xlib because I am using GLX and OpenGL. I don't want to use SDL, nor a bulky graphics library.
There are various ways to go about this, depending on why you are doing it. The first two are "officially supported" by most window managers and described in specs, and then it descends into fragile hacks.
Semantic
The specs encourage you to use _NET_WM_WINDOW_TYPE rather than setting the position, if it makes sense to do so. See http://standards.freedesktop.org/wm-spec/wm-spec-1.3.html#id2507144
For example, a DIALOG type (or a window with the WM_TRANSIENT_FOR hint set) will usually be centered on its parent window or on the screen, and the _NET_WM_WINDOW_TYPE_SPLASH (splashscreen) type will usually be centered on the screen. "Usually" here means "sensible window managers probably center it, and people using weird window managers are not your problem, let them suffer."
(Another hint along the same lines, though not what you want here, is _NET_WM_STATE_FULLSCREEN, which avoids manually sizing/positioning in order to be fullscreen.)
If semantic hints work, the window manager code to handle the positioning is hopefully smarter than anything one can easily code by hand, for example it might deal with multihead setups. Setting the proper semantic type may also allow the WM to be smart in other ways, beyond positioning.
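As a minimal sketch (assuming Xlib and an EWMH-aware window manager; the helper name is illustrative), setting the semantic type is a single property change made before the window is mapped:

#include <X11/Xlib.h>
#include <X11/Xatom.h>

static void set_splash_type(Display *dpy, Window win)
{
    /* _NET_WM_WINDOW_TYPE is a list of Atoms; here we set just one */
    Atom type_prop = XInternAtom(dpy, "_NET_WM_WINDOW_TYPE", False);
    Atom splash = XInternAtom(dpy, "_NET_WM_WINDOW_TYPE_SPLASH", False);
    XChangeProperty(dpy, win, type_prop, XA_ATOM, 32, PropModeReplace,
                    (unsigned char *)&splash, 1);
}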
Gravity
If there's no semantic hint in the specs that helps you, then you can center by hand. It's important to note that window managers are allowed to ignore a manual position request and some of them will. Some may only honor the request if you set the USPosition flag in WM_NORMAL_HINTS (this flag is supposed to be set only if the user explicitly requested the position, for example with a -geometry command line option). Others may ignore the request always. But, you can probably ignore WMs that do this; the user chose to use that WM.
The way you compensate for the window decorations (the titlebar, etc.) is to use the win_gravity field of WM_NORMAL_HINTS, which is originally in the ICCCM (see http://tronche.com/gui/x/icccm/sec-4.html#s-4.1.2.3) but better-specified in an implementation note in the EWMH: http://standards.freedesktop.org/wm-spec/latest/ar01s09.html#id2570420
For WM_NORMAL_HINTS see http://tronche.com/gui/x/xlib/ICC/client-to-window-manager/wm-normal-hints.html#XSizeHints (note: the type of the property is WM_SIZE_HINTS and the name of the property is WM_NORMAL_HINTS, so there are two different atom names involved).
To center, you would set the win_gravity to Center, which allows you to position the center of the window (including its decorations) instead of the top-left corner.
win_gravity is not often used and is likely to be buggy in some window managers because nobody bothered to code/test it, but it should work in the more mainstream ones.
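A minimal sketch of this approach, assuming Xlib (the helper name is illustrative): set win_gravity to CenterGravity in WM_NORMAL_HINTS, then position the window at the screen's center point before mapping it:

#include <X11/Xlib.h>
#include <X11/Xutil.h>

static void request_centered(Display *dpy, Window win)
{
    int scr = DefaultScreen(dpy);
    int x = DisplayWidth(dpy, scr) / 2;   /* center point of the screen */
    int y = DisplayHeight(dpy, scr) / 2;
    XSizeHints *hints = XAllocSizeHints();
    hints->flags = USPosition | PWinGravity;  /* USPosition: "user requested" */
    hints->x = x;
    hints->y = y;
    hints->win_gravity = CenterGravity;  /* position names the window's center */
    XSetWMNormalHints(dpy, win, hints);
    XFree(hints);
    XMoveWindow(dpy, win, x, y);  /* do this before mapping the window */
}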
Update, possible confusion point: There are other "gravities" in the X protocol, specifically the CreateWindow request lets you set a "bit_gravity" and "win_gravity"; these are different from the XSizeHints.win_gravity. The CreateWindow gravities describe how the contents (pixels/subwindows) of a window are handled when the window is resized.
Hacks based on guessing decoration size
It's a fragile hack, but... you can try to figure out the decoration size and then incorporate that into your positioning.
To get the size of the window decorations, one way is the _NET_FRAME_EXTENTS hint, see http://standards.freedesktop.org/wm-spec/latest/ar01s05.html#id2569862
For older-school window managers (but not the fancy new compositing ones, though those hopefully support _NET_FRAME_EXTENTS) the window decorations are an X window, so you can get your parent window and look at its size.
The problem with both of these approaches is that you have to map the window before the decorations are added, so you have to map; wait to get the MapNotify event; then get the decoration size; then move the window. This will unfortunately cause user-visible flicker (the window will initially appear and then move). I don't think there's a way to get the window decoration size without mapping first.
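A hedged sketch of the _NET_FRAME_EXTENTS route, assuming a window manager that sets the property: read four CARDINAL values (left, right, top, bottom) once the window is mapped:

#include <X11/Xlib.h>

/* returns 1 on success and fills out[4] with left, right, top, bottom */
static int frame_extents(Display *dpy, Window win, long out[4])
{
    Atom prop = XInternAtom(dpy, "_NET_FRAME_EXTENTS", False);
    Atom actual_type;
    int actual_format, i;
    unsigned long nitems, bytes_after;
    unsigned char *data = NULL;
    if (XGetWindowProperty(dpy, win, prop, 0, 4, False, AnyPropertyType,
                           &actual_type, &actual_format, &nitems,
                           &bytes_after, &data) != Success || nitems != 4) {
        if (data)
            XFree(data);
        return 0;  /* property missing: this WM does not support it */
    }
    for (i = 0; i < 4; i++)
        out[i] = ((long *)data)[i];
    XFree(data);
    return 1;
}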
Descending further into the realm of awful hacks, you could assume that for windows after the first one you map, the decorations will match previously-mapped windows. (Not that this is a sound assumption: different kinds of windows may have different decorations.)
Implementation note: keep in mind that the decoration window can be destroyed at any time, which would cause an X error in any outstanding Xlib requests you have that mention that window and by default exit your program. To avoid this, set the X error handler when touching windows that don't belong to your client.
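A minimal sketch of such a defensive error handler (the function name is illustrative):

#include <X11/Xlib.h>
#include <stdio.h>

/* log and ignore X errors instead of letting Xlib terminate the program */
static int ignore_x_error(Display *dpy, XErrorEvent *e)
{
    (void)dpy;
    fprintf(stderr, "ignoring X error %d on resource 0x%lx\n",
            e->error_code, e->resourceid);
    return 0;  /* the return value is ignored by Xlib */
}
/* install once at startup: XSetErrorHandler(ignore_x_error); */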
Override redirect
Using override redirect is a kind of bazooka with bad side effects, and not at all a good idea if your goal is just to center a window.
If you set the override redirect flag when creating a window, then the window manager won't manage its size, position, stacking order, decorations, or map state (the window manager's redirection of ConfigureRequest and MapRequest is overridden).
This is a really bad idea for anything the user would think of as a window. It's usually used for tooltips and popup menus. If you set override redirect on a window, all the normal window management UI will be broken and the stacking order will end up basically random (the window will tend to get stuck on top or on bottom, or worse, get into an infinite-loop restack fight with another client).
But the override-redirected window won't have decorations or be touched by the WM, so you can reliably center it with no interference.
(If you just want no decorations, use a semantic type like SPLASH or use the "MWM" hints, don't use override redirect.)
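For completeness, here is a minimal sketch of creating such a window with Xlib; given the caveats above, this is only appropriate for tooltip/menu-style windows:

#include <X11/Xlib.h>

static Window make_override_window(Display *dpy, int x, int y,
                                   unsigned int w, unsigned int h)
{
    XSetWindowAttributes attrs;
    attrs.override_redirect = True;  /* the WM will leave this window alone */
    return XCreateWindow(dpy, DefaultRootWindow(dpy), x, y, w, h, 0,
                         CopyFromParent, InputOutput,
                         (Visual *)CopyFromParent,
                         CWOverrideRedirect, &attrs);
}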
Summary
The short answer is set the semantic hint if any is applicable, and otherwise use XSizeHints.win_gravity=Center.
You can kind of see why people use toolkits and SDL ;-) there's a lot of weird historical baggage and many corner cases in client-to-window-manager interaction generally; setting window positions is just the beginning of the excitement!
Regarding the caveat above that win_gravity "is likely to be buggy in some window managers": apparently Unity hasn't implemented it. Testing shows that XCB_GRAVITY_STATIC is not respected, and from a quick look at the Unity source code I could not find anything implementing this part of the specification.

How to write an X11 app that follows the cursor

I'd like to write a Linux screen magnifier that's customized to my liking. Ideally, the magnified window would be a square about 150 pixels wide that follows the mouse cursor wherever it goes.
Is it possible to do this in X11? Would it be easier to have an application window that follows the mouse around, or would it be better (or possible) to forget about the window altogether and just make the mouse pointer a 150x150 square that magnifies whatever's underneath?
Look at the source to xeyes?
This actually already exists; it's called Xmag (do a Google search for additional info). You might want to check out its source code if you want to know how it works.
EDIT: it looks like I misread your question a little bit. If you want a magnified square to follow the mouse pointer around, I suppose it should be possible, but I don't know the technical details of how you'd do it. Regardless, the Xmag source is probably the place to start.
I am unsure if this can run as its own app or would have to be integrated into your window manager. Either way, you would need libx11 (might have a different name from distro to distro). Also, I would suggest taking a look at swarp. I know this is not even close to what you are talking about, but the source code is only 35 lines and it shows what can be done with libx11.
I would personally make it a frameless window that always stays on top, with a 1px hole in the middle. The events the user generates (mouse clicks, keypresses, whatever) are passed to the window below.
And when the user moves the cursor, your window can see that and you just move the window over a bit. As for the magnifying part, well, that is left as an exercise for the reader (because I do not know how to do that yet ;-).
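As a minimal sketch of the "window follows the cursor" part, assuming Xlib and an already-created, mapped window (the magnification itself is, as the answer says, left as an exercise):

#include <X11/Xlib.h>
#include <unistd.h>

/* poll the pointer and keep the window centered on it; real code would
   also grab and scale the screen pixels underneath for the magnifier */
static void follow_cursor(Display *dpy, Window win, int size)
{
    Window root_ret, child_ret;
    int root_x, root_y, win_x, win_y;
    unsigned int mask;
    for (;;) {
        XQueryPointer(dpy, DefaultRootWindow(dpy), &root_ret, &child_ret,
                      &root_x, &root_y, &win_x, &win_y, &mask);
        XMoveWindow(dpy, win, root_x - size / 2, root_y - size / 2);
        XFlush(dpy);
        usleep(16000);  /* roughly 60 Hz; XInput2 events would avoid polling */
    }
}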
TeXworks comes with such a feature for inspecting the PDF resulting from typesetting a LaTeX source. You can also choose between a square and a circular magnifier. See https://www.tug.org/texworks/ for access to the code, which can serve as a launchpad.
