I'm writing a game that asks the user to click on an image, which then reveals a different image. I'd like to make the transition between the images look like a playing card being turned over, on both Android and iOS.
I've done a bit of research, and it all seems to indicate that the "curl" visual effect will do what I want, but is only available on iOS (I can't test this as I don't have access to a Mac at the moment).
Is there a cross platform way of doing this "turning a playing card over" sort of transition?
You might scale the (front) image control vertically until it is only one line high, and then scale the second (back-side) image from one vertical line back to its original size.
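A minimal LiveCode sketch of that idea (untested; the image names, step size, and timing are assumptions, and both images are expected to share the same loc):

on flipCard
   local tH
   put the height of img "front" into tH
   -- you may need to set the lockLoc of both images to true
   -- so they scale instead of snapping back to their formatted size
   -- collapse the front image to a one-pixel horizontal line
   repeat with i = tH down to 1 step -8
      set the height of img "front" to i
      wait 10 milliseconds with messages
   end repeat
   hide img "front"
   set the height of img "front" to tH -- restore for the next flip
   -- grow the back image out of the same one-pixel line
   set the height of img "back" to 1
   show img "back"
   repeat with i = 1 to tH step 8
      set the height of img "back" to i
      wait 10 milliseconds with messages
   end repeat
   set the height of img "back" to tH
end flipCard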
Only a very few visual effects are cross-platform. One of them is the reveal up/down/left/right effect. You might use this effect to display a neutral (e.g. gray or blue) picture after hiding the front-side image and before showing the back-side image. Something like this:
lock screen for visual effect
hide img "front"
show img "intermediary"
unlock screen with visual effect reveal left fast
lock screen for visual effect
hide img "intermediary"
show img "back"
unlock screen with visual effect reveal right fast
I know it isn't ideal, but if you want it to be cross-platform, you need to find a workaround. Why don't you check for the platform and write a different routine for each?
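For instance, something like this in LiveCode (untested; "flipCard" stands in for whatever cross-platform fallback you choose, such as the scaling handler sketched above, and the flip effect itself is the iOS-only one mentioned in the next answer):

if the platform is "iphone" then
   -- iOS: use the native flip effect
   lock screen for visual effect
   hide img "front"
   show img "back"
   unlock screen with visual effect flip right fast
else
   -- Android (and desktop): use a cross-platform workaround
   flipCard
end if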
I think the effect you want is flip, and yes, it's only available on iOS at the moment. There are a couple of iOS visual effects that push the image into a UIView and animate it with native methods. This blog post indicates it would be possible to implement something similar on Android, but it would need to be in the engine: http://www.techrepublic.com/blog/software-engineer/use-androids-scale-animation-to-simulate-a-3d-flip/
I'm building an application that will run on both HoloLens and mobile devices (iOS/Android). I'd like to be able to use the same manipulation handlers on all devices, with these goals:
Use ARFoundation for mobile device tracking and input
Use touch input with MRTK's ManipulationHandler, and otherwise handle touch input as normal (UI)
Simulate touch input in the editor (using a touch screen or mouse) but retain the keyboard/mouse controller for camera positioning.
So far I've tried/found:
MixedRealityPlayspace always parents the camera, so I added the ARSessionOrigin to that object, and all the default AR components to the camera (ARCameraManager, TrackedPoseDriver, ARRaycastManager, etc.)
Customizing the MRTK pointer profile to only contain the MousePointer and TouchPointer.
Removing superfluous input data providers.
Disabling Hand Simulation in the InputSimulationService
Generally speaking, the method of adding the ARSessionOrigin to the MixedRealityPlayspace works as expected, and ARFoundation is trivial to set up. However, I am struggling to understand how to get the ManipulationHandler to respond to touch input.
I've run into the following issues:
Dragging on a touch screen with a finger moves the camera (in the editor). Disabling the InputSimulationService fixes this, but then I'm unable to move the camera...
Even with the camera movement disabled, clicking and dragging does not affect the ManipulationHandler.
The debug rays are drawn in the correct direction, but the default TouchPointer rays are drawn in strange positions.
I've attached a .gif illustrating this, using touch input in the editor. The same effect is observed running on device (Android).
This also applies to Unity UI (world-space canvas), where clicking on a UI element does not trigger it (on device or in editor), which suggests to me that this is a pointer issue, not a handler issue.
I would appreciate some advice on how to correctly configure touch and mouse input, both in the editor and on device, with the goal of creating the pointer from a raycast through the screen point (using the projection matrix), and of using two-finger touch in the same way that two hand rays are used.
Interacting with Unity UI in world space on a mobile phone is supposed to work in MRTK, but there are a few bugs in the input system preventing it from working. The issue is tracked here: https://github.com/microsoft/MixedRealityToolkit-Unity/issues/5390.
The fix has not been checked in, but you can apply a workaround for now (thanks largely to the work you yourself did, newske!). The workaround is posted in the issue. Please see https://gist.github.com/julenka/ccb662c2cf2655627c95ffc708cf5a69. Just replace each file in MRTK with the version in the gist.
I'm building an app with Android Studio, and one of my activities (which lets the user draw on a canvas) has a 'toolbar', with things like 'Clear', 'Undo', 'Redo' etc.
I'm using the built-in icons for this (i.e. I go to the 'drawable' folder, right-click, choose 'Add vector asset', and then select the appropriate icon). I've figured out that I can change the size of the icon that gets added, but I can't figure out how to put that to use.
So, on a phone screen, I'd like to use the standard 24dp icons. However, on a tablet screen, I'd like the icon to be bigger, as they're a bit lost on the bigger screen.
I can't figure out how to do this, though, and I'm not even sure whether I'm using the right approach. I know I can create different drawables sub-folders for different densities, but it's not so much the density that matters as the actual screen size.
What's the best way to go about this?
You could increase the toolbar height; the icon will fit its container automatically.
If you do want to change the size of the icon, double-click the icon's XML file and change the android:width and android:height inside. Do not touch the viewportWidth and viewportHeight attributes!
In general, you should not do this, as it wouldn't follow the Google design guidelines. For navigation views there is also:
app:itemIconSize="30dp"
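If the real goal is bigger icons on tablets rather than a one-off size change, one untested approach (the resource and file names here are invented) is to move the size into a dimension resource, override it in an sw600dp bucket, and reference it from the vector asset:

res/values/dimens.xml (phones):

<resources>
    <dimen name="toolbar_icon_size">24dp</dimen>
</resources>

res/values-sw600dp/dimens.xml (devices whose smallest width is at least 600dp, i.e. tablets):

<resources>
    <dimen name="toolbar_icon_size">36dp</dimen>
</resources>

res/drawable/ic_undo.xml (the vector asset; note that viewportWidth and viewportHeight are left alone):

<vector xmlns:android="http://schemas.android.com/apk/res/android"
    android:width="@dimen/toolbar_icon_size"
    android:height="@dimen/toolbar_icon_size"
    android:viewportWidth="24"
    android:viewportHeight="24">
    <!-- keep whatever pathData the asset wizard generated -->
    <path android:fillColor="#FF000000" android:pathData="..." />
</vector>

This keys the size off the screen's smallest width rather than its density, which matches what you're after.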
I've just started working with this awesome external, but have a couple of questions.
When the control is invoked, is it always the top layer, or can I have a transparent background image on top of it so I can frame the control nicely?
Also, in my testing it seems to read most barcodes, but when it comes to reading the barcodes on hard drives, the control does not want to decode those... Is the barcode pattern too dense?
I am very impressed thus far with the ease of use of your externals. Makes me want to code more for mobile devices!
An overlaying transparent image is not possible, as far as I know.
But couldn't you use
command mergZXingControlSetRect pLeft,pTop,pRight,pBottom
to define the rect of the scanner after creation,
or
command mergZXingControlCreate pLeft,pTop,pRight,pBottom
to create the scanner control in the specified rect.
Set the rect smaller than the width and the height of the screen.
You could then use an underlying image, displayed outside of the scanner rect, to show a frame around the scanner control. I did not test it myself, but I would assume that this should work.
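An untested LiveCode sketch of that layout (the margins are arbitrary):

on preOpenCard
   -- inset the scanner from every edge so the framing image shows around it
   mergZXingControlCreate 40, 120, (the width of this card) - 40, (the height of this card) - 120
end preOpenCard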
Unfortunately the native controls in externals, and the ones the engine provides, are added as views on top of the LiveCode view. That means you can't intermingle LiveCode controls with them. One thing that some users have done is add a web view with a transparent background and load a PNG image. If you create the barcode view first and the web view second, the web view will be on top.
I am developing a J2ME application using LWUIT. When I port this application to a Samsung device, it creates the following problem:
int h = Display.getInstance().getDisplayWidth();
It returns 388 for the Samsung GT S5250, so when I draw an image using this dimension, a white strip is displayed at the bottom of the screen. When I call Form.show() it displays correctly and the height is 400, so how do I resolve this issue?
I want to know how the Form size is calculated in LWUIT, how it accounts for the MenuBar height, and why the white strip is displayed at the bottom of the screen.
I think you typed getDisplayWidth() where you meant to type getDisplayHeight().
Regardless, the problem you are seeing is due to a bug in the Samsung device. LWUIT invokes full-screen mode in MIDP, which hides the native title area; however, on some devices this doesn't happen immediately, and thus LWUIT gets incorrect size information from the device. A refresh usually solves this, and by the time LWUIT draws on the screen the size is corrected.
Generally the solution is rather simple: create generic code to build your image, and if the image dimensions are inappropriate when you are about to draw to the screen, just recreate the image on the fly. This will also solve the issue of device rotation, which might pose a problem too.
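A rough Java sketch of that pattern (the component and helper names are made up; the point is simply to compare the cached image against the size LWUIT reports at paint time):

import com.sun.lwuit.Component;
import com.sun.lwuit.Graphics;
import com.sun.lwuit.Image;

public class CardArea extends Component {
    private Image background; // cached image, sized for the last known bounds

    public void paint(Graphics g) {
        int w = getWidth();
        int h = getHeight();
        // if the device corrected its size after entering full-screen mode
        // (or was rotated), the cached image is stale; rebuild it on the fly
        if (background == null
                || background.getWidth() != w
                || background.getHeight() != h) {
            background = createBackground(w, h);
        }
        g.drawImage(background, getX(), getY());
    }

    private Image createBackground(int w, int h) {
        // stand-in for your generic image creation code
        return Image.createImage(w, h);
    }
}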
Well, simple situation. Is it possible to detect if a user has a dual monitor setup from a web application?
If this is possible, is it possible to open a child browser page on this second monitor, so the new window doesn't overlap the old one?
Reason why I ask: I'm working on a web application and at home I have a dual-monitor system. When I go to the administration part of this site, I want it to open in a new browser, preferably on the other desktop. Of course, I could just click, then drag the new window, but doing this automatically seems more fun. :-)
I don't think JavaScript has the proper functions for this. How about Java itself?
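If the check can run in Java proper (say, an applet with enough permissions) instead of browser JavaScript, AWT can enumerate the attached displays; a minimal sketch:

import java.awt.GraphicsEnvironment;

public class MonitorCheck {
    public static void main(String[] args) {
        // each attached monitor shows up as one screen device
        int screens = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getScreenDevices().length;
        System.out.println(screens > 1 ? "multiple monitors" : "single monitor");
    }
}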
I don't think you'll be able to directly detect a dual-monitor setup, but you can probably make a good guess by looking at the screen resolution, using JavaScript's screen.width and screen.height. If the ratio of width to height is 8:3, there's a good chance they have two standard 4:3 monitors side by side. You can do a similar calculation for 16:9 or 16:10.
Using maxpower47's suggestion about resolution, the only way to display the page on the other monitor would be to open a popup and use the options to set the left, top, width and height properties, so the window will appear on the second monitor at a decent size.
Here is a link that describes how to do this: http://www.netmechanic.com/news/vol4/javascript_no7.htm
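Putting the two suggestions together, an untested JavaScript sketch (the 2.5 ratio threshold, margins, and window name are arbitrary, and it assumes the browser reports the combined desktop size, which not every browser does):

function openAdminOnSecondMonitor(url) {
    // ~8:3 suggests two 4:3 monitors side by side
    // (similarly 32:9 for two 16:9, or 16:5 for two 16:10)
    var ratio = screen.width / screen.height;
    if (ratio >= 2.5) {
        var half = screen.width / 2;
        // place the popup in the right-hand half of the desktop
        window.open(url, "adminWindow",
            "left=" + half + ",top=0,width=" + (half - 50) +
            ",height=" + (screen.height - 100));
    } else {
        window.open(url, "adminWindow");
    }
}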