Need help navigating the virtual scene programmatically on the emulator - android-studio

I am trying to create automated tests for our app using UIAutomator. Part of our tests involves scanning multiple QR codes. I found a way to load the QR code images into the virtual scene using this solution: Android emulator camera custom image
However, I need to navigate inside the scene in order to scan the intended QR code. Manually, I can navigate by holding option + WASD or option + mouse movement, but I wish to do this automatically using code.
I have tried looking into workarounds like sending keystrokes for the keys mentioned above, or manipulating the x, y, z rotation coordinates of the emulator to move the view, but to no avail.
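One workaround worth sketching: automate the emulator window itself from the host, since the option/alt + WASD navigation is handled by the emulator's own UI rather than by the virtual device (which is why keystrokes sent through adb or UIAutomator never reach it). The Kotlin (JVM, host-side) sketch below assumes a Linux host with xdotool installed and an emulator window whose title contains "Android Emulator"; the window title and the alt key binding are assumptions to adjust for your setup, and on macOS you would need an equivalent such as AppleScript.

import java.util.concurrent.TimeUnit

// Host-side helper (plain JVM, not part of the UIAutomator APK).
fun sh(vararg cmd: String): String =
    ProcessBuilder(*cmd).redirectErrorStream(true).start().run {
        waitFor(5, TimeUnit.SECONDS)
        inputStream.bufferedReader().readText().trim()
    }

// Holds alt+<key> for the given duration to walk through the virtual scene.
fun holdSceneKey(key: String, millis: Long) {
    // Find and focus the emulator window (title assumed to contain "Android Emulator").
    val windowId = sh("xdotool", "search", "--name", "Android Emulator").lines().first()
    sh("xdotool", "windowactivate", "--sync", windowId)
    sh("xdotool", "keydown", "alt+$key")
    Thread.sleep(millis)
    sh("xdotool", "keyup", "alt+$key")
}

fun main() {
    holdSceneKey("w", 1500) // move forward for roughly 1.5 s
    holdSceneKey("a", 500)  // strafe left briefly
}

A test-runner script could call this helper between test steps, before the UIAutomator test asserts that the expected QR code was scanned.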

Related

When switching layouts in Android Studio, how do I go back if there was no layout when the app started? Or how do I un-set a new layout?

I've developed a game in Love2d (Lua) and used love-android as the tool to make it run via Android Studio. The game works perfectly on Android devices, so now I'm trying to add BillingClient.
My problem is that the project created by love-android has only one activity (gameActivity), which extends SdlActivity. Somehow it runs the zipped .love game, but it has no layouts.
So, when I create a button so players can buy in-apps, I set a layout. But after the onClick, I cannot get rid of this button layout, because I have no layout to setContentView back to. I've also tried to hide the billing layout with GONE, INVISIBLE, and button.setEnabled(false), but it just goes to a black screen. Since I'm new to Android Studio and still haven't figured out how the game runs after the love-android tool is used, I'm not even sure whether it is still running after I tap this button. Any suggestion on how I can implement a fully working button in such a case?
If you need more info, I’d be happy to provide it.
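A sketch of one possible approach (untested with love-android; the layout and ID names below are placeholders): rather than switching layouts with setContentView, overlay the billing UI with addContentView and remove the overlay view from its parent when you are done, so the underlying SDL/LÖVE surface keeps rendering and there is nothing to restore.

import android.app.Activity
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.Button

// Sketch only: R.layout.billing_overlay and R.id.buy_button are placeholder
// resources you would add to the love-android project yourself.
private var billingOverlay: View? = null

// Call from the activity that extends SdlActivity.
fun showBillingOverlay(activity: Activity) {
    val overlay = LayoutInflater.from(activity).inflate(R.layout.billing_overlay, null)
    // addContentView stacks the overlay on top of the existing game view,
    // so setContentView is never called and nothing needs to be restored.
    activity.addContentView(
        overlay,
        ViewGroup.LayoutParams(
            ViewGroup.LayoutParams.MATCH_PARENT,
            ViewGroup.LayoutParams.MATCH_PARENT
        )
    )
    overlay.findViewById<Button>(R.id.buy_button).setOnClickListener {
        // start the BillingClient purchase flow here, then dismiss the overlay
        hideBillingOverlay()
    }
    billingOverlay = overlay
}

fun hideBillingOverlay() {
    // Detach the overlay from whatever parent addContentView gave it.
    billingOverlay?.let { (it.parent as? ViewGroup)?.removeView(it) }
    billingOverlay = null
}

If the screen still goes black after the overlay is removed, that would suggest the SDL surface itself has been paused rather than a layout problem, which is worth checking separately.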

How can I configure MRTK to work with touch input in editor and on mobile devices?

I'm building an application that will run on both HoloLens and mobile devices (iOS/Android). I'd like to be able to use the same manipulation handlers on all devices, with the following goals:
Use ARFoundation for mobile device tracking and input
Use touch input with MRTK's ManipulationHandler, and otherwise use touch input as normal (UI)
Simulate touch input in the editor (using a touch screen or mouse) but retain the keyboard/mouse controller for camera positioning.
So far I've tried/found:
MixedRealityPlayspace always parents the camera, so I added the ARSessionOrigin to that object, and all the default AR components to the camera (ARCameraManager, TrackedPoseDriver, ARRaycastManager, etc.)
Customizing the MRTK pointer profile to only contain MousePointer and TouchPointer.
Removing superfluous input data providers.
Disabling Hand Simulation in the InputSimulationService
Generally speaking, the method of adding the ARSessionOrigin to the MixedRealityPlayspace works as expected and ARFoundation is trivial to set up. However, I am struggling to understand how to get the ManipulationHandler to respond to touch input.
I've run into the following issues:
Dragging on a touch screen with a finger moves the camera (editor). Disabling the InputSimulationService fixes this, but then I'm unable to move the camera...
Even with the camera disabled, clicking and dragging does not affect the ManipulationHandler.
The debug rays are drawn in the correct direction, but the default touchpointer rays draw in strange positions.
I've attached a .gif explaining this. This is using touch input in the editor. The same effect is observed running on device (Android).
This also applies to Unity UI (world space canvas) whereby clicking on a UI element does not trigger (on device or in editor), which suggests to me that this is a pointer issue not a handler issue.
I would appreciate some advice on how to correctly configure the touch input and mouse input both in editor and on device, with the goal being a raycast from the screen point using the projection matrix to create the pointer, and use two-finger touch in the same way that two hand rays are used.
Interacting with Unity UI in world space on a mobile phone is supposed to work in MRTK, but there are a few bugs in the input system preventing it from working. The issue is tracked here: https://github.com/microsoft/MixedRealityToolkit-Unity/issues/5390.
The fix has not been checked in, but you can apply a workaround for now (thanks largely to the work you yourself did, newske!). The workaround is posted in the issue. Please see https://gist.github.com/julenka/ccb662c2cf2655627c95ffc708cf5a69. Just replace each file in MRTK with the version in the gist.

Guidelines when using the camera in Android Studio

I'm currently working on an app in Android Studio where I need to capture an image using the camera on the phone. The image must be very specific and not contain any background noise.
The way I want to solve this problem is by adding a box to the camera preview indicating the region of interest (ROI). After that, the image can be cropped so that only the content of the ROI remains.
How do I add this box to define ROI?
In my mind it would be perfect if it was a thin white line.
Can I do it if I use the Image Capture Intent or do I have to create my own camera app?
Check this out - is that what you're after?
http://code.tutsplus.com/tutorials/capture-and-crop-an-image-with-the-device-camera--mobile-11458
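As a supplement, here is a minimal Kotlin sketch of the overlay-and-crop idea. The ROI proportions are illustrative, and it assumes you host your own camera preview, since the stock image-capture Intent shows the system camera UI that you cannot draw over.

import android.content.Context
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.RectF
import android.view.View

// Overlay view placed on top of your own camera preview (e.g. in a FrameLayout).
class RoiOverlayView(context: Context) : View(context) {

    private val stroke = Paint(Paint.ANTI_ALIAS_FLAG).apply {
        style = Paint.Style.STROKE
        color = Color.WHITE
        strokeWidth = 2f          // thin white line
    }

    val roi = RectF()

    override fun onSizeChanged(w: Int, h: Int, oldw: Int, oldh: Int) {
        // Centre a box covering the middle 60% of the preview.
        roi.set(w * 0.2f, h * 0.2f, w * 0.8f, h * 0.8f)
    }

    override fun onDraw(canvas: Canvas) {
        canvas.drawRect(roi, stroke)
    }
}

// After capture, keep only the ROI. The rectangle must first be scaled from
// preview coordinates to the captured bitmap's resolution.
fun cropToRoi(source: Bitmap, roi: RectF): Bitmap =
    Bitmap.createBitmap(
        source,
        roi.left.toInt(), roi.top.toInt(),
        roi.width().toInt(), roi.height().toInt()
    )

Stack the overlay above the preview view in a FrameLayout so the white box is drawn over the live camera image.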

MergEXT MergZXing layer and barcode not reading

Just started working with this awesome external but have a couple of questions.
When the control is invoked, is it always the top layer, or can I have a transparent background image on top of it so I can frame the control nicely?
Also, my testing seems to read most barcodes, but when it comes to reading the barcodes on hard drives, the control does not want to decode those. Is the barcode pattern too dense?
I am very impressed thus far with the ease of use of your externals. Makes me want to code more for mobile devices!
An overlaying transparent image is not possible, as far as I know.
But couldn't you use
command mergZXingControlSetRect pLeft,pTop,pRight,pBottom
to define the rect of the scanner after creation,
or
command mergZXingControlCreate pLeft,pTop,pRight,pBottom
to create the scanner control in the specified rect.
Set the rect smaller than the width and the height of the screen.
You could then use an underlying image, displayed outside of the scanner rect, to show a frame around the scanner control. I did not test it myself, but I would assume that this should work.
Unfortunately, the native controls in externals and the ones the engine provides are added as views on top of the LiveCode view. That means you can't intermingle LiveCode controls with them. One thing that some users have done is add a web view with a transparent background and load a PNG image. If you create the barcode view first and the web view second, then the web view will be on top.

In Dreamweaver CS5, is it possible to detach the design preview (to move it to a second monitor)?

Is it possible to detach the design preview from the code view, for use on a second monitor?
If you don't like Split view (vertical or horizontal), you can put the code view portion on another monitor by using the Code Inspector (Window -> Code Inspector; F10 on Windows, I think Cmd+F10 on Mac). The Code Inspector is a floating panel that you can place wherever you want. If you make changes in the Code Inspector, you need to refresh the document (F5 on Windows and Mac) so that the updates carry over.
Beyond that, you'll need to live with it unless you stretch Dreamweaver so that it spans two monitors, then use Split view and adjust the split point.
When you're working on localhost, try this:
Monitor 1: Dreamweaver code view
Monitor 2: Firefox with the Auto Reload add-on (reloads the page automatically when selected local files are changed)
You will have better results...
