ARKit plane detection - Xcode 9 beta

I am trying to use the code provided by Apple in the ARKit demo app for plane detection, but it's not working consistently: in some cases it detects the surface perfectly, while in others it does not detect the plane at all. I also noticed the same behavior with plane detection in the demo ARKit app itself.
When a plane surface is detected the yellow square closes, but that is not the case every time. Has anyone faced the same issue? How can I make this plane detection behavior consistent?

Plane detection depends a lot on real world conditions. You need good lighting, a surface that has a decent amount of visible detail, and a decent amount of clear flat space. For example, a plain white table or a black tablecloth makes plane detection much slower / less reliable. A wood desk with visible grain works much better, but not if it's cluttered with keyboards and mice and cables and devices (not that any of us would have a desk like that, of course...).
Plane detection involves motion and parallax triangulation, too. If you point your device at a good surface (as described above), but only change your perspective on that surface by rotating the device (say, by spinning in your swivel chair), you're not feeding ARKit much more useful information than if you just held still. On the other hand, if you move the device side to side or up and down by at least several inches, its perspective on the surface will gain some parallax, which will speed/improve plane detection.
Update: If you're developing an app that depends on plane detection, it helps to cue the user to perform these motions. The third-party demos shown in the labs at WWDC17 had some great app-specific ways to do this: Lego had a little mini-game of guiding a toy helicopter in to a landing; The Walking Dead told the player to search the floor for zombie footprints; etc.
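If you want to cue the user from code, here is a minimal sketch (not from the Apple demo): it assumes your view controller is already the ARSCNView's delegate, and statusLabel is a hypothetical UILabel you would add yourself. The session's tracking-state changes are a reasonable hook.
// Sketch: ARSCNViewDelegate also receives ARSessionObserver callbacks,
// so the delegate can react to tracking-quality changes.
func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
    switch camera.trackingState {
    case .limited(.initializing):
        statusLabel.text = "Move the device side to side to start detection." // statusLabel is hypothetical
    case .limited(.insufficientFeatures):
        statusLabel.text = "Point at a more detailed surface or add some light."
    case .limited(.excessiveMotion):
        statusLabel.text = "Slow down a little."
    case .normal:
        statusLabel.text = "Scan a flat surface to detect a plane."
    default:
        break
    }
}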

Add more light to the room. ARKit works better in a well-lit room.
You can't influence plane detection yourself beyond that. Wait for the official iOS 11 release.

import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!
    let configuration = ARWorldTrackingConfiguration()

    override func viewDidLoad() {
        super.viewDidLoad()
        self.sceneView.debugOptions = [SCNDebugOptions.showFeaturePoints, SCNDebugOptions.showWorldOrigin]
        self.sceneView.delegate = self
        self.configuration.planeDetection = .horizontal
        self.sceneView.session.run(configuration)
    }
}
// MARK: - ARSCNViewDelegate -
extension ViewController: ARSCNViewDelegate {

    // Called when ARKit adds a node for a newly detected anchor.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else {
            return
        }
        print("Plane anchor detected: \(planeAnchor)")
    }

    // Called when an existing plane anchor is refined (its extent/center change).
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else {
            return
        }
    }

    // Called when ARKit removes an anchor (e.g. when planes get merged).
    func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else {
            return
        }
    }
}
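If you also want visual feedback when a plane is found (not part of the question's original code; this is just a sketch using standard SceneKit API), you could drop a translucent SCNPlane onto the anchor's node from renderer(_:didAdd:for:):
// Sketch: visualize a detected plane as a translucent yellow SCNPlane.
// Call this from renderer(_:didAdd:for:) after the ARPlaneAnchor guard succeeds.
func addPlaneVisualization(to node: SCNNode, for planeAnchor: ARPlaneAnchor) {
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.diffuse.contents = UIColor.yellow.withAlphaComponent(0.4)

    let planeNode = SCNNode(geometry: plane)
    planeNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
    // SCNPlane is vertical by default, so rotate it to lie flat on the horizontal plane.
    planeNode.eulerAngles.x = -.pi / 2

    node.addChildNode(planeNode)
}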

Related

Do instanced Area2D objects not detect the mouse?

I am trying to get a dynamically instanced KinematicBody2D, with an Area2D attached, to handle mouse entered/exited inputs. I have created my Area2D with the correct collision shape, and I have tested a similar collision shape for detecting other Area2Ds, which works happily; however, the mouse detection is not triggering the function as it should.
I am unsure why it does not appear to be detecting my mouse. I am assuming I have set the masks incorrectly and it is not on the same layer, although looking at the documentation this is not suggested to be a problem.
I am unsure of what code to attach because there is not really any code written at this point.
Any help would be appreciated.
To detect mouse events on an Area2D or KinematicBody2D, set input_pickable to true and connect to one or more of the provided signals.
KinematicBody2D and Area2D both inherit from CollisionObject2D, so they can both handle mouse input. This means you don't need to add an Area2D to your KinematicBody2D unless the area that detects clicks needs to be different from the area that detects collisions (e.g. only a small part of a larger object is clickable).
Here's how you could detect mouse events on a KinematicBody2D with some CollisionShape2D:
# Attach this script to the KinematicBody2D (or Area2D) that owns the CollisionShape2D.
func _ready():
    input_pickable = true
    connect("mouse_entered", self, "_on_mouse_entered")
    connect("mouse_exited", self, "_on_mouse_exited")
    connect("input_event", self, "_on_input_event")

func _on_mouse_entered():
    print("mouse entered")

func _on_mouse_exited():
    print("mouse exited")

func _on_input_event(viewport, input_event, shape_idx):
    var mouse_event = input_event as InputEventMouseButton
    if mouse_event:
        prints("Mouse button clicked:", mouse_event.button_index)

How can I simulate hand rays on HoloLens 1?

I'm setting a new project which is intended to deploy to both HoloLens 1 and 2, and I'd like to use hand rays in both, or at least be able to simulate them on HoloLens 1 in preparation for HoloLens 2.
What I have got so far is:
- Customizing the InputSimulationService to be gesture-only (so I can test in the editor)
- Adding the GGVHand Controller Type to DefaultControllerPointer Options in the MRTK/Pointers section.
This gets it to show up and respond to clicks both in editor and device, but it does not use the hand coordinates and instead raycasts forward from 0,0,0, which suggests that the GGV Hand Controller is providing a GripPosition (of course with no rotation due to HL1) but not providing a Pointer Pose.
I imagine the cleanest way to do this would be to add a pointer pose to the GGV Hand controller, or add (estimated) rotation to the GripPosition and use this as the Pose Action in the ShellHandRayPointer. I can't immediately see where to customize/insert this in the MRTK.
Alternatively, I could customize the DefaultControllerPointer prefab but I am hesitant to do so as the MRTK seems to still be undergoing frequent changes and this would likely lead to upgrade headaches.
You could create a custom pointer that would set the pointer's rotation to be inferred based on the hand position, and then like you suggested use Grip Pose instead of Pointer Pose for the Pose Action.
The code of your custom pointer would look something like this:
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.Utilities;
using UnityEngine;

// Note: you could extend ShellHandRayPointer if you wanted the beam bending,
// however configuring that pointer requires careful setup of the asset.
public class HL1HandRay : LinePointer
{
    public override Quaternion Rotation
    {
        get
        {
            // Set rotation to be the line from the head to the hand, rotated a bit outward
            float sign = Controller.ControllerHandedness == Handedness.Right ? -1f : 1f;
            return Quaternion.Euler(0, sign * 35, 0) * Quaternion.LookRotation(Position - CameraCache.Main.transform.position, Vector3.up);
        }
    }

    // We cannot use the base IsInteractionEnabled
    // because HL1 hands always have their "is in pointing pose" field set to false.
    // You may want to do more thorough checks here, following the BaseControllerPointer implementation.
    public override bool IsInteractionEnabled => IsFocusLocked || IsTracked;
}
Then create a new pointer prefab and configure your pointer profile to use the new pointer prefab. Creating your own prefab instead of modifying MRTK prefabs has advantage of ensuring that MRTK updates will not overwrite your prefabs.
Here are some captures of the simple pointer prefab I made to test this, with the relevant changes highlighted:
And then the components I used:

Display notifications correctly in Android Wear

My notifications display correctly when paired with a square Android Wear device - they are just sent to the device (watch) for automatic display, to keep the code simple. However, on round Wear watches they sit in the default center of the screen, and some of the notification info is actually cut off at the edge of the round screen.
Notification.BigTextStyle myStyle = new Notification.BigTextStyle()
        .bigText(myText);

Notification.Builder notificationBuilder =
        new Notification.Builder(this)
                .setStyle(myStyle);
// and so on
It might, however, be a case of adding the following few lines of code (and its class import) somewhere:
Notification.WearableExtender myextend = new Notification.WearableExtender()
        .setGravity(Gravity.TOP); // import android.view.Gravity

// add other code, then apply the extender to the builder:
notificationBuilder.extend(myextend);
Based on this documentation: since some Android Wear devices have square screens while others have round ones, your watch face should adapt to and take advantage of the particular shape of the screen, as described in the design guidelines.
Android Wear lets your watch face determine the screen shape at runtime. To detect whether the screen is square or round, override the onApplyWindowInsets() method in the CanvasWatchFaceService.Engine class as follows:
private class Engine extends CanvasWatchFaceService.Engine {
    boolean mIsRound;
    int mChinSize;

    @Override
    public void onApplyWindowInsets(WindowInsets insets) {
        super.onApplyWindowInsets(insets);
        mIsRound = insets.isRound();
        mChinSize = insets.getSystemWindowInsetBottom();
    }
    ...
}
Devices with round screens can contain an inset (or "chin") at the bottom of the screen. To adapt your design when you draw your watch face, check the value of the mIsRound and mChinSize member variables.
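For example (a sketch only; onDraw is the standard drawing callback of CanvasWatchFaceService.Engine, and mTextPaint plus the 10% inset are made-up illustration values), you could use those fields to keep content inside the visible area:
@Override
public void onDraw(Canvas canvas, Rect bounds) {
    // Keep content away from the curved edge on round screens
    // and above the "chin" inset at the bottom.
    float edgeInset = mIsRound ? bounds.width() * 0.1f : 0f;
    float maxY = bounds.height() - mChinSize - edgeInset;
    canvas.drawText("12:00", bounds.centerX(), maxY, mTextPaint); // mTextPaint: a Paint field you define
}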
Here are some related threads which might help:
Is there any way to detect if the clock is round?
How to set the Watchface in the center of watch

Object Collision Issue - Weird Behaviour - Unity

I'm wondering if someone could give me a hand with this problem I'm having with objects and collisions in Unity.
I have a sphere object controlled by the user's phone accelerometer. The sphere moves around fine, but once it hits a wall it starts acting weird: it pulls in the direction of the wall it collided with, starts bouncing, and is just overall no longer responsive to the movement of the phone.
Any idea as to why this could be happening?
Here is the script used to control the player's sphere.
using UnityEngine;
using System.Collections;

public class PlayerController : MonoBehaviour {

    public float speed;

    void Update() {
        // Map the accelerometer onto the horizontal plane.
        Vector3 dir = Vector3.zero;
        dir.x = Input.acceleration.x;
        dir.z = Input.acceleration.y;

        if (dir.sqrMagnitude > 1)
            dir.Normalize();

        dir *= Time.deltaTime;
        // Moves the transform directly, bypassing the physics engine.
        transform.Translate(dir * speed);
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.gameObject.tag == "Pickup") {
            other.gameObject.SetActive(false);
        }
    }
}
That happens because your object has a Rigidbody component and, I suppose, it's not a kinematic Rigidbody. Basically, it behaves just like it should: a real physical object will not pass through another object; that is the most basic behaviour of a physics engine. However, since you don't operate on the physics-based object using forces, but manually change its position, you break a level of abstraction. As a result, you move the object inside the wall, and now it can't get out.
Use Rigidbody.AddForce instead. If you want to pull or push the object (rather than just move it, which contradicts the fact that the object is managed by physics) in a certain direction every frame, you should apply ForceMode.Acceleration (or ForceMode.Force, if you want the effect to depend on the mass) every physics frame, which means you have to use FixedUpdate instead of Update, as in the sketch below.
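A minimal sketch of that approach (the class name PlayerForceController and the default speed are illustrative; it assumes the sphere has a non-kinematic Rigidbody attached):
using UnityEngine;

// Hypothetical replacement for the movement part of PlayerController:
// drive the Rigidbody with forces instead of setting transform.position.
public class PlayerForceController : MonoBehaviour
{
    public float speed = 10f;
    private Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    // Physics should be applied in FixedUpdate, once per physics step.
    void FixedUpdate()
    {
        Vector3 dir = new Vector3(Input.acceleration.x, 0f, Input.acceleration.y);
        if (dir.sqrMagnitude > 1f)
            dir.Normalize();

        // Acceleration mode ignores mass; use ForceMode.Force if mass should matter.
        rb.AddForce(dir * speed, ForceMode.Acceleration);
    }
}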

j2me screen flicker when switching between canvases

I'm writing a mobile phone game using j2me. In this game, I am using multiple Canvas objects.
For example, the game menu is a Canvas object, and the actual game is a Canvas object too.
I've noticed that, on some devices, when I switch from one Canvas to another, e.g from the main menu to the game, the screen momentarily "flickers". I'm using my own double buffered Canvas.
Is there any way to avoid this?
I would say that using multiple canvases is generally bad design. On some phones it will even crash. The best way would really be to use one Canvas and track the state of the application. Then, in the paint method, you would have:
protected void paint(final Graphics g) {
    if (menu) {
        paintMenu(g);
    } else if (game) {
        paintGame(g);
    }
}
There are better ways to handle application state with screen objects that would make the design cleaner, but I think you get the idea :)
/JaanusSiim
Do you use double buffering? If the device itself does not support double buffering, you should define an off-screen buffer (Image) and paint to it first, then paint the end result to the real screen. Do this for each of your canvases. Here is an example:
public class MyScreen extends Canvas {

    private Image osb;
    private Graphics osg;
    //...

    public MyScreen()
    {
        // if the device is not double buffered,
        // use an Image as an off-screen buffer
        if (!isDoubleBuffered())
        {
            osb = Image.createImage(screenWidth, screenHeight);
            osg = osb.getGraphics();
            osg.setFont(defaultFont);
        }
    }

    protected void paint(Graphics graphics)
    {
        if (!isDoubleBuffered())
        {
            // do your painting on the off-screen buffer first
            renderWorld(osg);
            // once done, paint it as an image onto the real screen
            // (Tools.GRAPHICS_TOP_LEFT is presumably the author's constant for Graphics.TOP | Graphics.LEFT)
            graphics.drawImage(osb, 0, 0, Tools.GRAPHICS_TOP_LEFT);
        }
        else
        {
            osg = graphics;
            renderWorld(graphics);
        }
    }
}
A possible fix is by synchronising the switch using Display.callSerially(). The flicker is probably caused by the app attempting to draw to the screen while the switch of the Canvas is still ongoing. callSerially() is supposed to wait for the repaint to finish before attempting to call run() again.
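A rough sketch of that idea (field names like display and gameCanvas are placeholders; display would come from Display.getDisplay(this) in your MIDlet):
// Switch to the new canvas, then wait for pending repaints to be
// processed before doing the first draw of the new screen.
display.setCurrent(gameCanvas);
display.callSerially(new Runnable() {
    public void run() {
        // The event thread has finished any outstanding repaint by now.
        gameCanvas.repaint();
    }
});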
But all this is entirely dependent on the phone since many devices do not implement callSerially(), never mind follow the implementation listed in the official documentation. The only devices I've known to work correctly with callSerially() were Siemens phones.
Another possible attempt would be to put a Thread.sleep() of something huge like 1000 ms, making sure that you've called your setCurrent() method beforehand. This way, the device might manage to make the change before the displayable attempts to draw.
The most likely problem is that it is a device issue and the guaranteed fix to the flicker is simple - use one Canvas. Probably not what you wanted to hear though. :)
It might be a good idea to use GameCanvas class if you are writing a game. It is much better for such purpose and when used properly it should solve your problem.
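For reference, a bare-bones GameCanvas setup might look like this (MIDP 2.0 API; the class name, loop timing, and drawing are illustrative only):
import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.game.GameCanvas;

public class GameScreen extends GameCanvas implements Runnable {

    public GameScreen() {
        super(true); // suppress key events; poll with getKeyStates() instead
        setFullScreenMode(true);
    }

    public void run() {
        Graphics g = getGraphics(); // GameCanvas provides an off-screen buffer
        while (true) {
            // ... update game state ...
            g.setColor(0x000000);
            g.fillRect(0, 0, getWidth(), getHeight());
            // ... draw the frame into the buffer ...
            flushGraphics(); // copy the buffer to the screen in one step
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}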
Hypothetically, using one canvas with state-machine code for your application is a good idea. However, the only device I have to test applications on (a MOTO v3) crashes at resource-loading time simply because there is too much code/content to be loaded into one GameCanvas (I haven't tried with Canvas). It's as painful as it is real, and at the moment I haven't found a solution to the problem.
If you're lucky enough to have a good number of devices to test on, it is worth having both approaches implemented and pretty much making versions of your game for each device.
