How do I get outside of the Mammoth's rib cage? - position

I am new to A-Frame... but love its potential. I recently discovered the Smithsonian's collection of 3D glTF scans and found myself inside the rib cage of a mammoth (3D scan). Ideally, I'd like to move into and out of the skeleton, but I seem to be fixed in one location... somewhere around the navel.
Do I use the "position (0,0,0)" component to shift my position away from the camera view? Also... is there a collide function to prevent ghosting through the bones?
Screen capture on a Samsung Galaxy S7 Edge
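For context, the viewpoint in A-Frame is just an entity, so you can give the camera (or a rig entity wrapping it) a position component plus a movement component such as wasd-controls. A minimal sketch, with a placeholder model URL and placeholder numbers, might look like this:

<a-scene>
  <a-entity gltf-model="url(mammoth.gltf)"></a-entity>
  <!-- Start the camera a few metres back from the skeleton; the WASD keys then move it around. -->
  <a-entity camera look-controls wasd-controls position="0 1.6 4"></a-entity>
</a-scene>

A-Frame itself will not stop the camera from passing through geometry; collisions need an additional physics/collider component, e.g. from a community physics add-on.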


Blender: Some objects disappear when I switch to camera view (it's not the snapping, I think)

OK... here is the thing...
I am modeling a Star Wars Cruiser... I'm doing pretty fine, but whenever I look through the camera, half of my ship disappears?! Now you might think it's the clipping, but that doesn't make sense, since the clipping is set to 0.1 m to 10,000 m, which should be more than enough. Besides that, the clipping is the same for both the visible and the now-disappeared objects...
Does anyone have any clue?
I also tried switching from Local view to Global view... but no dice...
Here is a screenshot in non-camera view:
And here is one in camera view: i.imgur.com/XaxAHIN.png
I'm using Blender 2.82
Please let me know if you need more info or even a link to the model...
OK, I got some help from a friend, Joshua Rooijakkers from the Netherlands...
He fixed the problem in seconds...
So... it WAS the clipping. The setting was just buried somewhere I didn't look...
Thanks, Josh! :-)
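For anyone hitting the same thing: a camera has its own Clip Start/End in its camera data, separate from the 3D viewport's clip distances, so one can look fine while the other cuts the model off. A quick way to check or fix it from the Python console (the camera data name here is a placeholder):

import bpy

# Each camera carries its own clip range, independent of the viewport's clip settings.
cam = bpy.data.cameras["Camera"]
print(cam.clip_start, cam.clip_end)
cam.clip_end = 10000.0  # metres; raise this if distant geometry vanishes in camera view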
Objects disappear in camera view mode
While setting up my camera with the "Lock Camera to View" option selected, I noticed that on switching to the camera view, the object had disappeared.
I tried zooming in and out, but that did not work. I exited the camera view, selected the object in the outliner, tried zooming in and out again, and then panned down and adjusted the view in the viewport. I could see that the object was now visible and that the camera was positioned below the object. That was because the camera was added with respect to the position of the cursor, which was snapped to the world origin. I am sure many will have encountered such issues.
I then selected the camera to ascertain its position. Then, using the grab tool, I moved the camera along the Z axis to just above the object, switched to the camera view, and selected the "Lock Camera to View" option to adjust the camera view to the desired position.
This helped me locate my object in the camera view. Do let me know in the comments if there are other ways of dealing with this.

How can I configure MRTK to work with touch input in editor and on mobile devices?

I'm building an application that will run on both HoloLens and mobile devices (iOS/Android). I'd like to be able to use the same manipulation handlers on all devices with the goals:
Use ARFoundation for mobile device tracking and input
Use touch input with MRTK with ManipulationHandler and otherwise use touch input as normal (UI)
Simulate touch input in the editor (using a touch screen or mouse) but retain the keyboard/mouse controller for camera positioning.
So far I've tried/found:
MixedRealityPlayspace always parents the camera, so I added the ARSessionOrigin to that component, and all the default AR components to the camera (ARCameraManager, TrackedPoseDriver, ARRaycastManager, etc.); a rough sketch of this wiring is shown after this list.
Customizing the MRTK pointer profile to only contain MousePointer and TouchPointer.
Removing superfluous input data providers.
Disabling Hand Simulation in the InputSimulationService
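For illustration only, a runtime version of that playspace wiring might look roughly like this; the class name is made up, and it assumes MRTK v2 and ARFoundation are already in the project:

using Microsoft.MixedReality.Toolkit;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Hypothetical helper: ensures the object MRTK uses as the playspace also carries
// the ARFoundation session origin, so both systems drive the same camera hierarchy.
public class AttachArSessionOriginToPlayspace : MonoBehaviour
{
    void Start()
    {
        GameObject playspace = MixedRealityPlayspace.Transform.gameObject;
        if (playspace.GetComponent<ARSessionOrigin>() == null)
        {
            playspace.AddComponent<ARSessionOrigin>();
        }
    }
}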
Generally speaking, the method of adding the ARSessionOrigin to the MixedRealityPlayspace works as expected and ARFoundation is trivial to set up. However, I am struggling to understand how to get the ManipulationHandler to respond to touch input.
I've run into the following issues:
Dragging on a touch screen with a finger moves the camera (editor). Disabling the InputSimulationService fixes this, but then I'm unable to move the camera...
Even with the camera disabled, clicking and dragging does not affect the ManipulationHandler.
The debug rays are drawn in the correct direction, but the default TouchPointer rays are drawn in strange positions.
I've attached a .gif explaining this. This is using touch input in the editor. The same effect is observed running on device (Android).
This also applies to Unity UI (world-space canvas), where clicking on a UI element does not register (on device or in editor), which suggests to me that this is a pointer issue, not a handler issue.
I would appreciate some advice on how to correctly configure touch input and mouse input both in the editor and on device, the goal being to raycast from the screen point using the projection matrix to create the pointer, and to use two-finger touch in the same way that two hand rays are used.
Interacting with Unity UI in world space on a mobile phone is supposed to work in MRTK, but there are a few bugs in the input system preventing it from working. The issue is tracked here: https://github.com/microsoft/MixedRealityToolkit-Unity/issues/5390.
The fix has not been checked in, but you can apply a workaround for now (thanks largely to the work you yourself did, newske!). The workaround is posted in the issue. Please see https://gist.github.com/julenka/ccb662c2cf2655627c95ffc708cf5a69. Just replace each file in MRTK with the version in the gist.

How to load textures from a spritesheet in Godot 3

I just got started with Godot yesterday, and I'm starting a game. I drew a few spritesheets for it. It seems much more efficient to pack all of the frames of an animation into a single image file, right?
Anyway, in Godot I have an AnimatedSprite, which of course has a SpriteFrames property, or whatever it's called. I want to split my spritesheet up into multiple images so that I can use each image as a separate frame in the animation, but as far as I can see Godot provides no such feature. Is this the case?
I've been searching for an answer on the web for a while now, and I can't find anything relevant.
I'd be very surprised if I can't do this in Godot, since I can do it in just about every other game engine I've seen.
Thanks!
(Just to clarify, I want to (programmatically or otherwise,) split a spritesheet into multiple textures, within Godot.)
Click New SpriteFrames in the Frames property menu of the AnimatedSprite node. Now click on the just-created SpriteFrames next to the name of the Frames property; the Animation Frames window should appear.
Click the Add Frames from a Sprite Sheet button, select your sprite sheet file, set the grid sizes, and finally select the individual frames from that sprite sheet.
(This works for me in Godot v3.2.2)
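If you would rather do the split in code, the same result can be had by feeding AtlasTexture regions into a SpriteFrames resource. A rough sketch for Godot 3.x, where the sheet path, frame size, frame count and animation name are all placeholders:

extends AnimatedSprite

func _ready():
    var sheet = preload("res://player_sheet.png")  # placeholder sprite sheet
    var frame_size = Vector2(32, 32)
    var new_frames = SpriteFrames.new()
    new_frames.add_animation("run")
    for i in range(4):  # 4 frames laid out in a single row
        var tex = AtlasTexture.new()
        tex.atlas = sheet
        tex.region = Rect2(i * frame_size.x, 0, frame_size.x, frame_size.y)
        new_frames.add_frame("run", tex)
    frames = new_frames
    play("run")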
As the documentation says: "Sprite node that can use multiple textures for animation."
Source: http://docs.godotengine.org/en/3.0/classes/class_animatedsprite.html
What you are searching for is using a normal Sprite (2D) and setting its region for each frame if you are using an AnimationPlayer.
Example: https://www.youtube.com/watch?v=IGHcscKpA7Y
If you want to do it all programmatically, you just have to make the node a Sprite (2D), and then:
func _ready():
    # Show only part of the texture: enable region mode and pick the frame's rectangle
    # (x, y, width, height within the sprite sheet; the values here are placeholders).
    set_region(true)
    set_region_rect(Rect2(0, 0, 32, 32))
But I guess using an AnimationPlayer is the better option.
This is not tested because I should be sleeping right now, but it should work.

MergEXT MergZXing layer and barcode not reading

Just started working with this awesome external but have a couple of questions.
When the control is invoked, is it always the top layer, or can I have a transparent overlay image on top of it so I can frame the control nicely?
Also, my testing seems to read most barcodes, but when it comes down to reading the barcodes on hard drives, the control does not want to decode those... Too dense a barcode pattern?
I am very impressed thus far with the ease of use of your externals. Makes me want to code more for mobile devices!
An overlaying transparent image is not possible, as far as I know.
But couldn't you use
command mergZXingControlSetRect pLeft,pTop,pRight,pBottom
to define the rect of the scanner after creation, or
command mergZXingControlCreate pLeft,pTop,pRight,pBottom
to create the scanner control in the specified rect?
Set the rect smaller than the width and the height of the screen.
You could then use an underlying image, displayed outside of the scanner rect, to show a frame around the scanner control. I did not test it myself, but I would assume that this should work.
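For example, something along these lines (the margins are placeholder values) would create the scanner inset from the card edges, leaving room for a framing image behind it:

on mouseUp
   -- create the scanner control inset from the edges of the card
   mergZXingControlCreate 20, 80, the width of this card - 20, the height of this card - 80
end mouseUp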
Unfortunately, the native controls in externals, and the ones the engine provides, are added as views on top of the LiveCode view. That means you can't intermingle LiveCode controls with them. One thing that some users have done is add a web view with a transparent background and load a PNG image into it. If you create the barcode view first and the web view second, then the web view will be on top.

How do I make a circle move on events?

I'm pretty new to JavaFX, so I'm trying to learn here, so please be reasonable and don't dismiss my question; I really appreciate any help at all, thanks!
I would like to know how I could move an object, let's say this circle, on different events, like a key press, mouse click, mouse move, whatever.
Circle circle = new Circle();
circle.setCenterX(100.0f);
circle.setCenterY(100.0f);
circle.setRadius(50.0f);
Do I need to use that KeyFrame thing I saw in the JavaFX site tutorial, or how does this work?
I would not have asked this here if I weren't feeling so lost, honestly.
So to make this clear: What is the code for moving objects that I created, by using events?
EDIT: By moving it I mean: press the up key and it moves up by a few pixels; transform it, maybe, with another key; or click somewhere on the scene and make it move there instantly or travel there at a certain speed. I don't have to redraw it like you need to with an HTML5 canvas, I hope, right?
I don't have to redraw it like you need to with an HTML5 canvas, I hope, right?
Not if you are using a standard JavaFX scene graph as opposed to a JavaFX canvas.
I would like to know how I could move an object, let's say this circle, on different events, like a key press, mouse click, mouse move, whatever
There are three ways to move a Shape:
You can adjust the shape's geometry (e.g. the centerX/centerY properties of a circle).
You can adjust the shape's layout (e.g. its layoutX/layoutY properties).
You can adjust the shape's translation (e.g. its translateX/translateY properties).
You can think of the layout as the home position for the object, i.e. where it should normally be in the context of its parent group. You can think of its translation transform as a temporary position for an object (often used when the object is being animated).
If you are using a layout pane such as a VBox or TilePane, then the layout pane will handle setting the layout co-ordinates of the child node for you. If you are using a simple Group or a plain Pane or Region, then you are responsible for setting the correct layout values for the child nodes.
To listen for events, set event handlers on Nodes or Scenes.
Here is a small sample app which demonstrates the above. It places the object to be moved inside a Group and modifies the position of the object within a Group in response to various events.
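A minimal sketch in that spirit (the class name, color, and step size are placeholder choices): arrow keys adjust the circle's translation, and a mouse click moves its geometry to the clicked point.

import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;
import javafx.stage.Stage;

public class MoveCircle extends Application {
    @Override
    public void start(Stage stage) {
        Circle circle = new Circle(100, 100, 50, Color.CORNFLOWERBLUE);
        Group group = new Group(circle);
        Scene scene = new Scene(group, 400, 400);

        // Arrow keys nudge the circle by changing its translate properties.
        scene.setOnKeyPressed(e -> {
            switch (e.getCode()) {
                case UP:    circle.setTranslateY(circle.getTranslateY() - 10); break;
                case DOWN:  circle.setTranslateY(circle.getTranslateY() + 10); break;
                case LEFT:  circle.setTranslateX(circle.getTranslateX() - 10); break;
                case RIGHT: circle.setTranslateX(circle.getTranslateX() + 10); break;
                default:    break;
            }
        });

        // A mouse click moves the circle's geometry to the clicked point instantly.
        scene.setOnMouseClicked(e -> {
            circle.setCenterX(e.getX());
            circle.setCenterY(e.getY());
        });

        stage.setScene(scene);
        stage.setTitle("Move the circle with the arrow keys or a mouse click");
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}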

Resources