Co-ordinate reference frame for head transform (Google Cardboard)

I would like to know the co-ordinate frame of reference the HeadTransform class uses.
As per my limited understanding, the HeadTransform represents the rotation of the head w.r.t. the phone. But how are the x, y, and z axes set up?
Holding the phone in landscape mode with the home button to the right,
camera reference: +x to the right, +y up, +z coming towards the face
head reference: +x to the right, +y up, +z going away from the face
Is the above correct?

HeadTransform is a class that gives you access to various orientation data. What you probably want is this:
https://developers.google.com/cardboard/android/latest/reference/com/google/vrtoolkit/cardboard/HeadTransform#getQuaternion(float[], int)
(the method above just needs a Java float[] initialised as new float[4])
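For example, a minimal sketch (assuming you are inside the onNewFrame(HeadTransform headTransform) callback of a CardboardView.StereoRenderer):
float[] quaternion = new float[4];
headTransform.getQuaternion(quaternion, 0); // fills the four quaternion components starting at offset 0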
One very important thing to understand is that this isn't movement in 3D space, per se; it is rotation around a single point, being your head. So x doesn't mean moving left or right, it means rotating your head left or right, i.e. looking left or right.
As for the frame of reference, it seems to just be a point in front of the screen which the SDK assumes is your head. I hope this helped!

Related

How to rotate the yellow cube towards the car?

How to rotate the yellow cube towards the car? I have a spinning camera; I think this is the case.
Are you trying to do this with code? I remind you Stack Overflow is for programming. For other game-related things there is gamedev.stackexchange.com.
If you are doing this with code - and given that I don't know what the scene tree looks like - I suggest using look_at. Something like this (code for the Camera):
look_at(car.global_transform.origin, car.global_transform.basis.y)
Here car is a reference to the car. I can't tell you how to get one without looking at the scene tree, beyond that you can probably use get_node. So car.global_transform.origin is the position of the car in global coordinates. And car.global_transform.basis.y is the direction towards the up of the car.
The method look_at needs an up vector because there are infinite ways to look at a given point (you can rotate around the view line). Thus, we do not want an up vector that matches the view line. For example, Vector3.UP won't work if the camera is looking directly up or directly down.
And if you just want to rotate this in the editor, you can use the gizmo you see when you select the Camera: drag the blue ring until it is aligned correctly.
The de facto standard for these gizmos is that x is red, y is green, and z is blue (this is true in Godot, Blender, and plenty of other software). So the blue ring rotates around the z axis. You can also find that rotation in the inspector panel: look for the z component of Rotation Degrees under Transform.
I remind you that if you place the Camera as a child node of another Spatial, it will keep its position and orientation relative to it. So placing the Camera as child of your player character (e.g. a KinematicBody) is often good enough for the early stages of development, as that guarantees that the Camera follows the player character. No coding necessary. You may want a more elaborate Camera control later, which would require some code.
Since you mention a "spinning camera", perhaps you want a Camera that orbits around a point. The easiest way to do this is to add an auxiliary Spatial for the point the Camera rotates around - let us call it Pivot - and rotate that. For clarity, I'm suggesting a setup like this:
PlayerCharacter
└ Pivot
   └ Camera
Here the Pivot follows the player character. And the Camera follows the Pivot. So moving the player character moves the Camera, and rotating the Pivot makes the Camera orbit. This is just lacking some code to make the Pivot rotate. For example something like this (code for Pivot):
func _process(delta):
    rotate_y(Input.get_axis("camera_left", "camera_right") * delta) # rotate the Pivot (Spatial.rotate_y) around its y axis
Where "camera_left" and "camera_right" are actions configured in the Input Map (in Project settings). Which reminds me, you can set actions from code with Input.action_press, so there could be code somewhere else (e.g. _input) writing these actions from mouse movement.
Camera Control does not have to be hard.

LibGDX: ring sprite with an object passing through it

How is it possible for an object to pass through a ring sprite like in the image below?
Please can you help me? I have no idea how I can do that.
I think you posted an incorrect image. To get the image you posted, you just have to draw the red bar on top of the black ring.
I guess you want the left side of the ring to appear in front of the bar and the right side behind it, so the bar visually goes through. Well, this is simply not so easy in 2D, because of draw order.
I have a couple of suggestions you can explore.
Always draw the ring on top of the bar, but when a collision is happening, calculate where the bar overlaps and don't draw the ring's pixels in that place. You can use a Pixmap for calculations like this. Depending on the size of your images, this could be very expensive to calculate each frame.
A faster but slightly more hacky way could be to split the red bar into multiple images and, if a certain part of it should be overlapped by the ring, draw that part before the ring, otherwise after it (see the sketch below). Depending on what the red bar will look like in your end product and how many possible angles the bar can have, I can imagine this being very tricky to get right.
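A rough sketch of that second idea (the sprite names and coordinates are illustrative, assuming a standard LibGDX SpriteBatch):
batch.begin();
batch.draw(barPartBehindRing, barX, barY); // the piece of the bar the ring should cover
batch.draw(ringSprite, ringX, ringY); // the ring itself
batch.draw(barPartInFrontOfRing, barX2, barY2); // the piece of the bar that covers the ring
batch.end();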
Use 3D for this. You could have a billboard with a slight angle for the ring and have the bar locked on the distance axis at the ring's center. However, at certain angles of entrance and exit you will get Z-fighting, since the pixels will be at the same distance from the camera. This might or might not be noticeable, and I have no idea how LibGDX would handle Z-fighting.
I want to add this solution:
If the object is going to pass through the ring horizontally, I propose to divide the ring sprite into two sprites (sprite 1 & sprite 2).
You just have to draw the sprites in this order:
Sprite1
Sprite Object
Sprite2
You can do the same if the object is going to pass through the ring vertically.
PS: this solution doesn't work if the object is going to pass through the ring both vertically and horizontally.
Hope this was helpful.
Good luck!

Three.js ParticleSystem flickering with large data

Back story: I'm creating a Three.js based 3D graphing library, similar to sigma.js but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js, with a single particle representing a single node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling everything so it fits in a sphere of radius 5 around the point (0,0,0). Since the points are then guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified, or duplicating all the points, neither of which is great.
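In code, Solution 1 amounts to something like this (my own sketch in plain Java, not the library's actual implementation; it approximates the bounding sphere with the centroid plus the maximum distance to it):
static void normalizePoints(float[][] points) {
    // Compute the centroid of all points.
    float cx = 0, cy = 0, cz = 0;
    for (float[] p : points) { cx += p[0]; cy += p[1]; cz += p[2]; }
    cx /= points.length; cy /= points.length; cz /= points.length;

    // Radius of a bounding sphere centered on the centroid.
    float r = 0;
    for (float[] p : points) {
        float dx = p[0] - cx, dy = p[1] - cy, dz = p[2] - cz;
        r = Math.max(r, (float) Math.sqrt(dx * dx + dy * dy + dz * dz));
    }
    if (r == 0) r = 1; // all points coincide; avoid dividing by zero

    // Recenter on (0,0,0) and scale so everything fits in a radius-5 sphere.
    float s = 5f / r;
    for (float[] p : points) {
        p[0] = (p[0] - cx) * s;
        p[1] = (p[1] - cy) * s;
        p[2] = (p[2] - cz) * s;
    }
}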
My new solution (which we'll call Solution 2) involves not touching the coordinates of the inputted data, but instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values to fit my dataset, the problem went away. Since my dataset is user-dependent, I need to determine an algorithm to generate these values dynamically.
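One possible way to do that (a sketch under my own assumptions, written in Java for concreteness; in Three.js you would assign the two results to camera.near and camera.far and then call camera.updateProjectionMatrix()): given the bounding sphere of the data and the camera position, clamp the planes tightly around the sphere.
static float[] nearFar(float[] camera, float[] center, float radius) {
    float dx = camera[0] - center[0];
    float dy = camera[1] - center[1];
    float dz = camera[2] - center[2];
    float dist = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
    float near = Math.max(0.1f, dist - radius); // just in front of the nearest point
    float far  = dist + radius;                 // just behind the farthest point
    return new float[] { near, far };
}
The tighter the [near, far] interval, the more depth-buffer precision each unit of distance gets, which is what makes the z-fighting go away.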
I noticed that in Solution 2 the flickering only occurs when the camera is moving. One possible reason can be that, when the camera position is changing rapidly, different transforms get applied to different particles. So if a camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means you should apply the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.

Change perspective in POV-Ray? (less convergence)

Can you change the perspective in POV-Ray, so that convergence between parallel lines does not look so steep?
E.g. change this angle (the convergence of the checkered floor into the distance) here
To an angle like this
I want it to seem like you're looking at something nearby, so with a smaller angle of convergence in parallel lines.
To illustrate it more: instead of a view like this
Use a view like this
Move the camera backwards and zoom in (by making the angle smaller):
camera {
  perspective
  location <0,0,-15> // move this backwards
  sky y
  up y
  angle 30 // make this smaller
  right (image_width/image_height)*x
  look_at <0,0,0>
}
You can go to the extreme by using an orthographic "camera":
camera {
  orthographic
  location <0,0,-15> // move backwards, no matter how far
  sky y
  up y * h    // where h = height you want to cover
  right x * w // where w = width you want to cover
  look_at <0,0,0>
}
The other extreme is the fish-eye lens.
You need to reduce the field of view of your camera's view frustum. The larger the field of view, the more stuff you're trying to squeeze into your camera's output, so parallel lines will converge faster. In your first example with a cube, the camera will then be more focused on the cube and the areas immediately around it than on the whole environment.
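A quick way to see why this works: a pinhole projection scales everything by 1/z, so the near edge of the checkered floor (at depth z_near) appears z_far/z_near times wider than the far edge (at depth z_far). Moving the camera back by a distance d while narrowing the angle changes that ratio to (z_far + d)/(z_near + d), which approaches 1 - i.e. no visible convergence, the orthographic limit - as d grows.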
The other option is to bring your far plane much closer to your near plane, so you don't see many things that are far off. In your first image example, you would then only see the first four or five grid squares.

Can I remap mouse co-ordinates when using Gdiplus::SetPageScale, using a GDI+ function?

I want to add zoom capability to an app which, at its core, is an spf graph app. Currently I have no zoom, but I do have the ability to select/move and multi-select objects on the graph in the graph window. I started to write my own code to scale the objects and then work out mouse co-ordinates to map clicks and redraws correctly. I didn't complete this, as I found the Gdiplus::SetPageScale function, which scales the window fine, but I cannot see any GDI+ function I can use to map the mouse click co-ordinates from world co-ordinates to page co-ordinates. I tried TransformPoints(Gdiplus::CoordinateSpaceWorld, ::Gdiplus::CoordinateSpacePage, points, 2) but this does not convert the points, and the returned points are (0,0).
So is this even possible with Gdiplus or do I need to write this mapping myself? Any advice appreciated!
You don't want to use Graphics::SetPageScale() in this case. The much more general way is to use the Matrix class instead. Its Scale, Translate and Rotate methods are handy to get the matrix you need. You'll want to use the Scale() method here, possibly Translate() to change the origin.
Before you start drawing, activate the matrix with the Graphics::SetTransform() method. Anything you draw will now automatically be scaled according to the arguments you passed to the Matrix::Scale() method. Mapping a mouse position is now exceedingly simple with the Matrix::TransformPoints() method: the exact same transform that was used while drawing is applied to the mouse coordinates. Even going back from graph coordinates to mouse coordinates is simple: use the Matrix::Invert() method to obtain the inverse transform.
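GDI+ is C++, but the pattern is easier to see in a compact sketch; here is the same workflow written with Java2D's AffineTransform, whose scale/setTransform/createInverse calls line up one-to-one with Matrix::Scale, Graphics::SetTransform and Matrix::Invert (an analogy only, not GDI+ code):
import java.awt.geom.AffineTransform;
import java.awt.geom.NoninvertibleTransformException;
import java.awt.geom.Point2D;

class ZoomMapping {
    final AffineTransform zoom = new AffineTransform(); // shared by drawing and hit-testing

    ZoomMapping(double scale) {
        zoom.scale(scale, scale); // cf. Matrix::Scale
    }

    // While painting, install the same transform: g2d.setTransform(zoom); cf. Graphics::SetTransform.

    // Map a mouse click back to graph coordinates, cf. Matrix::Invert
    // followed by TransformPoints on the inverse.
    Point2D mouseToWorld(double mouseX, double mouseY) throws NoninvertibleTransformException {
        return zoom.createInverse().transform(new Point2D.Double(mouseX, mouseY), null);
    }
}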
When GDI+ draws, it applies a world transform (which is controlled by Graphics::SetTransform, ScaleTransform, etc.) followed by the page transform (which is controlled by Graphics::SetPageScale and Graphics::SetPageUnit) to transform the points to device coordinates.
So it normally goes like this: World coordinates --[World transform]--> Page coordinates --[Page transform]--> Device coordinates
You can use Graphics::TransformPoints the way you wanted, to map mouse coordinates to world coordinates, but you have to specify Device coordinates as the source space and World coordinates as the destination space.
However, there are good reasons to do it as Hans describes with a Matrix you store separately, most notably that you shouldn't be holding on to your Graphics object for long enough to process mouse input (nor should there be a need to create one then).
