Adding a parallax layer in Godot offsets the sprite

I added a parallax background and layer to my little world scene. The problem is, when I add a parallax layer and set the motion scale to some value, the sprite gets offset when I run the game. The weird offset seems to be related to the dimensions of the window and the motion scale. I don't want this offset; I just want all my parallax layers to start from the top-left corner (as they do in my setup) and then parallax from there. Here is a snippet of my setup and the running game:

I think it's related to an apparently persistent bug, but you can roughly compensate for the offset.
Multiplying the motion scale by half your window size should give you the proper offset; multiply it by -1 and apply it.
I'm linking the GitHub issue; the comments may interest you.
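As a numeric sketch of the compensation just described (the 1152-pixel window width is a hypothetical example; check the sign against your own setup):

```python
# Sketch of the compensation described above: multiply the motion scale
# by half the window size, negate, and apply as the layer's offset.
# The window size and motion scale here are made-up example values.

def compensation_offset(window_size: float, motion_scale: float) -> float:
    return -1 * (motion_scale * window_size / 2)

# For a 1152-pixel-wide window and a motion scale of 0.5:
print(compensation_offset(1152, 0.5))  # → -288.0
```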

The big offset jump is caused by the Parallax nodes calculating the offset when the Camera2D updates upon entering the scene tree, or on the next process frame.
This is a bummer when, you know, your camera doesn't start at (0, 0).
Here's a workaround to offset the parallax layers back to the positions they had in the editor:
extends ParallaxBackground

func revert_offset(layer: ParallaxLayer) -> void:
    # Cancel out the layer's offset. The layer's position already has
    # its motion_scale applied.
    var ofs := scroll_offset - layer.position
    if not scroll_ignore_camera_zoom:
        # When attention is given to the camera's zoom, we need to account for it.
        # We can use the viewport's canvas transform scale, to which the camera
        # has already applied its zoom.
        var canvas_scale = get_viewport().canvas_transform.get_scale()
        # This is taken from the Godot source: parallax_background.cpp
        # I don't know why it works.
        ofs /= canvas_scale.dot(Vector2(0.5, 0.5))
    layer.motion_offset = ofs

func _ready() -> void:
    for layer in get_children():
        if layer is ParallaxLayer:
            revert_offset(layer)
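As an aside, the canvas_scale.dot(Vector2(0.5, 0.5)) division above is just dividing by the average of the two scale components; a quick numeric sketch (plain Python, not Godot code):

```python
# Dotting a 2D scale vector with (0.5, 0.5) yields the average of its
# two components: (sx, sy) . (0.5, 0.5) = 0.5*sx + 0.5*sy = (sx + sy) / 2.

def canvas_scale_factor(sx: float, sy: float) -> float:
    return sx * 0.5 + sy * 0.5

# With a uniform 2x camera zoom the factor is just 2:
print(canvas_scale_factor(2.0, 2.0))  # → 2.0
# With a non-uniform scale it averages:
print(canvas_scale_factor(1.0, 3.0))  # → 2.0
```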
I can think of a couple caveats:
This only works if the Camera2D is after the ParallaxBackground node in the scene tree.
ParallaxBackground waits for moved events from the camera. If the camera enters the scene tree before ParallaxBackground, it will emit the moved event, but PB will not receive it to update its offset. Then when PB is ready it will try to revert offsets that haven't been applied yet.
Does not account for rotation (nor did I test it).
A last note: if the camera is moving and a parallax layer moves so much that it shows the clear color, look into the Camera2D and ParallaxBackground limit properties.

Alright, I figured out a solution to this issue. I manually tried offsetting with different motion scale values to see what works, and used Desmos to figure out a simple linear expression. Create a script and attach it to every parallax layer you have. In the script, calculate the offset with the following:
motion_offset.x = ((window_width / 2) * motion_scale.x) - (window_width / 2)
Until the developers fix this issue, this is probably going to be your best bet, IMO.
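The expression simplifies to (motion_scale.x - 1) * window_width / 2. A quick sketch to sanity-check it (the 1152 window width is a made-up example):

```python
def parallax_offset_x(window_width: float, motion_scale_x: float) -> float:
    # Same as ((window_width / 2) * motion_scale_x) - (window_width / 2),
    # i.e. (motion_scale_x - 1) * window_width / 2.
    return (window_width / 2) * motion_scale_x - (window_width / 2)

# A motion scale of 1 (no parallax) needs no compensation:
print(parallax_offset_x(1152, 1.0))  # → 0.0
# A scale of 0.5 on a 1152-wide window:
print(parallax_offset_x(1152, 0.5))  # → -288.0
```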

Related

how to rotate the yellow cube towards the car?

How to rotate the yellow cube towards the car? I have a spinning camera; I think that is the cause.
Are you trying to do this with code? I remind you that Stack Overflow is for programming; for other game-related things there is gamedev.stackexchange.com.
If you are doing this with code - and given that I don't know what the scene tree looks like - I suggest using look_at. Something like this (code for the Camera):
look_at(car.global_transform.origin, car.global_transform.basis.y)
Here car is a reference to the car. I can't tell you how to get one without looking at the scene tree, beyond the fact that you can probably use get_node. So car.global_transform.origin is the position of the car in global coordinates, and car.global_transform.basis.y is the direction towards the up of the car.
The method look_at needs an up vector because there are infinitely many ways to look at a given point (you can rotate around the view line). Thus, we do not want an up vector that matches the view line. For example, Vector3.UP won't work if the camera is looking directly up or directly down.
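The degenerate case can be illustrated with a bit of vector math: building a look-at basis takes the cross product of the view direction and the up vector, and that cross product vanishes when the two are parallel. A sketch (not Godot's actual implementation):

```python
def cross(a, b):
    # 3D cross product of two vectors given as (x, y, z) tuples.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

view = (0, 1, 0)  # camera looking straight up
up = (0, 1, 0)    # Vector3.UP
# Parallel vectors: no valid "right" axis can be derived, so look_at fails.
print(cross(view, up))  # → (0, 0, 0)
```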
And if you just want to rotate it in the designer, you can use the gizmo you see when you select the node: drag the blue ring until it is aligned correctly.
The de facto standard for these gizmos is that x is red, y is green, and z is blue (this is true in Godot, Blender, and plenty of other software). So the blue ring rotates around the z axis. You can also find that rotation in the Inspector panel: look for the z component of Rotation Degrees under Transform.
I remind you that if you place the Camera as a child node of another Spatial, it will keep its position and orientation relative to it. So placing the Camera as child of your player character (e.g. a KinematicBody) is often good enough for the early stages of development, as that guarantees that the Camera follows the player character. No coding necessary. You may want a more elaborate Camera control later, which would require some code.
Since you mention a "spinning camera", perhaps you want a Camera that orbits around a point. The easiest way to do this is to add an auxiliary Spatial for the point the Camera rotates around. Let us call it Pivot, and rotate that. For clarity, I'm suggesting a setup like this:
PlayerCharacter
└ Pivot
└ Camera
Here the Pivot follows the player character, and the Camera follows the Pivot. So moving the player character moves the Camera, and rotating the Pivot makes the Camera orbit. This is just lacking some code to make the Pivot rotate. For example, something like this (code for the Pivot, inside _process(delta)):
rotate_y(Input.get_axis("camera_left", "camera_right") * delta)
Where "camera_left" and "camera_right" are actions configured in the Input Map (in Project Settings). Which reminds me: you can set actions from code with Input.action_press, so there could be code somewhere else (e.g. in _input) driving these actions from mouse movement.
Camera Control does not have to be hard.

Coordinate reference frame for head transform (Google Cardboard)

I would like to know the coordinate reference frame the HeadTransform class uses.
As per my limited understanding, the HeadTransform represents the rotation of the head w.r.t. the phone. But how are the x, y, and z axes set up?
Holding the phone in landscape mode with the home button to the right,
camera reference: +x to the right, +y up, +z coming towards the face
head reference: +x to the right, +y up, +z going away from the face
Is the above correct?
HeadTransform is a class that gives you access to various orientation data. What you probably want is this:
https://developers.google.com/cardboard/android/latest/reference/com/google/vrtoolkit/cardboard/HeadTransform#getQuaternion(float[], int)
(The above requires just a Java float[] initialised as new float[4].)
One very important thing to understand is that this isn't movement in 3D space, per se; it is rotation around a single point, namely your head. So instead of x meaning moving left or right, it means rotating your head left or right, i.e. looking left or right.
As for the frame of reference, it seems to just be some point in front of the screen which it assumes is your head. I hope this helped!

Three.js ParticleSystem flickering with large data

Back story: I'm creating a Three.js-based 3D graphing library, similar to sigma.js but 3D. It's called graphosaurus and the source can be found here. I'm using a single particle to represent a single node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling it to a sphere of radius 5 around the point (0, 0, 0). Since the points are then guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) involves not touching the coordinates of the input data, but instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values to fit my dataset, the problem went away. Since my dataset is user-dependent, I need to determine an algorithm to generate these values dynamically.
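One possible sketch of such an algorithm (my own suggestion, not part of the original answer): fit the near and far planes to the data's bounding sphere as seen from the camera, so depth-buffer precision is spent only where the data actually is:

```python
import math

def near_far_for_bounds(camera_pos, center, radius, margin=1.1):
    """Pick near/far planes that tightly enclose a bounding sphere.

    camera_pos and center are (x, y, z) tuples; radius is the bounding
    sphere radius. A tight near/far range reduces z-fighting because
    depth precision is spread over a smaller interval.
    """
    dist = math.dist(camera_pos, center)
    near = max((dist - radius) / margin, 0.01)  # keep near strictly positive
    far = (dist + radius) * margin
    return near, far

# Camera 100 units away from a sphere of radius 10 around the origin:
near, far = near_far_for_bounds((0, 0, 100), (0, 0, 0), 10)
print(near, far)
```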
I noticed that in Solution 2 the flickering only occurs while the camera is moving. One possible reason is that, when the camera position is changing rapidly, different transforms get applied to different particles. So if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means you should apply the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.
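A minimal sketch of that idea (all names hypothetical; the point is that the camera transform is sampled once per frame, not once per particle):

```python
def render_frame(camera_transform, particles):
    # Snapshot the camera transform ONCE, so every particle in this
    # frame is drawn with the same view transform even if the camera
    # object keeps moving due to user interaction.
    frozen = tuple(camera_transform)
    return [apply_transform(frozen, p) for p in particles]

def apply_transform(t, p):
    # Trivial stand-in for a real view transform: translate the point
    # by the (frozen) camera offset.
    return tuple(pi - ti for pi, ti in zip(p, t))

print(render_frame([1, 2, 3], [(1, 2, 3), (2, 4, 6)]))  # → [(0, 0, 0), (1, 2, 3)]
```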

Game Programming - Billboard issue in front of 3D object

I'm starting to develop a PoC with the main features of a turn-based RPG similar to Breath of Fire 4: a mixture of a 3D environment with characters and items rendered as billboards.
I'm using an orthographic camera with an angle of 30 degrees on the X axis. I made my sprite act as a billboard with the pivot in the center; the problem occurs when the sprite gets near a 3D object such as a wall.
Check out the image:
I tried the solution of leaving the rotation matrix of the billboard "upright", which worked well, but of course, depending on the height and angle of the camera toward the billboard, it gets somewhat flattened. I also changed the pivot to the bottom of the sprite, but the problem appears with objects in front of the sprite too. I was thinking that the solution would be to create a fragment shader that relies on the depth texture from some previous pass; I tried to work out how to do it with shaders but could not figure it out. Could you point me to an article or anything that puts me in the right direction? Thank you.
See what I am trying to achieve on this video.
You've got the right approach. Use the upright matrix, and scale up the Z of the billboards to compensate for the flattening introduced by your camera. The Z scaling should be about 1.1547, which is 1 / cos(30°); this makes the billboards look their original size from a camera angled at 30 degrees. It seems like a tricky way, but the developers of BoF4 in the video might have used the same solution too.
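A quick check of that scale factor (the 30 degrees is the camera tilt from the question):

```python
import math

# For a camera tilted by angle theta, an upright billboard appears
# vertically compressed by cos(theta); scaling it by 1 / cos(theta)
# along the flattened axis restores the apparent size.
theta = math.radians(30)
z_scale = 1 / math.cos(theta)
print(round(z_scale, 4))  # → 1.1547
```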

Disable culling on an object

This question is actually about Unity3D, but it can also be a more general question, so I'm going to make it as general as possible.
Suppose I have a scene with a camera (near = 0.3, far = 1000, fov = 60) and I want to draw a skydome that is 10000 units in radius.
The object is not culled by the frustum of the camera, because I'm inside the dome. But the vertices are somehow culled anyway, and the end result looks like this:
Now my question is:
what setting for any engine can I change to make sure that the complete object is drawn and not clipped by the far plane of the camera?
What I don't want is:
Change the far plane to 10000, because it makes the frustum less accurate
Change the near plane, because my game is actually on a very low scale
Change the scale of the dome, because this setting looks very realistic
I do not know how to do this in Unity, but in DirectX and OpenGL you switch off the z-buffer (both testing and writing) and draw the skybox first.
Then you switch the z-buffer back on and draw the rest of the scene.
My guess is that Unity can do all this for you.
I have two solutions for my own problem. The first one doesn't solve everything. The second does, but is against my own design principles.
There was no possibility for me to change the shaders' z-writing, which would have been a great solution (as suggested by @Erno), because the shaders used are third-party.
Option 1
Just before the object is rendered, set the far plane to 100,000 and set it back to 1000 after drawing the sky.
Problem: The depth buffer is still filled with values between very low and 100,000. This decreases the accuracy of the depth buffer and gives problems with z-fighting and post-effects that depend on the depth buffer.
Option 2
Create two cameras that are linked to each other. Camera 1 renders the skydome first with a setting of far = 100000, near = 100. Camera 2 clears the depth buffer and draws the rest of the scene with a setting of far = 1000, near = 0.3. The depth buffer doesn't contain big values now, so that solves the problems of inaccurate depth buffers.
Problem: the cameras have to be linked by some polling system, because there are no change events on the camera class (e.g. when the FoV changes). I like the fact that there is only one camera, but that doesn't seem easily achievable.
