How to ysort Sprite3D properly in Godot 3D

How can one apply an effect similar to YSort in Godot 2D, but in 3D?
I have the camera set up like this, but when I add two Sprite3D nodes it looks like this.
If I increase the Y by 0.0001 on the character sprite, it appears in front of the portal. I was wondering, though, whether there is a node that does this automatically, since increasing Y makes the character fly up: with an offset like Y + 1, the character's position is visibly off when panning the camera.
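For what it's worth, a minimal sketch assuming Godot 4, where VisualInstance3D (which Sprite3D inherits) exposes a sorting_offset property: biasing the depth-sort order avoids physically moving the sprite. The 0.1 bias is a hypothetical value to tune.

extends Sprite3D

# Hypothetical depth-sort bias; tune per scene.
@export var sort_bias := 0.1

func _ready() -> void:
    # sorting_offset biases transparency sorting toward the camera
    # without changing the sprite's actual position.
    sorting_offset = sort_bias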

Related

Adding parallax layer in Godot offsets sprite

I added a parallax background and layer in my little world scene. The problem is that when I add a parallax layer and set the motion scale to some value, the sprite gets offset when I run the game. The weird offset seems to be related to the dimensions of the window and the motion scale. I do not want this offset; I just want all my parallax layers to start from the top-left corner (as in my setup) and then parallax from there. Here is a snippet of my setup and the running game:
I think it's related to an apparently persistent bug, but you can compensate for the offset.
Multiplying the motion scale by half your window size should give you the proper offset; multiply it by -1 and apply it.
I'm linking the GitHub issue; the comments may interest you.
The big offset jump is caused by the Parallax nodes calculating the offset when the Camera2D updates upon entering the scene tree, or on the next process frame.
This is a bummer when, you know, your camera doesn't start at (0, 0).
Here's a workaround to offset the parallax layers to the positions they were in editor:
extends ParallaxBackground

func revert_offset(layer: ParallaxLayer) -> void:
    # Cancel out layer's offset. The layer's position already has
    # its motion_scale applied.
    var ofs := scroll_offset - layer.position
    if not scroll_ignore_camera_zoom:
        # When attention is given to the camera's zoom, we need to account for it.
        # We can use the viewport's canvas transform scale, to which the camera has
        # already applied its zoom.
        var canvas_scale = get_viewport().canvas_transform.get_scale()
        # This is taken from the Godot source: parallax_background.cpp
        # I don't know why it works.
        ofs /= canvas_scale.dot(Vector2(0.5, 0.5))
    layer.motion_offset = ofs

func _ready() -> void:
    for layer in get_children():
        if layer is ParallaxLayer:
            revert_offset(layer)
I can think of a couple caveats:
This only works if the Camera2D is after the ParallaxBackground node in the scene tree.
ParallaxBackground waits for moved events from the camera. If the camera enters the scene tree before the ParallaxBackground, it will emit the moved event, but the ParallaxBackground will not receive it and so won't update its offset. Then, when the ParallaxBackground is ready, it will try to revert offsets that haven't been applied yet.
Does not account for rotation (nor did I test it).
A last note: If the camera is moving and a parallax layer is moving too much and showing the clear color, look into Camera2D and ParallaxBackground limit properties.
Alright, I figured out a solution to this issue. I manually tried offsetting with different motion scale values to see what works with what, and used Desmos to figure out a simple linear expression. Create a script and attach it to every parallax layer you have. In the script, calculate the offset as follows:
motion_offset.x = ((window_width / 2) * motion_scale.x) - (window_width / 2)
Until the developers fix this issue, this is probably going to be your best bet imo.
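A minimal sketch of that expression as a per-layer script (GDScript; applying the same linear form to the y axis as well is my assumption):

extends ParallaxLayer

func _ready() -> void:
    var half_window := get_viewport_rect().size * 0.5
    # ((window / 2) * motion_scale) - (window / 2), applied per axis.
    motion_offset = half_window * motion_scale - half_window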

Three.js ParticleSystem flickering with large data

Back story: I'm creating a Three.js-based 3D graphing library, similar to sigma.js but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js, with a single particle representing each node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling it to a sphere of radius 5 around the point (0,0,0). Since the points are then guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) involves not touching the coordinates of the inputted data, but instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values to fit my dataset, the problem went away. Since my dataset is user dependent, I need an algorithm to generate these values dynamically.
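For what it's worth, here is a sketch of one way to derive those values from the data's bounding sphere (GDScript for illustration; in Three.js the equivalent would be setting camera.near and camera.far and calling camera.updateProjectionMatrix()):

func compute_near_far(camera_pos: Vector3, center: Vector3, radius: float) -> Vector2:
    var dist := camera_pos.distance_to(center)
    # Keep near as large as possible: depth-buffer precision scales
    # with the near plane, which is what fights z-fighting.
    var near := maxf(dist - radius, 0.01)
    var far := dist + radius
    return Vector2(near, far)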
I noticed that in Solution 2 the flickering only occurs when the camera is moving. One possible reason is that, when the camera position is changing rapidly, different transforms get applied to different particles. So if a camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means applying the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.

Game Programming - Billboard issue in front of 3D object

I'm starting to develop a proof of concept with the main features of a turn-based RPG similar to Breath of Fire 4: a mixture of a 3D environment with characters and items rendered as billboards.
I'm using an orthographic camera angled 30 degrees on the X axis, and I made my sprite act as a billboard with the pivot in the center. The problem occurs when the sprite is near a 3D object such as a wall.
Check out the image:
I tried the solution of leaving the rotation matrix of the billboard "upright", which worked well, but of course, depending on the height and angle of the camera toward the billboard, it gets somewhat flattened. I also changed the pivot to the bottom of the sprite, but the problem appears with objects in front of the sprite too. I was thinking that the solution would be a fragment shader that relies on the depth texture of some previous pass; I tried to work out how to do it with shaders but could not figure it out. Could you point me to an article or anything that puts me in the right direction? Thank you.
See what I am trying to achieve on this video.
You've got the right approach. Use the upright matrix, and scale up the Z of the billboards to compensate for the flattening introduced by your camera. The Z scale should be about 1.1547, i.e. 1 / cos(30°), which makes billboards look their original size from a camera angled at 30 degrees. It seems like a tricky approach, but the developers of BoF4 in the video might well have used the same solution.
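As a sketch of that math (GDScript for illustration; the 30-degree pitch is the camera angle from the question):

func billboard_z_scale(camera_pitch_deg: float) -> float:
    # An upright billboard viewed from a camera pitched by
    # camera_pitch_deg appears compressed by cos(pitch) along the
    # flattened axis; scaling by the reciprocal restores its apparent size.
    return 1.0 / cos(deg_to_rad(camera_pitch_deg))

billboard_z_scale(30.0) returns roughly 1.1547, matching the value quoted above.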

Change perspective in POV-Ray? (less convergence)

Can you change the perspective in POV-Ray, so that the convergence between parallel lines does not look so steep?
E.g. change this angle (the convergence of the checkered floor into the distance) here
To an angle like this
I want it to seem like you're looking at something nearby, so with a smaller angle of convergence in parallel lines.
To illustrate it more: instead of a view like this
Use a view like this
Move the camera backwards and zoom in (by making the angle smaller):
camera {
    perspective
    location <0,0,-15> // move this backwards
    sky y
    up y
    angle 30 // make this smaller
    right (image_width/image_height)*x
    look_at <0,0,0>
}
You can go to the extreme by using an orthographic "camera":
camera {
    orthographic
    location <0,0,-15> // move backwards, no matter how far
    sky y
    up y * h // where h = height you want to cover
    right x * w // where w = width you want to cover
    look_at <0,0,0>
}
The other extreme is the fish-eye lens.
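The "move backwards and zoom in" step can be made exact: to keep the subject the same apparent size while narrowing the angle, scale the camera distance by the ratio of the half-angle tangents. A sketch in GDScript-style code (POV-Ray itself has no such helper; this is just the arithmetic):

func compensated_distance(old_distance: float, old_angle_deg: float, new_angle_deg: float) -> float:
    # The apparent half-width of the subject is distance * tan(angle / 2);
    # keeping that product constant gives the new distance directly.
    return old_distance * tan(deg_to_rad(old_angle_deg) / 2.0) / tan(deg_to_rad(new_angle_deg) / 2.0)

For example, narrowing a hypothetical 60-degree angle to 30 degrees at distance 15 gives compensated_distance(15.0, 60.0, 30.0) ≈ 32.3.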
You need to reduce the field of view of your camera's view frustum. The larger the field of view, the more stuff you're trying to squeeze into your camera's rendered output, and so parallel lines converge faster. With a smaller angle, as in your first example with a cube, the camera will be focused more on the cube and the area immediately around it than on the whole environment.
The other option is to bring your far plane much closer to your near plane, so you don't see things that are far off. In your first image example, you would then only see the first four or five grid squares.

Algorithm for Polygon Image Fill

I want an efficient algorithm to fill a polygon with an image; specifically, I want to fill an image into a trapezoid. Currently I am doing it in two steps:
1) First perform a StretchBlt on the image,
2) Then perform a column-by-column vertical StretchBlt.
Is there a better method to implement this? Is there a generic, fast algorithm that can fill any polygon?
Thanks,
Sunny
I can't help you with the distortion part, but filling polygons is pretty simple, especially if they are convex.
For each Y scan line have a table indexed by Y, containing a minX and maxX.
For each edge, run a DDA line-drawing algorithm, and use it to fill in the table entries.
For each Y line, now you have a minX and maxX, so you can just fill that segment of the scan line.
The hard part is a mental trick - do not think of coordinates as specifying pixels. Think of coordinates as lying between the pixels. In other words, if you have a rectangle going from point 0,0 to point 2,2, it should light up 4 pixels, not 9. Most problems with polygon-filling revolve around this issue.
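Here's a sketch of that scanline fill for a convex polygon (GDScript for illustration; the pixel-center subtlety above is glossed over, and set_pixel is a hypothetical callback):

func fill_convex_polygon(points: PackedVector2Array, set_pixel: Callable) -> void:
    # Per-scanline span table: y -> [min_x, max_x].
    var spans := {}
    for i in points.size():
        var a := points[i]
        var b := points[(i + 1) % points.size()]
        # Simple DDA along the edge, recording min/max X per Y.
        var steps := maxi(1, int(absf(b.y - a.y)))
        for s in steps + 1:
            var t := float(s) / float(steps)
            var x := lerpf(a.x, b.x, t)
            var y := int(roundf(lerpf(a.y, b.y, t)))
            if spans.has(y):
                spans[y][0] = minf(spans[y][0], x)
                spans[y][1] = maxf(spans[y][1], x)
            else:
                spans[y] = [x, x]
    # Fill each horizontal span between the recorded bounds.
    for y in spans:
        for x in range(int(ceilf(spans[y][0])), int(floorf(spans[y][1])) + 1):
            set_pixel.call(x, y)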
ADDED: OK, it sounds like what you're really asking is how to stretch the image to a non-rectangular (but trapezoidal) shape. I would do it in terms of parameters s and t, each going from 0 to 1. In other words, a location in the original rectangle is (x + w0*s, y + h0*t). Then define a function such that s and t also map to positions in the trapezoid, such as ((x + t*a) + w0*s*(1-t) + w1*s*t, y + h1*t). This defines a coordinate mapping between the two shapes. Then just scan x and y, converting to s and t, and map points from one shape to the other. You probably want a little smoothing filter rather than a direct copy.
ADDED to try to give a better explanation:
I'm supposing both your rectangle and trapezoid have top and bottom edges parallel with the X axis. The lower-left corner of the rectangle is <x0,y0>, and the lower-left corner of the trapezoid is <x1,y1>. I assume the rectangle's width and height are <w,h>.
For the trapezoid, I assume it has height h1, that its lower width is w0, and that its upper width is w1. I assume its left edge "slants" by a distance a, so that the position of its upper-left corner is <x1+a, y1+h1>. Now suppose you iterate <x,y> over the rectangle. At each point, compute s = (x-x0)/w and t = (y-y0)/h, which are both in the range 0 to 1. (I'll let you figure out how to do that without using floating point.) Then convert that to a coordinate in the trapezoid, as xt = (x1 + t*a) + s*(w0*(1-t) + w1*t) and yt = y1 + h1*t. Then <xt,yt> is the point in the trapezoid corresponding to <x,y> in the rectangle. Now I'll let you figure out how to do the copying :-) Good luck.
P.S. And please don't forget - coordinates fall between pixels, not on them.
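A sketch of that mapping in code (GDScript for illustration, using the answer's variable names):

# Map a point <x,y> in the rectangle (lower-left <x0,y0>, size <w,h>)
# to the corresponding point in the trapezoid (lower-left <x1,y1>,
# lower width w0, upper width w1, height h1, left edge slanted by a).
func rect_to_trapezoid(x: float, y: float,
        x0: float, y0: float, w: float, h: float,
        x1: float, y1: float, w0: float, w1: float, h1: float, a: float) -> Vector2:
    var s := (x - x0) / w  # 0..1 across the rectangle
    var t := (y - y0) / h  # 0..1 up the rectangle
    var xt := (x1 + t * a) + s * (w0 * (1.0 - t) + w1 * t)
    var yt := y1 + h1 * t
    return Vector2(xt, yt)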
Would it be feasible to sidestep the problem and use OpenGL to do this for you? OpenGL can render to memory contexts, and if you can take advantage of hardware acceleration this way, that will completely dwarf any code tweaks you can make on the CPU (although on some older cards, memory-context rendering may not be able to use the hardware).
If you want to do this completely in software, Mesa may be an option.
