Ultimate struggle with a full 3D space controller - Godot

Sorry if I'm being stupid or something, but I'm having a deep dread of working on "full 3D" space movement.
I'm trying to make a "space ship" KinematicBody controller that uses its basis vectors as the rotation reference, and that can strafe/move left, right, up and down based on its facing direction.
The issue I'm having is that I want to use a Vector3 to store all the input variables (the input strength in particular), but I can't find a convenient way to orient this vector or apply its components to the velocity.
I have a sort of cheap solution that I don't like: applying a rotation to the input vector so it "corresponds" to one of the basis vectors, but it starts to break at some angles.
Could somebody please suggest what I can change in my logic, or is there a way to use quaternion/matrix-related methods/formulas?

I'm not sure I fully understand what you want to do, but I can give you something to work with.
I'll assume that you already have the input as a Vector3. If not, you want to see Input.get_action_strength, Input.get_axis and Input.get_vector.
I'm also assuming that the breaking you encountered at some angles is a case of gimbal lock. But since you are asking about applying velocity, not rotation, I'll not go into that topic.
Since you are using a KinematicBody, I suppose you are calling move_and_slide or a similar method, which works in global space. But you want the input to be based on the current orientation. Thus, you would consider the Vector3 that represents the input to be in local space. And the issue is how to go from that local space to the global space that move_and_slide et al. need.
Transform
You might be familiar with to_local and to_global, which interpret the Vector3 as a position:
var global_input_vector:Vector3 = to_global(input_vector)
And the opposite operation would be:
input_vector = to_local(global_input_vector)
The problem with these is that, since they treat the Vector3 as a position, they will translate the vector depending on where the KinematicBody is. We can undo that translation:
var global_vec:Vector3 = to_global(local_vec) - global_transform.origin
And the opposite operation would be:
local_vec = to_local(global_vec + global_transform.origin)
By the way, this is another way to write the same code:
var global_vec:Vector3 = (global_transform * local_vec) - global_transform.origin
And the opposite operation would be:
local_vec = global_transform.affine_inverse() * (global_vec + global_transform.origin)
Which I'm mentioning because I want you to see the similarity with the following approach.
Basis
I would rather not consider the Vector3 to be a position, just a free vector. So we would transform it with only the Basis, like this:
var global_vec:Vector3 = global_transform.basis * local_vec
And the opposite operation would be:
local_vec = global_transform.affine_inverse().basis * global_vec
This approach will not have the translation problem.
You can think of the Basis as a 3 by 3 matrix, and the Transform is that same matrix augmented with a translation vector (origin).
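In symbols, with B as the 3 by 3 basis matrix and origin as the translation vector, the two approaches amount to this (a sketch of the same idea, not extra API):
global_vec = B * local_vec + origin   (full Transform: positions)
global_vec = B * local_vec            (Basis only: free vectors)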
Quat
However, if you only want rotation, let us use quaternions instead:
var global_vec:Vector3 = global_transform.basis.get_rotation_quat() * local_vec
And the opposite operation would be:
local_vec = global_transform.affine_inverse().basis.get_rotation_quat() * global_vec
Well, actually, let us invert just the quaternion:
local_vec = global_transform.basis.get_rotation_quat().inverse() * global_vec
These will only rotate the vector (no scaling, or any other transformation, just rotation) according to the current orientation of the KinematicBody.
Rotating a Transform
If you are trying to rotate a Transform, either…
Do this (quaternion):
transform = Transform(transform.basis * Basis(quaternion), transform.origin)
Or this (quaternion):
transform = transform * Transform(Basis(quaternion), Vector3.ZERO)
Or this (axis-angle):
transform = Transform(transform.basis.rotated(axis, angle), transform.origin)
Or this (axis-angle):
transform = transform * Transform.IDENTITY.rotated(axis, angle)
Or this (Euler angles):
transform = Transform(transform.basis * Basis(pitch, yaw, roll), transform.origin)
Or this (Euler angles):
transform = transform * Transform(Basis(pitch, yaw, roll), Vector3.ZERO)
Avoid this:
transform = transform.rotated(axis, angle)
The reason is that this rotation is applied before the translation (i.e. it rotates around the global origin instead of the current position), and you will end up with an undesirable result.
And yes, you could use rotate_x, rotate_y and rotate_z, or set rotation of a Spatial. But sometimes you need to work with a Transform directly.
See also:
Godot/Gdscript rotate + translate from local to world space.
How to LERP between 2 angles going the longest route or path in Godot.

Related

Splitting quadratic Bezier curve into several unequal parts

I am dealing with clipping of quadratic Bézier curves. Clipping is a standard graphics task: typically, no matter what we display on a screen, we only want to render the part that fits into the screen bounds, as an optimization.
For straight lines, there is the Cohen-Sutherland algorithm, and a slightly extended version of it is the Sutherland-Hodgman algorithm; the first deals with lines and the second with polygons.
Essentially, these algorithms split the computer screen into tic-tac-toe-like squares, where the central square is what fits on the screen, and we special-case each of left/right and above/below. Then, when one end of the line is off the screen and the other is not, we replace the x coordinate of that endpoint with the screen's maximum x value and calculate the corresponding y value. This becomes the new endpoint of the clipped line. Pretty simple, and it works well.
With Bézier curves, the same approach can be taken, only in addition to the endpoints we need to consider the control points. In the case of a quadratic curve, there is only one control point.
To clip the curve, we can do something very similar to Cohen-Sutherland. Only, depending on the situation, we might need to cut the original curve into up to five (5) pieces. Just as both ends of a straight line might be offscreen while the center is visible, the same situation needs to be handled with curves; here we additionally need to deal with the hull [height] of the curve causing a mid-section to be invisible. Therefore, we might end up with two new curves after the clipping.
Finding one of the coordinates for these curves is pretty easy. It is still the min/max coordinate for one of the axes, plus the corresponding value of the other coordinate. There is prior art for this; for example, calculate x for y is a good starting point. We want to adapt the formula so vectors turn into separate x and y coordinates, but the rest is doable.
Next, however, we still have an unsolved problem: these one or two new curves are completely new quadratic curves, and each will therefore have a new control point.
There is a thread at split quadratic curve into two where the author seems to be doing roughly what I need, albeit in a slightly different way. There is an accepted answer, yet I could not get the results to work.
I want to end-up with a function like:
function clipSegment(sx, sy, cx, cy, ex, ey, bounds) {
    let curves: {sx, sy, cx, cy, ex, ey}[] = [];
    ...
    return curves;
}
It should take the coordinates and a bounds object that has both min and max for both the x and y coordinates. I think that the Cohen-Sutherland approach with squares and bit-codes should work here just as well. We get more cases for curves, but everything is doable. My problem is the new control point coordinates. For example, we could calculate t from one of the coordinates, doing something like:
function getQuadraticPoint(t, sx, sy, cp1x, cp1y, ex, ey) {
    const x = (1 - t) * (1 - t) * sx + 2 * (1 - t) * t * cp1x + t * t * ex;
    const y = (1 - t) * (1 - t) * sy + 2 * (1 - t) * t * cp1y + t * t * ey;
    return { x, y };
}
Once we have the new start and/or end point, how do we get the new control points?
Some developers I found online, working on similar problems, recommended just working with t and changing the interval from 0..1 to 0..t. This, however, won't work easily with the Canvas 2D API: the 2D Path API needs the control point and the end point [after the pen moves to the beginning with moveTo].
I believe that the quadratic Bézier case should have a closed-form solution. Yet I have not figured out what it is. Any ideas?
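For reference, the closed form being asked about is the standard de Casteljau subdivision, which is also the construction behind the linked split quadratic curve into two thread. A minimal TypeScript sketch (the function name and return shape are mine, chosen to match clipSegment above):

// Split the quadratic (s, c, e) at parameter t using de Casteljau's algorithm.
// The two halves share the on-curve split point; their new control points are
// the first-level linear interpolations.
function splitQuadratic(sx: number, sy: number, cx: number, cy: number,
                        ex: number, ey: number, t: number) {
    // First-level lerps: the control points of the two halves.
    const c1x = sx + t * (cx - sx), c1y = sy + t * (cy - sy);
    const c2x = cx + t * (ex - cx), c2y = cy + t * (ey - cy);
    // Second-level lerp: the split point (the same value getQuadraticPoint returns).
    const px = c1x + t * (c2x - c1x), py = c1y + t * (c2y - c1y);
    return [
        { sx: sx, sy: sy, cx: c1x, cy: c1y, ex: px, ey: py },
        { sx: px, sy: py, cx: c2x, cy: c2y, ex: ex, ey: ey },
    ];
}

Clipping against a bound then reduces to finding the t where the curve crosses that bound and keeping the visible piece(s) of the split.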

Is there a graph-drawing tool that will allow me to constrain x, and automatically lay out y?

I am looking for a tool similar to graphviz that can render graphs, but that will allow me to constrain just the x coordinate of each node. Then, the tool will automatically choose y coordinates to make the graph look neat.
Basically, I want to make a timeline.
Language / platform / rendering medium are not very important.
If you want a neat-looking graph, a force-directed algorithm is going to be your best bet. One of the best ones is SFDP (developed by AT&T, included in graphviz), though I can't seem to find pseudocode or an easy implementation. I don't think there are any algorithms this specialized. Thankfully, it's easy to code your own. I'll present some pseudocode mostly lifted from Wikipedia, but with suitably one-dimensional modifications. I'll assume you have n vertices and that the vector of fixed x-positions is x, with entries x.i.
set all vertex velocities to (0,0)
set all vertex positions to (x.i, random)
while (KE > epsilon)
    KE = 0
    for each vertex v
        force = (0,0)
        for each vertex u != v
            force = force + (0, coulomb(u, v).y)
            if u is incident to v
                force = force + (0, hooke(u, v).y)
        v.velocity = (v.velocity + timestep * force) * damping
        v.position = v.position + timestep * v.velocity
        KE = KE + |v.velocity| ^ 2
Here the .y denotes taking the y-component of the force. This ensures that the x-components of the vertex positions never change from what you set them to be. The epsilon parameter is to be set by you, and should be small compared to what you expect KE (the kinetic energy) to be. Also, |v| denotes the magnitude of the vector v (all computations above are on 2-vectors, except the KE). Note that I set the mass of all the nodes to 1, but you can change that if you want.
The Hooke and Coulomb functions calculate the respective forces between nodes: the spring (Hooke) attraction grows linearly with the distance between vertices, while the Coulomb repulsion falls off with the square of the distance, so there is a guaranteed equilibrium. These functions look something like
def hooke(u, v)
    # attractive spring force, linear in the distance
    d = u.position - v.position
    return -k * d
def coulomb(u, v)
    # repulsive force, inverse-square in the distance
    d = u.position - v.position
    return C * d / |d|^3
where again the computations are in vector form (note C * d / |d|^3 has magnitude C / |d|^2 and points along d). C and k have real values, but experiment to get the graph you want. Tuning isn't usually necessary, because in two dimensions the scaling factors pretty much expand or contract the whole graph; but here the x-distances are fixed, so to get a good-looking graph you will have to change the values a bit.
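For concreteness, here is a minimal runnable sketch of the above in TypeScript (all names and constants are mine, and k, C, timestep, damping and epsilon will need tuning per graph):

type Vec2 = { x: number; y: number };

// xs: the fixed x-coordinate of each vertex; edges: pairs of vertex indices.
function layoutY(xs: number[], edges: [number, number][],
                 k = 0.05, C = 50, timestep = 0.05,
                 damping = 0.9, epsilon = 0.01): Vec2[] {
    const pos: Vec2[] = xs.map(x => ({ x, y: Math.random() }));
    const velY: number[] = xs.map(() => 0);
    const incident = (i: number, j: number) =>
        edges.some(([a, b]) => (a === i && b === j) || (a === j && b === i));
    let ke = Infinity;
    while (ke > epsilon) {
        ke = 0;
        pos.forEach((v, i) => {
            let forceY = 0; // only the y-component of the net force is applied
            pos.forEach((u, j) => {
                if (i === j) return;
                const dx = v.x - u.x, dy = v.y - u.y;
                const d = Math.hypot(dx, dy) || 1e-9;
                forceY += C * dy / (d * d * d);        // coulomb(u, v).y
                if (incident(i, j)) forceY += -k * dy; // hooke(u, v).y
            });
            velY[i] = (velY[i] + timestep * forceY) * damping;
            v.y += timestep * velY[i]; // x never changes
            ke += velY[i] * velY[i];
        });
    }
    return pos;
}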

Why won't my raytracer recreate the "mount" scene?

I'm trying to render the "mount" scene from Eric Haines' Standard Procedural Database (SPD), but the refraction part just doesn't want to co-operate. I've tried everything I can think of to fix it.
This one is my render (with Watt's formula):
[image: my render, Watt's formula] (source: philosoraptor.co.za)
This is my render using the "normal" formula:
[image: my render, "normal" formula] (source: philosoraptor.co.za)
And this one is the correct render:
[image: correct render] (source: philosoraptor.co.za)
As you can see, there are only a couple of errors, mostly around the poles of the spheres. This makes me think that refraction, or some precision error, is to blame.
Please note that there are actually 4 spheres in the scene; their NFF definitions (s x_coord y_coord z_coord radius) are:
s -0.8 0.8 1.20821 0.17
s -0.661196 0.661196 0.930598 0.17
s -0.749194 0.98961 0.930598 0.17
s -0.98961 0.749194 0.930598 0.17
That is, there is a fourth sphere behind the more obvious three in the foreground. It can be seen in the gap left between these three spheres.
Here is a picture of that fourth sphere alone:
[image: fourth sphere alone] (source: philosoraptor.co.za)
And here is a picture of the first sphere alone:
[image: first sphere alone] (source: philosoraptor.co.za)
You'll notice that many of the oddities present in both my version and the correct version are missing. We can conclude that these effects are the result of interactions between the spheres; the question is which interactions?
What am I doing wrong? Below are some of the potential errors I've already considered:
Refraction vector formula.
As far as I can tell, this is correct. It's the same formula used by several websites and I verified the derivation personally. Here's how I calculate it:
double sinI2 = eta * eta * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - sqrt(1.0f - sinI2)));
transmit = transmit.normalise();
I found an alternate formula in 3D Computer Graphics, 3rd Ed by Alan Watt. It gives a closer approximation to the correct image:
double etaSq = eta * eta;
double sinI2 = etaSq * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - (sqrt(1.0f - sinI2) / etaSq)));
transmit = transmit.normalise();
The only difference is that I'm dividing by eta^2 at the end.
Total internal reflection.
I tested for this, using the following conditional before the rest of my intersection code:
if (sinI2 <= 1)
Calculation of eta.
I use a stack-like approach for this problem:
/* Entering object. */
if (r.normal.dot(r.dir) < 0)
{
double eta1 = r.iorStack.back();
double eta2 = m.ior;
eta = eta1 / eta2;
r.iorStack.push_back(eta2);
}
/* Exiting object. */
else
{
double eta1 = r.iorStack.back();
r.iorStack.pop_back();
double eta2 = r.iorStack.back();
eta = eta1 / eta2;
}
As you can see, this stores the objects that previously contained the ray in a stack. When exiting, the code pops the current IOR off the stack and uses it, along with the IOR under it, to compute eta. As far as I know, this is the most correct way to do it.
This works for nested transmitting objects. However, it breaks down for intersecting transmitting objects. The problem here is that you need to define the IOR of the intersection region independently, which the NFF file format does not do. It's unclear, then, what the "correct" course of action is.
Moving the new ray's origin.
The new ray's origin has to be moved slightly along the transmitted path so that it doesn't intersect at the same point as the previous one.
p = r.intersection + transmit * 0.01f;
I've tried making this value smaller (0.001f and 0.0001f), but that makes the spheres appear solid. I guess those values don't move the rays far enough away from the previous intersection point.
EDIT: The problem here was that the reflection code was doing the same thing. So when an object is reflective as well as refractive, the origin of the ray ended up in completely the wrong place.
Number of ray bounces.
I've artificially limited the number of ray bounces to 4. I tested raising this limit to 10, but that didn't fix the problem.
Normals.
I'm pretty sure I'm calculating the normals of the spheres correctly. I take the intersection point, subtract the centre of the sphere and divide by the radius.
Just a guess based on doing an image diff (and without reading the rest of your question): the problem looks to me to be the refraction on the back side of the sphere. You might be:
doing it backwards: e.g. reversing (or not reversing) the indices of refraction.
missing it entirely?
One way to check for this would be to look at the mount through a cube that is almost facing the camera. If the refraction is correct, the picture should be offset slightly but otherwise unaltered. If it's not right, then the picture will seem slightly tilted.
So after more than a year, I finally figured out what was going on here. Clear minds and all that. I was completely off track with the formula. I'm instead using a formula by Heckbert now, which I am sure is correct because I proved it myself using geometry and discrete math.
Here's the correct vector calculation:
double c1 = v.dot(n) * -1;
double c1Sq = pow(c1, 2);
/* Heckbert's formula requires eta to be eta2 / eta1, so I have to flip it here. */
eta = 1 / eta;
double etaSq = pow(eta, 2);
if (etaSq + c1Sq >= 1)
{
    Vector transmit = (v / eta) + (n / eta) * (c1 - sqrt(etaSq - 1 + c1Sq));
    transmit = transmit.normalise();
    ...
}
else
{
    /* Total internal reflection. */
}
In the code above, eta is eta1 (the IOR of the surface from which the ray is coming) over eta2 (the IOR of the destination surface), v is the incident ray and n is the normal.
There was another problem, which confused things some more: I had to flip the normal when exiting an object (which is obvious; I missed it because the other errors were obscuring it).
Lastly, my line of sight algorithm (to determine whether a surface is illuminated by a point light source) was not properly passing through transparent surfaces.
So now my images line up properly :)

Rotating 3D cube perspective problem

Since I was 13 and playing around with AMOS 3D, I've been wanting to learn how to code 3D graphics. Now, 10 years later, I finally think I have accumulated enough maths to give it a go.
I have followed various tutorials, and defined screenX (and screenY, equivalently) as
screenX = (pointX * cameraX) / distance
(Plus offsets and scaling.)
My problem is with what the distance variable actually refers to. I have seen distance defined as the difference in z between the camera and the point. That cannot be completely right, though, since x and y affect the actual distance from the camera to the point just as z does. I implemented distance as the actual Euclidean distance, but the result gives a somewhat skewed perspective, as if the scene had "too much" perspective.
My "actual distance" implementation was along the lines of:
distance = new Vector(pointX, pointY, cameraZ - pointZ).magnitude()
Playing around with the code, I added an extra variable to my equation, a perspectiveCoefficient as follows:
distance = new Vector(pointX * perspectiveCoefficient,
                      pointY * perspectiveCoefficient,
                      cameraZ - pointZ).magnitude()
For some reason that is beyond me, I tend to get the best result setting the perspectiveCoefficient to 1/sqrt(2).
My 3D test cube is at http://vega.soi.city.ac.uk/~abdv866/3dcubetest/3dtest.svg. (Tested in Safari and FF.) It prompts you for a perspectiveCoefficient, where 0 gives a perspective without taking x/y distance into consideration, and 1 gives you a perspective where x, y and z distance is equally considered. It defaults to 1/sqrt(2). The cube can be rotated about x and y using the arrow keys. (For anyone interested, the relevant code is in update() in the View.js file.)
Grateful for any ideas on this.
Usually, projection is done on the Z=0 plane from an eye position behind this plane. The projected point is the intersection of the line (Pt,Eye) with the Z=0 plane. At the end you get something like:
screenX = scaling * pointX / (1 + pointZ/eyeDist)
screenY = scaling * pointY / (1 + pointZ/eyeDist)
I assume here the camera is at (0,0,0) and eye at (0,0,-eyeDist). If eyeDist becomes infinite, you obtain a parallel projection.
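As a minimal sketch of that projection (TypeScript, all names mine):

// Project a point onto the Z=0 plane, with the eye at (0, 0, -eyeDist).
// Derived from similar triangles; as eyeDist grows, perspective weakens
// toward a parallel projection.
function project(p: { x: number; y: number; z: number },
                 eyeDist: number, scaling = 1) {
    const w = 1 + p.z / eyeDist; // only depth drives the divisor
    return { screenX: scaling * p.x / w, screenY: scaling * p.y / w };
}

Note that x and y affect where the point lands but not how much it is scaled; that is why dividing by the full Euclidean distance produced the skewed, "too much perspective" look.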

Is it possible to do an algebraic curve fit with just a single pass of the sample data?

I would like to do an algebraic curve fit of 2D data points, but for various reasons it isn't really possible to have much of the sample data in memory at once, and iterating through all of it is an expensive process.
(The reason for this is that actually I need to fit thousands of curves simultaneously based on gigabytes of data which I'm reading off disk, and which is therefore sloooooow).
Note that the number of polynomial coefficients will be limited (perhaps 5-10), so an exact fit will be extremely unlikely, but this is ok as I'm trying to find an underlying pattern in data with a lot of random noise.
I understand how one can use a genetic algorithm to fit a curve to a dataset, but this requires many passes through the sample data, and thus isn't practical for my application.
Is there a way to fit a curve with a single pass of the data, where the state that must be maintained from sample to sample is minimal?
I should add that the nature of the data is that the points may lie anywhere on the X axis between 0.0 and 1.0, but the Y values will always be either 1.0 or 0.0.
So, in Java, I'm looking for a class with the following interface:
public interface CurveFit {
    public void addData(double x, double y);
    public List<Double> getBestFit(); // Returns the polynomial coefficients
}
The class that implements this must not need to keep much data in its instance fields, no more than a kilobyte even for millions of data points. This means that you can't just store the data as you get it to do multiple passes through it later.
edit: Some have suggested that finding an optimal curve in a single pass may be impossible; however, an optimal fit is not required, just as close as we can get it in a single pass.
The bare bones of an approach might be: if we have a way to start with a curve, and then a way to modify it to get it slightly closer to new data points as they come in (effectively a form of gradient descent), it is hoped that with sufficient data (and the data will be plentiful) we get a pretty good curve. Perhaps this inspires someone to a solution.
Yes, it is a projection. For
y = X beta + error
where the lowercase terms are vectors and X is a matrix, you have the solution vector
beta_hat = inverse(X'X) X' y
as per the OLS page. You almost never want to compute this directly, but rather use LU, QR or SVD decompositions. References are plentiful in the statistics literature.
If your problem has only one parameter (and x is hence a vector as well) then this reduces to just summation of cross-products between y and x.
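To make the single-pass idea concrete for a polynomial of degree d: the normal-equations matrix X'X and the vector X'y depend only on running sums of the powers of x (up to x^(2d)) and of y * x^i, so the state is a few dozen doubles no matter how many points stream past, which fits the asker's kilobyte budget. A sketch in TypeScript (class and method names are mine, mirroring the CurveFit interface above; Gaussian elimination on the tiny system is adequate at degree 5-10, though QR/SVD is preferred in general):

class OnePassPolyFit {
    private xPow: number[];  // running sums of x^0 .. x^(2d)
    private xyPow: number[]; // running sums of y * x^0 .. y * x^d

    constructor(private degree: number) {
        this.xPow = new Array(2 * degree + 1).fill(0);
        this.xyPow = new Array(degree + 1).fill(0);
    }

    addData(x: number, y: number): void {
        let p = 1; // p holds x^i, updated incrementally
        for (let i = 0; i <= 2 * this.degree; i++) {
            this.xPow[i] += p;
            if (i <= this.degree) this.xyPow[i] += y * p;
            p *= x;
        }
    }

    // Solve (X'X) beta = X'y by Gauss-Jordan elimination with partial
    // pivoting; (X'X)[i][j] is the running sum of x^(i+j). Assumes enough
    // distinct x values that the system is nonsingular.
    getBestFit(): number[] {
        const n = this.degree + 1;
        const a: number[][] = Array.from({ length: n }, (_, i) => [
            ...Array.from({ length: n }, (_, j) => this.xPow[i + j]),
            this.xyPow[i],
        ]);
        for (let col = 0; col < n; col++) {
            let piv = col;
            for (let r = col + 1; r < n; r++)
                if (Math.abs(a[r][col]) > Math.abs(a[piv][col])) piv = r;
            [a[col], a[piv]] = [a[piv], a[col]];
            for (let r = 0; r < n; r++) {
                if (r === col) continue;
                const f = a[r][col] / a[col][col];
                for (let c = col; c <= n; c++) a[r][c] -= f * a[col][c];
            }
        }
        return a.map((row, i) => row[n] / a[i][i]); // constant term first
    }
}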
If you don't mind getting a straight-line "curve", then you only need six variables for any amount of data. Here's the source code that's going into my upcoming book; I'm sure that you can figure out how the DataPoint class works:
Interpolation.h:
#ifndef __INTERPOLATION_H
#define __INTERPOLATION_H

#include "DataPoint.h"

class Interpolation
{
private:
    int m_count;
    double m_sumX;
    double m_sumXX; /* sum of X*X */
    double m_sumXY; /* sum of X*Y */
    double m_sumY;
    double m_sumYY; /* sum of Y*Y */

public:
    Interpolation();
    void addData(const DataPoint& dp);
    double slope() const;
    double intercept() const;
    double interpolate(double x) const;
    double correlate() const;
};

#endif // __INTERPOLATION_H
Interpolation.cpp:
#include <cmath>
#include "Interpolation.h"

Interpolation::Interpolation()
{
    m_count = 0;
    m_sumX = 0.0;
    m_sumXX = 0.0;
    m_sumXY = 0.0;
    m_sumY = 0.0;
    m_sumYY = 0.0;
}

void Interpolation::addData(const DataPoint& dp)
{
    m_count++;
    m_sumX += dp.getX();
    m_sumXX += dp.getX() * dp.getX();
    m_sumXY += dp.getX() * dp.getY();
    m_sumY += dp.getY();
    m_sumYY += dp.getY() * dp.getY();
}

double Interpolation::slope() const
{
    return (m_sumXY - (m_sumX * m_sumY / m_count)) /
           (m_sumXX - (m_sumX * m_sumX / m_count));
}

double Interpolation::intercept() const
{
    return (m_sumY / m_count) - slope() * (m_sumX / m_count);
}

double Interpolation::interpolate(double X) const
{
    return intercept() + slope() * X;
}

double Interpolation::correlate() const
{
    /* Pearson correlation from the running sums; the cross-products must be
       centred on the means, otherwise this computes cosine similarity. */
    double covXY = m_sumXY - (m_sumX * m_sumY / m_count);
    double varX = m_sumXX - (m_sumX * m_sumX / m_count);
    double varY = m_sumYY - (m_sumY * m_sumY / m_count);
    return covXY / sqrt(varX * varY);
}
Why not use a ring buffer of some fixed size (say, the last 1000 points) and do a standard QR decomposition-based least squares fit to the buffered data? Once the buffer fills, each time you get a new point you replace the oldest and re-fit. That way you have a bounded working set that still has some data locality, without all the challenges of live stream (memoryless) processing.
Are you limiting the number of polynomial coefficients (i.e. fitting to a max power of x in your polynomial)?
If not, then you don't need a "best fit" algorithm - you can always fit N data points EXACTLY to a polynomial of N coefficients.
Just use matrices to solve N simultaneous equations for N unknowns (the N coefficients of the polynomial).
If you are limiting to a max number of coefficients, what is your max?
Following your comments and edit:
What you want is a low-pass filter to filter out noise, not fit a polynomial to the noise.
Given the nature of your data:
the points may lie anywhere on the X axis between 0.0 and 1.0, but the Y values will always be either 1.0 or 0.0.
Then you don't need even a single pass, as these two lines will pass exactly through every point:
X = [0.0 ... 1.0], Y = 0.0
X = [0.0 ... 1.0], Y = 1.0
Two short line segments, unit length, and every point falls on one line or the other.
Admittedly, an algorithm to find a good curve fit for arbitrary points in a single pass is interesting, but (based on your question), that's not what you need.
Assuming that you don't know which point should belong to which curve, something like a Hough Transform might provide what you need.
The Hough Transform is a technique that allows you to identify structure within a data set. One use is in computer vision, where it allows easy identification of lines and borders within the field of view.
Advantages for this situation:
Each point need be considered only once
You don't need to keep a data structure for each candidate line, just one (complex, multi-dimensional) structure
Processing of each line is simple
You can stop at any point and output a set of good matches
You never discard any data, so it's not reliant on any accidental locality of reference
You can trade off between accuracy and memory requirements
It isn't limited to exact matches, but will highlight partial matches too.
An approach
To find cubic fits, you'd construct a 4-dimensional Hough space into which you'd project each of your data points. Hotspots within Hough space would give you the parameters for the cubic through those points.
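To illustrate the mechanics on the cheapest case, a straight line y = m*x + b (two parameters, hence a 2-dimensional accumulator; the cubic case is the same idea in four dimensions), here is a sketch in TypeScript with all names, ranges and bin counts mine:

// Quantized Hough accumulator for lines y = m*x + b.
// Each point votes once per slope bin for the intercept it implies.
function houghLines(points: { x: number; y: number }[],
                    mMin = -5, mMax = 5, mBins = 100,
                    bMin = -5, bMax = 5, bBins = 100): number[][] {
    const acc = Array.from({ length: mBins }, () => new Array(bBins).fill(0));
    for (const { x, y } of points) { // each point is considered exactly once
        for (let i = 0; i < mBins; i++) {
            const m = mMin + (i + 0.5) * (mMax - mMin) / mBins;
            const b = y - m * x; // intercept implied by this point and slope
            const j = Math.floor((b - bMin) / (bMax - bMin) * bBins);
            if (j >= 0 && j < bBins) acc[i][j]++;
        }
    }
    return acc; // hotspots (largest counts) are the best-supported lines
}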
You need the solution to an overdetermined linear system. The popular methods are normal equations (not usually recommended), QR factorization, and singular value decomposition (SVD). Wikipedia has decent explanations; Trefethen and Bau is very good. Your options:
1. Out-of-core implementation via the normal equations. This requires the product A'A, where A has many more rows than columns (so the result is very small). The matrix A is completely defined by the sample locations, so you don't have to store it; thus computing A'A is reasonably cheap (very cheap if you don't need to hit memory for the node locations). Once A'A is computed, you get the solution in one pass through your input data, but the method can be unstable.
2. Implement an out-of-core QR factorization. Classical Gram-Schmidt will be fastest, but you have to be careful about stability.
3. Do it in-core with distributed memory (if you have the hardware available). Libraries like PLAPACK and ScaLAPACK can do this; the performance should be much better than option 1. The parallel scalability is not fantastic, but will be fine if it's a problem size that you would even think about doing in serial.
4. Use iterative methods to compute an SVD. Depending on the spectral properties of your system (maybe after preconditioning), this could converge very fast, and it does not require storage for the matrix (which in your case has 5-10 columns, each of which is the size of your input data). A good library for this is SLEPc; you only have to compute the product of the Vandermonde matrix with a vector (so you only need to store the sample locations). This is very scalable in parallel.
I believe I found the answer to my own question based on a modified version of this code. For those interested, my Java code is here.
