I'm currently attempting to create a first-person space flight camera.
First, allow me to define what I mean by that.
Note that I'm using row-major matrices in my math library (that is, the basis vectors of my 4x4 matrices are laid out in rows, and the affine translation part is in the fourth row). Hopefully this clarifies the order in which I multiply my matrices.
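For reference, with this convention a translation by (tx, ty, tz) would be laid out as below, with row vectors multiplying on the left (v' = v * M):
| 1   0   0   0 |
| 0   1   0   0 |
| 0   0   1   0 |
| tx  ty  tz  1 |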
What I Have So Far
So far, I have successfully implemented a simple first-person camera view. The code for this is as follows:
fn fps_camera(&mut self) -> beagle_math::Mat4 {
    // Inverse (transposed) pitch and yaw rotations form the view rotation.
    let pitch_matrix = beagle_math::Mat4::rotate_x(self.pitch_in_radians);
    let yaw_matrix = beagle_math::Mat4::rotate_y(self.yaw_in_radians);
    let view_matrix = yaw_matrix.get_transposed().mul(&pitch_matrix.get_transposed());

    // Move the world opposite to the camera's position, then apply the view rotation.
    let translate_matrix = beagle_math::Mat4::translate(&self.position.mul(-1.0));
    translate_matrix.mul(&view_matrix)
}
This works as expected. I am able to walk around and look around with the mouse.
What I Am Attempting to Do
However, an obvious limitation of this implementation is that pitch and yaw are always defined relative to a global "up" direction, so the moment I pitch more than 90 degrees, turning the world essentially upside-down, my yaw movement is inverted.
What I would like to attempt to implement is what could be seen more as a first-person "space flight" camera. That is, no matter what your current orientation is, pitching up and down with the mouse will always translate into up and down in the game, relative to your current orientation. And yawing left and right with your mouse will always translate into a left and right direction, relative to your current orientation.
Unfortunately, this problem has had me stuck for days now. Bear with me, as I am new to the field of linear algebra and matrix transformations, so I must be misunderstanding or overlooking something fundamental. What I've implemented so far might thus look... stupid and naive :) Probably because it is.
What I've Tried So Far
The way I always end up thinking about this problem is to essentially redefine the world's orientation every frame. That is, in a given frame you translate, pitch, and yaw the world coordinate space with your view matrix, and then somehow redefine that orientation as the new default, or zero rotation. In the next frame you apply new pitch and yaw rotations on top of this new default orientation. By my thinking, mouse movement would then always translate directly into up, down, left, and right, no matter how you are oriented, because the world coordinate space is always being redefined relative to the previous orientation, whereas the simple first-person camera always starts from the same initial coordinate space.
The latest code I have which attempts to implement my idea is as follows:
fn space_camera(&mut self) -> beagle_math::Mat4 {
    // View rotation accumulated from all previous frames.
    let previous_pitch_matrix = beagle_math::Mat4::rotate_x(self.previous_pitch);
    let previous_yaw_matrix = beagle_math::Mat4::rotate_y(self.previous_yaw);
    let previous_view_matrix = previous_yaw_matrix.get_transposed().mul(&previous_pitch_matrix.get_transposed());

    // View rotation from this frame's mouse deltas only.
    let pitch_matrix = beagle_math::Mat4::rotate_x(self.pitch_in_radians);
    let yaw_matrix = beagle_math::Mat4::rotate_y(self.yaw_in_radians);
    let view_matrix = yaw_matrix.get_transposed().mul(&pitch_matrix.get_transposed());

    let translate_matrix = beagle_math::Mat4::translate(&self.position.mul(-1.0));

    // SAVES: accumulate this frame's deltas into the running totals.
    self.previous_pitch += self.pitch_in_radians;
    self.previous_yaw += self.yaw_in_radians;

    // RESETS: clear the deltas for the next frame.
    self.pitch_in_radians = 0.0;
    self.yaw_in_radians = 0.0;

    translate_matrix.mul(&(previous_view_matrix.mul(&view_matrix)))
}
This, however, does nothing to solve the issue. It actually gives the exact same result and problem as the FPS camera.
My thinking behind this code is basically: always keep track of an accumulated pitch and yaw (in the code, previous_pitch and previous_yaw) based on the deltas each frame. The deltas are pitch_in_radians and yaw_in_radians, and they are reset each frame.
I then start off by constructing a view matrix that represents how the world was oriented previously, that is, previous_view_matrix. I then construct a new view matrix based on the deltas of this frame, that is, view_matrix.
I then attempt to build a view matrix that does the following:
1. Translate the world in the direction opposite the camera's current position. Nothing is different here from the FPS camera.
2. Orient that world according to my accumulated orientation so far (using previous_view_matrix). This should represent the default starting point for the deltas of my current frame's movement.
3. Apply the deltas of the current frame using the current view matrix (view_matrix).
My hope was that in step 3, the previous orientation would be seen as the starting point for a new rotation. That is, if the world was upside-down in the previous orientation, view_matrix would apply its yaw in terms of the camera's "up", which would then avoid the problem of inverted controls.
I must surely be either attacking the problem from the wrong angle, or misunderstanding essential parts of matrix multiplication with rotations.
Can anyone help pinpoint where I'm going wrong?
[EDIT] - Rolling even when you only pitch and yaw the camera
For anyone just stumbling upon this, I fixed it by a combination of the marked answer and Locke's answer (ultimately, in the example given in my question, I also messed up the matrix multiplication order).
Additionally, when you get your camera right, you may stumble upon the odd side-effect that holding the camera stationary, and just pitching and yawing it about (such as moving your mouse around in a circle), will result in your world slowly rolling as well.
This is not a mistake, this is how rotations work in 3D. Kevin added a comment in his answer that explains it, and additionally, I also found this GameDev Stack Exchange answer explaining it in further detail.
The problem is that two numbers, pitch and yaw, provide insufficient degrees of freedom to represent consistent free rotation behavior in space without any “horizon”. Two numbers can represent a look-direction vector but they cannot represent the third component of camera orientation, called roll (rotation about the “depth” axis of the screen). As a consequence, no matter how you implement the controls, you will find that in some orientations the camera rolls strangely, because the effect of trying to do the math with this information is that every frame the roll is picked/reconstructed based on the pitch and yaw.
The minimal solution to this is to add a roll component to your camera state. However, this approach (“Euler angles”) is both tricky to compute with and has numerical stability issues (“gimbal lock”).
Instead, you should represent your camera/player orientation as a quaternion, a mathematical structure that is good for representing arbitrary rotations. Quaternions are used somewhat like rotation matrices, but have fewer components; you'll multiply quaternions by quaternions to apply player input, and convert quaternions to matrices to render with.
It is very common for general purpose game engines to use quaternions for describing objects' rotations. I haven't personally written quaternion camera code (yet!) but I'm sure the internet contains many examples and longer explanations you can work from.
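For illustration, here is a minimal, self-contained sketch of the idea in Rust. It does not use beagle_math (which, as far as the question shows, has no quaternion type), and every name in it is made up for this example; the point is just the shape of the technique: turn each frame's pitch/yaw deltas into small rotations about the camera's current local axes and multiply them onto an accumulated orientation quaternion.
// Hypothetical quaternion camera sketch; not part of beagle_math.
#[derive(Clone, Copy)]
struct Quat { w: f32, x: f32, y: f32, z: f32 }

impl Quat {
    fn identity() -> Self { Quat { w: 1.0, x: 0.0, y: 0.0, z: 0.0 } }

    // Rotation of `angle` radians about a unit axis.
    fn from_axis_angle(axis: [f32; 3], angle: f32) -> Self {
        let (s, c) = (angle * 0.5).sin_cos();
        Quat { w: c, x: axis[0] * s, y: axis[1] * s, z: axis[2] * s }
    }

    // Hamilton product: (self * r) applies r first, then self.
    fn mul(self, r: Quat) -> Quat {
        Quat {
            w: self.w * r.w - self.x * r.x - self.y * r.y - self.z * r.z,
            x: self.w * r.x + self.x * r.w + self.y * r.z - self.z * r.y,
            y: self.w * r.y - self.x * r.z + self.y * r.w + self.z * r.x,
            z: self.w * r.z + self.x * r.y - self.y * r.x + self.z * r.w,
        }
    }

    // Re-normalize occasionally so floating-point drift does not accumulate.
    fn normalized(self) -> Quat {
        let n = (self.w * self.w + self.x * self.x + self.y * self.y + self.z * self.z).sqrt();
        Quat { w: self.w / n, x: self.x / n, y: self.y / n, z: self.z / n }
    }
}

struct Camera { orientation: Quat }

impl Camera {
    fn new() -> Self { Camera { orientation: Quat::identity() } }

    // `pitch_delta` / `yaw_delta` are this frame's mouse deltas in radians.
    fn apply_mouse(&mut self, pitch_delta: f32, yaw_delta: f32) {
        let pitch = Quat::from_axis_angle([1.0, 0.0, 0.0], pitch_delta); // local X
        let yaw = Quat::from_axis_angle([0.0, 1.0, 0.0], yaw_delta);     // local Y
        // Post-multiplying applies the deltas in the camera's current local
        // frame rather than around the fixed world axes.
        self.orientation = self.orientation.mul(yaw).mul(pitch).normalized();
    }
}
To render, you would convert the conjugate of orientation (the view transform is the inverse of the camera's transform) into a rotation matrix and combine it with the translation, exactly as the FPS camera above does with its transposed rotation matrices.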
It looks like a lot of the difficulty you are having comes from trying to normalize the transformation in order to apply the new translation, and that is probably a large part of what is tripping you up. I would suggest changing how you store your position and rotation: instead, let your view matrix define your position.
/// Apply rotation based on the change in mouse position
pub fn on_mouse_move(&mut self, dx: f32, dy: f32) {
    // I think this is correct, but it might need tweaking
    let rotation_matrix = Mat4::rotate_xy(-dy, dx);
    self.apply_movement(&rotation_matrix, &Vec3::zero())
}
/// Append axis-aligned movement relative to the camera and rotation
pub fn apply_movement(&mut self, rotation: &Mat4<f32>, translation: &Vec3<f32>) {
    // Create transformation matrix for translation
    let translation = Mat4::translate(translation);

    // Append translation and rotation to existing view matrix
    self.view_matrix = self.view_matrix * translation * rotation;
}

/// You can get the position from the last column [x, y, z, w] of your view matrix.
pub fn translation(&self) -> Vec3<f32> {
    self.view_matrix.column(3).into()
}
I made a couple assumptions about the library:
Mat4 implements Mul<Self> so you do not need to call x.mul(y) explicitly and can instead use x * y. Same goes for Sub.
There exists a Mat4::rotate_xy function. If there isn't one, it would be equivalent to Mat4::rotate_xyz(delta_pitch, delta_yaw, 0.0) or Mat4::rotate_x(delta_pitch) * Mat4::rotate_y(delta_yaw).
I'm somewhat eyeballing the equations so hopefully this is correct. The main idea is to take the delta from the previous inputs and create matrices from that which can then be added on top of the previous view_matrix. If you attempt to take the difference after creating transformation matrices it will only be more work for you (and your processor).
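With those assumptions, a hypothetical call site for keyboard movement might look like the following; Mat4::identity and Vec3::new are further assumptions about the library, speed and dt are whatever your frame loop provides, and the sign of the Z step depends on your handedness convention.
// Hypothetical usage of the sketch above: step along the camera's local Z axis
// this frame, with no additional rotation. Flip the sign of the step if forward
// turns out to be the other way in your convention.
let forward_step = Vec3::new(0.0, 0.0, -(speed * dt));
camera.apply_movement(&Mat4::identity(), &forward_step);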
As a side note I see you are using self.position.mul(-1.0). This tells me that your projection matrix is probably backwards. You likely want to adjust your projection matrix by scaling it by a factor of -1 in the z axis.
I am making a platformer game with all the basic code that is usually found in Godot platformers (KinematicBody2D, move_and_slide, etc.). The problem is that the character can climb really steep slopes, which I don't like.
For example, the character can move up the slope shown in the picture simply by pressing left (no jumping); he just slowly slides up.
But when I make the slope just a little bit steeper, the character can't scale it.
My question: is there a way to set the maximum angle of a slope that a character can climb? Thanks in advance.
If you are using move_and_slide you need to:
Specify an up_direction vector, which goes into the second parameter of move_and_slide. Without it, everything is considered a wall, and no sliding happens. Since you are experiencing sliding, I suspect you already have this.
And specify a floor_max_angle float, which goes into the fifth parameter of move_and_slide. This controls how inclined a surface can be and still be considered a floor/slope. If a surface is steeper than this angle, it will be considered a wall, and no sliding happens. The angle is measured between the normal of the surface and the up_direction vector provided.
By default the value of floor_max_angle is 0.785398 radians (which is equivalent to 45°).
The situation you describe suggests you are using a number closer to a quarter turn (i.e. π/2, approximately 1.57 radians), probably something like 1.2 or 1.3? You want a smaller value.
By the way, you can convert from radians to degrees with rad2deg and vice versa with deg2rad.
I'll quickly go over a couple more issues you may find along the way:
The character slides down when idle due to gravity. To prevent this, you want to set stop_on_slope to true. This is the third parameter of move_and_slide.
The character may jitter and move more than desired when going down the slope with user input (caused by very small jumps, usually perceived as jitter). This is why you would want move_and_slide_with_snap, which has an extra snap vector parameter that lets you specify a direction in which the character should stick to the ground while moving. By the way, you may want to use get_slide_collision to figure out what surface the character slid on, if any, and what its normal is.
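Putting those pieces together, here is a minimal sketch of what the call could look like, assuming Godot 3.x, a KinematicBody2D script, and placeholder speeds and input actions:
extends KinematicBody2D

const SPEED = 200.0
const GRAVITY = 1200.0

var velocity = Vector2.ZERO

func _physics_process(delta):
    velocity.x = (Input.get_action_strength("ui_right") - Input.get_action_strength("ui_left")) * SPEED
    velocity.y += GRAVITY * delta
    # Arguments: linear_velocity, up_direction, stop_on_slope, max_slides,
    # floor_max_angle - surfaces steeper than this count as walls, not floors.
    velocity = move_and_slide(velocity, Vector2.UP, true, 4, deg2rad(45))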
The function insertObservation in COccupancyGridMap2D takes two parameters, a CPose3D and a CObservation2DRangeScan. Even though both of these values are accurate, with no noise, the grid is producing warped boundaries. The only thing I can think of is that the scan.aperture setting might be producing this effect, but it is correct, with a range of 2*PI, and other visual aids for point clouds show no warping at all. Below is an illustration of this.
On the right, the occupancy grid is warped compared to the ground-truth square boundary. The points on the left look fine and use the same aperture and loadFromVectors settings.
Here is example code to try to verify the warp effect yourself.
COccupancyGridMap2D gridmap;
gridmap.setSize(-4.0,4.0,-4.0,4.0,0.025f);
#define SCANS_SIZE 100
char SCAN_VALID[] = {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1};
CPose3D transform = CPose3D(0,0,0,0,0,0);
CObservation2DRangeScan read_scan;
read_scan.aperture = 2*M_PIf;
read_scan.rightToLeft = true;
vector<float> landmark = {2.9f,2.906f,2.924f,2.953f,2.996f,3.052f,3.124f,3.212f,3.319f,3.447f,3.601f,3.786f,4.007f,3.948f,3.736f,3.560f,3.413f,3.290f,3.188f,3.104f,3.037f,2.984f,2.945f,2.918f,2.903f,2.900f,2.909f,2.930f,2.963f,3.009f,3.069f,3.144f,3.237f,3.349f,3.483f,3.644f,3.837f,4.069f,3.891f,3.689f,3.521f,3.380f,3.263f,3.166f,3.086f,3.022f,2.973f,2.937f,2.913f,2.901f,2.901f,2.913f,2.937f,2.973f,3.022f,3.086f,3.166f,3.263f,3.380f,3.521f,3.689f,3.891f,4.069f,3.837f,3.644f,3.483f,3.349f,3.237f,3.144f,3.069f,3.009f,2.963f,2.930f,2.909f,2.900f,2.903f,2.918f,2.945f,2.984f,3.037f,3.104f,3.188f,3.290f,3.413f,3.560f,3.736f,3.948f,4.007f,3.786f,3.601f,3.447f,3.319f,3.212f,3.124f,3.052f,2.996f,2.953f,2.924f,2.906f,2.900f};
float *SCAN_RANGES = &landmark[0];
read_scan.loadFromVectors(SCANS_SIZE, SCAN_RANGES,SCAN_VALID);
gridmap.insertObservation(&read_scan,&transform);
CSimplePointsMap m3;
m3.insertObservation(&read_scan);
vector<float> map_xs, map_ys, map_zs; // output containers (missing from the original snippet)
m3.getAllPoints(map_xs,map_ys,map_zs);
Here is an image of the CSimplePointsMap plot (red points) vs the COccupancyGridMap2D.
The angles being cast from the occupancy grid look correct, with a consistent interval, but the angle is still off from the simple points map. The length looks OK, and it seems each ray could be rotated to match one of the red points. Possibly what is happening is a mapping issue: since we try to make the angles into discrete horizontal and vertical steps, this causes the misalignment. I've tried increasing the resolution, but this does not help; I guess that makes sense, since scaling a horizontal/vertical ratio would still result in the same ratio and mismatch. I might be missing something, though. What else could be causing this distortion? Is this expected and the best we can do? Thank you for any help.
It seems to me that the problem is in the assumption about the angle of each scan "ray".
Take a look at the class mrpt::obs::CSinCosLookUpTableFor2DScans, generate one such sin/cos LUT for your specific scan object, and double check if the sin/cos values coincide with yours, as used to generate the scan.
By the way, COccupancyGridMap2D has a method to simulate a 2D scan from a gridmap image; give it a try, and if that one generates warped results, please file a bug report (!) ;-)
Cheers.
I just realized what was going on: CSimplePointsMap and COccupancyGridMap2D use two slightly different conventions for point angles. CSimplePointsMap expects an overlap between the first and last point, while COccupancyGridMap2D does not. The simple fix, then, is to read in one less scan for COccupancyGridMap2D, and everything lines up. This applies if your angles are defined as follows, which is fine for CSimplePointsMap:
for (int i = 0; i < Raysize; i++)
{
    // Evenly spaced angles where the first and last rays coincide (overlap).
    float angle = -angle_range / 2 + i * (angle_range) / (Raysize-1);
    // ... build the i-th range measurement at this angle ...
}
Here is the fix for the COccupancyGridMap2D insertObservation, using SCANS_SIZE-1 instead; CSimplePointsMap can still use SCANS_SIZE.
read_scan.loadFromVectors(SCANS_SIZE-1, SCAN_RANGES,SCAN_VALID);
gridmap.insertObservation(&read_scan,&transform);
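As a quick sanity check of why dropping one ray lines things up (assuming the overlap interpretation above is right):
// With SCANS_SIZE = 100 and aperture = 2*pi, the ranges above were generated with
// an angular step of aperture / (SCANS_SIZE - 1) = 2*pi / 99, so the first and last
// rays point the same way. If the grid map spaces rays by aperture / N with no
// overlap, then passing N = SCANS_SIZE - 1 = 99 gives a step of 2*pi / 99 as well,
// and both maps agree on every ray's angle.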
Back story: I'm creating a Three.js-based 3D graphing library, similar to sigma.js but in 3D. It's called graphosaurus and the source can be found here. I'm using a single particle to represent each node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling it to a sphere of radius 5 around the point (0, 0, 0). Since the points are guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) involves not touching the coordinates of the input data, but instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values to my dataset, the problem went away. Since my dataset is user-dependent, I need to determine an algorithm to generate these values dynamically.
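One way to generate them, sketched below, is to fit the depth range to the data's bounding sphere as seen from the camera. This assumes Three.js, that points is an array of THREE.Vector3, and that camera is a THREE.PerspectiveCamera; the names are illustrative.
// Fit near/far to the bounding sphere of the data as seen from the camera.
var sphere = new THREE.Box3().setFromPoints(points).getBoundingSphere(new THREE.Sphere());
var dist = camera.position.distanceTo(sphere.center);
camera.near = Math.max(dist - sphere.radius, 0.1); // keep near strictly positive
camera.far = dist + sphere.radius;
camera.updateProjectionMatrix(); // required after changing near/far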
I noticed that in Solution 2 the flickering only occurs when the camera is moving. One possible reason could be that, when the camera position is changing rapidly, different transforms get applied to different particles. So if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means you should apply the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.
Can you change the perspective in POV-Ray, so that convergence between parallel lines does not look so steep?
E.g. change this angle (the convergence of the checkered floor into the distance) here
To an angle like this
I want it to seem like you're looking at something nearby, so with a smaller angle of convergence in parallel lines.
To illustrate it more: instead of a view like this
Use a view like this
Move the camera backwards and zoom in (by making the angle smaller):
camera {
    perspective
    location <0,0,-15>  // move this backwards
    sky y
    up y
    angle 30            // make this smaller
    right (image_width/image_height)*x
    look_at <0,0,0>
}
You can go to the extreme by using an orthographic "camera":
camera {
    orthographic
    location <0,0,-15>  // move backwards, no matter how far
    sky y
    up y * h            // where h = height you want to cover
    right x * w         // where w = width you want to cover
    look_at <0,0,0>
}
The other extreme is the fish-eye lens.
You need to reduce the field of view of your camera's view frustum. The larger the field of view, the more stuff you're trying to squeeze into the output of your camera's render, and so parallel lines will converge faster. So in your first example with a cube, the camera will be focused more on the cube and the areas immediately around it than on the whole environment.
The other option is to make your far plane much closer to your near plane, so you don't see many things that are far off. So in your first image example, you'll only see the first four or five grid squares instead.