Change perspective in POV-Ray? (less convergence)

Can you change the perspective in POV-Ray, so that convergence between parallel lines does not look so steep?
E.g. change this angle (the convergence of the checkered floor into the distance) here
To an angle like this
I want it to seem like you're looking at something nearby, so with a smaller angle of convergence in parallel lines.
To illustrate it more: instead of a view like this
Use a view like this

Move the camera backwards and zoom in (by making the angle smaller):
camera {
  perspective
  location <0,0,-15> // move this backwards
  sky y
  up y
  angle 30 // make this smaller
  right (image_width/image_height)*x
  look_at <0,0,0>
}
You can go to the extreme by using an orthographic "camera":
camera {
  orthographic
  location <0,0,-15> // Move backwards, no matter how far
  sky y
  up y * h    // where h = height you want to cover
  right x * w // where w = width you want to cover
  look_at <0,0,0>
}
The other extreme is the fish-eye lens.

You need to reduce the field of view of your camera's view frustum. The larger the field of view, the more you are trying to squeeze into the camera's rendered output, so the parallel lines converge faster. With a narrower field of view, as in your first example with a cube, the camera will be focused on the cube and the area immediately around it rather than on the whole environment.
The other option is to move your far plane much closer to your near plane, so you don't see objects that are far away. In your first image example, you would then only see the first four or five grid squares.
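To make the field-of-view point concrete outside POV-Ray, here is a minimal sketch (plain Rust, my own example, not tied to any particular engine) of a gluPerspective-style projection matrix. The fov_y_radians parameter plays the same role as POV-Ray's angle keyword: shrinking it increases the focal scale and flattens the convergence of parallel lines.
// Columns of an OpenGL-style (column-major) perspective projection matrix.
fn perspective(fov_y_radians: f32, aspect: f32, near: f32, far: f32) -> [[f32; 4]; 4] {
    let f = 1.0 / (fov_y_radians / 2.0).tan(); // focal scale: grows as the FOV shrinks
    [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), -1.0],
        [0.0, 0.0, (2.0 * far * near) / (near - far), 0.0],
    ]
}

fn main() {
    // A narrow 30-degree FOV (camera pulled back, "zoomed in") vs. a wide 90-degree FOV.
    let narrow = perspective(30f32.to_radians(), 16.0 / 9.0, 0.1, 100.0);
    let wide = perspective(90f32.to_radians(), 16.0 / 9.0, 0.1, 100.0);
    println!("focal scale narrow: {:.2}, wide: {:.2}", narrow[1][1], wide[1][1]);
}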

Related

How to create a first-person "space flight" camera

I'm currently attempting to create a first-person space flight camera.
First, allow me to define what I mean by that.
Notice that I am currently using Row-Major matrices in my math library (meaning, the basis vectors in my 4x4 matrices are laid out in rows, and the affine translation part is in the fourth row). Hopefully this helps clarify the order in which I multiply my matrices.
What I have so Far
So far, I have successfully implemented a simple first-person camera view. The code for this is as follows:
fn fps_camera(&mut self) -> beagle_math::Mat4 {
    let pitch_matrix = beagle_math::Mat4::rotate_x(self.pitch_in_radians);
    let yaw_matrix = beagle_math::Mat4::rotate_y(self.yaw_in_radians);
    let view_matrix = yaw_matrix.get_transposed().mul(&pitch_matrix.get_transposed());
    let translate_matrix = beagle_math::Mat4::translate(&self.position.mul(-1.0));
    translate_matrix.mul(&view_matrix)
}
This works as expected. I am able to walk around and look around with the mouse.
What I am Attempting to do
However, an obvious limitation of this implementation is that since pitch and yaw are always defined relative to a global "up" direction, the moment I pitch more than 90 degrees, so that the world is essentially upside-down, my yaw movement is inverted.
What I would like to attempt to implement is what could be seen more as a first-person "space flight" camera. That is, no matter what your current orientation is, pitching up and down with the mouse will always translate into up and down in the game, relative to your current orientation. And yawing left and right with your mouse will always translate into a left and right direction, relative to your current orientation.
Unfortunately, this problem has got me stuck for days now. Bear with me, as I am new to the field of linear algebra and matrix transformations. So I must be misunderstanding or overlooking something fundamental. What I've implemented so far might thus look... stupid and naive :) Probably because it is.
What I've Tried so far
The way I always end up coming back to thinking about this problem is to basically redefine the world's orientation every frame. That is, in a frame you translate, pitch, and yaw the world coordinate space using your view matrix, and then somehow redefine this orientation as the new default or zero rotation. In the next frame you can then apply new pitch and yaw rotations based on this new default orientation. By my thinking, mouse movement would then always translate directly to up, down, left, and right no matter how you are oriented, because the world coordinate space is always being redefined relative to your previous orientation, as opposed to the simple first-person camera, which always starts from the same initial coordinate space.
The latest code I have which attempts to implement my idea is as follows:
fn space_camera(&mut self) -> beagle_math::Mat4 {
    let previous_pitch_matrix = beagle_math::Mat4::rotate_x(self.previous_pitch);
    let previous_yaw_matrix = beagle_math::Mat4::rotate_y(self.previous_yaw);
    let previous_view_matrix = previous_yaw_matrix.get_transposed().mul(&previous_pitch_matrix.get_transposed());

    let pitch_matrix = beagle_math::Mat4::rotate_x(self.pitch_in_radians);
    let yaw_matrix = beagle_math::Mat4::rotate_y(self.yaw_in_radians);
    let view_matrix = yaw_matrix.get_transposed().mul(&pitch_matrix.get_transposed());

    let translate_matrix = beagle_math::Mat4::translate(&self.position.mul(-1.0));

    // SAVES
    self.previous_pitch += self.pitch_in_radians;
    self.previous_yaw += self.yaw_in_radians;

    // RESETS
    self.pitch_in_radians = 0.0;
    self.yaw_in_radians = 0.0;

    translate_matrix.mul(&(previous_view_matrix.mul(&view_matrix)))
}
This, however, does nothing to solve the issue. It actually gives the exact same result and problem as the fps camera.
My thinking behind this code is basically: always keep track of an accumulated pitch and yaw (in the code, previous_pitch and previous_yaw) based on the deltas each frame. The deltas are pitch_in_radians and yaw_in_radians, and they are reset every frame.
I then start off by constructing a view matrix that would represent how the world was orientated previously, that is the previous_view_matrix. I then construct a new view matrix based on the deltas of this frame, that is the view_matrix.
I then attempt to build a view matrix that does this:
1. Translate the world in the opposite direction of the camera's current position. Nothing is different here from the FPS camera.
2. Orient that world according to what my orientation has been so far, using previous_view_matrix. I want this to represent the default starting point for the deltas of my current frame's movement.
3. Apply the deltas of the current frame using the current view matrix, represented by view_matrix.
My hope was that in step 3 the previous orientation would be treated as the starting point for the new rotation: if the world was upside-down in the previous orientation, view_matrix would apply its yaw relative to the camera's "up", which would avoid the problem of inverted controls.
I must surely be either attacking the problem from the wrong angle, or misunderstanding essential parts of matrix multiplication with rotations.
Can anyone help pin-point where I'm going wrong?
[EDIT] - Rolling even when you only pitch and yaw the camera
For anyone just stumbling upon this, I fixed it by a combination of the marked answer and Locke's answer (ultimately, in the example given in my question, I also messed up the matrix multiplication order).
Additionally, when you get your camera right, you may stumble upon the odd side-effect that holding the camera stationary, and just pitching and yawing it about (such as moving your mouse around in a circle), will result in your world slowly rolling as well.
This is not a mistake, this is how rotations work in 3D. Kevin added a comment in his answer that explains it, and additionally, I also found this GameDev Stack Exchange answer explaining it in further detail.
The problem is that two numbers, pitch and yaw, provide insufficient degrees of freedom to represent consistent free rotation behavior in space without any “horizon”. Two numbers can represent a look-direction vector but they cannot represent the third component of camera orientation, called roll (rotation about the “depth” axis of the screen). As a consequence, no matter how you implement the controls, you will find that in some orientations the camera rolls strangely, because the effect of trying to do the math with this information is that every frame the roll is picked/reconstructed based on the pitch and yaw.
The minimal solution to this is to add a roll component to your camera state. However, this approach (“Euler angles”) is both tricky to compute with and has numerical stability issues (“gimbal lock”).
Instead, you should represent your camera/player orientation as a quaternion, a mathematical structure that is good for representing arbitrary rotations. Quaternions are used somewhat like rotation matrices, but have fewer components; you'll multiply quaternions by quaternions to apply player input, and convert quaternions to matrices to render with.
It is very common for general purpose game engines to use quaternions for describing objects' rotations. I haven't personally written quaternion camera code (yet!) but I'm sure the internet contains many examples and longer explanations you can work from.
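As a rough illustration of that workflow, here is a minimal, self-contained sketch in plain Rust (my own example; it does not assume beagle_math exposes a quaternion type, and apply_mouse is a hypothetical helper):
#[derive(Clone, Copy)]
struct Quat { w: f32, x: f32, y: f32, z: f32 }

impl Quat {
    fn identity() -> Self { Quat { w: 1.0, x: 0.0, y: 0.0, z: 0.0 } }

    // Rotation of `angle` radians about a unit-length axis.
    fn from_axis_angle(axis: [f32; 3], angle: f32) -> Self {
        let (s, c) = (angle * 0.5).sin_cos();
        Quat { w: c, x: axis[0] * s, y: axis[1] * s, z: axis[2] * s }
    }

    // Hamilton product: the combined rotation applies `r` first, then `self`.
    fn mul(self, r: Quat) -> Quat {
        Quat {
            w: self.w * r.w - self.x * r.x - self.y * r.y - self.z * r.z,
            x: self.w * r.x + self.x * r.w + self.y * r.z - self.z * r.y,
            y: self.w * r.y - self.x * r.z + self.y * r.w + self.z * r.x,
            z: self.w * r.z + self.x * r.y - self.y * r.x + self.z * r.w,
        }
    }

    // Renormalize occasionally to fight floating-point drift.
    fn normalized(self) -> Quat {
        let n = (self.w * self.w + self.x * self.x + self.y * self.y + self.z * self.z).sqrt();
        Quat { w: self.w / n, x: self.x / n, y: self.y / n, z: self.z / n }
    }

    // 3x3 rotation matrix written out for row-vector math, to match the
    // question's row-major convention (it is the transpose of the usual
    // column-vector form).
    fn to_matrix3(self) -> [[f32; 3]; 3] {
        let (w, x, y, z) = (self.w, self.x, self.y, self.z);
        [
            [1.0 - 2.0 * (y * y + z * z), 2.0 * (x * y + w * z), 2.0 * (x * z - w * y)],
            [2.0 * (x * y - w * z), 1.0 - 2.0 * (x * x + z * z), 2.0 * (y * z + w * x)],
            [2.0 * (x * z + w * y), 2.0 * (y * z - w * x), 1.0 - 2.0 * (x * x + y * y)],
        ]
    }
}

// Each frame: compose this frame's pitch/yaw deltas on the local side of the
// accumulated orientation, so up/down/left/right are always relative to the
// camera's current frame rather than the world's.
fn apply_mouse(orientation: Quat, delta_pitch: f32, delta_yaw: f32) -> Quat {
    let pitch = Quat::from_axis_angle([1.0, 0.0, 0.0], delta_pitch);
    let yaw = Quat::from_axis_angle([0.0, 1.0, 0.0], delta_yaw);
    orientation.mul(yaw).mul(pitch).normalized()
}
To render, you would convert the accumulated orientation to a matrix each frame (using its conjugate for the view matrix, much as the existing fps_camera transposes its rotation matrices) and combine it with the translation as before; Quat::identity() is the starting orientation.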
It looks like a lot of the difficulty you are having is due to trying to normalize the transformation to apply the new translation. It seems like this is probably a large part of what is tripping you up. I would suggest changing how you store your position and rotation. Instead, try letting your view matrix define your position.
/// Apply rotation based on the change in mouse position
pub fn on_mouse_move(&mut self, dx: f32, dy: f32) {
    // I think this is correct, but it might need tweaking
    let rotation_matrix = Mat4::rotate_xy(-dy, dx);
    self.apply_movement(&rotation_matrix, &Vec3::zero())
}

/// Append axis-aligned movement relative to the camera and rotation
pub fn apply_movement(&mut self, rotation: &Mat4<f32>, translation: &Vec3<f32>) {
    // Create transformation matrix for translation
    let translation = Mat4::translate(translation);

    // Append translation and rotation to existing view matrix
    self.view_matrix = self.view_matrix * translation * rotation;
}

/// You can get the position from the last column [x, y, z, w] of your view matrix.
pub fn translation(&self) -> Vec3<f32> {
    self.view_matrix.column(3).into()
}
I made a couple assumptions about the library:
Mat4 implements Mul<Self> so you do not need to call x.mul(y) explicitly and can instead use x * y. Same goes for Sub.
There exists a Mat4::rotate_xy function. If there isn't one, it would be equivalent to Mat4::rotate_xyz(delta_pitch, delta_yaw, 0.0) or Mat4::rotate_x(delta_pitch) * Mat4::rotate_y(delta_yaw).
I'm somewhat eyeballing the equations so hopefully this is correct. The main idea is to take the delta from the previous inputs and create matrices from that which can then be added on top of the previous view_matrix. If you attempt to take the difference after creating transformation matrices it will only be more work for you (and your processor).
As a side note I see you are using self.position.mul(-1.0). This tells me that your projection matrix is probably backwards. You likely want to adjust your projection matrix by scaling it by a factor of -1 in the z axis.

How to change x and y velocities in response to a wall bounce

I 'program' simple hyper-casual mobile games in my free time using a pseudo-programming-language tool called Construct 3, as I am still learning actual languages and can't yet use them well enough to make games.
Essentially I am writing my own super simple bouncing ball physics engine. I have up to 3 balls in this little pinball game of mine at any time. I have given each ball an x velocity and y velocity instance variable.
Here is my question: how do the x and y velocities change when the ball bounces off a surface at any angle? I know that if the floor is flat and the ball hits it, x stays the same and y flips its polarity. I know the opposite happens when it hits a wall. But I have no idea how to calculate any other angle besides the four main axes. I'm sure it is a simple trig function. Oh, and dumb your answer down to the simplest pseudocode response you can make.
For any collision of an object against a flat surface at an angle alpha, your object will bounce back at an angle -alpha. You also have what's called conservation of momentum, which means that if your surface doesn't move and doesn't absorb anything, the speed of your object will not change either.
That being said, "all you need to do" is to parameterize both the angle of your surface to the horizontal and the angle at which your object comes in to the surface, so you can easily obtain the angle alpha. This way you will be able to get a -alpha angle between your object and the surface after the collision in the frame of your surface, and you will then need to go back to the "horizontal frame" by simply adding the angle of your surface.
As far as your implementation should go, this is what I suggest:
Start with a function horizontalToAngularFrame that takes one or more parameters depending on whether you're in 2D or 3D, so you can define the angle
Code another function AngularFrameToHorizontal with the same number of parameters
When an object enters a collision, just treat it as you would treat an object in the horizontal frame, and use the two previously coded functions to bring the angles back to your horizontal frame
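The same reasoning collapses into a single vector formula: reflect the velocity about the surface normal. A minimal sketch (plain Rust rather than Construct 3 events; bounce and alpha are my own names):
// For a surface tilted by `alpha` radians from the horizontal, a unit normal
// is (-sin(alpha), cos(alpha)). Reflecting v about n keeps the speed and
// flips only the component along the normal: v' = v - 2 (v . n) n.
fn bounce(vx: f32, vy: f32, alpha: f32) -> (f32, f32) {
    let (nx, ny) = (-alpha.sin(), alpha.cos());
    let dot = vx * nx + vy * ny;
    (vx - 2.0 * dot * nx, vy - 2.0 * dot * ny)
}

fn main() {
    // A flat floor (alpha = 0): x stays the same, y flips sign, as in the question.
    assert_eq!(bounce(3.0, -4.0, 0.0), (3.0, 4.0));
}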

Three.js ParticleSystem flickering with large data

Back story: I'm creating a Three.js based 3D graphing library. Similar to sigma.js, but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js and using a single particle representing a single node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling it to a sphere of radius 5 around the point (0,0,0). Since the points are then guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) involves not touching the coordinates of the inputted data, but instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values for my dataset, the problem went away. Since my dataset is user-dependent, I need to determine an algorithm to generate these values dynamically.
I noticed that in Solution 2 the flickering only occurs when the camera is moving. One possible reason is that, when the camera position is changing rapidly, different transforms get applied to different particles. So if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the cause. That means applying the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.
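One way to pick those near/far values dynamically (my own sketch, written here in Rust pseudocode rather than JavaScript; none of it is graphosaurus code) is to clamp the planes to the data's bounding sphere as seen from the current camera position, so depth precision is only spent where the particles actually are:
fn near_far(camera_pos: [f32; 3], center: [f32; 3], radius: f32) -> (f32, f32) {
    // Distance from the camera to the centre of the bounding sphere.
    let d = ((camera_pos[0] - center[0]).powi(2)
        + (camera_pos[1] - center[1]).powi(2)
        + (camera_pos[2] - center[2]).powi(2))
        .sqrt();
    let near = (d - radius).max(0.1); // never let the near plane reach zero
    let far = d + radius;
    (near, far)
}
In Three.js you would then assign the results to camera.near and camera.far and call camera.updateProjectionMatrix() whenever the camera moves.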

At which pipeline stage should I do culling and clipping, and how should I reconstruct triangles after clipping?

I'm trying to implement a graphics pipeline in software. I have some problems with clipping and culling now.
Basically, there are two main concerns:
When should back-face culling take place: in eye coordinates, clip coordinates, or window coordinates? I initially did the culling in eye coordinates, thinking this could relieve the burden on the clipping stage since many back-facing triangles would already have been discarded. But later I realized that this way the vertices need two matrix multiplications, namely: left-multiply by the model-view matrix --> cull --> left-multiply by the perspective matrix, which increases the overhead to some extent.
How do I do clipping and reconstruct the triangles? As far as I know, clipping happens in clip coordinates (after the perspective transformation), in other words in homogeneous coordinates, where each vertex is tested for rejection by comparing its x, y, z components with its w component. So far so good, right? But after that I need to reconstruct those triangles which have had one or two vertices discarded. I googled that the Liang-Barsky algorithm would be helpful in this case, but in clip coordinates which clipping planes should I use? Should I just record the clipped triangles and reconstruct them in NDC?
Any idea will be helpful. Thanks.
(1)
Back-face culling can occur wherever you want.
On the 3dfx hardware, and probably the other cards of that era that only rasterised, it was implemented in window coordinates. As you say, that leaves you processing some vertices you never use, but you need to weigh that against your other costs.
You can also cull in world coordinates; you know the location of the camera so you know a vector from the camera to the face — just go to any of the edge vertices. So you can test the dot product of that against the normal.
When I was implementing a software rasteriser for a Z80-based micro, I went a step beyond that and transformed the camera into model space. So you get the inverse of the model matrix (which was cheap in this case because the matrices were guaranteed to be orthonormal, so the transpose would do), apply that to the camera position, and then cull from there. It's still a vector difference and a dot product, but if you're using the surface normals only for culling then it saves having to transform each and every one of them for the benefit of the camera. For that particular renderer I was then able to work forward from which faces are visible to determine which vertices are visible, and transform only those to window coordinates.
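As a concrete sketch of the world-space (or model-space) test described above, assuming outward-facing normals (the function name and array types are my own, not from any particular library):
fn is_back_facing(camera: [f32; 3], vertex: [f32; 3], normal: [f32; 3]) -> bool {
    // Vector from the camera to any vertex of the face.
    let to_face = [
        vertex[0] - camera[0],
        vertex[1] - camera[1],
        vertex[2] - camera[2],
    ];
    // Positive dot product: the normal points away from the camera, so cull.
    to_face[0] * normal[0] + to_face[1] * normal[1] + to_face[2] * normal[2] > 0.0
}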
(2)
A variant on Sutherland-Hodgman is the thing I remember seeing most often. You'd do a forward scan around the outside of the polygon, checking each edge in turn and adjusting appropriately.
So e.g. you start with the convex polygon between points (V1, V2, V3). For each clipping plane in turn you'd do something like:
for(Vn in input vertices)
{
    if(Vn is on the good side of the plane)
        add Vn to output vertices
    if(edge from Vn to Vn+1 intersects plane) // or from Vn to 0 if this is the last edge
    {
        find point of intersection, I
        add I to output vertices
    }
}
And repeat for each plane. If you're worried about repeated costs then you either need to adopt a structure with an extra level of indirection between faces and edges or just keep a cache. You'd probably do something like dash round the vertices once marking them as in or out, then cache the point of intersection per edge, looked up via the key (v1, v2). If you've set yourself up with the extra level of indirection then store the result in the edge object.
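Here is a runnable sketch of that per-plane pass in plain Rust (my own types and names; it assumes the clip-space planes are expressed as coefficient vectors, e.g. the left plane x + w >= 0 is (1, 0, 0, 1)):
type Vertex = [f32; 4]; // homogeneous clip-space position

fn signed_dist(v: &Vertex, plane: &[f32; 4]) -> f32 {
    plane[0] * v[0] + plane[1] * v[1] + plane[2] * v[2] + plane[3] * v[3]
}

fn lerp(a: &Vertex, b: &Vertex, t: f32) -> Vertex {
    [
        a[0] + (b[0] - a[0]) * t,
        a[1] + (b[1] - a[1]) * t,
        a[2] + (b[2] - a[2]) * t,
        a[3] + (b[3] - a[3]) * t,
    ]
}

fn clip_against_plane(input: &[Vertex], plane: &[f32; 4]) -> Vec<Vertex> {
    let mut output = Vec::new();
    for i in 0..input.len() {
        let current = &input[i];
        let next = &input[(i + 1) % input.len()]; // wrap to the first vertex on the last edge
        let dc = signed_dist(current, plane);
        let dn = signed_dist(next, plane);
        if dc >= 0.0 {
            output.push(*current); // current vertex is on the good side
        }
        if (dc >= 0.0) != (dn >= 0.0) {
            // The edge crosses the plane: add the intersection point.
            let t = dc / (dc - dn);
            output.push(lerp(current, next, t));
        }
    }
    output
}
A triangle that loses one or two vertices comes back as a polygon with up to four vertices, so after running all the planes you re-triangulate the result (a simple fan from the first output vertex works, since the output stays convex) before rasterising. Any per-vertex attributes get interpolated with the same t as the position.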

Defining Up in the Direct3D View Matrix when Camera Is Constantly Moving

In my Direct3D application, the camera can be moved using the mouse or arrow keys. But if I hard code (0,1,0) as the up direction vector in LookAtLH, the frame goes blank at some orientations of the camera.
I just learned the hard way that when looking along the Y-axis, (0,1,0) no longer works as the Up direction (seems obvious?). I am thinking of switching my up direction to something else for each of these special cases. Is there a more graceful way to handle this?
Assuming you can calculate a vector pointing forward (what you are looking at minus your position) and a vector pointing right (always on the XZ plane unless you can roll): normalize both of these vectors, then up is forward x right (where x is the cross product).
In general, you can plug your yaw, pitch and roll into a rotation matrix and rotate the axis vectors to get right, up and forward, but I guess that's what you are using LookAtLH to avoid.
See http://en.wikipedia.org/wiki/Rotation_matrix#The_3-dimensional_rotation_matricies
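A minimal sketch of that recipe (plain Rust pseudocode rather than D3DX; it assumes a +Y-up setup with yaw measured about the Y axis and +Z forward at zero yaw, so adjust signs to your own conventions):
fn cross(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
    [
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ]
}

fn normalize(v: [f32; 3]) -> [f32; 3] {
    let len = (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt();
    [v[0] / len, v[1] / len, v[2] / len]
}

fn camera_up(yaw: f32, pitch: f32) -> [f32; 3] {
    // Forward from yaw/pitch; +Z forward when yaw = pitch = 0.
    let forward = [pitch.cos() * yaw.sin(), pitch.sin(), pitch.cos() * yaw.cos()];
    // Right depends only on yaw, so it stays valid even when pitched to +/-90 degrees.
    let right = [yaw.cos(), 0.0, -yaw.sin()];
    // up = forward x right, as described above.
    normalize(cross(forward, right))
}
Feeding this up vector to LookAtLH avoids the degenerate hard-coded (0,1,0); it still does not track roll, which is where the quaternion approach below comes in.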
The graceful way to handle this is to use unit quaternions. A quaternion is a vector of 4 values that encodes an orientation in 3D space (not a rotation, as some articles assert), and a unit quaternion is one whose length sqrt(x^2+y^2+z^2+w^2) is 1.0. There is a set of mathematical operations for working with quaternions that is analogous to using matrices to encode rotations, with the added bonus that a quaternion can never represent a degenerate orientation. You can freely convert a quaternion to a 3x3 or 4x4 matrix when you need to feed the result to the GPU.
Your problem is that, while you are moving your camera, you introduce a little twist into the camera's up direction. By forcing the camera to re-center itself on the (0,1,0) vector every iteration, you are in effect rotating the camera and then clamping its orientation to remain on the surface of a sphere; when your camera hits a pole of this sphere there is no good direction to call "up", so your matrix goes singular and gives you zero-sized polygons (hence the black screen). Quaternions can interpolate through these poles and come out the other side just fine, leaving you with a valid matrix at all times. All you have to do is control the "twist".
To measure this twist you should read Ken Shoemake's article "Fiber Bundle Twist Reduction" in the book Graphics Gems 4. He shows a good way to measure this accumulated twist and how to remove it when it is offensive.
