How to change x and y velocities in response to a wall bounce - trigonometry

I 'program' simple hyper-casual mobile games in my free time using a pseudo-programming-language tool called Construct 3, as I am still learning actual languages and can't yet use them well enough to make games.
Essentially I am writing my own super simple bouncing ball physics engine. I have up to 3 balls in this little pinball game of mine at any time. I have given each ball an x velocity and y velocity instance variable.
Here is my question: how do the x and y velocities change when the ball bounces off a surface at any angle? I know that if the floor is flat and the ball hits it, x stays the same and y flips its polarity. I know the opposite happens when hitting a wall. But I have no idea how to calculate the bounce for any angle besides the 4 main axes. I'm sure it is a simple trig function. Oh, and dumb your answer down to the simplest pseudo-code response you can make.

For any collision of an object against a flat surface, if the object comes in at an angle alpha to the surface, it will bounce back at an angle -alpha. You also have what's called conservation of momentum, which means that if your surface doesn't move and doesn't absorb anything, the speed (the magnitude of the velocity) of your object will not change either.
That being said, "all you need to do" is parameterize both the angle of your surface to the horizontal and the incoming angle of your object relative to that surface, so you can easily register an angle alpha. This way you can get a -alpha angle between your object and the surface after the collision in the frame of the surface, and then go back to the "horizontal frame" by simply adding the angle of your surface.
As far as your implementation should go, this is what I suggest:
Start with a function horizontalToAngularFrame that takes one or more parameters depending on whether you're in 2D or 3D, so you can define the angle
Code another function AngularFrameToHorizontal with the same number of parameters
When an object collides, treat it as you would treat an object in the horizontal frame, and use the 2 previously coded functions to bring the angles back to your horizontal frame
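Since you asked for pseudo-code: the rotate/flip/rotate-back procedure above collapses into one standard 2D reflection formula. Here is a minimal sketch, written as TypeScript-flavoured pseudo-code; the function and variable names are mine for illustration, not Construct 3 terms:

// Reflect a velocity (vx, vy) off a flat surface tilted by
// surfaceAngle radians from the horizontal:
// 0 = flat floor, Math.PI / 2 = vertical wall.
function bounceOffSurface(vx: number, vy: number, surfaceAngle: number) {
  const cos2 = Math.cos(2 * surfaceAngle);
  const sin2 = Math.sin(2 * surfaceAngle);
  // Standard 2D reflection across a line at angle surfaceAngle;
  // the speed sqrt(vx^2 + vy^2) is unchanged, as promised above.
  return {
    vx: vx * cos2 + vy * sin2,
    vy: vx * sin2 - vy * cos2,
  };
}

// Sanity checks against the cases you already know:
// flat floor (surfaceAngle = 0):       (vx, vy) -> (vx, -vy)
// vertical wall (surfaceAngle = PI/2): (vx, vy) -> (-vx, vy)

Plugging in the two axis-aligned cases you already handle confirms the formula reduces to flipping one component's polarity.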

Related

Triangulate camera position and orientation relative to known objects

I made an object tracker that calculates the position of an object recorded in a live camera feed using stereoscopic cameras. The math is simple once you know the camera distance and orientation. However, now I thought it would be nice to be able to quickly extract all these parameters, so that when I change my setup or cameras I can quickly calibrate again.
To calculate the object position I made some simplifications/assumptions, which made the math easier: the cameras are separated only by a distance in x (they share the same y and z), and their tilt is only in the XY plane.
To reverse the triangulation, I thought a test pattern (a square) of 4 points whose distances to each other I know would suffice. Ideally I would like to get the cameras' positions (distances to the test pattern and to each other), their rotation about X (and maybe Y and Z if applicable/possible), as well as their view angle (to translate pixel positions to real-world distances). That should be a camera constant, but in case I change cameras it is quite a bit of work to determine accurately.
I started with the same trigonometric calculations, but I always end up missing parameters. I am wondering if there is an existing solution or a solid approach. If I need to add parameters (like distances; they are easy enough to measure), that's no problem, though my calculations didn't give me any simple equations with that possibility.
I also read about homography in OpenCV, but it seems it applies to 2D space only. Or does it?
Any help is appreciated!

Arcade physics rotation and angularVelocity

So ... what exactly are the parameters of body.rotation and body.angularVelocity in Phaser arcade physics?
The documentation for body.rotation just says "the amount the Body is rotated", without specifying the units (radians or degrees), the zero direction (the X axis?), or which direction is positive.
The docs for body.angle say "angle in radians" ... but again don't say which axis is the 0-rotation direction, nor which direction is positive.
The documentation for angularVelocity says "angular velocity in pixels per second squared" which doesn't make ANY SENSE AT ALL. You can't measure rotation in pixels.
I'm trying to sync up a phaser front-end with a server-based physics model that has its own coordinate system, so some clarity on the documentation would really make my life easier!
As far as I know, body.rotation is given in radians; if you want to work in degrees you should use body.angle.
As for the rotation direction, a higher value rotates the sprite clockwise. If the angle is 0 and the sprite is pointing up, it will point to the right after setting body.angle = 90.
angularVelocity does not rotate your sprite to a fixed angle. The name says "angularVELOCITY": it sets a rate of rotation, so the body spins continuously over time (the docs' "pixels" wording is a documentation bug; the quantity is angular). If what you actually want is for the sprite to move in the direction it's facing, that's a separate helper (velocityFromAngle / velocityFromRotation in arcade physics), as sketched below.
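Here is a minimal sketch of the difference, assuming Phaser 2 / Phaser CE arcade physics (the 'ship' texture key is a placeholder and the exact API is version-dependent, so treat this as illustrative):

// Minimal Phaser 2 sketch (assumes the Phaser 2 / CE globals are loaded).
const game = new Phaser.Game(800, 600, Phaser.AUTO, '', { create, update });
let sprite: Phaser.Sprite;

function create() {
  game.physics.startSystem(Phaser.Physics.ARCADE);
  sprite = game.add.sprite(400, 300, 'ship'); // 'ship' is a placeholder key
  game.physics.arcade.enable(sprite);

  sprite.angle = 90;                // instant rotation: degrees, clockwise-positive
  sprite.body.angularVelocity = 45; // continuous spin: a rate, not a fixed angle
}

function update() {
  // To move in the direction the sprite is currently facing, use the
  // degrees-based helper rather than angularVelocity:
  game.physics.arcade.velocityFromAngle(sprite.angle, 200, sprite.body.velocity);
}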

Change perspective in POV-Ray? (less convergence)

Can you change the perspective in POV-Ray, so that convergence between parallel lines does not look so steep?
E.g. change this angle (the convergence of the checkered floor into the distance) here
To an angle like this
I want it to seem like you're looking at something nearby, so with a smaller angle of convergence in parallel lines.
To illustrate it more: instead of a view like this
Use a view like this
Move the camera backwards and zoom in (by making the angle smaller):
camera {
  perspective
  location <0,0,-15> // move this backwards
  sky y
  up y
  angle 30 // make this smaller
  right (image_width/image_height)*x
  look_at <0,0,0>
}
You can go to the extreme by using an orthographic "camera":
camera {
  orthographic
  location <0,0,-15> // Move backwards, no matter how far
  sky y
  up y * h // where h = height you want to cover
  right x * w // where w = width you want to cover
  look_at <0,0,0>
}
The other extreme is the fish-eye lens.
You need to reduce the field of view of your camera's view frustum. The larger the field of view, the more stuff you're trying to squeeze into your camera's rendered output, and so the parallel lines will converge faster. So in your first example with a cube, the camera will be more focused on the cube and the areas immediately around it than on the whole environment.
The other option is to make your far plane much closer to your near plane, so you don't see things that are far off. In your first image example, you'd then only see the first four or five grid squares.
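The first answer's "move backwards and make the angle smaller" trick can be quantified: in a perspective projection the apparent size of a subject is proportional to 1 / (d * tan(fov/2)), so to keep the framing while narrowing the field of view you scale the distance accordingly. A small sketch (the function name and the numbers are mine, just for illustration):

// Distance d2 that keeps a subject the same apparent size after
// changing the field of view from fov1 to fov2 (both in radians),
// starting from distance d1.
function dollyZoomDistance(d1: number, fov1: number, fov2: number): number {
  return d1 * Math.tan(fov1 / 2) / Math.tan(fov2 / 2);
}

// Example: narrowing a 60-degree lens to 30 degrees roughly doubles
// the camera distance:
console.log(dollyZoomDistance(5, Math.PI / 3, Math.PI / 6)); // ~10.77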

Arkanoid Collision Detection... Again

I have read dozens of questions here on SO (and not only) regarding Arkanoid collision detection, namely, a moving circle colliding against a stationary rectangle, but all of them ask how to detect a collision or how to detect which side of the rectangle the ball hits. My question is a bit different - it concerns the calculation of the new velocity direction in the case when the ball hits a corner of the rectangle.
For simplicity's sake let's assume that Vx >= 0 and Vy <= 0, that is, the ball is coming from below and from the left, moving up and to the right, and also suppose I know it's going to hit the lower side of the rectangle. The green arrow shows the approximate direction of the ball and the blue dot shows the first point, on the line containing the lower side of the rectangle, that the ball hits. If that point lies strictly within the lower side of the rectangle then all is trivial - just change Vy to -Vy. However, when that point lies outside the lower side, it means that the first point of the rectangle the ball will touch is its lower-left corner, in which case I don't think that changing Vy to -Vy is correct. I think that the new velocity angle must depend on the distance from the blue point to the corner. Also, I think that not only Vy but also Vx must change (preserving, possibly, the length of the V vector).
So, how do we calculate the new Vx and Vy when we hit a corner? If you know any good links that address this question I'd be delighted to see them. Also note that I am more interested in the exact physical model of this rather than easy-to-code optimized approximations. You can assume there is no rotation involved. Thank you very much in advance.
You know how to bounce off a horizontal wall. Do you know how to bounce off a wall that is at some other angle?
When the circle hits the wall, it makes contact at a single point. That single point is all the circle "knows" about the wall; the location of that point gives you enough information to calculate the new V. When the circle hits the corner, it also makes contact at a single point (i.e. the corner), so it bounces just as if it had hit a wall at that point - a wall perpendicular to the line joining the corner and the circle's center.
Is that enough to go on, or would you like some math? (And if so, how comfortable are you with vector algebra?)
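Since you asked for the math: at the corner, the effective surface normal is the unit vector from the corner to the ball's center at the moment of contact, and you reflect the velocity about it with v' = v - 2(v.n)n, which preserves |V| as you hoped. A minimal sketch (all names are mine):

// Reflect the ball's velocity off a rectangle corner. The effective
// normal n is the unit vector from the corner to the ball's center;
// speed is preserved by the reflection.
function bounceOffCorner(
  vx: number, vy: number,           // incoming velocity
  ballX: number, ballY: number,     // ball center at contact
  cornerX: number, cornerY: number  // the corner being hit
) {
  const len = Math.hypot(ballX - cornerX, ballY - cornerY);
  const nx = (ballX - cornerX) / len;
  const ny = (ballY - cornerY) / len;
  const dot = vx * nx + vy * ny;
  // v' = v - 2 (v . n) n  -- reflection about the contact normal
  return { vx: vx - 2 * dot * nx, vy: vy - 2 * dot * ny };
}

Only apply the reflection when v.n < 0, i.e. when the ball is actually moving toward the corner; otherwise it is already separating.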
A simplification that is often made when modelling rock blocks in rock mechanics is to assume that the corner of the rectangle has a small radius. The calculation then becomes one of two curved surfaces in contact. This approach tends to give more consistent behaviour than modelling corners as right angles and so is preferred where consistent results are needed.

Defining Up in the Direct3D View Matrix when Camera Is Constantly Moving

In my Direct3D application, the camera can be moved using the mouse or arrow keys. But if I hard code (0,1,0) as the up direction vector in LookAtLH, the frame goes blank at some orientations of the camera.
I just learned the hard way that when looking along the Y-axis, (0,1,0) no longer works as the Up direction (seems obvious?). I am thinking of switching my up direction to something else for each of these special cases. Is there a more graceful way to handle this?
Assume you can calculate a vector pointing forward (what you are looking at minus your position) and a vector pointing right (always in the XZ plane unless you can roll). Normalize both of these vectors; then up is forward x right (where x is the cross product).
In general, you can plug in your yaw, pitch and roll into a rotation matrix and rotate the axis vectors to get right, up and forward, but I guess that's what you are using LookAtLH to avoid.
See http://en.wikipedia.org/wiki/Rotation_matrix#The_3-dimensional_rotation_matricies
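A minimal sketch of the forward x right recipe (the Vec3 type and helper names are mine, not D3DX calls):

type Vec3 = { x: number; y: number; z: number };

function cross(a: Vec3, b: Vec3): Vec3 {
  return {
    x: a.y * b.z - a.z * b.y,
    y: a.z * b.x - a.x * b.z,
    z: a.x * b.y - a.y * b.x,
  };
}

function normalize(v: Vec3): Vec3 {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// forward = lookAt - position; right stays in the XZ plane (no roll).
function computeUp(forward: Vec3, right: Vec3): Vec3 {
  return normalize(cross(normalize(forward), normalize(right)));
}

// Sanity check: forward = +Z, right = +X gives up = +Y.
console.log(computeUp({ x: 0, y: 0, z: 1 }, { x: 1, y: 0, z: 0 }));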
The graceful way to handle this is to use unit quaternions. A quaternion is a vector of 4 values that encodes an orientation in 3D space (not a rotation, as some articles assert), and a unit quaternion is one where the vector length sqrt(x^2+y^2+z^2+w^2) is 1.0. There is a set of mathematical operations for working with quaternions that is analogous to using matrices to encode rotations, with the added bonus that quaternions can never represent a degenerate orientation. You can freely convert quaternions to a 3x3 or 4x4 matrix when you need to feed the result to a GPU.
Your problem is that, while you are moving your camera, you introduce a little twist into the camera's up direction. By forcing the camera to re-center itself on the (0,1,0) vector every iteration, you are in effect rotating the camera and then clamping the camera's orientation to remain on the surface of a sphere. But when your camera hits the pole of this sphere, there is no good direction to call "up"; your matrix goes singular and gives you zero-sized polygons (hence the black screen). Quaternions can interpolate through these poles and come out the other side just fine, leaving you with a valid matrix at all times. All you have to do is control the "twist".
To measure this twist you should read Ken Shoemake's article "Fiber Bundle Twist Reduction" in the book Graphics Gems 4. He shows a good way to measure this accumulated twist and how to remove it when it is offensive.
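To make the quaternion suggestion concrete, here is a minimal sketch of rotating a vector by a unit quaternion; the math is standard, but the type and function names are mine (D3DX has its own quaternion helpers):

// Unit quaternion (x, y, z, w) encoding an orientation.
type Quat = { x: number; y: number; z: number; w: number };

// Rotate vector v by unit quaternion q using
// v' = v + 2w (q_xyz x v) + 2 (q_xyz x (q_xyz x v)).
function rotate(q: Quat, v: [number, number, number]): [number, number, number] {
  const [vx, vy, vz] = v;
  // t = 2 * (q_xyz x v)
  const tx = 2 * (q.y * vz - q.z * vy);
  const ty = 2 * (q.z * vx - q.x * vz);
  const tz = 2 * (q.x * vy - q.y * vx);
  // v' = v + w * t + (q_xyz x t)
  return [
    vx + q.w * tx + (q.y * tz - q.z * ty),
    vy + q.w * ty + (q.z * tx - q.x * tz),
    vz + q.w * tz + (q.x * ty - q.y * tx),
  ];
}

// Example: a 90-degree rotation about the Y axis maps +Z to +X.
const half = Math.PI / 4;
const q: Quat = { x: 0, y: Math.sin(half), z: 0, w: Math.cos(half) };
console.log(rotate(q, [0, 0, 1])); // ~[1, 0, 0]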
