Simple bouncing ball in Haskell keeps bouncing higher

So, I did a simple simulation of a bouncing ball to learn Haskell. The ball is supposed to bounce against the image borders and to be accelerated downward by gravity.
The problem is that the ball is "magically" bouncing higher as time passes. I would expect it to keep the same maximum height instead.
I suspect there is something wrong with bounce rather than with move, because bounce "teleports" the ball back inside the frame, which is not physically accurate. I tried different ways to simulate the correct behavior but could not find anything that worked.
What would be a correct way to do this?
Here is the code, running on CodeWorld:
main = simulationOf initial step draw
data World = Ball Point Vector
radius = 40
border = 250 - radius
g = -500
initial (x:y:vx:vy:_) = Ball (400*x - 200, 400*y - 200)
                             (400*vx - 200, 400*vy - 200)
step t world = bounce (move t world)
move t (Ball (x,y) (vx,vy)) = Ball (x + vx*t, y + vy*t) (vx, vy + g*t)
bounce (Ball (x,y) (vx,vy)) = Ball (nx,ny) (nvx, nvy)
  where nx = fence (-border) border x
        ny = fence (-border) border y
        nvx = if nx /= x then -vx else vx
        nvy = if ny /= y then -vy else vy
fence lo hi x = max lo (min hi x)
draw (Ball (x,y) _) = translate x y (solidCircle radius)

This is a well-known artifact of the algorithm you're using to integrate the movement's differential equation.
The "real physics" is (I will only discuss the y component)
dy/dt = v(t)
dv/dt = g
You model this by a discrete sequence of heights and velocities
y_i = y_(i-1) + v_(i-1) · Δt
v_i = v_(i-1) + g · Δt
This sure resembles the differential equations, just written as difference quotients – but it's not the same thing (except in the limit Δt → 0): in reality, the velocity itself changes during the time step, so it's not quite correct to alter the position according to the constant v value from before that time step. Simply ignoring that complication is an approximation called Euler's method, and it's known to suck rather badly.
The much more accurate standard alternative is the fourth-order Runge-Kutta method; give that a try.
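If it helps, here is a minimal sketch of one RK4 step for this system, in Python for neutrality (the names are mine; G mirrors the question's g = -500):

# One fourth-order Runge-Kutta step for dy/dt = v, dv/dt = G (constant gravity).
G = -500.0

def deriv(y, v):
    return v, G                                   # (dy/dt, dv/dt)

def rk4_step(y, v, dt):
    k1y, k1v = deriv(y, v)
    k2y, k2v = deriv(y + k1y * dt/2, v + k1v * dt/2)
    k3y, k3v = deriv(y + k2y * dt/2, v + k2v * dt/2)
    k4y, k4v = deriv(y + k3y * dt, v + k3v * dt)
    y_new = y + dt/6 * (k1y + 2*k2y + 2*k3y + k4y)
    v_new = v + dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return y_new, v_new

(For a constant acceleration RK4 reproduces the exact parabola, so with this integrator any remaining energy creep has to come from the bounce handling.)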
n.m.'s point about the bouncing is also valid, though to do it really properly you should calculate the exact time of impact, so that you neither neglect acceleration that should have applied nor apply acceleration that never happened.

Suppose the ball hits the ground exactly in the middle of the quantum of time. In reality (or rather in the frictionless "reality") the absolute value of the velocity is the same at the beginning and at the end of the quantum, but in your model it increases.
One should handle acceleration in bounce. The simplest possible way is this:
move t (Ball (x,y) (vx,vy)) = Ball (x + vx*t, y + vy*t) (vx, vy)
step t world = bounce (move t world) t
bounce (Ball (x,y) (vx,vy)) t = Ball (nx,ny) (nvx, nvy)
  where ...
        nvy = if ny /= y then -vy else vy + g * t
The velocity won't increase during the bounce.
This is still not entirely accurate; the velocity still creeps up, but more slowly.
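To get rid of even that residual creep you have to find the exact moment of impact, as the first answer notes. A sketch of the idea for the y component only, in Python for neutrality (FLOOR and G stand in for the question's -border and g; resting contact is not handled):

import math

FLOOR = -210.0        # the question's -border
G = -500.0

def step_y(y, v, dt):
    # Position under constant acceleration: y(t) = y + v*t + 0.5*G*t^2.
    # Find the earliest t in (0, dt] with y(t) == FLOOR, if any.
    a, b, c = 0.5 * G, v, y - FLOOR
    disc = b * b - 4 * a * c
    if disc >= 0:
        r = math.sqrt(disc)
        hits = sorted(t for t in ((-b - r) / (2 * a), (-b + r) / (2 * a))
                      if 0 < t <= dt)
        if hits:
            t = hits[0]
            v_impact = v + G * t                     # velocity at the impact
            return step_y(FLOOR, -v_impact, dt - t)  # reflect, finish the step
    return y + v * dt + 0.5 * G * dt * dt, v + G * dt

This way no acceleration is applied on the wrong side of the bounce, and the maximum height stays put (up to floating-point error).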

Related

How can I scale a 2D rotation vector without trig functions?

I have a normalized 2D vector that I am using to rotate other 2D vectors. In one instance it indicates "spin" (or "angular momentum") and is used to rotate the "orientation" of a simple polygon. My vector class contains this method:
rotateByXY(x, y) {
    let rotX = x * this.x - y * this.y;
    let rotY = y * this.x + x * this.y;
    this.x = rotX;
    this.y = rotY;
}
So far, this is all efficient and uses no trig whatsoever.
However, I want the "spin" to decay over time. This means that the angle of the spin should tend towards zero. And here I'm at a loss as to how to do this without expensive trig calls like this:
let angle = Math.atan2(spin.y, spin.x);
angle *= SPIN_DECAY;
spin = new Vector2D(Math.cos(angle), Math.sin(angle));
Is there a better/faster way to accomplish this?
If it's really the trigonometric functions that are slowing down your computation, you might try approximating them with their Taylor expansions.
For x close to zero the following identities hold:
cos(x) = 1 - (x^2)/2! + (x^4)/4! - (x^6)/6! + ...
sin(x) = x - (x^3)/3! + (x^5)/5! - (x^7)/7! + ...
atan(x) = x - (x^3)/3 + (x^5)/5 - (x^7)/7 + ...
Based on the degree of accuracy you need for your application you can trim the series. For instance,
cos(x) = 1 - (x^2)/2
with an error of the order of x^3 (actually, x^4, as the term with x^3 is zero anyway).
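For instance, a quick sketch of the trimmed series in Python (the function names are mine), which you can compare against the library versions over your typical angle range:

import math

def cos_approx(x):
    return 1 - x * x / 2              # error on the order of x^4

def sin_approx(x):
    return x - x ** 3 / 6             # error on the order of x^5

for x in (0.01, 0.1, 0.5):            # x in radians, close to zero
    print(x, cos_approx(x) - math.cos(x), sin_approx(x) - math.sin(x))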
However, I don't think this is going to solve your problem: the actual implementation of atan is likely already using the same trick, written by someone with lots of experience in speeding these things up. So this is not really a proper answer, but I hope it's still useful.

Charged Particle Trajectories in Magnetic Fields

I've been trying to plot the trajectories of charged particles in the field of a magnetic dipole in an attempt to give a rough pictorial representation of the northern lights. While the spiraling appears to be what I would have expected, it looks as though the spirals start out tight and get wider, as if the particles are somehow gaining energy. I'm not sure what the issue is in the code, and I would appreciate any pointers!
Shown below are the contents of the main loop (initial conditions and all were set outside).
# polar to cartesian
r = np.sqrt(X*X+Y*Y+Z*Z)
theta = np.arccos(Z/r)
phi = np.arctan(Y/X)
# magnetic field/mass
Bx = K*(1/(r**(3)))*(2*np.cos(theta)*np.sin(theta)*np.cos(phi)+np.sin(theta)*np.cos(theta)*np.cos(phi))
By = K*(1/(r**(3)))*(2*np.cos(theta)*np.sin(theta)*np.sin(phi)+np.sin(theta)*np.cos(theta)*np.sin(phi))
Bz = K*(1/(r**(3)))*(2*np.cos(theta)*np.cos(theta)-np.sin(theta)*np.sin(theta))
# acceleration components
ax = (1.6*10**(-19))*(vy*Bz - vz*By)
ay = (1.6*10**(-19))*(vz*Bx - vx*Bz)
az = (1.6*10**(-19))*(vx*By - vy*Bx)
# velocity components
vx = vx + ax*dt
vy = vy + ay*dt
vz = vz + az*dt
# position components
X = X + vx*dt + 0.5*ax*dt*dt
Y = Y + vy*dt + 0.5*ay*dt*dt
Z = Z + vz*dt + 0.5*az*dt*dt
# add position values to position vectors
x1.append(X)
y1.append(Y)
z1.append(Z)
A figure of the current trajectories (omitted here) shows the particles starting out at the top in tighter spirals before gradually increasing their radii.
(I'm using Python 3.6 for this project)
Turns out it was just a matter of the timestep being too large. After decreasing the time step size, the trajectories are much more reasonable.
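A useful way to see (and test) this: a magnetic force does no work, so the speed should stay exactly constant, but forward Euler inflates it a little on every step. A standalone sketch (the names are mine; w stands for the gyrofrequency qB/m) showing the growth shrink as dt decreases:

import numpy as np

def final_speed(dt, steps, w=1.0):
    vx, vy = 1.0, 0.0                        # circular motion in a uniform Bz
    for _ in range(steps):
        ax, ay = w * vy, -w * vx             # acceleration from q v x B / m
        vx, vy = vx + ax * dt, vy + ay * dt  # forward Euler: speed grows by
    return np.hypot(vx, vy)                  # sqrt(1 + (w*dt)^2) per step

for dt in (0.1, 0.01, 0.001):
    print(dt, final_speed(dt, int(10.0 / dt)))   # integrate to t = 10

Tracking np.sqrt(vx*vx + vy*vy + vz*vz) the same way in your own loop makes a cheap check that dt is small enough.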

Determining "fall-line" vector using 3-axis accelerometer

I am building a tilt-based Arduino device that needs to detect the "fall-line" vector of the device once it is tilted in a particular orientation. By "fall-line" I'll use the following example:
Imagine a frictionless plane with a point mass in the middle of it and a 3-axis accelerometer mounted in the plane so that the x and y axes of the accelerometer are parallel to the plane. At rest, the plane is flat and the point mass does not move. Once the plane is tilted, the point mass will move in a particular direction at a given acceleration due to gravity. I need to calculate the angle in the x-y plane that the mass will move toward and a magnitude measure corresponding to the acceleration in that direction.
I realise this is probably simple Newtonian mechanics, but I have no idea how to work this out.
The direction of the "fall-line" and the magnitude of the acceleration are both determined by the projection of the gravitational pull vector onto the plane. If the plane has a normal vector n, then the projector is P(n) = 1 - nn, where 1 is the identity operator and nn is the outer (tensor) product of the normal vector with itself. The projection of the gravitational pull vector g is simply g' = P(n) g = (1 - nn) g = g - (n . g) n, where the dot denotes the inner (dot) product. Now you only have to choose a suitable orthonormal reference frame (ex, ey, ez), where ei is a unit vector along direction i. In this reference frame:
n = nx ex + ny ey + nz ez
g = gx ex + gy ey + gz ez
The dot product n . g is then:
n . g = nx * gx + ny * gy + nz * gz
A very suitable choice of a reference frame is one where ez is collinear with n. Then nx = 0 and ny = 0 and nz = ||n|| = 1, because normal vectors are of unit length. In this frame n . g is simply gz. The components of the projection of g are then:
g'x = gx
g'y = gy
g'z = 0
The direction of g' in the XY plane can be determined by the fact that for the dot product in orthonormal reference frames a . b = ||a|| ||b|| cos(a, b), where ||a|| denotes the norm (length) of a and cos(a, b) is the cosine of the angle between a and b. If you measure the angle from the X direction, then:
g' . ex = (gx ex + gy ey) . ex = gx = ||g'|| ||ex|| cos(g', ex) = g' cos(g', ex)
where g' = ||g'|| = sqrt(gx^2 + gy^2). The angle is simply arccos(gx/g'), i.e. arc-cosine of the ratio between the X component of the gravity pull vector and the magnitude of its projection onto the XY plane:
angle = arccos[gx / sqrt(gx^2 + gy^2)]
The magnitude of the acceleration is proportional to the magnitude of g', which is (once again):
g' = ||g'|| = sqrt(gx^2 + gy^2)
Now the nice thing is that all accelerometers measure the components of the gravity field in a reference frame that usually has ex aligned with the height (or the width) of the device, ey aligned with the width (or the height) of the device, and ez perpendicular to the surface of the device, which matches exactly the reference frame where ez is collinear with the plane normal. If this is not the case with your Arduino device, simply rotate the accelerometer and align it as needed.
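Putting it together, a small sketch in Python (the function is mine) that turns the accelerometer readings into the fall-line angle and magnitude. It uses atan2 rather than the arccos form above, so the sign of gy is not lost:

import math

def fall_line(gx, gy, gz):
    magnitude = math.sqrt(gx * gx + gy * gy)   # ||g'||, within the XY plane
    angle = math.atan2(gy, gx)                 # full -pi..pi angle from the X axis
    return angle, magnitude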

Issues with bullet entry points for "shoulder mounted" guns

I'm making a SHMUP game that has a space ship. That space ship currently fires a main cannon from its center point. The sprite that represents the ship has a center based registration point. 0,0 is center of the ship.
When I fire the main cannon, I make a bullet, set its x & y coordinates to match the avatar's, and add it to the display list. This works fine.
I then made two new functions called fireLeftCannon and fireRightCannon. These create a bullet and add it to the display list, but with x, y values of this.x + 10 (or - 10) and this.y + 15. This creates a sort of triangle of bullet entry points.
Similar to this:
   ▲
▲   ▲
The game tick function adjusts the avatar's rotation to always point at the cursor; this is my aiming method. When I shoot straight up, all 3 bullets fire upward in the expected pattern. However, when I rotate and face right, the entry points do not rotate with the ship. This is not an issue for the center-point main cannon.
My question is: how do I use the current center position (this.x, this.y) and adjust it based on my current rotation to place a new bullet so that it is angled correctly?
Thanks a lot in advance.
Tyler
EDIT
OK, I tried your solution and it didn't work. Here is my bullet move code:
var pi:Number = Math.PI;
var _xSpeed:Number = Math.cos((_rotation - 90) * (pi/180));
var _ySpeed:Number = Math.sin((_rotation - 90) * (pi/180));
this.x += (_xSpeed * _bulletSpeed);
this.y += (_ySpeed * _bulletSpeed);
And I tried adding your code to the left shoulder cannon:
_bullet.x = this.x + Math.cos( StaticMath.ToRad(this.rotation) ) * ( this.x - 10 ) - Math.sin( StaticMath.ToRad(this.rotation)) * ( this.x - 10 );
_bullet.y = this.y + Math.sin( StaticMath.ToRad(this.rotation)) * ( this.y + 15 ) + Math.cos( StaticMath.ToRad(this.rotation)) * ( this.y + 15 );
This is placing the shots a good deal away from the ship and sometimes off screen.
How am I messing up the translation code?
What you need to start with is, to be precise, the coordinates of your cannons in the ship's coordinate system (or “frame of reference”). This is like what you have now but starting from 0, not the ship's position, so they would be something like:
(0, 0) -- center
(10, 15) -- left shoulder
(-10, 15) -- right shoulder
Then what you need to do is transform those coordinates into the coordinate system of the world/scene; this is the same kind of thing your graphics library is doing to draw the sprite.
In your particular case, the intervening transformations are
world ←translation→ ship position ←rotation→ ship positioned and rotated
So given that you have coordinates in the third frame (how the ship's sprite is drawn), you need to apply the rotation, and then apply the translation, at which point you're in the first frame. There are two approaches to this: one is matrix arithmetic, and the other is performing the transformations individually.
For this case, it is simpler to skip the matrices unless you already have a matrix library handy, in which case you should use it: calculate the ship's coordinate transformation matrix once per frame and then use it for all bullets etc.
I'll now explain doing it directly.
The general method of applying a rotation to coordinates (in two dimensions) is this (where (x1,y1) is the original point and (x2,y2) is the new point):
x2 = cos(angle)*x1 - sin(angle)*y1
y2 = sin(angle)*x1 + cos(angle)*y1
Whether this is a clockwise or counterclockwise rotation will depend on the “handedness” of your coordinate system; just try it both ways (+angle and -angle) until you have the right result. Don't forget to use the appropriate units (radians or degrees, but most likely radians) for your angles given the trig functions you have.
Now, you need to apply the translation. I'll continue using the same names, so (x3,y3) is the rotated-and-translated point. (dx,dy) is what we're translating by.
x3 = dx + x2
y3 = dy + y2
As you can see, that's very simple; you could easily combine it with the rotation formulas.
I have described the transformations in general; for the ship's bullets in particular, it works out like this:
bulletX = shipPosX + cos(shipAngle)*gunX - sin(shipAngle)*gunY
bulletY = shipPosY + sin(shipAngle)*gunX + cos(shipAngle)*gunY
If your bullets are turning the wrong direction, negate the angle.
If you want to establish a direction-dependent initial velocity for your bullets (e.g. always-firing-forward guns) then you just apply the rotation but not the translation to the velocity (gunVelX, gunVelY).
bulletVelX = cos(shipAngle)*gunVelX - sin(shipAngle)*gunVelY
bulletVelY = sin(shipAngle)*gunVelX + cos(shipAngle)*gunVelY
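Here is all of the above bundled into one helper, as a sketch in Python for neutrality (the names are mine; the angle is in radians, and you may need to negate it as noted):

import math

def spawn_bullet(ship_x, ship_y, ship_angle, gun_x, gun_y, gun_vel_x, gun_vel_y):
    c, s = math.cos(ship_angle), math.sin(ship_angle)
    # rotate the gun offset, then translate by the ship position
    x = ship_x + c * gun_x - s * gun_y
    y = ship_y + s * gun_x + c * gun_y
    # rotate (but do not translate) the muzzle velocity
    vel_x = c * gun_vel_x - s * gun_vel_y
    vel_y = s * gun_vel_x + c * gun_vel_y
    return x, y, vel_x, vel_y

# e.g. the left-shoulder gun at offset (10, 15):
# spawn_bullet(ship.x, ship.y, ship_angle, 10, 15, 0, 0)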
If you were to use vector and matrix math, you would be doing all the same calculations as here, but they would be bundled up in single objects rather than pairs of x's and y's and four trig functions. It can greatly simplify your code:
shipTransform = translate(shipX, shipY)*rotate(shipAngle)
bulletPos = shipTransform*gunPos
I've given the explicit formulas because knowing how the bare arithmetic works is useful to the conceptual understanding.
Response to edit:
In the code you edited into your question, you are adding what I assume is the ship position into the coordinates you multiply by sin/cos. Don't do that — just multiply the offset of the gun position from the ship center by sin/cos and only then add that to the ship position. Also, you are using x x; y y on the two lines, where you should be using x y; x y. Here is your code edited to fix those two things:
_bullet.x = this.x + Math.cos( StaticMath.ToRad(this.rotation)) * (-10) - Math.sin( StaticMath.ToRad(this.rotation)) * (+15);
_bullet.y = this.y + Math.sin( StaticMath.ToRad(this.rotation)) * (-10) + Math.cos( StaticMath.ToRad(this.rotation)) * (+15);
This is the code for a gun at offset (-10, 15).

Projective transformation

Given two image buffers (assume it's an array of ints of size width * height, with each element a color value), how can I map an area defined by a quadrilateral from one image buffer into the other (always square) image buffer? I'm led to understand this is called "projective transformation".
I'm also looking for a general (not language- or library-specific) way of doing this, such that it could be reasonably applied in any language without relying on "magic function X that does all the work for me".
An example: I've written a short program in Java using the Processing library (processing.org) that captures video from a camera. During an initial "calibrating" step, the captured video is output directly into a window. The user then clicks on four points to define an area of the video that will be transformed, then mapped into the square window during subsequent operation of the program. If the user were to click on the four points defining the corners of a door visible at an angle in the camera's output, then this transformation would cause the subsequent video to map the transformed image of the door to the entire area of the window, albeit somewhat distorted.
Using linear algebra is much easier than all that geometry! Plus you won't need to use sine, cosine, etc, so you can store each number as a rational fraction and get the exact numerical result if you need it.
What you want is a mapping from your old (x,y) co-ordinates to your new (x',y') co-ordinates. You can do it with matrices. You need to find the 2-by-4 projection matrix P such that P times the old coordinates equals the new co-ordinates. We'll assume that you're mapping lines to lines (not, for instance, straight lines to parabolas). Because you have a projection (parallel lines don't stay parallel) and translation (sliding), you need a factor of (xy) and (1), too. Drawn as matrices:
[a b c d]   [x  ]   [x']
[e f g h] * [y  ] = [y']
            [x*y]
            [1  ]
You need to know a through h, so solve these equations:
a*x_0 + b*y_0 + c*x_0*y_0 + d = i_0
a*x_1 + b*y_1 + c*x_1*y_1 + d = i_1
a*x_2 + b*y_2 + c*x_2*y_2 + d = i_2
a*x_3 + b*y_3 + c*x_3*y_3 + d = i_3
e*x_0 + f*y_0 + g*x_0*y_0 + h = j_0
e*x_1 + f*y_1 + g*x_1*y_1 + h = j_1
e*x_2 + f*y_2 + g*x_2*y_2 + h = j_2
e*x_3 + f*y_3 + g*x_3*y_3 + h = j_3
Again, you can use linear algebra:
[x_0 y_0 x_0*y_0 1]   [a e]   [i_0 j_0]
[x_1 y_1 x_1*y_1 1] * [b f] = [i_1 j_1]
[x_2 y_2 x_2*y_2 1]   [c g]   [i_2 j_2]
[x_3 y_3 x_3*y_3 1]   [d h]   [i_3 j_3]
Plug in your corners for x_n,y_n,i_n,j_n. (Corners work best because they are far apart to decrease the error if you're picking the points from, say, user-clicks.) Take the inverse of the 4x4 matrix and multiply it by the right side of the equation. The transpose of that matrix is P. You should be able to find functions to compute a matrix inverse and multiply online.
Where you'll probably have bugs:
When computing, remember to check for division by zero. That's a sign that your matrix is not invertible. That might happen if you try to map one (x,y) co-ordinate to two different points.
If you write your own matrix math, remember that matrices are usually specified row,column (vertical,horizontal) and screen graphics are x,y (horizontal,vertical). You're bound to get something wrong the first time.
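As a sketch of those steps with NumPy (the function names are mine; np.linalg.solve does the inverse-and-multiply in one go, and it raises LinAlgError for a singular matrix, which is the division-by-zero case above):

import numpy as np

def fit_map(src, dst):
    # src: the four (x, y) corners; dst: the corresponding four (i, j) corners.
    A = np.array([[x, y, x * y, 1.0] for (x, y) in src])
    return np.linalg.solve(A, np.array(dst, dtype=float))
    # column 0 of the result is (a, b, c, d), column 1 is (e, f, g, h)

def apply_map(coeffs, x, y):
    return np.array([x, y, x * y, 1.0]) @ coeffs   # the new (i, j)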
EDIT
The assumption below of the invariance of angle ratios is incorrect. Projective transformations instead preserve cross-ratios and incidence. A solution then is:
Find the point C' at the intersection of the lines defined by the segments AD and CP.
Find the point B' at the intersection of the lines defined by the segments AD and BP.
Determine the cross-ratio of B', D, A, C', i.e. r = (B'A * DC') / (DA * B'C').
Construct the projected line F'HEG'. The cross-ratio of these points is equal to r, i.e. r = (F'E * HG') / (HE * F'G').
F'F and G'G will intersect at the projected point Q so equating the cross-ratios and knowing the length of the side of the square you can determine the position of Q with some arithmetic gymnastics.
Hmmmm... I'll take a stab at this one. This solution relies on the assumption that ratios of angles are preserved in the transformation (the guiding image that originally accompanied this answer is omitted here). The algorithm only provides the mapping of a point in the quadrilateral to a point in the square; you would still need to deal with multiple quadrilateral points being mapped to the same square point.
Let ABCD be a quadrilateral where A is the top-left vertex, B is the top-right vertex, C is the bottom-right vertex and D is the bottom-left vertex. The pair (xA, yA) represent the x and y coordinates of the vertex A. We are mapping points in this quadrilateral to the square EFGH whose side has length equal to m.
Compute the lengths AD, CD, AC, BD and BC:
AD = sqrt((xA-xD)^2 + (yA-yD)^2)
CD = sqrt((xC-xD)^2 + (yC-yD)^2)
AC = sqrt((xA-xC)^2 + (yA-yC)^2)
BD = sqrt((xB-xD)^2 + (yB-yD)^2)
BC = sqrt((xB-xC)^2 + (yB-yC)^2)
Let thetaD be the angle at the vertex D and thetaC be the angle at the vertex C. Compute these angles using the cosine law:
thetaD = arccos((AD^2 + CD^2 - AC^2) / (2*AD*CD))
thetaC = arccos((BC^2 + CD^2 - BD^2) / (2*BC*CD))
We map each point P in the quadrilateral to a point Q in the square. For each point P in the quadrilateral, do the following:
Find the distance DP:
DP = sqrt((xP-xD)^2 + (yP-yD)^2)
Find the distance CP:
CP = sqrt((xP-xC)^2 + (yP-yC)^2)
Find the angle thetaP1 between CD and DP:
thetaP1 = arccos((DP^2 + CD^2 - CP^2) / (2*DP*CD))
Find the angle thetaP2 between CD and CP:
thetaP2 = arccos((CP^2 + CD^2 - DP^2) / (2*CP*CD))
The ratio of thetaP1 to thetaD should be the ratio of thetaQ1 to 90. Therefore, calculate thetaQ1:
thetaQ1 = thetaP1 * 90 / thetaD
Similarly, calculate thetaQ2:
thetaQ2 = thetaP2 * 90 / thetaC
Find the distance HQ:
HQ = m * sin(thetaQ2) / sin(180-thetaQ1-thetaQ2)
Finally, the x and y position of Q relative to the bottom-left corner of EFGH is:
x = HQ * cos(thetaQ1)
y = HQ * sin(thetaQ1)
You would have to keep track of how many colour values get mapped to each point in the square so that you can calculate an average colour for each of those points.
I think what you're after is a planar homography; have a look at these lecture notes:
http://www.cs.utoronto.ca/~strider/vis-notes/tutHomography04.pdf
If you scroll down to the end you'll see an example of just what you're describing. I expect there's a function in the Intel OpenCV library which will do just this.
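For example, a sketch with OpenCV's Python bindings (getPerspectiveTransform and warpPerspective are OpenCV's; the wrapper and the default size are mine):

import cv2
import numpy as np

def extract_square(frame, quad, size=400):
    # quad: the four clicked corners, ordered to match the square's corners
    src = np.float32(quad)
    dst = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
    H = cv2.getPerspectiveTransform(src, dst)      # the 3x3 planar homography
    return cv2.warpPerspective(frame, H, (size, size))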
There is a C++ project on CodeProject that includes source for projective transformations of bitmaps. The maths are on Wikipedia here. Note that, as far as I know, a projective transformation will not map any arbitrary quadrilateral onto another, but will do so for triangles; you may also want to look up skewing transforms.
If this transformation has to look good (as opposed to the way a bitmap looks if you resize it in Paint), you can't just create a formula that maps destination pixels to source pixels. Values in the destination buffer have to be based on a complex averaging of nearby source pixels or else the results will be highly pixelated.
So unless you want to get into some complex coding, use someone else's magic function, as smacl and Ian have suggested.
Here's how I would do it in principle:
map the origin of A to the origin of B via a translation vector t.
take the unit vectors of A, (1,0) and (0,1), and calculate how they would be mapped onto the unit vectors of B.
this gives you a transformation matrix M so that every vector a in A maps to M a + t
invert the matrix and negate the translation vector, so that for every vector b in B you have the inverse mapping b -> M^-1 (b - t)
once you have this transformation, for each point in the target area in B, find the corresponding point in A and copy it.
The advantage of this mapping is that you only calculate the points you need, i.e. you loop over the target points, not the source points. This was a widely used technique in the "demo coding" scene a few years back.
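A sketch of that loop in Python (the names are mine), with nearest-neighbour sampling, which is exactly where the pixelation mentioned in the other answer comes from:

def render(src, src_w, src_h, dst_w, dst_h, inv_map):
    # inv_map(i, j) implements b -> M^-1 (b - t) from the steps above
    dst = [0] * (dst_w * dst_h)
    for j in range(dst_h):
        for i in range(dst_w):
            x, y = inv_map(i, j)
            xi, yi = int(round(x)), int(round(y))  # nearest-neighbour sample
            if 0 <= xi < src_w and 0 <= yi < src_h:
                dst[j * dst_w + i] = src[yi * src_w + xi]
    return dst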
