Why can't I make this two-body system orbit without drifting? (Is the center-of-mass portion of the code not working?) - vpython

I am trying to make this two-body simulation work in VPython. It seems to run, but something is wrong with my center of mass (or something else, I don't really know): the two-body system drifts instead of staying still.
from visual import*
mt= 1.99e30 #Kg
G=6.67e-11 #N*(m/kg)^2
#Binary System stars
StarA=sphere(pos= (3.73e11,0,0),radius=5.28e10,opacity=0.5,color=color.green,make_trail=True, interval=10)
StarB=sphere(pos=(-3.73e11,0,0),radius=4.86e10,opacity=0.5, color=color.blue,make_trail=True, interval=10)
#StarC=sphere(pos=(3.44e13,0,0),radius=2.92e11,opacity=0.5, color=color.yellow,make_trail=True, interval=10)
#mass of binary stars
StarA.mass= 1.45e30 #kg
StarB.mass= 1.37e30 #kg
#StarC.mass= 6.16e29 #kg
#initial velocities of binary system
StarA.velocity =vector(0,8181.2,0)
StarB.velocity =vector(0,-8181.2,0)
#StarC.velocity= vector(0,1289.4,0)
#Time step for each binary star
dt=1e5
StarA.pos= StarA.pos + StarA.velocity*dt
StarB.pos= StarB.pos + StarB.velocity*dt
#StarC.pos= StarC.pos + StarC.velocity*dt
#Lists
objects=[]
objects.append(StarA)
objects.append(StarB)
#objects.append(StarC)
#center of mass
Ycm=0
Xcm=0
Zcm=0
Vcmx=0
Vcmy=0
Vcmz=0
TotalMass=0
for each in objects:
    TotalMass = TotalMass + each.mass
    Xcm = Xcm + each.pos.x*each.mass
    Ycm = Ycm + each.pos.y*each.mass
    Zcm = Zcm + each.pos.z*each.mass
    Vcmx = Vcmx + each.velocity.x*each.mass
    Vcmy = Vcmy + each.velocity.y*each.mass
    Vcmz = Vcmz + each.velocity.z*each.mass
for each in objects:
    Xcm = Xcm/TotalMass
    Ycm = Ycm/TotalMass
    Zcm = Zcm/TotalMass
    Vcmx = Vcmx/TotalMass
    Vcmy = Vcmy/TotalMass
    Vcmz = Vcmz/TotalMass
    each.pos = each.pos - vector(Xcm,Ycm,Zcm)
    each.velocity = each.velocity - vector(Vcmx,Vcmy,Vcmz)
#Code for Triple system
firstStep=0
while True:
    rate(200)
    for i in objects:
        i.acceleration = vector(0,0,0)
        for j in objects:
            if i != j:
                dist = j.pos - i.pos
                i.acceleration = i.acceleration + G * j.mass * dist / mag(dist)**3
    for i in objects:
        if firstStep == 0:
            i.velocity = i.velocity + i.acceleration*dt
            firstStep = 1
        else:
            i.velocity = i.velocity + i.acceleration*dt
        i.pos = i.pos + i.velocity*dt

Two points I'd like to make about your problem:
You're using Euler's method of integration. That is the "i.velocity = i.velocity + i.acceleration*dt" part of your code. This method is not very accurate, especially with oscillatory problems like this one. That's part of the reason you're noticing the drift in your system. I'd recommend using the RK4 method of integration. It's a bit more complicated but gives very good results, although Verlet integration may be more useful for you here.
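As a rough illustration of the Verlet suggestion, here is a minimal velocity Verlet step in plain Python, with numpy arrays standing in for VPython vectors (the function names are mine, not part of your code):
import numpy as np
G = 6.67e-11
def accelerations(positions, masses):
    # Gravitational acceleration on each body from every other body.
    acc = [np.zeros(3) for _ in positions]
    for i, pi in enumerate(positions):
        for j, pj in enumerate(positions):
            if i != j:
                d = pj - pi
                acc[i] += G * masses[j] * d / np.linalg.norm(d)**3
    return acc
def verlet_step(positions, velocities, masses, dt):
    # Velocity Verlet: second-order accurate and far better at keeping
    # orbits closed than the plain Euler update v += a*dt.
    a_old = accelerations(positions, masses)
    positions = [p + v*dt + 0.5*a*dt*dt for p, v, a in zip(positions, velocities, a_old)]
    a_new = accelerations(positions, masses)
    velocities = [v + 0.5*(ao + an)*dt for v, ao, an in zip(velocities, a_old, a_new)]
    return positions, velocities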
Your system may have a net momentum in some direction. You have them starting with the same speed, but have different masses, so the net momentum is non-zero. To correct for this transform the system into a zero momentum reference frame. To do this add up the momentum (mass times velocity vector) of all the stars, then divide by the total mass of the system. This will give you a velocity vector to subtract off from all your initial velocity vectors of the stars, and so your center of mass will not move.
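For example, a short sketch of that correction in plain Python (assuming each star has the .mass and .velocity attributes from your code):
# Subtract the center-of-mass velocity so the total momentum is zero.
TotalMass = StarA.mass + StarB.mass
v_cm = (StarA.velocity*StarA.mass + StarB.velocity*StarB.mass) / TotalMass
for star in objects:
    star.velocity = star.velocity - v_cm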
Hope this helps!

This is wrong:
for each in objects:
    Xcm = Xcm/TotalMass
    ...
This division should only happen once. And then the averages should be removed from the objects, as in
Xcm = Xcm/TotalMass
...
for each in objects:
    each.pos.x -= Xcm
    ...

Related

How do I rotate smoothly in Godot?

Below is my code (ignore the input variable it is just for movement):
var input = Vector3()
if Input.is_action_pressed("move_forward") and Input.is_action_pressed("move_left"):
    rotation_degrees.y = lerp_angle(rotation_degrees.y, atan2(1, -1), delta * 7)
    input[2] = 1
    input[0] = 1
elif Input.is_action_pressed("move_forward") and Input.is_action_pressed("move_right"):
    rotation_degrees.y = lerp_angle(rotation_degrees.y, atan2(1, 1), delta * 7)
    input[2] = 1
    input[0] = -1
elif Input.is_action_pressed("move_backward") and Input.is_action_pressed("move_left"):
    rotation_degrees.y = lerp_angle(rotation_degrees.y, atan2(-1, -1), delta * 7)
    input[2] = -1
    input[0] = 1
elif Input.is_action_pressed("move_backward") and Input.is_action_pressed("move_right"):
    rotation_degrees.y = lerp_angle(rotation_degrees.y, atan2(-1, 1), delta * 7)
    input[2] = -1
    input[0] = -1
else:
    if Input.is_action_pressed("move_forward"):
        rotation_degrees.y = lerp_angle(rotation_degrees.y, atan2(1, 0), delta * 7)
        input[2] = 1
    if Input.is_action_pressed("move_backward"):
        rotation_degrees.y = lerp_angle(rotation_degrees.y, atan2(-1, 0), delta * 7)
        input[2] = -1
    if Input.is_action_pressed("move_left"):
        rotation_degrees.y = lerp_angle(rotation_degrees.y, atan2(0, -1), delta * 7)
        input[0] = 1
    if Input.is_action_pressed("move_right"):
        rotation_degrees.y = lerp_angle(rotation_degrees.y, atan2(0, 1), delta * 7)
        input[0] = -1
For some reason, the player is hardly moving at all. Before, I just set rotation_degrees.y to the direction directly, but that didn't give smooth movement. Any help is appreciated!
Getting Input
We have a better way to get a vector from input, let us start there:
var input := Vector3(
    Input.get_action_strength("move_left") - Input.get_action_strength("move_right"),
    0,
    Input.get_action_strength("move_forward") - Input.get_action_strength("move_backward")
)
Addendum: This is the order you had. However, notice that we usually think of x as growing to the right, and, in Godot, forward is negative z. So this is backwards; I don't know if that is intentional.
What is going on here is that get_action_strength gives us the "strength" of the input, which is a number between 0 and 1. It will be 0 or 1 for a digital input, but it could be a value in between for analog input.
Then, to get a component of the vector, I take the input that would make it positive and subtract the one that would make it negative. That gives you a value in the range from -1 to 1, which also means we don't get a unit vector; depending on what you are doing, you might or might not want to normalize it.
Next you want the angle on the xz plane. Well, if you just used a Vector2, that would be trivial. So, instead of the code above, let us use a Vector2:
var input := Vector2(
    Input.get_action_strength("move_left") - Input.get_action_strength("move_right"),
    Input.get_action_strength("move_forward") - Input.get_action_strength("move_backward")
)
By the way, in Godot 3.4+ and Godot 4.0+ you will be able to use Input.get_vector, which simplifies this further.
It should not be hard to get a Vector3 from that if you need it for something else: Vector3(input.x, 0, input.y)
And now get the angle:
var input_angle := input.angle()
Addendum: You may need to adjust the angle to make it usable in 3D. The reason is that the angle is measured from the x axis, but a rotation of zero looks down the z axis. I had overlooked that when I wrote this answer. Although I tested all the code here, I tested it without motion, so I didn't notice that the orientation didn't match the direction of motion.
Smooth Rotation
Now, we want to smoothly change rotation_degrees.y to input_angle. There are multiple ways to do it, here are a few.
Before any of them, you are going to need to declare the target angle as a field, so the rotation does not reset when there is no input.
var _target_angle:float
And we are going to store it, whenever there is input:
var input := Vector2(
    Input.get_action_strength("ui_left") - Input.get_action_strength("ui_right"),
    Input.get_action_strength("ui_up") - Input.get_action_strength("ui_down")
)
if input.length_squared() > 0:
    _target_angle = input.angle()
Note: I'm checking length_squared instead of length to avoid a square root.
lerp_angle
Yes, you should be able to use lerp_angle, even though I don't advocate it.
I'll go ahead and add that 7 magic constant you have:
const _rotation_amount:float = 7.0
Then you would do it something like this (which I admit is convenient and short code):
rotation.y = lerp_angle(rotation.y, _target_angle, delta * _rotation_amount)
Please notice I changed rotation_degrees to rotation, that is because you get the angle in radians. Which would have been a problem using atan2 too. You can use deg2rad and rad2deg to do the conversions if you prefer.
With this approach, you have very little control over the interpolation. You do not specify how fast it rotates, nor how much time the rotation should take.
Please notice that the weight of a linear interpolation is not time. You are not telling Godot "go from this value to this value in this amount of time" (you can do that with a Tween, see below).
Also notice that it will advance a more or less stable proportion of the difference of the values each frame. Thus it moves more at the start, and less at the end. That is, this is not linear at all. It has a deceleration effect.
Angular speed
If we are going to work with angular speed, it follows that we need a field to hold the angular speed:
const _angular_speed:float = TAU
Here TAU (which is PI * 2) represents one rotation per second (four rotations per second would be 4 * TAU, and a quarter rotation per second would be 0.25 * TAU; you can associate TAU with one Turn as a mnemonic).
Now, we are going to figure out the angle difference (shortest path), to do that we can do this:
var angle_diff := wrapf(_target_angle - rotation.y, -PI, PI)
The wrapf function will "wrap" the value into the given range (-PI to PI in this case), so that if the value goes beyond that range, it is as if it entered from the opposite edge (wrapf(14, 0, 10) gives 4, and wrapf(-4, 0, 10) gives 6). I hope that makes sense.
To apply the angular speed, in theory, we would only need the sign:
rotation.y += delta * _angular_speed * sign(angle_diff)
However, we don't want to overshoot. We can solve this with clamp:
rotation.y += clamp(delta * _angular_speed, 0, abs(angle_diff)) * sign(angle_diff)
Notice I have separated the sign from the magnitude of angle_diff. I'm clamping how much the angle would change according to angular speed to the magnitude of angle_diff (so it is never more than angle_diff, i.e. it does not overshoot). Once clamped, I apply the sign so it turns in the correct direction.
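Here is the same wrap-and-clamp idea as a small Python sketch, in case it helps to see it outside GDScript (the wrapf behaviour is emulated with modular arithmetic; angles and delta are assumed to be in radians):
import math
def rotate_towards(current, target, angular_speed, delta):
    # Shortest signed angle difference, wrapped into [-pi, pi).
    diff = (target - current + math.pi) % (2 * math.pi) - math.pi
    # Advance by at most angular_speed * delta, and never past the target.
    step = min(angular_speed * delta, abs(diff))
    return current + math.copysign(step, diff)
Calling this once per frame converges on the target without overshooting, which is exactly what the clamped GDScript line above does.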
And, of course, if you really wanted to, you could work with angular acceleration and deceleration too.
Tween
Using a Tween node is the most powerful option sans using an AnimationPlayer. And it is also very easy to use.
We, of course, need to define a Tween object and add it as a child:
var _tween:Tween
func _ready() -> void:
    _tween = Tween.new()
    add_child(_tween)
This is because the Tween object will continue to gradually change the rotation each frame (so we don't have to worry about it). And to do that, it needs to persist and to get frame notifications, so it needs to be in the scene tree somewhere.
In Godot 4.0+, you will be able to do get_tree().create_tween() which returns a rooted Tween, so you don't have to worry about keeping a reference or adding it to the scene tree.
And then you can do this:
var input := Vector2(
    Input.get_action_strength("ui_left") - Input.get_action_strength("ui_right"),
    Input.get_action_strength("ui_up") - Input.get_action_strength("ui_down")
)
if input.length_squared() > 0:
    var rotation_time := 0.2
    _tween.interpolate_property(self, "rotation:y", rotation.y, input.angle(), rotation_time)
    _tween.start()
Do not forget to call start. In Godot 4.0+, all tweens automatically start by default.
As you can see, the code is telling the Tween to interpolate the y of the rotation from the value of rotation.y (its current value) to the value of input.angle(), and we specify a rotation time (which I made a variable just so I can give it a name). And that's it.
In Godot 4.0+, interpolate_property is gone in favor of the new tween_property API, with which you don't need to specify the start value.
Oh, wait, we can also specify how you want to ease these values:
_tween.interpolate_property(
    self,
    "rotation:y",
    rotation.y,
    input.angle(),
    rotation_time,
    Tween.TRANS_QUAD,
    Tween.EASE_OUT
)
See the easing cheatsheet by wandomPewlin to pick your easing.
I believe Tween.TRANS_QUAD and Tween.EASE_OUT gives you something close to what you get from lerp_angle, except stable. And look, you have more options!
I'll also mention that you can ask a Tween which interpolations it is currently running, and also remove them, which is useful in some cases. Yet you don't need to go into that for simple cases like this one. And, well, there is AnimationPlayer if you need something more complex. By the way, yes, AnimationPlayer can operate on a subproperty (such as rotation.y; I have an explanation elsewhere).

Does computing screen space for reflections/refractions require second partial derivatives?

I have written a basic raytracer which keeps track of screen space. Each fragment has an associated pixel radius. When a ray dir, extended by some distance from an eye, hits the geometry, I compute the normal vector N for the hit and combine it with four more rays. In pseudocode:
def distance := shortestDistanceToSurface(sdf, eye, dir, pixelRadius)
def p := eye + dir * distance
def N := estimateNormal(sdf, p)
def glance := distance * glsl.dot(dir, N)
def dx := (dirPX / glsl.dot(dirPX, N) - dirNX / glsl.dot(dirNX, N)) * glance
def dy := (dirPY / glsl.dot(dirPY, N) - dirNY / glsl.dot(dirNY, N)) * glance
Here, dirPX, dirNX, dirPY, and dirNY are rays which are offset by dir by the pixel radius in screen space in each of the four directions, but still aiming at the same reference point. This gives dx and dy, which are partial derivatives across the pixel indicating the rate at which the hit moves along the surface of the geometry as the rays move through screen space.
Because I track screen space, I can use pre-filtered samplers, as discussed by Inigo Quilez. They look great. However, now I want to add reflection (and refraction), which means that I need to recurse, and I'm not sure how to compute these rays and track screen space.
The essential problem is that, in order to figure out what color of light is being reflected at a certain place on the geometry, I need to not just take a point sample, but examine the whole screen space which is reflected. I can use the partial derivatives to give me four new points on the geometry which approximate an ellipse which is the projection of the original pixel from the screen:
def px := dx * pixelRadius
def py := dy * pixelRadius
def pPX := p + px
def pNX := p - px
def pPY := p + py
def pNY := p - py
And I can compute an approximate pixel radius by smushing the ellipse into a circle. I know that this ruins certain kinds of desirable anisotropic blur, but what's a raytracer without cheating?
def nextRadius := (glsl.length(dx) * glsl.length(dy)).squareRoot() * pixelRadius
However, I don't know where to reflect those points into the geometry; I don't know where to focus their rays. If I have to make a choice of focus, then it will be arbitrary, and depending on where the geometry reflects its own image, then this could arbitrarily blur or moiré the reflected images.
Do I need to take the second partial derivatives? I can approximate them just like the first derivatives, and then I can use them to adjust the normal N with slight changes, just like with the hit p. The normals then guide the focus of the ellipse, and map it to an approximate conic section. I'm worried about three things:
I worry about the cost of doing a couple extra vector additions and multiplications, which is probably negligible;
And also about whether the loss in precision, which is already really bad when doing these cheap derivatives, is going to be too lossy over multiple reflections;
And finally, how I'm supposed to handle situations where screen space explodes; when I have a mirrored sphere, how am I supposed to sample over big wedges of reflected space and e.g. average a checkerboard pattern into a grey?
And while it's not a worry, I simply don't know how to take four vectors and quickly fit a convincing cone to them, but this might be a mere problem of spending some time doing algebra on a whiteboard.
Edit: In John Amanatides' 1984 paper Ray Tracing with Cones, the curvature information is indeed computed, and used to fit an estimated cone onto the reflected ray. In Homan Igehy's 1999 paper Tracing Ray Differentials, only the first-order derivatives are used, and second derivatives are explicitly ignored.
Are there other alternatives, maybe? I've experimented with discarding the pixel radius after one reflection and just taking point samples, and they look horrible, with lots of aliasing and noise. Perhaps there is a field-of-view or depth-of-field approximation which can be computed on a per-material basis. As usual, multisampling can help, but I want an analytic solution so I don't waste so much CPU needlessly.
(sdf is a signed distance function and I am doing sphere tracing; the same routine both computes distance and also normals. glsl is the GLSL standard library.)
I won't accept my own answer, but I'll explain what I've done so that I can put this question down for now. I ended up going with an approach similar to Amanatides, computing a cone around each ray.
Each time I compute a normal vector, I also compute the mean curvature. Normal vectors are computed using a well-known trick. Let p be a zero of the SDF as in my question, let epsilon be a reasonable offset to use for numerical differentiation, and then let vp and vn be vectors whose components are evaluations of the SDF near p but perturbed at each component by epsilon. In pseudocode:
def justX := V(1.0, 0.0, 0.0)
def justY := V(0.0, 1.0, 0.0)
def justZ := V(0.0, 0.0, 1.0)
def vp := V(sdf(p + justX * epsilon), sdf(p + justY * epsilon), sdf(p + justZ * epsilon))
def vn := V(sdf(p - justX * epsilon), sdf(p - justY * epsilon), sdf(p - justZ * epsilon))
Now, by clever abuse of finite difference coefficients, we can compute both first and second derivatives at the same time. The third coefficient, sdf(p), is already zero by assumption. This gives our normal vector N, which is the gradient, and our mean curvature H, which is the Laplacian.
def N := glsl.normalize(vp - vn)
def H := sum(vp + vn) / (epsilon * epsilon)
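For reference, here is the same estimate as a small numpy sketch, sanity-checked against a sphere SDF (the epsilon and the sphere radius are arbitrary values I picked for the example):
import numpy as np
def sphere_sdf(p, radius=2.0):
    return np.linalg.norm(p) - radius
def normal_and_curvature(sdf, p, epsilon=1e-3):
    # Central differences give the gradient (normal direction) and the
    # Laplacian (the mean-curvature term H above) in one pass; this
    # assumes sdf(p) is approximately zero, i.e. p is a hit point.
    offsets = np.eye(3) * epsilon
    vp = np.array([sdf(p + o) for o in offsets])
    vn = np.array([sdf(p - o) for o in offsets])
    N = (vp - vn) / np.linalg.norm(vp - vn)
    H = np.sum(vp + vn) / (epsilon * epsilon)
    return N, H
# On the sphere's surface the normal comes out radial and H is close to
# 2/radius with this convention.
N, H = normal_and_curvature(sphere_sdf, np.array([2.0, 0.0, 0.0]))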
We can estimate Gaussian curvature from mean curvature. While mean curvature tells our cone how much to expand or contract, Gaussian curvature is always non-negative and measures how much extra area (spherical excess) is added to the cone's area of intersection. Gaussian curvature is given with K instead of H, and after substituting:
def K := H * H
Now we're ready to adjust the computation of fragment width. Let's assume that, in addition to the pixelRadius in screen space and distance from the camera to the geometry, we also have dradius, the rate of change in pixel radius over distance. We can take a dot product to compute a glance factor just like before, and the trigonometry is similar:
def glance := glsl.dot(dir, N).abs().reciprocal()
def fradius := (pixelRadius + dradius * distance) * glance * (1.0 + K)
def fwidth := fradius * 2.0
And now we have fwidth just like in GLSL. Finally, when it comes time to reflect, we'll want to adjust the change in radius by integrating our second-derivative curvature into our first derivative:
def dradiusNew := dradius + H * fradius
The results are satisfactory. The fragments might be a little too big; it's hard to tell if something is overly blurry or just properly antialiased. I wonder whether Amanatides used a different set of equations to handle curvature; I spent far too long at a whiteboard deriving what ended up being pleasantly simple operations.
This image was not supersampled; each pixel had one fragment with one ray and one cone.

Numerical differentiation using Cauchy's integral formula (CIF)

I am trying to create a module with a mathematical class for Taylor series, to have it easily accessible for other projects. Hence I wish to optimize it as far as I can.
For those who are not too familiar with Taylor series, it is necessary to be able to differentiate a function at a point many times. Given that the usual limit definition of the derivative requires immense precision for higher-order derivatives, I've decided to use Cauchy's integral formula instead. With a little bit of work, I've managed to rearrange the formula into the form f^(n)(x) = n!/(2*pi) * integral from 0 to 2*pi of f(x + e^(i*theta)) * e^(-i*n*theta) dtheta, where the contour is the unit circle around x. This gave me much more accurate results on higher-order derivatives than the traditional definition of the derivative. Here is the function I am currently using to differentiate a function at a point:
import numpy as np
from math import factorial

def myDerivative(f, x, dTheta, degree):
    riemannSum = 0
    theta = 0
    while theta < 2*np.pi:
        functionArgument = np.complex128(x + np.exp(1j*theta))
        secondFactor = np.complex128(np.exp(-1j * degree * theta))
        riemannSum += f(functionArgument) * secondFactor * dTheta
        theta += dTheta
    return factorial(degree)/(2*np.pi) * riemannSum.real
I've tested this function in my main function with a carefully thought out mathematical function which I know the derivatives of, namely f(x) = sin(x).
f = np.sin  # the test function f(x) = sin(x) mentioned above

def main():
    print(myDerivative(f, 0, 2*np.pi/(4*4096), 16))
These derivatives seem to break down at around degree 16. I've also tried to play around with dTheta, but with no luck. I would like to get higher orders as well, but I fear I've run into some kind of machine-precision limit.
My question, in its simplest form, is: what can I do to improve this function in order to get accurate higher-order derivatives?
I seem to have come up with a solution to the problem. I did this by rearranging Cauchy's integral formula in a different way, exploiting the fact that the contour integral can be taken over an arbitrarily large circle around the point of differentiation. Be aware that it is very important that the function is analytic in the complex plane for this to be valid.
The new formula, for a contour of radius r, is f^(n)(x) = n!/(2*pi*r^n) * integral from 0 to 2*pi of f(x + r*e^(i*theta)) * e^(-i*n*theta) dtheta.
This gives a new function for differentiation:
def myDerivative(f, x, dTheta, degree, contourRadius):
    riemannSum = 0
    theta = 0
    while theta < 2*np.pi:
        functionArgument = np.complex128(x + contourRadius*np.exp(1j*theta))
        secondFactor = (1/contourRadius)**degree * np.complex128(np.exp(-1j * degree * theta))
        riemannSum += f(functionArgument) * secondFactor * dTheta
        theta += dTheta
    return factorial(degree) * riemannSum.real / (2*np.pi)
This gives me a very accurate differentiation of high orders. For instance I am able to differentiate f(x)=e^x 50 times without a problem.
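As a usage example of the function above (the step size and contour radius here are illustrative choices of mine, not from the original post):
# The 50th derivative of e^x at x = 0 is exactly 1. A contour radius
# comparable to the derivative order keeps the integrand well scaled
# and avoids the severe cancellation a unit circle causes for high orders.
print(myDerivative(np.exp, 0, 2*np.pi/4096, 50, contourRadius=50.0))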
Well, since you are working with a discrete approximation of the derivative (via dTheta), sooner or later you must run into trouble. I'm surprised you were able to get at least 15 accurate derivatives -- good work! But to get derivatives of all orders, either you have to put a limit on what you're willing to accept and say it's good enough, or else compute the derivatives symbolically. Take a look at Sympy for that. Sympy probably has some functions for computing Taylor series too.
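For completeness, a small SymPy sketch of the symbolic route (sympy.diff and sympy.series are standard SymPy functions; the choice of sin and the expansion point are just an example):
import sympy as sp
x = sp.symbols('x')
f = sp.sin(x)
# Exact 16th derivative of sin evaluated at 0, with no precision loss.
d16 = sp.diff(f, x, 16).subs(x, 0)
# Taylor series of sin(x) around 0 up to (but excluding) x**8.
taylor = sp.series(f, x, 0, 8)
print(d16, taylor)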

Bezier curve through three points

I have read similar topics trying to find a solution, but with no success.
What I'm trying to do is make a tool like the "Pen Tool" found in CorelDraw. I did it by connecting cubic Bezier curves, but I'm still missing one feature: dragging the curve itself (not a control point) in order to edit its shape.
I can successfully determine the "t" parameter on the curve where dragging should begin, but I don't know how to recalculate the control points of that curve.
Here I want to highlight some things about CorelDraw's Pen Tool behaviour that may be used as constraints. I've noticed that when dragging the curve strictly vertically or horizontally, the control points of that Bezier curve behave accordingly, i.e. they move along their own verticals or horizontals, respectively.
So, how can I recalculate the positions of the control points while dragging the curve?
I've just looked into the Inkscape sources and found this code; maybe it will help you:
// Magic Bezier Drag Equations follow!
// "weight" describes how the influence of the drag should be distributed
// among the handles; 0 = front handle only, 1 = back handle only.
double weight, t = _t;
if (t <= 1.0 / 6.0) weight = 0;
else if (t <= 0.5) weight = (pow((6 * t - 1) / 2.0, 3)) / 2;
else if (t <= 5.0 / 6.0) weight = (1 - pow((6 * (1-t) - 1) / 2.0, 3)) / 2 + 0.5;
else weight = 1;
Geom::Point delta = new_pos - position();
Geom::Point offset0 = ((1-weight)/(3*t*(1-t)*(1-t))) * delta;
Geom::Point offset1 = (weight/(3*t*t*(1-t))) * delta;
first->front()->move(first->front()->position() + offset0);
second->back()->move(second->back()->position() + offset1);
In your case, first->front() and second->back() would be the two control points of the dragged segment.
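A rough Python translation of those equations, for reference (p1 and p2 are the two inner control points as numpy arrays, delta is the drag vector, t is assumed to be strictly between 0 and 1, and the function name is mine):
def drag_cubic(p1, p2, t, delta):
    # "weight" distributes the drag between the two handles:
    # 0 = front handle only, 1 = back handle only.
    if t <= 1.0 / 6.0:
        weight = 0.0
    elif t <= 0.5:
        weight = ((6 * t - 1) / 2.0) ** 3 / 2
    elif t <= 5.0 / 6.0:
        weight = (1 - ((6 * (1 - t) - 1) / 2.0) ** 3) / 2 + 0.5
    else:
        weight = 1.0
    offset1 = ((1 - weight) / (3 * t * (1 - t) ** 2)) * delta
    offset2 = (weight / (3 * t * t * (1 - t))) * delta
    return p1 + offset1, p2 + offset2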
The Bezier curve is nothing more than two polynomials: X(t) and Y(t).
The cubic one:
x = ax*t^3 + bx*t^2 + cx*t + dx
y = ay*t^3 + by*t^2 + cy*t + dy
with 0 <= t <= 1
So if you have a curve, you have the polynomial coefficients. If you move your point and you know its t parameter, then you can recalculate the polynomial coefficients: it is a small system of linear equations, which splits into two independent systems (one for x and one for y) and can be solved exactly or with simple numerical methods.
So your task now is to calculate the control points of your curve when you know its explicit polynomial equation.
This can also be reduced to a linear system. I don't know how to do it for a general Bezier curve, but it is not hard for cubic or quadratic curves.
The cubic curve via control points:
B(t) = (1-t)^3*P0 + 3(1-t)^2*t*P1 + 3(1-t)*t^2*P2 + t^3*P3
All you have to do is expand this into the standard polynomial form (just multiply out the brackets) and equate coefficients. That provides the final system for the control points!
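For the cubic case, expanding the Bernstein form above and equating coefficients gives a simple pair of conversions; here is a small sketch of both directions (the function names are mine, and each coefficient can be a number or a numpy array holding the x and y parts):
def bezier_to_power(P0, P1, P2, P3):
    # Expand B(t) = (1-t)^3*P0 + 3(1-t)^2*t*P1 + 3(1-t)*t^2*P2 + t^3*P3
    # into a*t^3 + b*t^2 + c*t + d.
    a = P3 - 3*P2 + 3*P1 - P0
    b = 3*P2 - 6*P1 + 3*P0
    c = 3*P1 - 3*P0
    d = P0
    return a, b, c, d
def power_to_bezier(a, b, c, d):
    # Invert the relations above to recover the control points.
    P0 = d
    P1 = d + c/3
    P2 = d + 2*c/3 + b/3
    P3 = d + c + b + a
    return P0, P1, P2, P3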
When you click on the curve, you already know the position of the current control point, so you can calculate the X and Y offsets from that point to the mouse position. On each mouse move you can then recalculate the new control point using those X/Y offsets.
Sorry for my english

Ball to Ball Collision - Detection and Handling

With the help of the Stack Overflow community I've written a pretty basic, but fun, physics simulator.
You click and drag the mouse to launch a ball. It will bounce around and eventually stop on the "floor".
The next big feature I want to add is ball-to-ball collision. Each ball's movement is broken up into an x and y speed vector. I have gravity (a small reduction of the y vector each step) and friction (a small reduction of both vectors on each collision with a wall). The balls honestly move around in a surprisingly realistic way.
I guess my question has two parts:
What is the best method to detect ball to ball collision?
Do I just have an O(n^2) loop that iterates over each ball and checks every other ball to see whether their radii overlap?
What equations do I use to handle the ball to ball collisions? Physics 101
How does it affect the two balls' x/y speed vectors? What is the resulting direction the two balls head off in? How do I apply this to each ball?
Handling the collision detection of the "walls" and the resulting vector changes were easy but I see more complications with ball-ball collisions. With walls I simply had to take the negative of the appropriate x or y vector and off it would go in the correct direction. With balls I don't think it is that way.
Some quick clarifications: for simplicity I'm ok with a perfectly elastic collision for now, also all my balls have the same mass right now, but I might change that in the future.
Edit: Resources I have found useful
2d Ball physics with vectors: 2-Dimensional Collisions Without Trigonometry.pdf
2d Ball collision detection example: Adding Collision Detection
Success!
I have the ball collision detection and response working great!
Relevant code:
Collision Detection:
for (int i = 0; i < ballCount; i++)
{
    for (int j = i + 1; j < ballCount; j++)
    {
        if (balls[i].colliding(balls[j]))
        {
            balls[i].resolveCollision(balls[j]);
        }
    }
}
This will check for collisions between every ball but skip redundant checks (if you have to check if ball 1 collides with ball 2 then you don't need to check if ball 2 collides with ball 1. Also, it skips checking for collisions with itself).
Then, in my ball class I have my colliding() and resolveCollision() methods:
public boolean colliding(Ball ball)
{
    float xd = position.getX() - ball.position.getX();
    float yd = position.getY() - ball.position.getY();
    float sumRadius = getRadius() + ball.getRadius();
    float sqrRadius = sumRadius * sumRadius;
    float distSqr = (xd * xd) + (yd * yd);
    if (distSqr <= sqrRadius)
    {
        return true;
    }
    return false;
}
public void resolveCollision(Ball ball)
{
    // get the mtd
    Vector2d delta = (position.subtract(ball.position));
    float d = delta.getLength();
    // minimum translation distance to push balls apart after intersecting
    Vector2d mtd = delta.multiply(((getRadius() + ball.getRadius()) - d) / d);
    // resolve intersection --
    // inverse mass quantities
    float im1 = 1 / getMass();
    float im2 = 1 / ball.getMass();
    // push-pull them apart based off their mass
    position = position.add(mtd.multiply(im1 / (im1 + im2)));
    ball.position = ball.position.subtract(mtd.multiply(im2 / (im1 + im2)));
    // impact speed
    Vector2d v = (this.velocity.subtract(ball.velocity));
    float vn = v.dot(mtd.normalize());
    // sphere intersecting but moving away from each other already
    if (vn > 0.0f) return;
    // collision impulse
    float i = (-(1.0f + Constants.restitution) * vn) / (im1 + im2);
    Vector2d impulse = mtd.normalize().multiply(i);
    // change in momentum
    this.velocity = this.velocity.add(impulse.multiply(im1));
    ball.velocity = ball.velocity.subtract(impulse.multiply(im2));
}
Source Code: Complete source for ball to ball collider.
If anyone has some suggestions for how to improve this basic physics simulator let me know! One thing I have yet to add is angular momentum so the balls will roll more realistically. Any other suggestions? Leave a comment!
To detect whether two balls collide, just check whether the distance between their centers is less than the sum of their radii. To do a perfectly elastic collision between the balls, you only need to worry about the component of the velocity that is in the direction of the collision. The other component (tangent to the collision) will stay the same for both balls. You can get the collision components by creating a unit vector pointing in the direction from one ball to the other, then taking the dot product with the velocity vectors of the balls. You can then plug these components into a 1D perfectly elastic collision equation.
Wikipedia has a pretty good summary of the whole process. For balls of any mass, the new velocities along the collision direction can be calculated using the 1D elastic collision equations (where v1 and v2 are the velocities after the collision, and u1, u2 are from before):
v1 = ((m1 - m2)*u1 + 2*m2*u2) / (m1 + m2)
v2 = ((m2 - m1)*u2 + 2*m1*u1) / (m1 + m2)
If the balls have the same mass then the velocities are simply switched. Here's some code I wrote which does something similar:
void Simulation::collide(Storage::Iterator a, Storage::Iterator b)
{
    // Check whether there actually was a collision
    if (a == b)
        return;
    Vector collision = a.position() - b.position();
    double distance = collision.length();
    if (distance == 0.0) { // hack to avoid div by zero
        collision = Vector(1.0, 0.0);
        distance = 1.0;
    }
    if (distance > 1.0)
        return;
    // Get the components of the velocity vectors which are parallel to the collision.
    // The perpendicular component remains the same for both fish
    collision = collision / distance;
    double aci = a.velocity().dot(collision);
    double bci = b.velocity().dot(collision);
    // Solve for the new velocities using the 1-dimensional elastic collision equations.
    // Turns out it's really simple when the masses are the same.
    double acf = bci;
    double bcf = aci;
    // Replace the collision velocity components with the new ones
    a.velocity() += (acf - aci) * collision;
    b.velocity() += (bcf - bci) * collision;
}
As for efficiency, Ryan Fox is right, you should consider dividing up the region into sections, then doing collision detection within each section. Keep in mind that balls can collide with other balls on the boundaries of a section, so this may make your code much more complicated. Efficiency probably won't matter until you have several hundred balls though. For bonus points, you can run each section on a different core, or split up the processing of collisions within each section.
Well, years ago I made a program like the one you present here.
There is one hidden problem (or many, depending on your point of view): if the speed of a ball is too high, you can miss the collision entirely.
Also, in almost 100% of cases your new speeds will be wrong. Well, not the speeds, but the positions. You have to calculate the new speeds precisely at the correct place; otherwise you just shift the balls by some small "error" amount carried over from the previous discrete step.
The solution is obvious: you have to split the timestep so that you first move the balls to the exact point of contact, then resolve the collision, then move them for the rest of the timestep.
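As an illustration of that sub-stepping idea, here is a small Python helper that finds the earliest time within a step at which two moving circles first touch (it assumes the circles are separated at the start of the step; the name and signature are mine):
import math
def time_of_impact(p1, v1, r1, p2, v2, r2, dt):
    # Solve |(p1 - p2) + (v1 - v2)*t| = r1 + r2 for the earliest t in [0, dt].
    dpx, dpy = p1[0] - p2[0], p1[1] - p2[1]
    dvx, dvy = v1[0] - v2[0], v1[1] - v2[1]
    R = r1 + r2
    a = dvx*dvx + dvy*dvy
    b = 2 * (dpx*dvx + dpy*dvy)
    c = dpx*dpx + dpy*dpy - R*R
    disc = b*b - 4*a*c
    if a == 0 or disc < 0:
        return None  # no relative motion, or the paths never touch
    t = (-b - math.sqrt(disc)) / (2*a)
    return t if 0 <= t <= dt else None
Advance both balls by that t, resolve the collision there, then advance them for the remaining dt - t.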
You should use space partitioning to solve this problem.
Read up on Binary Space Partitioning and Quadtrees.
As a clarification to the suggestion by Ryan Fox to split the screen into regions, and only checking for collisions within regions...
e.g. split the play area up into a grid of squares (which we will arbitrarily say are 1 unit long per side), and check for collisions within each grid square.
That's absolutely the correct solution. The only problem with it (as another poster pointed out) is that collisions across boundaries are a problem.
The solution to this is to overlay a second grid at a 0.5 unit vertical and horizontal offset to the first one.
Then, any collisions that would be across boundaries in the first grid (and hence not detected) will be within grid squares in the second grid. As long as you keep track of the collisions you've already handled (as there is likely to be some overlap) you don't have to worry about handling edge cases. All collisions will be within a grid square on one of the grids.
A good way of reducing the number of collision checks is to split the screen into different sections. You then only compare each ball to the balls in the same section.
One thing I see here to optimize: while I do agree that the balls hit when the distance is the sum of their radii, one should never actually calculate this distance! Rather, calculate its square and work with that. There's no reason for the expensive square root operation.
Also, once you have found a collision you have to continue to evaluate collisions until no more remain. The problem is that the first one might cause others that have to be resolved before you get an accurate picture. Consider what happens if a ball hits another ball at the edge: the second ball hits the edge and immediately rebounds into the first ball. If you bang into a pile of balls in the corner you could have quite a few collisions that have to be resolved before you can iterate the next cycle.
As for the O(n^2), all you can do is minimize the cost of rejecting ones that miss:
1) A ball that is not moving can't hit anything. If there are a reasonable number of balls lying around on the floor this could save a lot of tests. (Note that you must still check if something hit the stationary ball.)
2) Something that might be worth doing: divide the screen into a number of zones, but make the boundaries fuzzy: balls at the edge of a zone are listed as being in all the relevant zones (could be 4). I would use a 4x4 grid and store the zones as bits. If an AND of the zone bits of two balls returns zero, end of test. (A small sketch of this bitmask test follows after this list.)
3) As I mentioned, don't do the square root.
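Here is a minimal Python sketch of that zone-bitmask rejection test, assuming a 4x4 grid over a play area of known width and height (the function names are made up for the example):
def zone_mask(x, y, radius, width, height):
    # Set one bit per 4x4 grid cell that the ball's bounding box touches;
    # a ball near a zone edge therefore sets several bits.
    mask = 0
    cell_w, cell_h = width / 4, height / 4
    for cy in range(4):
        for cx in range(4):
            x0, y0 = cx * cell_w, cy * cell_h
            x1, y1 = x0 + cell_w, y0 + cell_h
            if (x + radius >= x0 and x - radius <= x1 and
                    y + radius >= y0 and y - radius <= y1):
                mask |= 1 << (cy * 4 + cx)
    return mask
def might_collide(mask_a, mask_b):
    # Two balls can only collide if their zone masks share at least one bit.
    return (mask_a & mask_b) != 0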
I found an excellent page with information on collision detection and response in 2D.
http://www.metanetsoftware.com/technique.html (web.archive.org)
They try to explain how it's done from an academic point of view. They start with the simple object-to-object collision detection, and move on to collision response and how to scale it up.
Edit: Updated link
You have two easy ways to do this. Jay has covered the accurate way of checking from the center of the ball.
The easier way is to use a rectangle bounding box, set the size of your box to be 80% the size of the ball, and you'll simulate collision pretty well.
Add a method to your ball class:
public Rectangle getBoundingRect()
{
    int ballHeight = (int)(Ball.Height * 0.80f);
    int ballWidth = (int)(Ball.Width * 0.80f);
    int x = Ball.X - ballWidth / 2;
    int y = Ball.Y - ballHeight / 2;
    return new Rectangle(x, y, ballWidth, ballHeight);
}
Then, in your loop:
// Checks every ball against every other ball.
// For best results, split it into quadrants like Ryan suggested.
// I didn't do that for simplicity here.
for (int i = 0; i < balls.count; i++)
{
    Rectangle r1 = balls[i].getBoundingRect();
    for (int k = 0; k < balls.count; k++)
    {
        if (balls[i] != balls[k])
        {
            Rectangle r2 = balls[k].getBoundingRect();
            if (r1.Intersects(r2))
            {
                // balls[i] collided with balls[k]
            }
        }
    }
}
I see it hinted here and there, but you could also do a faster calculation first, like, compare the bounding boxes for overlap, and THEN do a radius-based overlap if that first test passes.
The addition/difference math is much faster for a bounding box than all the trig for the radius, and most times, the bounding box test will dismiss the possibility of a collision. But if you then re-test with trig, you're getting the accurate results that you're seeking.
Yes, it's two tests, but it will be faster overall.
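In Python-style code, that two-phase test might look like this (a sketch only; the attribute names are assumptions):
def balls_collide(a, b):
    r = a.radius + b.radius
    # Cheap bounding-box rejection first: only subtractions and comparisons.
    if abs(a.x - b.x) > r or abs(a.y - b.y) > r:
        return False
    # Precise circle test, using the squared distance to avoid a square root.
    dx, dy = a.x - b.x, a.y - b.y
    return dx * dx + dy * dy <= r * r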
This KineticModel is an implementation of the cited approach in Java.
I implemented this code in JavaScript using the HTML Canvas element, and it produced wonderful simulations at 60 frames per second. I started the simulation off with a collection of a dozen balls at random positions and velocities. I found that at higher velocities, a glancing collision between a small ball and a much larger one caused the small ball to appear to STICK to the edge of the larger ball, and to travel up to around 90 degrees around the larger ball before separating. (I wonder if anyone else has observed this behavior.)
Some logging of the calculations showed that the Minimum Translation Distance in these cases was not large enough to prevent the same balls from colliding in the very next time step. I did some experimenting and found that I could solve this problem by scaling up the MTD based on the relative velocities:
dot_velocity = ball_1.velocity.dot(ball_2.velocity);
mtd_factor = 1. + 0.5 * Math.abs(dot_velocity * Math.sin(collision_angle));
mtd.multiplyScalar(mtd_factor);
I verified that before and after this fix, the total kinetic energy was conserved for every collision. The 0.5 value in mtd_factor was approximately the minimum value found to always cause the balls to separate after a collision.
Although this fix introduces a small amount of error in the exact physics of the system, the tradeoff is that now very fast balls can be simulated in a browser without decreasing the time step size.
Improving on the circle-with-circle collision detection given within the question:
float dx = circle1.x - circle2.x,
      dy = circle1.y - circle2.y,
      r = circle1.r + circle2.r;
return (dx * dx + dy * dy <= r * r);
It avoids the unnecessary "if with two returns" and the use of more variables than necessary.
After some trial and error, I used this document's method for 2D collisions: https://www.vobarian.com/collisions/2dcollisions2.pdf
(which the OP linked to)
I applied this within a JavaScript program using p5js, and it works perfectly. I had previously attempted to use trigonometric equations, and while they do work for specific collisions, I could not find one that worked for every collision no matter the angle at which it happened.
The method explained in this document uses no trigonometric functions whatsoever; it's just plain vector operations. I recommend it to anyone trying to implement ball-to-ball collision, as trigonometric functions in my experience are hard to generalize. I asked a physicist at my university to show me how to do it, and he told me not to bother with trigonometric functions and showed me a method analogous to the one linked in the document.
NB: My masses are all equal, but this can be generalised to different masses using the equations presented in the document.
Here's my code for calculating the resulting speed vectors after collision :
//you just need a ball object with a speed and position vector.
class TBall {
    constructor(x, y, vx, vy) {
        this.r = [x, y];
        this.v = [vx, vy];
    }
}
//throw two balls into this function and it'll update their speed vectors
//if they collide, you need to call this in your main loop for every pair of
//balls.
function collision(ball1, ball2) {
    const n = [ ball1.r[0] - ball2.r[0], ball1.r[1] - ball2.r[1] ];
    const un = [ n[0] / vecNorm(n), n[1] / vecNorm(n) ];
    const ut = [ -un[1], un[0] ];
    const v1n = dotProd(un, ball1.v);
    const v1t = dotProd(ut, ball1.v);
    const v2n = dotProd(un, ball2.v);
    const v2t = dotProd(ut, ball2.v);
    // tangential components are unchanged; normal components are swapped
    // (equal masses, perfectly elastic)
    const v1t_p = v1t, v2t_p = v2t;
    const v1n_p = v2n, v2n_p = v1n;
    const v1n_pvec = [ v1n_p * un[0], v1n_p * un[1] ];
    const v1t_pvec = [ v1t_p * ut[0], v1t_p * ut[1] ];
    const v2n_pvec = [ v2n_p * un[0], v2n_p * un[1] ];
    const v2t_pvec = [ v2t_p * ut[0], v2t_p * ut[1] ];
    ball1.v = vecSum(v1n_pvec, v1t_pvec);
    ball2.v = vecSum(v2n_pvec, v2t_pvec);
}
I would consider using a quadtree if you have a large number of balls. For deciding the direction of bounce, just use simple conservation of energy formulas based on the collision normal. Elasticity, weight, and velocity would make it a bit more realistic.
Here is a simple example that supports mass.
private void CollideBalls(Transform ball1, Transform ball2, ref Vector3 vel1, ref Vector3 vel2, float radius1, float radius2)
{
    var vec = ball1.position - ball2.position;
    float dis = vec.magnitude;
    if (dis < radius1 + radius2)
    {
        var n = vec.normalized;
        ReflectVelocity(ref vel1, ref vel2, ballMass1, ballMass2, n);
        var c = Vector3.Lerp(ball1.position, ball2.position, radius1 / (radius1 + radius2));
        ball1.position = c + (n * radius1);
        ball2.position = c - (n * radius2);
    }
}
public static void ReflectVelocity(ref Vector3 vel1, ref Vector3 vel2, float mass1, float mass2, Vector3 intersectionNormal)
{
    float velImpact1 = Vector3.Dot(vel1, intersectionNormal);
    float velImpact2 = Vector3.Dot(vel2, intersectionNormal);
    float totalMass = mass1 + mass2;
    float massTransfer1 = mass1 / totalMass;
    float massTransfer2 = mass2 / totalMass;
    vel1 += ((velImpact2 * massTransfer2) - (velImpact1 * massTransfer2)) * intersectionNormal;
    vel2 += ((velImpact1 * massTransfer1) - (velImpact2 * massTransfer1)) * intersectionNormal;
}
