How to assign a float to an int64_t - game-center

To upload a score to Game Center, the API requires a value of type int64_t.
Is there a way to simply convert my float to an int64_t?
I have built my whole game around a float score and I need an easy solution. Any ideas?

I'll take a stab at this, borrowing someone else's answer (from http://www.cocos2d-iphone.org/forum/topic/14839 ):
Every score has to be submitted as an int64_t, so you need to convert your float to match the settings of your leaderboard. With a leaderboard fixed to 3 decimal places, you multiply the float by 1000 to bring the third decimal place into the integer part, then submit:
int64_t gcScore = (int64_t)(score * 1000.0f);
gkscore.value = gcScore;
With rounding coming into play, it is important to make sure that what gets submitted is also what has been shown to the player. We had some problems with the Game Center value and the in-game display being 1 out at times, and just had to go through every display and conversion of the score values to make sure they were handled consistently.
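A minimal Java-flavoured sketch of that idea (the variable names here are made up; on the Game Center side the long value is what would end up in the int64_t): round the scaled score once, then derive both the submitted value and the on-screen value from that single number.
double score = 12.3456;                              // the in-game score (hypothetical value)
long submittedScore = Math.round(score * 1000.0);    // 12346, what would go into gkscore.value
double displayedScore = submittedScore / 1000.0;     // 12.346, derive the display from the same number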
All the changes to the leaderboard settings seem to take a while, and submitted scores can often take a while to show up too. The Game Center sandbox is pretty awful, to be honest. Once you go live it is better at responding to new scores, but you can't make any changes to the leaderboard format, so you have to persevere in the sandbox to get it right.

If your float values fit within the int64_t range (below about 9,223,372,036,854,775,807), the conversion is simply
int64_t myInt = (int64_t) myFloat;
If you want to keep some decimals, you can scale the result (multiply the float by 10 or 100):
int64_t myInt_scaled = (int64_t) (myFloat * 100.0);

You will need to cast the float to an int64_t and guard against overflow yourself; a C cast does not throw anything, it simply truncates the number, i.e. 5.655 becomes 5.
if (floatScore >= -9223372036854775808.0f && floatScore < 9223372036854775808.0f) {  // within the int64_t range (±2^63)
    score = (int64_t)floatScore;
} else {
    // out of range: report an error
}

Related

How to make sure strings do not overlap each other in Java Processing?

I'm having a problem: I need to make the words I took from an external file NOT overlap each other. I have over 50 words with random text sizes and positions when you run it, but they overlap.
How can I make them NOT overlap each other? The result should look something like a word cloud.
In case my code helps, here it is:
String[] words;
int index = 0;

void setup()
{
  size(500, 500);
  background(255);
  String[] lines = loadStrings("alice_just_text.txt");
  String entireplay = join(lines, " ");                 // joins the lines into one string
  words = splitTokens(entireplay, ",.?!:-;:()03 ");     // splits it by word
  for (int i = 0; i < 50; i++) {
    float x = random(width);
    float y = random(height);
    int index = int(random(words.length));              // note: shadows the field declared above
    textSize(random(60));                               // random font size
    fill(0);
    textAlign(CENTER);
    text(words[index], x, y, width/2, height/2);
    println(words[index]);
    index++;
  }
}
Stack Overflow isn't really designed for general "how do I do this" type questions. You'll have much better luck if you post a more specific "I tried X, expected Y, but got Z instead" type question. But I'll try to help in a general sense:
You need to break your problem down into smaller pieces and then take on those pieces one at a time.
For example, you can isolate your problem to making sure rectangles don't overlap, which you can break down even further. There are a number of ways to do that:
You could use a grid to lay out your rectangles. Figure out how many squares a line of text takes up, then find a place in your grid where that word will fit. You could use something like a 2D array of boolean values, for example.
Or you could generate a random location, and then check whether there's already a rectangle there. If so, pick a new random location until you find a clear spot.
In any case, you'll probably need to use collision detection (either point-rectangle or rectangle-rectangle) to determine whether your rectangles are overlapping.
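As a rough illustration of the second approach (pick a random spot, reject it if the word's bounding rectangle overlaps one already placed), here is a hedged Processing-style sketch. It assumes the words[] array is filled as in the question; the sizes and the retry limit are arbitrary choices.
ArrayList<float[]> placed = new ArrayList<float[]>();   // each entry: {x, y, w, h}

boolean overlaps(float x, float y, float w, float h) {
  for (float[] r : placed) {
    // standard rectangle-rectangle overlap test
    if (x < r[0] + r[2] && x + w > r[0] && y < r[1] + r[3] && y + h > r[1]) {
      return true;
    }
  }
  return false;
}

void placeWord(String word) {
  textSize(random(20, 60));                             // random font size
  float w = textWidth(word);
  float h = textAscent() + textDescent();
  for (int attempt = 0; attempt < 200; attempt++) {
    float x = random(width - w);
    float y = random(height - h);
    if (!overlaps(x, y, w, h)) {
      placed.add(new float[] {x, y, w, h});
      fill(0);
      text(word, x, y + textAscent());                  // text() draws from the baseline
      return;
    }
  }
  // give up on this word if no free spot was found after 200 tries
}
Calling placeWord(words[int(random(words.length))]) fifty times from setup() would replace the drawing loop in the question.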
Start small. Create a small example program that just shows two rectangles on the screen. Hardcode their positions at first, but make it so they turn red if they're colliding. Work your way up from there. Make it so you can add rectangles using the mouse, but only let the user add them if there is no overlap. Then add the random location choosing. If you get stuck on a specific step, then post a MCVE and we'll go from there. Good luck.

How does Audacity mix audio samples?

So let's say I want to mix these 2 audio tracks:
In Audacity, I can use the "Mix and Render" option to mix them together, and I'll get this:
However, when I try to write my own code to mix, I get this:
This is essentially how I mix the samples:
private function mixSamples(sample1:UInt, sample2:UInt):UInt
{
    return (sample1 + sample2) & 0xFF;
}
(The syntax is Haxe but it should be easy to follow if you don't know it.)
These are 8-bit sample audio files, and I want the product to be 8-bit as well, hence the & 0xFF.
I do understand that by simply adding the samples, I should expect clipping. My issue is that mixing in Audacity doesn't cause clipping (at least not to the extent that my code does), and by looking at the "tail" of the second (longer) track, it doesn't seem to reduce the amplitude. It doesn't sound any softer either.
So basically, my question is this: what's Audacity doing that I'm not? I want to mix tracks to sound exactly as if they're being played on top of one another, but I (obviously) don't want this horrendous clipping.
EDIT:
Here is what I get if I sign the values before I add, then unsign the sum value, as suggested by Radiodef:
As you can see it's much better than before, but is still quite distorted and noisy compared to the result Audacity produces. So my problem still stands, Audacity must be doing something differently.
EDIT2:
I mixed the first track on itself, both with my code and Audacity, and compared the points where distortion occurs. This is Audacity's result:
And this is my result:
I think what is happening is you are summing them as unsigned. A typical sound wave is both positive and negative which is why they add together the way they do (some parts cancel). If you have some 8-bit sample that is -96 and another that is 96 and you sum them you will get 0. If what you have is unsigned audio you will instead have the samples 32 and 224 summed = 256 (offset and overflow).
What you need to do is sign them before summing. To sign 8-bit samples convert them to a signed int type and subtract 128 from all of them. I assume what you have are WAV files and you will need to unsign them again after the sum.
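A hedged Java sketch of that suggestion (not Audacity's code), assuming each sample arrives as an unsigned 8-bit value (0..255) as in 8-bit WAV data:
// Mix two unsigned 8-bit samples by signing, summing, clipping, and unsigning.
int mixSamples(int sample1, int sample2) {
    int s1 = sample1 - 128;            // unsigned 0..255 -> signed -128..127
    int s2 = sample2 - 128;
    int sum = s1 + s2;                 // -256..254, may exceed the 8-bit range
    if (sum > 127)  sum = 127;         // clip instead of wrapping with & 0xFF
    if (sum < -128) sum = -128;
    return sum + 128;                  // back to unsigned 0..255 for the WAV data
}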
Audacity probably does floating point processing. I've heard some real dubious claims about floating point like that it has "infinite dynamic range" and garbage like that but it doesn't clip in the same determinate and obvious way as integers do. Floating point has a finite range of values same as integers but the largest and smallest values are much farther apart. (That's about the simplest way to put it.) Floating point can allow much greater amplitude changes in the audio but the catch is the overall signal to noise ratio is lower than integers.
With the weird distortion my best guess is it is from the mask you are doing with & 0xFF. If you want to actually clip instead of getting overflow you will need to do so yourself.
for (int i = 0; i < samplesLength; i++) {
    if (samples[i] > 127) {
        samples[i] = 127;
    } else if (samples[i] < -128) {
        samples[i] = -128;
    }
}
Otherwise say you have two samples that are 125, summing gets you 250 (11111010). Then you unsign (add 128) and get 378 (101111010). An & will get you 1111010 which is 122. Other numbers might get you results that are effectively negative or close to 0.
If you want to clip at something other than 8-bit, full scale for a bit depth n will be positive (2 ^ (n - 1)) - 1 and negative 2 ^ (n - 1) so for example 32767 and -32768 for 16-bit.
Another thing you can do instead of clipping is to search for clipping and normalize. Something like:
double[] normalize(double[] samples, int length, int destBits) {
    double fsNeg = -pow(2, destBits - 1);
    double fsPos = -fsNeg - 1;
    double peak = 0;
    double norm = 1;
    for (int i = 0; i < length; i++) {
        // find the highest clip if there is one
        if (samples[i] < fsNeg || samples[i] > fsPos) {
            norm = abs(samples[i]);
            if (norm > peak) {
                peak = norm;    // remember the largest excursion found so far
            }
        }
    }
    if (peak != 0) {
        // ratio that reduces the signal to where there is no clip
        norm = -fsNeg / peak;
        for (int i = 0; i < length; i++) {
            samples[i] *= norm;
        }
    }
    return samples;
}
It's a lot simpler than you think; although your original files are 8-bit, Audacity handles them internally as 32-bit floating point. You can see this in the screenshot, in the information panel to the left of each track. This means that adding 2 tracks together means adding two floating point samples at each point, and will simply yield sample values from -2.0 to +2.0, which are then clamped to the -1 to +1 range. By comparison, adding two 8-bit integers together will yield another 8-bit number where the value overflows and wraps around. (This can apply whether you use signed or unsigned values.)
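So one way to mimic that behaviour (a hedged sketch of the idea, not Audacity's actual code) is to convert each unsigned 8-bit sample to a float in roughly [-1, 1], sum in the float domain, and clamp once at the end:
int mixAsFloat(int sample1, int sample2) {
    float f1 = (sample1 - 128) / 128.0f;       // unsigned 0..255 -> about -1.0..+1.0
    float f2 = (sample2 - 128) / 128.0f;
    float mixed = f1 + f2;                     // -2.0..+2.0 in the worst case
    if (mixed > 1.0f)  mixed = 1.0f;           // clamp, don't wrap
    if (mixed < -1.0f) mixed = -1.0f;
    return Math.round(mixed * 127.0f) + 128;   // back to an unsigned 8-bit value
}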

Is it possible to do an algebraic curve fit with just a single pass of the sample data?

I would like to do an algebraic curve fit of 2D data points, but for various reasons it isn't really possible to keep much of the sample data in memory at once, and iterating through all of it is an expensive process.
(The reason for this is that actually I need to fit thousands of curves simultaneously based on gigabytes of data which I'm reading off disk, and which is therefore sloooooow).
Note that the number of polynomial coefficients will be limited (perhaps 5-10), so an exact fit will be extremely unlikely, but this is ok as I'm trying to find an underlying pattern in data with a lot of random noise.
I understand how one can use a genetic algorithm to fit a curve to a dataset, but this requires many passes through the sample data, and thus isn't practical for my application.
Is there a way to fit a curve with a single pass of the data, where the state that must be maintained from sample to sample is minimal?
I should add that the nature of the data is that the points may lie anywhere on the X axis between 0.0 and 1.0, but the Y values will always be either 1.0 or 0.0.
So, in Java, I'm looking for a class with the following interface:
public interface CurveFit {
public void addData(double x, double y);
public List<Double> getBestFit(); // Returns the polynomial coefficients
}
The class that implements this must not need to keep much data in its instance fields, no more than a kilobyte even for millions of data points. This means that you can't just store the data as you get it to do multiple passes through it later.
Edit: some have suggested that finding an optimal curve in a single pass may be impossible; however, an optimal fit is not required, just as close as we can get in a single pass.
The bare bones of an approach might be: start with some curve, then have a way to modify it to bring it slightly closer to each new data point as it comes in, effectively a form of gradient descent. The hope is that with sufficient data (and the data will be plentiful) we end up with a pretty good curve. Perhaps this inspires someone to a solution.
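For what it's worth, here is a hedged sketch of that gradient-descent idea against the interface above; it keeps only the coefficient array (well under the kilobyte budget) and nudges it towards each point as it arrives. The degree and learning rate are arbitrary choices, and convergence is not guaranteed.
import java.util.ArrayList;
import java.util.List;

// Online (single-pass) least squares via stochastic gradient descent.
public class SgdCurveFit implements CurveFit {
    private final double[] coeffs;       // coeffs[i] multiplies x^i
    private final double learningRate;

    public SgdCurveFit(int degree, double learningRate) {
        this.coeffs = new double[degree + 1];
        this.learningRate = learningRate;
    }

    public void addData(double x, double y) {
        // prediction and error for this single point
        double prediction = 0.0, xPow = 1.0;
        for (int i = 0; i < coeffs.length; i++) {
            prediction += coeffs[i] * xPow;
            xPow *= x;
        }
        double error = prediction - y;
        // gradient step for squared error: d/dc_i is proportional to error * x^i
        xPow = 1.0;
        for (int i = 0; i < coeffs.length; i++) {
            coeffs[i] -= learningRate * error * xPow;
            xPow *= x;
        }
    }

    public List<Double> getBestFit() {
        List<Double> result = new ArrayList<Double>();
        for (double c : coeffs) result.add(c);
        return result;
    }
}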
Yes, it is a projection. For
y = X beta + error,
where the lower-case terms are vectors and X is a matrix, you have the solution vector
beta_hat = (X'X)^(-1) X' y
as per the OLS page. You almost never want to compute this directly but rather use LR, QR or SVD decompositions. References are plentiful in the statistics literature.
If your problem has only one parameter (and x is hence a vector as well), then this reduces to just a summation of cross-products between y and x.
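Concretely, for a polynomial fit the entries of X'X and X'y are just sums of powers of x (and of y times powers of x), so they can be accumulated in a single pass and the tiny (degree+1)-square system solved at the end. A hedged Java sketch of that, subject to the stability caveat above (the normal equations can be ill-conditioned for higher degrees):
// One-pass polynomial least squares via the normal equations.
// State: O(degree^2) doubles, independent of the number of data points.
public class NormalEquationFit {
    private final int n;              // number of coefficients = degree + 1
    private final double[] sumXPow;   // sums of x^0 .. x^(2*degree)
    private final double[] sumXY;     // sums of y * x^0 .. y * x^degree

    public NormalEquationFit(int degree) {
        n = degree + 1;
        sumXPow = new double[2 * degree + 1];
        sumXY = new double[n];
    }

    public void addData(double x, double y) {
        double xPow = 1.0;
        for (int i = 0; i < sumXPow.length; i++) {
            sumXPow[i] += xPow;
            if (i < n) sumXY[i] += y * xPow;
            xPow *= x;
        }
    }

    // Solve (X'X) beta = X'y by Gaussian elimination with partial pivoting.
    public double[] getBestFit() {
        double[][] a = new double[n][n + 1];
        for (int r = 0; r < n; r++) {
            for (int c = 0; c < n; c++) a[r][c] = sumXPow[r + c];
            a[r][n] = sumXY[r];
        }
        for (int col = 0; col < n; col++) {
            int pivot = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(a[r][col]) > Math.abs(a[pivot][col])) pivot = r;
            double[] tmp = a[col]; a[col] = a[pivot]; a[pivot] = tmp;
            for (int r = col + 1; r < n; r++) {
                double f = a[r][col] / a[col][col];
                for (int c = col; c <= n; c++) a[r][c] -= f * a[col][c];
            }
        }
        double[] beta = new double[n];
        for (int r = n - 1; r >= 0; r--) {
            double s = a[r][n];
            for (int c = r + 1; c < n; c++) s -= a[r][c] * beta[c];
            beta[r] = s / a[r][r];
        }
        return beta;     // beta[i] multiplies x^i
    }
}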
If you don't mind that you'll get a straight line "curve", then you only need six variables for any amount of data. Here's the source code that's going into my upcoming book; I'm sure that you can figure out how the DataPoint class works:
Interpolation.h:
#ifndef __INTERPOLATION_H
#define __INTERPOLATION_H

#include "DataPoint.h"

class Interpolation
{
private:
    int m_count;
    double m_sumX;
    double m_sumXX;   /* sum of X*X */
    double m_sumXY;   /* sum of X*Y */
    double m_sumY;
    double m_sumYY;   /* sum of Y*Y */

public:
    Interpolation();
    void addData(const DataPoint& dp);
    double slope() const;
    double intercept() const;
    double interpolate(double x) const;
    double correlate() const;
};

#endif // __INTERPOLATION_H
Interpolation.cpp:
#include <cmath>
#include "Interpolation.h"

Interpolation::Interpolation()
{
    m_count = 0;
    m_sumX = 0.0;
    m_sumXX = 0.0;
    m_sumXY = 0.0;
    m_sumY = 0.0;
    m_sumYY = 0.0;
}

void Interpolation::addData(const DataPoint& dp)
{
    m_count++;
    m_sumX += dp.getX();
    m_sumXX += dp.getX() * dp.getX();
    m_sumXY += dp.getX() * dp.getY();
    m_sumY += dp.getY();
    m_sumYY += dp.getY() * dp.getY();
}

double Interpolation::slope() const
{
    return (m_sumXY - (m_sumX * m_sumY / m_count)) /
           (m_sumXX - (m_sumX * m_sumX / m_count));
}

double Interpolation::intercept() const
{
    return (m_sumY / m_count) - slope() * (m_sumX / m_count);
}

double Interpolation::interpolate(double X) const
{
    return intercept() + slope() * X;
}

double Interpolation::correlate() const
{
    // Pearson correlation coefficient from the running sums (mean-centred)
    return (m_count * m_sumXY - m_sumX * m_sumY) /
           sqrt((m_count * m_sumXX - m_sumX * m_sumX) *
                (m_count * m_sumYY - m_sumY * m_sumY));
}
Why not use a ring buffer of some fixed size (say, the last 1000 points) and do a standard QR decomposition-based least squares fit to the buffered data? Once the buffer fills, each time you get a new point you replace the oldest and re-fit. That way you have a bounded working set that still has some data locality, without all the challenges of live stream (memoryless) processing.
Are you limiting the number of polynomial coefficients (i.e. fitting to a max power of x in your polynomial)?
If not, then you don't need a "best fit" algorithm - you can always fit N data points EXACTLY to a polynomial of N coefficients.
Just use matrices to solve N simultaneous equations for N unknowns (the N coefficients of the polynomial).
If you are limiting to a max number of coefficients, what is your max?
Following your comments and edit:
What you want is a low-pass filter to filter out noise, not fit a polynomial to the noise.
Given the nature of your data:
the points may lie anywhere on the X axis between 0.0 and 1.0, but the Y values will always be either 1.0 or 0.0.
Then you don't need even a single pass, as these two lines will pass exactly through every point:
X = [0.0 ... 1.0], Y = 0.0
X = [0.0 ... 1.0], Y = 1.0
Two short line segments, unit length, and every point falls on one line or the other.
Admittedly, an algorithm to find a good curve fit for arbitrary points in a single pass is interesting, but (based on your question), that's not what you need.
Assuming that you don't know which point should belong to which curve, something like a Hough Transform might provide what you need.
The Hough Transform is a technique that allows you to identify structure within a data set. One use is for computer vision, where it allows easy identification of lines and borders within the field of sight.
Advantages for this situation:
Each point need be considered only once
You don't need to keep a data structure for each candidate line, just one (complex, multi-dimensional) structure
Processing of each line is simple
You can stop at any point and output a set of good matches
You never discard any data, so it's not reliant on any accidental locality of references
You can trade off between accuracy and memory requirements
Isn't limited to exact matches, but will highlight partial matches too.
An approach
To find cubic fits, you'd construct a 4-dimensional Hough space, into which you'd project each of your data-points. Hotspots within Hough space would give you the parameters for the cubic through those points.
You need the solution to an overdetermined linear system. The popular methods are Normal Equations (not usually recommended), QR factorization, and singular value decomposition (SVD). Wikipedia has decent explanations, Trefethen and Bau is very good. Your options:
Out-of-core implementation via the normal equations. This requires the product A'A where A has many more rows than columns (so the result is very small). The matrix A is completely defined by the sample locations so you don't have to store it, thus computing A'A is reasonably cheap (very cheap if you don't need to hit memory for the node locations). Once A'A is computed, you get the solution in one pass through your input data, but the method can be unstable.
Implement an out-of-core QR factorization. Classical Gram-Schmidt will be fastest, but you have to be careful about stability.
Do it in-core with distributed memory (if you have the hardware available). Libraries like PLAPACK and SCALAPACK can do this, the performance should be much better than 1. The parallel scalability is not fantastic, but will be fine if it's a problem size that you would even think about doing in serial.
Use iterative methods to compute an SVD. Depending on the spectral properties of your system (maybe after preconditioning) this could converge very fast and does not require storage for the matrix (which in your case has 5-10 columns, each of which is the size of your input data). A good library for this is SLEPc; you only have to form the product of the Vandermonde matrix with a vector (so you only need to store the sample locations). This is very scalable in parallel.
I believe I found the answer to my own question based on a modified version of this code. For those interested, my Java code is here.

Microsoft.DirectX.Vector3.Normalize() inconsistency

There are two ways to normalize a Vector3 object: by calling Vector3.Normalize(), or by normalizing from scratch:
class Tester {
    static Vector3 NormalizeVector(Vector3 v)
    {
        float l = v.Length();
        return new Vector3(v.X / l, v.Y / l, v.Z / l);
    }

    public static void Main(string[] args)
    {
        Vector3 v = new Vector3(0.0f, 0.0f, 7.0f);
        Vector3 v2 = NormalizeVector(v);
        Debug.WriteLine(v2.ToString());
        v.Normalize();
        Debug.WriteLine(v.ToString());
    }
}
The code above produces this:
X: 0
Y: 0
Z: 1
X: 0
Y: 0
Z: 0.9999999
Why?
(Bonus points: Why Me?)
Look at how they implemented it (e.g. in asm).
Maybe they wanted it to be faster and produced something like:
l = 1 / v.Length();
return new Vector3(v.X * l, v.Y * l, v.Z * l);
to trade two divisions for three multiplications (multiplications are generally cheaper than divisions on FPUs). This introduces one more level of operation, and so less precision.
This would be the often-cited "premature optimization".
Don't worry about this. There's always some error involved when using floats. If you're curious, try changing to double and see if this still happens.
You should expect this when using floats; the basic reason is that the computer processes in binary, and binary doesn't map exactly to decimal.
For an intuitive example of issues between different bases, consider the fraction 1/3. It cannot be represented exactly in decimal (it's 0.333333...) but can be in ternary (as 0.1).
Generally these issues are a lot less obvious with doubles, at the expense of computing cost (double the number of bits to manipulate). However, given that a float level of precision was enough to get man to the moon, you really shouldn't obsess. :-)
These issues are sort of computer theory 101 (as opposed to programming 101, which you're obviously well beyond), and if you're heading towards DirectX code where similar things come up regularly, I'd suggest picking up a basic computer theory book and reading through it quickly.
You have here an interesting discussion about String formatting of floats.
Just for reference:
Your number requires 24 bits to be represented, which means that you are using up the whole mantissa of a float (23 bits plus 1 implied bit).
Single.ToString () is ultimately implemented by a native function, so I cannot tell for sure what is going on, but my guess is that it uses the last digit to round the whole mantissa.
The reason behind this could be that you often get numbers that cannot be represented exactly in binary, so you would get a long mantissa; for instance, 0.01 is represented internally as 0.00999... as you can see by writing:
float f = 0.01f;
Console.WriteLine ("{0:G}", f);
Console.WriteLine ("{0:G}", (double) f);
By rounding at the seventh digit, you get back "0.01", which is what you would have expected.
Given the above, numbers with only 7 digits will not show this problem, as you already saw.
Just to be clear: the rounding is taking place only when you convert your number to a string: your calculations, if any, will use all the available bits.
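A quick way to see that 24-bit mantissa in action (a hedged, stand-alone Java snippet, unrelated to the DirectX code): above 2^24 a float can no longer distinguish consecutive integers, which is why you only get roughly seven reliable decimal digits.
float a = 16777216f;            // 2^24, the largest power of two before integer gaps appear
float b = a + 1f;               // 16777217 is not representable as a float; the sum rounds back down
System.out.println(a == b);     // prints true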
Floats have a precision of 7 digits externally (9 internally), so if you go above that then rounding (with potential quirks) is automatic.
If you drop the float down to 7 digits (for instance, 1 to the left, 6 to the right) then it will work out and the string conversion will as well.
As for the bonus points:
Why you? Because this code was 'eager to blow on you'.
(Vulcan... blow... OK. Lamest. Pun. Ever.)
If your code is broken by minute floating point rounding errors, then I'm afraid you need to fix it, as they're just a fact of life.

Ball to Ball Collision - Detection and Handling

With the help of the Stack Overflow community I've written a pretty basic, but fun, physics simulator.
You click and drag the mouse to launch a ball. It will bounce around and eventually stop on the "floor".
My next big feature I want to add is ball-to-ball collision. The ball's movement is broken up into an x and y speed vector. I have gravity (a small reduction of the y vector each step) and friction (a small reduction of both vectors on each collision with a wall). The balls honestly move around in a surprisingly realistic way.
I guess my question has two parts:
What is the best method to detect ball-to-ball collision?
Do I just have an O(n^2) loop that iterates over each ball and checks every other ball to see if its radius overlaps?
What equations do I use to handle the ball-to-ball collisions? Physics 101.
How does it affect the two balls' x/y speed vectors? What is the resulting direction the two balls head off in? How do I apply this to each ball?
Handling the collision detection of the "walls" and the resulting vector changes was easy, but I see more complications with ball-ball collisions. With walls I simply had to take the negative of the appropriate x or y vector and off it would go in the correct direction. With balls I don't think it is that way.
Some quick clarifications: for simplicity I'm ok with a perfectly elastic collision for now, also all my balls have the same mass right now, but I might change that in the future.
Edit: Resources I have found useful
2d Ball physics with vectors: 2-Dimensional Collisions Without Trigonometry.pdf
2d Ball collision detection example: Adding Collision Detection
Success!
I have the ball collision detection and response working great!
Relevant code:
Collision Detection:
for (int i = 0; i < ballCount; i++)
{
    for (int j = i + 1; j < ballCount; j++)
    {
        if (balls[i].colliding(balls[j]))
        {
            balls[i].resolveCollision(balls[j]);
        }
    }
}
This will check for collisions between every ball but skip redundant checks (if you have to check if ball 1 collides with ball 2 then you don't need to check if ball 2 collides with ball 1. Also, it skips checking for collisions with itself).
Then, in my ball class I have my colliding() and resolveCollision() methods:
public boolean colliding(Ball ball)
{
    float xd = position.getX() - ball.position.getX();
    float yd = position.getY() - ball.position.getY();
    float sumRadius = getRadius() + ball.getRadius();
    float sqrRadius = sumRadius * sumRadius;
    float distSqr = (xd * xd) + (yd * yd);
    if (distSqr <= sqrRadius)
    {
        return true;
    }
    return false;
}
public void resolveCollision(Ball ball)
{
    // get the mtd
    Vector2d delta = (position.subtract(ball.position));
    float d = delta.getLength();
    // minimum translation distance to push balls apart after intersecting
    Vector2d mtd = delta.multiply(((getRadius() + ball.getRadius()) - d) / d);

    // resolve intersection --
    // inverse mass quantities
    float im1 = 1 / getMass();
    float im2 = 1 / ball.getMass();

    // push-pull them apart based off their mass
    position = position.add(mtd.multiply(im1 / (im1 + im2)));
    ball.position = ball.position.subtract(mtd.multiply(im2 / (im1 + im2)));

    // impact speed
    Vector2d v = (this.velocity.subtract(ball.velocity));
    float vn = v.dot(mtd.normalize());

    // sphere intersecting but moving away from each other already
    if (vn > 0.0f) return;

    // collision impulse
    float i = (-(1.0f + Constants.restitution) * vn) / (im1 + im2);
    Vector2d impulse = mtd.normalize().multiply(i);

    // change in momentum
    this.velocity = this.velocity.add(impulse.multiply(im1));
    ball.velocity = ball.velocity.subtract(impulse.multiply(im2));
}
Source Code: Complete source for ball to ball collider.
If anyone has some suggestions for how to improve this basic physics simulator let me know! One thing I have yet to add is angular momentum so the balls will roll more realistically. Any other suggestions? Leave a comment!
To detect whether two balls collide, just check whether the distance between their centers is less than two times the radius. To do a perfectly elastic collision between the balls, you only need to worry about the component of the velocity that is in the direction of the collision. The other component (tangent to the collision) will stay the same for both balls. You can get the collision components by creating a unit vector pointing in the direction from one ball to the other, then taking the dot product with the velocity vectors of the balls. You can then plug these components into a 1D perfectly elastic collision equation.
Wikipedia has a pretty good summary of the whole process. For balls of any mass, the new velocities can be calculated using the 1-D elastic collision equations (where v1 and v2 are the velocities after the collision, and u1, u2 are from before):
v1 = (u1 * (m1 - m2) + 2 * m2 * u2) / (m1 + m2)
v2 = (u2 * (m2 - m1) + 2 * m1 * u1) / (m1 + m2)
If the balls have the same mass then the velocities are simply swapped. Here's some code I wrote which does something similar:
void Simulation::collide(Storage::Iterator a, Storage::Iterator b)
{
    // Check whether there actually was a collision
    if (a == b)
        return;

    Vector collision = a.position() - b.position();
    double distance = collision.length();
    if (distance == 0.0) {      // hack to avoid div by zero
        collision = Vector(1.0, 0.0);
        distance = 1.0;
    }
    if (distance > 1.0)
        return;

    // Get the components of the velocity vectors which are parallel to the collision.
    // The perpendicular component remains the same for both fish
    collision = collision / distance;
    double aci = a.velocity().dot(collision);
    double bci = b.velocity().dot(collision);

    // Solve for the new velocities using the 1-dimensional elastic collision equations.
    // Turns out it's really simple when the masses are the same.
    double acf = bci;
    double bcf = aci;

    // Replace the collision velocity components with the new ones
    a.velocity() += (acf - aci) * collision;
    b.velocity() += (bcf - bci) * collision;
}
As for efficiency, Ryan Fox is right, you should consider dividing up the region into sections, then doing collision detection within each section. Keep in mind that balls can collide with other balls on the boundaries of a section, so this may make your code much more complicated. Efficiency probably won't matter until you have several hundred balls though. For bonus points, you can run each section on a different core, or split up the processing of collisions within each section.
Well, years ago I wrote a program like the one you present here.
There is one hidden problem (or many, depending on your point of view): if the speed of a ball is too high, you can miss the collision.
And also, in almost 100% of cases your new speeds will be wrong. Well, not the speeds, but the positions: you have to calculate the new speeds precisely at the correct place. Otherwise you just shift the balls by some small "error" amount left over from the previous discrete step.
The solution is obvious: you have to split the timestep so that first you shift to the correct place, then collide, then shift for the rest of the time you have.
You should use space partitioning to solve this problem.
Read up on Binary Space Partitioning and Quadtrees.
As a clarification to the suggestion by Ryan Fox to split the screen into regions, and only checking for collisions within regions...
e.g. split the play area up into a grid of squares (which we will arbitrarily say are 1 unit long per side), and check for collisions within each grid square.
That's absolutely the correct solution. The only problem with it (as another poster pointed out) is that collisions across boundaries are a problem.
The solution to this is to overlay a second grid at a 0.5 unit vertical and horizontal offset to the first one.
Then, any collisions that would be across boundaries in the first grid (and hence not detected) will be within grid squares in the second grid. As long as you keep track of the collisions you've already handled (as there is likely to be some overlap) you don't have to worry about handling edge cases. All collisions will be within a grid square on one of the grids.
A good way of reducing the number of collision checks is to split the screen into different sections. You then only compare each ball to the balls in the same section.
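A hedged Java sketch of that idea (the Ball class here is a stand-in for whatever ball object you already have): hash each ball into the coarse grid cells its bounding box touches, then only run the expensive circle test on balls that share a cell. Inserting a ball into every cell it touches also covers the cross-boundary case discussed above.
import java.util.*;

class Ball { float x, y, radius; }   // stand-in for the real ball class

class GridBroadPhase {
    // cellSize should be at least as large as the biggest ball diameter
    static Map<Long, List<Ball>> buildGrid(List<Ball> balls, float cellSize) {
        Map<Long, List<Ball>> grid = new HashMap<>();
        for (Ball b : balls) {
            int minCx = (int) Math.floor((b.x - b.radius) / cellSize);
            int maxCx = (int) Math.floor((b.x + b.radius) / cellSize);
            int minCy = (int) Math.floor((b.y - b.radius) / cellSize);
            int maxCy = (int) Math.floor((b.y + b.radius) / cellSize);
            for (int cx = minCx; cx <= maxCx; cx++) {
                for (int cy = minCy; cy <= maxCy; cy++) {
                    long key = ((long) cx << 32) ^ (cy & 0xFFFFFFFFL);   // pack cell coordinates into one key
                    grid.computeIfAbsent(key, k -> new ArrayList<>()).add(b);
                }
            }
        }
        return grid;
    }
}
The narrow phase then loops over grid.values() and only compares balls within the same list, remembering to skip duplicate pairs that share more than one cell.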
One thing I see here to optimize.
While I do agree that the balls hit when the distance is the sum of their radii, one should never actually calculate this distance! Rather, calculate its square and work with that. There's no reason for the expensive square root operation.
Also, once you have found a collision you have to continue to evaluate collisions until no more remain. The problem is that the first one might cause others that have to be resolved before you get an accurate picture. Consider what happens if a ball hits another ball at the edge: the second ball hits the edge and immediately rebounds into the first ball. If you bang into a pile of balls in a corner, you could have quite a few collisions that have to be resolved before you can iterate the next cycle.
As for the O(n^2), all you can do is minimize the cost of rejecting ones that miss:
1) A ball that is not moving can't hit anything. If there are a reasonable number of balls lying around on the floor this could save a lot of tests. (Note that you must still check if something hit the stationary ball.)
2) Something that might be worth doing: divide the screen into a number of zones, but make the boundaries fuzzy: balls at the edge of a zone are listed as being in all the relevant zones (could be 4). I would use a 4x4 grid and store the zones as bits. If an AND of the zones of two balls returns zero, end of test.
3) As I mentioned, don't do the square root.
I found an excellent page with information on collision detection and response in 2D.
http://www.metanetsoftware.com/technique.html (web.archive.org)
They try to explain how it's done from an academic point of view. They start with the simple object-to-object collision detection, and move on to collision response and how to scale it up.
Edit: Updated link
You have two easy ways to do this. Jay has covered the accurate way of checking from the center of the ball.
The easier way is to use a rectangle bounding box, set the size of your box to be 80% the size of the ball, and you'll simulate collision pretty well.
Add a method to your ball class:
public Rectangle getBoundingRect()
{
    int ballHeight = (int)(Ball.Height * 0.80f);
    int ballWidth = (int)(Ball.Width * 0.80f);
    int x = Ball.X - ballWidth / 2;
    int y = Ball.Y - ballHeight / 2;
    return new Rectangle(x, y, ballWidth, ballHeight);
}
Then, in your loop:
// Checks every ball against every other ball.
// For best results, split it into quadrants like Ryan suggested.
// I didn't do that for simplicity here.
for (int i = 0; i < balls.count; i++)
{
Rectangle r1 = balls[i].getBoundingRect();
for (int k = 0; k < balls.count; k++)
{
if (balls[i] != balls[k])
{
Rectangle r2 = balls[k].getBoundingRect();
if (r1.Intersects(r2))
{
// balls[i] collided with balls[k]
}
}
}
}
I see it hinted here and there, but you could also do a faster calculation first, like, compare the bounding boxes for overlap, and THEN do a radius-based overlap if that first test passes.
The addition/difference math is much faster for a bounding box than all the trig for the radius, and most times, the bounding box test will dismiss the possibility of a collision. But if you then re-test with trig, you're getting the accurate results that you're seeking.
Yes, it's two tests, but it will be faster overall.
This KineticModel is an implementation of the cited approach in Java.
I implemented this code in JavaScript using the HTML Canvas element, and it produced wonderful simulations at 60 frames per second. I started the simulation off with a collection of a dozen balls at random positions and velocities. I found that at higher velocities, a glancing collision between a small ball and a much larger one caused the small ball to appear to STICK to the edge of the larger ball, and moved up to around 90 degrees around the larger ball before separating. (I wonder if anyone else observed this behavior.)
Some logging of the calculations showed that the Minimum Translation Distance in these cases was not large enough to prevent the same balls from colliding in the very next time step. I did some experimenting and found that I could solve this problem by scaling up the MTD based on the relative velocities:
dot_velocity = ball_1.velocity.dot(ball_2.velocity);
mtd_factor = 1. + 0.5 * Math.abs(dot_velocity * Math.sin(collision_angle));
mtd.multiplyScalar(mtd_factor);
I verified that before and after this fix, the total kinetic energy was conserved for every collision. The 0.5 value in mtd_factor was approximately the minimum value found to always cause the balls to separate after a collision.
Although this fix introduces a small amount of error in the exact physics of the system, the tradeoff is that now very fast balls can be simulated in a browser without decreasing the time step size.
Improving on the circle-with-circle collision detection given within the question:
float dx = circle1.x - circle2.x,
      dy = circle1.y - circle2.y,
      r = circle1.r + circle2.r;
return (dx * dx + dy * dy <= r * r);
It avoids the unnecessary "if with two returns" and the use of more variables than necessary.
After some trial and error, I used the method for 2D collisions from this document (which the OP linked to): https://www.vobarian.com/collisions/2dcollisions2.pdf
I applied this within a JavaScript program using p5js, and it works perfectly. I had previously attempted to use trigonometric equations, and while they do work for specific collisions, I could not find one that worked for every collision no matter the angle at which it happened.
The method explained in this document uses no trigonometric functions whatsoever; it's just plain vector operations. I recommend it to anyone trying to implement ball-to-ball collision; trigonometric functions, in my experience, are hard to generalize. I asked a physicist at my university to show me how to do it, and he told me not to bother with trigonometric functions and showed me a method analogous to the one linked in the document.
NB: my masses are all equal, but this can be generalised to different masses using the equations presented in the document.
Here's my code for calculating the resulting speed vectors after the collision:
// you just need a ball object with a speed and position vector
class TBall {
    constructor(x, y, vx, vy) {
        this.r = [x, y];      // position
        this.v = [vx, vy];    // velocity
    }
}

// throw two balls into this function and it'll update their speed vectors
// if they collide; call this in your main loop for every pair of balls.
// (vecNorm, dotProd and vecSum are the small 2-vector helpers used below.)
function collision(ball1, ball2) {
    let n = [ (ball1.r)[0] - (ball2.r)[0], (ball1.r)[1] - (ball2.r)[1] ];
    let un = [ n[0] / vecNorm(n), n[1] / vecNorm(n) ];
    let ut = [ -un[1], un[0] ];
    let v1n = dotProd(un, ball1.v);
    let v1t = dotProd(ut, ball1.v);
    let v2n = dotProd(un, ball2.v);
    let v2t = dotProd(ut, ball2.v);
    // tangential components are unchanged, normal components are exchanged (equal masses)
    let v1t_p = v1t, v2t_p = v2t;
    let v1n_p = v2n, v2n_p = v1n;
    let v1n_pvec = [ v1n_p * un[0], v1n_p * un[1] ];
    let v1t_pvec = [ v1t_p * ut[0], v1t_p * ut[1] ];
    let v2n_pvec = [ v2n_p * un[0], v2n_p * un[1] ];
    let v2t_pvec = [ v2t_p * ut[0], v2t_p * ut[1] ];
    ball1.v = vecSum(v1n_pvec, v1t_pvec);
    ball2.v = vecSum(v2n_pvec, v2t_pvec);
}
I would consider using a quadtree if you have a large number of balls. For deciding the direction of bounce, just use simple conservation of energy formulas based on the collision normal. Elasticity, weight, and velocity would make it a bit more realistic.
Here is a simple example that supports mass.
private void CollideBalls(Transform ball1, Transform ball2, ref Vector3 vel1, ref Vector3 vel2, float radius1, float radius2)
{
    // ballMass1 and ballMass2 are assumed to be fields of the containing class
    var vec = ball1.position - ball2.position;
    float dis = vec.magnitude;
    if (dis < radius1 + radius2)
    {
        var n = vec.normalized;
        ReflectVelocity(ref vel1, ref vel2, ballMass1, ballMass2, n);
        // push the balls back out to their radii along the collision normal
        var c = Vector3.Lerp(ball1.position, ball2.position, radius1 / (radius1 + radius2));
        ball1.position = c + (n * radius1);
        ball2.position = c - (n * radius2);
    }
}

public static void ReflectVelocity(ref Vector3 vel1, ref Vector3 vel2, float mass1, float mass2, Vector3 intersectionNormal)
{
    float velImpact1 = Vector3.Dot(vel1, intersectionNormal);
    float velImpact2 = Vector3.Dot(vel2, intersectionNormal);
    float totalMass = mass1 + mass2;
    float massTransfer1 = mass1 / totalMass;
    float massTransfer2 = mass2 / totalMass;
    vel1 += ((velImpact2 * massTransfer2) - (velImpact1 * massTransfer2)) * intersectionNormal;
    vel2 += ((velImpact1 * massTransfer1) - (velImpact2 * massTransfer1)) * intersectionNormal;
}
