Calculate distance between two points in Bing Maps - c#-4.0

I have a Bing map and two points, Point1 and Point2, and I want to calculate the distance between these two points. Is that possible?
Also, if I want to put a circle at two thirds of the way along the path between Point1 and Point2 (i.e. near Point2), how can I do that?

Microsoft has a GeoCoordinate.GetDistanceTo method, which uses the Haversine formula.
For me, other implementations returned NaN for distances that are too small. I haven't run into any issues with the built-in method yet.
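For illustration, here's a minimal sketch of that method (assuming .NET 4 with a reference to System.Device.dll, where GeoCoordinate lives; the coordinates are made up):
using System;
using System.Device.Location; // requires a reference to System.Device.dll

class DistanceDemo
{
    static void Main()
    {
        // Hypothetical coordinates for Point1 and Point2 (latitude, longitude in degrees).
        var point1 = new GeoCoordinate(47.6097, -122.3331);
        var point2 = new GeoCoordinate(47.6205, -122.3493);

        // GetDistanceTo returns the distance in metres.
        double metres = point1.GetDistanceTo(point2);
        Console.WriteLine("Distance: {0:F1} m", metres);
    }
}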

See the Haversine formula, or even better the Vincenty formula, for how to solve this problem.
The following code uses the Haversine approach to get the distance:
public double GetDistanceBetweenPoints(double lat1, double long1, double lat2, double long2)
{
    double distance = 0;

    // Convert the coordinate deltas to radians.
    double dLat = (lat2 - lat1) / 180 * Math.PI;
    double dLong = (long2 - long1) / 180 * Math.PI;

    double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
             + Math.Cos(lat1 / 180 * Math.PI) * Math.Cos(lat2 / 180 * Math.PI)
             * Math.Sin(dLong / 2) * Math.Sin(dLong / 2);
    double c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));

    // Calculate the radius of the earth at the given latitude.
    // For this you can assume either of the two points.
    double radiusE = 6378135; // Equatorial radius, in metres
    double radiusP = 6356750; // Polar radius, in metres

    // Numerator part of the function
    double nr = Math.Pow(radiusE * radiusP * Math.Cos(lat1 / 180 * Math.PI), 2);
    // Denominator part of the function
    double dr = Math.Pow(radiusE * Math.Cos(lat1 / 180 * Math.PI), 2)
              + Math.Pow(radiusP * Math.Sin(lat1 / 180 * Math.PI), 2);
    double radius = Math.Sqrt(nr / dr);

    // Calculate distance in metres.
    distance = radius * c;
    return distance; // distance in metres
}
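For example, calling it with two sample coordinates (hypothetical values) might look like this:
// Example usage (hypothetical coordinates, result in metres):
double d = GetDistanceBetweenPoints(47.6097, -122.3331, 47.6205, -122.3493);
Console.WriteLine("Distance: {0:F1} m", d);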
You can find a good site with more information here.

You can use a geographic library for (re-)projection and calculation operations if you need more accurate results or want to do some math operations (e.g. transform a circle onto a spheroid/projection). Take a look at DotSpatial or SharpMap and the samples/unit tests/sources there... this might help to solve your problem.
Anyway, if you know the geodesic distance and bearing, you can also calculate where the resulting target position (the center of your circle) is; see, e.g., the "Direct Problem" of Vincenty's algorithms. Here are also some useful algorithm implementations for Silverlight/.NET.
You might also consider posting your question at GIS Stackexchange. They discuss GIS-related problems like yours. Take a look at the question for calculating lat long x-miles from point (as you already know the whole distance now), or see the discussion here about distance calculations. This question is related to the problem of how to draw a point on a line at a given distance, and is nearly the same thing (since you need a center and a radius).
Another option is to use the ArcGIS API for Silverlight, which can also display Bing Maps. It is open source and you can learn the things you need there (or just use them, since they already exist in the SDK). See the Utilities examples tab within the samples.

As I already mentioned: take a look at this page to get more information regarding your problem. There you'll find a formula, JavaScript code and an Excel sample for calculating a destination point from a given distance and bearing from the start point (see the headings there).
It shouldn't be difficult to "transform" that code into your C# world.
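For reference, here is a rough C# sketch of that destination-point formula (a spherical approximation, not the code from that page; distance in metres, bearing in degrees clockwise from north). To place your circle at two thirds of the path, pass distance = 2.0 / 3.0 * totalDistance together with the initial bearing from Point1 to Point2:
// Sketch: destination point given a start point, distance and bearing (spherical earth).
public static void GetDestinationPoint(double lat, double lon, double distanceMetres,
                                       double bearingDegrees, out double destLat, out double destLon)
{
    const double R = 6371000;                  // mean earth radius in metres
    double delta = distanceMetres / R;         // angular distance
    double theta = bearingDegrees * Math.PI / 180;
    double phi1 = lat * Math.PI / 180;
    double lambda1 = lon * Math.PI / 180;

    double phi2 = Math.Asin(Math.Sin(phi1) * Math.Cos(delta)
                + Math.Cos(phi1) * Math.Sin(delta) * Math.Cos(theta));
    double lambda2 = lambda1 + Math.Atan2(Math.Sin(theta) * Math.Sin(delta) * Math.Cos(phi1),
                                          Math.Cos(delta) - Math.Sin(phi1) * Math.Sin(phi2));

    destLat = phi2 * 180 / Math.PI;
    destLon = lambda2 * 180 / Math.PI;
}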

Related

Shading with squared falloff always makes everything black

I have recently tried to change the current way I calculate diffuse lighting in my RayTracer. It used to be calculated like this:
float lambert = (light_ray_dir * normal) * coef;
red += lambert * current.color.red * mat.kd.red;
green += lambert * current.color.green * mat.kd.green;
blue += lambert * current.color.blue * mat.kd.blue;
where coef is an attenuation coefficient that starts at 1 for each pixel and is then attenuated for each reflected ray that is generated by this line:
coef *= mat.reflection;
This worked well.
But I decided to try something more realistic and implemented this:
float squared_attenuation = LIGHT_FALLOFF * lenght;
light_intensity.setX ((current.color.red /*INTENSITY*/)/ squared_attenuation);
light_intensity.setY ((current.color.green /*INTENSITY*/)/ squared_attenuation);
light_intensity.setZ ((current.color.blue /*INTENSITY*/)/ squared_attenuation);
red += ALBEDO * light_intensity.getX() * lambert * mat.kd.red;
green += ALBEDO * light_intensity.getY() * lambert * mat.kd.green;
blue += ALBEDO * light_intensity.getZ() * lambert * mat.kd.blue;
where LIGHT_FALLOFF is a constant value:
#define LIGHT_FALLOFF M_PI * 4
and length is the length of the vector that goes from the point light's center to the intersection point:
inline float normalize_return_lenght () {
    float lenght = sqrtf (x*x + y*y + z*z);
    float inv_length = 1/lenght;
    x = x * inv_length, y = y * inv_length, z = z * inv_length;
    return lenght;
}
float lenght = light_ray_dir.normalize_return_lenght ();
The problem is that all this generates nothing more than a black screen! The main culprit is the length that is used as the divisor in light_intensity.set: it makes the final color values come out as some value^-5. However, even if I replace it with one (ruining my goal of realistic light attenuation), I still get color values too close to zero, and hence a black image.
I tried adding other light sources closer to the objects, but the models to be shaded are made of multiple polygons with different coordinates, which makes it hard to determine a good location for them.
So I ask: does it seem normal to you that this is happening, or does it seem like a bug? The theory does not seem strange to me, since the attenuation is quadratic.
If it is not a bug, is there some hint as to where to place the light sources, or anything else, so I can get an image that is not all black?
Thanks for reading all this!
P.S.: INTENSITY is commented out because in the example I based my code on, it was one.
So you are assuming that the light has a luminosity of "1"?
How far away is your light?
If your light is - say - 10 units away, then the contribution from the light will be 1/10, or a very small number. This is probably why your image is dark.
You need to have quite large numbers for your light intensity if you are going to do this. In one of my scenes, I have a light that is about 1000 units away (pretending to be the Sun) and the intensity is 380000!!
Another thing ... to simulate reality, you should be using 1 / length^2. The light intensity falls off with the square of distance, not just with distance.
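To see why large intensities are needed, here is a minimal sketch (in C#, with made-up numbers) of what inverse-square attenuation does to the shading term:
// Sketch (hypothetical numbers): inverse-square falloff needs large raw intensities.
double intensity = 380000;   // raw light intensity (compare the "Sun" example above)
double distance = 1000;      // distance from the light to the surface point
double lambert = 0.8;        // assumed N·L term
double albedo = 0.18;        // assumed surface reflectance

double attenuated = intensity / (distance * distance);  // 1 / length^2 falloff
double shade = albedo * lambert * attenuated;            // ~0.055 here; with intensity = 1 it would be ~1e-7
Console.WriteLine(shade);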
Good luck!

NON orthogonal projection : projecting a point onto a line at given direction (2d)

I need a solution to project a 2D point onto a 2D line in a certain direction. Here's what I've got so far; this is how I do orthogonal projection:
CVector2d project(Line line, CVector2d point)
{
    CVector2d A = line.end - line.start;
    CVector2d B = point - line.start;
    float dot = A.dotProduct(B);
    float mag = A.getMagnitude();
    float md = dot / mag;
    return CVector2d(line.start + A * md);
}
Result :
(Projecting P onto line and the result is Pr):
But I need to project the point onto the line in a given DIRECTION, which should return a result like this (project point P1 onto the line along a specific direction to calculate Pr):
How should I take the direction vector into account to calculate Pr?
I can come up with two methods off the top of my head.
Personally I would do this using affine transformations (but it seems you don't have this concept available, as you are using vectors, not points). The procedure with affine transformations is easy: rotate the points so the line lies along one of the cardinal axes, read the coordinates of your point, zero out the other value, and inverse-transform back. The reason for this strategy is that with the affine-transformation scheme, nearly all transformation procedures reduce to very simple, human-understandable operations. So there's no real work to do once you have the tools and data structures at hand.
However, since you didn't see this coming, I assume you want to hear a vector operation instead (because you either prefer the matrix operation or run away when it's suggested, though it's the same thing). So you have the following situation:
Expressed as a system of equations, this looks like the following (it's intentionally written this way to show you that it is NOT code but math at this point):
line.start.x + x*(line.end.x - line.start.x) + y*direction.x = point.x
line.start.y + x*(line.end.y - line.start.y) + y*direction.y = point.y
Now this can be solved for x (and y):
x = (direction.y * line.start.x - direction.x * line.start.y -
     direction.y * point.x + direction.x * point.y) /
    (direction.y * line.end.x - direction.x * line.end.y -
     direction.y * line.start.x + direction.x * line.start.y);
// the solution for y can be omitted, you don't need it
y = -(line.end.y * line.start.x - line.end.x * line.start.y -
      line.end.y * point.x + line.start.y * point.x + line.end.x * point.y -
      line.start.x * point.y) /
    (-direction.y * line.end.x + direction.x * line.end.y +
      direction.y * line.start.x - direction.x * line.start.y)
The calculation was done with Mathematica, so if I didn't copy anything wrong it should work. But I would never use this solution because it's not understandable (although it is high-school-grade math, or at least it is where I am). I would use the space transformation described above instead.
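For completeness, here is a compact sketch of the same idea in C# (not the poster's CVector2d code; it uses a hypothetical Vec2 struct and the 2D cross product cross(a, b) = a.x*b.y - a.y*b.x):
// Sketch: intersect the line {start + t*d} with the ray {point + s*direction}; the intersection is Pr.
static Vec2 ProjectAlongDirection(Vec2 lineStart, Vec2 lineEnd, Vec2 point, Vec2 direction)
{
    Vec2 d = new Vec2(lineEnd.X - lineStart.X, lineEnd.Y - lineStart.Y);
    Vec2 p = new Vec2(point.X - lineStart.X, point.Y - lineStart.Y);

    double denom = d.X * direction.Y - d.Y * direction.X;       // cross(d, direction)
    // denom == 0 means the direction is parallel to the line: no unique answer.
    double t = (p.X * direction.Y - p.Y * direction.X) / denom; // cross(p, direction) / cross(d, direction)

    return new Vec2(lineStart.X + t * d.X, lineStart.Y + t * d.Y);
}

struct Vec2
{
    public double X, Y;
    public Vec2(double x, double y) { X = x; Y = y; }
}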

Issues with bullet entry points for "shoulder mounted" guns

I'm making a SHMUP game that has a space ship. That space ship currently fires a main cannon from its center point. The sprite that represents the ship has a center based registration point. 0,0 is center of the ship.
When I fire the main cannon I make a bullet, set its x & y coordinates to match the avatar's, and add it to the display list. This works fine.
I then made two new functions called fireLeftCannon and fireRightCannon. These create a bullet and add it to the display list, but with the x and y values offset from the ship's position (by ±10 and +15). This creates a sort of triangle of bullet entry points.
Similar to this:
   ▲
▲   ▲
The game tick function adjusts the avatar's rotation so that it always points at the cursor; this is my aiming method. When I shoot straight up, all 3 bullets fire up in the expected pattern. However, when I rotate and face to the right, the entry points do not rotate. This is not an issue for the center-point main cannon.
My question is: how do I take the current center position (this.x, this.y) and adjust it, based on my current rotation, to place a new bullet so that it is angled correctly?
Thanks a lot in advance.
Tyler
EDIT
OK, I tried your solution and it didn't work. Here is my bullet move code:
var pi:Number = Math.PI
var _xSpeed:Number = Math.cos((_rotation - 90) * (pi/180) );
var _ySpeed:Number = Math.sin((_rotation - 90) * (pi / 180) );
this.x += (_xSpeed * _bulletSpeed );
this.y += (_ySpeed * _bulletSpeed );
And I tried adding your code to the left shoulder cannon:
_bullet.x = this.x + Math.cos( StaticMath.ToRad(this.rotation) ) * ( this.x - 10 ) - Math.sin( StaticMath.ToRad(this.rotation)) * ( this.x - 10 );
_bullet.y = this.y + Math.sin( StaticMath.ToRad(this.rotation)) * ( this.y + 15 ) + Math.cos( StaticMath.ToRad(this.rotation)) * ( this.y + 15 );
This is placing the shots a good deal away from the ship and sometimes off screen.
How am I messing up the translation code?
What you need to start with is, to be precise, the coordinates of your cannons in the ship's coordinate system (or “frame of reference”). This is like what you have now but starting from 0, not the ship's position, so they would be something like:
(0, 0) -- center
(10, 15) -- left shoulder
(-10, 15) -- right shoulder
Then what you need to do is transform those coordinates into the coordinate system of the world/scene; this is the same kind of thing your graphics library is doing to draw the sprite.
In your particular case, the intervening transformations are
world ←translation→ ship position ←rotation→ ship positioned and rotated
So given that you have coordinates in the third frame (how the ship's sprite is drawn), you need to apply the rotation, and then apply the translation, at which point you're in the first frame. There are two approaches to this: one is matrix arithmetic, and the other is performing the transformations individually.
For this case, it is simpler to skip the matrices unless you already have a matrix library handy, in which case you should use it — calculate the "ship's coordinate transformation matrix" once per frame and then use it for all bullets etc.
I'll now explain doing it directly.
The general method of applying a rotation to coordinates (in two dimensions) is this (where (x1,y1) is the original point and (x2,y2) is the new point):
x2 = cos(angle)*x1 - sin(angle)*y1
y2 = sin(angle)*x1 + cos(angle)*y1
Whether this is a clockwise or counterclockwise rotation will depend on the “handedness” of your coordinate system; just try it both ways (+angle and -angle) until you have the right result. Don't forget to use the appropriate units (radians or degrees, but most likely radians) for your angles given the trig functions you have.
Now, you need to apply the translation. I'll continue using the same names, so (x3,y3) is the rotated-and-translated point. (dx,dy) is what we're translating by.
x3 = dx + x2
y3 = dy + y2
As you can see, that's very simple; you could easily combine it with the rotation formulas.
I have described transformations in general. In the particular case of the ship bullets, it works out to this in particular:
bulletX = shipPosX + cos(shipAngle)*gunX - sin(shipAngle)*gunY
bulletY = shipPosY + sin(shipAngle)*gunX + cos(shipAngle)*gunY
If your bullets are turning the wrong direction, negate the angle.
If you want to establish a direction-dependent initial velocity for your bullets (e.g. always-firing-forward guns) then you just apply the rotation but not the translation to the velocity (gunVelX, gunVelY).
bulletVelX = cos(shipAngle)*gunVelX - sin(shipAngle)*gunVelY
bulletVelY = sin(shipAngle)*gunVelX + cos(shipAngle)*gunVelY
If you were to use vector and matrix math, you would be doing all the same calculations as here, but they would be bundled up in single objects rather than pairs of x's and y's and four trig functions. It can greatly simplify your code:
shipTransform = translate(shipX, shipY)*rotate(shipAngle)
bulletPos = shipTransform*gunPos
I've given the explicit formulas because knowing how the bare arithmetic works is useful to the conceptual understanding.
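As a concrete sketch of the above (written in C# with hypothetical names, since your code is ActionScript), rotating a gun offset and then translating by the ship position:
// Sketch: transform a gun offset from ship space into world space.
static void GunToWorld(double shipX, double shipY, double shipAngleRadians,
                       double gunX, double gunY,
                       out double bulletX, out double bulletY)
{
    double cos = Math.Cos(shipAngleRadians);
    double sin = Math.Sin(shipAngleRadians);

    // Rotate the offset, then translate by the ship's position.
    bulletX = shipX + cos * gunX - sin * gunY;
    bulletY = shipY + sin * gunX + cos * gunY;
}

// Usage for the left-shoulder gun at offset (10, 15):
// GunToWorld(ship.x, ship.y, shipAngleRadians, 10, 15, out bulletX, out bulletY);
// If the bullets turn the wrong way, negate the angle.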
Response to edit:
In the code you edited into your question, you are adding what I assume is the ship position into the coordinates you multiply by sin/cos. Don't do that — just multiply the offset of the gun position from the ship center by sin/cos and only then add that to the ship position. Also, you are using x x; y y on the two lines, where you should be using x y; x y. Here is your code edited to fix those two things:
_bullet.x = this.x + Math.cos( StaticMath.ToRad(this.rotation)) * (-10) - Math.sin( StaticMath.ToRad(this.rotation)) * (+15);
_bullet.y = this.y + Math.sin( StaticMath.ToRad(this.rotation)) * (-10) + Math.cos( StaticMath.ToRad(this.rotation)) * (+15);
This is the code for a gun at offset (-10, 15).

Why won't my raytracer recreate the "mount" scene?

I'm trying to render the "mount" scene from Eric Haines' Standard Procedural Database (SPD), but the refraction part just doesn't want to co-operate. I've tried everything I can think of to fix it.
This one is my render (with Watt's formula):
(source: philosoraptor.co.za)
This is my render using the "normal" formula:
(source: philosoraptor.co.za)
And this one is the correct render:
(source: philosoraptor.co.za)
As you can see, there are only a couple of errors, mostly around the poles of the spheres. This makes me think that refraction, or some precision error is to blame.
Please note that there are actually 4 spheres in the scene, their NFF definitions (s x_coord y_coord z_coord radius) are:
s -0.8 0.8 1.20821 0.17
s -0.661196 0.661196 0.930598 0.17
s -0.749194 0.98961 0.930598 0.17
s -0.98961 0.749194 0.930598 0.17
That is, there is a fourth sphere behind the more obvious three in the foreground. It can be seen in the gap left between these three spheres.
Here is a picture of that fourth sphere alone:
(source: philosoraptor.co.za)
And here is a picture of the first sphere alone:
(source: philosoraptor.co.za)
You'll notice that many of the oddities present in both my version and the correct version are missing. We can conclude that these effects are the result of interactions between the spheres; the question is, which interactions?
What am I doing wrong? Below are some of the potential errors I've already considered:
Refraction vector formula.
As far as I can tell, this is correct. It's the same formula used by several websites and I verified the derivation personally. Here's how I calculate it:
double sinI2 = eta * eta * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - sqrt(1.0f - sinI2)));
transmit = transmit.normalise();
I found an alternate formula in 3D Computer Graphics, 3rd Ed by Alan Watt. It gives a closer approximation to the correct image:
double etaSq = eta * eta;
double sinI2 = etaSq * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - (sqrt(1.0f - sinI2) / etaSq)));
transmit = transmit.normalise();
The only difference is that I'm dividing by eta^2 at the end.
Total internal reflection.
I tested for this, using the following conditional before the rest of my intersection code:
if (sinI2 <= 1)
Calculation of eta.
I use a stack-like approach for this problem:
/* Entering object. */
if (r.normal.dot(r.dir) < 0)
{
    double eta1 = r.iorStack.back();
    double eta2 = m.ior;
    eta = eta1 / eta2;
    r.iorStack.push_back(eta2);
}
/* Exiting object. */
else
{
    double eta1 = r.iorStack.back();
    r.iorStack.pop_back();
    double eta2 = r.iorStack.back();
    eta = eta1 / eta2;
}
As you can see, this stores the previous objects that contained this ray in a stack. When exiting, the code pops the current IOR off the stack and uses that, along with the IOR underneath it, to compute eta. As far as I know this is the most correct way to do it.
This works for nested transmitting objects. However, it breaks down for intersecting transmitting objects. The problem here is that you need to define the IOR for the intersection independently, which the NFF file format does not do. It's unclear, then, what the "correct" course of action is.
Moving the new ray's origin.
The new ray's origin has to be moved slightly along the transmitted path so that it doesn't intersect at the same point as the previous one.
p = r.intersection + transmit * 0.0001f;
p += transmit * 0.01f;
I've tried making this value smaller (0.001f) and (0.0001f) but that makes the spheres appear solid. I guess these values don't move the rays far enough away from the previous intersection point.
EDIT: The problem here was that the reflection code was doing the same thing. So when an object is reflective as well as refractive then the origin of the ray ends up in completely the wrong place.
Amount of ray bounces.
I've artificially limited the amount of ray bounces to 4. I tested raising this limit to 10, but that didn't fix the problem.
Normals.
I'm pretty sure I'm calculating the normals of the spheres correctly. I take the intersection point, subtract the centre of the sphere and divide by the radius.
Just a guess based on doing an image diff (and without reading the rest of your question): the problem looks to me to be the refraction on the back side of the sphere. You might be:
doing it backwards: e.g. reversing (or not reversing) the indexes of refraction.
missing it entirely?
One way to check for this would be to look at the mount through a cube that is almost facing the camera. If the refraction is correct, the picture should be offset slightly but otherwise unaltered. If it's not right, then the picture will seem slightly tilted.
So after more than a year, I finally figured out what was going on here. Clear minds and all that. I was completely off track with the formula. I'm now using a formula by Heckbert instead, which I am sure is correct because I proved it myself using geometry and discrete math.
Here's the correct vector calculation:
double c1 = v.dot(n) * -1;
double c1Sq = pow(c1, 2);
/* Heckbert's formula requires eta to be eta2 / eta1, so I have to flip it here. */
eta = 1 / eta;
double etaSq = pow(eta, 2);
if (etaSq + c1Sq >= 1)
{
    Vector transmit = (v / eta) + (n / eta) * (c1 - sqrt(etaSq - 1 + c1Sq));
    transmit = transmit.normalise();
    ...
}
else
{
    /* Total internal reflection. */
}
In the code above, eta is eta1 (the IOR of the surface from which the ray is coming) over eta2 (the IOR of the destination surface), v is the incident ray and n is the normal.
There was another problem, which confused the matter some more: I had to flip the normal when exiting an object (which is obvious - I missed it because the other errors were obscuring it).
Lastly, my line of sight algorithm (to determine whether a surface is illuminated by a point light source) was not properly passing through transparent surfaces.
So now my images line up properly :)

Taking altitude into account when calculating geodesic distance

I'm currently dealing with GPS data combined with precise altitude measurements.
I want to calculate the distance between two consecutive points. There is a lot of information out there about calculating the distance between two points using the WGS84 ellipsoid and so on.
However, I did not find any information that takes altitude changes into account for this distance calculation.
Does anyone know of websites, papers, books etc. that describe such a method?
Thanks.
Edit: SQL Server 2008's geographic extensions also neglect altitude information when calculating distance.
I implemented a WGS84 distance function using the average of the start and end altitude as the constant altitude. If you are certain that there will be relatively little altitude variation along your path this works acceptably well (error is relative to the altitude difference of your two LLA points).
Here's my code (C#):
/// <summary>
/// Gets the geodesic distance between two pathpoints in the current mode's coordinate system
/// </summary>
/// <param name="point1">First point</param>
/// <param name="point2">Second point</param>
/// <param name="mode">Coordinate mode that both points are in</param>
/// <returns>Distance between the two points in the current coordinate mode</returns>
public static double GetGeodesicDistance(PathPoint point1, PathPoint point2, CoordMode mode) {
    // calculate proper geodesics for LLA paths
    if (mode == CoordMode.LLA) {
        // Meeus approximation
        double f = (point1.Y + point2.Y) / 2 * LatLonAltTransformer.DEGTORAD;
        double g = (point1.Y - point2.Y) / 2 * LatLonAltTransformer.DEGTORAD;
        double l = (point1.X - point2.X) / 2 * LatLonAltTransformer.DEGTORAD;

        double sinG = Math.Sin(g);
        double sinL = Math.Sin(l);
        double sinF = Math.Sin(f);

        double s, c, w, r, d, h1, h2;
        // not perfect but use the average altitude
        double a = (LatLonAltTransformer.A + point1.Z + LatLonAltTransformer.A + point2.Z) / 2.0;

        sinG *= sinG;
        sinL *= sinL;
        sinF *= sinF;

        s = sinG * (1 - sinL) + (1 - sinF) * sinL;
        c = (1 - sinG) * (1 - sinL) + sinF * sinL;
        w = Math.Atan(Math.Sqrt(s / c));
        r = Math.Sqrt(s * c) / w;
        d = 2 * w * a;
        h1 = (3 * r - 1) / 2 / c;
        h2 = (3 * r + 1) / 2 / s;

        return d * (1 + (1 / LatLonAltTransformer.RF) * (h1 * sinF * (1 - sinG) - h2 * (1 - sinF) * sinG));
    }

    // otherwise treat the coordinates as Cartesian and return the straight-line distance
    PathPoint diff = new PathPoint(point2.X - point1.X, point2.Y - point1.Y, point2.Z - point1.Z, 0);
    return Math.Sqrt(diff.X * diff.X + diff.Y * diff.Y + diff.Z * diff.Z);
}
In practice we've found that the altitude difference rarely makes a large difference: our paths are typically 1-2 km long with altitude varying on the order of 100 m, and we see about a ~5 m change on average versus using the WGS84 ellipsoid unmodified.
Edit:
To add to this, if you do expect large altitude changes, you can convert your WGS84 coordinates to ECEF (earth centered earth fixed) and evaluate straight-line paths as shown at the bottom of my function. Converting a point to ECEF is simple to do:
/// <summary>
/// Converts a point in the format (Lon, Lat, Alt) to ECEF
/// </summary>
/// <param name="point">Point as (Lon, Lat, Alt)</param>
/// <returns>Point in ECEF</returns>
public static PathPoint WGS84ToECEF(PathPoint point) {
    PathPoint outPoint = new PathPoint(0);
    double lat = point.Y * DEGTORAD;
    double lon = point.X * DEGTORAD;
    double e2 = 1.0 / RF * (2.0 - 1.0 / RF);  // first eccentricity squared, from the flattening 1/RF
    double sinLat = Math.Sin(lat), cosLat = Math.Cos(lat);
    double chi = A / Math.Sqrt(1 - e2 * sinLat * sinLat);  // prime vertical radius of curvature
    outPoint.X = (chi + point.Z) * cosLat * Math.Cos(lon);
    outPoint.Y = (chi + point.Z) * cosLat * Math.Sin(lon);
    outPoint.Z = (chi * (1 - e2) + point.Z) * sinLat;
    return outPoint;
}
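With that, a straight-line (chord) distance that accounts for altitude is just the Euclidean distance between the two converted points. A small sketch, reusing the PathPoint fields above:
// Sketch: altitude-aware chord distance via ECEF.
PathPoint e1 = WGS84ToECEF(point1);
PathPoint e2 = WGS84ToECEF(point2);
double dx = e2.X - e1.X, dy = e2.Y - e1.Y, dz = e2.Z - e1.Z;
double chordDistance = Math.Sqrt(dx * dx + dy * dy + dz * dz);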
Edit 2:
I was asked about some of the other variables in my code:
// RF is the inverse flattening of the WGS84 ellipsoid
public const double RF = 298.257223563;
// A is the equatorial radius (semi-major axis) of the earth in meters
public const double A = 6378137.0;
LatLonAltTransformer is a class I used to convert from LatLonAlt coordinates to ECEF coordinates and defines the constants above.
You likely don't care about altitude for large 2D distance separations. So if the distance you get is over, say, 20 (or perhaps 50) km, then the altitude difference hardly matters (it depends on your use case). Under, say, 20 km, combine it with the altitude difference using simple Pythagorean addition. Feed it in smoothly.
Distance between two geo-points?
I would suggest that over any distance where using the WGS84 ellipsoid would give you significantly better accuracy, the difference in altitude won't matter. And over any distance where the difference in altitude matters, you should probably just use a straight-line approximation.
In order to do this, the first issue you have to address is how to define the change in altitude. The normal equations work because they operate on a two-dimensional surface; adding the third dimension means that the simple definition of shortest distance is no longer applicable. For example, now that the third dimension is 'in play', your shortest distance could cut through the original ellipsoid.
It's a bit quick and dirty, but your best solution might be to assume that the rate of change of altitude is constant along the original 2D path on the ellipsoid. You can then calculate the 2D distance as a length, work out the altitude change, and simply use Pythagoras to calculate the increased length, with the 2D distance as one side of the triangle and the altitude change as the other.
For starters, you need a model that tells you how the altitude changes on the line between the two points. Without such a model, you don't have any consistent definition of the distance between two points.
If you had a linear model (traveling 50% of the distance between the points also means you went upwards through 50% of the altitude change), then you can probably pretend that the entire thing is a right triangle; i.e. you act as though the world is flat for the purposes of determining how the altitude shift affects the distance. The distance along the ground is the base, the altitude change is the height of the triangle, and the hypotenuse is your estimated true travel distance from point to point.
If you want to refine that further, then you can note that the model above is perfectly good for infinitesimal distances, which means that you can iterate across individual deltas of the distance, calculus-style, each time using the current altitude to compute the ground distance and then using the same trigonometric ratio to compute the altitudinal-change contribution to the distance traveled. I'd probably do this in a for() loop with 10 to 100 pieces of the segment, and possibly by trial and error figure out the number of pieces required to get within epsilon of the true value. It would also be possible to work out the line integral to figure out the actual distance between the two points under this model.
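A minimal sketch of that piecewise idea in C#, assuming a linear altitude model and some existing 2D geodesic function (here called GroundDistance2D, which is not defined in this answer):
// Sketch: approximate the 3D distance by slicing the path into small pieces
// and applying Pythagoras to each piece (linear altitude model assumed).
static double DistanceWithAltitude(double lat1, double lon1, double alt1,
                                   double lat2, double lon2, double alt2,
                                   int pieces = 100)
{
    double total = 0;
    double dAlt = (alt2 - alt1) / pieces;   // constant altitude change per piece

    for (int i = 0; i < pieces; i++)
    {
        double t0 = (double)i / pieces;
        double t1 = (double)(i + 1) / pieces;

        // Interpolate the endpoints of this slice along the 2D path (crude linear interpolation).
        double la0 = lat1 + (lat2 - lat1) * t0, lo0 = lon1 + (lon2 - lon1) * t0;
        double la1 = lat1 + (lat2 - lat1) * t1, lo1 = lon1 + (lon2 - lon1) * t1;

        // GroundDistance2D is assumed: any 2D geodesic distance (e.g. the Meeus code above).
        double ground = GroundDistance2D(la0, lo0, la1, lo1);
        total += Math.Sqrt(ground * ground + dAlt * dAlt);
    }
    return total;
}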
