I have a path, made up of n 3D coordinates, all connected in series, which can be seen in this diagram.
I want to find the shortest distance between my point and the polyline. I can calculate the distance from a point to a single line segment, but I want to do it for a more complicated path.
Is there an algorithm for this that does not rely on testing every segment-to-point distance and keeping the minimum? Any pointers in the right direction would be great!
This is for a games project where I want to calculate the distance of the player from a river that exists in the game. The river will be represented with polyline segments.
Thanks
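For what it's worth, the usual way to avoid testing every segment is spatial pruning: precompute cheap bounds per segment and skip any segment that provably cannot beat the best distance found so far. This is a sketch of that idea (in JavaScript, 2D for brevity; the function names are made up for illustration and are not part of the answer below):

```javascript
// Standard point-to-segment distance: project the point onto the segment,
// clamp the projection parameter t to [0, 1], measure to the clamped point.
function distToSegment(p, a, b) {
  const abx = b[0] - a[0], aby = b[1] - a[1];
  const apx = p[0] - a[0], apy = p[1] - a[1];
  const len2 = abx * abx + aby * aby;
  const t = len2 === 0 ? 0 : Math.max(0, Math.min(1, (apx * abx + apy * aby) / len2));
  const dx = p[0] - (a[0] + t * abx), dy = p[1] - (a[1] + t * aby);
  return Math.hypot(dx, dy);
}

// Distance from a point to an axis-aligned bounding box (0 if inside).
function distToBox(p, box) {
  const dx = Math.max(box.minX - p[0], 0, p[0] - box.maxX);
  const dy = Math.max(box.minY - p[1], 0, p[1] - box.maxY);
  return Math.hypot(dx, dy);
}

function distanceToPath(p, path) {
  // Precompute one AABB per segment (reusable across many queries).
  const boxes = [];
  for (let i = 1; i < path.length; i++) {
    boxes.push({
      minX: Math.min(path[i - 1][0], path[i][0]),
      maxX: Math.max(path[i - 1][0], path[i][0]),
      minY: Math.min(path[i - 1][1], path[i][1]),
      maxY: Math.max(path[i - 1][1], path[i][1]),
    });
  }
  let best = Infinity;
  for (let i = 1; i < path.length; i++) {
    if (distToBox(p, boxes[i - 1]) >= best) continue; // cannot improve, skip
    best = Math.min(best, distToSegment(p, path[i - 1], path[i]));
  }
  return best;
}
```

For a river queried every frame, the boxes (or a proper structure such as an R-tree or a grid over the segments) can be built once at load time, so the per-frame cost drops well below one distance test per segment.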
/**
* Calculates the Euclidean distance from a point to a segmented (polyline) path.
*
* @param v the point from which the distance is measured
* @param path the array of points which, when sequentially joined by line segments, form a path
* @return distance from v to the closest of the path-forming line segments
*
* @author Afonso Santos
*/
public static double distanceToPath( final R3 v, final R3[] path )
{
double minDistance = Double.MAX_VALUE ;
for (int pathPointIdx = 1 ; pathPointIdx < path.length ; ++pathPointIdx)
{
final double d = distanceToSegment( v, path[pathPointIdx-1], path[pathPointIdx] ) ;
if (d < minDistance)
minDistance = d ;
}
return minDistance;
}
/**
* Calculates the Euclidean distance from a point to a line segment.
*
* @param v the point
* @param a start of the line segment
* @param b end of the line segment
* @return distance from v to the line segment [a,b]
*
* @author Afonso Santos
*/
public static double distanceToSegment( final R3 v, final R3 a, final R3 b )
{
final R3 ab = b.sub( a ) ;
final R3 av = v.sub( a ) ;
if (av.dot(ab) <= 0.0) // Point is lagging behind start of the segment, so perpendicular distance is not viable.
return av.modulus( ) ; // Use distance to start of segment instead.
final R3 bv = v.sub( b ) ;
if (bv.dot(ab) >= 0.0) // Point is advanced past the end of the segment, so perpendicular distance is not viable.
return bv.modulus( ) ; // Use distance to end of the segment instead.
// Point is within the line segment's start/finish boundaries, so perpendicular distance is viable.
return (ab.cross( av )).modulus() / ab.modulus() ; // Perpendicular distance of point to segment.
}
The rest of the (self-contained, no external dependencies) R3 3D vector algebra Java package is here:
https://gist.github.com/reciprocum/4e3599a9563ec83ba2a63f5a6cdd39eb
part of the open source library
https://sourceforge.net/projects/geokarambola/
I am working on implementing Alchemy AO and reading through their paper, where they mention to sample each point by: considering a
Disk of radius r and center C that is parallel to the image plane,
select a screen-space point Q uniformly at random on its projection, and
then read a depth or position buffer to find the camera-space scene
point P = (xp, yp, z(Q)) on that ray.
I am wondering how you would go about selecting a screen-space point in this manner? I have made an attempt below, but since my result appears quite incorrect, I think it's the wrong approach.
vec3 Position = depthToPosition(uvCoords);
int turns = 16;
float screen_radius = (sampleRadius * 100.0 / Position.z); //ball around the point
const float disk = (2.0 * PI) / turns;
ivec2 px = ivec2(gl_FragCoord.xy);
float phi = float(30 * px.x ^ px.y + 10 * px.x * px.y); // per-pixel hash for a random rotation angle at each pixel
for (int i = 0; i < samples; ++i)
{
float theta = disk * float(i+1) + phi;
vec2 samplepoint = vec2(cos(theta), sin(theta));
}
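One detail worth noting about the quoted sampling step: picking a point uniformly at random on a disk needs the radius drawn as R*sqrt(u), not R*u, or the samples bunch up near the center. The loop above also never scales samplepoint by any radius at all; it only produces a unit direction. A sketch of uniform disk sampling (JavaScript for illustration; u and v stand in for per-pixel random numbers in [0,1)):

```javascript
// Pick a point uniformly on a disk of radius R.
// r = R * sqrt(u) keeps the density uniform in *area*, because the area
// inside radius r grows like r^2; r = R * u would oversample the center.
function uniformDiskSample(R, u, v) {
  const r = R * Math.sqrt(u);
  const theta = 2 * Math.PI * v;
  return [r * Math.cos(theta), r * Math.sin(theta)];
}
```

In a spiral pattern like the attempt above, the equivalent fix is to give sample i a radius proportional to sqrt((i + 1) / samples) times screen_radius, and to actually add the scaled offset to the fragment's screen position before reading the depth buffer.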
Does anyone know if I am able to use the Google Maps Circle, Rectangle and Polygon classes in Node.js? In the frontend it is easy with the Google Maps JavaScript SDK, but I can't figure out how to get a hold of this library within Node.js.
I need to be able to check if points are within bounds, something along the lines of:
const location = new google.maps.LatLng(lat, lng);
const circle = new google.maps.Circle({
center: area.center,
radius: area.radius,
});
const doesContain = circle.getBounds().contains(location);
Thanks ahead!
Alright, after giving it some thought I realized it's easier to write my own code for checking whether a geometry contains a point than to depend on the Google Maps library to do so.
Although this does not offer all the functionality the Google Maps SDK offers, it does solve the geometry problem.
For anyone else looking for other Google Maps SDK functionality, check out this Node.js Client for Google Maps Services. It does not, however, include the geometry functions I was looking for.
Solution
Without further ado here is my code:
class Circle {
/**
* Circle constructor
* @param {array} center Center coordinate [lat, lng]
* @param {number} radius Radius of the circle in meters
*/
constructor(center, radius) {
this.name = "Circle";
this.center = center;
this.radius = radius;
}
/**
* Checks if a point is within the circle
* @param {array} point Coordinates of a point [lat,lng]
* @returns true if point is within, false otherwise
*/
contains(point) {
const { center, radius } = this;
const distance = this.distance(center, point);
if (distance > radius) return false;
return true;
}
/**
* Calculate the distance between two points (in meters)
* @param {array} p1 [lat,lng] point 1
* @param {array} p2 [lat,lng] point 2
* @returns Distance between the points in meters
*/
distance(p1, p2) {
var R = 6378.137; // Radius of earth in KM
var dLat = (p2[0] * Math.PI) / 180 - (p1[0] * Math.PI) / 180;
var dLon = (p2[1] * Math.PI) / 180 - (p1[1] * Math.PI) / 180;
var a =
Math.sin(dLat / 2) * Math.sin(dLat / 2) +
Math.cos((p1[0] * Math.PI) / 180) *
Math.cos((p2[0] * Math.PI) / 180) *
Math.sin(dLon / 2) *
Math.sin(dLon / 2);
var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
var d = R * c;
return d * 1000; // meters
}
}
class Rectangle {
/**
* Rectangle constructor
* @param {array} sw South-west coordinate of the rectangle [lat,lng]
* @param {array} ne North-east coordinate of the rectangle [lat,lng]
*/
constructor(sw, ne) {
this.name = "Rectangle";
this.sw = sw;
this.ne = ne;
}
/**
* Checks if a point is within the rectangle
* @param {array} point Coordinates of a point [lat,lng]
* @returns true if point is within, false otherwise
*/
contains(point) {
const { sw, ne } = this;
const x = point[0];
const y = point[1];
if (x < sw[0] || x > ne[0] || y < sw[1] || y > ne[1]) return false;
return true;
}
}
class Polygon {
/**
* Polygon constructor
* @param {array} points Array of vertices/points of the polygon [lat,lng]
*/
constructor(points) {
this.name = "Polygon";
this.points = points;
}
/**
*
* @returns {obj} Returns the coordinates of the min/max bounds that surround the polygon
* (south-west coordinate, north-east coordinate, in [lat,lng] format)
*/
getBounds() {
const { points } = this;
let arrX = [];
let arrY = [];
for (let i in points) {
arrX.push(points[i][0]);
arrY.push(points[i][1]);
}
return {
sw: [Math.min.apply(null, arrX), Math.min.apply(null, arrY)],
ne: [Math.max.apply(null, arrX), Math.max.apply(null, arrY)],
};
}
/**
* Checks if a point is within the polygon
* @param {array} point Coordinates of a point [lat,lng]
* @returns true if point is within, false otherwise
*/
contains(point) {
const x = point[0];
const y = point[1];
const bounds = this.getBounds();
// Check if point P lies within the min/max boundary of our polygon
if (x < bounds.sw[0] || x > bounds.ne[0] || y < bounds.sw[1] || y > bounds.ne[1])
return false;
let intersect = 0;
const { points } = this;
// Geofencing method (aka Even–odd rule)
// See more at: https://en.wikipedia.org/wiki/Even%E2%80%93odd_rule
// Now for each path of our polygon we'll count how many times our imaginary
// line crosses our paths, if it crosses even number of times, our point P is
// outside of our polygon, odd number our point is within our polygon
for (let i = 0; i < points.length; i++) {
// Check if point P lies on a vertex of our polygon
if (x === points[i][0] && y === points[i][1]) return true;
let j = i !== points.length - 1 ? i + 1 : 0;
// Check if Py (y-component of our point P) is within the y-boundary of our path
if (
(points[i][1] < points[j][1] && y >= points[i][1] && y <= points[j][1]) ||
(points[i][1] > points[j][1] && y >= points[j][1] && y <= points[i][1])
) {
// Check if Px (x-component of our point P) crosses our path
let sx =
points[i][0] +
((points[j][0] - points[i][0]) * (y - points[i][1])) /
(points[j][1] - points[i][1]);
if (sx >= x) intersect += 1;
}
}
return intersect % 2 !== 0;
}
}
module.exports = { Circle, Rectangle, Polygon };
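As a sanity check, the distance method above is the haversine formula, so one degree of latitude on a sphere of radius 6378.137 km should come out to roughly 111.3 km. A standalone restatement of the same formula (duplicated here only so the snippet runs on its own):

```javascript
// Standalone copy of the haversine distance used by Circle.distance above.
function haversineMeters(p1, p2) {
  const R = 6378.137; // Earth radius in km
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(p2[0] - p1[0]);
  const dLon = toRad(p2[1] - p1[1]);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(p1[0])) * Math.cos(toRad(p2[0])) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a)) * 1000; // meters
}
```

haversineMeters([0, 0], [1, 0]) evaluates to about 111319 meters, which matches the expected arc length R * (pi / 180) for one degree.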
Explanation
The Circle and Rectangle classes are pretty straightforward; it's trivial to determine whether a point lies within their boundaries. The Polygon class is a bit more complicated, for obvious reasons.
The method used here to determine whether a point P is within a polygon is called geofencing (aka the even–odd rule), a common method in geospatial analysis.
Step 1
First we check if the point P falls within the min/max boundaries of the polygon (image 1); if it doesn't, we return false, and the problem is solved.
Image 1 -- Polygon boundaries, P1 is within the polygon boundaries, P2 is not.
Step 2
Then we check if the point lies on a vertex (point) of the polygon; if it does, we return true, problem solved. (Image 2)
Image 2 -- Polygon boundaries, point P is on a vertex, return true.
Step 3
This next step is the most gratifying one. By now we know the point is within the polygon's min/max boundaries (from step 1), but we don't know whether it's inside the polygon itself. The way to solve this is to cast an imaginary line from the point in any direction: if it crosses the polygon's paths an even number of times, the point is outside the polygon; if it crosses an odd number of times, the point is within the polygon. Like so:
Image 3 -- An imaginary line from P1 crosses the polygon paths an odd number of times (3 times), so it's within the polygon. An imaginary line from P2 crosses an even number of times (4 times), so it lies outside of the polygon.
Since we can pick any direction we want for the imaginary line, we'll cast it along the x-axis to simplify things, like so:
Image 4 -- Casting the imaginary line from point P parallel to the x-axis to simplify determining how many times it intersects our polygon.
To determine how many times the imaginary line intersects our polygon, we have to check each path of the polygon at a time. To do this, we break it down into two steps (see image 5 for references):
For each segment/path of the polygon we check if Py (the y-component of our point P) is within the boundaries of the path in question (Y1 and Y2). If it is not, we know our point does not intersect that specific path and we can move on to the next one. If it is within the path's y-boundaries, then we have to check whether it crosses our path in the x-direction (next step).
Assuming the step before is true, to check the intersection in the x-direction we calculate the equation of the path (using the line equation y2 - y1 = m(x2 - x1)) and plug in our Py component to solve for the intersection point (in my code I call this Sx). Then we check whether Sx is greater than or equal to Px; if so, our imaginary line intersects the path in the positive x-direction.
It's important to note that the imaginary line starts at our point P and we only count intersections in the direction we originally picked, in this case x-axis+. This is why Sx has to be greater than or equal to Px, otherwise the test fails.
Image 5 -- We break down each path of the polygon to determine the number of intersections.
Once this path is done we move on to the next one, and so on. In this case the line crosses our paths 3 times, and therefore we know the point is within our polygon.
It's a very clever and simple method if you think about it; it works for any shape, which is truly amazing.
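The per-edge test described in the steps above can be condensed into a tiny standalone helper; it mirrors the y-range check and the Sx computation from the Polygon.contains code:

```javascript
// Does a ray from P in the +x direction cross edge A-B?
// (Mirrors the y-range check and the Sx computation from Polygon.contains.)
function rayCrossesEdge(P, A, B) {
  const [x, y] = P;
  const [ax, ay] = A;
  const [bx, by] = B;
  const inYRange =
    (ay < by && y >= ay && y <= by) || (ay > by && y >= by && y <= ay);
  if (!inYRange) return false;             // P's y is outside the edge's span
  const sx = ax + ((bx - ax) * (y - ay)) / (by - ay); // line equation solved for x
  return sx >= x;                          // crossings only count in the +x direction
}
```

Counting rayCrossesEdge over every edge of the polygon and taking the parity of the total is exactly the even–odd rule.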
Read more
https://en.wikipedia.org/wiki/Even%E2%80%93odd_rule
Examples
Example 1 - Simple shapes
const p = new Polygon([
[-3, 3],
[-4, 1],
[-3, 0],
[-2, -1],
[0, 0],
[3, 2],
[0, 1],
[-1, 4],
]);
console.log("Contains: ", p.contains([-1, 1])); // returns true
JSFiddle 1
Example 2 - Complex shapes (overlapping areas)
This method also works for more complex shapes, where the polygon coordinates create overlapping areas that cancel each other out.
const p = new Polygon([
[-2, 0],
[2, 0],
[2, 4],
[-2, 4],
[-2, 0],
[0, 2],
[2, 0],
[0, -2],
[-2, 0],
]);
console.log("Contains: ", p.contains([0, 1])); // returns false
JSFiddle 2
Side note
If you need to quickly plot points just to get a view of a shape/grid, this plotting tool helped a lot to get a visual of what's going on. Very often I thought my code had a bug when in fact my coordinates were skewed and the code was correct.
https://www.desmos.com/calculator
I only wish it let you draw lines between points. Either way I found it helpful.
I would like to get the intersection point of a line (defined by a vector and origin) on a triangle.
My engine uses a right-handed coordinate system, with X pointing forward, Y pointing left and Z pointing up.
---- Edit ----
With Antares's help, I convert my points to engine space with:
p0.x = -pt0.y;
p0.y = pt0.z;
p0.z = pt0.x;
But I don't know how to do the same with the direction vector.
I use the function from this Stack Overflow question; the original poster used this tutorial.
First we look for the distance t from origin to intersection point, in order to find its coordinates.
But I've got a negative t, and the code returns true when the ray is outside the triangle. I placed it outside visually.
It sometimes returns false when I'm inside the triangle.
Here is the function I use to get the intersection point; I already checked that it works with 'classic' values, as in the original post.
float kEpsilon = 0.000001;
V3f crossProduct(V3f point1, V3f point2){
V3f vector;
vector.x = point1.y * point2.z - point2.y * point1.z;
vector.y = point2.x * point1.z - point1.x * point2.z;
vector.z = point1.x * point2.y - point1.y * point2.x;
return vector;
}
float dotProduct(V3f dot1, V3f dot2){
float dot = dot1.x * dot2.x + dot1.y * dot2.y + dot1.z * dot2.z;
return dot;
}
//orig: ray origin, dir: ray direction, Triangle vertices: p0, p1, p2.
bool rayTriangleIntersect(V3f orig, V3f dir, V3f p0, V3f p1, V3f p2){
// compute plane's normal
V3f p0p1, p0p2;
p0p1.x = p1.x - p0.x;
p0p1.y = p1.y - p0.y;
p0p1.z = p1.z - p0.z;
p0p2.x = p2.x - p0.x;
p0p2.y = p2.y - p0.y;
p0p2.z = p2.z - p0.z;
// no need to normalize
V3f N = crossProduct(p0p1, p0p2); // N
// Step 1: finding P
// check if ray and plane are parallel ?
float NdotRayDirection = dotProduct(N, dir); // if the result is 0, the function will return the value false (no intersection).
if (fabs(NdotRayDirection) < kEpsilon){ // almost 0
return false; // they are parallel so they don't intersect !
}
// compute d parameter using equation 2
float d = dotProduct(N, p0);
// compute t (equation P=O+tR P intersection point ray origin O and its direction R)
float t = -((dotProduct(N, orig) - d) / NdotRayDirection);
// check if the triangle is behind the ray
//if (t < 0){ return false; } // the triangle is behind
// compute the intersection point using equation
V3f P;
P.x = orig.x + t * dir.x;
P.y = orig.y + t * dir.y;
P.z = orig.z + t * dir.z;
// Step 2: inside-outside test
V3f C; // vector perpendicular to triangle's plane
// edge 0
V3f edge0;
edge0.x = p1.x - p0.x;
edge0.y = p1.y - p0.y;
edge0.z = p1.z - p0.z;
V3f vp0;
vp0.x = P.x - p0.x;
vp0.y = P.y - p0.y;
vp0.z = P.z - p0.z;
C = crossProduct(edge0, vp0);
if (dotProduct(N, C) < 0) { return false; }// P is on the right side
// edge 1
V3f edge1;
edge1.x = p2.x - p1.x;
edge1.y = p2.y - p1.y;
edge1.z = p2.z - p1.z;
V3f vp1;
vp1.x = P.x - p1.x;
vp1.y = P.y - p1.y;
vp1.z = P.z - p1.z;
C = crossProduct(edge1, vp1);
if (dotProduct(N, C) < 0) { return false; } // P is on the right side
// edge 2
V3f edge2;
edge2.x = p0.x - p2.x;
edge2.y = p0.y - p2.y;
edge2.z = p0.z - p2.z;
V3f vp2;
vp2.x = P.x - p2.x;
vp2.y = P.y - p2.y;
vp2.z = P.z - p2.z;
C = crossProduct(edge2, vp2);
if (dotProduct(N, C) < 0) { return false; } // P is on the right side;
return true; // this ray hits the triangle
}
My problem is I get t: -52.603783
intersection point P : [-1143.477295, -1053.412842, 49.525799]
This gives me, relative to a 640x480 texture, the UV point: [-658, 41].
Probably because my engine uses Z pointing up?
My engine uses a right-handed coordinate system, with X pointing forward, Y pointing left and Z pointing up.
You have a slightly incorrect idea of a right handed coordinate system... please check https://en.wikipedia.org/wiki/Cartesian_coordinate_system#In_three_dimensions.
As the name suggests, X is pointing right (right hand's thumb to the right), Y is pointing up (straight index finger) and Z (straight middle finger) is pointing "forward" (actually -Z is forward, and Z is backward in the camera coordinate system).
Actually... your coordinate components are right-handed, but the interpretation of X as forward etc. is unusual.
If you suspect the problem could be with the coordinate system of your engine (OGRE maybe? Plain OpenGL? Or something self-made?), then you need to transform your point and direction coordinates into the coordinate system of your algorithm. The algorithm you presented works in the camera coordinate system, if I am not mistaken. Of course you need to transform the resulting intersection point back to the interpretation you use in the engine.
To flip the direction of a single vector component (e.g. the Z coordinate) you can multiply that component by -1.
Edit:
One more thing: I realized that the algorithm uses directional vectors as well, not just points. The rearranging of components only works for points, not directions, if I recall correctly. Maybe you have to do a matrix multiplication with the CameraView transformation matrix (or its inverse M^-1, or was it the transpose M^T? I am not sure). I can't help you there; I hope you can figure it out or just use trial & error.
My problem is I get t: -52.603783
intersection point P : [-1143.477295, -1053.412842, 49.525799] This give me, relative to a 640X480 texture, the uv point: [-658, 41]
I reckon you think your values are incorrect. Which values do you expect to get for t and the UV coordinates? Which ones would be "correct" for your input?
Hope this gets you started. GL, HF with your project! :)
@GUNNM: Concerning your feedback that you do not know how to handle the direction vector, here are some ideas that might be useful to you.
As I said, there should be a matrix multiplication way. Look for key words like "transforming directional vector with a matrix" or "transforming normals (normal vectors) with a matrix". This should yield something like: "use the transpose of the used transformation matrix" or "the inverse of the matrix" or something like that.
A workaround could be: you can "convert" a directional vector to a point by thinking of a direction as "two points" forming a vector: a starting point and another point which lies in the direction you want to point.
The starting point of your ray you already have available. Now you need to make sure that your directional vector is interpreted as a "second point", not as a "directional vector".
If your engine handles a ray like in the first case you would have:
Here is my starting point (0,0,0) and here is my directional vector (5,6,-7) (I made those numbers up and take the origin as starting point to have a simple example). So this is just the usual "start + gaze direction" case.
In the second case you would have:
Here is my start at (0,0,0) and my second point is a point on my directional vector (5,6,-7), e.g. any t*direction. For t=1 this gives exactly the point your directional vector points to when it is considered a vector (with the start point being the origin (0,0,0)).
Now you need to check how your algorithm is handling that direction. If it does somewhere ray=startpoint+direction, then it interprets it as point + vector, resulting in a movement shift of the starting point while keeping the orientation and direction of the vector.
If it does ray=startpoint-direction then it interprets it as two points from which a directional vector is formed by subtracting.
To make a directional vector from two points you usually just subtract them. This gives a "pure direction" though, without a defined orientation (which can be +t or -t). So if you need this direction to be fixed, you may take the absolute value of your "vector sliding value" t in later computations, for example (maybe not the best/fastest way of doing it).
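A numeric check of the two-points idea above, assuming the specific axis remap from the question (p0.x = -pt0.y; p0.y = pt0.z; p0.z = pt0.x). Since that remap is purely linear (no translation), remapping the direction directly gives the same result as remapping start and start+direction and subtracting; with a full affine transform (rotation plus translation) it is exactly that subtraction which cancels the translation. JavaScript sketch for illustration:

```javascript
// The asker's axis remap: engine space (x fwd, y left, z up) -> algorithm space.
function remapPoint(p) {
  return [-p[1], p[2], p[0]];
}

// Direction via "two points": remap start and start+dir, then subtract.
function remapDirection(start, dir) {
  const a = remapPoint(start);
  const b = remapPoint([start[0] + dir[0], start[1] + dir[1], start[2] + dir[2]]);
  return [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
}
```

Because remapPoint has no translation part, remapDirection(start, dir) equals remapPoint(dir) for any start; the two-points detour only becomes necessary once the transform includes a translation.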
I have a robotic arm composed of 2 servo motors. I am trying to calculate inverse kinematics such that the arm is positioned in the middle of a canvas and can move to all possible points in both directions (left and right). This is an image of the system: Image. The first servo moves 0-180 (anti-clockwise). The second servo moves 0-180 (clockwise).
Here is my code:
int L1 = 170;
int L2 = 230;
Vector shoulderV;
Vector targetV;
shoulderV = new Vector(0,0);
targetV = new Vector(0,400);
Vector difference = Vector.Subtract(targetV, shoulderV);
double L3 = difference.Length;
if (L3 > 400) { L3 = 400; }
if (L3 < 170) { L3 = 170; }
// a + b is the equivalent of the shoulder angle
double a = Math.Acos((L1 * L1 + L3 * L3 - L2 * L2) / (2 * L1 * L3));
double b = Math.Atan(difference.Y / difference.X);
// S1 is the shoulder angle
double S1 = a + b;
// S2 is the elbow angle
double S2 = Math.Acos((L1 * L1 + L2 * L2 - L3 * L3) / (2 * L1 * L2));
int shoulderAngle = Convert.ToInt16(Math.Round(S1 * 180 / Math.PI));
if (shoulderAngle < 0) { shoulderAngle = 180 - shoulderAngle; }
if (shoulderAngle > 180) { shoulderAngle = 180; }
int elbowAngle = Convert.ToInt16(Math.Round(S2 * 180 / Math.PI));
elbowAngle = 180 - elbowAngle;
Initially, when the system is first started, the arm is straightened with shoulder=90, elbow =0.
When I give positive x values I get correct results on the left side of the canvas. However, I want the arm to move on the right side as well. I do not get correct values when I enter negative values. What am I doing wrong? Do I need an extra servo to reach points on the right side?
Sorry if the explanation is not good. English is not my first language.
I suspect that you are losing a sign when you are using Math.Atan(). I don't know what programming language or environment this is, but check whether it has something like the following:
Instead of this line:
double b = Math.Atan(difference.Y / difference.X);
Use something like this:
double b = Math.Atan2(difference.Y, difference.X);
When difference.Y and difference.X have the same sign, dividing them results in a positive value. That prevents you from differentiating between the case when both are positive and the case when both are negative. You cannot, for example, differentiate between 30 and 210 degrees.
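The ambiguity is easy to see numerically: atan(y/x) maps the points (1,1) and (-1,-1) to the same angle, while atan2 keeps them apart (JavaScript here for illustration; C#'s Math.Atan2 has the same argument order and semantics):

```javascript
const a1 = Math.atan(1 / 1);    // point (x=1,  y=1)
const a2 = Math.atan(-1 / -1);  // point (x=-1, y=-1): same ratio, same angle!
const b1 = Math.atan2(1, 1);    // atan2(y, x) for (1, 1)   ->  pi/4
const b2 = Math.atan2(-1, -1);  // atan2(y, x) for (-1, -1) -> -3*pi/4
```

atan2 inspects the signs of both arguments, so it returns angles over the full (-pi, pi] range, which is exactly what the shoulder angle needs when the target moves to the other side of the canvas.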
I would like to draw a sphere in pure OpenGL ES 2.0 without any engines. I wrote the following code:
int GenerateSphere (int Slices, float radius, GLfloat **vertices, GLfloat **colors) {
srand(time(NULL));
int i=0, j = 0;
int Parallels = Slices / 2 ; // latitude only spans pi (half a turn), so use half as many parallels as slices
float tempColor = 0.0f;
int VerticesCount = ( Parallels + 1 ) * ( Slices + 1 );
float angleStep = (2.0f * M_PI) / ((float) Slices);
// Allocate memory for buffers
if ( vertices != NULL ) {
*vertices = malloc ( sizeof(GLfloat) * 3 * VerticesCount );
}
if ( colors != NULL) {
*colors = malloc( sizeof(GLfloat) * 4 * VerticesCount);
}
for ( i = 0; i < Parallels+1; i++ ) {
for ( j = 0; j < Slices+1 ; j++ ) {
int vertex = ( i * (Slices + 1) + j ) * 3;
(*vertices)[vertex + 0] = radius * sinf ( angleStep * (float)i ) *
sinf ( angleStep * (float)j );
(*vertices)[vertex + 1] = radius * cosf ( angleStep * (float)i );
(*vertices)[vertex + 2] = radius * sinf ( angleStep * (float)i ) *
cosf ( angleStep * (float)j );
if ( colors ) {
int colorIndex = ( i * (Slices + 1) + j ) * 4;
tempColor = (float)(rand()%100)/100.0f;
(*colors)[colorIndex + 0] = 0.0f;
(*colors)[colorIndex + 1] = 0.0f;
(*colors)[colorIndex + 2] = 0.0f;
(*colors)[colorIndex + (rand()%4)] = tempColor;
(*colors)[colorIndex + 3] = 1.0f;
}
}
}
return VerticesCount;
}
I'm drawing it using the following code:
glDrawArrays(GL_TRIANGLE_STRIP, 0, userData->numVertices);
where userData->numVertices is the VerticesCount returned by GenerateSphere.
But what appears on screen is a series of triangles, not a sphere approximation!
I think I need to enumerate the vertices and use the OpenGL ES 2.0 function glDrawElements() (with an array containing the vertex indices).
How can I draw a sphere approximation? How do I specify the vertex order (indices, in OpenGL ES 2.0 terms)?
Before you start with anything in OpenGL ES, here is some advice:
Avoid bloating CPU/GPU performance
Removing intense calculation cycles by generating the shapes offline in another program will surely help. These programs also provide additional details about the shapes/meshes, apart from exporting the resultant collection of points [x,y,z] comprising the shapes.
I went through all this pain a while back, because I kept searching for algorithms to render spheres and then trying to optimize them. I just want to save you time in the future. Just use Blender, and then your favorite programming language to parse the obj files exported from Blender (I use Perl). Here are the steps to render a sphere (use glDrawElements, because the obj file contains the array of indices):
1) Download and install Blender.
2) From the menu, add sphere and then reduce the number of rings and segments.
3) Select the entire shape and triangulate it.
4) Export an obj file and parse it for the meshes.
You should be able to grasp the logic to render sphere from this file: http://pastebin.com/4esQdVPP. It is for Android, but the concepts are same.
Hope this helps.
I struggled with spheres and other geometric shapes. I worked at it a while and created an Objective-C class to generate coordinates, normals, and texture coordinates, using both indexed and non-indexed mechanisms; the class is here:
http://www.whynotsometime.com/Why_Not_Sometime/Code_Snippets.html
To see the resulting triangles representing the geometry, it is interesting to reduce the resolution (set the resolution property before generating the coordinates). Also, you can use GL_LINE_STRIP instead of GL_TRIANGLES to see a bit more.
I agree with the comment from wimp that, since calculating the coordinates generally happens once, not many CPU cycles are used. Also, sometimes one does want to draw only a ball or a world or...
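For completeness, the index generation the original question asks about can be sketched directly: the vertex grid from GenerateSphere has (Parallels+1) rows of (Slices+1) vertices, and each grid cell becomes two triangles for glDrawElements with GL_TRIANGLES. A JavaScript sketch of the index layout (the same loop translates one-to-one to C):

```javascript
// Build triangle indices for a (parallels+1) x (slices+1) vertex grid,
// two triangles per grid cell, suitable for GL_TRIANGLES + glDrawElements.
function sphereIndices(parallels, slices) {
  const indices = [];
  for (let i = 0; i < parallels; i++) {
    for (let j = 0; j < slices; j++) {
      const row0 = i * (slices + 1) + j;       // vertex at (i, j)
      const row1 = (i + 1) * (slices + 1) + j; // vertex at (i+1, j)
      indices.push(row0, row1, row0 + 1);      // first triangle of the cell
      indices.push(row0 + 1, row1, row1 + 1);  // second triangle of the cell
    }
  }
  return indices;
}
```

With GLushort indices this array is uploaded once into an element buffer, and the draw call becomes glDrawElements(GL_TRIANGLES, indices.length, GL_UNSIGNED_SHORT, 0) instead of the GL_TRIANGLE_STRIP call from the question.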