Reproject rectangle from latlon to metres - geometry

I have this bounding box expressed in latlong:
POLYGON ((51.2913 -13.5599, 51.2913 13.1589,
35.0325 13.1589, 35.0325 -13.5599, 51.2913 -13.5599))
widthDeg="26.7188" heightDeg="16.2588" areaDeg="434.4156254400001"
I'd like to get the equivalent width/height/area in metres.
I found this formula:
1 degree of longitude = 60 * 1.852 km * cos (latitude)
How can I use this to translate the bounding box? Is this a valid approximation?
Thanks for any hints!
Mulone

The width in metres may be different at the north and south sides of the bounding box; unless your box is guaranteed to be quite small in latitude, you probably don't really want to try to describe it with a height and width in metres.
The area is well defined, though. The formula for the area of a latitude-longitude rectangle on a sphere is |sin(lat1) - sin(lat2)| * |long1 - long2| * R^2, with the longitudes measured in radians (multiply by pi/180 if they're in degrees, and convert the latitudes to radians before passing them to the sine function as well). Here R is the radius of the earth: approximately 6400 km, or more accurately 6371 km. If you think you need it more accurately than that, remember that the earth isn't really a sphere and think again.
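A minimal JavaScript sketch of that area formula, assuming the first number in each coordinate pair of the quoted polygon is latitude and the second is longitude (which matches the quoted widthDeg/heightDeg), and taking R as the mean Earth radius:

// Area of a lat/long rectangle on a sphere:
// |sin(lat1) - sin(lat2)| * |lon1 - lon2| * R^2, with lon1, lon2 in radians.
var R = 6371000; // mean Earth radius in metres

function toRad(deg) { return deg * Math.PI / 180; }

function sphericalRectArea(lat1, lat2, lon1, lon2) {
    return Math.abs(Math.sin(toRad(lat1)) - Math.sin(toRad(lat2))) *
           Math.abs(toRad(lon1) - toRad(lon2)) * R * R;
}

// Bounding box from the question: latitudes 35.0325..51.2913, longitudes -13.5599..13.1589
var areaM2 = sphericalRectArea(51.2913, 35.0325, -13.5599, 13.1589);

// The width in metres depends on which latitude you measure it at; using the
// "60 * 1.852 km * cos(latitude)" rule quoted in the question:
var widthNorthM = 26.7188 * 60 * 1852 * Math.cos(toRad(51.2913));
var widthSouthM = 26.7188 * 60 * 1852 * Math.cos(toRad(35.0325));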

Related

Having the coordinates of the two triangles of a twisted triangle prism, how can I know if a point is inside it?

Here are some examples of twisted triangle prisms.
I want to know if a moving triangle will hit a certain point. That's why I need to solve this problem.
The idea is that a triangle with random coordinates becomes another random triangle, with all of its vertices moving between the two positions.
related: How to determine point/time of intersection for ray hitting a moving triangle?
One of my students made this little animation in Mathematica.
It shows the twisting of a prism to the Schönhardt polyhedron.
See the Wikipedia page for its significance.
It would be easy to determine if a particular point is inside the polyhedron.
But whether it is inside a particular smooth twisting, as in your image, depends on the details (the rate) of the twisting.
Let the bottom triangle lie in the plane z = 0 with rotation angle 0, and let the top triangle have rotation angle Fi. The height of the twisted prism is Hgt.
The rotation angle depends linearly on height, so the layer at height h has rotation angle
a(h) = Fi * h / Hgt
If the point's coordinates are (x, y, z), then shift the point down to z = 0 and rotate its (x, y) coordinates about the rotation axis (rx, ry) by the angle -a(z):
t = -a(z) = -Fi * z / Hgt
xn = rx + (x - rx) * Cos(t) - (y - ry) * Sin(t)
yn = ry + (x - rx) * Sin(t) + (y - ry) * Cos(t)
Then check whether (xn, yn) lies inside the bottom triangle.
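A rough JavaScript sketch of that test, assuming the bottom triangle is given as three [x, y] vertices in the plane z = 0 and the rotation axis passes through (rx, ry); all names here are illustrative:

// Sign of the 2D cross product of (b - a) and (p - a), used for the point-in-triangle test.
function side(ax, ay, bx, by, px, py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// True if (x, y) lies inside (or on the edge of) the triangle t1, t2, t3.
function pointInTriangle(x, y, t1, t2, t3) {
    var d1 = side(t1[0], t1[1], t2[0], t2[1], x, y);
    var d2 = side(t2[0], t2[1], t3[0], t3[1], x, y);
    var d3 = side(t3[0], t3[1], t1[0], t1[1], x, y);
    var hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
    var hasPos = d1 > 0 || d2 > 0 || d3 > 0;
    return !(hasNeg && hasPos);
}

// Is (x, y, z) inside the twisted prism? Un-rotate the point by the twist at height z,
// then test it against the bottom triangle.
function insideTwistedPrism(x, y, z, fi, hgt, rx, ry, tri) {
    if (z < 0 || z > hgt) return false;
    var t = -fi * z / hgt;
    var xn = rx + (x - rx) * Math.cos(t) - (y - ry) * Math.sin(t);
    var yn = ry + (x - rx) * Math.sin(t) + (y - ry) * Math.cos(t);
    return pointInTriangle(xn, yn, tri[0], tri[1], tri[2]);
}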

finding value of a point between measured points on a 2D plane

I'm trying to find the best way to calculate this. On a 2D plane I have fixed points, each with an instantaneous measurement value. The coordinates of these points are known. I want to predict the value at a movable point between these fixed points. The movable point's coordinates will be known, so the distances between the points are known as well.
This could be comparable to temperature readings or elevation in topography. In this case I want to predict the ionospheric TEC at the mobile point from the fixed-point measurements. The fixed-point measurements are smoothed over time; however, I do not want to have to store previous values of the mobile-point estimate in RAM.
Would some sort of gradient function be the way to go here?
This is the same algorithm as interpolating the height of a point inside a triangle.
In your case you don't have z values for heights but some other float value at each triangle vertex; it's the same concept, though, you still treat them as 3D points.
Where you have 3D triangle points p, q, r and a test point pt, the pseudo code is something like this:
Vector3 v1 = q - p;
Vector3 v2 = r - p;
Vector3 n = v1.CrossProduct(v2);
if (n.z != 0)
    return ((n.x * (pt.x - p.x) + n.y * (pt.y - p.y)) / -n.z) + p.z;
As you indicate in your comment to #Phpdevpad, you do have 3 fixed points so this will work.
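A self-contained JavaScript version of that pseudo code (plain {x, y, z} objects stand in for Vector3, and the function name is just illustrative):

// Interpolate the per-vertex value (stored in z) at point pt, using the plane
// through the 3D triangle p, q, r.
function interpolateOnTriangle(p, q, r, pt) {
    var v1 = { x: q.x - p.x, y: q.y - p.y, z: q.z - p.z };
    var v2 = { x: r.x - p.x, y: r.y - p.y, z: r.z - p.z };
    // Cross product v1 x v2 gives the plane normal.
    var n = {
        x: v1.y * v2.z - v1.z * v2.y,
        y: v1.z * v2.x - v1.x * v2.z,
        z: v1.x * v2.y - v1.y * v2.x
    };
    if (n.z === 0) return null; // degenerate triangle
    return (n.x * (pt.x - p.x) + n.y * (pt.y - p.y)) / -n.z + p.z;
}

// Example: three fixed measurement points (value stored in z) and one mobile point.
var value = interpolateOnTriangle(
    { x: 0, y: 0, z: 10.0 },
    { x: 5, y: 0, z: 12.0 },
    { x: 0, y: 5, z: 11.0 },
    { x: 2, y: 2 }
); // 11.2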
You can try contour plots, especially contour lines. Simply use a Delaunay triangulation of the points and linear interpolation along the edges. You can try my PHP implementation at https://contourplot.codeplex.com for geographic maps. Another option is the CONREC algorithm from Paul Bourke.

Why does the accuracy of my longitude latitude translate appear to diminish below the equator?

I received some help with a previous question on how to go about translating my map to given longitude and latitude values, here: How can I use SVG translate to center a d3.js projection to given latitude and longitude values? I wrote a method which pans to the given location at the given scale:
zoomTo: function(point, scale) {
    var lat = -SFMap.projection(point)[0] * scale; // projected x position in pixels
    var lon = -SFMap.projection(point)[1] * scale; // projected y position in pixels
    var x = (SFMap.config.width / 2) + lat;
    var y = (SFMap.config.height / 2) + lon;
    SFMap.g
        .transition()
        .duration(500)
        .attr("transform", "translate(" + x + "," + y + ") scale(" + scale + ")");
}
I can 'zoom' around Europe just fine, but when I move to Jakarta, Indonesia, the center-point lays clearly over the ocean.
London - SFMap.zoomTo([0.1062, 51.5171], 1300)
Jakarta - SFMap.zoomTo([106.7500, 6.1333], 1300);
And, this problem is emphasised if I try Australia - I can't even see it.
I should note that I am using d3 to render a Mercator projection and retrieving longitude and latitude values from Google Search.
Please could someone suggest why this is happening? I am aware that there is a lot of math behind the scenes, but I'd hoped d3 would take care of this.
Edit:
After reading Pablo's comment, I decided to remove my zoomTo() method from the equation and simply test whether d3 could center([106.7500, 6.1333]) correctly on Jakarta at a fixed scale(1000), but it did not; it still dumps you in the ocean.
Embarrassingly, I discovered a silly mistake with my use of longitude and latitude coordinates.
Although I was reversing the order of the coordinates retrieved from Google before passing them to projection([106.7500, 6.1333]), I had not noticed the significance of the orientation (or cardinal direction) that Google was also giving.
6.1333° S, 106.7500° E means Jakarta is 6.1333 degrees South of the equator.
So, my problem occurred because I wasn't passing projection() a negative value for the Southern coordinate - meaning, my projection was always going to be 6.1333 degrees above the equator, when it should have been below.
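In other words, southern latitudes (and western longitudes) need a negative sign before they go into the projection. A small sketch of that conversion; the helper name is made up here:

// Convert a Google-style "value + N/S/E/W" pair into the [longitude, latitude]
// array that d3's projection() expects, applying the sign for S and W.
function toLonLat(latValue, latDir, lonValue, lonDir) {
    var lat = latDir === "S" ? -latValue : latValue;
    var lon = lonDir === "W" ? -lonValue : lonValue;
    return [lon, lat];
}

// Jakarta: 6.1333° S, 106.7500° E  ->  [106.75, -6.1333]
SFMap.zoomTo(toLonLat(6.1333, "S", 106.7500, "E"), 1300);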

Flipping an angle using radians

Hello all you math whizzes out there!
I am struggling with a math problem I am hoping you can help me with. I have calculated an angle of direction using radians. Within OpenGL ES I move my guy by changing my point value as such:
spriteLocation.x -= playerSpeed * cosf(playerRadAngle);
spriteLocation.y -= playerSpeed * sinf(playerRadAngle);
// playerRadAngle is my angle of direction in radians
This works very well to move my sprite in the correct direction. However, I have decided to keep my sprite "locked" in the middle of the screen and move the background instead. This requires me to reverse my calculated angle. If my sprite's direction in radians is equivalent to 90 degrees, I want to convert it to 270 degrees. Again, keeping everything in radians.
I will admit that my knowledge of Trig is poor at best. Is there a way to figure out the opposite angle using radians? I know I could convert my radians into degrees, then add/subtract 180 degrees, then convert back to radians, but I'm looking for something more efficient.
Thanks in advance....
-Scott
Add/subtract pi instead.
You need to add pi and then take the remainder after division by 2*pi (to keep the result within the [0, 2*pi) range).
JavaScript:
function invertAngle(angle) {
    return (angle + Math.PI) % (2 * Math.PI);
}
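For example, an angle of 90 degrees (Math.PI / 2) comes back as 270 degrees (3 * Math.PI / 2):

var opposite = invertAngle(Math.PI / 2); // 4.7123..., i.e. 3 * Math.PI / 2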
object_sprite.rotation = warp_direction - 3.14;

Rotating 3D cube perspective problem

Since I was 13 and playing around with AMOS 3D I've been wanting to learn how to code 3D graphics. Now, 10 years later, I finally think I have accumulated enough maths to give it a go.
I have followed various tutorials, and defined screenX (and screenY, equivalently) as
screenX = (pointX * cameraX) / distance
(Plus offsets and scaling.)
My problem is with what the distance variable actually refers to. I have seen distance being defined as the difference in z between the camera and the point. However, that cannot be completely right, since x and y affect the actual distance from the camera to the point just as z does. I implemented distance as the actual distance, but the result gives a somewhat skewed perspective, as if it had "too much" perspective.
My "actual distance" implementation was along the lines of:
distance = new Vector(pointX, pointY, cameraZ - pointZ).magnitude()
Playing around with the code, I added an extra variable to my equation, a perspectiveCoefficient as follows:
distance = new Vector(pointX * perspectiveCoefficient,
pointY * perspectiveCoefficient, cameraZ - pointZ).magnitude()
For some reason that is beyond me, I tend to get the best result setting the perspectiveCoefficient to 1/sqrt(2).
My 3D test cube is at http://vega.soi.city.ac.uk/~abdv866/3dcubetest/3dtest.svg. (Tested in Safari and FF.) It prompts you for a perspectiveCoefficient, where 0 gives a perspective without taking x/y distance into consideration, and 1 gives you a perspective where x, y and z distance is equally considered. It defaults to 1/sqrt(2). The cube can be rotated about x and y using the arrow keys. (For anyone interested, the relevant code is in update() in the View.js file.)
Grateful for any ideas on this.
Usually, projection is done on the Z=0 plane from an eye position behind this plane. The projected point is the intersection of the line (Pt,Eye) with the Z=0 plane. At the end you get something like:
screenX = scaling * pointX / (1 + pointZ/eyeDist)
screenY = scaling * pointY / (1 + pointZ/eyeDist)
I assume here the camera is at (0,0,0) and eye at (0,0,-eyeDist). If eyeDist becomes infinite, you obtain a parallel projection.
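A small JavaScript sketch of that projection; eyeDist, scaling and the screen offsets are just the symbols from the formulas above, with illustrative values in the example:

// Project a 3D point onto the Z = 0 plane, with the eye at (0, 0, -eyeDist).
// As eyeDist grows towards infinity this approaches a parallel projection.
function project(point, eyeDist, scaling, offsetX, offsetY) {
    var d = 1 + point.z / eyeDist; // similar-triangles factor
    return {
        x: offsetX + scaling * point.x / d,
        y: offsetY + scaling * point.y / d
    };
}

// Example: a cube vertex at (1, 1, 3) with eyeDist = 5, scaling = 100, centred at (320, 240).
var screen = project({ x: 1, y: 1, z: 3 }, 5, 100, 320, 240);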
