How the KML LineString method works

How is a Linestring created? I mean, are the points connected one after another or is there an interpolation or other specific method that connects them?

The type of interpolation used for the kml:LineString element depends on the values of the kml:altitudeMode and kml:tessellate elements. If the kml:altitudeMode value is not clampToGround (i.e., absolute or relativeToGround) then the interpolation between two consecutive points is a straight line segment in the 3D WGS 84 geocentric coordinate reference system. For long distances this can go through the surface of the earth.
If kml:altitudeMode is clampToGround, then each point is first projected onto the terrain surface along a line through the earth's center of mass, and the interpolation then runs between the projected control points as a straight line segment in the 3D WGS 84 geocentric coordinate reference system.
Details on interpolation can be found in the KML 2.2 OGC specification (OGC 07-147r2). See section 6.3.2.
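To make the geocentric interpolation concrete, here is a small Python sketch (mine, not from the specification) that converts two points to WGS 84 geocentric (ECEF) coordinates and shows that the straight-line midpoint between two distant surface points lies well below the earth's surface:

import math

# Sketch only: WGS 84 ellipsoid constants and a geodetic-to-ECEF conversion.
A = 6378137.0              # semi-major axis (metres)
F = 1 / 298.257223563      # flattening
E2 = F * (2 - F)           # first eccentricity squared

def to_ecef(lat_deg, lon_deg, alt_m):
    # Convert geodetic lat/lon/alt to geocentric x, y, z (metres).
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + alt_m) * math.sin(lat)
    return x, y, z

# Two points far apart, both at zero altitude.
p0 = to_ecef(0.0, 0.0, 0.0)
p1 = to_ecef(0.0, 90.0, 0.0)

# The straight-line (geocentric) midpoint lies deep inside the earth:
mid = tuple((a + b) / 2 for a, b in zip(p0, p1))
print(math.sqrt(sum(c * c for c in mid)))  # ~4.5e6 m, far below the ~6.378e6 m surface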

Related

Create a .stl file from a collection of points

So the software I am using accepts 3D objects in the form of contours or .stl files. The contours I have lie in planes of constant z (each plane has a unique z). I have had to modify the contours for my experiment, and now the contours no longer have a unique z per plane (they are slightly tilted with respect to the z=0 plane).
The points represent the edges of the 3D object. What would be the best way to take this collection of points and create a .stl file?
I am relatively new to working with python and 3D objects, so any help, pointers or suggestions would be much appreciated.
Edit: I have the simplices and vertices from Delaunay(), but how do I proceed next?
The co-ordinates of all points are in this text file in the format "x y z".
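For context, a minimal sketch (mine, with a hypothetical file name) of the step described in the edit: loading the "x y z" points and triangulating them with scipy's Delaunay. Note that for 3D input the simplices are tetrahedra, which is why a surface mesh still has to be built afterwards.

import numpy as np
from scipy.spatial import Delaunay

points = np.loadtxt('points.txt')   # shape (N, 3), one "x y z" triple per line
tri = Delaunay(points)              # for 3D input the simplices are tetrahedra,
print(tri.simplices.shape)          # so surface triangles still have to be extracted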
So after seeking an answer for months and trying Meshlab and Blender, I finally stumbled across the answer using numpy-stl. Hopefully it will help others in a similar situation.
Here is the code to generate the .STL file:
import numpy as np
from stl import mesh

num_triangles = len(fin_list)   # fin_list holds one triangle (three vertices) per entry
data = np.zeros(num_triangles, dtype=mesh.Mesh.dtype)
for i in range(num_triangles):
    # Assign the i-th triangle's vertex coordinates (v1x ... v3z) to the mesh.
    # Figuring out how to use numpy arrays here was the major roadblock for me.
    data["vectors"][i] = np.array([[v1x, v1y, v1z],
                                   [v2x, v2y, v2z],
                                   [v3x, v3y, v3z]])
m = mesh.Mesh(data)
m.save('filename.stl')
The three vertices that form a triangle go into the mesh as one 3x3 entry, and their ordering defines the surface normal. I just collected three such vertices for each triangle and wrote them into the mesh. Since I had a regular grid of points, it was easy to collect the triangles:
group_a = []
group_b = []
for i in range(len(point_list) - 1):
    plane_a = []
    plane_b = []
    for j in range(len(point_list[i]) - 1):
        tri_a = []
        tri_b = []
        # series a triangles
        tri_a.append(point_list[i + 1][j])
        tri_a.append(point_list[i][j + 1])
        tri_a.append(point_list[i][j])
        # series b triangles
        tri_b.append(point_list[i + 1][j])
        tri_b.append(point_list[i + 1][j + 1])
        tri_b.append(point_list[i][j + 1])
        # load into the current plane
        plane_a.append(tri_a)
        plane_b.append(tri_b)
    group_a.append(plane_a)
    group_b.append(plane_b)
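The original answer does not show how these groups feed into the fin_list used above; a plausible bridge (an assumption on my part, not the author's code) is simply to flatten them into one list of triangles:

# Assumption, not from the original answer: flatten the per-plane triangle
# groups into the single fin_list consumed by the mesh-writing code above.
fin_list = [tri for plane in group_a + group_b for tri in plane]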
The rules for choosing triangles when creating the mesh are as follows (the winding and normal rules are illustrated in the sketch after this list):
The vertices must be arranged in counter-clockwise order when viewed from outside the surface.
Each triangle must share two vertices with its adjacent triangles.
The surface normal must point out of the surface.
There were two more rules that I did not follow, but it still worked in my case:
1. All coordinates must be positive (in the first octant only).
2. All triangles must be arranged in increasing z-order.
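As a quick illustration of the winding and normal rules (a sketch of mine, not part of the original answer): the facet normal implied by a triangle's vertex order is the normalised cross product of its edge vectors, so counter-clockwise winding seen from outside makes the normal point outward.

import numpy as np

def facet_normal(v1, v2, v3):
    # Normal implied by the vertex order v1 -> v2 -> v3.
    n = np.cross(np.subtract(v2, v1), np.subtract(v3, v1))
    return n / np.linalg.norm(n)

# A triangle in the z = 0 plane wound counter-clockwise when viewed from +z
# has a normal of (0, 0, 1):
print(facet_normal([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # -> [0. 0. 1.]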
Note: There are two kinds of .STL file format: binary and ASCII. numpy-stl writes out the binary format. More info on STL files can be found here.
Hope this helps!

SVG Path and morphing

I have a bit of a theoretical question.
Let's say I have 2 paths in SVG, each with a different number of points. One has 4 Bézier curves and the other 3.
What I want to do is morph one into the other.
Now, I know they have to have exactly the same structure and the same number of points to do so.
So, the question is: can I add "virtual points" into their paths to get the same structure and number of points, without changing the shape of the objects?
For example, taking one point in one of the paths and adding the same point again right after it to increase the number of points. Or creating a Bézier curve in both paths that would actually pretend to be a line instead of a curve. Would that change the object? And if I have points at x=1 y=1 and x=4 y=4, would using this form make the Bézier curve a line? (M1 1C1 1 4 4 4 4)
Figured it out. Placing the control points anywhere on the line between the start and end coordinates turns the Bézier into a line, and if you use the same point for both control points as well as the start and end coordinates, you can collapse the curve into a point. Adding more of these points to the path doesn't change the look of the object, it just adds more data to the path.
http://www.petercollingridge.co.uk/book/export/html/560
In the section on cubic curves you can align the points in the described manner to get the desired result.
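A quick numerical check of that claim (my sketch, not the answerer's): evaluating the cubic Bézier M1 1C1 1 4 4 4 4 with the standard Bézier formula shows that every sample lies on the straight line from (1, 1) to (4, 4).

def cubic_bezier(p0, p1, p2, p3, t):
    # Standard cubic Bézier: B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3
    x = (1 - t) ** 3 * p0[0] + 3 * (1 - t) ** 2 * t * p1[0] + 3 * (1 - t) * t ** 2 * p2[0] + t ** 3 * p3[0]
    y = (1 - t) ** 3 * p0[1] + 3 * (1 - t) ** 2 * t * p1[1] + 3 * (1 - t) * t ** 2 * p2[1] + t ** 3 * p3[1]
    return x, y

# start, control 1, control 2, end -- as in the path M1 1C1 1 4 4 4 4
p0, p1, p2, p3 = (1, 1), (1, 1), (4, 4), (4, 4)
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    x, y = cubic_bezier(p0, p1, p2, p3, t)
    print(round(x, 3), round(y, 3))   # x == y at every t: the segment from (1,1) to (4,4)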
I have a simple-to-use d3 plugin to animate SVG paths which supports differing numbers of points; it also animates only the parts of the path that differ from the original path, not the whole path.
7kb minified: https://pratyushcrd.github.io/d3-path-morphing/

Projecting a line segment onto a polygon mesh

I am working on a 3d application and am currently looking for a way to project a line segment defined by two points in screen-space onto a three-dimensional polygonal mesh (in my case a triangle mesh). The goal is to find the intersection points in world-space of the line segment with the edges of the mesh.
I can only think of two ways to do this, but neither is ideal. The first is to sample the line segment (in screen-space) at small intervals and ray trace at those intervals to find the world-space coordinates where the ray hits the mesh, but this does not easily give me the intersection points of the line segment with the mesh edges.
The other way I can think of is to somehow back-project the mesh into screen-space, find the intersections there (in 2d) and then project those intersection points back to 3d. The problem with this is that the screen-space coordinate system may change between the selection of the first and second endpoints of the line segment (due to moving the camera).
If any of that was confusing, then here is an image that approximately shows what I'm trying to do (the white dots indicate the points that I want to find). However, in my case the yellow curve is simply a line segment.
[Yunjin Lee, et al. "Mesh scissoring with minima rule and part salience." 2005]
Any help is very much appreciated.
Here's my suggestion:
Project the screen line into world space (getting a plane in world space).
Intersect the plane with the triangles in the mesh, getting a set of edges (a rough sketch of this step follows the list).
Add the edges to a data structure that keeps only the parts of the edges that are closest to the camera plane. This is like building up an image via a Z-buffer, except that because we know the result is piecewise linear, we don't have to rasterize it; we can just maintain a sorted list of endpoints.
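Here is a rough Python sketch of what the plane/triangle intersection step could look like; this is my own illustration under the stated assumptions, not the answerer's code.

import numpy as np

def plane_triangle_intersection(p0, n, tri):
    # Intersect a plane (point p0, unit normal n) with one triangle (list of three
    # vertices); return the 0 or 2 world-space points where the plane cuts its edges.
    d = [np.dot(n, v - p0) for v in tri]          # signed distances of vertices to the plane
    points = []
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        da, db = d[i], d[(i + 1) % 3]
        if da * db < 0:                           # edge crosses the plane
            t = da / (da - db)
            points.append(a + t * (b - a))
        elif da == 0:                             # vertex lies exactly on the plane
            points.append(a)
    return points

# Example: the plane z = 0 cutting a triangle that straddles it.
tri = [np.array([0.0, 0.0, -1.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
print(plane_triangle_intersection(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), tri))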

How to map points in a 3D-plane into screen plane

I have been given an assignment to project an object in 3D space onto a 2D plane using simple graphics in C. The setup is that a cube is placed at a fixed position in 3D space and there is a camera placed at a position with coordinates x, y, z, looking at the origin, i.e. 0,0,0. Now we have to project the cube's vertices onto the camera plane.
I am proceeding with the following steps
Step 1: I find the equation of the plane aX+bY+cZ+d=0 which is perpendicular to the line drawn from the camera position to the origin.
Step 2: I find the projection of each vertex of the cube to the plane which is obtained in the above step.
Now I want to map those vertex positions, which I obtained in step 2 and which lie in the plane aX+bY+cZ+d=0, into my screen plane.
I don't think that simply setting the z coordinate to zero will give me the actual mapping, so any help figuring this out would be appreciated.
Thanks.
You can do that in two simple steps:
Translate the cube's coordinates into the camera's coordinate system (using rotation), such that the camera's own coordinates in that system are x=y=z=0 and the cube's translated z values are > 0.
Project the translated cube coordinates onto a 2D plane by dividing the x and y values by their respective z values (you may need to apply a constant scaling factor here for the coordinates to be reasonable for the screen, e.g. not too small and within +/- half the screen's height in pixels). This creates the perspective effect. You can now draw pixels on the screen using these divided x and y values, taking x=y=0 as its center.
This is pretty much how it is done in 3D games. If you use the cube's vertex coordinates, you get projections of its sides onto the screen. You may then solid-fill the resulting 2D shapes or texture-map them, but for that you'll first have to figure out which sides are not obscured by others (unless, of course, you use a technique called z-buffering). You don't need that for a simple wire-frame demo, though; just draw straight lines between the projected vertices.
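A minimal Python sketch of those two steps (my illustration; the assignment itself is in C, and the look-at construction and scale factor here are assumptions, not part of the answer):

import numpy as np

def look_at_rotation(cam):
    # Build a rotation whose rows are the camera axes, with the camera looking at the origin.
    forward = -cam / np.linalg.norm(cam)
    right = np.cross(np.array([0.0, 1.0, 0.0]), forward)
    right /= np.linalg.norm(right)
    up = np.cross(forward, right)
    return np.vstack([right, up, forward])

def project(points, cam, scale=200.0):
    rot = look_at_rotation(cam)
    screen = []
    for p in points:
        x, y, z = rot @ (p - cam)                       # step 1: translate, then rotate into camera space
        screen.append((scale * x / z, scale * y / z))   # step 2: perspective divide (plus a scale factor)
    return screen

# A cube centred on the origin, seen from a camera at (3, 4, 5) looking back at it.
cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], dtype=float)
print(project(cube, np.array([3.0, 4.0, 5.0])))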

Converting from Latitude/Longitude to Cartesian Coordinates with a World File and map image

I have a Java applet that allows users to import a JPEG and world file from the local system. The user can then click to draw lines on the imported image. Each endpoint of each line holds a pair of X/Y and Lat/Long values. The X/Y is in standard Java coordinate space; the applet uses an affine transform calculated from the world file to determine the lat/long for every point on the canvas.
I have a requirement that allows a user to type a distance into a text field and use the arrow key to draw a line in a certain direction (Up, Down, Left, Right) from a single selected point on the screen. I know how to determine the lat/long of a point given a source lat/long, distance, and bearing.
So a user types "100" in the text field and presses the Right arrow key a line should be drawn 100 feet to the right from the currently selected point.
My issue is that I don't know how to convert the distance (which is in feet) into a distance in pixels, which would then tell me where to plot the point.
tcarobruce,
You are correct. The inverse transform algorithm is what I needed. Since I use Java, I was able to replace my home-made transform algorithm with the java.awt.AffineTransform class, which has an inverse transform function.
This seems to have solved my issue.
Thanks.
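For illustration, a rough Python equivalent of that idea (the applet itself is Java; the world-file values and function name below are made up): the world file defines the pixel-to-geographic affine transform, and its inverse maps the geographic point computed from the source point, distance, and bearing back to pixel coordinates for drawing.

import numpy as np

# World file lines, in order: A, D, B, E, C, F (made-up example values).
A, D, B, E, C, F = 0.0001, 0.0, 0.0, -0.0001, -80.0, 35.0

# pixel (col, row) -> geographic (lon, lat), in homogeneous form.
forward = np.array([[A, B, C],
                    [D, E, F],
                    [0.0, 0.0, 1.0]])
inverse = np.linalg.inv(forward)

def geo_to_pixel(lon, lat):
    col, row, _ = inverse @ np.array([lon, lat, 1.0])
    return col, row

print(geo_to_pixel(-79.99, 34.99))   # -> roughly (100.0, 100.0) pixels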
I guess you are certain your users are always uploading a raster image that is in the lat/lon WGS 84 projection? Because in that case you can set up a fixed coordinate transformation.
If you consider ever digitizing images from other sources with other projections, you might want to take a look at the open source geotools library: http://www.geotools.org/

Resources