Best way to convert JTS Geometry from 3D to 2D

We are importing a multi-polygon shapefile with 3D coordinates into Oracle Spatial using the JTS Topology Suite, GeoTools (ShapefileDataStore) and Hibernate Spatial. In Oracle Spatial we want the geometries to be stored in 2D.
The only (and very slow) approach I have found so far is the following, using WKBWriter and WKBReader:
private static Geometry convert2D(Geometry geometry3D) {
    // create a 2D WKBWriter and round-trip the geometry through WKB
    WKBWriter writer = new WKBWriter(2);
    byte[] binary = writer.write(geometry3D);
    // 'factory' is the GeometryFactory used elsewhere in the importer
    WKBReader reader = new WKBReader(factory);
    Geometry geometry2D = null;
    try {
        geometry2D = reader.read(binary);
    } catch (ParseException e) {
        log.error("error reading wkb", e);
    }
    return geometry2D;
}
Does anybody know a more efficient way to convert a geometry from 3D to 2D?

I found a way:
Create a new CoordinateArraySequence that forces the use of 2D Coordinate instances
Create a new CoordinateArraySequenceFactory that produces the custom CoordinateArraySequence (see the sketch after the code below)
Create a new GeometryFactory that uses this CoordinateSequenceFactory, and use it to recreate the geometry:
private static Geometry convert2D(Geometry geometry3D) {
    GeometryFactory geoFactory = new GeometryFactory(
            geometry3D.getPrecisionModel(), geometry3D.getSRID(),
            CoordinateArraySequence2DFactory.instance());
    if (geometry3D instanceof Point) {
        return geoFactory.createPoint(((Point) geometry3D).getCoordinateSequence());
    } else if (geometry3D instanceof LineString) {
        //...
    } else if (geometry3D instanceof Polygon) {
        //...
    }
    throw new IllegalArgumentException(
            "Unsupported geometry type: ".concat(geometry3D.getClass().getName()));
}
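The factory itself is not shown in the answer. Below is a minimal sketch of what CoordinateArraySequence2DFactory could look like; it assumes a recent JTS where CoordinateArraySequence has dimension-aware constructors, and simply pins every sequence it creates to dimension 2 rather than defining a fully custom sequence class:

public final class CoordinateArraySequence2DFactory implements CoordinateSequenceFactory {

    private static final CoordinateArraySequence2DFactory INSTANCE =
            new CoordinateArraySequence2DFactory();

    public static CoordinateArraySequence2DFactory instance() {
        return INSTANCE;
    }

    @Override
    public CoordinateSequence create(Coordinate[] coordinates) {
        // ignore any z values by forcing the output dimension to 2
        return new CoordinateArraySequence(coordinates, 2);
    }

    @Override
    public CoordinateSequence create(CoordinateSequence coordSeq) {
        return new CoordinateArraySequence(coordSeq.toCoordinateArray(), 2);
    }

    @Override
    public CoordinateSequence create(int size, int dimension) {
        return new CoordinateArraySequence(size, 2);
    }
}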
Good luck!!

I did not test the WKBWriter and WKBReader approach, but here is another simple one:
Create a copy of your geometry
Set all coordinates to 2D
Simple code:
private static Geometry convert2D(Geometry g3D) {
    // copy geometry
    Geometry g2D = (Geometry) g3D.clone();
    // overwrite each coordinate with a 2D one (z becomes NaN)
    for (Coordinate c : g2D.getCoordinates()) {
        c.setCoordinate(new Coordinate(c.x, c.y));
    }
    return g2D;
}
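On newer JTS releases (org.locationtech.jts 1.16+), where clone() is deprecated, essentially the same idea can be written with copy() and a CoordinateFilter. This is an untested sketch of that variant:

private static Geometry convert2D(Geometry g3D) {
    // copy() is the non-deprecated replacement for clone()
    Geometry g2D = g3D.copy();
    // visit every coordinate in place and blank out the z ordinate
    g2D.apply((CoordinateFilter) c -> c.setZ(Double.NaN));
    return g2D;
}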

Related

Hololens2 Correct Usage of SpatialAnchors

I am trying to implement a HoloLens 2 app using SpatialAnchors, but it seems that I am not quite understanding the concept. I am able to create, save and restore SpatialAnchors from the Windows.Perception.Spatial namespace, but whenever I restore them they are placed relative to the start position of the headset.
I use this snippet to create an anchor:
SpatialAnchor thisAnchor = SpatialAnchor.TryCreateRelativeTo(coord);
And this to get the relative SpatialCoordinateSystem (from this post)
public static bool UseSGIPSceneCoordinateSystem(out SpatialCoordinateSystem sGIPSceneCoordinateSystem)
{
    // gain access to the scene object
    SceneObserverAccessStatus accessStatus = Task.Run(RequestAccess).GetAwaiter().GetResult();
    if (accessStatus == SceneObserverAccessStatus.Allowed)
    {
        Scene scene = Task.Run(GetSceneAsync).GetAwaiter().GetResult();
        sGIPSceneCoordinateSystem = SpatialGraphInteropPreview.CreateCoordinateSystemForNode(scene.OriginSpatialGraphNodeId);
        return true;
    }
    else
    {
        sGIPSceneCoordinateSystem = null;
        return false;
    }
}
I use this method to restore the SpatialAnchor I need:
public bool GetAnchor(string id, out SpatialAnchor anchor)
{
    [...]
    _anchors = _anchorStore.GetAllSavedAnchors();
    foreach (var kvp in _anchors)
    {
        if (kvp.Key == id)
        {
            anchor = kvp.Value;
            return true;
        }
    }
    anchor = null;
    return false;
}
And this to process the restored Anchor and get a place in the scene for it:
public void SomeOtherMethod(SpatialAnchor anchor)
{
    SpatialCoordinateSystem showcaseCoordinateSystem = anchor.CoordinateSystem;
    // get the reference SpatialCoordinateSystem
    if (!UseSGIPSceneCoordinateSystem(out SpatialCoordinateSystem referenceCoordinateSystem))
        return;
    _anchorMatrix = showcaseCoordinateSystem.TryGetTransformTo(referenceCoordinateSystem);
    System.Numerics.Matrix4x4 _notNullMatrix = _anchorMatrix.Value;
    Matrix4x4 unityAnchorMatrix = _notNullMatrix.ToUnity();
    anchorGameObject.transform.FromMatrix(unityAnchorMatrix);
}
I think that I am using the wrong SpatialCoordinateSystem, but I can't find information on how to get a persistent SpatialCoordinateSystem that the HoloLens 2 generates for the user's physical surroundings.
I am using:
Unity 2020.3.13f1
OpenXR Plugin 1.3.1
MRTK 2.7.3
MRTK-OpenXR 1.2.1
MRTK SceneUnderstanding 0.6.0
I am also confused by the number of different SpatialAnchor systems, reference coordinate systems, and the documentation. Every post I find about SpatialAnchors seems to use a different approach. There seem to be SpatialAnchor systems for Unity WLT, UnityEngine.VR.WSA with WorldManager, Azure Spatial Anchors, UnityEngine.XR.WindowsMR.WindowsMREnvironment, etc.
Unless I have overlooked it, the Microsoft documentation is not really clear about which one to use and how to use it correctly.
I would be really thankful if someone could shed some light on this issue.
I posted this question also on forum.unity.com

Determine points that are under a sketchOverlay in a map

I have a map which has a GraphicsOverlay with various points. I have given the user the ability to select a subset of the points by drawing a polygon using the SketchEditor. How can I determine which points have been selected?
Here is a subset of the code to set up the map:
private GraphicsOverlay graphicsOverlayLow;
// Graphics overlay to host sketch graphics
private GraphicsOverlay _sketchOverlay;

var symbolLow = new SimpleMarkerSymbol(SimpleMarkerSymbolStyle.Circle, Colors.Green, 10d);
graphicsOverlayLow = new GraphicsOverlay() { Renderer = new SimpleRenderer(symbolLow) };
foreach (var graphic in graphicListLow) // graphicListLow is a List of point graphics
    graphicsOverlayLow.Graphics.Add(graphic);
MyMapView.GraphicsOverlays = new GraphicsOverlayCollection();
MyMapView.GraphicsOverlays.Add(graphicsOverlayLow);
_sketchOverlay = new GraphicsOverlay();
MyMapView.GraphicsOverlays.Add(_sketchOverlay);
I have two buttons, one for starting the drawing of the polygon and one to click when done (this follows the Esri example for the SketchEditor). The code for starting is as follows:
private async void SelectButton_Click(object sender, RoutedEventArgs e)
{
    try
    {
        // Let the user draw on the map view using the chosen sketch mode
        SketchCreationMode creationMode = SketchCreationMode.Polygon;
        Esri.ArcGISRuntime.Geometry.Geometry geometry = await MyMapView.SketchEditor.StartAsync(creationMode, true);

        // Create and add a graphic from the geometry the user drew
        Graphic graphic = CreateGraphic(geometry);
        _sketchOverlay.Graphics.Add(graphic);
    }
    catch (TaskCanceledException)
    {
        // Ignore ... let the user cancel drawing
    }
    catch (Exception ex)
    {
        // Report exceptions
        MessageBox.Show("Error drawing graphic shape: " + ex.Message);
    }
}
private Graphic CreateGraphic(Esri.ArcGISRuntime.Geometry.Geometry geometry)
{
    // Create a graphic to display the specified geometry
    Symbol symbol = null;
    switch (geometry.GeometryType)
    {
        // Symbolize with a fill symbol
        case GeometryType.Envelope:
        case GeometryType.Polygon:
        {
            symbol = new SimpleFillSymbol()
            {
                Color = Colors.Red,
                Style = SimpleFillSymbolStyle.Solid,
            };
            break;
        }
        // ... cases for other geometry types omitted here
    }
    // Create a graphic from the geometry and symbol
    return new Graphic(geometry, symbol);
}
Here is the handler that is called when the user clicks the button signaling that they are done drawing the polygon. This is where I want to determine which points have been selected.
private void CompleteButton_Click(object sender, RoutedEventArgs e)
{
    // Cancel execution of the sketch task if it is already active
    if (MyMapView.SketchEditor.CancelCommand.CanExecute(null))
    {
        MyMapView.SketchEditor.CancelCommand.Execute(null);
    }
}
Note that I am using the 100.4 SDK for WPF.
This can be accomplished with a spatial query: take the geometry returned by the sketch editor and use it as a geometry filter when querying the layer(s).
Esri.ArcGISRuntime.Geometry.Geometry geometry = await MyMapView.SketchEditor.StartAsync(creationMode, true);
var queryParameters = new QueryParameters()
{
    Geometry = geometry,
    SpatialRelationship = SpatialRelationship.Intersects
};
await layer.SelectFeaturesAsync(queryParameters, Esri.ArcGISRuntime.Mapping.SelectionMode.New);
For graphics rather than features, you can use the GeometryEngine.Intersects method to check whether each point graphic intersects (touches, crosses, or lies within) the selection polygon.
https://community.esri.com/message/826699-re-determine-points-that-are-under-a-sketchoverlay-in-a-map?commentID=826699#comment-826699
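Since the points in the question live in a GraphicsOverlay rather than a FeatureLayer, a sketch along the following lines may be closer to what is needed. It reuses the graphicsOverlayLow field from the question and assumes the sketch geometry and the point graphics share a spatial reference (if not, project one of them with GeometryEngine.Project first):

private void SelectPointsUnderSketch(Esri.ArcGISRuntime.Geometry.Geometry sketchGeometry)
{
    foreach (Graphic graphic in graphicsOverlayLow.Graphics)
    {
        // true when the point lies inside (or on the boundary of) the sketched polygon
        bool inside = GeometryEngine.Intersects(sketchGeometry, graphic.Geometry);
        graphic.IsSelected = inside; // or collect the selected graphics into a list
    }
}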

Draw states in mapControl

I'm writing an application for Windows Phone and I'm using a MapControl.
I'd like to be able to paint the US states in different colors, for example CA in red, NV in blue, etc.
I thought about using Shapes and Polylines, but I can't find the coordinates to use in the shapes to draw the different states.
I also tried
var found = await MapLocationFinder.FindLocationsAsync("California", new Geopoint(new BasicGeoposition()));
but it doesn't work for finding states.
The best way is to download GeoJSON files from this public repository:
https://github.com/johan/world.geo.json/tree/master/countries/USA
Parse the JSON, create a MapPolygon object, and add it to the map.
public async void RenderState()
{
    HttpClient client = new HttpClient();
    HttpResponseMessage response = await client.GetAsync(new Uri("https://raw.githubusercontent.com/johan/world.geo.json/master/countries/USA/CO.geo.json"));
    // read the response body as a string
    string json = await response.Content.ReadAsStringAsync();
    JObject obj = JObject.Parse(json);
    JObject poly = (JObject)obj["features"][0]["geometry"];
    JArray coords = (JArray)poly["coordinates"][0];
    MapPolygon polygon = new MapPolygon();
    List<BasicGeoposition> points = new List<BasicGeoposition>();
    foreach (JArray arr in coords)
    {
        // GeoJSON stores coordinates as [longitude, latitude]
        points.Add(new BasicGeoposition() { Latitude = (double)arr[1], Longitude = (double)arr[0] });
    }
    // Remove the last point as it is a duplicate of the first
    if (points.Count > 1)
    {
        points.RemoveAt(points.Count - 1);
    }
    polygon.Path = new Geopath(points);
    polygon.StrokeColor = Colors.Red;
    polygon.FillColor = Colors.Blue;
    this.mMap.MapElements.Add(polygon);
}
This code will render the state of Colorado.
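To color several states differently (e.g. CA in red and NV in blue, as asked), one option is to parameterize the method above by GeoJSON URL and fill color. The sketch below is a hypothetical refactor: RenderStateAsync is assumed to be the answer's method, changed to take the URL and fill color as parameters and to return a Task, and the per-state file names are assumed to follow the same pattern as CO.geo.json in that repository:

public async Task RenderStatesAsync()
{
    // which states to draw and which fill color to use for each
    var stateColors = new Dictionary<string, Color>
    {
        { "CA", Colors.Red },
        { "NV", Colors.Blue },
    };

    foreach (var entry in stateColors)
    {
        string url = "https://raw.githubusercontent.com/johan/world.geo.json/master/countries/USA/"
                     + entry.Key + ".geo.json";
        await RenderStateAsync(url, entry.Value);
    }
}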

How to slowly rotate spherical shapes using Lerp?

I need to slowly rotate spherical shapes that are placed on waypoints, the same way a train turns along its track. How can I achieve this with Lerp?
The code I currently have:
if (!isWayPoint5)
{
    // here I'm using Rotate, but I need it to rotate slowly,
    // the same way a train turns along a track
    transform.Rotate(0, 0, 25);
    isWayPoint5 = true;
}
Check out how to use Quaternion.Lerp in the Unity scripting reference.
Using that example:
public float speed = 0.1F; // you can change how fast it goes here
private Quaternion from;
private Quaternion to;

void Start() {
    from = transform.rotation;              // current rotation
    to = from * Quaternion.Euler(0, 0, 25); // 25 degrees further around z
}

void Update() {
    transform.rotation = Quaternion.Lerp(from, to, Time.time * speed);
}
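If the goal is a constant, train-like turn rate rather than an eased blend, Quaternion.RotateTowards is another option. A minimal sketch, assuming the same 25-degree target around z as above:

public float degreesPerSecond = 15f; // constant angular speed
private Quaternion target;

void Start() {
    target = transform.rotation * Quaternion.Euler(0, 0, 25);
}

void Update() {
    // step at most degreesPerSecond * Time.deltaTime degrees toward the target each frame
    transform.rotation = Quaternion.RotateTowards(transform.rotation, target, degreesPerSecond * Time.deltaTime);
}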

AForge Hough Transform

I'm trying to run an experiment with the HoughCircleTransformation class of AForge. I'm using this class to try to count the number of circles in an image, but I always get this error message: Unsupported pixel format of the source image.
Here is my code:
private void CountCircles(Bitmap sourceImage)
{
    HoughCircleTransformation circleTransform = new HoughCircleTransformation(15);
    circleTransform.ProcessImage(sourceImage);
    Bitmap houghCircleImage = circleTransform.ToBitmap();
    int numCircles = circleTransform.CirclesCount;
    MessageBox.Show("Number of circles found: " + numCircles.ToString());
}
HoughCircleTransformation expects a binary bitmap, so convert the image to grayscale and threshold it first:
private void CountCircles(Bitmap sourceImage)
{
    // convert to grayscale, then threshold to a binary image
    var filter = new FiltersSequence(new IFilter[]
    {
        Grayscale.CommonAlgorithms.BT709,
        new Threshold(0x40)
    });
    var binaryImage = filter.Apply(sourceImage);

    HoughCircleTransformation circleTransform = new HoughCircleTransformation(15);
    circleTransform.ProcessImage(binaryImage);
    Bitmap houghCircleImage = circleTransform.ToBitmap();
    int numCircles = circleTransform.CirclesCount;
    MessageBox.Show("Number of circles found: " + numCircles.ToString());
}
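If the individual circle candidates are needed as well (not just the count), they can be pulled out after ProcessImage. A small sketch, assuming AForge's GetCirclesByRelativeIntensity method and an arbitrary 0.5 threshold:

// circles whose accumulator intensity is at least half of the strongest candidate
HoughCircle[] circles = circleTransform.GetCirclesByRelativeIntensity(0.5);
foreach (HoughCircle circle in circles)
{
    Console.WriteLine("Circle at ({0}, {1}), radius {2}, intensity {3}",
        circle.X, circle.Y, circle.Radius, circle.Intensity);
}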
