Basic importing coordinates into R and setting projection - geospatial

Ok, I am trying to upload a .csv file, get it into a SpatialPointsDataFrame, and set the projection system to WGS 84. I then want to determine the distance between each point. This is what I have come up with, but it isn't working:
cluster<-read.csv(file = "cluster.csv", stringsAsFactors=FALSE)
coordinates(cluster)<- ~Latitude+Longitude
cluster<-CRS("+proj=longlat +datum=WGS84")
d<-dist2Line(cluster)
This returns an error that says:
Error in .pointsToMatrix(p) :
points should be vectors of length 2, matrices with 2 columns, or inheriting from a SpatialPoints* object
I will be honest that I don't fully comprehend importing and manipulating spatial data in R. Any help would be great. Thanks

I was able to determine the issue I was running into. With WGS 84 (and sp's coordinates() formula), the longitude (x) comes before the latitude (y). This is just backwards from how all the GPS data I download is formatted (i.e. lat-long). Hope this helps anyone else who runs into this issue!
Thus the code should have been (also assigning the CRS with proj4string() rather than overwriting the object):
library(sp)
library(geosphere)
cluster <- read.csv(file = "cluster.csv", stringsAsFactors = FALSE)
coordinates(cluster) <- ~Longitude+Latitude   # x (longitude) first, then y (latitude)
proj4string(cluster) <- CRS("+proj=longlat +datum=WGS84")   # assign the CRS instead of overwriting the object
d <- distm(cluster)   # pairwise distance matrix in metres (geosphere), instead of dist2Line


Convert unknown labels to Yolov5

I have a dataset of images with an unknown label format, which is:
angry_actor_104.jpg 0 28 113 226 141 22.9362 0
It indicates an image as follows:
image_name face_id_in_image face_box_top face_box_left face_box_right face_box_bottom face_box_confidence expression_label
My question is: How can this be converted into the yolov5 format?
I have been looking this up for a long time and hope someone can help.
Thank you very much in advance.
Since the format is unknown, you are unlikely to find existing code to handle the transformation completely, but I can share some tips to get started.
The annotations file alone does not have enough information to be converted to YOLO format, because YOLO coordinates are normalized and so the conversion also needs the dimensions of each image. If all of your images have the same dimensions it is easier; if they differ, you will need additional code to extract the dimensions of each image. I will explain why below.
When you are done you will need to get the images and labels into a specific directory structure like this, with one txt file per image:
/images/actor1.jpg
/images/actor2.jpg
/labels/actor1.txt
/labels/actor2.txt
This is the shape you want to get each annotation row into:
face_id_in_image x_center_image y_center_image width height
There is a clear description of what the values mean here: https://stackoverflow.com/a/66563144/5183735.
Now you need to do some math to calculate the values.
width = (face_box_right - face_box_left)/image_width
height = (face_box_bottom - face_box_top)/image_height
x_center_image = face_box_left/image_width + (width/2)
y_center_image = face_box_top/image_height + (height/2)
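As a rough, untested sketch of how the math above could be wired up: the column order follows your description, while the images/labels paths and the helper name are hypothetical.
import os
from PIL import Image

def convert_line(line, images_dir="images", labels_dir="labels"):
    # image_name face_id top left right bottom confidence expression_label
    parts = line.split()
    name, face_id = parts[0], parts[1]
    top, left, right, bottom = map(float, parts[2:6])
    # YOLO values are normalized, so the image dimensions are needed
    with Image.open(os.path.join(images_dir, name)) as im:
        img_w, img_h = im.size
    w = (right - left) / img_w
    h = (bottom - top) / img_h
    x_c = left / img_w + w / 2
    y_c = top / img_h + h / 2
    # one txt file per image, named after the image
    stem = os.path.splitext(name)[0]
    with open(os.path.join(labels_dir, stem + ".txt"), "a") as f:
        f.write(f"{face_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}\n")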
I have some bits of code that may help you with reading the text file and saving the txt files here:
https://github.com/pylabel-project/pylabel/blob/main/pylabel/exporter.py and https://github.com/pylabel-project/pylabel/blob/main/pylabel/importer.py.
If you are able to share your exact files I may be able to identify some shortcut to transform them.

gmplot Marker does not work after it marks 256 points

I am trying to mark a bunch of points on the map with gmplot and observed that after a certain point it stops marking and wipes out all the previously marked points. I debugged the gmplot.py module and saw that this happens, without any error or warning, once the length of the points array exceeds 256.
self.points = [] on gmplot.py
Since I am very new to Python and OOP concepts, is there a way to override this and mark more than 256 points?
Are you using gmplot.GoogleMapPlotter.Scatter or gmplot.GoogleMapPlotter.Marker? I used both and was able to get 465 points for a project that I was working on. Is it possible it is an API key issue for you?
Here is a partial snippet of my code:
import gmplot
import pandas as pd

# df is the dataframe with Lat, Lon and formataddress columns
# change to lists; not sure you need to do this. I think you can cycle through
# directly using iterrows, but I have not tried that (see the sketch below)
latcollection = df['Lat'].tolist()
loncollection = df['Lon'].tolist()
addcollection = df['formataddress'].tolist()

# center the map on the first coordinates
gmaps2 = gmplot.GoogleMapPlotter(latcollection[0], loncollection[0], 13, apikey='yourKey')

for i in range(len(latcollection)):
    gmaps2.marker(latcollection[i], loncollection[i], color='#FF0000', c=None,
                  title=str(i) + ' ' + addcollection[i])

gmaps2.draw(newdir + r'\laplot_marker_full.html')
I could hover over the 465th point, since I knew approximately where it was, and I was able to get the title str(464) <formataddress(464)>, since my array is indexed from 0.
Make sure you check the GitHub site for how to modify your gmplot file, in case you are working with Windows.
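As mentioned in the comment above, cycling through the dataframe directly with iterrows should also work; here is an untested sketch of that variant (same hypothetical df and API key):
import gmplot

# untested variant using DataFrame.iterrows instead of intermediate lists
gmaps2 = gmplot.GoogleMapPlotter(df['Lat'].iloc[0], df['Lon'].iloc[0], 13, apikey='yourKey')
for i, row in df.iterrows():
    gmaps2.marker(row['Lat'], row['Lon'], color='#FF0000', c=None,
                  title=str(i) + ' ' + row['formataddress'])
gmaps2.draw('laplot_marker_full.html')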

How to modify orientation of mgh/dicom/nifti file using nibabel

I am having a hard time figuring out a proper affine transformation for 3 different views, i.e. coronal, axial and sagittal, each having separate issues like below:
1: The axial color map gets overlapped with the sagittal original view.
2: Similarly, the sagittal color map gets overlapped with the axial original image.
3: And every view has some kind of orientation issue; for example, the color map and original image for coronal come out correct but with the wrong orientation.
I am sending the original file to the server for some kind of prediction, which generates a color map and returns that file for visualization; later I display everything together.
On the server, after prediction, here is the code to save the file:
nifti_img = nib.MGHImage(idx, affine, header=header)
Here, affine and header are the original affine and header extracted from the file I sent.
I need to process the "idx" value, which holds the raw data in NumPy array format, but I am not sure what exactly needs to be done. I need help here.
I was trying hard to solve the issue using the nibabel Python library, but due to my very limited knowledge about how these files work and about affine transformations, I am having a hard time figuring out what I should do to make them correct.
I am using AMI.js with three.js support on the frontend and nibabel with Python on the backend. A solution on either the frontend or the backend is acceptable.
Please help. Thanks in advance.
import numpy as np
import nibabel as nib

img = nib.load(img_path)
# Check which orientation you want to reorient to.
# For example, if the original orientation of img is RPI and you want to
# reorient it to RAS, the second and third axes should be flipped.
# ornt[N, 1] is the flip for axis N, where 1 means no flip and -1 means flip.
ornt = np.array([[0, 1],
                 [1, -1],
                 [2, -1]])
img_orient = img.as_reoriented(ornt)
nib.save(img_orient, img_path)
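If you would rather not hard-code the ornt array, nibabel's orientation helpers can compute it from the affine; a small sketch, assuming a target orientation of RAS:
import nibabel as nib
from nibabel.orientations import aff2axcodes, axcodes2ornt, io_orientation, ornt_transform

img = nib.load(img_path)
print(aff2axcodes(img.affine))  # current orientation codes, e.g. ('R', 'P', 'I')
# compute the transform from the image's current orientation to RAS
ornt = ornt_transform(io_orientation(img.affine), axcodes2ornt(('R', 'A', 'S')))
img_ras = img.as_reoriented(ornt)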
It was simple, using numpy.moveaxis() and numpy.flip() operations on the raw data from nibabel, as below.
import numpy as np
import nibabel as nib

# Getting raw data back to process for better orientation and label mapping.
orig_img_data = nib.MGHImage(numpy_arr, affine)
nifti_img = nib.MGHImage(segmented_arr_output, affine)

# Getting original and predicted data to preprocess to original shape and view for visualisation.
orig_img = orig_img_data.get_fdata()
seg_img = nifti_img.get_fdata()

# Placing proper views in proper places and flipping for better visualisation as required.
# moveaxis to get the original order.
orig_img_ = np.moveaxis(orig_img, -1, 0)
seg_img = np.moveaxis(seg_img, -1, 0)

# Flip an axis to overcome the mirror-image/flipped view.
orig_img_ = np.flip(orig_img_, 2)
seg_img = np.flip(seg_img, 2)

orig_img_data_ = nib.MGHImage(orig_img_.astype(np.uint8), np.eye(4), header)
nifti_img_ = nib.MGHImage(seg_img.astype(np.uint8), np.eye(4), header)
Note: it is very important to use the same affine matrix to wrap both of these arrays back. A 4x4 identity matrix works better than the original affine matrix, as the original was creating problems for me.

Google Earth Engine - RGB image export from ImageCollection Python API

I am encountering some problems with the Google Earth Engine Python API when generating an RGB image from an ImageCollection.
Basically, to transform the ImageCollection into an Image, I apply a median reduction. After this reduction, I apply the visualize function, where I need to define variables like min and max. The problem is that these two values are image dependent.
dataset = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')\
    .filterBounds(ee.Geometry.Polygon([[39.05789266, 13.59051553],
                                       [39.11335033, 13.59051553],
                                       [39.11335033, 13.64477783],
                                       [39.05789266, 13.64477783],
                                       [39.05789266, 13.59051553]]))\
    .filterDate('2016-01-01', '2016-12-31')\
    .select(['B4', 'B3', 'B2'])
reduction = dataset.reduce('median')\
    .visualize(bands=['B4_median', 'B3_median', 'B2_median'],
               min=0,
               max=3000,
               gamma=1)
Thus, for each different image I need to work out these two values, which can slightly change. Since the number of images I need to generate is huge, it is impossible to do that manually. I do not know how to overcome this problem and I cannot find any answer to it. An idea would be to find the minimum and maximum values of the image, but I did not find any function that allows that in the JavaScript or Python API.
I hope that someone will be able to help me.
You can use img.reduceRegion() to get image statistics for the region you want, for each image you export. You then pass the results of the region reduction into the visualization function. Here is an example:
geom = ee.Geometry.Polygon([[39.05789266, 13.59051553],
                            [39.11335033, 13.59051553],
                            [39.11335033, 13.64477783],
                            [39.05789266, 13.64477783],
                            [39.05789266, 13.59051553]])

dataset = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')\
    .filterBounds(geom)\
    .filterDate('2016-01-01', '2016-12-31')\
    .select(['B4', 'B3', 'B2'])

reduction = dataset.median()

# per-band min/max statistics over the region
stats = reduction.reduceRegion(reducer=ee.Reducer.minMax(), geometry=geom,
                               scale=100, bestEffort=True)
statDict = stats.getInfo()

prettyImg = reduction.visualize(bands=['B4', 'B3', 'B2'],
                                min=[statDict['B4_min'], statDict['B3_min'], statDict['B2_min']],
                                max=[statDict['B4_max'], statDict['B3_max'], statDict['B2_max']],
                                gamma=1)
Using this approach, I get a properly scaled RGB output image.
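To actually export the visualized image (the original goal), something along these lines should work; the description and scale here are placeholders:
# hedged sketch: export the visualized RGB image to Google Drive
task = ee.batch.Export.image.toDrive(image=prettyImg,
                                     description='rgb_export',  # placeholder name
                                     region=geom,
                                     scale=30)
task.start()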
I hope this helps!

Read shapefile attributes using talend

I am using the spatial plug-ins for TOS (Talend Open Studio) to perform the following task:
I have a dataset with X and Y coordinates. I also have a shapefile with multi-polygons and two metadata attributes, name and Id. The idea is to look up the names in the shapefile for the coordinates: a point-in-polygon test will determine which polygon a point belongs to.
I am using the shapefile input component, which points to the .shp file.
I am facing two hurdles:
First, I cannot retrieve the name and Id from the file; I can only see an attribute called the_geom. How can I read the metadata?
Second, the file contains multi-polygons and I don't know how to iterate over them in order to perform a contains or intersects test with the points.
Any comment will be highly appreciated.
Thanks for your input, @chrki.
I managed to solve my task in this way:
1) Create a generic schema under metadata. As the .dbf file was in the same directory as the shapefile, Talend automatically recognized the metadata.
2) This is the job overview.
3) I read the shapefile using a sShapeFileInput component.
4) The shapefile contains multi-polygons and I want to have polygons. My solution was to use a sSimplify component with the default settings.
5) The projection of the shapefile was "MGI / Austria Lambert", which corresponds to EPSG 31287. I wanted to re-project it to EPSG 4326 (GCS_WGS_1984), which is the one used by my input coordinates.
6) I read the X and Y coordinates from a CSV file.
7) With a s2DPointReplacer I converted the X and Y coordinates to Point(x,y) (WKT).
8) Finally, I created an expression in a tMap to keep only the polygons and points with an intersection. I guess a "contains" would also work.
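For readers outside Talend, the same point-in-polygon lookup can be sketched with GeoPandas; the file names here are hypothetical and this is not part of the original job:
import geopandas as gpd
import pandas as pd

# hypothetical inputs: a polygon shapefile with name/Id attributes and a CSV of X/Y points
polys = gpd.read_file("polygons.shp").to_crs(epsg=4326)   # re-project EPSG:31287 -> EPSG:4326
pts_df = pd.read_csv("points.csv")
pts = gpd.GeoDataFrame(pts_df,
                       geometry=gpd.points_from_xy(pts_df["X"], pts_df["Y"]),
                       crs="EPSG:4326")
# spatial join keeps each point with the attributes of the polygon containing it
joined = gpd.sjoin(pts, polys[["name", "Id", "geometry"]], predicate="within")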
I hope this helps someone else.
Kind regards,
Paul
