I have been searching the internet for a while and have not found an answer on how to create an organized point cloud from a depth image. I have a depth image and a color image; is it possible to create an organized point cloud from them? All I need is an organized point cloud with NaN values, built from the depth and color images. Any help would be appreciated.
Thank you.
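For reference, a minimal NumPy sketch of the usual pinhole back-projection is below; the intrinsics in the commented call (fx, fy, cx, cy) are hypothetical placeholders that must come from your camera calibration. An organized cloud keeps one point per pixel, so invalid depths simply become NaN entries in the H x W grid:

import numpy as np

def organized_cloud(depth, color, fx, fy, cx, cy):
    """Back-project a depth image into an organized (H, W, 3) point cloud.

    depth: (H, W) array in meters, 0 (or negative) where invalid
    color: (H, W, 3) array aligned pixel-for-pixel with the depth image
    fx, fy, cx, cy: pinhole camera intrinsics
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    z[z <= 0] = np.nan                      # invalid pixels become NaN
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack((x, y, z)), color      # organized: one point per pixel

# Hypothetical intrinsics; use the values from your own calibration:
# points, colors = organized_cloud(depth_image, color_image, 525.0, 525.0, 319.5, 239.5)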
When performing image co-registration of multiple subjects, how should we select the reference image?
Can a randomly selected image from one dataset be the reference image for an image from the other dataset?
If we do that, should all the images belonging to the reference image's dataset be co-registered with the reference image as well?
I couldn't find any material in this area. Could someone please advise?
I'm not sure exactly what you mean by the term "dataset", but I will assume you are asking about co-registering multiple images from different patients (i.e. multiple 3D images per subject).
To answer your questions:
If there are no obvious choices about which image is best, then a random choice is fine. If you have e.g. a CT and an MRI for each subject, then co-registration using the CT images is likely going to give you better results because of intrinsic image characteristics (e.g. less distortion, image value linked to physical quantity).
I suppose that depends on what you want to do, but if it is important to have all imaging data in the same co-registered reference space then yes.
Another option is to try to generate an average image and then use that as a reference to register the other images to. Without more information about what you are trying to achieve, it's hard to give more specific advice.
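To make the average-image idea concrete, here is a minimal sketch using SimpleITK, assuming rigid (Euler) registration with a mutual-information metric; the file names, metric, and optimizer settings are illustrative, not a recommendation:

import SimpleITK as sitk

def register_to(fixed, moving):
    """Rigidly register `moving` to `fixed` and resample it into fixed space."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)
    tx = reg.Execute(fixed, moving)
    return sitk.Resample(moving, fixed, tx, sitk.sitkLinear, 0.0)

# Hypothetical file names: register everything to one arbitrary reference,
# then voxel-wise average the aligned images to build a template.
paths = ["subj1.nii.gz", "subj2.nii.gz", "subj3.nii.gz"]
images = [sitk.Cast(sitk.ReadImage(p), sitk.sitkFloat32) for p in paths]
aligned = [images[0]] + [register_to(images[0], m) for m in images[1:]]
average = sum(sitk.GetArrayFromImage(m) for m in aligned) / len(aligned)

The resulting average can then serve as the fixed image in a second registration pass, which reduces the bias toward the arbitrarily chosen first subject.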
I have a 3D point cloud and I would like to match different point clouds with each other for recognition purposes. Does OpenCV or TensorFlow do this for me? If yes, how?
Example:
src1 = pointCloud of object 1
src2 = pointCloud of object 2
compare(src1, src2)
Output: Both point clouds are of the same object or different objects.
I want to achieve something like this. Please help with some ideas or resources.
OpenCV Surface Matching can be used to detect and find the pose of a given point cloud within another point cloud.
Open3D has a 3D reconstruction module, but it is meant to register (find the poses of) RGBD images and reconstruct a 3D object from them. It does contain a sub-step in which different point cloud fragments are registered (their relative poses estimated) and combined into a single point cloud for reconstruction, but I'm not sure whether that is useful for your task.
There are also many 3D point cloud object detection methods based on neural networks, but you have to generate the training data yourself if your objects are not available in a standard dataset.
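As a starting point, here is a minimal Open3D sketch of a compare(src1, src2) along those lines, assuming the two clouds are already roughly aligned (otherwise a global registration, e.g. RANSAC over FPFH features, should precede the ICP step); the file names, voxel size, and fitness threshold are hypothetical:

import numpy as np
import open3d as o3d

def compare(src1_path, src2_path, voxel=0.005, min_fitness=0.9):
    """Crude same-object test: align cloud 2 to cloud 1 with ICP and use the
    fitness (fraction of points with an inlier correspondence) as the score."""
    pcd1 = o3d.io.read_point_cloud(src1_path).voxel_down_sample(voxel)
    pcd2 = o3d.io.read_point_cloud(src2_path).voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(
        pcd2, pcd1, max_correspondence_distance=2 * voxel,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.fitness >= min_fitness, result.fitness

# same_object, score = compare("object1.pcd", "object2.pcd")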
Thanks for dropping in here.
I'm currently working on a project, and I'm not that strong with Python yet, so I was hoping for some constructive feedback on this question.
I have a dataset containing core samples, all stored with sample id, latitude, longitude, content and other data irrelevant for this question.
Now I've imported this dataset and sliced it the way I want. For the images, I'm using the rasterio module to open two satellite images that cover the region, and the utm module to convert back and forth between lat/long -> UTM -> pixel values (which also seems to give me strange coordinates at some points).
Annoyingly enough, the two Sentinel-2 images are cut right across the center of the map.
As I'm placing bounding boxes on top of where the samples were taken, this is a problem: I need to extract 10x10 pixel cut-outs of those regions, and the seam leaves a lot of the samples without a proper cut-out.
So I thought: why not merge the two images into one large rectangular image? But I still need to retain the metadata with the UTM coordinates.
How would you suggest I proceed? Can it be done in an easy way? Is there another angle on this I've overlooked?
Thank you for your time.
I'm not sure I completely understand the question, but if you are simply trying to merge 2 images, have you looked at the command line tool gdal_merge.py?
A very simple example:
gdal_merge.py -o merged_image.tif image1.tif image2.tif
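Since you are already using rasterio, the same merge can also be done in Python with rasterio.merge, which carries the georeferencing over into the output; the file names below are placeholders:

import rasterio
from rasterio.merge import merge

# Hypothetical file names for the two Sentinel-2 tiles.
sources = [rasterio.open(p) for p in ["image1.tif", "image2.tif"]]
mosaic, out_transform = merge(sources)

# Copy the metadata from one tile and update it for the mosaic's shape.
meta = sources[0].meta.copy()
meta.update(height=mosaic.shape[1], width=mosaic.shape[2],
            transform=out_transform)

with rasterio.open("merged_image.tif", "w", **meta) as dst:
    dst.write(mosaic)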
I'm a newbie to the GIS world, but after a few weeks of hard searching and trying I feel stuck.
I am building an app to analyse landcover in order to assess terrain roughness. All the logic will come later; right now I am trying to design the architecture.
A sample raster image is under the link:
Africa landcover 20m resolution - small sample
a visualization of the problem
Goal:
- read raster file (BigTIFF/GeoTIFF, ...around 6GB) from a cloud storage, like AWS S3
- use a javascript library to process the file, like node-gdal, geotiff.js, etc.
- apply a vector polygon to the raster and count the different pixels within the polygon. This will be a "circle" with a radius of 3 km, for instance. Also make some histograms, to see which pixels are dominant within a quadrant or 1/8 of the area (see the sketch after this list).
- do some maths in JS with the data (about this I have no concerns)
- visualize the raster image on a map including the vector polygon
- push the code to production for multiple users.
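You mention below that rasterio already worked for you in a notebook; for reference, the polygon clip and pixel count could look roughly like this in Python (the file name and the sample coordinates, which must be in the raster's CRS, are hypothetical):

import numpy as np
import rasterio
from rasterio.mask import mask
from shapely.geometry import Point, mapping

# Hypothetical file name and sample location (in the raster's CRS).
with rasterio.open("landcover.tif") as src:
    circle = Point(310000, 6150000).buffer(3000)   # 3 km radius polygon
    clipped, _ = mask(src, [mapping(circle)], crop=True)
    nodata = src.nodata

pixels = clipped[0].ravel()
if nodata is not None:
    pixels = pixels[pixels != nodata]
classes, counts = np.unique(pixels, return_counts=True)   # class histogram
for c, n in zip(classes, counts):
    print(f"landcover class {c}: {n} pixels")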
My skill set and (limited) experience are with the following, which are also my preferences for solving the challenge:
- Node + Express (JavaScript)
- node-gdal library
- PostgreSQL
- Heroku for production
- Mapbox for visualizing the maps
So far I have run into the following issues in finding the right architecture or working code:
- the node-gdal library can read only from the file system
- geotiff.js can do the job, but it has less documentation and I cannot see how to handle the specific tasks later
- PostGIS should be very powerful but is a bit cumbersome to set up, and I am not sure it is worth it just to feed in a single TIFF raster
- rasterio in Python does very nice work and I got it going in a Jupyter Notebook, but I have no experience with Flask or similar; I would prefer Node.js
- Turf.js can do a lot, but mostly for vectors; I could not find modules for raster analysis
Thank you for your suggestions.
I am new to MeshLab and am trying to reconstruct an STL file which has a number of issues, such as over 700 self-intersecting faces, non-manifold edges and flipped triangles. The part I am trying to fix is a sunglasses frame, just to give you some perspective. I was able to remove the flipped triangles using Netfabb, which reduced the number of self-intersecting faces. I attempted to fix the rest of the problems using features under MeshLab's "Cleaning and Repairing" tab, such as removing non-manifold edges and self-intersecting faces; however, I was unable to fix all of the problems with those features alone. I therefore decided to convert the mesh into a point cloud, calculate normals from the "Sampling" tab, and try Surface Reconstruction: Poisson. This method gave me a mesh that looked like a big blob instead of the detailed part I was trying to achieve.
Can anyone please give me a step-by-step outline of how I can convert the point cloud back into a mesh with surface reconstruction while maintaining the part's dimensional integrity and avoiding self-intersecting faces? Or if you have any other suggestions, I'd be more than happy to listen.
Thank you!
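For what it's worth, the "big blob" is typical of Poisson reconstruction: it always produces a closed surface and invents geometry where the point density is low, so trimming the low-density vertices afterwards usually recovers the detail. Here is a minimal Open3D sketch of that pipeline, as an alternative to doing it in MeshLab; the file names and parameters (normal search radius, octree depth, 5% density cut-off) are assumptions to tune on your data:

import numpy as np
import open3d as o3d

# Hypothetical file name for the point cloud sampled from the mesh.
pcd = o3d.io.read_point_cloud("frame_points.ply")

# Poisson needs consistently oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Poisson always closes the surface; remove the hallucinated low-density part.
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))
o3d.io.write_triangle_mesh("frame_repaired.ply", mesh)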