Merging satellite images and retaining coordinates - python-3.x

Thanks for dropping in here.
I'm currently working on a project, and I'm not that strong with Python yet, so I was hoping for some constructive feedback on this question.
I have a dataset containing core samples, all stored with sample ID, latitude, longitude, content, and other data irrelevant to this question.
Now I've imported this dataset and sliced it the way I want. For the images I'm using the rasterio module to open two satellite images that cover the region, and the utm module to convert back and forth between lat/long -> UTM -> pixel values (which also seems to throw strange coordinates at some points).
Annoyingly enough, the two Sentinel-2 images are cut right across the center of the map.
As I'm drawing bounding boxes on top of where the samples were taken and need to extract 10x10 pixel cut-outs of each region, this is a problem: a lot of the samples don't get a proper cut-out.
So I thought: why not merge the two images into one large rectangular image? But I still need to retain the metadata with the UTM coordinates.
How would you suggest I proceed? Can it be done in an easy way? Is there another angle on this I've overlooked?
Thank you for your time.

I'm not sure I completely understand the question, but if you are simply trying to merge two images, have you looked at the command-line tool gdal_merge.py?
A very simple example:
gdal_merge.py -o merged_image.tif image1.tif image2.tif
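If you'd rather stay in Python, rasterio (which you're already using) has a merge function that mosaics the tiles and returns the affine transform of the combined extent, so the UTM georeferencing is retained. Below is a minimal sketch, assuming both tiles share a CRS; the filenames and the coordinates at the end are placeholders:

# Minimal sketch: mosaic two GeoTIFF tiles while keeping georeferencing.
import rasterio
from rasterio.merge import merge

sources = [rasterio.open(path) for path in ["tile_a.tif", "tile_b.tif"]]

# merge() returns the combined pixel data plus the affine transform
# describing the merged extent.
mosaic, out_transform = merge(sources)

# Copy one tile's metadata and update size and transform for the mosaic.
out_meta = sources[0].meta.copy()
out_meta.update(height=mosaic.shape[1], width=mosaic.shape[2],
                transform=out_transform)

with rasterio.open("merged_image.tif", "w", **out_meta) as dst:
    dst.write(mosaic)
for src in sources:
    src.close()

# The merged file then maps UTM coordinates straight to pixel indices,
# with no separate bookkeeping (placeholder coordinates):
with rasterio.open("merged_image.tif") as src:
    row, col = src.index(600000.0, 6500000.0)

Either route keeps the metadata: gdal_merge.py writes the merged geotransform into the output file as well.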

Related

Stitch Multiple Sequentially Taken Images

I have a set of images of a pipe taken by a camera that rotates 360 degrees and captures an image every x degrees, which induces a constant overlap between the images. I need to stitch these images together so that they can be analysed as one big image, perhaps a panoramic image or orthomosaic. Here are a couple of examples:
Because it's a pipe, there's a slight curve in each image, so I am thinking we can first "unroll" each image. After that, perhaps all those images can be stitched together.
I have tried unwrapping using the "six-point" method (you define the cross-section of the cylinder with 3 points from the top and 3 from the bottom), like unwrapping a sticker on a bottle, which is not terrible (it can be improved, of course). Here's what the "unwrapping" looks like:
Second, SIFT is not working well for stitching; I think it's because the images are quite similar in nature. But I am not sure how best to stitch them. This is where I need help: I need to align the crests of the pipe and stitch the images seamlessly - there could be up to 90 or 120 images. Would love any help here. Thanks.
This is the output from a piece of software, which is quite bad:
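For reference, here is a minimal sketch of the pairwise SIFT matching described above, assuming OpenCV 4.4+ (where SIFT ships in the main package) and two hypothetical, already-unwrapped frames. Since the rotation step is constant, a restricted motion model estimated with RANSAC tends to be more robust on near-identical images than a full homography:

# Minimal sketch: pairwise SIFT matching between two unwrapped frames.
import cv2
import numpy as np

img1 = cv2.imread("unwrapped_01.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("unwrapped_02.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test - important on repetitive pipe texture, where many
# descriptors look alike and nearest-neighbour matches are ambiguous.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]

# Estimate a restricted (rotation + translation + scale) transform that
# maps frame 2 into frame 1's coordinates.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
pts2 = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
M, inliers = cv2.estimateAffinePartial2D(pts2, pts1, method=cv2.RANSAC)
print("estimated shift between frames:", M[0, 2], M[1, 2])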

How can I extract text from a specific area of an image with python?

I am trying to extract text from an image, but only within a certain area of the image rather than the entire image.
I have already been able to detect where the objects of interest are and get their coordinates, though I do not know where to start when extracting text from a specific area.
I'm using the code from this example:
https://www.codingame.com/playgrounds/38470/how-to-detect-circles-in-images
It is able to detect the circles, but I want to take it one step further and extract the numbers from the circles and tag them to their corresponding coordinate.
I'm using this example to learn how to do something similar myself, but I'm really more interested in restricting the search to a set area.
Most image processing libraries support the concept of ROIs (region of interest) or AOIs (area of interest).
The idea is to restrict processing to a subset of pixels, usually selected by defining geometric shapes like rectangles, polygons, or circles within the image coordinate system.
You can fix this by first cropping the image to your coordinates and then extracting text from the crop.
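A minimal sketch of that crop-then-OCR approach, assuming pytesseract is installed; the filename, centre, and radius are placeholders standing in for one detected circle:

# Minimal sketch: crop a detected region and OCR only that area.
import cv2
import pytesseract

image = cv2.imread("circles.png")   # hypothetical input image
x, y, r = 120, 80, 30               # placeholder circle centre and radius

# Numpy slicing crops the ROI (rows first, then columns).
roi = image[y - r:y + r, x - r:x + r]
roi = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)

# Page segmentation mode 7 treats the ROI as a single text line, and the
# whitelist keeps the output to digits.
text = pytesseract.image_to_string(
    roi, config="--psm 7 -c tessedit_char_whitelist=0123456789")
print(text.strip())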

Is it possible to cut parts out of a picture and analyze them separately with python?

I am doing some studies on eye vascularization - my project involves a machine which can detect the different blood vessels in the retinal membrane at the back of the eye. What I am looking for is a way to segment the picture and analyze each segment on its own. The segmentation consists of six squares which I want to analyze separately for the density of white pixels.
I would be very thankful for every kind of input; I am pretty new to the programming world and only have a bare concept of how it should work.
Thanks and Cheerio
Sam
[Images: concept drawing, OCTA picture]
You could probably accomplish this by using numpy to load the image as an array and split it into sections. You could then analyze the sections using scikit-image or opencv (though this could be difficult to get working). To view the image, you can either save it to a file using numpy, or use matplotlib to open it in a new window.
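A minimal sketch of that approach, assuming the six squares form an even 2x3 grid and using an arbitrary brightness threshold; both are assumptions, not details from the question:

# Minimal sketch: split an image into a 2x3 grid of tiles and measure
# the density of white pixels in each tile.
import numpy as np
from skimage import io

image = io.imread("retina.png", as_gray=True)   # hypothetical file
rows, cols = 2, 3                               # assumed grid layout

h, w = image.shape
tile_h, tile_w = h // rows, w // cols

for i in range(rows):
    for j in range(cols):
        tile = image[i * tile_h:(i + 1) * tile_h,
                     j * tile_w:(j + 1) * tile_w]
        # Fraction of pixels above an (assumed) brightness threshold.
        density = np.mean(tile > 0.5)
        print(f"tile ({i}, {j}): white-pixel density = {density:.3f}")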
First of all, please note that in image processing "segmentation" describes the process of grouping neighbouring pixels by context.
https://en.wikipedia.org/wiki/Image_segmentation
What you want to do can be done in various ways.
The most common way is by using ROIs or AOIs (region/area of interest). That's basically some geometric shape like a rectangle, circle, polygon or similar defined in image coordinates.
The image processing is then restricted to only process pixels within that region. So you don't slice your image into pieces but you restrict your evaluation to specific areas.
Another way, like you suggested, is to cut the image into pieces and process them one by one. Those sub-images are usually created using ROIs.
A third option, which is rather limited but sufficient for simple tasks like yours, is accessing pixels directly using coordinate offsets and several nested loops.
Just google "python image processing" in combination with "library" "roi" "cropping" "sliding window" "subimage" "tiles" "slicing" and you'll get tons of information...
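To make the ROI idea concrete, here is a minimal sketch using a mask, with a filled circle standing in for any of the shapes mentioned above (OpenCV assumed, filename hypothetical):

# Minimal sketch: restrict an evaluation to a non-rectangular ROI using
# a mask, without slicing the image apart.
import cv2
import numpy as np

image = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# The mask is 255 inside the region and 0 elsewhere.
mask = np.zeros(image.shape, dtype=np.uint8)
cv2.circle(mask, (100, 100), 40, 255, -1)   # filled circle as the ROI

# Only the pixels selected by the mask enter the evaluation.
roi_pixels = image[mask == 255]
print("mean intensity inside the ROI:", roi_pixels.mean())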

Turn an image into lines and circles

I need to be able to turn a black and white image into a series of lines (start, end points) and circles (start point, radius). I have a "pen width" that's constant.
(I'm working with a screen that can only work with this kind of graphics).
The problem is, I don't want to overcomplicate things - I could represent any image with loads of small lines, but it would take a lot of time to draw, so I basically want to "approximate" the image using those lines and circles.
I've tried several approaches (guessing lines, working area by area, etc.) but none gave reasonable results without using a lot of lines and circles.
Any idea on how to approach this problem?
Thanks in advance!
You don't specify what language you are working in here, but I'd suggest OpenCV if possible. If not, then most decent CV libraries ought to support the features that I'm about to describe here.
You don't say if the input is already composed of simple shapes (lines and polygons) or not. Assuming that it's not, i.e. it's a photo or a frame from a video for example, you'll need to do some edge extraction to find the lines that you are going to model. Use a Canny or another edge detector to convert the image into a binary edge image.
I suggest that you then extract circles, as they are the richest feature that you can model directly. You should consider using a Hough circle transform to locate circles in your edge image. Once you've located them, you need to remove them from the edge image (to avoid duplicating them in the line processing section below).
Now, for each pixel in the edge image that's 'on', you want to find the longest line segment that it's part of. There are a number of algorithms for doing this; the simplest would be the probabilistic Hough transform (also available in OpenCV), which extracts line segments and gives you control over the minimum length, allowed gaps, etc. You may also want to examine alternatives like LSWMS, which has OpenCV source code freely available.
Once you have extracted the lines and circles you can plot them into a new image or save the coordinates for your output device.
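A minimal sketch of that pipeline with OpenCV; the filename and all thresholds are placeholder values you would tune for your image:

# Minimal sketch: Canny edges -> Hough circle transform -> erase the
# found circles -> probabilistic Hough transform for line segments.
import cv2
import numpy as np

gray = cv2.imread("drawing.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)

# Circles first: the richest feature that can be modelled directly.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=150, param2=40, minRadius=5, maxRadius=100)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"circle: centre=({x}, {y}) radius={r}")
        # Erase the circle from the edge image so its pixels are not
        # re-detected as short line segments below.
        cv2.circle(edges, (int(x), int(y)), int(r), 0, 3)

# Probabilistic Hough transform: yields segments with endpoints and gives
# control over minimum length and allowed gaps.
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                           minLineLength=20, maxLineGap=5)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        print(f"line: ({x1}, {y1}) -> ({x2}, {y2})")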

Could I create a photorealistic 3D model of an object from photographs of the object taken from different angles?

Say I have a very large number of photographs of an object, all from different angles, so that one can view the object from whatever angle is desired through these photographs.
Is there a way to take these photographs and combine them into a photorealistic 3D model of the object that could then be displayed just like a traditional 3D model and moved around/rotated?
The reason I am asking is that I am working on a project where a traditional 3D model will not do and we need photorealistic quality, but we would still like the ability to rotate, zoom and pan around the object.
Thanks for your help
This sounds very much like Photosynth. Check out the demo at TED in 2007 for a nice example using images sourced from Flickr to build a model of Notre Dame cathedral (about halfway through the presentation).
Here is a little answer to your big big question...
You will have to find the edges and interpret them as a boundary (like a series of different spline cross-sections),
then interpolate enough missing points to give the desired resolution, which could be done by using the parallel points in the series a0, a1, a2... as points along a Bezier path (like a string wound around the model).
