Stitch Multiple Sequentially Taken Images - python-3.x

I have a set of images of a pipe taken by a camera that rotates 360 degrees and captures an image every x degrees, which induces a constant overlap between the images. I need to stitch these images together so that they can be analysed as one big image, perhaps a panoramic image or orthomosaic. Here are a couple of examples:
Because it's a pipe, there's a slight curve in each image, so my first thought is to "unroll" each image. After that, perhaps all of the unrolled images can be stitched together.
I have tried unwrapping using a "six-point" method (you define the cross-section of the cylinder with 3 points from the top and 3 from the bottom), like unwrapping a sticker on a bottle, and the result is not terrible (it can be improved, of course). Here's what the "unwrapping" looks like:
Second, SIFT is not working well for stitching, I think because the images are quite similar in nature. But I am not sure how best to stitch them, and this is where I need help. I need to align the crests of the pipe and stitch the images seamlessly - there could be up to 90 or 120 images. Would love any help here. Thanks.
This is the output from a piece of software, which is quite bad:
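For reference, my SIFT attempt on the unwrapped images looks roughly like this (simplified; the filenames are placeholders):

import cv2
import numpy as np

img1 = cv2.imread("unwrapped_01.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("unwrapped_02.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching between consecutive images
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
# H comes out unstable: the corrugations repeat, so many "good" matches
# land on the wrong crest and the panorama drifts.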

Related

Merging satellite images and retaining coordinates

Thanks for dropping in here.
I'm currently working on a project, and I'm not that strong with Python yet, so I was hoping for some constructive feedback on this question.
I have a dataset containing core samples, all stored with sample id, latitude, longitude, content and other data irrelevant to this question.
Now I've imported this dataset and sliced it the way I want. For the images, I'm using the rasterio module to open two satellite images that cover the region, and the utm module to convert back and forth between lat/long -> UTM -> pixel values (which also seems to be giving me strange coordinates at some points).
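Roughly, the conversion I'm doing for each sample looks like this (simplified; the filename is a placeholder):

import rasterio
import utm

with rasterio.open("sentinel_tile.tif") as src:
    # lat/long of one core sample -> UTM easting/northing -> raster row/col
    easting, northing, zone, letter = utm.from_latlon(59.91, 10.75)
    row, col = src.index(easting, northing)   # only valid if the raster CRS is that UTM zone
    print(row, col)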
Annoyingly enough, the two Sentinel-2 images are cut right across the center of the map.
As I'm drawing bounding boxes around where the samples were taken, this is a problem, because I need to extract a 10x10 pixel cut-out of each region, and a lot of the samples end up without a proper cut-out.
So I thought why not merge the two images together into one large rectangular bit. But I still need to retain the meta data with the UTM coordinates.
How would you suggest I proceed? Can it be done in an easy way? Is there another angle on this that I've overlooked?
Thank you for your time.
I'm not sure I completely understand the question, but if you are simply trying to merge 2 images, have you looked at the command line tool gdal_merge.py?
A very simple example:
gdal_merge.py -o merged_image.tif image1.tif image2.tif
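If you'd rather stay inside Python, a rough equivalent with rasterio's merge function might look like this (filenames are placeholders, and it assumes both tiles share the same CRS and band layout):

import rasterio
from rasterio.merge import merge

sources = [rasterio.open(p) for p in ("image1.tif", "image2.tif")]
mosaic, transform = merge(sources)            # mosaic has shape (bands, rows, cols)

meta = sources[0].meta.copy()
meta.update(height=mosaic.shape[1], width=mosaic.shape[2], transform=transform)

with rasterio.open("merged_image.tif", "w", **meta) as dst:
    dst.write(mosaic)                         # geotransform/CRS metadata are preserved

for src in sources:
    src.close()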

Is it possible to cut parts out of a picture and analyze them separately with python?

I am doing some studies on eye vascularization - my project involves a machine which can detect the different blood vessels in the retinal membrane at the back of the eye. What I am looking for is a way to segment the picture and analyze each segment on its own. The segmentation consists of six squares which I want to analyze separately for the density of white pixels.
I would be very thankful for any kind of input; I am pretty new to the programming world and only have a rough concept of how it should work.
Thanks and Cheerio
Sam
[Concept drawing / OCTA picture]
You could probably accomplish this by using numpy to load the image and split it into sections. You could then analyze the sections using scikit-image or opencv (though this could be difficult to get working). To view the image, you can either save it to a file using numpy, or use matplotlib to open it in a new window.
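A minimal sketch of that idea, assuming a grayscale image split into a 2x3 grid of squares (the filename and the white threshold are placeholders):

import numpy as np
from skimage import io

img = io.imread("retina.png", as_gray=True)    # grayscale values in [0, 1]
binary = img > 0.5                             # treat bright pixels as "white"

rows, cols = 2, 3                              # six squares: 2 rows x 3 columns
h, w = binary.shape[0] // rows, binary.shape[1] // cols

for r in range(rows):
    for c in range(cols):
        tile = binary[r * h:(r + 1) * h, c * w:(c + 1) * w]
        density = tile.mean()                  # fraction of white pixels in this tile
        print(f"square ({r}, {c}): white-pixel density = {density:.3f}")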
First of all, please note that in image processing "segmentation" describes the process of grouping neighbouring pixels by context.
https://en.wikipedia.org/wiki/Image_segmentation
What you want to do can be done in various ways.
The most common way is by using ROIs or AOIs (region/area of interest). That's basically some geometric shape like a rectangle, circle, polygon or similar defined in image coordinates.
The image processing is then restricted to the pixels within that region, so you don't slice your image into pieces; you restrict your evaluation to specific areas (a minimal example is sketched at the end of this answer).
Another way, like you suggested, is to cut the image into pieces and process them one by one. Those sub-images are usually created using ROIs.
A third option, which is rather limited but sufficient for simple tasks like yours, is accessing pixels directly using coordinate offsets and several nested loops.
Just google "python image processing" in combination with "library", "roi", "cropping", "sliding window", "subimage", "tiles" or "slicing" and you'll get tons of information...
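A minimal sketch of the ROI idea described above (the coordinates and the random stand-in image are just placeholders):

import numpy as np

image = np.random.randint(0, 256, (480, 640), dtype=np.uint8)   # stand-in image

x, y, w, h = 100, 50, 200, 150      # ROI: top-left corner (x, y), width, height
roi = image[y:y + h, x:x + w]       # a NumPy view, so no pixel data is copied

print("mean intensity inside the ROI:", roi.mean())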

Turn an image into lines and circles

I need to be able to turn a black-and-white image into a series of lines (start and end points) and circles (start point, radius). I have a "pen width" that's constant.
(I'm working with a screen that can only work with this kind of graphics).
Problem is, I don't want to overcomplicate things - I could represent any image with loads of small lines, but it would take a lot of time to draw, so I basically want to "approximate" the image using those lines and circles.
I've tried several approaches (guessing lines, working area by area, etc.) but none gave reasonable results without using a lot of lines and circles.
Any idea on how to approach this problem?
Thanks in advance!
You don't specify what language you are working in, but I'd suggest OpenCV if possible. If not, most decent CV libraries ought to support the features described here.
You don't say whether the input is already composed of simple shapes (lines and polygons) or not. Assuming it's not, i.e. it's a photo or a frame from a video, you'll need to do some edge extraction to find the lines that you are going to model. Use a Canny or other edge detector to convert the image into an edge map.
I suggest that you then extract circles, as they are the richest feature you can model directly. Consider using a Hough circle transform to locate circles in your edge image. Once you've located them, remove them from the edge image (to avoid duplicating them in the line-processing step below).
Now, for each pixel in the edge image that's 'on', you want to find the longest line segment it's part of. There are a number of algorithms for this; the simplest would be the probabilistic Hough transform (also available in OpenCV), which extracts line segments and gives you control over the minimum length, allowed gaps, etc. You may also want to examine alternatives like LSWMS, which has OpenCV source code freely available.
Once you have extracted the lines and circles, you can plot them into a new image or save the coordinates for your output device.
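The question doesn't name a language, so here is a hedged sketch of that pipeline using OpenCV's Python bindings (the filename and all thresholds are placeholders that will need tuning):

import cv2
import numpy as np

gray = cv2.imread("drawing.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)

# 1. Circles first, via the Hough circle transform.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=150, param2=40, minRadius=5, maxRadius=100)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # Erase each detected circle from the edge image so the line stage
        # does not re-detect it as many short segments.
        cv2.circle(edges, (x, y), r, 0, thickness=5)

# 2. Line segments via the probabilistic Hough transform.
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=20, maxLineGap=5)

# circles give (x, y, radius); segments is None or an array of (x1, y1, x2, y2)
# rows, ready to be drawn or written out for the display device.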

Scaling an image up in Corona SDK without it becoming fuzzy

I am working on a classic RPG that requires a pixelated style of graphics. I want to do this by making a small image and scaling it up. However, when I do this, it gets fuzzy. Is there any way to scale it while keeping a crisp edge for every pixel, or do I just need to make a bigger image?
You cannot scale an image up and expect it to stay crisp if it wasn't made at a big enough resolution in the first place. In your case you would have to make a bigger image and scale it down to produce the small one.
If you do not use the large image all the time, however, you should consider keeping two versions of the same image (one small, one large) for optimization's sake.

Do I need to rectify if camera planes are aligned?

If I am taking images from a pair of cameras whose principal axes (in both cameras) are perpendicular to the baseline, do I need to rectify the images? A typical example would be Bumblebee stereo cameras.
If you can also guarantee that:
the camera axes are parallel (maybe so if bought as a single package like the bumblebee)
you have no lens distortion (probably not)
all the other internal camera parameters are identical
your measurement axis is parallel to your baseline
then you might be able to skip image rectification. Personally I wouldn't.
Just think about lens distortion. Even assuming everything else is equal and aligned, distortion might mess things up. Suppose a feature appears at the edge of one image and at the centre of the other. At the edge it might be distorted a few pixels away from where it should be, while at the centre it appears where it should. Without rectification, your stereoscopic calculation (which assumes straight lines from object to sensor) is going to give you bad results.
It depends what you mean by "rectify". In stereo vision, it is common to ensure that the epipolar lines are aligned too, meaning the i-th row in image 1 corresponds to the i-th row in image 2. An optional step is to reduce the distortion caused by the rectification process.
If you are taking images from a pair of cameras whose principal axes are perpendicular to the baseline, then the epipoles are mapped to infinity (the epipolar lines within each image are parallel). You still need another transform to align the epipolar lines across the two images. You will find this transform in Loop & Zhang's paper, along with the transform that reduces the distortion.
And be careful about lens distortion (see wxffles' answer).
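For completeness, a hedged sketch of aligning the epipolar lines with OpenCV in Python: stereoRectifyUncalibrated implements Hartley's method rather than Loop & Zhang, but serves the same purpose (the filenames are placeholders, and the ORB matching is just a quick way to get correspondences):

import cv2
import numpy as np

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
h, w = img1.shape

# Rough correspondences from ORB feature matching.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Fundamental matrix, then rectifying homographies for each image.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))

rect1 = cv2.warpPerspective(img1, H1, (w, h))   # rows now correspond across
rect2 = cv2.warpPerspective(img2, H2, (w, h))   # the two rectified images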
