Get a single TIFF using multiple TIFFs and masks - python-3.x

I'm working with satellite imagery (from Sentinel-2), in particular with cloud detection and cloud cleaning.
I have a batch of images of the same area, taken in different periods:
From these images, you can see that the position of the clouds is always different.
I also have the mask for each image, where the black areas represent clouds:
These masks are not perfect, but this is not a problem.
What I want to do is use each mask to keep the cloud-free (white) portions of its image, i.e. the land, and then fill the cloudy (black) portions with the corresponding pixels from another image that is cloud-free in those areas, patching each "hole" with clean data.
Imagery is in TIFF format, while masks are in JPG format.
I'm using Python with libraries like Rasterio, numpy and scikit-image, so a Pythonic solution would be appreciated.
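For illustration, here is a minimal sketch of that compositing step with rasterio, numpy and scikit-image. All file names are hypothetical, and it assumes the two scenes are co-registered and the masks have the same dimensions as the imagery:
import numpy as np
import rasterio
from skimage.io import imread
# Read two co-registered scenes; rasterio returns (bands, rows, cols) arrays.
with rasterio.open('scene_a.tif') as src:
    profile = src.profile
    img_a = src.read()
with rasterio.open('scene_b.tif') as src:
    img_b = src.read()
# The mask is a JPG where black (0) marks clouds; threshold to tolerate JPG compression noise.
mask_a = imread('mask_a.jpg', as_gray=True) > 0.5   # True where cloud-free
# Where scene A is cloudy, fall back to scene B's pixels (ideally cloud-free there).
composite = np.where(~mask_a[np.newaxis, :, :], img_b, img_a)
with rasterio.open('composite.tif', 'w', **profile) as dst:
    dst.write(composite)
With more than two scenes you can repeat the same np.where step, patching any remaining cloudy pixels from each additional image in turn.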

Related

Creating a whole-brain mask from 2 added NIfTI images

In my analysis, I want to create a whole-brain mask for all my subjects summed together and extract fractional anisotropy from it. I started by segmenting each subject's structural image into white and grey matter, added the two NIfTI images (grey matter + white matter) of the same subject together with fslmaths -add, and now want to create a binarized mask from the result with fslmaths (in order to later add all the masks together into a general mask across subjects).
I also set a threshold as an option. The result of this command is a black NIfTI image, no matter how I set the threshold level. Can you give me any advice? I would assume the NIfTI image resulting from the first step (the addition) is the problem, but I can't get any further.
Thanks!
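For reference, the add-then-binarize step can be reproduced in Python with nibabel, which makes it easy to inspect intermediate intensities; a common cause of an all-black result is a threshold set above the summed image's actual value range. The file names and the 0.5 threshold below are assumptions:
import nibabel as nib
import numpy as np
# Hypothetical file names for one subject's tissue segmentations.
gm = nib.load('gm.nii.gz')
wm = nib.load('wm.nii.gz')
summed = gm.get_fdata() + wm.get_fdata()
print('intensity range:', summed.min(), summed.max())  # sanity-check before thresholding
# Binarize: 1 where the summed tissue value exceeds the threshold, else 0.
mask = (summed > 0.5).astype(np.uint8)
nib.save(nib.Nifti1Image(mask, gm.affine), 'brain_mask.nii.gz')
If the printed range tops out below your fslmaths threshold, that would explain the black output.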

Is there a way to get the hue value of an image using Python?

I am trying to build a machine learning algorithm for which I need to convert pictures to their binary representations. I am using the Pillow library to get the data from the images. Since the performance of the algorithm is not great, I need extra parameters to train the network more thoroughly, and one of those extra parameters might be hue.
So is there a method in Python that gives me hue value of an image?
I am not sure what you are really trying to do, as Christoph says in the comments, but you can find the mean hue of all the pixels in an image with PIL like this:
from PIL import Image, ImageStat
# Load image, convert to HSV and split the channels, retain H, discard S and V
H, _, _ = Image.open('image.png').convert('RGB').convert('HSV').split()
# Print the mean Hue across all pixels
print(ImageStat.Stat(H).mean)
Note that I converted to RGB first to avoid problems that may arise if you try this with palette images, CMYK images, images with transparency, or greyscale images.
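If you need hue per pixel rather than a single mean, for example as an extra feature map for your network, you can pull the H channel into a numpy array; this is a small sketch along the same lines as the snippet above:
import numpy as np
from PIL import Image
# Extract the hue channel as a 2-D array.
H, _, _ = Image.open('image.png').convert('RGB').convert('HSV').split()
hue = np.asarray(H)   # uint8, shape (height, width); PIL scales hue to 0-255, not 0-360
print(hue.shape, hue.mean())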

How can I extract text from a specific area of an image with python?

I am trying to extract text from an image, but within a certain area of the image and not the entire image.
I have already been able to detect where the objects of interest are and get their coordinates, though I do not know where to start when it comes to extracting text from a specific area.
I'm using the code from this example:
https://www.codingame.com/playgrounds/38470/how-to-detect-circles-in-images
It is able to detect the circles, but I want to take it one step further and extract the numbers from the circles and tag them to their corresponding coordinate.
I'm using this example to learn how to do something similar myself, but I'm really more interested in restricting the search to a set area.
Most image processing libraries support the concept of ROIs (region of interest) or AOIs (area of interest).
The idea is to restrict processing to a subset of pixels that are usually selected by defining geometric shapes like rectangles, polygons, circles within the image coordinate system.
You can achieve this by first cropping the image to your coordinates and then extracting text from the cropped region, as in the sketch below.
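As a hedged sketch of that crop-then-OCR approach, assuming OpenCV for the imaging and pytesseract for the text extraction (the file name and coordinates below are placeholders):
import cv2
import pytesseract
img = cv2.imread('circles.png')        # hypothetical input image
x, y, r = 150, 200, 40                 # hypothetical circle centre and radius
# Crop a square ROI around the circle, clamped to the image bounds.
roi = img[max(y - r, 0):y + r, max(x - r, 0):x + r]
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
# OCR only the cropped region; psm 7 treats it as a single line of text,
# and the whitelist restricts recognition to digits.
text = pytesseract.image_to_string(
    gray, config='--psm 7 -c tessedit_char_whitelist=0123456789'
)
print(text.strip())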

DICOM Image is too dark with ITK

I am trying to read an image with ITK and display it with VTK.
But there is a problem that has been haunting me for quite some time.
I read the images using the classes itkGDCMImageIO and itkImageSeriesReader.
After reading, I can do two different things:
1.
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslice to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with the colors: some are shown very bright, others are so dark you can barely see them.
2.
The second scenario is the registration pipeline. Here I read the image as before, then use the classes shown in the ITK Software Guide chapter about registration. Then I resample the image and use itkImageSeriesWriter.
And that's when the problem appears. After writing the image to a file, I compare the new image with the image I used as input, in the XMedcon software. If the image I wrote was shown too bright in my software, there are no changes when I compare the two in XMedcon. Otherwise, if the image was too dark in my software, it appears all messed up in XMedcon.
I noticed, when comparing both images (the original and the new one), that in both cases there are changes in modality, pixel dimensions and glmax.
I suppose the problem is with the glmax, as the major changes occur with the darker images.
I really don't know what to do. Does this have something to do with color level/window? The strangest thing is that all the images are very similar, with identical tags, and only some of them show errors when displayed or written.
I'm not familiar with the particulars of VTK/ITK specifically, but it sounds to me like the problem is more general than that. Medical images have a high dynamic range and often the images will appear very dark or very bright if the window isn't set to some appropriate range. The DICOM tags Window Center (0028, 1050) and Window Width (0028, 1051) will include some default window settings that were selected by the modality. Usually these values are reasonable, but not always. See part 3 of the DICOM standard (11_03pu.pdf is the filename) section C.11.2.1.2 for details on how raw image pixels are scaled for display. The general idea is that you'll need to apply a linear scaling to the images to get appropriate pixel values for display.
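That scaling is not ITK/VTK-specific; as a rough illustration, here is a simplified linear windowing in Python with pydicom (the file name is a placeholder, and real data may need the full formula from C.11.2.1.2 rather than this linear ramp):
import numpy as np
import pydicom
from pydicom.multival import MultiValue
ds = pydicom.dcmread('slice.dcm')                 # hypothetical file
pixels = ds.pixel_array.astype(np.float64)
# Apply Rescale Slope/Intercept first, if present.
pixels = pixels * float(getattr(ds, 'RescaleSlope', 1)) + float(getattr(ds, 'RescaleIntercept', 0))
center, width = ds.WindowCenter, ds.WindowWidth   # tags (0028,1050) and (0028,1051)
if isinstance(center, MultiValue):                # some files store several windows; take the first
    center, width = center[0], width[0]
center, width = float(center), float(width)
lo, hi = center - width / 2, center + width / 2
# Map the window to 0-255 for display.
display = (np.clip((pixels - lo) / (hi - lo), 0, 1) * 255).astype(np.uint8)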
What pixel types do you use? In most cases it's simpler to use a floating-point type while working in ITK, but raw medical images are often stored as short integers, so that could be your problem.
You should also write the image to disk after each step (in MHD format, for example) and inspect it with a viewer that's known to work properly, such as vv (http://www.creatis.insa-lyon.fr/rio/vv). You could also post the images here, along with your code, for further review.
Good luck!
For what you describe as your first issue:
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslice to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with the colors: some are shown very bright, others are so dark you can barely see them.
I suggest the following: check your window/level settings in VTK; they probably aren't adequate for your images. If they are abdominal tomographies, window = 350 and level = 50 should be a good setting.
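In VTK's Python bindings that would look something like this sketch (the input wiring is hypothetical and depends on your pipeline):
import vtk
mapper = vtk.vtkImageMapper()
# mapper.SetInputData(image)   # 'image' would be the vtkImageData from itkImageToVTKImageFilter
mapper.SetColorWindow(350)     # width of the displayed intensity range
mapper.SetColorLevel(50)       # centre of the displayed intensity range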

detect color space with openCV

How can I see the color space of my image with OpenCV?
I would like to be sure it is RGB before converting it to another color space using the cvCvtColor() function.
Thanks!
Unfortunately, OpenCV doesn't provide any indication of the color space in the IplImage structure, so if you blindly pick up an IplImage from somewhere there is just no way to know how it was encoded. Furthermore, no algorithm can definitively tell you whether an image should be interpreted as HSV or RGB; it's all just a bunch of bytes to the machine. I recommend you wrap your IplImages in another struct (or even a C++ class with templates!) to help you keep track of this information. If you're really desperate and you're dealing only with certain types of images (outdoor scenes, offices, faces, etc.), you could try computing some statistics on your images (e.g. build histogram statistics for natural RGB images and some for natural HSV images), and then try to classify your totally unknown image by checking which color space it is closer to.
txandi makes an interesting point. OpenCV has a BGR colorspace which is used by default. This is similar to the RGB colorspace except that the B and R channels are physically switched in the image. If the physical channel ordering is important to you, you will need to convert your image with this function: cvCvtColor(defaultBGR, imageRGB, CV_BGR2RGB).
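In the modern Python API the same conversion is a one-liner; this is a minimal sketch with a placeholder file name:
import cv2
bgr = cv2.imread('photo.jpg')                  # cv2 always loads pixel data in BGR order
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)     # convert before using RGB-expecting libraries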
As rcv said, there is no method to programmatically detect the color space by inspecting the three color channels, unless you have a priori knowledge of the image content (e.g., there is a marker in the image whose color is known). If you will be accepting images from unknown sources, you must allow the user to specify the color space of their image. A good default would be to assume RGB.
If you modify any of the pixel colors before display, and you are using a non-OpenCV viewer, you should probably use cvCvtColor(src,dst,CV_BGR2RGB) after you have finished running all of your color filters. If you are using OpenCV for the viewer or will be saving the images out to file, you should make sure they are in BGR color space.
The IplImage struct has a field named colorModel consisting of 4 chars. Unfortunately, OpenCV ignores this field. But you can use this field to keep track of different color models.
I basically split the channels and display each one to figure out the color space of the image I'm using. It may not be the best way, but it works for me.
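A minimal sketch of that channel-splitting check in the Python API (the file name is a placeholder):
import cv2
img = cv2.imread('photo.jpg')
b, g, r = cv2.split(img)        # OpenCV's default channel order is B, G, R
for name, chan in zip('BGR', (b, g, r)):
    cv2.imshow(name, chan)      # eyeball each channel to sanity-check the ordering
cv2.waitKey(0)
cv2.destroyAllWindows()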
For a detailed explanation, you can refer to the link below.
https://dryrungarage.wordpress.com/2018/03/11/image-processing-basics/
