How to add PNG files - colors

Given n layers, each represented by a PNG file, that overlap one another: how do you add (composite) them correctly with respect to transparency, the alpha channel, and color?
Greetings
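
What the question describes is standard alpha compositing. A minimal sketch using Pillow's Image.alpha_composite, which implements the usual "over" operator; the file names are placeholders, and all layers are assumed to be the same size:

```python
from PIL import Image

# Hypothetical layer files, listed bottom to top; all the same size.
layer_files = ["layer0.png", "layer1.png", "layer2.png"]

# Composite each higher layer over the accumulated result using the
# standard "over" operator (Pillow handles the alpha math internally).
result = Image.open(layer_files[0]).convert("RGBA")
for name in layer_files[1:]:
    layer = Image.open(name).convert("RGBA")
    result = Image.alpha_composite(result, layer)

result.save("composite.png")
```

Note that compositing is not commutative, so the stacking order of the layers matters.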

Related

Get a single TIFF using multiple TIFFs and masks

I'm working with satellite imagery (from Sentinel-2), in particular with cloud detection and cloud cleaning.
I have a batch of images of the same area taken at different times, and the position of the clouds differs from one image to the next.
I also have a mask for each image, where the black areas represent clouds.
These masks are not perfect, but that is not a problem.
What I want to do is use the masks to keep the cloud-free (white) land areas, cut out the cloudy (black) areas, and then fill each "hole" with the corresponding cloud-free pixels from another image of the same area.
The imagery is in TIFF format, while the masks are in JPG format.
I'm using Python with libraries like Rasterio, NumPy, and scikit-image, so a Pythonic solution would be appreciated.
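
A minimal sketch of that fill step with Rasterio, NumPy, and scikit-image, assuming the two scenes are co-registered and the masks align pixel-for-pixel with the imagery; the file names and the 0.5 mask threshold are placeholder assumptions:

```python
import numpy as np
import rasterio
from skimage.io import imread

# Hypothetical file names for two acquisitions of the same area.
with rasterio.open("scene_a.tif") as src:
    img_a = src.read()          # shape: (bands, rows, cols)
    profile = src.profile
with rasterio.open("scene_b.tif") as src:
    img_b = src.read()

# JPEG masks: black = cloud. Threshold to a boolean "is cloud" array.
cloud_a = imread("mask_a.jpg", as_gray=True) < 0.5
cloud_b = imread("mask_b.jpg", as_gray=True) < 0.5

# Fill cloudy pixels of scene A with pixels of scene B that are
# cloud-free there; pixels cloudy in both scenes stay unchanged.
fill = cloud_a & ~cloud_b
result = img_a.copy()
result[:, fill] = img_b[:, fill]

with rasterio.open("filled.tif", "w", **profile) as dst:
    dst.write(result)
```

With more than two acquisitions, the same fill can be repeated scene by scene until no cloudy pixels remain.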

Skimage RAG merging for gray images

Can the skimage RAG support grayscale images? For example, the RAG merging example states: "This example constructs a Region Adjacency Graph (RAG) and progressively merges regions that are similar in color." But can we merge regions in terms of pixel intensity for grayscale images?
Yes, the RAGs are very flexible. You can see the source code for rag_mean_color here; unfortunately, you can see that 3 channels are assumed in this line. It would not be hard to change it to vary according to the number of channels in the input image. In the meantime, though, you can convert a grayscale image to a "color" image (the grayscale channel repeated three times) using skimage.color.gray2rgb, and then use that example directly.
Note also other RAG examples in the documentation, such as those based on region boundary values.
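
A small sketch of that gray2rgb workaround on the scikit-image sample camera image; the segment count and merge threshold are illustrative values, and on older scikit-image versions the graph module lives at skimage.future.graph:

```python
from skimage import data, color, segmentation
from skimage import graph  # skimage.future.graph on older versions

gray = data.camera()              # sample grayscale image
rgb = color.gray2rgb(gray)        # repeat the single channel three times

# Over-segment, build the RAG on mean "color" (here effectively mean
# intensity, identical in all three channels), then merge similar regions.
labels = segmentation.slic(rgb, n_segments=400, compactness=30, start_label=1)
rag = graph.rag_mean_color(rgb, labels)
merged = graph.cut_threshold(labels, rag, 29)
```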

Scikit-learn, image classification

This example shows how to classify images with scikit-learn:
http://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html
However, it is important that all the images have the same size (width and height), as noted in the comments.
How can I modify this code to allow classification of images with different sizes?
You will need to define your own feature extraction.
In the example above, every pixel represents a feature. If your images have different sizes, the most trivial (but certainly not the best) thing you can do is pad all images to the size of the largest one with, for example, white pixels.
Here is an example of how to add borders to an image.
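
A minimal NumPy sketch of that padding approach; the image shapes and the white fill value 255 are illustrative assumptions:

```python
import numpy as np

def pad_to_size(img, height, width, fill=255):
    """Pad a 2-D grayscale image with white pixels so it becomes
    height x width (the image is placed in the top-left corner)."""
    out = np.full((height, width), fill, dtype=img.dtype)
    out[:img.shape[0], :img.shape[1]] = img
    return out

# Hypothetical list of differently sized grayscale images.
images = [np.zeros((8, 8), np.uint8), np.zeros((10, 6), np.uint8)]
h = max(im.shape[0] for im in images)
w = max(im.shape[1] for im in images)
padded = [pad_to_size(im, h, w) for im in images]

# Flatten each padded image into one feature vector per sample,
# matching what the scikit-learn digits example expects.
X = np.array([im.ravel() for im in padded])
```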

GPUImage blend two images with mask image

I'm looking to blend two images together using a third masking (black and white) image.
Just like you would do in Photoshop: with two layers and a mask on the top one, where the mask is white you see the top layer, and where it is black the top layer becomes transparent so the lower layer is visible. Do I need to create a filter with three inputs, or can I add the black-and-white image to one of my two images as an alpha channel?
Thanks
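
GPUImage specifics aside, the blend the question describes is plain per-pixel linear interpolation between the two layers, weighted by the mask. A minimal NumPy sketch of that math; the array names and shapes are assumptions for illustration:

```python
import numpy as np

def mask_blend(top, bottom, mask):
    """top, bottom: float arrays of shape (H, W, 3) in [0, 1];
    mask: float array of shape (H, W) in [0, 1], white = 1, black = 0."""
    a = mask[..., np.newaxis]          # broadcast over the color channels
    return a * top + (1.0 - a) * bottom

# Quick check with random layers and an all-white mask:
rng = np.random.default_rng(0)
top = rng.random((4, 4, 3))
bottom = rng.random((4, 4, 3))
out = mask_blend(top, bottom, np.ones((4, 4)))   # equals `top` everywhere
```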

detect color space with openCV

How can I determine the color space of my image with OpenCV?
I would like to be sure it is RGB before converting it to another color space using the cvCvtColor() function.
Thanks
Unfortunately, OpenCV doesn't provide any sort of indication as to the color space in the IplImage structure, so if you blindly pick up an IplImage from somewhere there is just no way to know how it was encoded. Furthermore, no algorithm can definitively tell you whether an image should be interpreted as HSV or RGB; it's all just a bunch of bytes to the machine. I recommend you wrap your IplImages in another struct (or even a C++ class with templates!) to help you keep track of this information. If you're really desperate and you're dealing only with a certain type of images (outdoor scenes, offices, faces, etc.), you could try computing some statistics on your images (e.g., build histogram statistics for natural RGB images and some for natural HSV images), and then try to classify your totally unknown image by comparing which color space it is closer to.
txandi makes an interesting point. OpenCV has a BGR colorspace which is used by default. This is similar to the RGB colorspace except that the B and R channels are physically switched in the image. If the physical channel ordering is important to you, you will need to convert your image with this function: cvCvtColor(defaultBGR, imageRGB, CV_BGR2RGB).
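
In the modern Python API the same conversion is a one-liner; a small sketch, with the file name as a placeholder:

```python
import cv2

# cv2.imread always returns pixels in BGR channel order, regardless of
# how the file stores them, so a conversion to RGB is explicit and cheap.
bgr = cv2.imread("photo.jpg")                 # hypothetical file name
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)    # modern spelling of CV_BGR2RGB
```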
As rcv said, there is no method to programmatically detect the color space by inspecting the three color channels, unless you have a priori knowledge of the image content (e.g., there is a marker in the image whose color is known). If you will be accepting images from unknown sources, you must allow the user to specify the color space of their image. A good default would be to assume RGB.
If you modify any of the pixel colors before display, and you are using a non-OpenCV viewer, you should probably use cvCvtColor(src,dst,CV_BGR2RGB) after you have finished running all of your color filters. If you are using OpenCV for the viewer or will be saving the images out to file, you should make sure they are in BGR color space.
The IplImage struct has a field named colorModel consisting of 4 chars. Unfortunately, OpenCV ignores this field. But you can use this field to keep track of different color models.
I basically split the channels and display each one to figure out the color space of the image I'm using. It may not be the best way, but it works for me.
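
A quick sketch of that inspection trick with the modern Python API; the file name is a placeholder:

```python
import cv2

img = cv2.imread("unknown.png")        # hypothetical file name
b, g, r = cv2.split(img)               # channel order is BGR for cv2.imread
for name, chan in (("channel 0", b), ("channel 1", g), ("channel 2", r)):
    cv2.imshow(name, chan)             # eyeball which channel holds what
cv2.waitKey(0)
cv2.destroyAllWindows()
```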
For a detailed explanation, you can refer to the link below.
https://dryrungarage.wordpress.com/2018/03/11/image-processing-basics/
