GPUImage blend two images with mask image

I'm looking to blend two images together using a third, black-and-white masking image.
This is just like having two layers in Photoshop and adding a mask to the top one: where the mask is white you see the top layer, and where it is black the top layer becomes transparent so the lower layer shows through. Do I need to create a filter with three inputs, or can I attach the black-and-white image to one of my two images as an alpha channel?
Thanks.
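Whichever route you take in GPUImage, the blend itself is per-pixel linear interpolation between the two layers, weighted by the mask. Here is a minimal NumPy/OpenCV sketch of that math (the file names are hypothetical, and all three images are assumed to be the same size); a fragment shader would compute the same expression per pixel:

import cv2
import numpy as np

# Hypothetical file names; all three images are assumed to be the same size.
top = cv2.imread("top.png").astype(np.float32)
bottom = cv2.imread("bottom.png").astype(np.float32)
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
mask = mask[:, :, None]  # broadcast the single mask channel over BGR

# White in the mask shows the top layer, black shows the bottom layer.
result = top * mask + bottom * (1.0 - mask)
cv2.imwrite("blended.png", result.astype(np.uint8))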

Related

Python: Contouring around a rectangular object in an image to obtain the corner points of the rectangle

I have an image which contains a rectangular object, and I want to find its four corners so that I can calculate the object's angle of inclination and rotate the image based on that angle. I wanted to know if there are ways to identify the four corners of the rectangular object so that I can warp the image using the calculated angle.
I have tried some image processing steps: converting to grayscale, reducing noise with a Gaussian filter, then detecting edges with an edge-detection filter, followed by thresholding and finding the contour.
The problem is that the contours found are not consistent, and the approach does not perform well on different images from my dataset. The background is also not constant; it varies from image to image.
Try cv.findContours() on the binarized image, with the white object on a black background. Then run either cv.boundingRect() or cv.minAreaRect() on the contour.
See the tutorial here: https://docs.opencv.org/3.4/dd/d49/tutorial_py_contour_features.html
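A minimal sketch of that suggestion (the file name is hypothetical, and Otsu thresholding is assumed to separate the object from the background):

import cv2
import numpy as np

# Hypothetical input; assumes the object ends up white on black after thresholding.
img = cv2.imread("rect_object.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# findContours returns two values on OpenCV 4.x (three on 3.x).
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)

# minAreaRect gives centre, size, and rotation angle of the tightest box;
# boxPoints converts that into the four corner coordinates.
rect = cv2.minAreaRect(largest)
corners = cv2.boxPoints(rect)
center, size, angle = rect
print("corners:", corners, "angle:", angle)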

Crop images with different black margins

I have an image dataset, and before feeding it to a deep learning algorithm I need to crop the images to the same size. The images have black margins of varying size, as the image below demonstrates.
Any suggestions for a way to crop images with different margin sizes?
Since your border color is black (nearly perfect black) and will be the same in all the images, I would suggest applying a binary threshold that makes everything white (255) except the black region. Some of the interior image regions may be affected too, but that's not a problem.
Now find contours in the image; the second-largest contour will be your region. Calculate the rectangular bounding box for this contour and crop the same region from the original image.
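A minimal sketch of this answer (the file name and threshold value are hypothetical):

import cv2
import numpy as np

# Hypothetical file name; low threshold keeps only the near-black border dark.
img = cv2.imread("scan.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 10, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
by_area = sorted(contours, key=cv2.contourArea, reverse=True)
# Per the answer, the largest contour can be the whole frame, so take the
# second largest when there is more than one.
target = by_area[1] if len(by_area) > 1 else by_area[0]
x, y, w, h = cv2.boundingRect(target)
cropped = img[y:y + h, x:x + w]
cv2.imwrite("cropped.png", cropped)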
First, do a thresholding with a low intensity threshold (if your background is definitely completely black, you could even threshold at an intensity of 1) to separate all non-border components.
Next, use connected-component labeling to find all isolated foreground components. The central scan image you are interested in should then always be the biggest component. Crop out this biggest component to remove the border together with any non-black artifacts (labels, letters, etc.). You should be left with only the borderless scan.
You can find all the algorithms needed in any basic image processing library. I'd personally recommend looking into OpenCV; it also includes Python bindings.
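A minimal sketch of this approach with OpenCV's connected-component labeling (the file name is hypothetical):

import cv2
import numpy as np

# Hypothetical file name; thresholding at 1 assumes a truly black border.
img = cv2.imread("scan.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)

# Label isolated foreground components and keep the largest one.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
# Row 0 of stats is the background; columns are x, y, width, height, area.
largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
x, y, w, h = stats[largest, :4]
cropped = img[y:y + h, x:x + w]
cv2.imwrite("cropped.png", cropped)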
One way to do this could be as follows (a sketch follows the list):
1. Flood-fill the image with red, starting at the top-left corner and allowing around 5% divergence from the black pixel there.
2. Make everything that is not red into white, because the next step looks for white pixels.
3. Use findContours() (which looks for white objects), choose the largest white contour as your image, and crop to that.
You could make things more robust by considering some of the following ideas:
- Normalise a copy of the image to the full range of black to white first, in case you get any images with near-black borders.
- Check that more than one, or all, corner pixels are actually black, in case you get images without a border.
- Flag up issues if your cropped image appears to be less than, say, 70% of the total image area.
- Apply a morphological opening with a 9x9 square structuring element as the penultimate step, to tidy things up before findContours().
Here is the solution code for this question:
import warnings
warnings.filterwarnings('always')
warnings.filterwarnings('ignore')
import cv2
import numpy as np
import os

path = "data/benign/"
img_resized_dir = "data/pre-processed/benign/"
dirs = os.listdir(path)

def thyroid_scale():
    for item in dirs:
        if os.path.isfile(path + item):
            img = cv2.imread(path + item)
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            ret, thresh = cv2.threshold(gray, 0, 255, 0)
            # OpenCV 3.x returns three values here; drop `im2` on OpenCV 4.x.
            im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
            # Keep the largest contour and crop its bounding box with fixed offsets.
            areas = [cv2.contourArea(c) for c in contours]
            max_index = np.argmax(areas)
            cnt = contours[max_index]
            x, y, w, h = cv2.boundingRect(cnt)
            crop_img = img[y + 35:y + h - 5, x + 25:x + w - 10]
            resize_img = cv2.resize(crop_img, (300, 250), interpolation=cv2.INTER_CUBIC)
            cv2.imwrite(img_resized_dir + item, resize_img)

thyroid_scale()

What should be the depth of a convolution layer for grayscale and black and white images?

I'm going through the lecture notes here:
http://cs231n.github.io/convolutional-networks/
In the first convolution layer, we typically look at filters such as 5x5x3, where 3 refers to the RGB color channels and 5x5 is the height and width of the filter.
However, if I'm looking at grayscale images, would it be 5x5x1, where the last dimension is a single channel with values from 0 (perfectly black) to 1 (perfectly white)? Similarly, if it were even simpler with pure black and white images, would it be 5x5x1 where each value is always 0 or 1?
Yes, you're right. In the case of grayscale or black-and-white images, you have only one feature map (channel) in the input layer.
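A minimal PyTorch sketch of the shapes involved (the layer sizes and batch dimensions are arbitrary):

import torch
import torch.nn as nn

# Input depth is 3 for RGB and 1 for grayscale or binary images;
# each filter is 5x5x3 or 5x5x1 respectively.
rgb_conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5)
gray_conv = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5)

rgb_batch = torch.randn(8, 3, 32, 32)   # batch of 32x32 RGB images
gray_batch = torch.randn(8, 1, 32, 32)  # batch of 32x32 grayscale images
print(rgb_conv(rgb_batch).shape)    # torch.Size([8, 16, 28, 28])
print(gray_conv(gray_batch).shape)  # torch.Size([8, 16, 28, 28])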

Crossfade images within SVG

What is the best approach for crossfading two horizontally aligned images in SVG?
This is my approach so far, but I'm having two problems:
Within my mask, how can I apply the two gradients coming from opposite sides? I've only been able to get one of the two to appear.
When the mask is applied to the first image, why is the second image no longer visible? If I am using objectBoundingBox, shouldn't the mask just be contained to that element?

Alpha Blending to remove seam in an image

I have stitched two images, but in the final image there is a visible seam. I am trying to use alpha blending to remove that seam. I know alpha blending is applied using the cvAddWeighted() function, but its parameters are two source images, alpha, beta, gamma, and a destination. I am taking gamma = 0, alpha = 0.6, beta = 0.4. What should my two input source images and the destination image be? The last part of my code is this:
IplImage* WarpImg = cvCreateImage(
    cvSize(T1Img->width * 2, T1Img->height * 2), T1Img->depth, T1Img->nChannels);
cvWarpPerspective(T1Img, WarpImg, &mxH);
cvSetImageROI(WarpImg, cvRect(0, 0, T2Img->width, T2Img->height));
cvCopy(T2Img, WarpImg);
cvResetImageROI(WarpImg);
cvNamedWindow("WarpImg Img", 1);
cvShowImage("WarpImg Img", WarpImg);
cvSaveImage("save.jpg", WarpImg);
My final image is:
I have to admit, I don't think alpha blending is your answer. The seam is there due to the difference in lighting / exposure. Alpha blending is a way of essentially having one image visible through another by weighted-averaging the two images' colors together. Your right and your left images are backed by black, so if you simply alpha blend, you are essentially going to weight your images against a black background. The resulting effect will simply be a darkening of both images.
Two other potential methods: one is to look at the average color of both images at the seam and adjust one up and the other down, each by 50% of the difference in brightness (one goes up and the other down, so the overall brightness jump on either side is only 50% of the difference).
The other is a more complex histogram technique, where you widen or shrink the histogram of one side's image to match the other, align them, and reassign your color (in this case grayscale) values via the new histograms.
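A minimal sketch of the 50/50 brightness-matching idea, assuming grayscale images that meet at a vertical seam (file names and strip width are hypothetical):

import cv2
import numpy as np

# Hypothetical inputs: left and right halves that meet at a vertical seam.
left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Mean intensity of a thin strip on each side of the seam.
strip = 20
diff = left[:, -strip:].mean() - right[:, :strip].mean()

# Move each image halfway toward the other, so neither side jumps by
# more than 50% of the original brightness difference.
left = np.clip(left - diff / 2, 0, 255).astype(np.uint8)
right = np.clip(right + diff / 2, 0, 255).astype(np.uint8)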
Pyramid/multiband blending should do a good enough job for your scenario. Try Enblend: http://enblend.sourceforge.net
