Identify the difference between two images and highlight the difference - python-3.x

I have images of a curved rectangular object. There is a reference image against which the new images have to be compared and the differences identified.
Reference Image:
New Images:
I want to identify the difference between these images and highlight the difference.
Key Pointers:
I cannot do a pixel-by-pixel comparison, as the objects are not in exactly the same pixel positions
An approximate match to the shape of the reference image is also acceptable
I have tried identifying the contours, but as the lines are continuous it is difficult to identify only the defective part (see the sketch below)
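One possible approach, shown here only as a hedged sketch (filenames, feature counts, and the threshold are placeholders, not from the question): align the new image to the reference first, for example with ORB feature matching and a homography, and only then take a per-pixel difference and outline the regions that differ.

```python
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # hypothetical filenames
new = cv2.imread("new.png", cv2.IMREAD_GRAYSCALE)

# Match features between the two images and estimate a homography
# (needs at least 4 good matches to succeed).
orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(new, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the new image into the reference frame, then diff and highlight.
aligned = cv2.warpPerspective(new, H, (ref.shape[1], ref.shape[0]))
diff = cv2.absdiff(ref, aligned)
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
out = cv2.cvtColor(ref, cv2.COLOR_GRAY2BGR)
cv2.drawContours(out, contours, -1, (0, 0, 255), 2)
cv2.imwrite("differences.png", out)
```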

Related

How do I obtain the left, right, upper and lower shifts in coordinates between a cropped image and its original in Python?

I have the following original image:
and the cropped image:
I would like to obtain the left (a), right (c), upper (b) and lower (d) shifts from the original image that produce the crop:
As of now, I can only think of matching the pixel array values (row- and column-wise) and then subtracting the overlapping pixel array coordinates from the original image's coordinates to get the shifts. However, this approach seems computationally expensive, and a search on all 4 sides would have to be undertaken. Also, if it helps, I do not have the transformations that led to the cropped image, and I'm assuming that there are no pixel value changes between the original and cropped image in the regions of overlap.
Is there a more efficient approach for this? I'm not sure if there are existing built-in functions in OpenCV or other imaging libraries that can do this, so some insights on this will be deeply appreciated.
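One hedged sketch of a built-in route (not stated in the question, and it assumes the crop is an exact, unmodified sub-image): cv2.matchTemplate can locate the crop inside the original directly, from which the four shifts follow.

```python
import cv2

original = cv2.imread("original.png")   # hypothetical filenames
crop = cv2.imread("crop.png")

# TM_SQDIFF: the best match is the location of the minimum response.
result = cv2.matchTemplate(original, crop, cv2.TM_SQDIFF)
_, _, min_loc, _ = cv2.minMaxLoc(result)
a, b = min_loc                           # left shift (a) and upper shift (b)

H, W = original.shape[:2]
h, w = crop.shape[:2]
c = W - w - a                            # right shift
d = H - h - b                            # lower shift
print(a, b, c, d)
```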

Rectangular connected component extraction in python

There are multiple rectangular areas in a 2D numpy array. All the rectangular areas have value 1 and the other areas are zero. I want to extract a minimum number of rectangular connected components from the numpy array. These connected components can touch each other in any direction.
I tried extracting connected components using the label function from scipy.ndimage.measurements, but it assigns the same label to rectangles that touch each other.
I also tried morphological opening, but I do not want to lose the original shape of the rectangles.
The image shows the expected output for a better understanding of the problem.
Is there a better way to extract a minimum number of perfectly rectangular regions?
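One possible starting point, sketched here under assumptions not in the question (greedy, so not guaranteed to be minimal, but every extracted piece stays perfectly rectangular): repeatedly take the top-left-most remaining 1-pixel, grow the largest all-ones rectangle anchored there, record it, and clear it.

```python
import numpy as np

def carve_rectangles(mask):
    """Greedily decompose a 0/1 array into axis-aligned all-ones rectangles."""
    mask = mask.astype(bool).copy()
    rects = []                                   # (row, col, height, width)
    while mask.any():
        r, c = np.argwhere(mask)[0]              # top-left-most remaining pixel
        w = 1
        while c + w < mask.shape[1] and mask[r, c + w]:
            w += 1                               # extend to the right
        h = 1
        while r + h < mask.shape[0] and mask[r + h, c:c + w].all():
            h += 1                               # extend down while rows stay full
        rects.append((r, c, h, w))
        mask[r:r + h, c:c + w] = False           # remove the carved rectangle
    return rects

# Example usage: returns a list of (row, col, height, width) tuples.
a = np.zeros((6, 8), dtype=int)
a[1:4, 1:5] = 1
a[3:6, 4:8] = 1
print(carve_rectangles(a))
```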

how to choose a range for filtering points by RGB color?

I have an image and I am picking colors by RGB (data sampling). I select N points from a specific region in the image which has the "same" color. By "same" I mean, that part of the image belongs to an object, (let's say a yellow object). Each picked point in the RGB case has three values [R,G,B]. For example: [120,150,225]. And the maximum and minimum for each field are 255 and 0 respectively.
Let's assume that I picked N points from the region of the object in the image. The points obviously have different RGB values but from the same family (a gradient of the specific color).
Question:
I want to find a range for each RGB field so that when I apply a color filter to the image, the pixels belonging to that specific object remain (are considered inliers). Is it correct to find the maximum and minimum of the sampled points and use them as the filter range? For example, if the min and max of the R field are 120 and 170 respectively, can that be used as the range of values to keep?
In my opinion this idea is not correct, because when choosing the max and min of a set of sampled data, some points will lie outside that range, and there will also be points on the object that do not fall within this range.
What is a better solution to include more points as inliers?
If anybody needs to see collected data samples, please let me know.
I am not sure I fully grasp what you are asking for, but in my opinion filtering in RGB is not the way to go. You should use a different color space than RGB if you want to compare pixels of similar color. RGB is good for representing colors on a screen, but you actually want to look at the hue, saturation and intensity (lightness, or luminance) for analysing visible similarities in colors.
For example, you should convert your pixels to HSI or HSL color space first, then compare the different parameters you get. At that point, it is more natural to compare the resulting hue in a hue range, saturation in a saturation range, and so on.
Go here for further information on how to convert to and from RGB.
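A minimal sketch of that suggestion, assuming OpenCV and placeholder bounds (the actual range should be derived from the sampled points): convert to HSV and filter on a hue/saturation/value range instead of raw RGB.

```python
import cv2
import numpy as np

img = cv2.imread("object.png")                 # hypothetical filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Example range for a yellow-ish object; the bounds are placeholders.
lower = np.array([20, 80, 80])
upper = np.array([35, 255, 255])

mask = cv2.inRange(hsv, lower, upper)          # 255 where the pixel is in range
result = cv2.bitwise_and(img, img, mask=mask)  # keep only the inlier pixels
```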
What happens here is that you are implicitly trying to reinvent either color indexing or histogram back-projection. You call it a color filter, but it is better to focus on probabilities than on colors and color spaces. Colors are of course not super reliable and change with lighting (though hue tends to stay the same under non-colored illumination), which is why some color spaces are better than others. You can handle this separately, but it seems that you are more interested in the principles of the "filtering operation" that will segment the foreground object from the background. Hopefully.
In short, histogram back-projection works by first creating a histogram of R, G, B within the object area and then back-projecting it onto the image in the following way: for each pixel in the image, find its bin in the histogram, calculate its relative weight (probability) given the overall sum of the bins, and write this probability into the image. This way each pixel carries the probability that it belongs to the object. You can improve it by dividing by the probability of the background if you want to model the background too.
The result will be messy but will somewhat resemble the object segment plus some background noise. It has to be cleaned up and then reconnected into an object using separate methods such as connected components, GrabCut, morphological operations, blur, etc.
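A hedged sketch of histogram back-projection with OpenCV (the filename and the object region are placeholders; a hue/saturation histogram is used here instead of raw RGB because it is less lighting-dependent):

```python
import cv2
import numpy as np

img = cv2.imread("scene.png")
roi = img[50:150, 80:200]                      # hypothetical object region

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

# Histogram over hue and saturation inside the object region.
hist = cv2.calcHist([hsv_roi], [0, 1], None, [30, 32], [0, 180, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Back-project: each pixel gets a (scaled) probability of belonging to the object.
prob = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)

# Clean up the noisy result before extracting the segment.
disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
prob = cv2.filter2D(prob, -1, disc)
_, mask = cv2.threshold(prob, 50, 255, cv2.THRESH_BINARY)
segment = cv2.bitwise_and(img, img, mask=mask)
```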

Can images be combined by weighted averaging to obtain one image in RGB color space?

Hi all,
I have a few images of one object taken from different perspectives, so some part of the object may be in shadow. I hope to stitch the images to get one big image, but the color in the resulting image doesn't appear correct. Maybe I should average the images in HSV color space instead.
Can colors be averaged in the RGB color space? In my case, some parts may be in shadow; can the images still be averaged in RGB?
If you are familiar with the color theory, please give me some information. Thanks.
Regards
Jogging
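For reference, a minimal sketch of what "weighted averaging in RGB" means in practice (filenames and weights are placeholders, and the two views are assumed to be the same size and already registered); the same operation could equally be applied after converting to another color space such as HSV.

```python
import cv2

img_a = cv2.imread("view_a.png").astype("float32")   # hypothetical filenames
img_b = cv2.imread("view_b.png").astype("float32")

# Per-pixel weighted average of the two overlapping views, channel by channel.
blended = cv2.addWeighted(img_a, 0.5, img_b, 0.5, 0.0)
cv2.imwrite("blended.png", blended.astype("uint8"))
```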

How does dribbble's color search work?

How does dribbble's color search work? It's not like other search-by-color features. What I can't figure out is how they can have search parameters for color variance and color minimum without storing a row for every individual color in an image (which I suppose is possible).
Colors are usually extracted from the image using a histogram that computes the density of the colors. Once you have the top 5/10/15 colors from the image, performing a search amounts to matching the given color against these extracted colors.
To match a given color against another, various techniques are available, such as minimizing the Euclidean distance between the two colors. More on such techniques can be read at http://en.wikipedia.org/wiki/Color_quantization
A similar strategy is discussed in the blog entry http://mattmueller.me/blog/creating-piximilar-image-search-by-color
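A hedged sketch of that idea (not dribbble's actual implementation; the filename and query color are placeholders): extract a few dominant colors with k-means, then score an image by the Euclidean distance from the query color to its nearest dominant color.

```python
import cv2
import numpy as np

def dominant_colors(path, k=5):
    """Cluster the pixels of an image into k dominant colors (BGR centers)."""
    img = cv2.imread(path)
    pixels = img.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(pixels, k, None, criteria, 3,
                               cv2.KMEANS_RANDOM_CENTERS)
    return centers

def distance_to_query(path, query_bgr):
    """Distance from the query color to the image's closest dominant color."""
    query = np.array(query_bgr, dtype=np.float32)
    centers = dominant_colors(path)
    return float(np.linalg.norm(centers - query, axis=1).min())

# Smaller distance = closer match to the query color (hypothetical filename).
print(distance_to_query("shot.png", (40, 200, 240)))  # a yellow-ish query in BGR
```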
