Calculate a result image from two images - colors

I work with GLSL and am trying to produce the result of displaying one image on top of another. For example, take two pixels, bg(146,108,147,255) and image(252,0,255,46). I need to implement this calculation in the shader, and it should give me a result pixel similar to what I would get by drawing one image after the other, with image above bg. I have tried to find a formula for this, but the formulas I found do not reproduce the results of paint.net, GPU rendering, etc.
As far as I know, there should be a standard formula for this calculation.
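For reference, the standard formula here is the Porter-Duff "over" operator: out_a = a_src + a_dst·(1 − a_src) and out_rgb = (rgb_src·a_src + rgb_dst·a_dst·(1 − a_src)) / out_a, which over an opaque background collapses to out = src·a_src + dst·(1 − a_src). As far as I know, this is what paint.net's Normal blend mode and GPU blending with GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA compute, up to rounding. A minimal Python sketch with the two pixels from the question:

```python
def over(src, dst):
    """Composite a non-premultiplied RGBA pixel src over dst (channels 0-255)."""
    sa, da = src[3] / 255.0, dst[3] / 255.0
    oa = sa + da * (1.0 - sa)                       # resulting alpha
    rgb = tuple(
        round((s * sa + d * da * (1.0 - sa)) / oa)  # weighted channel mix
        for s, d in zip(src[:3], dst[:3])
    )
    return rgb + (round(oa * 255),)

# The two pixels from the question, image drawn over bg:
print(over((252, 0, 255, 46), (146, 108, 147, 255)))
# -> (165, 89, 166, 255); renderers that truncate instead of rounding
#    may differ by one per channel
```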

Related

Identify the difference between two images and highlight the difference

I have images of curved rectangular objects. There is a reference object against which the images have to be compared, and the differences need to be identified.
Reference Image:
New Images:
I want to identify the difference between these images and highlight the difference.
Key Pointers:
I cannot do a pixel-by-pixel comparison, as the objects are not located at exactly the same pixels
An approximate match to the shape of the reference image is also acceptable
I have tried identifying the contours, but as the lines are continuous it is difficult to identify only the defective part
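One common approach (not from the question; the file names and thresholds below are placeholders) is to first register the new image onto the reference with feature matching, so the comparison no longer needs to be pixel-exact, and then take a difference:

```python
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
new = cv2.imread("new.png", cv2.IMREAD_GRAYSCALE)

# 1. Align: match ORB features and estimate a homography
orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(ref, None)
k2, d2 = orb.detectAndCompute(new, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
matches = sorted(matches, key=lambda m: m.distance)[:200]
src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(new, H, (ref.shape[1], ref.shape[0]))

# 2. Difference of the aligned images, thresholded and cleaned up; the
#    contours of the remaining blobs highlight only the defective regions
diff = cv2.absdiff(ref, aligned)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # 40: placeholder
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
out = cv2.cvtColor(ref, cv2.COLOR_GRAY2BGR)
cv2.drawContours(out, contours, -1, (0, 0, 255), 2)        # highlight in red
```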

How do I obtain the left, right, upper and lower shifts in coordinates between a cropped image and its original in Python?

I have the following original image:
and the cropped image:
I would like to obtain the left (a), right (c), upper (b) and lower (d) shifts from the original image that produce the crop:
As of now, I can only think of matching the pixel array values (row- and column-wise) and then subtracting the overlapping pixel arrays' coordinates from the original image's coordinates to get the shifts. However, this approach seems computationally expensive, and a search on 4 sides would have to be undertaken. Also, if it helps, I do not have the transformations that led to the cropped image, and I'm assuming that there are no pixel value changes between the original and cropped image in regions of overlap.
Is there a more efficient approach for this? I'm not sure if there are existing built-in functions in OpenCV or other imaging libraries that can do this, so some insights on this will be deeply appreciated.
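Assuming the crop really is pixel-identical to some region of the original, one efficient built-in option is OpenCV's matchTemplate, which does that search in a single call (file names are placeholders):

```python
import cv2

original = cv2.imread("original.png")   # placeholder file names
cropped = cv2.imread("cropped.png")

# Slide the crop over the original; TM_SQDIFF is 0 at a pixel-exact match
result = cv2.matchTemplate(original, cropped, cv2.TM_SQDIFF)
_, _, min_loc, _ = cv2.minMaxLoc(result)
x, y = min_loc                 # top-left corner of the crop in the original

oh, ow = original.shape[:2]
ch, cw = cropped.shape[:2]
a = x                # left shift
b = y                # upper shift
c = ow - (x + cw)    # right shift
d = oh - (y + ch)    # lower shift
print(a, b, c, d)
```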

Getting a specific contour in VTK

I would like to get a specific contour from image data.
My main goal is to remesh a polydata into grid form, so I followed the pipeline below:
convert the polydata to an image using PolyDataToImageData
pass the resulting image through vtkImageDataGeometryFilter
use vtkImplicitPolyDataDistance to compute the distance from the original polydata
copy the distance values into the scalars of the image output from step 2
The result is below:
I then tried to use vtkContourFilter with SetValue(0, 0.0) to get the polydata, and as you can see the result is not entirely correct:
The values of the distance array are at https://pastebin.ubuntu.com/p/2mZsgdrcmX/ and they are never 0, so I think I am using SetValue wrong, but I am also not sure how to get that specific green contour.
Is there any way to get the contour through those green points?
I am not completely sure I understand your pipeline.
In vtkContourFilter, SetValue takes two parameters. The first is the id of the contour (the filter can extract several contours at once; see SetNumberOfContours). The second is the isovalue of the contour.
Here, you set an isovalue of 0.0, which means you want the points at distance 0 from the original data set. Looking at the first image, these seem to be the red points. If you want a contour at the green points, you may want to specify a higher scalar value.
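In Python that would look roughly like the sketch below, where image_data stands for your distance image and the isovalue 2.0 is a placeholder for whatever scalar value the green points actually sit at:

```python
import vtk

# image_data: the vtkImageData carrying your distance values as point scalars
contour = vtk.vtkContourFilter()
contour.SetInputData(image_data)
contour.SetNumberOfContours(1)
# First argument: contour id; second: isovalue. 0.0 asks for points at
# distance 0 (the red points); a positive value gives an offset contour
contour.SetValue(0, 2.0)
contour.Update()
offset_polydata = contour.GetOutput()
```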
PS: If the goal of your pipeline is to have a "larger version" of your shape, you may also have a look at the vtkWarpVector (and give it the normals of your polydata).
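If that route appeals, a rough Python sketch (polydata and the scale factor are placeholders):

```python
import vtk

# polydata: your original vtkPolyData
normals = vtk.vtkPolyDataNormals()
normals.SetInputData(polydata)
normals.Update()

with_normals = normals.GetOutput()
# vtkWarpVector displaces points along the active *vector* array,
# so promote the computed normals to active vectors first
with_normals.GetPointData().SetActiveVectors("Normals")

warp = vtk.vtkWarpVector()
warp.SetInputData(with_normals)
warp.SetScaleFactor(2.0)   # placeholder: how far outward to grow the shape
warp.Update()
larger_polydata = warp.GetOutput()
```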

How to calculate what percentage of a pixel is within the bounds of a shape

I have a 2d grid where pixel centers are at the intersection of two half-grid lines, as shown below.
I also have a shape that is drawn on this grid. In my case the shape is a glyph, and is described by segments. Each segment has a start point, end point and a number of off-curve points. These segments can be quadratic curves or lines. What's important is that I can know the points and functions that make up the outline of the shape.
The rule for deciding which pixels should be turned on is simple: if the center of the pixel falls within the shape outline, turn that pixel on. The following image shows an example of applying this rule.
Now the problem I'm facing has to do with anti-aliasing. What I'd like to do is calculate what percentage of the area of a given pixel falls within the outline. As an example, in the image above, I've drawn a red square around a pixel that is about 15% inside the shape.
The purpose of this would be so that I can then turn that pixel on only by 15% and thus get some cleaner edges for the final raster image.
While I was able to find algorithms for determining if a given point falls within a polygon (ray casting), I wasn't able to find anything about this type of problem.
Can someone point me toward some algorithms to achieve this? Also, let me know if I'm going about this problem in the wrong way!
This sounds like an XY problem.
You are asking for a way to calculate the percentage of pixel coverage, but based on your question, it sounds like what you actually want is to anti-alias a polygon.
If you are working only with single-color 2D shapes (e.g. red, blue, magenta squares, lines, curves), a very simple solution is to create your image and blur the result afterwards.
This will automatically give you a smooth outline and is simple to implement in many languages.
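If you do want the coverage percentage itself, a simple estimate (not from the answer above) is supersampling: test a small grid of sub-pixel points with the point-in-polygon rule you already have, and average the results. A minimal sketch, assuming the curved segments have first been flattened into a polygon:

```python
import numpy as np
from matplotlib.path import Path  # vectorized point-in-polygon tests

def coverage_map(polygon, width, height, factor=4):
    """Estimate, for every pixel in a width x height grid, the fraction of
    its area inside polygon by testing factor*factor sub-pixel centers."""
    path = Path(polygon)
    # Sub-pixel sample centers: (i + 0.5) / factor along each axis
    xs = (np.arange(width * factor) + 0.5) / factor
    ys = (np.arange(height * factor) + 0.5) / factor
    xx, yy = np.meshgrid(xs, ys)
    pts = np.column_stack([xx.ravel(), yy.ravel()])
    inside = path.contains_points(pts).reshape(height * factor, width * factor)
    # Average each factor x factor block: the mean of 0/1 samples is the coverage
    return inside.reshape(height, factor, width, factor).mean(axis=(1, 3))

# A pixel that is ~15% covered comes out near 0.15 here
cov = coverage_map([(1.0, 1.0), (7.0, 2.0), (3.0, 7.0)], 8, 8)
print((cov * 100).round(1))   # percentage of each pixel inside the shape
```

With 16 samples per pixel the estimate is quantized in steps of 1/16; raise factor for smoother edges at a quadratic cost.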

Change pixels color [duplicate]

I have spent more than a week reading about selective color change in an image. It means selecting a color from a color picker, then selecting a part of the image in which I want to change the color, and applying the change from the original color to the color from the color picker.
E.g. if I select a blue color in the color picker and I also select a red part of the image, I should be able to change the red color to blue everywhere in the image.
Another example: if I have an image with red apples and oranges, and I select an apple on the image and a blue color in the color picker, then all apples should change color from red to blue.
I have some ideas, but of course I need something more concrete on how to do this.
Thank you for reading
As a starting point, consider clustering the colors of your image. If you don't know how many clusters you want, you will need a method to decide whether or not to merge two given clusters. For the moment, let us suppose we know that number. For example, given the following image at left, I mapped its colors to 3 clusters, whose mean colors are shown in the middle; representing each cluster by its mean color gives the figure at right.
With the output at right, what you now need is a method to replace colors. Suppose the user clicks a single point somewhere in your image; then you know the positions in the original image that you will need to modify. For the next image, the user (me) clicked on a point contained in the "orange" cluster, then clicked on some blue hue. From that, you build a mask representing the points in the "orange" cluster and work with it. I used a simple Gaussian filter followed by a flat 3x5 dilation. Then you replace the hues in the original image according to the produced mask (after the low-pass filtering, its values are also used as an alpha value for compositing the images).
Not perfect at all, but you could use a better clustering than mine and a much less primitive color-replacement method. I intentionally skipped the details of the clustering method, color space, and so on, because I used only basic k-means on RGB without any pre-processing of the input. So you can consider the results above as a baseline for anything else you try.
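A baseline version of that pipeline, using OpenCV's k-means (the file name and click position are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("input.png")                    # placeholder file name
pixels = img.reshape(-1, 3).astype(np.float32)

# Plain k-means on the raw color values, 3 clusters, no pre-processing
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 5,
                                cv2.KMEANS_RANDOM_CENTERS)
labels = labels.reshape(img.shape[:2])

# Each cluster represented by its mean color (the figure at right)
clustered = centers.astype(np.uint8)[labels]

# Mask of the cluster the user clicked on; (row, col) is a placeholder click
row, col = 120, 200
mask = (labels == labels[row, col]).astype(np.uint8) * 255

# Soften it: a Gaussian filter followed by a flat 3x5 dilation, as above
mask = cv2.GaussianBlur(mask, (9, 9), 0)
mask = cv2.dilate(mask, np.ones((3, 5), np.uint8))
```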
Given the image, a selected color, and a target new color - you can't do much that isn't ugly. You also need a range, some amount of variation in color, so you can say one pixel's color is "close enough" while another is clearly "different".
First step of processing: you create a mask image, which is grayscale, varies from 0.0 to 1.0 (or from zero to some maximum value we'll treat as 1.0), and is the same size as the input image. For each input pixel, test whether its color is sufficiently near the selected color. If it's "the same" or "close enough", put 1.0 in the mask. If it's different, put 0.0. If it's somewhere in between, put an in-between value. Exactly how to do this depends on the details of the image.
This might work best in LAB space, and testing for sameness according to the angle of the A,B coordinates relative to their origin.
Once you have the mask, put it aside. Now color-transform the whole image. This might be best done in HSV space. Don't touch the V channel. Add a constant to H, modulo 360° (or mod 256, if H is stored as bytes), and multiply S by a constant, both chosen so that the HSV coordinates of the selected color are moved to the HSV coordinates of the target color. Convert the transformed H and S, together with the unchanged V, back to RGB.
Finally, use the mask to blend the original image with the color-transformed one. Apply this to each channel - red, green, blue:
output = (1-mask)*original + mask*transformed
If you're doing it all in byte arrays, 0 is 0.0 and 255 is 1.0, and be careful of overflow and signed/unsigned problems.
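A rough sketch of the whole recipe in Python with OpenCV; note that the mask here uses a plain BGR color distance instead of the LAB angle test, and the file name, selected color, target hue, and "close enough" range are all placeholders:

```python
import cv2
import numpy as np

img = cv2.imread("input.png")            # placeholder file name
selected = np.float32([40, 40, 200])     # placeholder selected color (BGR, red-ish)
target_hue = 120                         # placeholder target hue (OpenCV range 0-179)

# Mask: 1.0 where a pixel matches the selected color, 0.0 where it is clearly
# different, in-between near the border ("close enough" range 100 is a placeholder)
dist = np.linalg.norm(img.astype(np.float32) - selected, axis=2)
mask = np.clip(1.0 - dist / 100.0, 0.0, 1.0)

# Color-transform the whole image in HSV: shift H, leave V untouched
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
sel_hue = cv2.cvtColor(selected.reshape(1, 1, 3).astype(np.uint8),
                       cv2.COLOR_BGR2HSV)[0, 0, 0]
hsv[..., 0] = (hsv[..., 0] + (target_hue - float(sel_hue))) % 180
transformed = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

# Blend per channel: output = (1-mask)*original + mask*transformed, done in
# float to avoid the byte overflow problems mentioned above
m = mask[..., np.newaxis]
output = (1.0 - m) * img.astype(np.float32) + m * transformed.astype(np.float32)
cv2.imwrite("output.png", output.astype(np.uint8))
```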
