How to deal with negative values when manipulating CIE xyY or XYZ colors?

I'm writing some image processing code to do some creative color gamut limiting that I want to use for color grading applications. I'm building it to operate in the DaVinci Wide Gamut color space, typically using the DaVinci Intermediate logarithmic encoding, since that is my usual working space in a color-managed workflow. To do this, I want to convert the input to xyY, limit the xy values to fit within my desired gamut, and then convert back to DaVinci Wide Gamut/Intermediate.
These are the steps I'm following:
Convert DaVinci Intermediate RGB to linear RGB
Multiply by DWG to XYZ conversion matrix.
Convert XYZ to xyY.
Modify x and y to fall within gamut.
Convert xyY to XYZ.
Multiply by XYZ to DWG conversion matrix.
Convert linear RGB to DaVinci Intermediate.
This works fine for most cases, and I have also verified that roundtripping without modification of x and y works as intended. The problem occurs for extremely saturated blues. Take the following RGB values in DaVinci Wide Gamut/Intermediate as an example: [0, 0, 1.0]
Let's follow along:
Convert to linear: [0, 0, 100]
Convert to XYZ: [10.11,-14.78, 132.59]
Convert to xyY: [0.08,-0.12,-14.78]
Modifying x and y to fit in my gamut: [0.18, 0.07, -14.78]
Convert back to XYZ: [-35.64,-14.78,-149.08]
Convert back to DWG linear: [-27.99,-27.99,-117.44]
Convert back to DaVinci Intermediate: [-292.34, -292.34, -1226.52]
As you can see, the result is nonsensical, but only for these extreme blue values. To me, it seems pretty clear that this is caused by the blue primary in DaVinci Wide Gamut which has a negative y coordinate ([0.0790, -0.1155]). This causes negative luminance (Y = -14.78) and when messing with the xy values without dealing with the luminance at the same time, things go haywire.
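Here is a minimal numpy repro of just the xyY leg, starting from the XYZ triple in the walkthrough above; it reproduces the sign flip (the exact figures differ slightly from my rounded numbers):

```python
import numpy as np

def XYZ_to_xyY(XYZ):
    X, Y, Z = XYZ
    s = X + Y + Z
    return np.array([X / s, Y / s, Y])

def xyY_to_XYZ(xyY):
    x, y, Y = xyY
    return np.array([x * Y / y, Y, (1 - x - y) * Y / y])

# XYZ of pure DWG blue (linear [0, 0, 100]), taken from the walkthrough above.
XYZ = np.array([10.11, -14.78, 132.59])

xyY = XYZ_to_xyY(XYZ)
print(xyY)                      # ~[0.079, -0.116, -14.78]: y and Y are both negative

# Clamping xy into the target gamut while keeping the negative Y
# flips the signs on the way back:
xyY_clamped = np.array([0.18, 0.07, xyY[2]])
print(xyY_to_XYZ(xyY_clamped))  # all three components come out negative
```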
How do I deal with negative values in xyY and XYZ when doing gamut limiting in this way?

Related

Equivalent gray value of a color given the LAB values

I have an RGB image and I converted it to the Lab colorspace. Now I want to convert the image in Lab space to a grayscale one. I know that L is not the same as luminance.
So, any idea how to get the equivalent gray value of a specific color in lab space?
I'm looking for a formula or algorithm to determine the equivalent gray value of a color given the LAB values.
The conversion from luminance Y to Lightness L* is defined by the CIE 1976 lightness function. Put another way, L* transforms linear values into non-linear values that are perceptually uniform for the human visual system (HVS). With that in mind, your question now depends on what kind of gray you are looking for: if perceptually uniform (and thus non-linear), the Lightness channel from CIE L*a*b* is exactly the CIE 1976 quantity and is appropriate. If you need something linear, you would have to convert back to CIE XYZ tristimulus values and use the Y channel; a sketch of both directions follows.
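A minimal numpy sketch of the CIE 1976 lightness function and its inverse, assuming Y is already normalized to the white point:

```python
import numpy as np

EPS = (6 / 29) ** 3      # ~0.008856, threshold of the linear toe
KAPPA = (29 / 3) ** 3    # ~903.3, slope of the linear toe

def Y_to_Lstar(Y):
    """CIE 1976 lightness: relative luminance Y in [0, 1] -> L* in [0, 100]."""
    Y = np.asarray(Y, dtype=float)
    return np.where(Y > EPS, 116.0 * np.cbrt(Y) - 16.0, KAPPA * Y)

def Lstar_to_Y(Lstar):
    """Inverse: L* in [0, 100] -> relative luminance Y in [0, 1]."""
    Lstar = np.asarray(Lstar, dtype=float)
    return np.where(Lstar > KAPPA * EPS, ((Lstar + 16.0) / 116.0) ** 3, Lstar / KAPPA)

print(Y_to_Lstar(0.18))   # ~49.5: 18% gray sits near mid lightness
```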

Color additivity in the RGB space

I have a color-calibrated camera and projector pair. I'm trying to come up with an algorithm based on color additivity, and I'm wondering whether additivity holds in RGB space. For example, will adding the colors with RGB values (30, 30, 30) and (60, 60, 60) produce a color close to (90, 90, 90)? I'm observing that this isn't the case; it produces a color like (72, 72, 72). I'm wondering whether this is due to some system error, or whether I have to go into a different color space like YUV or Lab, or whether I'm misunderstanding color additivity and the additive property does not apply to the addition of separate color components.
EDIT: I'm talking in decimal values.
Usually, RGB integer values in the [0-255] domain are 8-bit values with the working RGB colourspace OECF (transfer function) applied, and are thus non-linear. If you want to perform arithmetic operations on those values, you ideally need to apply the inverse OECF prior to the operations.
This PDF describes the sRGB colourspace OECF for example: http://www.color.org/srgb.pdf
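For example, a quick numpy check under the sRGB assumption, using the transfer function from the linked spec:

```python
import numpy as np

def srgb_decode(v):
    """8-bit sRGB code values -> linear light in [0, 1]."""
    v = np.asarray(v, dtype=float) / 255.0
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(lin):
    """Linear light in [0, 1] -> 8-bit sRGB code values."""
    lin = np.clip(np.asarray(lin, dtype=float), 0.0, 1.0)
    v = np.where(lin <= 0.0031308, 12.92 * lin, 1.055 * lin ** (1 / 2.4) - 0.055)
    return np.round(v * 255.0)

# Additivity holds in linear light, not in encoded code values:
mixed = srgb_encode(srgb_decode([30, 30, 30]) + srgb_decode([60, 60, 60]))
print(mixed)  # ~[67, 67, 67]: far from 90 and in the ballpark of the observed 72
```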

how to choose a range for filtering points by RGB color?

I have an image and I am picking colors by RGB (data sampling). I select N points from a specific region in the image which has the "same" color. By "same" I mean, that part of the image belongs to an object, (let's say a yellow object). Each picked point in the RGB case has three values [R,G,B]. For example: [120,150,225]. And the maximum and minimum for each field are 255 and 0 respectively.
Let's assume that I picked N points from the region of the object in the image. The points obviously have different RGB values but from the same family (a gradient of the specific color).
Question:
I want to find a range for each RGB field such that when I apply a color filter to the image, the pixels belonging to that specific object remain (are considered inliers). Is it correct to take the minimum and maximum of the sampled points and use them as the filter range? For example, if the min and max of the R field are 120 and 170 respectively, can [120, 170] be used as the range to keep?
In my opinion, this idea is not sound: the min/max of a set of sampled points is only as good as the sample, so object pixels that were not sampled can fall outside that range and be wrongly filtered out.
What is a better solution to include more points as inliers?
If anybody needs to see collected data samples, please let me know.
I am not sure I fully grasp what you are asking for, but in my opinion filtering in RGB is not the way to go. You should use a different color space than RGB if you want to compare pixels of similar color. RGB is good for representing colors on a screen, but you actually want to look at the hue, saturation and intensity (lightness, or luminance) for analysing visible similarities in colors.
For example, you should convert your pixels to HSI or HSL color space first, then compare the different parameters you get. At that point, it is more natural to compare the resulting hue in a hue range, saturation in a saturation range, and so on.
Go here for further information on how to convert to and from RGB.
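For example, with Python's standard library (note that colorsys uses the HLS component order and expects values in [0, 1]):

```python
import colorsys

def rgb_to_hsl(r, g, b):
    """8-bit RGB -> (hue, saturation, lightness), each in [0, 1]."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return h, s, l

# Two yellows that differ mostly in brightness land on nearly the same hue,
# so a narrow hue range catches both where an RGB box filter would not:
print(rgb_to_hsl(200, 180, 40))   # hue ~0.146
print(rgb_to_hsl(120, 105, 20))   # hue ~0.142
```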
What happens here is that you are implicitly trying to reinvent either color indexing or histogram back-projection. You call it a color filter, but it is better to focus on probabilities than on colors and color spaces. Colors are of course not super reliable and change with lighting (though hue tends to stay the same under non-colored illumination), which is why some color spaces are better than others. You can handle that separately, but it seems you are more interested in the principle of the "filtering operation" that will segment the foreground object from the background. Hopefully.
In short, histogram back-projection works by first building a histogram of R, G, B within the object area and then back-projecting it into the image as follows: for each pixel in the image, find its bin in the histogram, calculate its relative weight (probability) given the overall sum of the bins, and write that probability into the image. This way, each pixel carries the probability that it belongs to the object. You can improve this by also dividing by the probability of the background, if you want to model the background too.
The result will be messy but will somewhat resemble the object segment plus some background noise. It has to be cleaned up and then reconnected into an object using separate methods such as connected components, GrabCut, morphological operations, blur, etc.; see the sketch below.
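A minimal OpenCV sketch of the idea (file names are placeholders; the description above uses R, G, B histograms, but Hue/Saturation is the more common choice for lighting robustness):

```python
import cv2
import numpy as np

image = cv2.imread("scene.png")              # hypothetical input image
roi = cv2.imread("object_patch.png")         # a crop containing only the object

hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

# 2-D histogram over H (0-180 in OpenCV) and S (0-256), normalized so the
# back-projection behaves like a per-pixel probability map.
hist = cv2.calcHist([hsv_roi], [0, 1], None, [30, 32], [0, 180, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Probability that each pixel belongs to the object, then blur and threshold
# as a first cleanup pass.
prob = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], scale=1)
prob = cv2.GaussianBlur(prob, (5, 5), 0)
_, mask = cv2.threshold(prob, 50, 255, cv2.THRESH_BINARY)
```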

Color that is most contrast to a given set of colors?

I have a list of several different "random" color values (no fewer than 1 and no more than 8 colors). ("Random" means there is no telling how much they contrast with one another.)
Colors are given as RGB values (possible simplification: as H values in HSL model, or in some other color system of choice — I have some degree of control of how original colors are generated).
I need to compute a single color value that is the most "contrasting" (i.e. visually distinguishable) from all the colors in the list.
A practical criterion for the contrast, for the case with 8 colors:
If we draw 9 squares, filled with our colors as follows:
[1][2][3]
[4][X][5]
[6][7][8]
Color of square X must be clearly distinguishable from all adjacent colors.
Possible simplification: reduce maximum number of colors from 8 to 4 (squares 2, 4, 5, 7 in the example, ignore diagonals).
I think the best solution could be:
maximize hue difference with all the colors (simple linear optimization)
maximize lightness
maximize saturation
http://www.colorsontheweb.com/colorcontrasts.asp
Edit: with linear programming, you could give lower significance to the diagonal colors.
Edit2: What maximization means:
You want to maximize the hue contrast, meaning the sum of all |Hi - result|, where Hi stands for the hue of color i, is to be maximized. You can even add constraints for a minimum difference, e.g. |Hi - result| > Hmin. The actual calculation can be done by handing the equations to a linear optimization algorithm, or you can simply try all hue values between 0.0 and 1.0 in steps of 0.05 and keep the best result, as in the sketch below.
http://en.wikipedia.org/wiki/Linear_programming.
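A brute-force sketch along those lines; note that hue is circular, so this version maximizes the minimum wrap-around hue distance rather than the plain sum (with the plain sum, the optimum can collapse onto one side of the wheel):

```python
import colorsys

def hue_distance(h1, h2):
    """Circular distance on the hue wheel, result in [0, 0.5]."""
    d = abs(h1 - h2) % 1.0
    return min(d, 1.0 - d)

def most_contrasting_hue(rgb_colors, steps=200):
    """Brute-force the hue whose minimum circular distance to all inputs is largest."""
    hues = [colorsys.rgb_to_hls(r / 255, g / 255, b / 255)[0] for r, g, b in rgb_colors]
    best = max((i / steps for i in range(steps)),
               key=lambda h: min(hue_distance(h, hi) for hi in hues))
    # Return it fully saturated at medium lightness, per the suggestion above.
    return tuple(round(c * 255) for c in colorsys.hls_to_rgb(best, 0.5, 1.0))

print(most_contrasting_hue([(255, 0, 0), (0, 255, 0)]))  # a blue, opposite both inputs
```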

Should the result of sRGB->CIEXYZ->discard luminance be convertible back to sRGB?

I'm writing shadow removal for my pony detector. After I've converted a PNG image from sRGB to CIE XYZ, I remove the luminance as per the instructions I linked.
When I try to convert the image back to sRGB for display, I get RGB values that fall outside the sRGB gamut (I get values greater than 255). Is this normal, or should I keep looking for bugs? Note: conversion to XYZ and back without modification produces no glitches.
[Illustration: top left: original; bottom left: byte-value wraparound for red and blue hues; top right: color ratios; bottom right: converted to HSV with equalized value.]
The final transformation does not remove the luminance; it creates two new values, x and y, that together define the chromaticity, while Y contains the luminance. This is the key paragraph in your instructions link (just before the formulas you link):
The CIE XYZ color space was deliberately designed so that the Y parameter was a measure of the brightness or luminance of a color. The chromaticity of a color was then specified by the two derived parameters x and y, two of the three normalized values which are functions of all three tristimulus values X, Y, and Z:
What this means is that if you have an image of a surface that has a single colour, but a part of the surface is in the shadow, then in the xyY space the x and y values should be the same (or very similar) for all pixels on the surface whether they are in the shadow or not.
The xyz values you get from the final transformation cannot be translated directly back to RGB as if they were XYZ values (note the capitalization). So, to answer your actual question: if you are using the xyz values as if they were XYZ values, then there are no bugs in your code; translating to RGB from them with the formulas you linked is simply not supposed to work.
Now if you want to actually remove the shadows from the entire image what you do is:
Scale your RGB values to floating-point values in the [0-1] range by dividing each value by 255 (assuming 24-bit RGB). The conversion to floating point helps accuracy a lot!
Convert the image to xyY
Replace all Y values with one value, say 0.5, but experiment.
Reverse the transformation: from xyY back to XYZ using X = x*Y/y and Z = (1 - x - y)*Y/y, and then from XYZ back to RGB.
Then rescale your RGB values on the [0..255] interval.
This should give you a very boring but shadowless version of your original image. Of course if your goal is detecting single colour regions you could also just do this on the xy values in the xyY image and use the regions you detect there on the original.
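A compact numpy sketch of that recipe, under the assumption of linear sRGB input in [0, 1] (the matrices are the standard sRGB/D65 pair; if your data is gamma-encoded, apply the sRGB transfer function around this):

```python
import numpy as np

# Standard sRGB (D65) RGB <-> XYZ matrices.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])
XYZ_TO_RGB = np.linalg.inv(RGB_TO_XYZ)

def flatten_luminance(rgb, Y_fixed=0.5):
    """rgb: float array of shape (..., 3) in [0, 1]; returns RGB with constant luminance."""
    xyz = rgb @ RGB_TO_XYZ.T
    s = xyz.sum(axis=-1, keepdims=True)
    s = np.where(s == 0.0, 1e-12, s)                 # guard black pixels
    x, y = xyz[..., 0:1] / s, xyz[..., 1:2] / s
    y = np.where(y == 0.0, 1e-12, y)
    # xyY -> XYZ with every Y replaced by Y_fixed, then back to RGB.
    X = x * Y_fixed / y
    Z = (1.0 - x - y) * Y_fixed / y
    xyz_flat = np.concatenate([X, np.full_like(X, Y_fixed), Z], axis=-1)
    return np.clip(xyz_flat @ XYZ_TO_RGB.T, 0.0, 1.0)

img = np.random.rand(4, 4, 3)        # stand-in for your decoded image
flat = flatten_luminance(img)        # boring but (mostly) shadowless
```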
