Equivalent gray value of a color given the LAB values - python-3.x

I have an RGB image and I converted it to the Lab colorspace. Now I want to convert the Lab image to a grayscale one. I know that L is not the same as luminance.
So, any idea how to get the equivalent gray value of a specific color in Lab space?
I'm looking for a formula or algorithm that determines the equivalent gray value of a color given its Lab values.

The conversion from luminance Y to lightness L* is defined by the CIE 1976 lightness function. Put another way, L* transforms linear values into non-linear values that are approximately perceptually uniform for the human visual system (HVS). With that in mind, your question depends on what kind of gray you are looking for. If you want a perceptually uniform, and thus non-linear, gray, the lightness channel of CIE L*a*b* is exactly the CIE 1976 lightness and is appropriate as-is. If you need something linear, you have to convert back to CIE XYZ tristimulus values and use the Y channel.
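If you need the linear gray, the inverse of the CIE 1976 lightness function is simple enough to write down. A minimal sketch, assuming L* in the usual [0, 100] range and NumPy at hand:

import numpy as np

def lab_L_to_linear_Y(L, Y_n=1.0):
    # Invert the CIE 1976 lightness function: L* -> relative luminance Y,
    # scaled by the reference white luminance Y_n.
    L = np.asarray(L, dtype=float)
    epsilon = 216.0 / 24389.0  # CIE epsilon = (6/29)**3
    kappa = 24389.0 / 27.0     # CIE kappa = (29/3)**3
    f = (L + 16.0) / 116.0
    # Above L* = kappa * epsilon = 8 the cube law applies; below it, the linear toe.
    return np.where(L > kappa * epsilon, f ** 3, L / kappa) * Y_n

For the perceptually uniform gray, you simply keep the L channel itself as the grayscale image.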

Related

How to deal with negative values when manipulating CIE xyY or XYZ colors?

I'm writing some image processing code to do some creative color gamut limiting that I want to use for color grading applications. I'm building it to operate in the DaVinci Wide Gamut color space, typically using the DaVinci Intermediate logarithmic encoding, since that is my typical working space in a color-managed workflow. To do this, I want to convert the input to xyY, limit the xy values to fit within my desired gamut, and then convert back to DaVinci Wide Gamut/Intermediate.
These are the steps I'm following (see the sketch after the list):
Convert DaVinci Intermediate RGB to linear RGB
Multiply by DWG to XYZ conversion matrix.
Convert XYZ to xyY.
Modify x and y to fall within gamut.
Convert xyY to XYZ.
Multiply by XYZ to DWG conversion matrix.
Convert linear RGB to DaVinci Intermediate.
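For reference, a minimal sketch of the two xyY conversions in steps 3 and 5 (the helper names are mine, and the guards a real implementation needs for X + Y + Z == 0 and y == 0 are omitted):

import numpy as np

def XYZ_to_xyY(XYZ):
    # x and y are the chromaticity coordinates; Y carries the luminance.
    X, Y, Z = XYZ
    s = X + Y + Z
    return np.array([X / s, Y / s, Y])

def xyY_to_XYZ(xyY):
    x, y, Y = xyY
    return np.array([x * Y / y, Y, (1.0 - x - y) * Y / y])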
This works fine for most cases, and I have also verified that round-tripping without modification of x and y works as intended. The problem occurs for extremely saturated blues. Take the following RGB values in DaVinci Wide Gamut/Intermediate as an example: [0, 0, 1.0]
Let's follow along:
Convert to linear: [0, 0, 100]
Convert to XYZ: [10.11,-14.78, 132.59]
Convert to xyY: [0.08,-0.12,-14.78]
Modifying x and y to fit in my gamut: [0.18, 0.07, -14.78]
Convert back to XYZ: [-35.64,-14.78,-149.08]
Convert back to DWG linear: [-27.99,-27.99,-117.44]
Convert back to DaVinci Intermediate: [-292.34, -292.34, -1226.52]
As you can see, the result is nonsensical, but only for these extreme blue values. To me, it seems pretty clear that this is caused by the blue primary in DaVinci Wide Gamut, which has a negative y coordinate ([0.0790, -0.1155]). This produces negative luminance (Y = -14.78), and when the xy values are modified without dealing with the luminance at the same time, things go haywire.
How do I deal with negative values in xyY and XYZ when doing gamut limiting in this way?

Detecting Colour Neutrality From HSL

I have a number of HSL values that I want to categorise in terms of:
colour neutrality (i.e. group all neutral colours such as black/grey/white/beige/brown together, and group all non-neutral colours such as yellow/blue/green/red in a separate category)
brightness
The latter is relatively simple in that I can take the L value and define >50% as light and <50% as dark. However, I'm having trouble defining a rule that would categorise HSL values by their colour neutrality - what's the best way to do this?
I put together a few colour charts arranged by HSL, HSV and LCH (cylindrical Lab) to see which is the better metric for 'colour neutrality'. Saturation/chroma increases top to bottom, luminance/value increases left to right, and hue increases diagonally from top-left to bottom-right inside each 4×4 sub-square.
[Colour charts: HSL, HSV, LCH]
Of course it's up to you to decide, but I think HSL S, HSV S and LCH C all seem to correspond fairly well with 'colour neutrality'.
Had a little idea: to me, 'colour neutrality' looks like a combination of low saturation and lightness near the black/white extremes. We can implement a version of this with some simple arithmetic (note that after -colorspace HSL, the fx symbols g and b hold saturation and lightness respectively):
convert xc:blue xc:darkRed xc:red xc:pink xc:brown xc:gray \
-colorspace HSL -format '%[fx:(abs(b-0.5)+(1-g))/1.5]\n' info:-
# 5.08634e-06
# 0.151629
# 5.08634e-06
# 0.250985
# 0.333272
# 0.670588
Or applying it to the HSL colour chart
convert comp-hsl.png -colorspace HSL \
-channel red -fx "(abs(b-0.5)+(1-g))/1.5" -channel R -separate \
hsl4.png
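The same metric in plain Python, for anyone not using ImageMagick (a sketch; the standard library's colorsys implements the same HSL model under the name HLS and returns values in (h, l, s) order):

import colorsys

def neutrality(r, g, b):
    # Score in [0, 1]; higher means more neutral. RGB inputs in [0, 1].
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    return (abs(l - 0.5) + (1.0 - s)) / 1.5

print(neutrality(0.0, 0.0, 1.0))  # pure blue -> 0.0
print(neutrality(0.5, 0.5, 0.5))  # mid gray  -> ~0.67

Thresholding this score then separates the two categories; where exactly to put the cutoff is something to tune against your own data.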

What is the mathematical relationship between hexadecimal colour values on opposite sides of the colour wheel?

I want to incrementally rotate around the color wheel, hopping to the opposite side each turn. I have an undefined number of clients to represent on a Kendo chart and I want to ensure that they are all identifiable against their immediate neighbours. Can anyone pin down a mathematical relationship between colours on opposite sides of the colour wheel? I am of course working on this myself, but I thought it an interesting little problem that you might enjoy working on with me.
It would be easier to do this type of conversion in the HSL or HSV color space, rather than RGB (aka hex values). Then to get the opposite point on the wheel just follow the formula:
hue = (hue + 180) % 360
So starting with hsl(0, 80%, 20%) would yield hsl(180, 80%, 20%) etc. The easiest way to convert a given RGB value to an RGB value on the opposite point would be to convert RGB to HSL or HSV, do the shift, and convert that back to RGB. The formulas for that can be found here: http://en.wikipedia.org/wiki/HSL_and_HSV
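In Python, that round trip is a few lines with the standard library's colorsys (a sketch; the complement helper and the sample colour are illustrative):

import colorsys

def complement(rgb_hex):
    # Parse '#rrggbb', rotate the hue by 180 degrees, and re-encode.
    r, g, b = (int(rgb_hex[i:i + 2], 16) / 255.0 for i in (1, 3, 5))
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    r2, g2, b2 = colorsys.hsv_to_rgb((h + 0.5) % 1.0, s, v)
    return '#{:02x}{:02x}{:02x}'.format(*(round(c * 255) for c in (r2, g2, b2)))

print(complement('#3366cc'))  # -> '#cc9933'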
Modern browsers support HSL natively, so maybe some of this complexity can be avoided and you would never need to muck with RGB values in the first place. http://caniuse.com/css3-colors
The color wheel is based on the HSV color space, where the hue coordinate represents your angle on the color wheel. You need to convert RGB colors into HSV, perform your rotation on the hue coordinate, then convert back to RGB.

Problem determining Color Saturation

I am writing a tool that will attempt to determine which of the known colors is "closest" to some user-chosen color (from the full RGB gamut). I am noticing that the values returned by Microsoft's GetHue and GetBrightness appear to match the HSL hue and HSL lightness values computed from the formulas in the Wikipedia HSL and HSV article. But Microsoft's GetSaturation does not appear to consistently equate to any computed value (HSL, HSV, HSI).
Question(s)
What color model does Microsoft use for its GetHue, GetSaturation, and GetBrightness functions?
Has anyone found errors in the HSL and HSV computations?
I reviewed Chris Haas's algorithm in "RGB to HSL and back, calculation problems" and found that my derivation of the algorithm was flawed.
What color model does Microsoft use for its GetHue, GetSaturation, and GetBrightness functions? HSL. In the Color Dialog component, the HSL values are transformed from the range [0,1] to the range [0,240].
There do not appear to be any errors in the HSL and HSV computations, only the ones I introduced myself.
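A quick way to cross-check from Python (a sketch; the standard library's colorsys implements the same HSL model, under the name HLS, with every value in [0, 1]):

import colorsys

r, g, b = 64 / 255.0, 128 / 255.0, 192 / 255.0
h, l, s = colorsys.rgb_to_hls(r, g, b)

print(h * 360.0)  # compare with Color.GetHue(), which reports degrees
print(s, l)       # compare with Color.GetSaturation() and Color.GetBrightness()
print(round(h * 240), round(s * 240), round(l * 240))  # the Color Dialog's 0-240 scale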

Should the result of sRGB->CIEXYZ->discard luminance be convertible back to sRGB?

I'm writing shadow removal for my pony detector. After I've converted a PNG image from sRGB to CIE XYZ, I remove the luminance as per the instructions, i.e. by normalizing XYZ to xyz so that x + y + z = 1.
When I try to convert the image back to sRGB for display, I get RGB values that fall outside the sRGB gamut (I get values greater than 255). Is this normal, or should I keep looking for bugs? Note: conversion to XYZ and back without modification produces no glitches.
Illustration (top left: original; bottom left: byte-value wraparound for red and blue hues; top right: color ratios; bottom right: converted to HSV with the value channel equalized).
The final transformation does not remove the luminance; it creates two new values, x and y, that together define the chromaticity, while Y contains the luminance. This is the key paragraph in the instructions you linked (just before the formulas):
"The CIE XYZ color space was deliberately designed so that the Y parameter was a measure of the brightness or luminance of a color. The chromaticity of a color was then specified by the two derived parameters x and y, two of the three normalized values which are functions of all three tristimulus values X, Y, and Z."
What this means is that if you have an image of a surface that has a single colour, but a part of the surface is in the shadow, then in the xyY space the x and y values should be the same (or very similar) for all pixels on the surface whether they are in the shadow or not.
The xyz values you get from that final transformation cannot be translated directly back to RGB as if they were XYZ values (note the capitalization). So, to answer your actual question: if you are using the xyz values as if they were XYZ values, then there are no bugs in your code; translating them to RGB with the formulas you linked is simply not expected to work.
Now, if you want to actually remove the shadows from the entire image, what you do is:
Scale your RGB values to floating-point values in the [0, 1] range by dividing each value by 255 (assuming 24-bit RGB). The conversion to floating point helps accuracy a lot!
Convert the image to xyY
Replace all Y values with one value, say 0.5, but experiment.
Reverse the transformation: convert xyY back to XYZ, then XYZ back to RGB, using the formulas you linked.
Then rescale your RGB values to the [0, 255] interval.
This should give you a very boring but shadowless version of your original image. Of course if your goal is detecting single colour regions you could also just do this on the xy values in the xyY image and use the regions you detect there on the original.
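Putting the recipe together, a minimal NumPy sketch (assuming sRGB with the usual D65 matrices; the helper names are mine, and the guards a real implementation needs against division by zero are omitted):

import numpy as np

# Linear sRGB (D65) -> XYZ matrix and its inverse.
M = np.array([[0.4124564, 0.3575761, 0.1804375],
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])
M_inv = np.linalg.inv(M)

def srgb_to_linear(c):
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1.0 / 2.4) - 0.055)

def flatten_luminance(img, Y_flat=0.5):
    # img: float array of shape (..., 3), sRGB values already scaled to [0, 1].
    XYZ = srgb_to_linear(img) @ M.T
    s = XYZ.sum(axis=-1, keepdims=True)
    x, y = XYZ[..., 0] / s[..., 0], XYZ[..., 1] / s[..., 0]
    # Rebuild XYZ from the chromaticity (x, y) and a constant luminance.
    XYZ_flat = np.stack([x * Y_flat / y,
                         np.full_like(y, Y_flat),
                         (1.0 - x - y) * Y_flat / y], axis=-1)
    return np.clip(linear_to_srgb(XYZ_flat @ M_inv.T), 0.0, 1.0)

The final clip is what handles the out-of-gamut values you were seeing: a constant Y combined with some chromaticities simply has no representation inside sRGB.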
