Calculating complementary hue in CIELAB color space

I need to procedurally generate color palettes consisting of several swatches. For each swatch the luminance is given as input, while chroma and hue are chosen using some algorithm.
Since luminance needs to be perceptually fixed I opted for working in LAB color space. I implemented a small C++ LAB class that converts to/from RGB. It appears to be working fine.
In order to manipulate chroma and hue I convert the A and B components into polar coordinates, where the angle of the AB vector represents hue and its length represents chroma (the so-called CIELCh model).
chroma = sqrt(a*a + b*b);
hue = atan2(b, a);
Now rotating this vector shifts the hue while scaling it changes the chroma. So far so good.
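The manipulation described above can be sketched like this (an illustrative helper, not the original LAB class; angles in radians):

```cpp
#include <cmath>

// Convert (a, b) to polar (chroma, hue), rotate the hue, scale the
// chroma, and convert back to rectangular coordinates.
struct AB { double a, b; };

const double kPi = 3.14159265358979323846;

inline AB shiftHueScaleChroma(AB ab, double hueShift, double chromaScale) {
    double chroma = std::sqrt(ab.a * ab.a + ab.b * ab.b) * chromaScale;
    double hue    = std::atan2(ab.b, ab.a) + hueShift;  // rotate the AB vector
    return { chroma * std::cos(hue), chroma * std::sin(hue) };
}
```

A 180-degree rotation is then `shiftHueScaleChroma(ab, kPi, 1.0)`, which negates both components.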
This now behaves much like the HSL/HSV color wheel. However, rotating the AB vector by 180 degrees does not actually produce a complementary hue the way it would in the HSL wheel.
So my question is how to go about calculating the correct complementary hue in the LAB color space?

Related

How does one shift hue in CIE-XYZ color space?

I need to implement my own hue rotation function within the CIE color space. I can't find any technical descriptions of how this is done. I found high level descriptions like "rotate around the Y axis in the XYZ color space", which makes sense because Y is the luminance.
I quickly did a dumb matrix rotation:
vec3 xyz = rgb_to_cie_xyz(color.r, color.g, color.b);
vec3 Y_axis = vec3(0,1,0);
mat4 mat = rotationMatrix(Y_axis, hue_angle);
vec4 res_xyz = mat * vec4(xyz, 1.0);
vec3 res = cie_xyz_to_rgb(res_xyz.x, res_xyz.y, res_xyz.z);
But after learning more about CIE space, I later realized this is completely wrong.
So my question is: A. How do you rotate hue in CIE/XYZ?
or B. Should I convert from XYZ to CIE LCH and change the H (hue) there, and then convert back to XYZ? (that sounds easy if I can find functions for it but would that even be correct / equivalent to changing hue in XYZ?)
or C. should I convert from XYZ to this 2D-xy CIE-xyY color space? How do you rotate hue on that?
[EDIT]
I have implemented, for example, this code (tried another source & another source too), planning to convert from XYZ to LAB to LCH, change the hue, then go LCH to LAB to XYZ. But it doesn't survive the round trip. XYZ - LAB - XYZ - RGB works fine and looks identical, but XYZ - LAB - LCH - LAB - XYZ - RGB breaks; the result color is completely different from the source color. Is it not meant to be used like this (e.g. is it one way only?), or what am I misunderstanding?
vec3 xyz = xyzFromRgb(color);
vec3 lab = labFromXyz(xyz);
vec3 lch = lchFromLab(lab);// doesn't work
//lch.z = lch.z + hue;
lab = labFromLch(lch);// doesn't work
xyz = xyzFromLab(lab);
vec3 rgb = rgbFromXyz(xyz);
my full code: https://github.com/gka/chroma.js/issues/295
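For comparison, here is a minimal pair of LAB <-> LCh helpers that round-trips exactly (an illustrative sketch, not the poster's code). A common cause of a broken round trip is a degree/radian mismatch, e.g. `lchFromLab` returning degrees while `labFromLch` expects radians; both directions below use degrees consistently:

```cpp
#include <cmath>

// Vec3 holds (L, a, b) for LAB or (L, C, h) for LCh.
struct Vec3 { double x, y, z; };

const double kDeg = 180.0 / 3.14159265358979323846;

inline Vec3 lchFromLab(Vec3 lab) {
    double C = std::sqrt(lab.y * lab.y + lab.z * lab.z);
    double h = std::atan2(lab.z, lab.y) * kDeg;   // hue in degrees
    if (h < 0.0) h += 360.0;                      // keep h in [0, 360)
    return { lab.x, C, h };
}

inline Vec3 labFromLch(Vec3 lch) {
    double h = lch.z / kDeg;                      // degrees back to radians
    return { lch.x, lch.y * std::cos(h), lch.y * std::sin(h) };
}
```

With matched units, `labFromLch(lchFromLab(lab))` reproduces the input up to floating-point error.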
Resources:
what is CIE and CIE-XYZ:
The XYZ system is based on color-matching experiments. X, Y and Z are
extrapolations of RGB created mathematically to avoid negative numbers
and are called tristimulus values. The X value in this model represents
approximately the red/green part of a color, the Y value represents
approximately the lightness, and the Z value corresponds roughly to the
blue/yellow part.
CIE LAB and CIE LCH:
The LCh color space, like CIELAB, is preferred by some
industry professionals because its system correlates well with how the
human eye perceives color. It has the same diagram as the L*a*b* color
space but uses cylindrical coordinates instead of rectangular
coordinates.
In this color space, L* indicates lightness, C* represents chroma, and
h is the hue angle. The value of chroma C* is the distance from the
lightness axis (L*) and starts at 0 in the center. The hue angle starts at
the +a* axis and is expressed in degrees (e.g., 0° is +a*, or red, and
90° is +b*, or yellow).
How to convert between RGB and CIE XYZ (transformation matrices)
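As a reference point for that conversion, the widely used sRGB (D65) matrix from IEC 61966-2-1 can be sketched as follows (helper names are illustrative; note the matrix applies to *linear* RGB, so the sRGB gamma must be removed first):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Inverse sRGB transfer function: gamma-encoded component -> linear light.
inline double srgbToLinear(double c) {
    return (c <= 0.04045) ? c / 12.92 : std::pow((c + 0.055) / 1.055, 2.4);
}

// Linear-light sRGB -> CIE XYZ, D65 white point.
inline Vec3 xyzFromLinearRgb(double r, double g, double b) {
    return {
        0.4124564 * r + 0.3575761 * g + 0.1804375 * b,
        0.2126729 * r + 0.7151522 * g + 0.0721750 * b,  // Y: relative luminance
        0.0193339 * r + 0.1192090 * g + 0.0950304 * b
    };
}
```

Linear-RGB white (1, 1, 1) maps to the D65 white point, with Y normalized to 1.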
CIE color has a lot of representations and sub-representations, and it's not visualized or explained technically or consistently around the internet. After reading many sources and checking many projects to converge on a clear picture, and as Giacomo said in the comments: yes, it seems the only way to change hue is to go from CIE XYZ to CIE LAB and then into a cylindrical, hue-shiftable representation, which is CIE LCH.
The conversion from (RGB -) XYZ - LAB - LCH - LAB - XYZ (- RGB) just to change the hue is normal and done everywhere, although it's very hard to find projects or code online that specifically say they're "changing hue in CIE color space" or that even contain the word "hue" at all. It's also strange that you cannot find anything on GitHub that converts straight from XYZ to LCH or HSV to LCH, given how many projects chain the intermediate steps to blend CIE colors, and given that the web is transitioning to the LCH color space.
I found this brilliant shadertoy while searching for lablch: https://www.shadertoy.com/view/lsdGzN which offers efficient and merged XYZ to LCH and LCH to XYZ conversions. ❤️
It has a lot of magic numbers in it though, so by using it I still haven't figured out what was wrong with my own code/ports, but others are having issues too. I'm doing atan2 right, floats right, matrix multiplications right, etc. 🤷‍♂️ I'll update my answer when I get around to figuring it out.
[Edit] It seems the problem is that information is lost going through all these color spaces, and a lot of assumptions or approximations must be made to complete a round trip. The Shadertoy author probably did some magic. I'll have to investigate further when I need to.

Relation of luminance in RGB/XYZ color and physical luminance

Short version: When a color described in XYZ or xyY coordinates has a luminance Y=1, what are the physical units of that? Does that mean 1 candela, or 1 lumen? Is there any way to translate between this conceptual space and physical brightness?
Long version: I want to simulate how the sky looks in different directions, at different times of day, and (eventually) under different cloudiness and air-pollution conditions. I've learned enough to figure out how to translate a given spectrum into a chromaticity, for example xyz coordinates. But almost everything I've read on color theory for graphical display is focused on relative color, so the luminance is always 1. Non-programming color theory describes the units of luminance, so I can translate from a spectrum in watts/square meter/steradian to candela or lumens, but nothing describes the units of luminance in programming. What are the units of luminance in XYZ coordinates? I understand that the actual brightness of a patch would depend on monitor settings, but I'm really not finding any hints as to how to proceed.
Below is an example of what I'm coming across. The base color, at relative luminance of 1, was calculated from first principles. All the other colors are generated by increasing or decreasing the luminance. Most of them are plausible colors for mid-day sky. For the parameters I've chosen, I believe the total intensity in the visible range is 6.5 W/m2/sr = 4434 cd/m2, which seems to be in the right ballpark according to Wiki: Orders of Magnitude. Which color would I choose to represent that patch of sky?
Without further qualification, luminance is usually expressed in candelas per square meter (cd/m2), and CIE XYZ's Y component is a luminance in cd/m2 if the convention used is "absolute XYZ", which is rare. (The link is to an article I wrote which contains more detailed information.) More commonly, XYZ colors are normalized such that the white point (such as the D65 or D50 white point) has Y = 1 (or Y = 100).
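Concretely, under that common normalization, going from relative to absolute luminance is just a scale by the white luminance (a sketch; the 100 cd/m2 white level below is an assumed example, not a value from the question):

```cpp
// Relative Y (white point at 1.0) -> absolute luminance in cd/m^2,
// given the luminance of the white point in cd/m^2.
inline double absoluteLuminance(double relativeY, double whiteCdPerM2) {
    return relativeY * whiteCdPerM2;
}
```

For example, on a display whose white is 100 cd/m2, a patch with relative Y = 0.5 corresponds to 50 cd/m2.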

How to apply flat shading to RGB colors?

I am creating a small 3d rendering application. I decided to use simple flat shading for my triangles - just calculate the cosine of angle between face normal and light source and scale light intensity by it.
But I'm not sure how exactly I should apply that shading coefficient to my RGB colors.
For example, imagine some surface at 60 degree angle to light source. cos(60 degree) = 0.5, so I should retain only half of the energy in emitted light.
I could simply scale RGB values by that coefficient, as in following pseudocode:
double shade = cos(angle(normal, lightDir))
Color out = new Color(in.r * shade, in.g * shade, in.b * shade)
But the resulting colors get too dark even at smaller angles. After some thought, that seems logical: our eyes perceive the logarithm of light energy (that's why we can see both in bright daylight and at night). And RGB values already represent that log scale.
My next attempt was to use that linear/logarithmic insight. Theoretically:
output energy = ln(exp(input energy) * shade)
That can be simplified to:
output energy = ln(exp(input energy)) + ln(shade)
output energy = input energy + ln(shade)
So such shading will just amount to adding the logarithm of the shade coefficient (which is negative) to the RGB values:
double shade = ln(cos(angle(normal, lightDir)))
Color out = new Color(in.r + shade, in.g + shade, in.b + shade)
That seems to work, but is it correct? How is it done in real rendering pipelines?
The RGB color vector is multiplied by the shade coefficient, which is the cosine value, as you initially assumed. The logarithmic scaling is done by the target imaging device and by human eyes.
If your colors get too dark then the probable cause is:
the cosine or angle value gets truncated to an integer
or your pipeline does not have linear-scale output (some gamma corrections can do that)
or you have a bug somewhere
or your angle and cosine use different units (radians/degrees)
or you forgot to add an ambient light coefficient to the shade value
or your vectors are opposite or wrong (check them visually; see the first link for how)
or your vectors are not in the same coordinate system (light is usually in GCS and normal vectors in model LCS, so you need to convert at least one of them to the coordinate system of the other)
The cos(angle) itself is not usually computed with a cosine.
As you have all the data as vectors, just use the dot product:
double shade = dot(normal, lightDir) / (|normal| * |lightDir|)
If the vectors are unit length you can discard the division by the magnitudes ... that is why normal and light vectors are normalized ...
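Putting the answer together, a minimal flat-shading sketch might look like this (illustrative names; the ambient term and clamping follow the checklist above):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

inline Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

inline double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Diffuse term from the dot product of unit vectors, clamped at zero
// for back-facing surfaces, plus an ambient floor, multiplied into
// linear RGB componentwise.
inline Vec3 flatShade(Vec3 rgb, Vec3 normal, Vec3 lightDir, double ambient) {
    double shade = std::max(0.0, dot(normalize(normal), normalize(lightDir)));
    double k = std::min(1.0, ambient + shade);  // don't overshoot full intensity
    return { rgb.x * k, rgb.y * k, rgb.z * k };
}
```

A face lit head-on keeps its full color; a back-facing one falls to the ambient level.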
Some related questions and notes
Normal shading: this may enlighten you on a thing or two (for beginners)
Normal/Bump mapping: see the fragment shader and search for the dot product
Mirrored light: see this for a slightly more complex lighting scheme
GCS/LCS mean global/local coordinate system

Why is color segmentation easier on HSV?

I've heard that if you need to do color segmentation in your software (create a binary image from a colored image by setting pixels to 1 if they meet certain threshold rules like R < 100, G > 100, 10 < B < 123) it is better to first convert your image to HSV. Is this really true? And why?
The big reason is that it separates color information (chroma) from intensity or lighting (luma). Because value is separated out, you can construct a histogram or thresholding rules using only saturation and hue. In theory this will work regardless of lighting changes in the value channel; in practice it is just a nice improvement. Even by singling out only the hue you still have a very meaningful representation of the base color that will likely work much better than RGB. The end result is more robust color thresholding over simpler parameters.
Hue is a circular representation of color, so that 0 and 360 are the same hue, which gives you more flexibility with the buckets you use in a histogram. Geometrically, you can picture the HSV color space as a cone or cylinder with H being the angle, saturation the radius, and value the height. See the HSV Wikipedia page.
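The hue-based thresholding idea can be sketched like this (illustrative helpers; the wrap-around band check handles the 0/360 seam mentioned above):

```cpp
#include <algorithm>
#include <cmath>

struct Rgb { double r, g, b; };   // components in [0, 1]

// RGB -> HSV hue in degrees [0, 360); achromatic pixels return 0.
inline double hueDegrees(Rgb c) {
    double mx = std::max({c.r, c.g, c.b});
    double mn = std::min({c.r, c.g, c.b});
    double d  = mx - mn;
    if (d == 0.0) return 0.0;                      // gray: hue undefined
    double h;
    if (mx == c.r)      h = std::fmod((c.g - c.b) / d, 6.0);
    else if (mx == c.g) h = (c.b - c.r) / d + 2.0;
    else                h = (c.r - c.g) / d + 4.0;
    h *= 60.0;
    return (h < 0.0) ? h + 360.0 : h;
}

// True if hue h lies inside [lo, hi], handling wrap-around at 360
// (e.g. a "red" band of [340, 20]).
inline bool hueInBand(double h, double lo, double hi) {
    return (lo <= hi) ? (h >= lo && h <= hi) : (h >= lo || h <= hi);
}
```

Thresholding on `hueInBand(hueDegrees(pixel), lo, hi)` stays stable when the lighting (value channel) changes, which is the robustness the answer describes.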

CIE XYZ colorspace: do I have RGBA or XYZA?

http://zi.fi/shots/xyz.png
I plan to write a painting program based on linear combinations of the xy-plane points (0,1), (1,0) and (0,0). Such a system works identically to RGB, except that the primaries are not within the gamut but at the corners of a triangle that encloses the entire gamut. I have seen the three points referred to as X, Y and Z (upper case) somewhere, but I cannot find the page anymore (I marked them in the picture myself).
My pixel format stores the intensity of each of those three components the same way as RGB does, together with alpha value. This allows using pretty much any image manipulation operation designed for RGBA without modifying the code.
What is my format called? Is it XYZA, RGBA or something else? Google doesn't seem to know of XYZA. RGBA will get confused with sRGB + alpha (which I also need to use in the same program).
Notice that the primaries X, Y and Z and their intensities have little to do with the x, y and z coordinates (lower case) that are more commonly used.
http://en.wikipedia.org/wiki/RGB_color_space has the answer:
The CIE 1931 color space standard defines both the CIE RGB space, which is an RGB color space with monochromatic primaries, and the CIE XYZ color space, which works like an RGB color space except that it has non-physical primaries that can not be said to be red, green, and blue.
From this I interpret that XYZA is the correct way to call it.
Are you storing a float between 0.0 and 1.0 for each X, Y, Z and A intensity and then mapping that to the RGBA space?
You just have a custom format. It's not called anything special. (I don't believe it really is a pixel format; it is in fact a color space, or a color coordinate in a color space, being mapped to a certain pixel format.)
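The pixel layout described in the question can be sketched as a plain RGBA-shaped struct (the name XYZA follows the question's own terminology; the compositing helper is an illustrative example of reusing RGBA-oriented operations unchanged):

```cpp
// Three primary intensities plus alpha, laid out like RGBA so image
// operations written for RGBA apply without modification.
struct PixelXYZA {
    float x, y, z, a;   // primary intensities and alpha, typically [0, 1]
};

// Ordinary premultiplied-alpha "over" compositing works componentwise,
// exactly as it would for RGBA.
inline PixelXYZA over(PixelXYZA src, PixelXYZA dst) {
    float k = 1.0f - src.a;
    return { src.x + dst.x * k, src.y + dst.y * k,
             src.z + dst.z * k, src.a + dst.a * k };
}
```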
