I have a number of HSL values that I want to categorise in terms of:
colour neutrality (i.e. group all neutral colours such as black/grey/white/beige/brown together, and group all non-neutral colours such as yellow/blue/green/red in a separate category)
brightness
The latter is relatively simple: I can take the L value and define >50% as light and <50% as dark. However, I'm having trouble defining a rule that would categorise HSL values by their colour neutrality - what's the best way to do this?
I put together a few colour charts arranged by HSL, HSV and LCH (cylindrical Lab) to see which is the better metric for 'colour neutrality'. Saturation/chroma increases top to bottom, lightness/value increases left to right, and hue increases diagonally from top-left to bottom-right inside each 4×4 sub-square.
[Colour charts: HSL, HSV, LCH]
Of course it's up to you to decide, but I think HSL S, HSV S and LCH C all seem to correspond fairly well with 'colour neutrality'.
Had a little idea. To me colour neutrality looks roughly like a combination of low saturation and a lightness far from 50%. We can implement a version of this with some simple arithmetic:
convert xc:blue xc:darkRed xc:red xc:pink xc:brown xc:gray \
-colorspace HSL -format '%[fx:(abs(b-0.5)+(1-g))/1.5]\n' info:-
# 5.08634e-06
# 0.151629
# 5.08634e-06
# 0.250985
# 0.333272
# 0.670588
Or, applying it to the HSL colour chart:
convert comp-hsl.png -colorspace HSL \
-channel red -fx "(abs(b-0.5)+(1-g))/1.5" -channel R -separate \
hsl4.png
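For anyone who would rather prototype this outside ImageMagick, here is a minimal Python sketch of the same arithmetic, assuming S and L are already normalised to [0, 1]. The 0.5 cut-off for 'neutral' and the function names are just illustrative, and the light/dark split from the question is included as well.

import colorsys

def neutrality(s, l):
    # Same arithmetic as the fx expression above: (abs(L - 0.5) + (1 - S)) / 1.5.
    # Higher scores mean more neutral (black/grey/white-ish), lower means more colourful.
    return (abs(l - 0.5) + (1.0 - s)) / 1.5

def categorise(s, l, neutral_cutoff=0.5):
    # The 0.5 cut-off is an arbitrary starting point; tune it on your own data.
    kind = "neutral" if neutrality(s, l) >= neutral_cutoff else "colourful"
    brightness = "light" if l > 0.5 else "dark"   # a tie at exactly 50% falls to "dark" here
    return kind, brightness

# Quick check against two of the colours used above (RGB in [0, 1]):
for name, rgb in {"blue": (0, 0, 1), "gray": (0.5, 0.5, 0.5)}.items():
    h, l, s = colorsys.rgb_to_hls(*rgb)   # note: colorsys returns H, L, S
    print(name, round(neutrality(s, l), 3), categorise(s, l))
# blue 0.0 ('colourful', 'dark')
# gray 0.667 ('neutral', 'dark')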
I have an RGB image and I converted it to the Lab colorspace. Now I want to convert the image in Lab space to a grayscale one. I know that L is not the same as luminance.
So, any idea how to get the equivalent gray value of a specific color in lab space?
I'm looking for a formula or algorithm to determine the equivalent gray value of a color given the LAB values.
The conversion from luminance Y to lightness L* is defined by the CIE 1976 lightness function. Put another way, L* transforms linear values into non-linear values that are perceptually uniform for the human visual system (HVS). With that in mind, your question depends on what kind of gray you are looking for: if you want something perceptually uniform, and thus non-linear, the Lightness channel of CIE L*a*b* is exactly the CIE 1976 lightness and is appropriate. If you need something linear, you would have to convert back to CIE XYZ tristimulus values and use the Y channel.
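To make the two options concrete, here is a small Python sketch of the CIE 1976 lightness function and its inverse (standard formulas, with Y expressed relative to the white point so that white is Y = 1): use L* directly if you want a perceptually uniform grey, or map L* back to Y if you need a linear one.

def lab_L_to_Y(L):
    # Inverse CIE 1976 lightness: L* in [0, 100] -> relative luminance Y in [0, 1].
    if L > 8.0:                        # 8 corresponds to Y/Yn = (6/29)**3
        return ((L + 16.0) / 116.0) ** 3
    return L * (3.0 / 29.0) ** 3       # linear segment near black

def Y_to_lab_L(Y):
    # CIE 1976 lightness function: relative luminance Y in [0, 1] -> L* in [0, 100].
    if Y > (6.0 / 29.0) ** 3:
        return 116.0 * Y ** (1.0 / 3.0) - 16.0
    return Y * (29.0 / 3.0) ** 3

print(Y_to_lab_L(0.18))   # ~49.5: an 18% grey card sits near the middle of the L* scale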
So I know basically nothing about colors apart from the very basics.
I have a color I'm trying to mimic.
I copied it, stuck it in Paint, and used the color picker to get the RGB and HSL numbers. Great!
RGB: 0; 49; 70
HSL: 132; 240; 33
The issue: When I try to manually input them into Excel, it "autocorrects" the RGB values after I enter in the HSL, and it "autocorrects" the HSL when I re-enter the RGB.
Why is this happening? Is this just an aspect to colors I know nothing about? Some limitation on Excel?
For reference, when I put in just the RGB, I'm much closer (but not quite there) to the color I'm looking for.
HSL and RGB are two ways of "translating colors" into numbers.
HSL means Hue, Saturation, Lightness.
Hue is a degree on the color wheel from 0 to 360. 0 is red, 120 is green, 240 is blue.
Saturation is a percentage value; 0% means a shade of gray and 100% is the full color.
Lightness is also a percentage; 0% is black, 100% is white.
RGB means Red, Green, Blue, each of which is given a value between 0 and 255 in Excel.
Check this tool - https://www.w3schools.com/colors/colors_hsl.asp
If you put in 0, 49, 70 as HSL you will see that it gets translated to 216, 141, 141 in RGB.
Excel is following the same logic, thus once you adjust the RGB the HSL gets automatically adjusted to represent the same color.
Excel colors are confusing because they don't follow the standard
The first source of confusion is not knowing that RGB and HSL are two different ways of describing colors (and that every RGB color code has an equivalent HSL color code; see the examples below). A second reason many people get confused when selecting colors specifically in Excel is:
“Frustratingly, Excel does not handle HSL in the standard way. Instead, Excel measures all the numbers where 0 is the lowest and 255 is the biggest. But, it’s a quirk we can handle.” - https://exceloffthegrid.com/convert-color-codes/
“This approach assumes that each of your HSL values can be expressed in the range of 0 to 255. If, however, your HSL values are either an angle (for hue) or a percentage (for saturation and luminance), then you'll need to convert them manually before entering them in step 6. You can convert an angle value by multiplying the angle by 255 and then dividing by 360. Percentages can be converted by multiplying them by 2.55.” - https://excelribbon.tips.net/T013535_Converting_HSL_to_RGB.html
“To change the lightness (adding white) or darkness (adding black), drag your selection up and down the luminance scale on the right. Notice that the Lum value increases as the color gets lighter. Full luminance is 255 (white), and setting Lum to 0 results in black regardless of the hue and saturation settings.” - https://support.microsoft.com/en-us/office/choosing-colors-in-the-colors-dialog-box-c3d59ddf-65a7-4e62-aad7-f7b8d7684a2d
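Putting those quotes together, a small Python sketch of the rescaling Excel expects might look like the following; it assumes standard HSL input (hue in degrees, saturation and luminance as percentages) and simply maps everything onto 0-255 as the excelribbon article describes. The function name is made up, and I haven't verified exactly how Excel rounds.

def hsl_to_excel_hsl(h_deg, s_pct, l_pct):
    # Standard HSL (H in degrees, S/L in percent) -> Excel's 0-255 scale for all three.
    return (round(h_deg * 255 / 360), round(s_pct * 2.55), round(l_pct * 2.55))

print(hsl_to_excel_hsl(198, 100, 14))   # -> (140, 255, 36) for the colour discussed below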
Examples of converting RGB color codes to HSL
rgb(0, 49, 70) = hsl(198, 100%, 14%)
These independent sites agree with Google that this RGB code converts to this HSL code:
https://colorpicker.me/#003146
https://hslpicker.com/#003146
So if someone told you that rgb(0, 49, 70) was equivalent to hsl(132, 240, 33), they were mistaken (even when using Excel's non-standard way of calculating HSL).
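You can reproduce that conversion yourself with Python's standard-library colorsys module; note that it works in the 0-1 range and returns the channels in H, L, S order.

import colorsys

h, l, s = colorsys.rgb_to_hls(0 / 255, 49 / 255, 70 / 255)
print(round(h * 360), round(s * 100), round(l * 100))   # 198 100 14, i.e. hsl(198, 100%, 14%)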
I want to incrementally rotate around the color wheel hopping to the opposite side each turn. I have an undefined number of clients to represent on a kendo chart and I want to ensure that they are all identifiable against their immediate neighbours. Can anyone pin down a mathematical relationship between colours on opposite sides of the colour wheel? I am of course working on this myself but I thought it an interesting little problem that you guys might enjoy with me.
It would be easier to do this type of conversion in the HSL or HSV color space, rather than RGB (aka hex values). Then to get the opposite point on the wheel just follow the formula:
hue = (hue + 180) % 360
So starting with hsl(0, 80%, 20%) would yield hsl(180, 80%, 20%) etc. The easiest way to convert a given RGB value to an RGB value on the opposite point would be to convert RGB to HSL or HSV, do the shift, and convert that back to RGB. The formulas for that can be found here: http://en.wikipedia.org/wiki/HSL_and_HSV
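As a concrete sketch of that round trip, again using Python's standard-library colorsys module (0-1 ranges, H, L, S order):

import colorsys

def opposite_rgb(r, g, b):
    # RGB -> HSL, rotate the hue by 180 degrees, and convert back to RGB.
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    h = (h + 0.5) % 1.0               # colorsys hue is 0-1, so +0.5 is +180 degrees
    return tuple(round(c * 255) for c in colorsys.hls_to_rgb(h, l, s))

print(opposite_rgb(255, 0, 0))   # red -> cyan, (0, 255, 255)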
Modern browsers support HSL natively, so maybe some of this complexity can be avoided and you would never need to muck with RGB values in the first place. http://caniuse.com/css3-colors
The color wheel is based on the HSV color space, where the hue coordinate represents your angle on the color wheel. You need to convert RGB colors into HSV, perform your rotation on the hue coordinate, then convert back to RGB.
I have two color values in HSI (Hue, Saturation and Intensity) and I want a number that represents the visual difference between the two colors. Hue is a number between 0 and 360 inclusive. Saturation is 0 to 1 and Intensity is 0 to 1.
Let's consider, for example, red and blue at a saturation of 100% and an intensity of 100%.
At this website there is a way to display the color by entering the following text.
red is:
hsv 0, 100%, 100%
blue is:
hsv 240, 100%, 100%
Clearly these are two very different colors, so a simple way to calculate the difference between them is to use the hue component and take the difference in hue, which would be 120 (360 - 240, since hue wraps around and 360 is the same as 0).
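That wrap-around hue difference can be written directly, taking the shorter way around the circle:

def hue_distance(h1, h2):
    # Shortest angular distance between two hues, in degrees (0-180).
    d = abs(h1 - h2) % 360
    return min(d, 360 - d)

print(hue_distance(0, 240))   # -> 120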
The problem arises when the saturation or intensity is very low or very high; consider a very dark red and blue.
dark red is:
hsv 0, 100%, 20%
dark blue is:
hsv 240, 100%, 20%
Obviously the visual difference between these two colors is smaller than between the bright red and blue, as a human would say if asked to compare them. What I mean is: ask a friend "Which pair of colors is most different?" and they will likely pick the top, bright red/blue pair.
I am trying to calculate the difference between two colors as a human would perceive it. If a human looked at two colors a and b, and then at two colors c and d, they could tell which pair is more different. If the colors are bright (but not too bright) then the difference is mostly hue based. If the colors are too bright (such as white), too dark (such as black) or too grey, then the differences are smaller.
It should be possible to have a function diff where x=diff(a,b) and y=diff(c,d) yields x and y, and I can use x and y to compare the differences to find the most different color or least different color.
The WCAG 2.0 and 1.0 guidelines both make reference to different equations on the perception of color difference:
1. contrast ratio (http://www.w3.org/TR/2008/REC-WCAG20-20081211/Overview.html#contrast-ratiodef)
2. brightness difference (http://www.w3.org/TR/AERT#color-contrast)
3. color difference (also at http://www.w3.org/TR/AERT#color-contrast)
I tried the Delta-E method (http://colormine.org/delta-e-calculator/) but it is a quasimetric, so the difference measurement may change depending on the order in which you pass the two colors. If in your example you expect diff(a,b) to always equal diff(b,a), then this is not what you want (there may be Delta-E algorithms under this name that aren't quasimetric, but I haven't looked into it past that site).
I think that the color difference metric is the closest to matching my expectations of a color difference measurement. For your example it will yield diff(a,b) > diff(c,d).
You can test it out for yourself using the tool at this website: http://www.dasplankton.de/ContrastA/
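For reference, the two WCAG 1.0 (AERT) formulas mentioned above are simple enough to sketch directly in Python; that document suggests two colours are easy to tell apart when the brightness difference exceeds 125 and the colour difference exceeds 500 (8-bit RGB inputs).

def brightness(rgb):
    # W3C AERT perceived brightness of an 8-bit RGB colour.
    r, g, b = rgb
    return (299 * r + 587 * g + 114 * b) / 1000

def brightness_difference(c1, c2):
    return abs(brightness(c1) - brightness(c2))

def color_difference(c1, c2):
    # W3C AERT colour difference; symmetric, so diff(a, b) == diff(b, a).
    return sum(max(a, b) - min(a, b) for a, b in zip(c1, c2))

red, blue = (255, 0, 0), (0, 0, 255)
dark_red, dark_blue = (51, 0, 0), (0, 0, 51)   # roughly hsv(0,100%,20%) and hsv(240,100%,20%)
print(color_difference(red, blue), color_difference(dark_red, dark_blue))   # 510 vs 102

This matches the intuition in the question: the bright pair comes out as much more different than the dark pair.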
The general answer seems to be what David van Driessche said: use Delta E. I found some Java code here: https://github.com/kennyliou/GAI
This is an answer to the question, though it may not be the best one.
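If you go the Delta E route and need something symmetric, the simplest variant (CIE76) is just the Euclidean distance in L*a*b*, so diff(a, b) always equals diff(b, a); later variants such as CIE94 and CMC treat one colour as the reference and are the quasimetrics mentioned above. I haven't checked which variant the linked Java code uses. A minimal sketch, assuming you already have L*a*b* values:

import math

def delta_e_cie76(lab1, lab2):
    # CIE76 colour difference: Euclidean distance in L*a*b*, symmetric by construction.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Approximate L*a*b* values for sRGB red and blue:
print(delta_e_cie76((53.2, 80.1, 67.2), (32.3, 79.2, -107.9)))   # roughly 176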
I'm writing shadow removal for my pony detector. After I've converted a PNG image from sRGB to CIE XYZ, I remove the luminance as per the instructions:
When I try to convert the image back to sRGB for display, I get RGB values that fall outside the sRGB gamut (I get values greater than 255). Is this normal, or should I keep looking for bugs? Note: conversion to XYZ and back without modification produces no glitches.
Illustration (top left: original, bottom left: byte value wraparound for red and blue hues):
For completeness: top right: color ratios, bottom right: convert to HSV and equalize value.
The final transformation does not remove the luminance; it creates two new values, x and y, that together define the chromaticity, while Y still contains the luminance. This is the key paragraph in your instructions link (just before the formulas you linked):
The CIE XYZ color space was deliberately designed so that the Y parameter was a measure of the brightness or luminance of a color. The chromaticity of a color was then specified by the two derived parameters x and y, two of the three normalized values which are functions of all three tristimulus values X, Y, and Z.
What this means is that if you have an image of a surface that has a single colour, but a part of the surface is in the shadow, then in the xyY space the x and y values should be the same (or very similar) for all pixels on the surface whether they are in the shadow or not.
The xyz values you get from the final transformation cannot be translated directly back to RGB as if they were XYZ values (note capitalization). So to answer your actual question: If you are using the xyz values as if they are XYZ values then there are no bugs in your code. Translation to RGB from that should not work using the formulas you linked.
Now if you want to actually remove the shadows from the entire image what you do is:
Scale your RGB values to floating point values in the [0, 1] range by dividing each value by 255 (assuming 24-bit RGB). The conversion to floating point helps accuracy a lot!
Convert the image to xyY
Replace all Y values with one value, say 0.5, but experiment.
Reverse the transformation: from xyY back to XYZ, then from XYZ back to RGB (see the sketch below for the formulas).
Then rescale your RGB values on the [0..255] interval.
This should give you a very boring but shadowless version of your original image. Of course if your goal is detecting single colour regions you could also just do this on the xy values in the xyY image and use the regions you detect there on the original.
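For what it's worth, here is a compact Python sketch of that whole round trip for a single pixel, assuming sRGB with a D65 white point and the usual sRGB matrices; the helper names are made up, and an image version would simply apply this per pixel.

def srgb_to_linear(c):
    # Undo the sRGB transfer curve; input and output in [0, 1].
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Re-apply the sRGB transfer curve, clamping out-of-gamut values to [0, 1] first.
    c = min(max(c, 0.0), 1.0)
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def flatten_luminance(rgb255, new_Y=0.5):
    # sRGB -> XYZ -> xyY, replace Y with new_Y, then back to 8-bit sRGB.
    r, g, b = (srgb_to_linear(v / 255.0) for v in rgb255)
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b      # linear sRGB -> XYZ (D65)
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    total = X + Y + Z
    if total == 0:                                # pure black has no chromaticity; use the white point
        x, y = 0.3127, 0.3290
    else:
        x, y = X / total, Y / total
    X2 = (x / y) * new_Y                          # xyY -> XYZ with the luminance replaced
    Z2 = ((1 - x - y) / y) * new_Y
    r2 = 3.2406 * X2 - 1.5372 * new_Y - 0.4986 * Z2   # XYZ -> linear sRGB
    g2 = -0.9689 * X2 + 1.8758 * new_Y + 0.0415 * Z2
    b2 = 0.0557 * X2 - 0.2040 * new_Y + 1.0570 * Z2
    return tuple(round(linear_to_srgb(v) * 255) for v in (r2, g2, b2))

# A colour and a darker, 'shadowed' version of it come out very similar:
print(flatten_luminance((120, 100, 80)), flatten_luminance((60, 50, 40)))

Note that after replacing Y, some chromaticities can still land outside the sRGB gamut, which is exactly the out-of-range effect described in the question; the clamp in linear_to_srgb just hides it for display.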