QColor HSL/HSV representation is wrong? - PyQt

I'm trying to change the color space of a given image using PyQt, and I can't understand how QColor works.
Speaking about HSV, we have 3 channels: H from 0 to 359, S from 0 to 100, V from 0 to 100. But the documentation says:
The value of s, v, and a must all be in the range 0-255; the value of h must be in the range 0-359.
How can the S and V values be in the range 0-255? The same question applies to HSL, where S and L should be in the range 0-100:
The value of s, l, and a must all be in the range 0-255; the value of h must be in the range 0-359.
And one more question: should an image converted from RGB to HSL / RGB to HSV look the same and have the same colors?

Speaking about HSV, we have 3 channels: H from 0 to 359, S from 0 to 100, V from 0 to 100.
That's just a common convention, but it's not part of the HSV color space definition, nor of its "parent" HSL, from which it originated.
Those values are always intended as a continuous range between a minimum and a maximum, not a set of discrete steps.
First of all, both are alternative representations of the RGB color model.[1]
Then, colors are not discrete; our "digital usage" forces us to make them so, and their value range is completely arbitrary.
The commonly used RGB model is based on 8 bits for each primary color (providing a 256-value range for each of them, from 0 to 255). While that's normally fine for most usage, it's actually limited, especially in video or animation: in some cases (notably, with gradients), even a value change of 1 in a component can be clearly seen.
Digital color model representations commonly use discrete integer values with limited ranges for performance reasons, and that also applies to the values you're referring to. The range depends on the implementation.
For instance, the CSS rgb() notation accepts values both in 8-bit notation and as percentages. Those values are almost never consistent, and for obvious reasons: while the theoretical range is 256 values, a percentage always refers to the maximum (255), meaning that 50% (or 0.5) is actually 127.5.
In fact, rgb(50%, 50%, 50%) normally results in #808080, which is rgb(128, 128, 128) (since 127.5 is rounded), meaning that rgb(50%, 50%, 50%) and rgb(128, 128, 128) are not the same, conceptually speaking.[2]
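To make that concrete, here is a minimal Python sketch (my own illustration, not part of any library) of the percentage-to-8-bit conversion and the rounding step where the mismatch appears:

def pct_to_8bit(pct):
    # A percentage refers to the maximum (255), not to the 256-value range,
    # so 50% lands exactly between two representable values (127.5).
    exact = pct / 100 * 255
    return exact, round(exact)

print(pct_to_8bit(50))   # (127.5, 128) -> rgb(50%, 50%, 50%) becomes #808080
print(pct_to_8bit(100))  # (255.0, 255)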
So, to the point, the value range only depends on the implementation. The only difference is that the hue component is wrapping because it's based on a circle, meaning that it always truly is a 0-360 range value: 50% (or 0.5) will always be 180 degrees, and that's because the maximum (360°, or 100%) equals the minimum (0).
Qt chose to use an 8-bit standard (0-255) for integer values, with the exception of the hue component, which uses the common 360-degree notation; the floating-point variants (fromHsvF(), getHsvF(), etc.) use 0.0-1.0 ranges instead.
If you want something more consistent with your habits, you can add it with a simple helper function; but remember that, as the documentation explains, "components are stored using 16-bit integers" (this is still valid even for Qt 6[3]), meaning that results might differ slightly.
from PyQt5.QtGui import QColor  # for PyQt6: from PyQt6.QtGui import QColor

def fromHsv100(*args, alpha=None):
    # Allow calling this both as QColor.fromHsv100(...) and as a method on
    # an existing instance, in which case the first argument is the QColor.
    if isinstance(args[0], QColor):
        args = args[1:]
    h, s, v = args[:3]
    if alpha is None:
        if len(args) == 4:
            alpha = args[3]
        else:
            alpha = 100
    # Only the hue wraps around (360° equals 0°); the other components are
    # clamped to the 0.0-1.0 range expected by fromHsvF().
    return QColor.fromHsvF(
        (h / 360) % 1,
        min(max(s * .01, 0), 1),
        min(max(v * .01, 0), 1),
        min(max(alpha * .01, 0), 1)
    )

def getHsv100(color):
    return (
        color.hue(),
        round(color.saturationF() * 100),
        round(color.valueF() * 100),
        round(color.alphaF() * 100)
    )

QColor.fromHsv100 = fromHsv100
QColor.getHsv100 = getHsv100

# usage:
color = QColor.fromHsv100(120, 55, 89)
print(color.getHsv100())
Finally, remember that, due to the nature of hue-based color models, you can create different colors that are always shown as "black" if their value (for HSV) or lightness (for HSL) component is 0, while they can have different hue and saturation values:
>>> print(QColor.fromHsv(60, 0, 0).name())
#000000
>>> print(QColor.fromHsv(240, 50, 0).name())
#000000
About your last question: since HSL and HSV are just alternative representations of the RGB color model, an image created with either of them will theoretically look the same, as long as it uses the same color space and the resulting integer values of the colors are compatible and rounded in the same way. But since those values are always rounded based on their ranges, and those ranges don't map onto each other proportionally, that might not always happen.
For instance:
>>> hue = 290
>>> rgb = QColor.fromHsv(hue, 150, 150).getRgb()
>>> print(rgb)
(135, 62, 150, 255)
>>> newHue = QColor.fromRgb(*rgb).hue()
>>> print(hue == newHue, hue, newHue)
False 290 289
This means that if you create or edit images using multiple conversions between different color spaces, you might end up with images that are not actually identical.
[1] See the related Wikipedia article
[2] Actual values of the resulting 24-bit RGB (which, as of late 2022, is the final color shown by a non-HDR browser/system) might depend on the browser and its rounding implementation; note that rounding is not always consistent: for instance, Python uses the round-half-to-even method (aka "banker's rounding") for round(), meaning that both 127.5 and 128.5 are rounded to 128.
[3] Even if most modern devices support wider color dynamic ranges, QColor is intended for basic, performant behavior, since it's used in a lot of basic classes that expect fast results, like displaying labels, buttons or texts of items in a model view; things for which such dynamic ranges are quite pointless.

Related

Transformed colors when painting semi-transparent in p5.js

A transformation seems to be applied when painting colors in p5.js with an alpha value lower than 255:
for (const color of [[1,2,3,255], [1,2,3,4], [10,11,12,13], [10,20,30,40], [50,100,200,40], [50,100,200,0], [50,100,200,1]]) {
  clear();
  background(color);
  loadPixels();
  print(pixels.slice(0, 4).join(','));
}
Input / Expected Output   Actual Output (Firefox)
1,2,3,255                 1,2,3,255 ✅
1,2,3,4                   0,0,0,4
10,11,12,13               0,0,0,13
10,20,30,40               6,19,25,40
50,100,200,40             51,102,204,40
50,100,200,0              0,0,0,0
50,100,200,1              0,0,255,1
The alpha value is preserved, but the RGB information is lost, especially on low alpha values.
This makes visualizations impossible where, for example, 2D shapes are first drawn and then the visibility in certain areas is animated by changing the alpha values.
Can these transformations be turned off or are they predictable in any way?
Update: The behavior is not specific to p5.js:
const ctx = new OffscreenCanvas(1, 1).getContext('2d');
for (const [r, g, b, a] of [[1,2,3,255], [1,2,3,4], [10,11,12,13], [10,20,30,40], [50,100,200,40], [50,100,200,0], [50,100,200,1]]) {
  ctx.clearRect(0, 0, 1, 1);
  ctx.fillStyle = `rgba(${r},${g},${b},${a/255})`;
  ctx.fillRect(0, 0, 1, 1);
  console.log(ctx.getImageData(0, 0, 1, 1).data.join(','));
}
I could be way off here... but it looks like, internally, the background method calls blendMode when _isErasing is true. By default this applies a linear interpolation of colours.
See https://github.com/processing/p5.js/blob/9cd186349cdb55c5faf28befff9c0d4a390e02ed/src/core/p5.Renderer2D.js#L45
See https://p5js.org/reference/#/p5/blendMode
BLEND - linear interpolation of colours: C = A*factor + B. This is the
default blending mode.
So, if you set the blend mode to REPLACE I think it should work.
REPLACE - the pixels entirely replace the others and don't utilize
alpha (transparency) values.
i.e.
blendMode(REPLACE);
for (const color of [[1,2,3,255], [1,2,3,4], [10,11,12,13], [10,20,30,40], [50,100,200,40], [50,100,200,0], [50,100,200,1]]) {
  clear();
  background(color);
  loadPixels();
  print(pixels.slice(0, 4).join(','));
}
Internally, the HTML Canvas stores colors in a different way that cannot preserve RGB values when fully transparent. When writing and reading pixel data, conversions take place that are lossy due to the 8-bit representation.
Take for example this row from the test above:
Input / Expected Output   Actual Output
10,20,30,40               6,19,25,40

IN (conventional alpha):

                   R    G    B    A
  values           10   20   30   40 (= 15.6%)

Interpretation: When painting, add 15.6% of (10,20,30) to the 15.6% darkened (r,g,b) background.

Canvas-internal (premultiplied alpha):

                   R           G           B           A
  calculation      10 * 0.156  20 * 0.156  30 * 0.156  40 (= 15.6%)
  values           1.56        3.12        4.7         40
  values (8-bit)   1           3           4           40

Interpretation: When painting, add (1,3,4) to the 15.6% darkened (r,g,b) background.

Premultiplied alpha allows faster painting and supports additive colors, that is, adding color values without darkening the background.

OUT (conventional alpha):

                   R           G           B           A
  calculation      1 / 0.156   3 / 0.156   4 / 0.156   40
  values           6.41        19.23       25.64       40
  values (8-bit)   6           19          25          40
So the results are predictable, but due to the different internal representation, the transformation cannot be turned off.
The HTML specification explicitly mentions this in section 4.12.5.1.15 Pixel manipulation:
Due to the lossy nature of converting between color spaces and converting to and from premultiplied alpha color values, pixels that have just been set using putImageData(), and are not completely opaque, might be returned to an equivalent getImageData() as different values.
see also 4.12.5.7 Premultiplied alpha and the 2D rendering context
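The round-trip arithmetic above is easy to reproduce; here is a minimal Python sketch (illustrative only, not what any specific browser executes, and browsers may quantize differently) of the premultiply/unpremultiply conversion for an 8-bit RGBA pixel:

def premultiply_roundtrip(r, g, b, a):
    # Store: premultiply each channel by alpha and quantize to 8 bits
    # (truncation here; actual rounding is implementation-defined).
    factor = a / 255
    stored = [int(c * factor) for c in (r, g, b)]
    if a == 0:
        return (0, 0, 0, 0)  # RGB is unrecoverable at alpha 0
    # Read back: unpremultiply and quantize again.
    recovered = [min(255, int(c / factor)) for c in stored]
    return (*recovered, a)

print(premultiply_roundtrip(10, 20, 30, 40))  # -> (6, 19, 25, 40), matching the table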

svg feGaussianBlur: correlation between stdDeviation and size

When I blur an object in Inkscape by, let's say, 10%, it gets a filter with an feGaussianBlur with a stdDeviation of 10% * size / 2.
However, the filter has a size of 124% (it is actually that big; Inkscape doesn't add a bit just to be on the safe side).
Where does this number come from? My guess would be 100% + 2.4 * (2*stdDeviation/size), but then where does this 2.4 come from?
From the SVG 1.1 spec:
This filter primitive performs a Gaussian blur on the input image.
The Gaussian blur kernel is an approximation of the normalized convolution:
G(x,y) = H(x) * I(y)
where
H(x) = exp(-x^2 / (2*s^2)) / sqrt(2*pi*s^2)
and
I(y) = exp(-y^2 / (2*t^2)) / sqrt(2*pi*t^2)
with 's' being the standard deviation in the x direction and 't' being the standard deviation in the y direction, as specified by ‘stdDeviation’.
The value of ‘stdDeviation’ can be either one or two numbers. If two numbers are provided, the first number represents a standard deviation value along the x-axis of the current coordinate system and the second value represents a standard deviation in Y. If one number is provided, then that value is used for both X and Y.
Even if only one value is provided for ‘stdDeviation’, this can be implemented as a separable convolution.
For larger values of 's' (s >= 2.0), an approximation can be used: Three successive box-blurs build a piece-wise quadratic convolution kernel, which approximates the Gaussian kernel to within roughly 3%.
let d = floor(s * 3*sqrt(2*pi)/4 + 0.5)
... if d is odd, use three box-blurs of size 'd', centered on the output pixel.
... if d is even, two box-blurs of size 'd' (the first one centered on the pixel boundary between the output pixel and the one to the left, the second one centered on the pixel boundary between the output pixel and the one to the right) and one box blur of size 'd+1' centered on the output pixel.
Note: the approximation formula also applies correspondingly to 't'.
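To get a feel for the numbers, here is a small Python sketch of the spec's box-blur size formula quoted above (relating the result to Inkscape's 124% filter region is my own reading, not something the spec states):

import math

def box_blur_size(s):
    # d from the SVG 1.1 spec: three successive box-blurs of (roughly) this
    # size approximate a Gaussian with standard deviation s.
    return math.floor(s * 3 * math.sqrt(2 * math.pi) / 4 + 0.5)

for s in (2.0, 5.0, 10.0):
    d = box_blur_size(s)
    # Three box-blurs of size d spread a pixel by about 3*d/2 on each side,
    # which is the order of margin a filter region has to accommodate.
    print(s, d, 3 * d / 2)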

Given the RGB components of a color, how can I decide if it is perceived as gray by humans?

One simple way is to say that when the RGB components are equal, they form a gray color.
However, this is not the whole story, because if they only have a slight difference, they will still look gray.
Assuming the viewer has a healthy vision of color, how can I decide if the given values would be perceived as gray (presumably with an adjustable threshold level for "grayness")?
A relatively straightforward method would be to convert the RGB value to the HSV color space and apply a threshold to the saturation component, e.g. "if saturation < 0.05 then 'almost grey', else not grey".
Saturation is actually the "grayness/colorfulness" by definition.
This method is much more accurate than using differences between the R, G and B channels (since the human eye perceives saturation differently in light and dark colors). On the other hand, converting RGB to HSV is computationally more expensive. It is up to you to decide what is of more value: a precise answer (grey/not grey) or performance.
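As a minimal sketch of this approach in Python (colorsys is in the standard library; the 0.05 threshold is just the example value from above):

import colorsys

def is_almost_gray(r, g, b, threshold=0.05):
    # colorsys works on 0.0-1.0 components; saturation is the second value.
    _, s, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return s < threshold

print(is_almost_gray(128, 128, 128))  # True: fully desaturated
print(is_almost_gray(160, 179, 151))  # False with the default threshold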
If you need an even more precise method, you may use L*a*b* color space and compute chroma as sqrt(a*a + b*b) (see here), and then apply thresholding to this value. However, this would be even more computationally intensive.
You can also combine multiple methods (see the sketch after this list):
1. Calculate simple differences between the R, G, B components. If the color can be identified as definitely desaturated (e.g. max(abs(R-G), abs(R-B), abs(G-B)) <= 5) or definitely saturated (e.g. max(abs(R-G), abs(R-B), abs(G-B)) > 100), then stop.
2. Otherwise, convert to L*a*b*, compute chroma as sqrt(a*a + b*b) and use thresholding on this value.
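A sketch of that combined approach, with a hand-rolled sRGB to L*a*b* conversion (standard D65 formulas; the chroma threshold of 10 is an arbitrary placeholder, while the 5/100 cutoffs are the example values from the list):

import math

def srgb_to_lab(r, g, b):
    # Linearize 8-bit sRGB components.
    def lin(c):
        c /= 255
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear sRGB -> XYZ (D65 white point).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # XYZ -> L*a*b*.
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def is_gray(r, g, b, chroma_threshold=10):
    diff = max(abs(r - g), abs(r - b), abs(g - b))
    if diff <= 5:    # definitely desaturated
        return True
    if diff > 100:   # definitely saturated
        return False
    _, a, b2 = srgb_to_lab(r, g, b)
    return math.sqrt(a * a + b2 * b2) < chroma_threshold

print(is_gray(160, 179, 151))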
// A simpler heuristic: treat the color as gray when the channel
// differences stay within a tolerance.
const r = 160, g = 179, b = 151;
const tolerance = 20;
if (Math.abs(r - g) < tolerance && Math.abs(r - b) < tolerance) {
  // then perceived as gray
  // (note: |g - b| is only bounded by 2 * tolerance with these two checks)
}

How to get colors with the same perceived brightness?

Is there a tool / program / color system that enables you to get colors of the same luminance (perceived brightness)?
Say I pick a color (determine RGB values) and the program gives me all the colors around the color wheel with the same luminance but different hues?
I haven't seen such a tool yet; all I came across were three different algorithms for color luminance:
(0.2126*R) + (0.7152*G) + (0.0722*B)
(0.299*R + 0.587*G + 0.114*B)
sqrt( 0.241*R^2 + 0.691*G^2 + 0.068*B^2 )
Just to be clear, I'm talking about color luminance / perceived brightness or whatever you want to call it: the attribute that accounts for the fact that we perceive, for example, a red hue as brighter than a blue one. (So 255,0,0 has a higher luminance value than 0,0,255.)
P.S.: Does anyone know which algorithm is used to determine color luminance on this website: http://www.workwithcolor.com/hsl-color-picker-01.htm
It looks like they used none of the posted algorithms.
In the HSL color picker you linked to, it looks like they are using the 3rd Lightness equation given here, and then expressing it as a percentage. So the equation is:
L = (100 * 0.5 * (max(r,g,b) + min(r,g,b))) / 255
Edit: Actually, I just realized that they show both an L value and a Lum value on that color picker. The equation above applies to the L value, but I don't know how they arrive at the Lum value. It doesn't seem to follow any of the standard equations.
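For reference, here's a small Python sketch evaluating the three luminance formulas from the question plus the HSL lightness equation above (the function names are mine); note how the HSL L value comes out as 50% for both pure red and pure blue, so it cannot capture the perceived-brightness difference the question is about:

import math

def luma709(r, g, b):       # ITU-R BT.709 coefficients
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def luma601(r, g, b):       # ITU-R BT.601 coefficients
    return 0.299 * r + 0.587 * g + 0.114 * b

def perceived_sqrt(r, g, b):  # the sqrt-based formula from the question
    return math.sqrt(0.241 * r**2 + 0.691 * g**2 + 0.068 * b**2)

def hsl_lightness(r, g, b):   # the picker's L value, as a percentage
    return 100 * 0.5 * (max(r, g, b) + min(r, g, b)) / 255

for color in ((255, 0, 0), (0, 0, 255)):
    print(color, [round(f(*color), 1)
                  for f in (luma709, luma601, perceived_sqrt, hsl_lightness)])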

HLSL tex2d sampler seemingly using inconsistent rounding; why?

I have code that needs to render regions of my object differently depending on their location. I am trying to use a colour map to define these regions.
The problem is when I sample from my colour map, I get collisions. Ie, two regions with different colours in the colourmap get the same value returned from the sampler.
I've tried various formats for my colour map. I set the colours for each region to be "5" apart in each case:
Indexed colour
RGB, RGBA: region 1 will have RGB 5%,5%,5%. region 2 will have RGB 10%,10%,10% and so on.
HSV Greyscale: region 1 will have HSV 0,0,5%. region 2 will have HSV 0,0,10% and so on.
(Values selected in The Gimp)
The tex2D sampler returns a value [0..1].
[ I then intend to derive an int array index from region. Code to do with that is unrelated, so has been removed from the question ]
float region = tex2D(gColourmapSampler,In.UV).x;
Sampling the "5%" colour gave a "region" of 0.05098 in hlsl.
From this I assume the 5% represents 5/100*255, or 12.75, which is rounded to 13 when stored in the texture. (Reasoning: 0.05098 * 255 ~= 13)
By this logic, the 50% should be stored as 127.5.
Sampled, I get 0.50196 which implies it was stored as 128.
The 70% should be stored as 178.5.
Sampled, I get 0.698039, which implies it was stored as 178.
What rounding is going on here?
(127.5 becomes 128, 178.5 becomes 178 ?!)
Edit: OK, this appears to be "banker's rounding":
http://en.wikipedia.org/wiki/Bankers_rounding#Round_half_to_even
I have no idea why it's being used, but it solves my problem. Apparently, it's a GIMP issue.
I am using Shader Model 2 and FX Composer. This is my sampler declaration;
// Colour map
texture gColourmapTexture <
    string ResourceName = "Globe_Colourmap_Regions_Greyscale.png";
    string ResourceType = "2D";
>;
sampler2D gColourmapSampler : register(s1) = sampler_state {
    Texture = <gColourmapTexture>;
#if DIRECT3D_VERSION >= 0xa00
    Filter = MIN_MAG_MIP_LINEAR;
#else /* DIRECT3D_VERSION < 0xa00 */
    MinFilter = Linear;
    MipFilter = Linear;
    MagFilter = Linear;
#endif /* DIRECT3D_VERSION */
    AddressU = Clamp;
    AddressV = Clamp;
};
I never used HLSL, but I did use GLSL a while back (and I must admit it's terribly far back in my head).
One issue I had with textures is that 0 is not the first pixel and 1 is not the second one: 0 is the edge of the texture and 1 is the right edge of the first pixel. The values get interpolated automatically, and that can cause serious trouble when you need precision, like when applying a lookup table rather than a normal texture. You need to aim for the middle of the pixel, so ask for [0.5, 0.5], [1.5, 0.5] rather than [0, 0], [1, 0] and so on.
At least, that's the way it was in GLSL.
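In normalized UV coordinates the same idea reads as follows (a small Python sketch of the usual texel-center formula; the names are mine):

def texel_center_uv(x, y, width, height):
    # Sample the middle of texel (x, y), not its corner, so linear
    # filtering does not blend in the neighbouring texels.
    return ((x + 0.5) / width, (y + 0.5) / height)

print(texel_center_uv(0, 0, 256, 256))  # (0.001953125, 0.001953125)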
Beware: region in levels[region] is rounded down. When you see 5% in your image editor, the actual value in the texture's 8-bit representation is 5/100*255 = 12.75, which may be stored as either 12 or 13. If it is 12, the rounding down will hit you. If you want rounding to nearest, change this to levels[region + 0.5].
Another similar thing (already mentioned by Louis-Philippe) which might hit you is the texture coordinate rounding rules. You always need to hit a spot inside the texel so that you are not in between two texels, otherwise the result is ill-defined (you may get either of the two at random) and some of your source texels may disappear while others duplicate. Those rules are different for bilinear and point sampling; you may need to add half of the texel size when sampling to compensate.
GIMP uses banker's rounding. Apparently.
This threw off my code to derive region indices.
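Python's round() happens to use the same round-half-to-even rule, so the sampled values from the question can be reproduced in a few lines (an illustrative check, not what GIMP literally executes):

for pct in (5, 50, 70):
    exact = pct / 100 * 255
    # round-half-to-even: 127.5 -> 128 (even), but 178.5 -> 178 (even)
    stored = round(exact)
    print(f"{pct}%: {exact} -> stored as {stored} -> sampled as {stored / 255:.6f}")
# 5%:  12.75  -> 13  -> 0.050980
# 50%: 127.5  -> 128 -> 0.501961
# 70%: 178.5  -> 178 -> 0.698039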
