RGBA Alpha Value - colors

So with RGBA, you set the alpha with the last attribute. However, I have seen people set the alpha to 255 to get 100% opacity, whereas I always thought the correct way was to set it to 1.
What I'm trying to say is this:
rgba(0, 0, 0, 120) // What they did
rgba(0, 0, 0, 0.47) // What I would do.
From what I can see, they do the same thing. Is there a "correct" way of doing it?

The type usually used to represent RGBA information is a 4-byte integer, e.g. 0x00000000, whose four bytes represent red, green, blue, and alpha (or red, blue, green, and alpha) respectively.
So the "correct" way is to set the last byte to 255 if you want the alpha to be fully opaque (i.e. 1).
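As a rough illustration (a minimal Python sketch, not tied to any particular library; the pack_rgba helpers are hypothetical), packing four 8-bit channels into one 32-bit integer looks like this, and an alpha of 255 in the packed form corresponds to 1.0 in the normalized float form:
def pack_rgba(r, g, b, a):
    # Each channel is an 8-bit value (0-255); alpha 255 means fully opaque.
    return (r << 24) | (g << 16) | (b << 8) | a
def pack_rgba_float_alpha(r, g, b, alpha):
    # Same packing, but the alpha is given as a float in 0.0-1.0 and scaled to 0-255.
    return pack_rgba(r, g, b, round(alpha * 255))
assert pack_rgba(0, 0, 0, 255) == pack_rgba_float_alpha(0, 0, 0, 1.0)  # both are 0x000000FF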
However, some libraries or languages offer two interfaces for setting that last byte to 255:
rgba(int,int,int,int)
and
rgba(int,int,int,float)
So it depends: check the documentation to find out which interface you are dealing with.
Hope this helps!

Related

QColor hsl/hsv representation is wrong?

I'm trying to change the color space of a given image using PyQt, and I can't understand how QColor works.
Speaking about HSV, we have 3 channels: H from 0 to 359, S from 0 to 100, and V from 0 to 100. But the documentation says:
The value of s, v, and a must all be in the range 0-255; the value of h must be in the range 0-359.
How can the S and V values be in the range 0-255? The same question applies to HSL, where S and L should be in the range 0-100:
The value of s, l, and a must all be in the range 0-255; the value of h must be in the range 0-359.
And one more question: should an image converted from RGB to HSL / RGB to HSV look the same and have the same colors?
Speaking about HSV, we have 3 channels: H from 0 to 359, S from 0 to 100, and V from 0 to 100.
That's just a common convention, but it's not part of the HSV color space definition, nor of its "parent" HSL from which it originated.
Those values are always intended as a range between a minimum and a maximum, not as a discrete set of values.
First of all, they both are alternative representations of the RGB color model.[1]
Then, colors are not discrete; our "digital usage" forces us to make them so, and the chosen value range is completely arbitrary.
The commonly used RGB model is based on 8 bits for each primary color (providing a 256 value range for each of them, from 0 to 255), but even if it's normally fine for most usage, it's actually limited, especially when shown in a video or animation: in some cases (notably, with gradients), even a value change of 1 in a component can be clearly seen.
Color model representations in digital world commonly use discrete integer values of color spaces using limited ranges for performance reasons, and that's also valid for the values you're referring to. The range depends on the implementation.
For instance, the CSS rgb() notation accepts both 8-bit values and percentages. Those values are almost never consistent, and for obvious reasons: while the theoretical range is 256 values, a percentage always refers to the maximum (255), meaning that 50% (or 0.5) is actually 127.5.
In fact, rgb(50%, 50%, 50%) normally results in #808080, which is rgb(128, 128, 128) (since 127.5 is rounded), meaning that rgb(50%, 50%, 50%) and rgb(128, 128, 128) are not the same, conceptually speaking.[2]
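To see the mismatch numerically, here is a quick Python sketch (note that Python's round() uses the bankers' rounding mentioned in footnote [2]):
half = 0.5 * 255
print(half)                        # 127.5
print(round(half))                 # 128, i.e. rgb(128, 128, 128) or #808080
print(round(127.5), round(128.5))  # 128 128: halves are rounded to the nearest even integer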
So, to the point, the value range only depends on the implementation. The only difference is that the hue component wraps around because it's based on a circle, meaning that it always truly is a 0-360 range value: 50% (or 0.5) will always be 180 degrees, because the maximum (360°, or 100%) equals the minimum (0).
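The wrap-around is easy to see with a couple of lines of Python:
for fraction in (0.0, 0.5, 1.0):
    print(fraction, (fraction * 360) % 360)  # 0.0 -> 0, 0.5 -> 180, 1.0 -> 0 (360 wraps back to the minimum)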
Qt chose to use an 8-bit range (0-255) for the integer-based component values that are conventionally expressed as 0-255 or as percentages, with the exception of the hue component, which uses the common 360-degree notation (the float-based variants such as QColor.fromHsvF() use 0.0-1.0 instead).
If you want something more consistent with your habits, then you can add it with a simple helper function, but remember that, as the documentation explains, "components are stored using 16-bit integers" (note that this is still valid even for Qt6[3]), meaning that results might slightly differ.
from PyQt5.QtGui import QColor  # assuming PyQt5; the PyQt6 import path is analogous

def fromHsv100(*args, alpha=None):
    # Allow use both as a plain function and as a (monkey-patched) QColor
    # method, in which case the first argument is the QColor instance.
    if isinstance(args[0], QColor):
        args = args[1:]
    h, s, v = args[:3]
    if alpha is None:
        if len(args) == 4:
            alpha = args[3]
        else:
            alpha = 100
    return QColor.fromHsvF(
        (h / 360) % 1,         # hue wraps around the circle
        min(s * .01, 1.),      # clamp so that 100 maps to 1.0 rather than wrapping to 0.0
        min(v * .01, 1.),
        min(alpha * .01, 1.)
    )

def getHsv100(color):
    return (
        color.hue(),
        round(color.saturationF() * 100),
        round(color.valueF() * 100),
        round(color.alphaF() * 100)
    )
QColor.fromHsv100 = fromHsv100
QColor.getHsv100 = getHsv100
# usage:
color = QColor.fromHsv100(120, 55, 89)
print(color.getHsv100())
Finally, remember that, due to the nature of hue-based color models, you can create different colors that are always shown as "black" if their value (for HSV) or lightness (for HSL) component is 0, while they can have different hue and saturation values:
>>> print(QColor.fromHsv(60, 0, 0).name())
#000000
>>> print(QColor.fromHsv(240, 50, 0).name())
#000000
About your last question, since HSL and HSV are just alternative representations of the RGB color model, an image created with any of the above will theoretically look the same as long as it uses the same color space, and as long as the resulting integer values of the colors are compatible and rounded in the same way. But, since those values are always rounded based on their ranges, and those ranges are proportional to the actual model (which is not consistent for obvious reasons), that might not always happen.
For instance:
>>> hue = 290
>>> rgb = QColor.fromHsv(hue, 150, 150).getRgb()
>>> print(rgb)
(135, 62, 150, 255)
>>> newHue = QColor.fromRgb(*rgb).hue()
>>> print(hue == newHue, hue, newHue)
False 290 289
This means that if you create or edit images using multiple conversions between different color spaces, you might end up with images that are not actually identical.
[1] See the related Wikipedia article
[2] Actual values of the resulting 24-bit RGB (which, as of late 2022, is the final color shown by a non-HDR browser/system) might depend on the browser and its rounding implementation; note that rounding is not always consistent, for instance, Python uses the Round half to even (aka, the "bankers' rounding") method for round(), meaning that both 127.5 and 128.5 are rounded to 128.
[3] Even if most modern devices support wider color dynamic ranges, QColor is intended for basic, performant behavior, since it's used in a lot of basic classes that expect fast results, like displaying labels, buttons or texts of items in a model view; things for which such dynamic ranges are quite pointless.

What is a "valid IM color"?

I found the following documentation in a bash script written for use with some software named, "imagemagick"
# USAGE: multicrop2 [-c coords] [-b bcolor] [more options, blah blah,...]
# [... snip ...]
# -b bcolor background color to use instead of option -c;
# any valid IM color; default is to use option -c
I cannot fathom what the code author considered to be a "valid IM color." I am guessing that "IM" is simply an abbreviation of "image," but perhaps "instant messaging," or something else was meant. What do you think?
A hex code for RGB would have blue in the lower byte, green in the middle byte, and red in the upper byte, but I am not sure whether they use standard RGB encoding for colors or not.
Multicrop2 is my script. A valid ImageMagick color is any color scheme that ImageMagick recognizes, such as rgb, hex, cmyk, hsl, and even color names, in the format specified in its documentation and especially at http://www.imagemagick.org/script/color.php. There are too many to list in my script. Often I refer to this link, but apparently in this case I did not. But most ImageMagick users are aware of its typical color schemes, and they mostly know that IM is an abbreviation for ImageMagick. Colors follow the CSS style guide for the most part. Colors with #, spaces, or parentheses need to be quoted, at least on Unix-like systems; color names do not need quoting. Apologies to newbies to my scripts and ImageMagick.
Examples:
rgb(255, 0, 0) range 0 - 255
rgba(255, 0, 0, 1.0) the same, with an explicit alpha value
rgb(100%, 0%, 0%) range 0.0% - 100.0%
rgba(100%, 0%, 0%, 1.0) the same, with an explicit alpha value
#ff0000 #rrggbb
#ff0000ff #rrggbbaa
gray50 near mid gray
gray(127) near mid gray
gray(50%) mid gray
graya(50%, 0.5) semi-transparent mid gray
hsb(120, 100%, 100%) full green in hsb
hsba(120, 100%, 100%, 1.0) the same, with an alpha value of 1.0
hsb(120, 255, 255) full green in hsb
hsba(120, 255, 255, 1.0) the same, with an alpha value of 1.0
hsl(120, 100%, 50%) full green in hsl
hsla(120, 100%, 50%, 1.0) the same, with an alpha value of 1.0
hsl(120, 255, 127.5) full green in hsl
hsla(120, 255, 127.5, 1.0) the same, with an alpha value of 1.0
cielab(62.253188, 23.950124, 48.410653)
icc-color(cmyk, 0.11, 0.48, 0.83, 0.00) cmyk
icc-color(rgb, 1, 0, 0) linear rgb
icc-color(rgb, red) linear rgb
icc-color(lineargray, 0.5) linear gray
icc-color(srgb, 1, 0, 0) non-linear rgb
icc-color(srgb, red) non-linear rgb
icc-color(gray, 0.5) non-linear gray

How to use a color buffer in OpenGL ES 2

I am a bit confused about how to draw color using a color buffer. I found a similar question here and made my shader the same as shown in the post's accepted answer. I then used the code:
mColorHandle = GLES20.glGetAttribLocation(Shader, "vColor");
GLES20.glEnableVertexAttribArray(mColorHandle);
ByteBuffer cb = ByteBuffer.allocateDirect(color.length * BYTES_PER_FLOAT);
cb.order(ByteOrder.nativeOrder());
colorBuffer = cb.asFloatBuffer();
colorBuffer.put(color);
colorBuffer.position(0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, cbo);
GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0, colorBuffer.capacity(), colorBuffer);
GLES20.glVertexAttribPointer(mColorHandle, 4,
        GLES20.GL_FLOAT, false,
        0, 0);
in an attempt to draw the color.
The shape displayed the color I was trying to draw, but the color faded out across the shape.
If someone could tell me what's going wrong and how I could get the shape to be all the same color, I would appreciate it.
Thanks to Rabbid76 for helping me find the mistake.
Instead of 4 elements total in the color array, there need to be 16: one RGBA value for each vertex (4 elements of the array make up one RGBA value).
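As a rough sketch of the idea (shown in Python just for brevity; the actual array in the question is a Java float array): the shape here has 4 vertices and each vertex needs its own RGBA value, so a uniform color means repeating the same 4 floats once per vertex.
vertex_count = 4
rgba = [1.0, 0.0, 0.0, 1.0]   # one opaque red RGBA value
color = rgba * vertex_count   # 16 floats in total, the same color for every vertex
assert len(color) == vertex_count * 4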

How can I animate an SVG stroke color transition with velocity.js?

I would like to animate an SVG stroke color change from say, red to green.
Is this possible?
I have managed to do so with the "fill" property, but for some reason I cannot do it with stroke.
One solution would be something along the lines of:
.velocity({
    strokeRed: 0,
    strokeGreen: 255,
    strokeBlue: 0
});
My understanding is that linear 1s should be the default timing; perhaps adding a named easing will enable that:
.velocity({
    strokeRed: 0,
    strokeGreen: 255,
    strokeBlue: 0
}, "easeInSine");
Also, you can simply use "stroke", but note that stroke requires a hex value, unlike those listed above which can either be unitless or a percent.

Why doesn't the alpha pixel in html canvas blend in with the background color?

http://jsfiddle.net/jBgqW/
I've painted the background with fillRect and fillStyle set to rgb(255,0,0), but when I iterate through the pixels, set some random color, and set the alpha value of each pixel to 0, everything becomes white. I assumed that when a pixel is transparent it should blend with the previously painted background color, or does it always default to white?
I hope that it's just my wrong way of using the canvas.
Can anyone explain why the background isn't red in this case, and how do I use the alpha channel properly? I would like to know if this has something to do with alpha premultiplication.
When using globalAlpha, the pixel colors are calculated with the current rgba values and the new values.
However, in this case you're setting the values manually and therefore doing no calculations. You're just setting the rgba values yourself, which means that the alpha channel is not used for calculating but is just altered without further use. The previous color (red) is basically overwritten in a 'brute force' way - instead of rgba(255, 0, 0, 255), it's now just rgba(128, 53, 82, 0). The original red color has simply been thrown away.
As a result, an alpha of 0 represents complete transparency, so you see the colors of the parent element.
This can be confirmed if you change the body background color: http://jsfiddle.net/jBgqW/2/.
This is somewhat thread necromancy, but I've just faced this problem and have a solution to it, if not for the original poster then for people like me coming from google.
As noted, putImageData directly replaces pixels rather than alpha blends, but that also means it preserves your alpha data. You can then redraw that image with alpha blending using drawImage.
To give an example, let's say we have a canvas that is 200 by 100 pixels and a 100 by 100 imageData object.
// our canvas
var canvas = document.getElementById("mycanvas");
var ctx = canvas.getContext("2d");
// our imageData, created in whatever fashion, with alpha as appropriate...
var data = /* ... */
// let's make the right half of our canvas blue
ctx.fillStyle="blue";
ctx.rect(100, 0, 100, 100);
ctx.fill();
// now draw our image data to the left (white) half; pixels are replaced, not blended
ctx.putImageData(data, 0, 0);
// now the magic: draw the left half of the canvas onto the right half, this time with alpha blending
ctx.drawImage(canvas, 0, 0, 100, 100, 100, 0, 100, 100);
Voila. The right half of the image is now your image data blended with the blue background, rendered with hardware assistance.
