Convert grayscale value to RGB representation?

How can I convert a grayscale value (0-255) to an RGB value/representation?
It is for use in an SVG image, which doesn't seem to support grayscale, only RGB...
Note: this is not RGB -> grayscale, which is already answered in another question (e.g. Converting RGB to grayscale/intensity).

The quick and dirty approach is to repeat the grayscale intensity for each component of RGB. So, if you have grayscale 120, it translates to RGB (120, 120, 120).
This is quick and dirty because the effective luminance you get depends on the actual luminance of the R, G and B subpixels of the device that you're using.
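For instance, a minimal C sketch of this approach (the helper name and the SVG-style hex output are just for illustration):

#include <stdio.h>

/* Repeat the grayscale intensity in each RGB channel and
   format it as an SVG-style hex colour string. */
void grey_to_svg_rgb(unsigned char grey, char out[8])
{
    unsigned char r = grey, g = grey, b = grey; /* R = G = B = grey */
    snprintf(out, 8, "#%02x%02x%02x", r, g, b);
}

int main(void)
{
    char colour[8];
    grey_to_svg_rgb(120, colour);
    puts(colour); /* prints "#787878" */
    return 0;
}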

If you have the greyscale value in the range 0..255 and want to produce a new value in the form 0x00RRGGBB, then a quick way to do this is:
int rgb = grey * 0x00010101;  /* replicates the byte: e.g. grey 0x78 -> 0x00787878 */
or equivalent in your chosen language.

Conversion of a grayscale to RGB is simple. Simply use R = G = B = gray value. The basic idea is that color (as viewed on a monitor in terms of RGB) is an additive system.
http://en.wikipedia.org/wiki/Additive_color
Thus adding red to green yields yellow. Add some blue to that mix in equal amounts and you get a neutral color. Full-on [red, green, blue] = [255, 255, 255] yields white; [0, 0, 0] yields monitor black. Intermediate values with R = G = B will yield nominally neutral colors at the given level of gray.
A minor caveat: depending on how you view the color, it may not be perfectly neutral. This will depend on how your monitor (or printer) is calibrated. There are interesting depths of color science we could go into from this point; I'll stop here.

Grey-scale means that each pixel carries only an intensity. Set all channels (in RGB) equal to the grey value and you will have an RGB black-and-white image.

Wouldn't setting R, G, and B to the same value (the greyscale value) for each pixel get you the correct shade of grey?

You may also take a look at my solution, Faster assembly optimized way to convert RGB8 image to RGB32 image, where the gray channel is simply repeated in all the other channels.
The purpose was to find the fastest possible conversion using x86/SSE.
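For reference, a sketch of the plain scalar baseline that such an SSE routine would optimize (this is not the linked assembly itself):

#include <stddef.h>
#include <stdint.h>

/* Expand an 8-bit grey image to 32-bit 0xAARRGGBB pixels by
   repeating the grey byte in the R, G and B channels. */
void rgb8_to_rgb32(const uint8_t *src, uint32_t *dst, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint32_t g = src[i];
        dst[i] = 0xFF000000u | (g << 16) | (g << 8) | g;
    }
}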

Related

Difference between colors with a same rgb value in sRGB space and CIE RGB space

Could someone tell me why colors with the same RGB value (for example 127, 127, 127) look exactly the same in an image using the sRGB space and in one using the CIE RGB space? Since one is non-linear (with gamma correction) and the other is linear (without gamma correction), I think they should look somewhat different. But the images I've created look exactly the same (I used Photoshop to create the former; for the latter I tried Photoshop, OpenGL and OpenCV).
The difference shows up when you manipulate an image or a color (changing the brightness or the saturation of an image). This is most visible when lowering the saturation of yellow. Try it in Photoshop with RGB and with Lab mode, using the saturation slider in the Adjustment > Hue/Saturation menu. Do not switch to grayscale mode, because it applies a luminance correction.
You can also see the difference when playing with my color picker (just scroll down to the full-blown example), which represents colors in the CIE Lch space (it is using CIE Lab in the background).
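To see why manipulation exposes the difference, compare averaging two colours on the encoded values with averaging in linear light. A rough sketch, using the common 2.2 gamma approximation for sRGB:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 0.0, b = 1.0;                 /* black and white, normalised */

    double mid_gamma = (a + b) / 2.0;        /* naive average on encoded values */

    double lin_a = pow(a, 2.2), lin_b = pow(b, 2.2);
    double mid_linear = pow((lin_a + lin_b) / 2.0, 1.0 / 2.2); /* average in linear light */

    printf("%.3f vs %.3f\n", mid_gamma, mid_linear); /* 0.500 vs 0.730 */
    return 0;
}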

Why is color segmentation easier on HSV?

I've heard that if you need to do color segmentation in your software (create a binary image from a colored image by setting pixels to 1 if they meet certain threshold rules like R < 100, G > 100, 10 < B < 123), it is better to first convert your image to HSV. Is this really true? And why?
The big reason is that it separates color information (chroma) from intensity or lighting (luma). Because value is separated, you can construct a histogram or thresholding rules using only saturation and hue. This in theory will work regardless of lighting changes in the value channel. In practice it is just a nice improvement. Even by singling out only the hue you still have a very meaningful representation of the base color that will likely work much better than RGB. The end result is a more robust color thresholding over simpler parameters.
Hue is a circular representation of color, so 0 and 360 are the same hue, which gives you more flexibility with the buckets you use in a histogram. Geometrically you can picture the HSV color space as a cone or cylinder with H being the angle, saturation being the radius, and value being the height. See the HSV Wikipedia page.
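As an illustration, a minimal RGB-to-HSV conversion plus a hue/saturation threshold in C (the threshold numbers are arbitrary placeholders; tune them for your data):

#include <math.h>
#include <stdbool.h>

/* Convert normalised RGB (0..1) to HSV with H in degrees (0..360). */
static void rgb_to_hsv(float r, float g, float b, float *h, float *s, float *v)
{
    float max = fmaxf(r, fmaxf(g, b));
    float min = fminf(r, fminf(g, b));
    float d = max - min;

    *v = max;
    *s = (max > 0.0f) ? d / max : 0.0f;

    if (d == 0.0f)      *h = 0.0f;
    else if (max == r)  *h = 60.0f * fmodf((g - b) / d + 6.0f, 6.0f);
    else if (max == g)  *h = 60.0f * ((b - r) / d + 2.0f);
    else                *h = 60.0f * ((r - g) / d + 4.0f);
}

/* Example rule: keep saturated reddish pixels, ignoring brightness. */
bool is_red_enough(float r, float g, float b)
{
    float h, s, v;
    rgb_to_hsv(r, g, b, &h, &s, &v);
    return (h < 20.0f || h > 340.0f) && s > 0.5f; /* value channel unused */
}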

Most "stable" color representation : RGB? HSV? CIELAB?

There are several color representations in computer science: the standard RGB, but also HSV, HSL, CIE XYZ, YCC, CIELAB, CIELUV, ... It seems to me that most of the time these representations try to approximate human vision (colors that are perceptually identical should have similar representations).
But what I want to know is which representation is the most "stable" when it comes to pictures. I have an object, let's say a bottle of Coke, and I have thousands of pictures of this bottle, taken under very different circumstances (the main difference being how light or dark the picture is, but there's also orientation, etc...).
My question is: what color representation will empirically give me the most stable representation of the colors of the bottle? The "red" color of the label should not vary too much. I know it will vary, but I would like to know the most "stable" representation.
I've been taught that HSV is better than RGB for these kinds of things, but I have no clue about the rest.
Edit (technical details): I take a particular point of the bottle. I pick the corresponding pixels in a thousand pictures of this point. I now have a cloud of points that depends on the representation. I want the representation that minimizes the "size" of this cloud, for example the one that minimizes the mean distance of the points of the cloud to its barycenter.
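For concreteness, that "size" can be measured like this for any candidate representation (a C sketch; the function name is made up):

#include <math.h>
#include <stddef.h>

/* Mean Euclidean distance of n 3-channel points to their barycenter.
   A smaller result means a more "stable" representation for this cloud. */
double cloud_spread(const double (*pts)[3], size_t n)
{
    double c[3] = {0, 0, 0};
    for (size_t i = 0; i < n; i++)
        for (int k = 0; k < 3; k++)
            c[k] += pts[i][k] / n;

    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        double dx = pts[i][0] - c[0];
        double dy = pts[i][1] - c[1];
        double dz = pts[i][2] - c[2];
        sum += sqrt(dx * dx + dy * dy + dz * dz);
    }
    return sum / n;
}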
You might want to check out http://www.cs.harvard.edu/~sjg/papers/cspace.pdf, which proposes a new colorspace apparently designed to address this precise question.
I'm not aware of a colourspace that does what you want, but I do have some remarks:
RGB closely matches the way colours are displayed to us on monitors. It is one of the worst colourspaces available in terms of approximating human perception.
As for the other colourspaces: Some try to make sure colours that are perceptually close together are also close together in the colourspace. Others also try to ensure that perceptually similar differences in colour also produce similar differences in the colourspace, regardless of where in the colourspace you are.
The first means that if you think the difference in colour between blue A, and blue B is similar to the difference in colour between the blue A and blue C, then in the colourspace the distance between blue A and blue B will be similar to the distance between blue A and blue C, and they will all three be close together in the colourspace. I think this is called a perceptually smooth colourspace. CIE XYZ is an example of this.
The second means that if you think the difference in colour between blue A and blue B is similar to the difference in colour between red A and red B then in the colourspace the distance between blue A and blue B will be similar to the difference between red A and red B. This is called a perceptually uniform colourspace. CIE Lab is an example of this.
[edit 2011-07-29] As for your problem: Any of HSV, HSL, CIE XYZ, YCC, CIELAB, CIELUV, YUV separate out the illumination from the colour info in some way, so those are the better options. They provide some immunity from illumination changes, but won't help you when the colour temperature changes drastically or coloured light is used. XYZ and YUV are computationally less expensive to get to from RGB (which is what most cameras give you) but also less "good" than HSV, HSL, or CIELAB (the latter is often considered one of the best, but it is also one of the most difficult).
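For reference, going from linearised sRGB to XYZ is just one matrix multiply; the coefficients below are the standard sRGB/D65 matrix:

/* Linear sRGB (D65) to CIE XYZ. Inputs must be linear, i.e. already
   gamma-decoded, in the range 0..1. */
void lrgb_to_xyz(double r, double g, double b,
                 double *x, double *y, double *z)
{
    *x = 0.4124 * r + 0.3576 * g + 0.1805 * b;
    *y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
    *z = 0.0193 * r + 0.1192 * g + 0.9505 * b;
}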
Depending on what you are searching for, you could calibrate the color balance of the images. For example, suppose you are matching Coca-Cola logos: you know that the letters in the logo are always white. So if they are not white in your image, you can use the colour they do have to correct the image, which also gives you information about the other colours.
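A crude version of that idea is white-patch scaling: rescale each channel so the region you know should be white actually becomes white. A sketch under that assumption (real colour calibration is more involved):

#include <stddef.h>
#include <stdint.h>

/* Rescale each channel so that the measured colour of a known-white
   region (wr, wg, wb, each in 1..255) maps to pure white. */
void white_patch_correct(uint8_t *rgb, size_t npixels,
                         double wr, double wg, double wb)
{
    for (size_t i = 0; i < npixels; i++) {
        double r = rgb[3 * i + 0] * 255.0 / wr;
        double g = rgb[3 * i + 1] * 255.0 / wg;
        double b = rgb[3 * i + 2] * 255.0 / wb;
        rgb[3 * i + 0] = r > 255.0 ? 255 : (uint8_t)r;
        rgb[3 * i + 1] = g > 255.0 ? 255 : (uint8_t)g;
        rgb[3 * i + 2] = b > 255.0 ? 255 : (uint8_t)b;
    }
}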
Our perception of the color of something is mostly determined by its hue; a colorspace such as HSV which gives a single value representing hue will work best.
The eye is a remarkable instrument though, and knowing the color of a single point is not enough. If the entire scene has a yellow or blue tint to it, the eye will compensate and your perception will be of a purer color - the orange Coke bottle will appear to be redder than it is. Likewise with darkness and brightness. If possible, you should try to compensate the image before taking the color sample.

CIE XYZ colorspace: do I have RGBA or XYZA?

http://zi.fi/shots/xyz.png
I plan to write a painting program based on linear combinations of the xy-plane points (0,1), (1,0) and (0,0). Such a system works identically to RGB, except that the primaries are not within the gamut but at the corners of a triangle that encloses the entire gamut. I have seen the three points referred to as X, Y and Z (upper case) somewhere, but I cannot find the page anymore (I marked them in the picture myself).
My pixel format stores the intensity of each of those three components the same way as RGB does, together with alpha value. This allows using pretty much any image manipulation operation designed for RGBA without modifying the code.
What is my format called? Is it XYZA, RGBA or something else? Google doesn't seem to know of XYZA. RGBA will get confused with sRGB + alpha (which I also need to use in the same program).
Notice that the primaries X, Y and Z and their intensities have little to do with the x, y and z coordinates (lower case) that are more commonly used.
http://en.wikipedia.org/wiki/RGB_color_space has the answer:
The CIE 1931 color space standard defines both the CIE RGB space, which is an RGB color space with monochromatic primaries, and the CIE XYZ color space, which works like an RGB color space except that it has non-physical primaries that can not be said to be red, green, and blue.
From this I interpret that XYZA is the correct way to call it.
Are you storing a float between 0.0 and 1.0 for each of the X, Y, Z and A intensities and then mapping that to the RGBA space?
You just have a custom format. It's not called anything special. (I don't believe it really is a pixel format; it is in fact a color space, or a color coordinate in a color space, being mapped to a certain pixel format.)
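To make that mapping concrete, here is one way such an XYZA value could be turned into displayable sRGB plus alpha (the struct and function names are mine; the matrix is the standard XYZ-to-linear-sRGB one, followed by a rough 2.2 gamma encode):

#include <math.h>

typedef struct { float x, y, z, a; } xyza; /* XYZ intensities + alpha */

static float encode(float u) /* clamp and approximate sRGB gamma */
{
    if (u < 0.0f) u = 0.0f;
    if (u > 1.0f) u = 1.0f;
    return powf(u, 1.0f / 2.2f);
}

/* Map an XYZA colour to displayable sRGB + alpha. */
void xyza_to_srgb(xyza c, float out[4])
{
    float r =  3.2406f * c.x - 1.5372f * c.y - 0.4986f * c.z;
    float g = -0.9689f * c.x + 1.8758f * c.y + 0.0415f * c.z;
    float b =  0.0557f * c.x - 0.2040f * c.y + 1.0570f * c.z;
    out[0] = encode(r); out[1] = encode(g); out[2] = encode(b);
    out[3] = c.a;
}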

What is the formula for alpha blending for a number of pixels?

I have a number of RGBA pixels, each of which has an alpha component.
So I have a list of pixels: (p0 p1 p2 p3 p4 ... pn), where p0 is the front pixel and pn is the farthest (at the back).
The last (or any) pixel is not necessarily opaque, so the resulting blended pixel can also be somewhat transparent.
I'm blending from the beginning of the list to the end, not vice versa (yes, it is raytracing). So if the result at any moment becomes opaque enough, I can stop with a correct-enough result.
I'll apply the blending algorithm in this way: ((((p0 # p1) # p2) # p3) ... )
Can anyone suggest a correct blending formula, not only for R, G and B, but for the A component as well?
UPD: I wonder how it is possible that a deterministic process like blending colors can have many formulas? Is it some kind of approximation? This looks crazy to me: the formulas are not so different that we really gain efficiency or optimization. Can anyone clarify this?
Alpha-blending is one of those topics that has more depth than you might think. It depends on what the alpha value means in your system, and if you guess wrong, then you'll end up with results that look kind of okay, but that display weird artifacts.
Check out Porter and Duff's classic paper "Compositing Digital Images" for a great, readable discussion and all the formulas. You probably want the "over" operator.
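Applied front to back with straight-alpha inputs, the "over" accumulation looks roughly like the sketch below, including the early-out the question asks about (names are mine):

typedef struct { double r, g, b, a; } rgba;

/* Composite a front-to-back list of straight-alpha pixels with the
   Porter-Duff "over" operator. Stops early once nearly opaque. */
rgba blend_front_to_back(const rgba *p, int n)
{
    rgba acc = {0, 0, 0, 0}; /* premultiplied accumulator */
    for (int i = 0; i < n && acc.a < 0.999; i++) {
        double w = (1.0 - acc.a) * p[i].a; /* light this layer still owns */
        acc.r += w * p[i].r;
        acc.g += w * p[i].g;
        acc.b += w * p[i].b;
        acc.a += w;
    }
    if (acc.a > 0) { /* back to straight alpha for the caller */
        acc.r /= acc.a; acc.g /= acc.a; acc.b /= acc.a;
    }
    return acc;
}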
It sounds like you're doing something closer to volume rendering. For a formula and references, see the Graphics FAQ, question 5.16 "How do I perform volume rendering?".
There are various possible ways of doing this, depending on how the RGBA values actually represent the properties of the materials.
Here's a possible algorithm. Start with final pixel colours lightr = lightg = lightb = 0 and lightleft = 1.
For each r, g, b, a pixel encountered, evaluate:
lightr += lightleft * r * a;
lightg += lightleft * g * a;
lightb += lightleft * b * a;
lightleft *= 1 - a;   /* fraction of light still unaccounted for */
(If you need a resulting alpha for the blend, it is 1 - lightleft.)
(The RGBA values are normalised between 0 and 1, and I'm assuming that a=1 means opaque, a=0 means wholly transparent)
If the first pixel encountered is blue with opacity 50%, then 50% of the available colour is set to blue, and the rest unknown. If a red pixel with opacity 50% is next, then 25% of the remaining light is set to red, so the pixel has 50% blue, 25% red. If a green pixel with opacity 60% is next, then the pixel is 50% blue, 25% red, 15% green, with 10% of the light remaining.
The physical materials that correspond to this function are light-emitting but partially opaque materials: thus, a pixel in the middle of the stack can never darken the final colour: it can only prevent light behind it from increasing the final colour (by being black and fully opaque).
