CIE XYZ colorspace: do I have RGBA or XYZA?

http://zi.fi/shots/xyz.png
I plan to write a painting program based on linear combinations of the xy-plane points (0, 1), (1, 0) and (0, 0). Such a system works identically to RGB, except that the primaries are not within the gamut but at the corners of a triangle that encloses the entire gamut. I have seen the three points referred to as X, Y and Z (upper case) somewhere, but I cannot find the page anymore (I marked them on the picture myself).
My pixel format stores the intensity of each of those three components the same way as RGB does, together with alpha value. This allows using pretty much any image manipulation operation designed for RGBA without modifying the code.
What is my format called? Is it XYZA, RGBA or something else? Google doesn't seem to know of XYZA. RGBA will get confused with sRGB + alpha (which I also need to use in the same program).
Notice that the primaries X, Y and Z and their intensities have little to do with the x, y and z coordinates (lower case) that are more commonly used.

http://en.wikipedia.org/wiki/RGB_color_space has the answer:
The CIE 1931 color space standard defines both the CIE RGB space, which is an RGB color space with monochromatic primaries, and the CIE XYZ color space, which works like an RGB color space except that it has non-physical primaries that can not be said to be red, green, and blue.
From this I interpret that XYZA is the correct way to call it.
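For display, such XYZA pixels still have to be converted to sRGB at the output end of the pipeline. Here is a minimal Python sketch of that step, using the standard D65 XYZ-to-sRGB matrix; the function names are mine, not from any library, and the plain clamp is just one way to handle out-of-gamut colors:

import numpy as np

# Standard XYZ -> linear sRGB matrix (D65 white point, IEC 61966-2-1).
XYZ_TO_SRGB = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

def srgb_encode(c):
    # Apply the sRGB transfer curve to linear components, clamping to [0, 1].
    c = np.clip(c, 0.0, 1.0)
    return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)

def xyza_to_srgba(pixel):
    # One XYZA pixel (floats) -> display sRGB plus untouched alpha.
    xyz, alpha = np.asarray(pixel[:3], dtype=float), pixel[3]
    rgb_linear = XYZ_TO_SRGB @ xyz  # may leave [0, 1] for out-of-gamut colors
    return np.append(srgb_encode(rgb_linear), alpha)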

Are you storing a float between 0.0 and 1.0 for each X, Y, Z and A intensity and then mapping that to the RGBA space?
You just have a custom format. It's not called anything special. (I don't believe it really is a pixel format; it is in fact a color space, or a color coordinate in a color space, being mapped to a certain pixel format.)

Related

How does one properly scale an XYZ color gamut bounding volume after computing it from color matching functions?

After computing the XYZ gamut bounding mesh below from spectral samples/color matching functions, how does one scale the resulting volume for compatibility with popular color spaces such as sRGB? More specifically, the size and scale of the volume depend on the number of samples and the integral approximation method used to compute it. How, then, can one determine the right values to scale such volumes to match known color spaces like sRGB, Display P3, NTSC, PAL, etc.?
It seemed like fitting the whole volume so that Y ranges from [0, 1] would work, but it had several problems:
When compared to a sub-volume generated by converting the sRGB color cube to XYZ space, the result protruded outside of the 'full gamut'.
Converting random XYZ values from the full gamut volume to sRGB and back, the final XYZ doesn't match the initial one.
Most (all?) standardized color spaces derive from CIE XYZ, so each must have some kind of function or transformation to and from the full XYZ Gamut, or at least each must have some unique parameters for a general function.
How does one determine the correct function and its parameters?
Short answer
If I understand your question, what you are trying to accomplish is determining the sRGB gamut limits (boundary) relative to the XYZ space you have constructed.
Longer answer
I am assuming you are NOT trying to accomplish gamut mapping. That is non-trivial, and there are multiple methods (perceptual, absolute, relative, etc.). I'm going to set gamut mapping aside and instead focus on determining how some arbitrary color space fits inside your XYZ volume.
First to answer your granular questions:
After computing the XYZ gamut bounding mesh below from spectral samples, how does one scale the volume for compatibility with popular color spaces such as sRGB?
What spectral samples? From a spectrophotometer reading a test print under a given standard illuminant? Or where did they come from? A color matching experiment?
The math is a matter of integrating the spectral data to form the XYZ space, which you apparently have done. What illuminant (white point)?
It seemed like fitting the whole volume so that Y ranges from [0, 1] would work, but it had several problems:
Whole volume of what? The sRGB space? How did you convert the sRGB data to XYZ? Or is this really the question you are asking?
What are the proper scaling constants?
They depend on the spectral data and the adapted white point for the spectral data. sRGB is D65. Most printing is done using D50.
Does each color space have its own ranges for x, y, and z values? How can I determine them?
YES.
Every color space has a different transformation matrix depending on the coordinates of the R G and B primaries. The primaries can be imaginary, such as in ProPhoto.
Some Things
The math you are looking for can be found at brucelindbloom.com. Also, you might want to check out Thomas Mansencal's Colour Science, a Python library that's the swiss-army knife of color.
sRGB
XYZ is a linear light space, wherein Y = 0.2 to Y = 0.4 is a doubling of luminance.
sRGB is not a linear space; there is a gamma curve, or tone response curve, on sRGB data, such that going from rgb(20,20,20) to rgb(40,40,40) is NOT a doubling of luminance.
The first thing that needs to be done is to linearize the sRGB color data.
Then take the linear RGB and run it through the appropriate matrix. If the XYZ data is relative to a different adapting white point, then you need to do something like a Bradford transform to convert to the appropriate one for your XYZ space.
The Bruce Lindbloom site has some ready-to-go matrices for a couple of common situations.
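For example, here is a minimal Python sketch of those two steps (linearize, then matrix) for the common D65 case, using Lindbloom's sRGB-to-XYZ matrix. No chromatic adaptation is included, so it assumes your XYZ data is also D65-relative:

import numpy as np

# Linear sRGB -> XYZ matrix for the D65 white point (Bruce Lindbloom's values).
SRGB_TO_XYZ = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])

def srgb_decode(c):
    # Invert the sRGB tone curve: encoded [0, 1] values -> linear [0, 1].
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def srgb_to_xyz(rgb_encoded):
    # Encoded sRGB triple in [0, 1] -> XYZ, relative to D65.
    return SRGB_TO_XYZ @ srgb_decode(rgb_encoded)

# Sanity check: encoded white should land on the D65 white point.
print(srgb_to_xyz([1.0, 1.0, 1.0]))  # ~[0.9505, 1.0000, 1.0888]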
The problem you are describing can be caused by either (or both) failing to linearize the sRGB data and/or not adapting the white point. And... possibly other factors.
If you can answer my questions regarding the source of the spectral data I can better assist.
Further research and experimentation suggested that the XYZ volume should be scaled such that {max(X), max(Y), max(Z)} equals the illuminant of the working space. In the case of sRGB, that illuminant (also called the white point) is D65.
Results look convincing, but expert confirmation would still be appreciated.
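In code, that scaling rule amounts to a per-axis normalization. A minimal numpy sketch of the idea, assuming xyz_points holds the mesh vertices and using the usual 2° observer D65 values:

import numpy as np

# D65 white point tristimulus values, normalized so that Y = 1.
D65 = np.array([0.95047, 1.00000, 1.08883])

def scale_volume_to_whitepoint(xyz_points, whitepoint=D65):
    # Scale vertices so {max(X), max(Y), max(Z)} equals the white point.
    xyz_points = np.asarray(xyz_points, dtype=float)
    return xyz_points * (whitepoint / xyz_points.max(axis=0))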

How does one shift hue in CIE-XYZ color space?

I need to implement my own hue rotation function within the CIE color space. I can't find any technical descriptions of how this is done. I found high level descriptions like "rotate around the Y axis in the XYZ color space", which makes sense because Y is the luminance.
I quickly did a dumb matrix rotation:
vec3 xyz = rgb_to_cie_xyz(color.r, color.g, color.b);
vec3 Y_axis = vec3(0,1,0);
mat4 mat = rotationMatrix(Y_axis, hue_angle);
vec4 res_xyz = mat * vec4(xyz, 1.0);
vec3 res = cie_xyz_to_rgb(res_xyz.x, res_xyz.y, res_xyz.z);
But I later realized it's completely wrong after learning more about the CIE space.
So my question is: A. How do you rotate hue in CIE/XYZ?
or B. Should I convert from XYZ to CIE LCH and change the H (hue) there, and then convert back to XYZ? (That sounds easy if I can find functions for it, but would it even be correct / equivalent to changing hue in XYZ?)
or C. should I convert from XYZ to this 2D-xy CIE-xyY color space? How do you rotate hue on that?
[EDIT]
I have implemented, for example, this code (and tried another source & another source too), planning to convert from XYZ to LAB to LCH, change the hue, then go LCH to LAB to XYZ. But it doesn't seem to make the round trip. XYZ - LAB - XYZ - RGB works fine and looks identical, but XYZ - LAB - LCH - LAB - XYZ - RGB breaks; the result color is completely different from the source color. Is it not meant to be used like this (e.g. is it one-way only)? What am I misunderstanding?
vec3 xyz = xyzFromRgb(color);
vec3 lab = labFromXyz(xyz);
vec3 lch = lchFromLab(lab);// doesn't work
//lch.z = lch.z + hue;
lab = labFromLch(lch);// doesn't work
xyz = xyzFromLab(lab);
vec3 rgb = rgbFromXyz(xyz);
my full code: https://github.com/gka/chroma.js/issues/295
Resources:
what is CIE and CIE-XYZ:
The XYZ system is based on the color matching experiments. X, Y and Z are extrapolations of RGB created mathematically to avoid negative numbers and are called tristimulus values. The X value in this model represents approximately the red/green part of a color, the Y value represents approximately the lightness, and the Z value corresponds roughly to the blue/yellow part.
CIE LAB and CIE LCH:
The LCh color space, similar to CIELAB, is preferred by some industry professionals because its system correlates well with how the human eye perceives color. It has the same diagram as the L*a*b* color space but uses cylindrical coordinates instead of rectangular coordinates.
In this color space, L* indicates lightness, C* represents chroma, and h is the hue angle. The value of chroma C* is the distance from the lightness axis (L*) and starts at 0 in the center. The hue angle starts at the +a* axis and is expressed in degrees (e.g., 0° is +a*, or red, and 90° is +b*, or yellow).
How to convert between RGB and CIE XYZ (transformation matrices)
CIE color has a lot of representations and sub-representations, and it's not visualized or explained technically or consistently around the internets. After reading many sources and checking many projects to converge on a clear picture, and as Giacomo said in the comments: yes, it seems the only way to change hue is to go from CIE XYZ to CIE LAB and then into a cylindrical, hue-shiftable representation, which is CIE LCH.
The conversion chain (RGB -) XYZ - LAB - LCH - LAB - XYZ (- RGB), just to change the hue, is normal and done everywhere, although it's very hard to find projects or code online that specifically say they're "changing hue in CIE color space" or that even have the word "hue" in them at all. It's also strange that you cannot find anything on GitHub that converts straight from XYZ to LCH or HSV to LCH, given how many projects chain the intermediate steps to blend CIE colors, and the fact that the web is transitioning to using the LCH color space.
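To make the chain concrete, here is a minimal Python sketch of the whole XYZ - LAB - LCH - LAB - XYZ hue shift, assuming a D65 white point and the standard CIE formulas (the helper names are mine, not from any particular library). One classic way this round trip breaks in ports is mixing degrees and radians for h, so that conversion is explicit below:

import math

# D65 reference white (2° observer) for the Lab <-> XYZ formulas below.
XN, YN, ZN = 0.95047, 1.00000, 1.08883
DELTA = 6.0 / 29.0

def _f(t):
    return t ** (1.0 / 3.0) if t > DELTA ** 3 else t / (3 * DELTA ** 2) + 4.0 / 29.0

def _f_inv(t):
    return t ** 3 if t > DELTA else 3 * DELTA ** 2 * (t - 4.0 / 29.0)

def xyz_to_lab(x, y, z):
    fx, fy, fz = _f(x / XN), _f(y / YN), _f(z / ZN)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_xyz(L, a, b):
    fy = (L + 16) / 116.0
    return XN * _f_inv(fy + a / 500.0), YN * _f_inv(fy), ZN * _f_inv(fy - b / 200.0)

def shift_hue_xyz(x, y, z, degrees):
    # XYZ -> Lab -> LCh, rotate h, then LCh -> Lab -> XYZ.
    L, a, b = xyz_to_lab(x, y, z)
    c, h = math.hypot(a, b), math.atan2(b, a)  # Lab -> LCh, h in radians
    h += math.radians(degrees)
    a, b = c * math.cos(h), c * math.sin(h)    # LCh -> Lab
    return lab_to_xyz(L, a, b)

With a shift of 0 degrees the round trip should reproduce the input XYZ exactly; if it doesn't, the bug is in the Lab/LCh math rather than in the hue logic.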
I found this brilliant shadertoy while searching for lablch: https://www.shadertoy.com/view/lsdGzN which offers efficient and merged XYZ to LCH and LCH to XYZ conversions. ❤️
It has a lot of magic numbers in it though, so even using it I still haven't figured out what was wrong with my code/ports, and others are having issues too. I'm doing atan2 right, floats right, matrix mults right, etc. 🤷‍♂️ I'll update my answer when I get around to figuring it out.
[Edit] It seems the problem here is that there is loss of information going through all these color spaces, and a lot of assumptions or approximations must happen to make a round trip. The Shadertoy author probably did some magic. I'll have to investigate further when I need to.

How to tell if an xyY color lies within the CIE 1931 gamut?

I'm trying to plot the CIE 1931 color gamut using math.
I take an xyY color with Y fixed to 1.0, then vary x and y from 0.0 to 1.0.
If I plot the resulting colors as an image (i.e. the pixel at (x, y) is my xyY color converted to RGB), I get a pretty picture with the CIE 1931 color gamut somewhere in the middle of it, like this:
xyY from 0.0 to 1.0:
Now I want the classic tongue-shaped image so my question is: How do I cull pixels outside the range of the CIE 1931 color gamut?
i.e. how can I tell if my xyY color is inside/outside the CIE 1931 color range?
I happened upon this question while searching for a slightly different but related issue, and what immediately caught my eye is the rendering at the top. It's identical to the rendering I had produced a few hours earlier, and trying to figure out why it didn't make sense is, in part, what led me here.
For readers: the rendering is what results when you convert from {x ∈ [0, 1], y ∈ [0, 1], Y = 1} to XYZ, convert that color to sRGB, and then clamp the individual components to [0, 1].
At first glance, it looks OK. At second glance, it looks off... it seems less saturated than expected, and there are visible transition lines at odd angles. Upon closer inspection, it becomes clear that the primaries aren't smoothly transitioning into each other. Much of the range, for example, between red and blue is just magenta—both R and B are 100% for almost the entire distance between them. When you then add a check to skip drawing any colors that have an out-of-range component, instead of clamping, everything disappears. It's all out-of-gamut. So what's going on?
I think I've got this one small part of colorimetry at least 80% figured out, so I'm setting this out, greatly simplified, for the edification of anyone else who might find it interesting or useful. I also try to answer the question.
(⚠️ Before I begin, an important note: valid RGB display colors in the xyY space can be outside the boundary of the CIE 1931 2° Standard Observer. This isn't the case for sRGB, but it is the case for Display P3, Rec. 2020, CIE RGB, and other wide gamuts. This is because the three primaries need to add up to the white point all by themselves, and so even monochromatic primaries must be incredibly, unnaturally luminous compared to the same wavelength under equivalent illumination.)
Coloring the chromaticity diagram
The xy chromaticity diagram isn't just a slice through xyY space. It's intrinsically two-dimensional. A point in the xy plane represents chromaticity apart from luminance, so to the extent that there is a color drawn there, it is meant to represent, as best as possible, only the chromaticity, not any specific color. Normally the colors shown are the brightest, most saturated colors for that chromaticity, or whatever's closest in the display's color space, but that's an arbitrary design decision.
Which is to say: to the extent that there are illustrative colors drawn they're necessarily fictitious, in much the same way that coloring an electoral map is purely a matter of data visualization: a convenience to aid comprehension. It's just that, in this case, we're using colors to visualize one aspect of colorimetry, so it's super easy to conflate the two things.
(Image credit: Michael Horvath)
The falsity, and necessity thereof, of the colors becomes obvious when we consider the full 3D shape of the visible spectrum in the xyY space. The classic spectral locus ("horseshoe") can easily be seen to be the base of a quasi-Gibraltian volume, widest at the spectral locus and narrowing to a summit (the white point) at {Y = 1}. If viewed as a top-down projection, then colors located on and near the spectral locus would be very dark (although still the brightest possible color for that chromaticity), and would grow increasingly luminous towards the center. If viewed as a slice of the xyY volume, through a particular value of Y, the colors would be equally luminous but would grow brighter overall, and the shape of the boundary would shrink, again unevenly, with increasing Y, until it disappeared entirely. So far as I can tell, neither of these possibilities sees much, if any, practical use, interesting though they may be.
Instead, the diagram is colored inside out: the gamut being plotted is colored with maximum intensities (each primary at its brightest, and then linear mixtures in the interior) and out-of-gamut colors are projected from the inner gamut triangle to the spectral locus. This is annoying because you can't simply use a matrix transformation to turn a point on the xy plane into a sensible color, but in terms of actually communicating useful and somewhat accurate information it seems, unfortunately, to be unavoidable.
(To clarify: it is actually possible to move a single chromaticity point into the sRGB space, and color the chromaticity diagram pixel-by-pixel with the most brightly saturated sRGB colors possible—it's just more complicated than a simple matrix transformation. To do so, first move the three-coordinate xyz chromaticity into sRGB. Then clamp any negative values to 0. Finally, scale the components uniformly such that the maximum component value is 1. Be aware this can be much slower than plotting the whitepoint and the primaries and then interpolating between them, depending on your rendering method and the efficiency of your data representations and their operations.)
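Here is a minimal Python sketch of that pixel-by-pixel recipe, under the assumption that a rough 1/2.2 gamma encode is good enough for display (a careful implementation would use the piecewise sRGB curve instead):

import numpy as np

# XYZ -> linear sRGB matrix (D65).
XYZ_TO_SRGB = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

def chromaticity_color(x, y):
    # Brightest plausible sRGB color for one (x, y) chromaticity point.
    xyz = np.array([x, y, 1.0 - x - y])  # three-coordinate chromaticity
    rgb = XYZ_TO_SRGB @ xyz
    rgb = np.maximum(rgb, 0.0)           # clamp negative components to 0
    m = rgb.max()
    if m > 0:
        rgb /= m                         # scale uniformly so max component is 1
    return rgb ** (1 / 2.2)              # approximate gamma encode for display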
Drawing the spectral locus
The most straightforward way to get the characteristic horseshoe shape is just to use a table of the empirical data.
(http://cvrl.ioo.ucl.ac.uk/index.htm, scroll down for the "historical" datasets that will most closely match other sources intended for the layperson. Their too-clever icon scheme for selecting data is that a dotted-line icon is for data sampled at 5nm, a solid line icon is for data sampled at 1nm.)
Construct a path with the points as vertices (you might want to trim some off the top, I cut it back to 700nm, the CIERGB red primary), and use the resulting shape as a mask. With 1nm samples, a polyline should be smooth enough for near any resolution: there's no need for fitting bezier curves or whatnot.
(Note: only every 5th point shown for illustrative purposes.)
If all we want to do is draw the standard horseshoe bounded by the triangle (0, 0), (0, 1), and (1, 0), then that should suffice. Note that we can save rendering time by skipping any coordinates where x + y >= 1. If we want to do more complex things, like plot the changing boundary for different Y values, then we're talking about the color matching functions that define the XYZ space.
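For the culling itself, a point-in-polygon test against that path is all the original question needs. A sketch using matplotlib's Path, assuming locus_xy is the list of (x, y) vertices taken from the table above:

from matplotlib.path import Path

def make_gamut_test(locus_xy):
    # Close the horseshoe explicitly with the line of purples.
    horseshoe = Path(list(locus_xy) + [locus_xy[0]])
    def inside(x, y):
        # Cheap pre-cull, per the tip above: skip anything with x + y >= 1.
        return (x + y) < 1.0 and horseshoe.contains_point((x, y))
    return inside

Pixels for which inside(x, y) returns False are the ones to cull from the rendering.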
Color matching functions
(Image credit: User:Acdx - Own work, CC BY-SA 4.0)
The ground truth for the XYZ space is in the form of three functions that map spectral power distributions to {X, Y, Z} tristimulus values. A lot of data and calculations went into constructing the XYZ space, but it all gets baked into these three functions, which uniquely determine the {X, Y, Z} values for a given spectrum of light. In effect, what the functions do is define 3 imaginary primary colors, which can't be created with any actual light spectrum, but can be mixed together to create perceptible colors. Because they can be mixed, every non-negative point in the XYZ space is meaningful mathematically, but not every point corresponds to a real color.
The functions themselves are actually defined as lookup tables, not equations that can be calculated exactly. The Munsell Color Science Laboratory (https://www.rit.edu/science/munsell-color-lab) provides 1nm resolution samples: scroll down to "Useful Color Data" under "Educational Resources." Unfortunately, it's in Excel format. Other sources might provide 5nm data, and anything more precise than 1nm is probably a modern reconstruction which might not commute with the 1931 space.
(For interest: this paper—http://jcgt.org/published/0002/02/01/—provides analytic approximations with error within the variability of the original human subject data, but they're mostly intended for specific use cases. For our purposes, it's preferable, and simpler, to stick with the empirically sampled data.)
The functions are referred to as x̅, y̅, and z̅ (or x bar, y bar, and z bar.) Collectively, they're known as the CIE 1931 2 Degree Standard Observer. There's a separate 1964 standard observer constructed from a wider 10 degree field-of-view, with minor differences, which can be used instead of the 1931 standard observer, but which arguably creates a different color space. (The 1964 standard observer shouldn't be confused with the separate CIE 1964 color space.)
To calculate the tristimulus values, you take the inner product of (1) the spectrum of the color and (2) the color matching function. This just means that every point (or sample) in the spectrum is multiplied by the corresponding point (or sample) in the color matching function, which serves to reweight the data. Then, you take the integral (or summation, more accurately, since we're dealing with discrete samples) over the whole range of visible light ([360nm, 830nm].) The functions are normalized so that they have equal area under their curves, so an equal energy spectrum (the sampled value for every wavelength is the same) will have {X = Y = Z}. (FWIW, the Munsell Color Lab data are properly normalized, but they sum to 106 and change, for some reason.)
Taking another look at that 3D plot of the xyY space, we notice again that the familiar spectral locus shape seems to be the shape of the volume at {Y = 0}, i.e. where those colors are actually black. This now makes some sort of sense, since they are monochromatic colors, and their spectrums should consist of a single point, and thus when you take the integral over a single point you'll always get 0. However, that then raises the question: how do they have chromaticity at all, since the other two functions should also be 0?
The simplest explanation is that Y at the base of the shape is actually ever-so-slightly greater than zero. The use of sampling means that the spectrums for the monochromatic sources are not taken to be instantaneous values. Instead, they're narrow bands of the spectrum near their wavelengths. You can get arbitrarily close to instantaneous and still expect meaningful chromaticity, within the bounds of precision, so the limit as the sampling bandwidth goes to 0 is the ideal spectral locus, even if it disappears at exactly 0. However, the spectral locus as actually derived is just calculated from the single-sample values for the x̅, y̅, and z̅ color matching functions.
That means that you really just need one set of data—the lookup tables for x̅, y̅, and z̅. The spectral locus can be computed from each wavelength by just dividing x̅(wl) and y̅(wl) by x̅(wl) + y̅(wl) + z̅(wl).
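In code, both the integration and the locus are a few lines of numpy. A sketch, assuming cmf is an (N, 4) array of [wavelength, x̅, y̅, z̅] rows at 1nm steps and spd is an (N,) array of spectral samples; the absolute normalization constant is omitted, since it cancels when dividing through to get chromaticity:

import numpy as np

def spectrum_to_xyz(spd, cmf):
    # Inner product of the spectrum with each color matching function.
    X = np.sum(spd * cmf[:, 1])
    Y = np.sum(spd * cmf[:, 2])
    Z = np.sum(spd * cmf[:, 3])
    return X, Y, Z

def spectral_locus_xy(cmf):
    # Chromaticity (x, y) of each monochromatic wavelength in the table.
    xyz = cmf[:, 1:4]
    return xyz[:, :2] / xyz.sum(axis=1, keepdims=True)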
(Image credit: Apple, screenshot from ColorSync Utility)
Sometimes you'll see a plot like this, with a dramatically arcing, rainbow-colored line swooping up and around the plot, and then back down to 0 at the far red end of the spectrum. This is just the y̅ function plotted along the spectral locus, scaled so that y̅ = Y. Note that this is not a contour of the 3D shape of the visible gamut. Such a contour would be well inside the spectral locus through the blue-green range, when plotted in 2 dimensions.
Delineating the visible spectrum in XYZ space
The final question becomes: given these three color matching functions, how do we use them to decide if a given {X, Y, Z} is within the gamut of human color perception?
Useful fact: you can't have luminosity by itself. Any real color will also have a non-zero value for one or both of the other functions. We also know Y by definition has a range of [0, 1], so we're really only talking about figuring whether {X, Z} is valid for a given Y.
Now the question becomes: what spectrums (simplified for our purposes: an array of 471 values, either 0 or 1, for the wavelengths [360nm, 830nm], bandwidth 1nm), when weighted by y̅, will sum to Y?
The XYZ space is additive, like RGB, so any non-monochromatic light is equivalent to a linear combination of monochromatic colors at various intensities. In other words, any point inside of the spectral locus can be created by some combination of points situated exactly on the boundary. If you took the monochromatic CIE RGB primaries and just added up their tristimulus values, you'd get white, and the spectrum of that white would just be the spectrum of the three primaries superimposed, a thin band at the wavelength for each primary.
It follows, then, that every possible combination of monochromatic colors is within the gamut of human vision. However, there's a ton of overlap: different spectrums can produce the same perceived color. This is called metamerism. So, while it might be impractical to enumerate every possible individually perceptible color or spectrums that can produce them, it's actually relatively easy to calculate the overall shape of the space from a trivially enumerable set of spectrums.
What we do is step through the gamut wavelength-by-wavelength, and, for that given wavelength, we iteratively sum ever-larger slices of the spectrum starting from that point, until we either hit our Y target or run out of spectrum. You can picture this as going around a circle, drawing progressively larger arcs from one starting point and plotting the center of the resulting shape—when you get to an arc that is just the full circle, the centers coincide, and you get white, but until then the points you plot will spiral inward from the edge. Repeat that from every point on the circumference, and you'll have points spiraling in along every possible path, covering the gamut. You can actually see this spiraling in effect, sometimes, in 3D color space plots.
In practice, this takes the form of two loops, the outer loop going from 360 to 830, and the inner loop going from 1 to 470. In my implementation, what I did for the inner loop is save the current and last summed values, and once the sum exceeds the target I use the difference to calculate a fractional number of bands and push the outer loop's counter and that interpolated width onto an array, then break out of the inner loop. Interpolating the bands greatly smooths out the curves, especially in the prow.
Once we have the set of spectrums of the right luminance, we can calculate their X and Z values. For that, I have a higher order summation function that gets passed the function to sum and the interval. From there, the shape of the gamut on the chromaticity diagram for that Y is just the path formed by the derived {x, y} coordinates, as this method only enumerates the surface of the gamut, without interior points.
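Here is a minimal Python sketch of that procedure, using the same assumed cmf table as above. My actual implementation pushes the starting wavelengths and interpolated band widths onto an array first and computes X and Z with a separate summation function; the sketch below inlines all of that and just emits the boundary points:

import numpy as np

def gamut_boundary_xy(cmf, Y_target):
    # Trace the chromaticity boundary at luminance Y_target by summing
    # ever-larger spectral windows from each starting wavelength.
    xbar, ybar, zbar = cmf[:, 1], cmf[:, 2], cmf[:, 3]
    n = len(cmf)
    k = 1.0 / ybar.sum()  # normalize so the full spectrum gives Y = 1
    boundary = []
    for start in range(n):            # outer loop: each starting wavelength
        y_sum = 0.0
        for end in range(start, n):   # inner loop: grow the window until Y is hit
            y_new = y_sum + k * ybar[end]
            if y_new >= Y_target:
                # Interpolate a fractional final band; this smooths the contours.
                frac = (Y_target - y_sum) / (y_new - y_sum) if y_new > y_sum else 0.0
                X = k * (xbar[start:end].sum() + frac * xbar[end])
                Z = k * (zbar[start:end].sum() + frac * zbar[end])
                s = X + Y_target + Z
                boundary.append((X / s, Y_target / s))
                break
            y_sum = y_new
        # Running out of spectrum before reaching Y_target just means no
        # boundary point starts at this wavelength.
    return boundary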
In effect, this is a simpler version of what libraries like the one mentioned in the accepted answer do: they create a 3D mesh via exhaustion of the continuous spectrum space and then interpolate between points to decide if an exact color is inside or outside the gamut. Yes, it's a pretty brute-force method, but it's simple, speedy, and effective enough for demonstrative and visualization purposes. Rendering a 20-step contour plot of the overall shape of the chromaticity space in a browser is effectively instantaneous, for instance, with nearly perfect curves.
There are a couple of places where a lack of precision can't be entirely smoothed over: in particular, two corners near orange are clipped. This is due to the shapes of the lines of partial sums in this region being a combination of (1) almost perfectly horizontal and (2) having a hard cusp at the corner. Since the points exactly at the cusp aren't at nice even values of Y, the flatness of the contours is a problem because they're perpendicular to the mostly-vertical line of the cusp, so interpolating points to fit any given Y is at its worst in this region. Another problem is that the points aren't uniformly distributed, being concentrated very near the cusp: the clipping of the corner corresponds to situations where an outlying point is interpolated. All these issues can clearly be seen in this plot (rendered with 20nm bins for clarity but, again, more precision doesn't eliminate the issue):
Conclusion
Of course, this is the sort of highly technical and pitfall-prone problem (PPP) that is often best outsourced to a quality 3rd party library. Knowing the basic techniques and science behind it, however, demystifies the entire process and helps us use those libraries effectively, and adapt our solutions as needs change.
You could use Colour and the colour.is_within_visible_spectrum definition:
>>> import numpy as np
>>> from colour import is_within_visible_spectrum
>>> is_within_visible_spectrum(np.array([0.3205, 0.4131, 0.51]))
array(True, dtype=bool)
>>> a = np.array([[0.3205, 0.4131, 0.51],
... [-0.0005, 0.0031, 0.001]])
>>> is_within_visible_spectrum(a)
array([ True, False], dtype=bool)
Note that this definition expects CIE XYZ tristimulus values, so you would have to convert your CIE xyY colourspace values to XYZ using the colour.xyY_to_XYZ definition.

Difference between colors with a same rgb value in sRGB space and CIE RGB space

Could someone tell me why colors with the same RGB value (for example 127, 127, 127) look exactly the same in an image using the sRGB space and in one using the CIE RGB space? Since one is non-linear (with gamma correction) and the other is linear (without gamma correction), I think they should look somewhat different. But the images I've created look exactly the same (I used Photoshop to create the former, and for the latter I tried Photoshop, OpenGL and OpenCV).
The difference shows up when you are manipulating an image or a color (changing the brightness or the saturation of an image). It is most visible when lowering the saturation of yellow. Try it in Photoshop in RGB and in Lab mode. Do not switch to grayscale mode, because that applies luminance correction; use the saturation slider in the Adjustments > Hue/Saturation menu instead.
You can also see the difference when playing with my color picker (just scroll down to the full-blown example), which represents colors in the CIE Lch space (it is using CIE Lab in the background).

Convert grayscale value to RGB representation?

How can I convert a grayscale value (0-255) to an RGB value/representation?
It is for use in an SVG image, which doesn't seem to come with grayscale support, only RGB...
Note: this is not RGB -> grayscale, which is already answered in another question, e.g. Converting RGB to grayscale/intensity)
The quick and dirty approach is to repeat the grayscale intensity for each component of RGB. So, if you have grayscale 120, it translates to RGB (120, 120, 120).
This is quick and dirty because the effective luminance you get depends on the actual luminance of the R, G and B subpixels of the device that you're using.
If you have the greyscale value in the range 0..255 and want to produce a new value in the form 0x00RRGGBB, then a quick way to do this is:
int rgb = grey * 0x00010101;  // replicates the grey byte into the R, G and B positions
or equivalent in your chosen language.
Converting a grayscale value to RGB is simple: just use R = G = B = the gray value. The basic idea is that color (as viewed on a monitor in terms of RGB) is an additive system.
http://en.wikipedia.org/wiki/Additive_color
Thus adding red to green yields yellow. Add in some blue to that mix in equal amounts, and you get a neutral color. Full-on [red, green, blue] = [255, 255, 255] yields white; [0, 0, 0] yields monitor black. Intermediate values, where R, G and B are all equal, will yield nominally neutral colors at the given level of gray.
A minor problem is that, depending on how you view the color, it may not be perfectly neutral. This will depend on how your monitor (or printer) is calibrated. There are interesting depths of color science we could go into from this point; I'll stop here.
Grey-scale means that all values have the same intensity. Set all channels (in RGB) equal to the grey value and you will have an RGB black and white image.
Wouldn't setting R, G, and B to the same value (the greyscale value) for each pixel get you a correct shade of gray?
You may also take a look at my solution Faster assembly optimized way to convert RGB8 image to RGB32 image. The gray channel is simply repeated in all the other channels.
The purpose was to find the fastest possible solution for conversion using x86/SSE.
