normalizeColor in Brad Larson's Threshold filter - gpuimage

That might be easy to understand, but I don't get the use of the normalizeColor function in Brad Larson's GPUImage. You can find it, for example, in the colorObjectTracking example under Threshold.fsh:
vec3 normalizeColor(vec3 color)
{
return color / max(dot(color, vec3(1.0/3.0)), 0.3);
}
Here is what I get: you take the incoming color "color" and divide it by the dot product of the color vector with (1/3, 1/3, 1/3), i.e. by the average of its components, or by 0.3 if that average is smaller than 0.3.
So two questions:
Why is it necessary to normalize "color" to the average of its elements?
Why is there a minimum limit of 0.3? (As I understand the max() function)
Thanks a lot!
alti

A more appropriate place to ask this might have been on the project site itself, but I'll bite.
The point of the fragment shader there is to identify pixels in an image that are of a particular color. That function does a crude normalization for the brightness of a color, so that different lighting conditions can be accounted for when matching a color on an object. The max() operation is just a floor on the divisor: without it, very dark pixels (whose average channel value is below 0.3) would be divided by a tiny number and blown up into wildly amplified colors.
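To make the effect concrete, here is a rough JavaScript mirror of that function (my own sketch, not code from GPUImage): two colors that differ only in overall brightness normalize to roughly the same vector, which is what lets the threshold match an object under different lighting.
// Sketch only: the shader's normalizeColor rewritten in plain JavaScript.
function normalizeColor(r, g, b) {
  // Average brightness of the channels, floored at 0.3 so very dark
  // pixels are not amplified into huge values.
  var brightness = Math.max((r + g + b) / 3.0, 0.3);
  return [r / brightness, g / brightness, b / brightness];
}
console.log(normalizeColor(0.9, 0.6, 0.3)); // [1.5, 1.0, 0.5]
console.log(normalizeColor(0.6, 0.4, 0.2)); // same hue at lower brightness -> [1.5, 1.0, 0.5] again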
This particular fragment shader is entirely based on the example provided by Apple's Core Image engineers in their GPU Gems article entitled "Object Detection by Color: Using the GPU for Real-Time Video Image Processing", and they go into a little more detail about it there.
A better approach would be to get the proximity to a given color by removing the luminance component and instead examining the chrominance of a pixel. If you have the YUV source, you can do this pretty easily from the Cr and Cb components. My GPUImageChromaKeyFilter illustrates the extraction of YUV data from RGB inputs, with a thresholding then applied around the chrominance. This, too, was drawn from an example by Apple (I believe this was from their ChromaKey WWDC sample).
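As a rough illustration of that chrominance-based approach (a sketch in JavaScript using BT.601-style YCbCr, not GPUImage's actual shader), you convert both the pixel and the key color to Cb/Cr and threshold on the distance between them, so overall brightness mostly drops out:
// Sketch only: chroma-distance matching. r, g, b are in [0, 1].
function rgbToCbCr(r, g, b) {
  var y  = 0.299 * r + 0.587 * g + 0.114 * b; // luma, discarded for matching
  var cb = 0.5 + 0.564 * (b - y);             // blue-difference chroma
  var cr = 0.5 + 0.713 * (r - y);             // red-difference chroma
  return [cb, cr];
}
// True if the pixel's chrominance is within `threshold` of the key color's.
function matchesColor(pixelRgb, keyRgb, threshold) {
  var p = rgbToCbCr(pixelRgb[0], pixelRgb[1], pixelRgb[2]);
  var k = rgbToCbCr(keyRgb[0], keyRgb[1], keyRgb[2]);
  var d = Math.sqrt(Math.pow(p[0] - k[0], 2) + Math.pow(p[1] - k[1], 2));
  return d < threshold;
}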

Related

Direct3D with sRGB - Gamma Corrected Colors

I'm coming from a Direct3D 9 background, and recently switched my custom game engine over to Direct3D12. After some research, it looked like using one of the *_SRGB formats was the way to go, because it corrected the gamma level.
Immediately, I noticed that everything nearly doubled in brightness, which was unexpected. When correcting a curve, I would expect some values to be brighter and some to be darker, but everything just appears brighter. However, I just accepted it and moved on. But now I'm noticing some other strange issues, and I'm not sure what's going on. Maybe someone could help me understand what I'm missing?
When I draw a primitive with a color value in either HLSL or C++, such as color(128,128,128,255) or float4(0.5,0.5,0.5,1.0), the resulting color I see on the screen is actually RGB 188,188,188. Is this to be expected? I'm reading the values of these colors in Adobe Photoshop 2022, which is in sRGB mode. Should the values not match up if both applications are using sRGB?
128 to 188 is really strange, but 0.5 to 0.73 is even stranger. How do I manually construct a color that comes out the way I constructed it? For example, one might use 0.5 to scale by "half brightness", but 0.73 is definitely not half brightness. It's almost white.
If our textures are painted on a PC, such as in Substance Painter or Photoshop, what is the point of converting all of these colors? If the artist can see the same color space that will be used to render, why tell the display to show us something else?
Before I switched to sRGB, I modeled in Blender, and my textures always looked the same between Blender and my game engine. If I start using sRGB, I'm worried that will not be the case. How are artists making that work?
Images that I've seen that were gamma corrected are often brighter and washed out. And images that were not gamma corrected are usually dark and rich. Does gamma correction cause some type of saturation loss in darker colors?
I appreciate any guidance. I've done research on this topic, but most of the information goes on endlessly about linear color space. Linear is nice, because it makes math easier, but half of the stuff we deal with in a 3D app is non-linear. At this point, I'm not sure it's worth it.
1&2: Gamma correction is designed to convert between light intensity as it exists in the real world, i.e. the amount of photons that hit a camera sensor, and how human beings perceive light. So if a light source emits 50% fewer photons, we perceive it as around 74% as bright (individual curves may vary), so 128 should become 188.
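In code, the sRGB encoding looks roughly like this (a sketch of the standard transfer curve, not whatever your specific driver does): a linear grey of 128/255 encodes to about 0.737, i.e. roughly 188 out of 255, which is why the value appears to jump.
// Sketch: standard sRGB encode (linear -> display value), both in [0, 1].
function linearToSrgb(c) {
  return c <= 0.0031308
    ? 12.92 * c
    : 1.055 * Math.pow(c, 1.0 / 2.4) - 0.055;
}
console.log(Math.round(linearToSrgb(128 / 255) * 255)); // ~188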
3&4: The point of linear color is to allow us to process images in a space where an increase in the number of photons is linearly related to the increase in the intensity values. Then the linear colors are gamma corrected before presenting them to the user. When you work in those programs, you are looking at gamma corrected images.
Basically, people don't look at linear color spaces. They look wrong to us. They only exist to allow the computer to do some processing. If your shaders do their work in linear color space, saving your images in a linear format can have performance benefits, because the pipeline no longer has to remove the gamma, do the processing, and then reapply the gamma.
The problem may be that you are gamma correcting images that are already gamma corrected. If the images look right to you, they are probably already gamma corrected; if they look dark, with the lighter areas seemingly emphasized, they are probably linear. If you are adding colors or images that already look right to you before the output gamma is applied, you will have to put them through inverse gamma correction first.
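So if you want an authored value like (128,128,128) to come out of a *_SRGB render target looking the way you authored it, feed the pipeline the linearized value rather than 0.5. A rough sketch of that inverse step (the standard sRGB decode, assumed here rather than taken from your engine's code):
// Sketch: standard sRGB decode (display value -> linear), both in [0, 1].
function srgbToLinear(c) {
  return c <= 0.04045
    ? c / 12.92
    : Math.pow((c + 0.055) / 1.055, 2.4);
}
// An authored grey of 128 is ~0.216 linear; pass that to the shader and the
// sRGB render target encodes it back to ~128 on screen.
console.log(srgbToLinear(128 / 255)); // ~0.216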
How Applications Display/Convert Color Spaces: (Edit)
Photoshop interprets what the numbers in an image mean through the currently applied color space. It is possible to both "assign" a color space, which changes how Photoshop interprets the numbers, and "convert" a color space, which changes the numbers so that they look the same (or as close as possible) when interpreted through the new color space.
This first image is in the sRGB color space. I've painted a gray dot with the values of (127,127,127).
In this second image I have converted the image to a linear color space. It looks almost the same, because Photoshop always applies gamma correction so that it looks right to you, but the first dot now has the value (54,54,54). I've added a second dot with the values (127,127,127) in this color space.
In this third image, I have assigned the sRGB color space. Now Photoshop thinks the numbers are in the sRGB color space, so it thinks the image already has gamma correction, and is showing us something like the way linear color space looks.
For the final image, I did everything in the opposite direction, drawing a dot with a value of (127,127,127), then converting back. The last dot now has a value of (187,187,187).
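For reference, those numbers are just the standard sRGB transfer functions applied in each direction; a quick check (my own arithmetic, not Photoshop's internal code):
var toLinear = Math.pow((127 / 255 + 0.055) / 1.055, 2.4) * 255;     // sRGB 127 converted to linear
var toSrgb   = (1.055 * Math.pow(127 / 255, 1 / 2.4) - 0.055) * 255; // linear 127 converted to sRGB
console.log(Math.round(toLinear), Math.round(toSrgb));               // 54 187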

How can I even out colors so text is readable against them at any given hue and lightness?

Anyone who frequently does UI likely knows that for a given color hsl(H, 100%, 50%) (syntax is CSS), not all values of H will produce a color that black or white text can reliably be read against. The specific fact I'm noting is that certain colors (green) appear especially bright and others (blue) appear especially dark.
Well suppose I would like a user to be able to enter a color hue and have the color always appear with a consistent brightness so that one of either white or black text is guaranteed to always be readable on top of it. I would like all colors to also maintain the most vivid level of saturation they can given the constraint on brightness.
Here is a quick example of what I've tried so far. I start with a grid of squares like this, rendered using a bunch of HTML div elements. Essentially these are hue values roughly from 0 to 360 along the horizontal axis and lightness values from roughly 0% to 100% along the vertical axis. All saturation values are set to 100%.
Using a JS library called chroma.js, I now process all colors using the color.luminance function, whose description seems to match what I'm looking for. I just passed the lightness of the hsl value in as the parameter to the function. I don't know for sure that this is the best way to accomplish my goal, though, since I'm not familiar with all the terminology at play here. Please note that my choice to use this library is by no means a constraint on how I want to go about this. It just represents my attempt at solving the problem.
The colors certainly now have a more consistent lightness, but the spectrum now seems particularly vivid around the orange to cyan area and particularly dull everywhere else. Also the colors seem to drop very quickly away from black at the top.
Hopefully this example helps a bit to express what I'm trying to accomplish here. Does anyone know what the best way to go about this is?
I found the solution! Check out HSLuv. It balances out all the hues in the spectrum so that at any given saturation and lightness, all hues will have the exact same perceived brightness to the human eye.
This solved my problem because now I can just set my text color to white (for example) and then as long as the text is readable against a certain HSLuv lightness it is guaranteed that it will be readable against any hue and saturation used in combination with that lightness. Magic.
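For anyone wanting to try this, a rough sketch of how it might look in JavaScript with the hsluv npm package (older releases export tuple-based functions such as hsluvToHex; newer ones use a class, so check the docs for your version and treat the call below as an assumption):
// Sketch: generate swatches at a fixed HSLuv lightness so contrast against
// white text stays consistent across hues.
var hsluv = require("hsluv");
var saturation = 100;
var lightness = 60; // fixed perceived lightness for every hue
for (var hue = 0; hue < 360; hue += 30) {
  var hex = hsluv.hsluvToHex([hue, saturation, lightness]); // assumed tuple API
  console.log(hue, hex);
}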

CIE-L*u*v* color interpolation

I'm writing a vertex decimator that needs to interpolate vertex colors on a mesh. I'm reading Level of Detail for 3D Graphics for domain material. In the color interpolation section, the book goes on to suggest using the CIE-L*u*v* color space to perform perceptually linear interpolation of colors.
The translation equations to and from the CIE XYZ color space are provided. I am able to implement the equations it provides, but Wikipedia leaves out numeric values of the following variables: u'n, v'n, and Yn.
The article says these values depend on a "specified white point" and its "luminance". It suggests u'n = 0.2009 and v'n = 0.4610 when using the 2° observer and standard illuminant C. If I am using these, what would Yn be? I do not know enough physics to figure this out, and I have been unable to search for an answer on Google.
In the end, my question boils down to: What are satisfactory/appropriate values I can use for u'n, v'n, and Yn?
Also, I'm assuming I simply linearly interpolate each component of CIE-L*u*v* (L*, u*, and v*) separately when interpolating values in this color space. Is this correct?
These three values are left out because they depend on the color space of the specific device (e.g. display, printer, or camera). Since computer screens use an RGB color space where perceived grey is R=G=B, you can assume that the values are not device dependent. I can't remember the values by heart, so I'll edit them in later.
The human eye perceives luminance/intensity logarithmically, however, a linear interpolation is close enough, especially since you don't know what the actual min and max screen levels are.
The human eye perceives the color angle linearly; however, you need to take into account that the angle is cyclic, therefore the interpolation of the min and max angles should equal the min (or max) and not the halfway point. E.g. the average of purple and red should be purple.
I think that the perception of saturation is also logarithmic; however, it can be approximated by a linear interpolation.
Edit:
It seems like most sites use the sRGB to XYZ formulas.
http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html
http://www.easyrgb.com/index.php?X=MATH&H=02#text2
http://colormine.org/convert/rgb-to-xyz
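To pull the numbers together: for a typical sRGB display you can use the D65 white point, which gives u'n ≈ 0.1978 and v'n ≈ 0.4683, and Yn is simply the luminance of that white point, so Yn = 1.0 if you normalize Y so white is 1 (or 100 if you scale Y to 100). A sketch of XYZ -> L*u*v* and per-component linear interpolation, following the standard CIE formulas (my own code, not the book's):
// Sketch: XYZ -> CIE L*u*v* with a D65 white point, plus linear interpolation.
var Xn = 0.95047, Yn = 1.0, Zn = 1.08883;  // D65, 2-degree observer
var un = 4 * Xn / (Xn + 15 * Yn + 3 * Zn); // ~0.1978
var vn = 9 * Yn / (Xn + 15 * Yn + 3 * Zn); // ~0.4683
function xyzToLuv(x, y, z) {
  var denom = x + 15 * y + 3 * z;
  var uPrime = denom === 0 ? 0 : 4 * x / denom;
  var vPrime = denom === 0 ? 0 : 9 * y / denom;
  var yr = y / Yn;
  var L = yr > Math.pow(6 / 29, 3)
    ? 116 * Math.pow(yr, 1 / 3) - 16
    : Math.pow(29 / 3, 3) * yr;
  return [L, 13 * L * (uPrime - un), 13 * L * (vPrime - vn)];
}
// Per-component linear interpolation in L*u*v*, t in [0, 1].
function lerpLuv(a, b, t) {
  return [a[0] + (b[0] - a[0]) * t,
          a[1] + (b[1] - a[1]) * t,
          a[2] + (b[2] - a[2]) * t];
}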

A better Greyscale algorithm

I'm trying to create a spectral image with a constant grey-scale value for every row. I've written some fantastically slow code that basically tries 1000 different variations between black and white for a given hue and finds the one whose grey-scale value most closely approximates the target value, resulting in the following image:
On my laptop screen (HP) there is a very noticeable 'dip' near the blue peak, where blue pixels near the bottom of the image appear much brighter than the neighbouring purple and cyan pixels. On my second screen (Acer, which has far superior colour display) the dip is smaller, but still there.
I use the following function to compute the grey-scale approximation of a colour:
Math.Abs(targetGrey - (0.2989 * R + 0.5870 * G + 0.1140 * B))
When I convert the image to grey-scale using Paint.NET, I get a perfect black to white gradient, so that part of the code at least works.
So, question: Is this purely an artefact of the display qualities of my screens? Or can the above mentioned grey-scale algorithm be improved upon to give a visually more consistent result?
EDIT: The problem seems to be mostly monitor calibration. Not, I repeat not, a problem with the code.
I'm wondering if it's more to do with the way our eyes interpret the colors, rather than screen artifacts.
That said... I am using a very high quality screen (Dell Ultrasharp, IPS) that has incredible color reproduction and I'm not sure what you mean by "dip" in the blue peak. So either I'm just not noticing it, or my screen doesn't show the same picture and is more color-accurate.
The output looks correct given the greyscale conversion you have used (which I believe is the standard one for sRGB colour spaces).
However - there are lots of tradeoffs in colour models and one of these is that you can get results which aren't visually quite what you want. In your case, the fact that there is a very low blue weight means that a greater amount of blue is needed to get any given greyscale value, hence the blue seems to start lower, at least in terms of how the human eye perceives it.
If your objective is to get a visually appealing spectral image, then I'd suggest altering your function to make the R,G,B weights more equal, and see if you like what you get.
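As a concrete (and purely illustrative) version of that suggestion, you could blend between the standard weights and equal weights and tune the blend factor until the spectrum looks even enough for your purposes; the function name and the blend parameter below are mine, not from any standard:
// Sketch: blend between Rec. 601 weights (blend = 0) and equal weights (blend = 1).
function greyValue(r, g, b, blend) {
  var wr = 0.2989 * (1 - blend) + (1.0 / 3.0) * blend;
  var wg = 0.5870 * (1 - blend) + (1.0 / 3.0) * blend;
  var wb = 0.1140 * (1 - blend) + (1.0 / 3.0) * blend;
  return wr * r + wg * g + wb * b;
}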

RGB to monochrome conversion

How do I convert the RGB values of a pixel to a single monochrome value?
I found one possible solution in the Color FAQ. The luminance component Y (from the CIE XYZ system) captures what humans perceive as brightness in a single channel. So, use these coefficients:
mono = (0.2125 * color.r) + (0.7154 * color.g) + (0.0721 * color.b);
This MSDN article uses (0.299 * color.R + 0.587 * color.G + 0.114 * color.B);
This Wikipedia article uses (0.3* color.R + 0.59 * color.G + 0.11 * color.B);
This depends on what your motivations are. If you just want to turn an arbitrary image to grayscale and have it look pretty good, the conversions in other answers to this question will do.
If you are converting color photographs to black and white, the process can be both very complicated and subjective, requiring specific tweaking for each image. For an idea what might be involved, take a look at this tutorial from Adobe for Photoshop.
Replicating this in code would be fairly involved, and would still require user intervention to get the resulting image aesthetically "perfect" (whatever that means!).
As mentioned also, a grayscale translation (note that monochromatic images need not be in grayscale) from an RGB-triplet is subject to taste.
For example, you could cheat and extract only the blue component, simply throwing the red and green components away and copying the blue value in their stead. Another simple and generally OK solution would be to take the average of the pixel's RGB-triplet and use that value in all three components.
The fact that there's a considerable market for professional and not-very-cheap-at-all-no-sirree grayscale/monochrome converter plugins for Photoshop alone, tells that the conversion is just as simple or complex as you wish.
The logic behind converting any RGB-based picture to monochrome is not a trivial linear transformation. In my opinion such a problem is better addressed by "Color Segmentation" techniques. You could achieve color segmentation by k-means clustering.
See reference example from MathWorks site.
https://www.mathworks.com/examples/image/mw/images-ex71219044-color-based-segmentation-using-k-means-clustering
Original picture in colours.
After converting to monochrome using k-means clustering
How does this work?
Collect all pixel values from the entire image. From an image which is W pixels wide and H pixels high, you will get W * H color values. Now, using the k-means algorithm, create 2 clusters (or bins) and throw the colours into the appropriate "bins". The 2 clusters represent your black and white shades.
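A minimal sketch of that idea in plain JavaScript (k fixed at 2, no library, names are mine), just to make the steps above concrete:
// Sketch: k-means with k = 2 on an array of [r, g, b] pixels (values 0-255).
// Each pixel ends up labelled 0 (dark cluster) or 1 (light cluster).
function kMeans2(pixels, iterations) {
  var centroids = [[0, 0, 0], [255, 255, 255]]; // start from black and white
  var labels = new Array(pixels.length);
  function dist2(a, b) {
    return Math.pow(a[0] - b[0], 2) + Math.pow(a[1] - b[1], 2) + Math.pow(a[2] - b[2], 2);
  }
  for (var it = 0; it < iterations; it++) {
    // Assignment step: label each pixel with the nearest centroid.
    for (var i = 0; i < pixels.length; i++) {
      labels[i] = dist2(pixels[i], centroids[0]) <= dist2(pixels[i], centroids[1]) ? 0 : 1;
    }
    // Update step: move each centroid to the mean of its pixels.
    for (var c = 0; c < 2; c++) {
      var sum = [0, 0, 0], count = 0;
      for (var j = 0; j < pixels.length; j++) {
        if (labels[j] === c) {
          sum[0] += pixels[j][0]; sum[1] += pixels[j][1]; sum[2] += pixels[j][2];
          count++;
        }
      }
      if (count > 0) centroids[c] = [sum[0] / count, sum[1] / count, sum[2] / count];
    }
  }
  return labels; // paint label 0 pixels black and label 1 pixels white
}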
YouTube video demonstrating image segmentation using k-means:
https://www.youtube.com/watch?v=yR7k19YBqiw
Challenges with this method
The k-means clustering algorithm is susceptible to outliers. A few random pixels with a color whose RGB distance is far away from the rest of the crowd could easily skew the centroids to produce unexpected results.
Just to point out, regarding the self-selected answer: you have to LINEARIZE the sRGB values before you can apply the coefficients. This means removing the transfer curve.
To remove the power curve, divide the 8-bit R, G, and B channels by 255.0, then either use the sRGB piecewise transform, which is recommended for image processing, OR you can cheat and raise each channel to the power of 2.2.
Only after linearizing can you apply the coefficients shown (which also are not exactly correct in the selected answer).
The standard coefficients are 0.2126, 0.7152, and 0.0722. Multiply each channel by its coefficient and sum them together for Y, the luminance. Then re-apply the gamma to Y and multiply by 255, then copy to all three channels, and boom you have a greyscale (monochrome) image.
Here it is all at once in one simple line:
// Andy's Easy Greyscale in one line.
// Send it sR sG sB channels as 8 bit ints, and
// it returns three channels sRgrey sGgrey sBgrey
// as 8 bit ints that display glorious grey.
sRgrey = sGgrey = sBgrey = Math.min(Math.round(Math.pow(Math.pow(sR/255.0,2.2)*0.2126 + Math.pow(sG/255.0,2.2)*0.7152 + Math.pow(sB/255.0,2.2)*0.0722, 0.454545)*255), 255);
And that's it. Unless you have to parse hex strings....
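If you want the piecewise sRGB transfer curve mentioned above instead of the 2.2 shortcut, a more explicit sketch looks like this (written out by me from the standard formulas, not taken from any particular library):
// Sketch: greyscale via relative luminance, using the piecewise sRGB curve.
function srgbToLinear(c) { // c in [0, 1]
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linearToSrgb(c) {
  return c <= 0.0031308 ? 12.92 * c : 1.055 * Math.pow(c, 1.0 / 2.4) - 0.055;
}
function toGrey(sR, sG, sB) { // 8-bit ints in, single 8-bit grey value out
  var Y = 0.2126 * srgbToLinear(sR / 255.0)
        + 0.7152 * srgbToLinear(sG / 255.0)
        + 0.0722 * srgbToLinear(sB / 255.0);
  return Math.min(Math.round(linearToSrgb(Y) * 255), 255);
}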
