If I'm designing a tool that must "screenshot" well for printed documentation, can I easily choose colors that still look different when printed in greyscale?
EDIT: I was hoping for some easy-to-use palette or tool, but the input already given is certainly very insightful.
Yes. Your best choice would be to choose colors that have a high level of relative contrast. Frankly, it might even be easiest for you to design your UI in greyscale in the first place. Basically, you're going to want to choose colors that are either lighter or darker than the colors around them by a decent amount.
You could calculate the luminance from the RGB values:
Y = 0.2126 R + 0.7152 G + 0.0722 B
And make sure the Y values of your selected colors are distributed as evenly as possible.
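For instance, here is a minimal sketch of that check in JavaScript. It treats the 8-bit values as roughly linear (an approximation) and the palette colours are just arbitrary examples:

// Rough relative luminance of an {r, g, b} colour with 0-255 channels.
// Strictly the coefficients apply to linear RGB; gamma is ignored here
// because this is only a quick palette sanity check.
function luminance({r, g, b}) {
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Sort an example palette by Y and print it so you can eyeball the spacing.
const palette = [
  {r: 230, g: 57,  b: 70},
  {r: 69,  g: 123, b: 157},
  {r: 29,  g: 53,  b: 87},
  {r: 241, g: 250, b: 238},
];
palette
  .map(c => ({...c, y: luminance(c)}))
  .sort((a, b) => a.y - b.y)
  .forEach(c => console.log(c));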
Anyone who frequently does UI work likely knows that, for a given color hsl(H, 100%, 50%) (CSS syntax), not all values of H will produce a color suitable to sit under arbitrary black or white text. The specific fact I'm noting is that certain colors (green) appear especially bright and others (blue) appear especially dark.
Well suppose I would like a user to be able to enter a color hue and have the color always appear with a consistent brightness so that one of either white or black text is guaranteed to always be readable on top of it. I would like all colors to also maintain the most vivid level of saturation they can given the constraint on brightness.
Here is a quick example of what I've tried so far. I start with a grid of squares like this, rendered using a bunch of HTML div elements. Essentially these are hue values roughly from 0 to 360 along the horizontal axis and lightness values roughly from 0% to 100% along the vertical axis. All saturation values are set to 100%.
Using a JS library called chroma.js, I then process all colors using the color.luminance function, whose description seems to match what I'm looking for. I just passed the lightness of the HSL value in as the parameter to the function. I don't know for sure that this is the best way to accomplish my goal, though, since I'm not familiar with all the terminology at play here. Please note that my choice of this library is by no means a constraint on how I want to go about this; it just represents my attempt at solving the problem.
The colors certainly now have a more consistent lightness, but the spectrum now seems particularly vivid around the orange-to-cyan area and particularly dull everywhere else. Also, the colors seem to drop away from black very quickly at the top.
Hopefully this example helps to express what I'm trying to accomplish here. Does anyone know what the best way to go about this is?
I found the solution! Check out HSLuv. It balances out all the hues in the spectrum so that at any given saturation and lightness, all hues will have the exact same perceived brightness to the human eye.
This solved my problem because now I can just set my text color to white (for example) and then as long as the text is readable against a certain HSLuv lightness it is guaranteed that it will be readable against any hue and saturation used in combination with that lightness. Magic.
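For illustration, a minimal sketch of that setup. It assumes the older functional API of the hsluv npm package (hsluvToHex taking an [h, s, l] array); recent releases expose a class-based API instead, so adapt as needed. The fixed lightness of 40 is just an example value:

// Keep the HSLuv lightness fixed so white text stays readable for any hue/saturation.
const { hsluvToHex } = require('hsluv'); // functional API of older hsluv releases (assumption)

const FIXED_LIGHTNESS = 40; // dark enough that white text remains readable (example value)

function swatchForHue(hue, saturation = 100) {
  return hsluvToHex([hue, saturation, FIXED_LIGHTNESS]);
}

for (let h = 0; h < 360; h += 30) {
  console.log(h, swatchForHue(h)); // every swatch has the same perceived lightness
}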
Given a set of colors, say colors on this webpage, and another palette of an equal number of colors, what would be a good way to map the former to the latter while:
preserving contrast between individual colors
preserving the relative intensity of the colors (not sure how important this would be)
Essentially, this webpage should be rendered in the new color palette while still legible.
What color space would be appropriate for this task?
Can you also point me to any related work?
Update: The mapping can surely be done manually, but I intend to automate it for any given set of colors and palette, so I'm looking for an algorithmic approach, or rather an understanding of what properties need to be preserved for legibility and beauty.
In general, I think it is better to transform the colours into HSV, remap only the hue, and then transform back to the original encoding. We use brightness (dark elements) and saturation (unselected buttons, etc.) as semantic cues, so it is better to preserve them.
That way you also maintain the contrast.
But note that ordinary HSV is not a true perceptual model: the most common formulas are simply defined over the RGB gamut, chosen to give maximum range and to keep the parameters independent. In reality, the maximum perceived brightness depends on the hue (if you want to compare V across different colours), among other effects.
These and other perceptual effects (not captured by any HSV model) will affect the visual result, so while you can do the remapping programmatically (e.g. with JavaScript that reads and overrides all colours, though that is a topic for other questions), if you build a website professionally you will still have to tweak some elements by hand.
Note: recent versions of CSS, and many CSS preprocessors, let you express colours as HSL values and apply saturation, lightness and hue changes programmatically.
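As a rough sketch of the hue-remapping idea, assuming the chroma.js library and using its HSL conversion (the answer describes HSV, but the principle of changing only the hue is the same). The hueMap function is a placeholder for whatever hue correspondence you derive between the two palettes:

// Remap only the hue of a colour, keeping its saturation and lightness, via chroma.js.
const chroma = require('chroma-js');

function remapHue(cssColor, hueMap) {
  const [h, s, l] = chroma(cssColor).hsl();
  const newHue = hueMap(isNaN(h) ? 0 : h); // greys come back with an undefined hue
  return chroma.hsl(newHue, s, l).hex();
}

// Example: shift every hue by 120 degrees.
console.log(remapHue('#cc3366', h => (h + 120) % 360));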
This will likely seem like a very easy thing I'm trying to do but Google search has not turned up exactly what I'm looking for and I'd like to do this correctly.
Essentially I need to luminance-match two BMPs. They are simple circles (125x125 pixels) and their original color is known to me only by their RGB value (0-255 range) of 255,0,0. I need to find an RGB value of gray that has the same luminance as these circles.
All the other luminance/brightness-matching tutorials I have seen are for pictures that include a variety of hues, brightnesses, etc., and I am not sure if those techniques will work in this (admittedly simpler) case.
I am hoping to be able to just figure out the RGB values so I can input them into an experiment builder program but I do have access to GIMP if any of its tools are needed or will help.
I apologize for this likely easy question but I know little of graphics, brightness measures, etc. I appreciate any help that can be provided.
ADDENDUM: I actually think this would be a good place to ask one additional question. Is there a formula for conversion of candela to (perhaps approximate?) RGB values? I'm basing these color values loosely off of candela values and would love to know if an equation/way of equating the two beyond guesswork exists.
You need to be careful about luminance-matching digital images, because the actual luminance depends on how they're displayed. In particular, you want to watch out for "gamma correction", which is a nonlinear mapping between the RGB values and the actual display brightness. Some images may have an internal "gamma" value associated with the data itself, and many display devices effectively apply a "gamma" to the RGB values they display.
However, for an image stored and displayed linearly (with an effective gamma of 1), there is a standard luminance measure for RGB values:
Y = 0.2126 * R + 0.7152 * G + 0.0722 * B
There are, actually, a number of standards, with different weights for the linear R, G, and B components. However, if you aren't sure exactly how your image will be displayed, you might as well pick one and stick with it...
Anyway, you can use this to solve your specific problem, as follows: you want a grey value (r,g,b) = (x,x,x) with the same luminance as a pure-red value of 255. Conveniently, the three luminance constants sum to 1.0. This gives you the following formula:
Y == 1.0 * x == 0.2126 * 255
--> x ~ 54
If you want to match a different color, or use different luminance weights (which still sum to 1.0), the procedure is the same: just weight the RGB values according to the luminance formula, then pick a grey value equal to the luminance.
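A small sketch of that procedure in JavaScript, assuming linear 0-255 RGB values as this answer does:

// Grey value with the same (linear) luminance as an arbitrary RGB colour.
// Because the weights sum to 1.0, the luminance itself is the grey level.
function matchingGrey(r, g, b) {
  const y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
  const x = Math.round(y);
  return [x, x, x];
}

console.log(matchingGrey(255, 0, 0)); // [54, 54, 54]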
I believe the answer already given is misleading (SO doesn't let me comment). As mentioned, the formula given applies to linear intensities, and you should watch out for gamma; see e.g. here:
http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.html#luminance
Thus, the worked example should either use coefficients that account for gamma or compensate for the gamma by hand, which it doesn't. Yes, the image could be linear (so that you have actual intensities), but judging from the description the chance that it is is close to zero.
These coefficients yield 'luma', not luminance, but that is what you have asked for anyway. See:
http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html#RTFToC11
To summarize:
luma = 0.299 R + 0.587 G + 0.114 B
(r,g,b) = (luma, luma, luma)
The material should also help with your addendum question. I've found it to be very reliable, which is clearly an exception in this field.
I would like to sort a one-dimensional list of colors so that colors that a typical human would perceive as "like" each other are near each other.
Obviously this is a difficult or perhaps impossible problem to get "perfectly", since colors are typically described with three dimensions, but that doesn't mean that there aren't some sorting methods that look obviously more natural than others.
For example, sorting by RGB doesn't work very well, as it will sort in the following order, for example:
(1) R=254 G=0 B=0
(2) R=254 G=255 B=0
(3) R=255 G=0 B=0
(4) R=255 G=255 B=0
That is, it will alternate those colors red, yellow, red, yellow, with the two "reds" being essentially imperceptibly different from each other, and the two yellows also being imperceptibly different from each other.
But sorting by HLS works much better, generally speaking, and I think HSL even better than that; with either, the reds will be next to each other, and the yellows will be next to each other.
But HLS/HSL has some problems, too; things that people would perceive as "black" could be split far apart from each other, as could things that people would perceive as "white".
Again, I understand that I pretty much have to accept that there will be some splits like this; I'm just wondering if anyone has found a better way than HLS/HSL. And I'm aware that "better" is somewhat arbitrary; I mean "more natural to a typical human".
For example, a vague thought I've had, but have not yet tried, is perhaps "L is the most important thing if it is very high or very low", but otherwise it is the least important. Has anyone tried this? Has it worked well? What specifically did you decide "very low" and "very high" meant? And so on. Or has anyone found anything else that would improve upon HSL?
I should also note that I am aware that I can define a space-filling curve through the cube of colors, and order them one-dimensionally as they would be encountered while travelling along that curve. That would eliminate perceived discontinuities. However, it's not really what I want; I want decent overall large-scale groupings more than I want perfect small-scale groupings.
Thanks in advance for any help.
If you want to sort a list of colors in one dimension, you first have to decide by what metric you are going to sort them. What makes the most sense to me is perceived brightness (related question).
I have come across 4 algorithms to sort colors by brightness and compared them. Here is the result.
I generated colors in a cycle where only about every 400th color was used. Each color is represented by 2x2 pixels, and the colors are sorted from darkest to lightest (left to right, top to bottom).
1st picture - Luminance (relative)
0.2126 * R + 0.7152 * G + 0.0722 * B
2nd picture - http://www.w3.org/TR/AERT#color-contrast
0.299 * R + 0.587 * G + 0.114 * B
3rd picture - HSP Color Model
sqrt(0.299 * R^2 + 0.587 * G^2 + 0.114 * B^2)
4th picture - WCAG 2.0 SC 1.4.3 relative luminance and contrast ratio formula
A pattern can sometimes be spotted in the 1st and 2nd pictures, depending on the number of colors in one row. I never spotted any pattern in the pictures from the 3rd or 4th algorithm.
If I had to choose, I would go with algorithm number 3, since it's much easier to implement and about 33% faster than the 4th.
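For reference, a minimal sketch of option 3 (the HSP formula) used as a sort key, assuming colours are given as {r, g, b} objects with 0-255 channels:

// Perceived brightness per the HSP colour model.
function hspBrightness({r, g, b}) {
  return Math.sqrt(0.299 * r * r + 0.587 * g * g + 0.114 * b * b);
}

// Sort darkest to lightest.
function sortByBrightness(colors) {
  return [...colors].sort((a, b) => hspBrightness(a) - hspBrightness(b));
}

console.log(sortByBrightness([
  {r: 255, g: 255, b: 0},
  {r: 0,   g: 0,   b: 128},
  {r: 128, g: 128, b: 128},
]));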
You cannot do this without reducing the 3 color dimensions to a single measurement. There are many (infinite) ways of reducing this information, but it is not mathematically possible to do this in a way that ensures that two data points near each other on the reduced continuum will also be near each other in all three of their component color values. As a result, any formula of this type will potentially end up grouping dissimilar colors.
As you mentioned in your question, one way to sort of do this would be to fit a complex curve through the three-dimensional color space occupied by the data points you're trying to sort, and then reduce each data point to its nearest location on the curve and then to that point's distance along the curve. This would work, but in each case it would be a solution custom-tailored to a particular set of data points (rather than a generally applicable solution). It would also be relatively expensive (maybe), and simply wouldn't work on a data set that was not nicely distributed in a curved-line sort of way.
A simpler alternative (that would not work perfectly) would be to choose two "endpoint" colors, preferably on opposite sides of the color wheel. So, for example, you could choose Red as one endpoint color and Blue as the other. You would then convert each color data point to a value on a scale from 0 to 1, where a color that is highly Reddish would get a score near 0 and a color that is highly Bluish would get a score near 1. A score of .5 would indicate a color that either has no Red or Blue in it (a.k.a. Green) or else has equal amounts of Red and Blue (a.k.a. Purple). This approach isn't perfect, but it's the best you can do with this problem.
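One plausible way to write that score (an assumption; the answer doesn't pin down a formula) is to centre it on 0.5 and push it towards 0 with more red and towards 1 with more blue:

// 0 = strongly red, 1 = strongly blue, 0.5 = neither (green) or both (purple).
// This particular formula is just one way to realise the idea described above.
function redBlueScore(r, g, b) {
  return 0.5 + (b - r) / (2 * 255);
}

console.log(redBlueScore(255, 0, 0));   // 0   (red)
console.log(redBlueScore(0, 0, 255));   // 1   (blue)
console.log(redBlueScore(0, 255, 0));   // 0.5 (green)
console.log(redBlueScore(128, 0, 128)); // 0.5 (purple)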
There are several standard techniques for reducing multiple dimensions to a single dimension with some notion of "proximity".
I think you should in particular check out the z-order transform.
You can implement a quick version of this by interleaving the bits of your three colour components, and sorting the colours based on this transformed value.
The following Java code should help you get started:
public static int zValue(int r, int g, int b) {
    // interleave the bits of the three components: r in bit 0, g in bit 1, b in bit 2, and so on
    return split(r) + (split(g) << 1) + (split(b) << 2);
}

public static int split(int a) {
    // spread the lowest 10 bits out over the lowest 30 bits,
    // leaving two zero bits between each original bit (masks are octal literals)
    a = (a | (a << 12)) & 00014000377;
    a = (a | (a << 8))  & 00014170017;
    a = (a | (a << 4))  & 00303030303;
    a = (a | (a << 2))  & 01111111111;
    return a;
}
There are two approaches you could take. The simple approach is to distil each colour into a single value, and the list of values can then be sorted. The complex approach would depend on all of the colours you have to sort; perhaps it would be an iterative solution that repeatedly shuffles the colours around trying to minimise the "energy" of the entire sequence.
My guess is that you want something simple and fast that looks "nice enough" (rather than trying to figure out the "optimum" aesthetic colour sort), so the simple approach is enough for you.
I'd say HSL is the way to go. Something like
sortValue = L * 5 + S * 2 + H
assuming that H, S and L are each in the range [0, 1].
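A quick sketch of that sort, assuming chroma.js for the HSL conversion (its hue comes back in degrees and is NaN for achromatic colours, so it is normalised here):

// Sort colours by L*5 + S*2 + H, with H, S, L all in [0, 1].
const chroma = require('chroma-js');

function sortValue(cssColor) {
  let [h, s, l] = chroma(cssColor).hsl();
  h = isNaN(h) ? 0 : h / 360; // hue is in degrees, NaN for greys
  return l * 5 + s * 2 + h;
}

const sorted = ['#ff0000', '#222222', '#ffee00', '#3366cc']
  .sort((a, b) => sortValue(a) - sortValue(b));
console.log(sorted);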
Here's an idea I came up with after a couple of minutes' thought. It might be crap, or it might not even work at all, but I'll spit it out anyway.
Define a distance function on the space of colours, d(x, y) (where the inputs x and y are colours and the output is perhaps a floating-point number). The distance function you choose may not be terribly important. It might be the sum of the squares of the differences in R, G and B components, say, or it might be a polynomial in the differences in H, L and S components (with the components differently weighted according to how important you feel they are).
Then you calculate the "distance" of each colour in your list from each other, which effectively gives you a graph. Next you calculate the minimum spanning tree of your graph. Then you identify the longest path (with no backtracking) that exists in your MST. The endpoints of this path will be the endpoints of the final list. Next you try to "flatten" the tree into a line by bringing points in the "branches" off your path into the path itself.
Hmm. This might not work all that well if your MST ends up in the shape of a near-loop in colour space. But maybe any approach would have that problem.
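For what it's worth, here is a rough sketch of the first two steps (a distance function plus a minimum spanning tree via Prim's algorithm); finding the longest path in the tree and flattening the branches into it is left out:

// Squared-RGB distance between two colours (one possible choice of d).
function dist(a, b) {
  return (a.r - b.r) ** 2 + (a.g - b.g) ** 2 + (a.b - b.b) ** 2;
}

// Prim's algorithm: returns the MST as a list of [i, j] index pairs.
function minimumSpanningTree(colors) {
  const n = colors.length;
  const inTree = new Array(n).fill(false);
  const edges = [];
  inTree[0] = true;
  for (let added = 1; added < n; added++) {
    let best = null;
    for (let i = 0; i < n; i++) {
      if (!inTree[i]) continue;
      for (let j = 0; j < n; j++) {
        if (inTree[j]) continue;
        const d = dist(colors[i], colors[j]);
        if (best === null || d < best.d) best = { i, j, d };
      }
    }
    inTree[best.j] = true;
    edges.push([best.i, best.j]);
  }
  return edges;
}

console.log(minimumSpanningTree([
  {r: 255, g: 0, b: 0},
  {r: 250, g: 10, b: 0},
  {r: 0, g: 0, b: 255},
]));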
How do I convert the RGB values of a pixel to a single monochrome value?
I found one possible solution in the Color FAQ. The luminance component Y (from the CIE XYZ system) captures what is most perceived by humans as color in one channel. So, use those coefficients:
mono = (0.2125 * color.r) + (0.7154 * color.g) + (0.0721 * color.b);
This MSDN article uses (0.299 * color.R + 0.587 * color.G + 0.114 * color.B);
This Wikipedia article uses (0.3* color.R + 0.59 * color.G + 0.11 * color.B);
This depends on what your motivations are. If you just want to turn an arbitrary image to grayscale and have it look pretty good, the conversions in other answers to this question will do.
If you are converting color photographs to black and white, the process can be both very complicated and subjective, requiring specific tweaking for each image. For an idea what might be involved, take a look at this tutorial from Adobe for Photoshop.
Replicating this in code would be fairly involved, and would still require user intervention to get the resulting image aesthetically "perfect" (whatever that means!).
As also mentioned, a grayscale translation (note that monochrome images need not be in grayscale) from an RGB triplet is a matter of taste.
For example, you could cheat and extract only the blue component, simply throwing the red and green components away and copying the blue value in their stead. Another simple and generally OK solution would be to take the average of the pixel's RGB triplet and use that value in all three components.
The fact that there's a considerable market for professional and not-very-cheap-at-all-no-sirree grayscale/monochrome converter plugins for Photoshop alone tells you that the conversion is just as simple or as complex as you wish.
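Minimal sketches of those two cheap conversions, assuming a pixel given as an {r, g, b} object:

// "Cheat": use the blue channel for all three components.
function blueOnly({r, g, b}) {
  return {r: b, g: b, b: b};
}

// Simple average of the three channels.
function averageGrey({r, g, b}) {
  const avg = Math.round((r + g + b) / 3);
  return {r: avg, g: avg, b: avg};
}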
The logic behind converting an arbitrary RGB picture to monochrome is not a trivial linear transformation. In my opinion such a problem is better addressed by "color segmentation" techniques. You could achieve color segmentation with k-means clustering.
See reference example from MathWorks site.
https://www.mathworks.com/examples/image/mw/images-ex71219044-color-based-segmentation-using-k-means-clustering
Original picture in colours.
After converting to monochrome using k-means clustering
How does this work?
Collect all pixel values from the entire image. From an image that is W pixels wide and H pixels high, you will get W*H color values. Now, using the k-means algorithm, create 2 clusters (or bins) and throw the colours into the appropriate bins. The 2 clusters represent your black and white shades.
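A rough sketch of that step (a plain Lloyd-style k-means with k = 2 over RGB pixels; a real implementation would read the pixels from the image and handle empty clusters more carefully):

// k-means with k = 2 over an array of {r, g, b} pixels.
// Returns the two centroids; each pixel is then mapped to whichever
// centroid it is closest to (the "black" and "white" shades).
function kMeans2(pixels, iterations = 10) {
  let c0 = pixels[0];
  let c1 = pixels[pixels.length - 1];
  const dist2 = (a, b) =>
    (a.r - b.r) ** 2 + (a.g - b.g) ** 2 + (a.b - b.b) ** 2;

  for (let it = 0; it < iterations; it++) {
    const sums = [{r: 0, g: 0, b: 0, n: 0}, {r: 0, g: 0, b: 0, n: 0}];
    for (const p of pixels) {
      const s = dist2(p, c0) <= dist2(p, c1) ? sums[0] : sums[1];
      s.r += p.r; s.g += p.g; s.b += p.b; s.n++;
    }
    if (sums[0].n > 0) c0 = {r: sums[0].r / sums[0].n, g: sums[0].g / sums[0].n, b: sums[0].b / sums[0].n};
    if (sums[1].n > 0) c1 = {r: sums[1].r / sums[1].n, g: sums[1].g / sums[1].n, b: sums[1].b / sums[1].n};
  }
  return [c0, c1];
}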
YouTube video demonstrating image segmentation using k-means:
https://www.youtube.com/watch?v=yR7k19YBqiw
Challenges with this method
The k-means clustering algorithm is susceptible to outliers. A few random pixels with a color whose RGB distance is far away from the rest of the crowd could easily skew the centroids to produce unexpected results.
Just to point out, regarding the self-selected answer: you have to LINEARIZE the sRGB values before you can apply the coefficients. This means removing the transfer curve.
To remove the power curve, divide the 8-bit R, G and B channels by 255.0, then either use the sRGB piecewise transform, which is recommended for image processing, OR you can cheat and raise each channel to the power of 2.2.
Only after linearizing can you apply the coefficients shown, (which also are not exactly correct in the selected answer).
The standard coefficients are 0.2126, 0.7152 and 0.0722. Multiply each channel by its coefficient and sum them together for Y, the luminance. Then re-apply the gamma to Y, multiply by 255, copy the result to all three channels, and boom: you have a greyscale (monochrome) image.
Here it is all at once in one simple line:
// Andy's Easy Greyscale in one line.
// Send it sR sG sB channels as 8 bit ints, and
// it returns three channels sRgrey sGgrey sBgrey
// as 8 bit ints that display glorious grey.
sRgrey = sGgrey = sBgrey = Math.min(Math.round(Math.pow(Math.pow(sR/255.0,2.2)*0.2126 + Math.pow(sG/255.0,2.2)*0.7152 + Math.pow(sB/255.0,2.2)*0.0722, 0.454545)*255), 255);
And that's it. Unless you have to parse hex strings....
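And for completeness, a sketch of the piecewise sRGB transform mentioned above (the recommended route, instead of the 2.2 cheat used in the one-liner):

// Piecewise sRGB decode/encode (IEC 61966-2-1).
function srgbToLinear(c8) {              // 8-bit channel -> linear 0..1
  const c = c8 / 255.0;
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linearToSrgb(v) {               // linear 0..1 -> 8-bit channel
  const c = v <= 0.0031308 ? v * 12.92 : 1.055 * Math.pow(v, 1 / 2.4) - 0.055;
  return Math.round(Math.min(Math.max(c, 0), 1) * 255);
}

function toGrey(sR, sG, sB) {
  const y = 0.2126 * srgbToLinear(sR) +
            0.7152 * srgbToLinear(sG) +
            0.0722 * srgbToLinear(sB);
  const grey = linearToSrgb(y);
  return [grey, grey, grey];
}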