How do hexadecimal color values work in low-level computing?

I assume it depends on the computer display...
but does it depend on the operating system??
For example, the color codes #ff0000 and #2e2e2e - three bytes used each, obviously..
But how is this data (the color codes) interpreted at the lowest level??
How does an application render color at the lowest level??
Thanks in advance!!

These codes are a compact representation of three integers between 0 and 255: Red, Green, and Blue.
They are rendered by the video card using the RGB color model.

RGB is treated as red, green, blue, with each value being an integer from 0 to 255 inclusive. You could represent red, for example, as (255,0,0) or #FF0000, or in many other ways.
Whatever software is using the color tells your operating system's graphics drivers what to output to your monitor. The details vary from OS to OS, but the signal that comes out of the port has to be standardized for the display hardware.
http://en.wikipedia.org/wiki/RGB_color_model#RGB_devices
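As a quick illustration (not from the original answer), here is a minimal Python sketch of that three-byte layout; the helper name is made up:

def hex_to_rgb(code):
    # Parse "#RRGGBB" into one 24-bit integer, then split off each byte.
    value = int(code.lstrip("#"), 16)
    r = (value >> 16) & 0xFF  # high byte: red
    g = (value >> 8) & 0xFF   # middle byte: green
    b = value & 0xFF          # low byte: blue
    return r, g, b

print(hex_to_rgb("#ff0000"))  # (255, 0, 0)
print(hex_to_rgb("#2e2e2e"))  # (46, 46, 46)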

Sometimes it can depend on the operating system. Older versions of Mac OS and NeXT used RGB values with a gamma coefficient different from the one a monitor naturally produces; their video systems would convert these values before displaying them. Today you will mostly encounter sRGB, which was an attempt by Microsoft and HP to specify the average display system of the time. Sometimes you'll run into other systems such as Adobe RGB, which can represent slightly more colors than sRGB.
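To make "gamma" concrete, here is a minimal Python sketch of the published sRGB decoding formula (not part of the original answer), which maps a stored channel value to linear light:

def srgb_to_linear(c):
    # c is one channel scaled to 0.0-1.0; result is linear light intensity.
    if c <= 0.04045:
        return c / 12.92                  # linear segment near black
    return ((c + 0.055) / 1.055) ** 2.4   # power-law ("gamma") segment

print(round(srgb_to_linear(0.5), 3))  # 0.214: a 50% sRGB grey is only ~21% linear light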

Computers use the RGB color model. In RGB, everything starts off as black and then you add some red/green/blue on top of that. The more of each color you add, the brighter it gets. Adding an equal amount of red/green/blue will create shades of grey (white if the maximum possible of all three colors is added).
This closely matches how the human eye picks up colors, so it works well (no light is black, the maximum light we can see is blinding white, and light can have different wavelengths that specify its colour). If we see red, green, and blue lights right next to each other, they appear white to our eye. Look at your computer screen under a magnifying glass and you will see it has red, green, and blue dots, which all turn on when the screen shows white.
The color codes you mentioned are "hex" color codes: three two-digit hex numbers joined together. #ff0000 is "ff red", "00 green", "00 blue". ff is the highest possible two-digit number in hex (it works out to 255 in standard decimal format), while 00 is the lowest possible two-digit number (0 as a decimal number).
#2e2e2e is 2e for each of red/green/blue, so it creates a shade of grey. 2e is hex for 46, which is much closer to 0 than to 255, so it creates a dark grey.
Hex is a base-16 number format, compared to the base-10 decimal format we are used to. This means you can write larger numbers with fewer digits, and base 16 is easy for video hardware to work with, because each hex digit is exactly four bits. The possible digits for the hex system are:
f, which is 15 in decimal
e, which is 14
d, which is 13
c, which is 12
b, which is 11
a, which is 10
9 through 0, which are the same as in decimal
More info about hex: http://en.wikipedia.org/wiki/Hexadecimal
More info about RGB: http://en.wikipedia.org/wiki/RGB_color_model
And more info about "web colors", which is what you're using: http://en.wikipedia.org/wiki/Web_colors
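If you want to check the place-value arithmetic yourself, here is a small Python sketch (the helper name is made up):

digits = "0123456789abcdef"

def hex_pair_to_decimal(pair):
    # Each hex place is worth 16x the place to its right.
    return digits.index(pair[0]) * 16 + digits.index(pair[1])

print(hex_pair_to_decimal("ff"))  # 255
print(hex_pair_to_decimal("2e"))  # 46, the dark grey from #2e2e2e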

Related

Why are RGB values represented as 8-bit integers?

Each RGB value is represented as an 8-bit integer (0-255). Why not store it as a decimal (floating-point) number to increase the color space? That should give a more realistic-looking picture.
Colours are sometimes represented by three floats instead of as 24 bits.
The 8-bit standard is historical: it goes back to the days of 8-bit architectures. Three bytes can give the colour of a pixel without wasting any memory, while keeping the same number of bits for each colour component.
This has some advantages: you can write the colour as a 6 digit hexadecimal number and have some idea of what the colour will be:
0xff0000 : Red
0x00ff00 : Green
0x0000ff : Blue
And so on. This is quite compact and efficient and has stuck around, as a colour can be held in a single integer value instead of three floats.
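A minimal sketch of that single-integer packing (Python, with made-up helper names):

def pack_rgb(r, g, b):
    # Shift each 8-bit channel into its byte of a 24-bit integer: 0xRRGGBB.
    return (r << 16) | (g << 8) | b

def unpack_rgb(color):
    # Mask each byte back out.
    return (color >> 16) & 0xFF, (color >> 8) & 0xFF, color & 0xFF

print(hex(pack_rgb(255, 0, 0)))         # 0xff0000
print(unpack_rgb(pack_rgb(255, 0, 0)))  # (255, 0, 0)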

How to select N Colors from the Spectrum?

Given the RGB color white #ffffff, how would one split this into N colors?
Imagine a rainbow; it has 7 colors.
How would one programmatically yield these 7 colors? And if you can arrive at 7 colors in this known spectrum, how would one yield, say, 70 colors of the same spectrum in the same relative order? Meaning that the rainbow would contain 10 "steps" between Orange and Yellow, for example: Orange and Yellow are no longer side by side but are separated by an interpolation of the colors between them.
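One common approach (a sketch, not an answer from the original thread): step the hue channel of HSV evenly and convert each step back to RGB. Capping the hue at 0.75 so the ramp ends at violet instead of wrapping back around to red is an assumption:

import colorsys

def spectrum(n):
    # n evenly spaced hues from red (0.0) to violet (0.75), full saturation and value.
    colors = []
    for i in range(n):
        h = 0.75 * i / max(n - 1, 1)
        r, g, b = colorsys.hsv_to_rgb(h, 1.0, 1.0)
        colors.append("#%02x%02x%02x" % (round(r * 255), round(g * 255), round(b * 255)))
    return colors

print(spectrum(7))        # seven rainbow-like stops
print(len(spectrum(70)))  # 70 finer steps in the same relative order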

Colours - R,G,B values. Making a colour appear "lighter" to the human eye. Can someone explain this to me please?

Just a short explanation of how I came to this question: I have a Ruby module, basically a hash, that takes an HTML colour name like "slateblue" and gives me back an Array holding the R,G,B values, like [106, 90, 205] for slateblue.
I googled how to turn these R,G,B values into a lighter colour (for a mouse-hover effect), and several answers to similar questions suggested just increasing the R,G,B values. My current solution, which is a hack, is to add something like +20 to each of the R,G,B values (capped at 255), and then convert the result into a hex string like #FF0000.
This seems to work OK-ish, but here is the thing: I have absolutely no understanding of why it works.
Is it so that 0 always denotes the lowest value of R/G/B and 255 the highest? If so, why is it capped at 255 and not at, I don't know, 1024 or some other arbitrary number?
Using 8 bits per color channel, one each for red, green, and blue, yields a large number of colors (2^24 = 16,777,216) and is sufficient for most applications. Note that there are other color formats with higher precision, though.
0 is used for black, while 255 (the maximum that fits in 8 bits) denotes "full-on" color.
Adding a specific number to each channel moves the entire color toward (255, 255, 255), or White. If you would like to be more exact in your lightening of the color, you might try converting your RGB color to HSL, doing your addition to the light component only, then converting back to RGB.
You can start research of HSL here: http://en.wikipedia.org/wiki/HSL_and_HSV
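A minimal sketch of that approach in Python (the asker's code is Ruby, so treat this as pseudocode for the idea); note that colorsys orders the tuple hue, lightness, saturation:

import colorsys

def lighten(rgb, amount=0.1):
    # Convert 0-255 RGB to HLS, raise only the lightness, convert back.
    r, g, b = (c / 255.0 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    l = min(1.0, l + amount)  # cap at full lightness
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(c * 255) for c in (r, g, b))

print(lighten((106, 90, 205)))  # slateblue, one step lighter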

Calculate the apparent difference in color between two HSI color values

I have two color values in HSI (Hue, Saturation, and Intensity), and I want a number which represents the visual difference between the two colors. Hue is a number between 0 and 360 inclusive. Saturation is 0 to 1, and Intensity is 0 to 1.
Let's consider, for example, Red and Blue at a Saturation of 100% and an Intensity of 100%.
At this website there is a way to display the color by entering the following text.
red is:
hsv 0, 100%, 100%
blue is:
hsv 240, 100%, 100%
Clearly these are two very different colors, so a simple way to calculate the difference between them is to use the Hue component and take the absolute difference in hue, which would be 120 (360 - 240), since hue wraps around and 360 is the same as 0.
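In code, that wrap-around hue distance is just (a small Python sketch added for clarity):

def hue_diff(h1, h2):
    # Smallest angle between two hues on the 0-360 colour wheel.
    d = abs(h1 - h2) % 360
    return min(d, 360 - d)

print(hue_diff(0, 240))  # 120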
The problem arises when the Saturation or Intensity is very dark or light; consider a very dark red and blue.
dark red is:
hsv 0, 100%, 20%
dark blue is:
hsv 240, 100% 20%
Obviously the visual difference between these two colors is less than between the bright red and blue, as a human would state if asked to compare them. What I mean here is: ask a friend "Which pair of colors is most different?" and they will likely point to the bright red and blue pair.
I am trying to calculate the difference between two colors as a human would perceive it. If a human being looked at two colors a and b, then at two colors c and d, they could tell which pair is more different. Firstly, if the colors are bright (but not too bright), then the difference is hue-based. If the colors are too bright, such as white, or too dark, such as black, or too grey, then the differences are smaller.
It should be possible to have a function diff such that x=diff(a,b) and y=diff(c,d), where I can compare x and y to find the most different or least different pair of colors.
The WCAG 2.0 and 1.0 guidelines both reference equations for perceived color difference:
1. contrast ratio (http://www.w3.org/TR/2008/REC-WCAG20-20081211/Overview.html#contrast-ratiodef)
2. brightness difference (http://www.w3.org/TR/AERT#color-contrast)
3. color difference (http://www.w3.org/TR/AERT#color-contrast)
I tried the Delta-E method (http://colormine.org/delta-e-calculator/), but it is a quasimetric, so the difference measurement may change depending on the order in which you pass the two colors. If in your example you expect diff(a,b) to always equal diff(b,a), then this is not what you want (there may be different algorithms under this name that aren't quasimetric, but I haven't looked into it past that site).
I think the color difference metric comes closest to matching my expectations of a color difference measurement. For your example it will yield diff(a,b) > diff(c,d).
You can test it out for yourself using the tool at this website: http://www.dasplankton.de/ContrastA/
The general answer seems to be what David van Driessche said: use Delta E. I found some Java code here: https://github.com/kennyliou/GAI
This is an answer to the question, though maybe not the best one.
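For reference, the simplest Delta E variant (CIE76) is symmetric by construction: it is just the Euclidean distance in L*a*b* space. A Python sketch, assuming you have already converted your colors to Lab (the example coordinates for red and blue are approximate):

import math

def delta_e_cie76(lab1, lab2):
    # Euclidean distance in L*a*b*; diff(a, b) == diff(b, a) by construction.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

red = (53.2, 80.1, 67.2)     # approximate Lab of sRGB red
blue = (32.3, 79.2, -107.9)  # approximate Lab of sRGB blue
print(round(delta_e_cie76(red, blue)))  # ~176, a large difference, as expected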

How is the 12-bit RGB color format organized?

I know that the 12-bit RGB color palette format has 4 bits for each color R, G, B.
But how is the raw data structured? I.e.:
1.) Does each color have its own byte, with 4 padding bits and 4 data bits for the color,
or
2.) Is it a packed format, i.e. Byte 1 = (padding bits + 4 R bits), Byte 2 = (4 G bits + 4 B bits)?
How is the packing done?
Thank You.
-AD
Where?
In memory, it can be anything; most likely it would be held in a 3-char array or a struct...
On disk, since space is such a big deal, it'd likely be held in an even tighter format: 3 bytes representing two adjacent pixels, [RG][BR][GB], packed/unpacked on write/read.
Still, it all depends on the specs of your format/platform.
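A sketch of that tighter two-pixels-in-three-bytes packing (Python, made-up helper; assumes each channel is already a 4-bit value):

def pack_two_pixels(p1, p2):
    # p1 and p2 are (r, g, b) tuples of 4-bit values; layout is [RG][BR][GB].
    r1, g1, b1 = p1
    r2, g2, b2 = p2
    return bytes([
        (r1 << 4) | g1,  # byte 1: R1 high nybble, G1 low
        (b1 << 4) | r2,  # byte 2: B1 high, R2 low
        (g2 << 4) | b2,  # byte 3: G2 high, B2 low
    ])

print(pack_two_pixels((0xF, 0, 0), (0, 0, 0xF)).hex())  # 'f0000f': a red pixel, then a blue pixel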
I know that the 12-bit RGB color palette format has 4 bits for each color R, G, B.
Are you sure it's a palette format?
Generally, a palette format is made up of two distinct parts: the palette itself and the image. The palette is a look-up table of colour values, the format of which is implementation-specific. The image is then a list of index values into the palette.
Palette formats are normally used to save memory, or sometimes to do neon-style animation (the Windows 95 loading screen had that blue strip at the bottom: the image was written to the screen once, and then some of the colours in the palette were rotated every few ms).
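A toy illustration of that two-part structure (the values are made up):

palette = [0x000, 0xF00, 0x0F0, 0x00F]  # look-up table: black, red, green, blue (4-bit channels)
image = [0, 1, 1, 2, 3, 0]              # the image stores only small indices into the palette

pixels = [palette[i] for i in image]    # resolved to actual colours at display time
print([hex(p) for p in pixels])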
On a CD+G, each twelve-bit palette entry is stored using the six LSBs of two consecutive bytes; there is no wasted space, because the upper two bits are used for storing timing information. I think the Amiga just used three of the four nybbles in a 16-bit word. I'm not sure what other formats you might be thinking of.
On the Amiga the system is very simple:
$0fff = white... in decimal that is 0 (unused), 15 (max red), 15 (max green), 15 (max blue).
$0a04 = red with a hint of blue, i.e. red-violet: a mix of red at strength 10 and blue at strength 4, while green isn't added at all.
R, G and B are each 4-bit numbers, and $FFF = 4095, so there are 4096 possible values counting black ($000).
Each colour is 4 bits. Three times 4 bits = 12 bits, hence the 12-bit colour range.
See the Wikipedia article on 12-bit RGB for a list of systems using it.
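A sketch of pulling the nybbles out of such a $0RGB word (Python, made-up helper):

def unpack_0rgb(word):
    # 16-bit word: one unused nybble, then 4 bits each of R, G, B.
    return (word >> 8) & 0xF, (word >> 4) & 0xF, word & 0xF

print(unpack_0rgb(0x0FFF))  # (15, 15, 15): white
print(unpack_0rgb(0x0A04))  # (10, 0, 4): red-violet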
