What are the normalization numbers for a YCbCr image?

I can't seem to find how to normalize the YCbCr color format. For example, it is common to divide every value of an RGB image by 255. What is the equivalent for YCbCr? Or does that not make any sense?

It is your assumption that RGB values are between 0 and 255 (per channel). HTML standardized that range, but there are many other possibilities: 0 to 1 (and I think this will be the future) or 0 to 100 are standard choices. Video formats often have a range of R, G, B from 16 to 235 (while allowing some extra values for super-white or blacker-than-black; only 0 and 255 are reserved), and that is just for 8 bits per channel. If you have 10 or 12 bits per channel, there are other maximum values (the video format defines such limits). My screen is 10 bits per channel, so my RGB is not 0 to 255.
Cb and Cr usually have values between 16 and 240, and Y from 16 to 235, at 8 bits per channel. For more bits per channel, there are other limits.
Note: some video formats also allow full range, i.e. Y, Cb and Cr from 0 to 255.
You should keep in mind that RGB, YCbCr, etc. are just color models. The implementation defines the exact range, the chromaticities, the gamma, and other details. Picture and video formats usually allow many kinds of colour space (usually declared in the headers), so you should check which kind of RGB or YCbCr you are encoding/decoding.
ADDENDUM (from comments).
Just a division is not good, because some values are reserved (0 and 255 for 8 bits; a larger range for encodings with more bits per channel), and, as I wrote above, other values should be displayed in an equivalent manner (as pure white or as pure black).
So for Y you should compute (Y - 16) / (235 - 16), clipping so that the result falls in the range 0 to 1. Do the same for the chroma channels, but with a maximum value of 240 instead of 235, i.e. (C - 16) / (240 - 16).
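A minimal sketch of that normalization in Python with NumPy, assuming 8-bit limited-range YCbCr (the function and array names are just for illustration):

import numpy as np

def normalize_ycbcr(ycbcr):
    """Map 8-bit limited-range YCbCr to floats in [0, 1] per channel."""
    ycbcr = ycbcr.astype(np.float64)
    y  = (ycbcr[..., 0] - 16.0) / (235.0 - 16.0)  # luma:   16..235 -> 0..1
    cb = (ycbcr[..., 1] - 16.0) / (240.0 - 16.0)  # chroma: 16..240 -> 0..1
    cr = (ycbcr[..., 2] - 16.0) / (240.0 - 16.0)
    out = np.stack([y, cb, cr], axis=-1)
    return np.clip(out, 0.0, 1.0)  # clip super-white / blacker-than-black values

# usage: img is an (H, W, 3) uint8 YCbCr array
img = np.array([[[16, 128, 128], [235, 240, 240]]], dtype=np.uint8)
print(normalize_ycbcr(img))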

Related

Why are RGB values represented as 8-bit integers?

Each RGB value is represented as an 8-bit integer (0-255). Why not store it as a decimal number to increase the color space? That should give a more realistic-looking picture.
Colours are sometimes represented by three floats instead of as 24 bits.
The 8-bit standard is historical: it goes back to the days of 8-bit architectures. 3 bytes can give the colour of a pixel without wasting any memory, while keeping the same number of bits for each colour component.
This has some advantages: you can write the colour as a 6 digit hexadecimal number and have some idea of what the colour will be:
0xff0000 : Red
0x00ff00 : Green
0x0000ff : Blue
And so on. This is quite compact and efficient and has stuck around, as a colour can be held in a single integer value instead of three floats.
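As a rough illustration of holding a colour in a single integer (the helper names are just for this sketch):

def pack_rgb(r, g, b):
    """Pack three 8-bit channels into one 24-bit integer (0xRRGGBB)."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(color):
    """Split a 0xRRGGBB integer back into its 8-bit channels."""
    return (color >> 16) & 0xFF, (color >> 8) & 0xFF, color & 0xFF

print(hex(pack_rgb(255, 0, 0)))  # 0xff0000 -> red
print(unpack_rgb(0x00FF00))      # (0, 255, 0) -> green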

Why do hexadecimal colors have 2 digits per color? [duplicate]

I understand the hexadecimal system is built on 0123456789ABCDEF, representing 16 values, with 0 being the darkest and F being the purest form of that color. But why are there 2 digits representing each color (red, green, blue)? And how do those two digits work together to form each color's value?
It's because the colors are represented as R-G-B; each primary color has a value between 0 and 255, which makes 256 possibilities. Hexadecimal is just a way to write numbers, like binary or decimal, and two hexadecimal digits (FF at most) are exactly enough to represent 255.
00 to FF represents 0 to 255 in decimal: 256 values, which is also the number of unique values you can represent in a single byte.
In programming, colors typically consist of 4 bytes, each with a 00-FF hexadecimal value: a red byte, a green byte, a blue byte, and a byte for the alpha channel.
Sometimes, however, rather than RGB, the three non-alpha bytes represent Hue, Saturation, and Brightness; the fourth one is still the alpha channel.
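To make the "two digits per channel" idea concrete, here is a small sketch (the helper name is just for illustration) that splits a hex color string into its byte values:

def parse_hex_color(code):
    """Parse '#RRGGBB' or '#RRGGBBAA' into integer channels (0-255 each)."""
    code = code.lstrip('#')
    # every pair of hex digits is one byte: high digit * 16 + low digit
    return tuple(int(code[i:i + 2], 16) for i in range(0, len(code), 2))

print(parse_hex_color('#FF8000'))    # (255, 128, 0)
print(parse_hex_color('#FF800080'))  # (255, 128, 0, 128), with alpha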

Why is the bpp information at 0x1C in this .bmp image wrong?

Address 1D indicates the image is eight bits per pixel, but it isn't: each pixel is represented by 3 bytes (24 bits).
At first, I thought Photoshop did this in error, but I found that this format was used for all greyscale images.
Instead of using four bytes per pixel, why don't .bmp images use a value from 00 to FF to describe the greyscale value of each pixel?
EDIT: I was able to answer my own question about the file structure
From Wikipedia:
The 8-bit per pixel (8bpp) format supports 256 distinct colors and stores 1 pixel per 1 byte. Each byte is an index into a table of up to 256 colors. This color table is in 32bpp 8.8.8.0.8 RGBAX format.
The color table shown in the hex editor is four bytes per entry.
Far below that is the actual pixel array, which is 8 bits per pixel.
I figured that out by a calculation: the image is 64 x 64, so 4096 pixels.
The pixel array starts at 0x436 and ends at 0x1437; in decimal, the difference between those two numbers is 4097, so the pixel array is exactly one byte per pixel.
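For what it's worth, a minimal sketch (assuming a standard BITMAPFILEHEADER plus BITMAPINFOHEADER layout) that reads the bits-per-pixel field at offset 0x1C and the pixel-array offset at 0x0A, to double-check this kind of calculation programmatically:

import struct

def read_bmp_header(path):
    """Read a few BMP header fields: pixel-array offset, dimensions, bpp."""
    with open(path, 'rb') as f:
        header = f.read(54)  # BITMAPFILEHEADER (14 bytes) + BITMAPINFOHEADER (40 bytes)
    data_offset = struct.unpack_from('<I', header, 0x0A)[0]  # where the pixel array starts
    width = struct.unpack_from('<i', header, 0x12)[0]
    height = struct.unpack_from('<i', header, 0x16)[0]
    bpp = struct.unpack_from('<H', header, 0x1C)[0]  # bits per pixel
    return data_offset, width, height, bpp

# usage (hypothetical file name):
# offset, w, h, bpp = read_bmp_header('greyscale.bmp')
# print(bpp, 'bpp,', w * h, 'pixels, pixel array starts at', hex(offset))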
I am still curious as to why a color table is necessary for a greyscale image, though
It looks like .bmp files have no special greyscale mode, so you cannot indicate in the header that the image is greyscale; you need the color table to define the colors you use, even if all of them are shades of grey.
If you look at the .png format, you can declare that you are using a greyscale image, so you don't need a color table there (though it would also be possible to use a color table to create a greyscale image).
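A rough sketch of what such a color table looks like for an 8bpp greyscale BMP (assuming the usual Blue, Green, Red, Reserved entry order, one 4-byte entry per palette index):

def greyscale_color_table():
    """Build the 256-entry color table an 8bpp greyscale BMP needs."""
    table = bytearray()
    for i in range(256):
        table += bytes([i, i, i, 0])  # Blue, Green, Red, Reserved
    return bytes(table)  # 1024 bytes, placed right after the headers

palette = greyscale_color_table()
print(len(palette), palette[:8].hex())  # 1024 0000000001010100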

How do hexadecimal color values work in low-level computing?

I assume it depends on the computer display...
but does it depend on the operating system?
For example, the color codes #ff0000 and #2e2e2e obviously use three bytes each.
But how is this data (the color codes) interpreted at the lowest level?
How does an application render color at the lowest level?
Thanks in advance!
These codes are a compact representation of three integers between 0 and 255: red, green, and blue.
They are rendered by the video card using the RGB color model.
RGB is treated as red, green, blue, with each value being an integer from 0 to 255 inclusive. You could represent red, for example, as (255,0,0) or #FF0000, or in many other ways.
Whatever software is using the color tells your operating system's graphics drivers what to output to your monitor. The details vary from OS to OS, but the signal that comes out of the port has to be standardized for the hardware.
http://en.wikipedia.org/wiki/RGB_color_model#RGB_devices
Sometimes it can depend on the operating system. Older versions of Mac OS and NeXT used RGB values with a different gamma coefficient from the one produced naturally by a monitor; their video systems would convert these values before displaying them. Today you will mostly encounter sRGB, which was an attempt by Microsoft and HP to specify the average display system at the time it was created. Sometimes you'll run into other systems such as Adobe RGB, which can represent slightly more colors than sRGB.
Computers use the RGB color model. In RGB, everything starts off as black and then you add some red/green/blue on top of that. The more of each color you add, the brighter it gets. Adding an equal amount of red/green/blue creates shades of grey (white if the maximum possible of all three colors is added).
This closely matches how the human eye picks up colors, so it works well (no light is black, the maximum light we can see is blinding white, and light can have different wavelengths, which determine its colour. If we see red, green, and blue lights right next to each other, they appear white to our eye. Look at your computer screen under a magnifying glass and you will see it has red, green, and blue dots which all turn on when it shows white).
The color codes you mentioned are "hex" color codes: three hex numbers joined together. #ff0000 is "ff red", "00 green", "00 blue". ff is the highest possible two-digit number in hex (it works out to 255 in standard decimal format), while 00 is the lowest possible (0 in decimal).
#2e2e2e is 2e of each of red/green/blue, so it creates a shade of grey. 2e is hex for 46 (2 × 16 + 14), which is much closer to 0 than to 255, so it creates a dark grey.
Hex is a base-16 number format, compared to the base-10 (decimal) format we are used to. This means you can represent larger numbers with fewer digits, and base 16 happens to be easy for hardware such as video cards to work with, because each hex digit maps to exactly four bits. The possible digits for the hex system are:
f, which is 15 in decimal
e, which is 14
d, which is 13
c, 12
b, 11
a, 10
9, which is 9 in decimal
8, which is 8
7, 7
6, 6
5, 5
4, 4
3, 3
2, 2
1, 1
0, 0
More info about hex: http://en.wikipedia.org/wiki/Hexadecimal
More info about RGB: http://en.wikipedia.org/wiki/RGB_color_model
And more info about "web colors", which is what you're using: http://en.wikipedia.org/wiki/Web_colors
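To tie the hex codes to the sRGB point above, here is a small sketch (assuming the code is an sRGB value; the helper names are just for illustration) that decodes a web color such as #2e2e2e into normalized channels and then into linear light:

def hex_to_rgb01(code):
    """'#RRGGBB' -> (r, g, b) floats in [0, 1]."""
    code = code.lstrip('#')
    return tuple(int(code[i:i + 2], 16) / 255.0 for i in (0, 2, 4))

def srgb_to_linear(c):
    """Undo the sRGB gamma curve for one channel value in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

rgb = hex_to_rgb01('#2e2e2e')  # about (0.18, 0.18, 0.18)
print([round(srgb_to_linear(c), 4) for c in rgb])  # roughly 0.0273 each: a dark grey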

How is the 12-bit RGB color format organized?

I know that the 12-bit RGB color palette format has 4 bits for each of R, G, B.
But what is the structure of the raw data, i.e.
1.) Does each color have a byte corresponding to it, which in turn has 4 padding bits and 4 data bits for the color data,
or
2.) Is it a packed format, i.e. Byte 1 = (padding bits + 4 R bits), Byte 2 = (4 G bits + 4 B bits)?
How is the packing done?
Thank You.
-AD
Where?
In memory, it can be anything: most likely it would be held in a 3-char array or a struct.
On disk, since space is at a premium, it would likely be held in an even tighter format: 3 bytes representing two adjacent pixels, [RG][BR][GB], packed/unpacked on write/read.
Still, it all depends on the specs of your format/platform.
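A minimal sketch of that tighter on-disk layout, assuming the [RG][BR][GB] packing described above (the helper names are just for illustration):

def pack_two_rgb444(p0, p1):
    """Pack two (r, g, b) pixels with 4-bit channels into 3 bytes: [RG][BR][GB]."""
    r0, g0, b0 = p0
    r1, g1, b1 = p1
    return bytes([(r0 << 4) | g0, (b0 << 4) | r1, (g1 << 4) | b1])

def unpack_two_rgb444(data):
    """Inverse of pack_two_rgb444 for one 3-byte group."""
    a, b, c = data
    return (a >> 4, a & 0xF, b >> 4), (b & 0xF, c >> 4, c & 0xF)

packed = pack_two_rgb444((0xF, 0x0, 0x0), (0x0, 0xA, 0x4))
print(packed.hex())               # f000a4
print(unpack_two_rgb444(packed))  # ((15, 0, 0), (0, 10, 4))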
I know that the 12-bit RGB color palette format has 4 bits for each of R, G, B.
Are you sure it's a palette format?
Generally, a palette format is made up of two distinct parts: the palette itself and the image. The palette is a look-up table of colour values, the format of which is implementation specific. The image is then a list of index values into the palette.
Palette formats are normally used to save memory, or sometimes to do neon-style animation (the Windows 95 loading screen, for example, had that blue strip at the bottom: the image was written to the screen once, and then some of the colours in the palette were rotated every few ms).
On a CD+G, each twelve-bit palette entry is stored using the six LSBs of two consecutive bytes; there is no wasted space because the upper two bits are used for storing timing information. I think the Amiga just used three of the four nybbles in a 16-bit word. I'm not sure what other formats you might be thinking of.
On the Amiga the system is very simple:
$0fff = white... in decimal that is 0 (unused), 15 (max red), 15 (max green), 15 (max blue).
$0a04 = red with a hint of blue, i.e. red-violet: a mix of red with strength 10 and blue with strength 4, while green isn't added at all.
R, G and B are each 4-bit numbers, and $FFF = 4095 (4095 + black = 4096 values).
Each colour is 4 bits; three times 4 bits = 12 bits, hence the 12-bit color range.
Link to 12-bit RGB on Wiki listing systems using 12-bit RGB
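A small sketch of reading the Amiga-style layout described above, assuming the $0RGB nibble order with the top nibble unused:

def amiga_rgb12(word):
    """Split a $0RGB 16-bit value into its 4-bit R, G, B components."""
    return (word >> 8) & 0xF, (word >> 4) & 0xF, word & 0xF

print(amiga_rgb12(0x0FFF))  # (15, 15, 15) -> white
print(amiga_rgb12(0x0A04))  # (10, 0, 4)   -> red with a hint of blue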
