Converting 8-bit color into RGB value

I'm implementing global illumination in my game engine with "reflective shadow maps" (RSM). An RSM contains, among other things, a color texture. To save memory, I'm packing each 24-bit color into an 8-bit value. I know how to pack it, but how do I unpack it? One idea was to create a 1D palette texture with 256 different colors, where my 8-bit color would be the index of a pixel in that texture.
I'm not sure how to generate that kind of texture.
Is there a mathematical way to convert an 8-bit value back into RGB?
#edit
The color is in this format:
RRR GGG BB
#edit2:
And I'm packing my colour like this:
int packed = (red / 32 << 5) + (green / 32 << 2) + (blue / 64);
// packed is conceptually a byte; the C# compiler complains if it is declared as byte.
#edit3:
Alright, I found a way to do this, I think. Tell me if it's wrong.
#edit4: It's wrong...
int r = (packed >> 5) * 32;
int g = ((packed >> 2) << 3) * 32;
int b = (packed << 6) * 64;

In JavaScript:
Encode:
encodedData = (Math.floor(red / 32) << 5) + (Math.floor(green / 32) << 2) + Math.floor(blue / 64);
Decode:
red = (encodedData >> 5) * 32;
green = ((encodedData & 28) >> 2) * 32;
blue = (encodedData & 3) * 64;
While decoding, we use the bitwise AND operator to extract the desired bits and discard the leading bits. With green, we then have to shift right to drop the bits on the right.
While encoding, Math.floor is used to truncate the fractional part; if we rounded up instead, the total value could exceed 255 and become a 9-bit number.
UPDATE 1
It does not give accurate results if we simply divide the color by 32 or 64.
RRRGGGBB
R/G = 3 bits; the maximum value is 111 in binary, which is 7 in decimal.
B = 2 bits; the maximum value is 11 in binary, which is 3 in decimal.
We should divide R and G by a value equal to or greater than 255/7, and B by a value equal to or greater than 255/3.
We should also note that in place of Math.floor we should use Math.round, because rounding gives more accurate results.

To convert an 8-bit [0, 255] value into a 3-bit [0, 7] value, the 0 is not a problem, but remember that 255 should map to 7, so the formula should be Red3 = Red8 * 7 / 255.
To convert a 24-bit color into 8 bits:
8bit Color = ((Red * 7 / 255) << 5) + ((Green * 7 / 255) << 2) + (Blue * 3 / 255)
To reverse:
Red = (Color >> 5) * 255 / 7
Green = ((Color >> 2) & 0x07) * 255 / 7
Blue = (Color & 0x03) * 255 / 3
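Not from the answers above, but as a quick sanity check, here is a small Python sketch of that pack/unpack round trip (the names pack_rgb332 and unpack_rgb332 are mine, and rounding instead of flooring is assumed, as suggested in UPDATE 1):

def pack_rgb332(r, g, b):
    # Scale 0..255 down to 3/3/2 bits, rounding to the nearest level.
    r3 = round(r * 7 / 255)
    g3 = round(g * 7 / 255)
    b2 = round(b * 3 / 255)
    return (r3 << 5) | (g3 << 2) | b2

def unpack_rgb332(packed):
    # Mask out each field, then scale back up to 0..255.
    r = ((packed >> 5) & 0x07) * 255 // 7
    g = ((packed >> 2) & 0x07) * 255 // 7
    b = (packed & 0x03) * 255 // 3
    return r, g, b

print(unpack_rgb332(pack_rgb332(229, 202, 115)))  # (218, 218, 85): within one quantization step of the original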


Signed float to hexadecimal number

How to convert a float to a specific hexadecimal format:
1 bit for the sign, 15 bits for the integer part, and the remaining 16 bits for the fractional part.
Example outputs should be ffff587a for -0.6543861, fff31a35 for -12.897631, 006bde10 for 107.8674316, and 003bd030 for 59.8132324.
I have written a program that can do the unsigned conversion; I am stuck on the signed part. Could anyone guide me on how I can achieve this in a very compact way?
def convert(num):
    binary2 = ""
    Int = int(num)
    fract = num - Int
    binary = '{:16b}'.format(Int & 0b1111111111111111)
    for i in range(16):
        fract *= 2
        fract_bit = int(fract)
        if fract_bit == 1:
            fract -= fract_bit
            binary2 += '1'
        else:
            binary2 += '0'
    return int(binary + binary2, 2)

value = 107.867431640625
x = convert(value)
hex(x)
# output: 0x6bde10
This is simply the Q16.16 fixed-point format. To convert a floating-point number to this format, simply multiply it by 2^16 (in Python, 1<<16 or 65536) and convert the product to an integer:
y = int(x * (1<<16))
To show its 32-bit two's complement representation, add 2^32 if it is negative and then convert it to hexadecimal:
y = hex(y + (1<<32 if y < 0 else 0))
For example, the following prints “0xfff31a35”:
#!/usr/bin/python
x=-12.897631
y = int(x * (1<<16))
y = hex(y + (1<<32 if y < 0 else 0))
print(y)
This conversion truncates. If you want rounding, you can add .5 inside the int or you can add additional code for other types of rounding. You may also want to add code to handle overflows.
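For illustration only (the helper name float_to_q16_16 is mine, not from the answer), here is a compact version with round-to-nearest and saturation added:

def float_to_q16_16(x):
    # Scale by 2^16 and round to nearest (ties away from zero).
    y = int(x * (1 << 16) + (0.5 if x >= 0 else -0.5))
    # Saturate to the signed 32-bit range instead of silently wrapping on overflow.
    y = max(-(1 << 31), min((1 << 31) - 1, y))
    # Two's complement representation as unsigned 32-bit hex.
    return hex(y & 0xFFFFFFFF)

print(float_to_q16_16(-12.897631))   # 0xfff31a35
print(float_to_q16_16(107.8674316))  # 0x6bde10 (hex() does not show the leading zeros)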

Code explanation for bitmap conversion

https://stackoverflow.com/a/2574798/159072
public static Bitmap BitmapTo1Bpp(Bitmap img)
{
    int w = img.Width;
    int h = img.Height;
    Bitmap bmp = new Bitmap(w, h, PixelFormat.Format1bppIndexed);
    BitmapData data = bmp.LockBits(new Rectangle(0, 0, w, h), ImageLockMode.ReadWrite, PixelFormat.Format1bppIndexed);
    // Why this addition and division?
    byte[] scan = new byte[(w + 7) / 8];
    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w; x++)
        {
            // Why this condition check?
            if (x % 8 == 0)
                // Why divide by 8?
                scan[x / 8] = 0;
            Color c = img.GetPixel(x, y);
            // Why this condition check?
            if (c.GetBrightness() >= 0.5)
            {
                // What is going on here?
                scan[x / 8] |= (byte)(0x80 >> (x % 8));
            }
        }
        // Why is Marshal.Copy() called here?
        Marshal.Copy(scan, 0, (IntPtr)((long)data.Scan0 + data.Stride * y), scan.Length);
    }
    bmp.UnlockBits(data);
    return bmp;
}
The code uses some basic bit-hacking techniques, required because it needs to set bits and the minimum storage element you can address in C# is a byte. I intentionally avoided using the BitArray class.
int w = img.Width;
I copy the Width and Height properties of the bitmap into local variables to speed up the code; the property getters are expensive. Keep in mind that w is the number of pixels across the bitmap; it is also the number of bits in one scan line of the final image.
byte[] scan = new byte[(w + 7) / 8];
The scan variable stores the pixels of one scan line of the bitmap. The 1bpp format uses 1 bit per pixel, so the total number of bytes in a scan line is w / 8, rounded up. Adding 7 ensures the value is rounded up, which is necessary because integer division always truncates: w = 1..8 requires 1 byte, w = 9..16 requires 2 bytes, etcetera.
if (x % 8 == 0) scan[x / 8] = 0;
The x % 8 expression gives the bit number, x / 8 the byte number. This code sets all the pixels to black whenever it moves on to the next byte in the scan line. Another way to do it would be to re-allocate the byte[] in the outer loop, or to reset it to 0 with a for loop.
if (c.GetBrightness() >= 0.5)
The pixel should be set to white when the source pixel is bright enough; otherwise it is left black. Using Color.GetBrightness() is a simple way to avoid dealing with the human eye's non-linear perception of brightness (luminance ~= 0.299 * red + 0.587 * green + 0.114 * blue).
scan[x / 8] |= (byte)(0x80 >> (x % 8));
Sets a bit to white in the scan line. As noted, x % 8 is the bit number; 0x80 is shifted right by the bit number because the bits are stored most-significant-bit-first in this pixel format.
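If it helps to see the bit packing in isolation, here is a rough Python equivalent of the inner loop (a simplified sketch: a plain list of booleans stands in for the brightness test):

def pack_row(bright):
    # bright: one boolean per pixel in the scan line.
    scan = bytearray((len(bright) + 7) // 8)   # 8 pixels per byte, rounded up
    for x, is_white in enumerate(bright):
        if is_white:
            # x // 8 selects the byte, x % 8 the bit; bit 7 is the leftmost pixel.
            scan[x // 8] |= 0x80 >> (x % 8)
    return bytes(scan)

print(pack_row([True, False, True, True, False, False, False, False, True]).hex())
# 'b080' -> 0b10110000, 0b10000000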

Resize (downsize) YUV420sp image

I am trying to resize (scale down) an image which comes in YUV420sp format. Is it possible to do such resizing without converting it into RGB, i.e. by directly manipulating the YUV420sp pixel array? Where can I find such an algorithm?
Thanks
YUV 4:2:0 planar looks like this:
------------------------------
|      Y      |  Cb  |  Cr  |
------------------------------
where:
Y = width x height pixels
Cb = Y / 4 pixels
Cr = Y / 4 pixels
Total size in bytes = width * height * 3 / 2
The subsampling means that each chroma pixel value is shared between 4 luma pixels.
One approach is simply to remove pixels, making sure that the corresponding Y-Cb-Cr relationships are kept/recalculated.
Something close to nearest-neighbour interpolation, but reversed.
Another approach is to first convert the 4:2:0 subsampling to 4:4:4.
There you have a 1:1 mapping between luma and chroma data.
This is the correct way to interpolate chroma between 4:2:0 and 4:2:2 (luma is already at the correct resolution).
The code below is in Python; follow the HTML link for the C version.
The code is not very Pythonic, just a direct translation of the C version.
def __conv420to422(self, src, dst):
    """
    420 to 422 - vertical 1:2 interpolation filter
    Bit-exact with
    http://www.mpeg.org/MPEG/video/mssg-free-mpeg-software.html
    """
    w = self.width >> 1
    h = self.height >> 1
    for i in xrange(w):
        for j in xrange(h):
            j2 = j << 1
            jm3 = 0 if (j < 3) else j - 3
            jm2 = 0 if (j < 2) else j - 2
            jm1 = 0 if (j < 1) else j - 1
            jp1 = j + 1 if (j < h - 1) else h - 1
            jp2 = j + 2 if (j < h - 2) else h - 1
            jp3 = j + 3 if (j < h - 3) else h - 1
            pel = (3 * src[i + w * jm3]
                   - 16 * src[i + w * jm2]
                   + 67 * src[i + w * jm1]
                   + 227 * src[i + w * j]
                   - 32 * src[i + w * jp1]
                   + 7 * src[i + w * jp2] + 128) >> 8
            # clamp to [0, 255]
            dst[i + w * j2] = max(0, min(255, pel))
            pel = (3 * src[i + w * jp3]
                   - 16 * src[i + w * jp2]
                   + 67 * src[i + w * jp1]
                   + 227 * src[i + w * j]
                   - 32 * src[i + w * jm1]
                   + 7 * src[i + w * jm2] + 128) >> 8
            # clamp to [0, 255]
            dst[i + w * (j2 + 1)] = max(0, min(255, pel))
    return dst
Run this twice to get 4:4:4.
Then it's just a matter of removing rows and columns.
Or you can simply quadruple the chroma pixels to go from 4:2:0 to 4:4:4, remove rows and columns, and then average 4 Cb/Cr values into 1 to get back to 4:2:0 again; it all depends on how strict you need to be :-)
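A quick sketch of the "average 4 Cb/Cr values into 1" step for a single chroma plane, in Python (the helper name and the flat row-major layout are my assumptions):

def halve_plane_avg(src, w, h):
    # Average each 2x2 block of the w-by-h plane into one value.
    dst = bytearray((w // 2) * (h // 2))
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            s = (src[y * w + x] + src[y * w + x + 1] +
                 src[(y + 1) * w + x] + src[(y + 1) * w + x + 1])
            dst[(y // 2) * (w // 2) + x // 2] = (s + 2) // 4   # rounded average
    return bytes(dst)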
Here is a Java function I use to scale down a YUV420 (or NV21) image by a factor of two.
The function takes the image as a byte array, along with the width and height of the original image, and returns an image in a byte array whose width and height are both half of the original.
As a basis for my code I used this: Rotate an YUV byte array on Android
public static byte[] halveYUV420(byte[] data, int imageWidth, int imageHeight) {
    byte[] yuv = new byte[imageWidth/2 * imageHeight/2 * 3 / 2];
    // halve luma (Y)
    int i = 0;
    for (int y = 0; y < imageHeight; y += 2) {
        for (int x = 0; x < imageWidth; x += 2) {
            yuv[i] = data[y * imageWidth + x];
            i++;
        }
    }
    // halve U and V color components
    for (int y = 0; y < imageHeight / 2; y += 2) {
        for (int x = 0; x < imageWidth; x += 4) {
            yuv[i] = data[(imageWidth * imageHeight) + (y * imageWidth) + x];
            i++;
            yuv[i] = data[(imageWidth * imageHeight) + (y * imageWidth) + (x + 1)];
            i++;
        }
    }
    return yuv;
}
YUV420sp has the Y in one plane and the U and V in another. If you split the U and V into separate planes, you can then perform the same scaling operation on each of the three planes in turn, without first having to go from 4:2:0 to 4:4:4.
Have a look at the source code for libyuv; it just scales the planes:
https://code.google.com/p/libyuv/source/browse/trunk/source/scale.cc
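A rough Python illustration of that idea for NV21 (libyuv itself does this in optimized C++; the buffer-layout assumptions and helper names here are mine): split the interleaved chroma into separate planes, then downscale each plane with the same nearest-neighbour routine.

def split_nv21_chroma(data, w, h):
    # NV21: w*h bytes of Y, followed by w*h/2 bytes of interleaved V,U.
    chroma = data[w * h:]
    return bytes(chroma[0::2]), bytes(chroma[1::2])   # V plane, U plane (each w/2 x h/2)

def scale_plane_nn(src, w, h, new_w, new_h):
    # Nearest-neighbour scaling of one flat, row-major plane.
    dst = bytearray(new_w * new_h)
    for y in range(new_h):
        sy = y * h // new_h
        for x in range(new_w):
            dst[y * new_w + x] = src[sy * w + x * w // new_w]
    return bytes(dst)

# e.g. halve the V plane of a w x h NV21 frame:
# v, u = split_nv21_chroma(data, w, h)
# v_half = scale_plane_nn(v, w // 2, h // 2, w // 4, h // 4)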

Convert 8 bit color component to 16 bit color component

I am writing a GTK+ application that uses Pango text attributes to set the color and other aspects of the text written onto a GtkLabel, using something like this:
attributes = pango_attr_list_new();
pango_attr_list_insert(attributes, pango_attr_foreground_new( G_MAXUINT16, G_MAXUINT16, G_MAXUINT16));
pango_attr_list_insert(attributes, pango_attr_size_new (18 * PANGO_SCALE));
pango_attr_list_insert(attributes, pango_attr_weight_new( PANGO_WEIGHT_BOLD ));
lblTotalCredits = gtk_label_new(NULL);
gtk_label_set_text(GTK_LABEL(lblTotalCredits),"0");
gtk_label_set_attributes(GTK_LABEL(lblTotalCredits), attributes);
pango_attr_foreground_new() expects each color component (R, G, B) to be 16 bits. Using Photoshop or other image-processing tools I can find the color I want to use, but the R, G, B values are displayed as 8-bit components.
How do I convert the 8 bit R,G,B values to the equivalent 16 bit R,G,B values so I end up with the same color?
For example, a golden color is specified as RGB ( 229, 202, 115 ) or hex e5ca73. How do I convert that so that each color component is 16 bits for pango functions?
8bit max: 0xFF
16bit max: 0xFFFF
convert (I use red as a demo) (untested):
guint32 colorhex = 0xe5ca73;
const guint8 red8 = (guint8)((colorhex & 0xFF0000) >> 16);
// guint16 red16 = (guint16)(((guint32)(r * 0xFFFF / 0xFF)) & 0xFFFF);
guint16 red16 = ((guint16)red8 << 8) | red8;
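For what it's worth, the shift-and-OR trick replicates the byte into both halves, which is the same as multiplying by 257 and matches round(v * 65535 / 255) exactly; a quick Python check (names are mine):

def expand_8_to_16(v):
    # 0x00 -> 0x0000, 0x73 -> 0x7373, 0xFF -> 0xFFFF
    return (v << 8) | v          # equivalent to v * 257

for v in range(256):
    assert expand_8_to_16(v) == v * 0xFFFF // 0xFF
print(hex(expand_8_to_16(0xE5)), hex(expand_8_to_16(0xCA)), hex(expand_8_to_16(0x73)))
# 0xe5e5 0xcaca 0x7373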

RGB 24 to 16-bit color conversion - Colors are darkening

I noticed that my routine to convert between 24-bit RGB888 and 16-bit RGB565 darkens the colors progressively each time a conversion takes place... The formula uses linear interpolation, like so...
typedef struct _RGB24 RGB24;
struct _RGB24 {
BYTE b;
BYTE g;
BYTE r;
};
RGB24 *s; // source
WORD *d; // destination
WORD r;
WORD g;
WORD b;
// Code to convert from 24-bit to 16 bit
r = (WORD)((double)(s[x].r * 31) / 255.0);
g = (WORD)((double)(s[x].g * 63) / 255.0);
b = (WORD)((double)(s[x].b * 31) / 255.0);
d[x] = (r << REDSHIFT) | (g << GREENSHIFT) | (b << BLUESHIFT);
// Code to convert from 16-bit to 24-bit
s[x].r = (BYTE)((double)(((d[x] & REDMASK) >> REDSHIFT) * 255) / 31.0);
s[x].g = (BYTE)((double)(((d[x] & GREENMASK) >> GREENSHIFT) * 255) / 63.0);
s[x].b = (BYTE)((double)(((d[x] & BLUEMASK) >> BLUESHIFT) * 255) / 31.0);
The conversion from 16-bit to 24-bit is similar, but with the interpolation reversed... I don't understand how the values keep getting lower and lower each time a color is cycled through the equations if they are opposites... Originally there was no cast to double, but I figured that making it a floating-point divide would remove the falloff... but it still happens...
When you convert your double values to WORD, the values are truncated. For example,
(127 * 31) / 255 = 15.439, which is truncated to 15. Because the values are truncated, they get progressively lower with each iteration. You need to introduce rounding (by adding 0.5 to the calculated values before converting them to integers).
Continuing the example, you then take 15 and convert back:
(15 * 255) / 31 = 123.387, which truncates to 123.
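The drift is easy to reproduce; a tiny Python loop (illustration only, not the asker's code) that repeats the 8-bit -> 5-bit -> 8-bit cycle with truncation versus rounding:

def cycle(v, rounded):
    # One 24-bit -> 16-bit -> 24-bit round trip for a single 5-bit channel.
    q = round(v * 31 / 255) if rounded else v * 31 // 255
    return round(q * 255 / 31) if rounded else q * 255 // 31

v_trunc = v_round = 127
for _ in range(5):
    v_trunc = cycle(v_trunc, rounded=False)
    v_round = cycle(v_round, rounded=True)
print(v_trunc, v_round)   # 90 123: truncation keeps sliding down, rounding stabilizes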
Don't use floating point for something simple like this. The normal way I've seen is to truncate on the down-conversion but extend on the up-conversion (so 0b11111 goes to 0b11111111).
// Code to convert from 24-bit to 16 bit
r = s[x].r >> (8-REDBITS);
g = s[x].g >> (8-GREENBITS);
b = s[x].b >> (8-BLUEBITS);
d[x] = (r << REDSHIFT) | (g << GREENSHIFT) | (b << BLUESHIFT);
// Code to convert from 16-bit to 24-bit
s[x].r = (d[x] & REDMASK) >> REDSHIFT; // 000abcde
s[x].r = s[x].r << (8-REDBITS) | s[x].r >> (2*REDBITS-8); // abcdeabc
s[x].g = (d[x] & GREENMASK) >> GREENSHIFT; // 00abcdef
s[x].g = s[x].g << (8-GREENBITS) | s[x].g >> (2*GREENBITS-8); // abcdefab
s[x].b = (d[x] & BLUEMASK) >> BLUESHIFT; // 000abcde
s[x].b = s[x].b << (8-BLUEBITS) | s[x].b >> (2*BLUEBITS-8); // abcdeabc
Casting double to WORD doesn't round the double value - it truncates the decimal digits. You need to use some kind of rounding routine to get rounding behavior. Typically you want to round half to even. There is a Stack Overflow question on how to round in C++ if you need it.
Also note that the conversion from 24 bit to 16 bits permanently loses information. It's impossible to fit 24 bits of information into 16 bits, of course. You can't get it back by conversion from 16 bits back to 24 bits.
In short, 16-bit, 24-bit and 32-bit color formats simply allocate different numbers of bits to the R, G and B channels, so the channel values are scaled by powers of two when you move between formats.
For more detail, look up the concept of color depth (bit color); the Wikipedia article should help.
Since you're converting to double anyway, at least use it to avoid overflow, i.e. replace
r = (WORD)((double)(s[x].r * 31) / 255.0);
with
r = (WORD)round(s[x].r / 255.0 * 31.0);
This way the compiler should also be able to fold 31.0 / 255.0 into a constant.
Obviously, if this has to be repeated for huge quantities of pixels, it would be preferable to create and use a LUT (lookup table) instead.
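A sketch of the lookup-table idea in Python (the table names and the RGB565 bit layout, red in bits 15-11, are assumptions on my part): precompute the 256 possible results per channel once, after which each conversion is just an index and a few shifts.

# Per-channel quantization tables, built once, with rounding as suggested above.
TO5 = [round(v * 31 / 255) for v in range(256)]   # 8-bit -> 5-bit
TO6 = [round(v * 63 / 255) for v in range(256)]   # 8-bit -> 6-bit

def rgb888_to_rgb565(r, g, b):
    return (TO5[r] << 11) | (TO6[g] << 5) | TO5[b]

print(hex(rgb888_to_rgb565(229, 202, 115)))   # 0xe64e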
