Why is the bpp information at 0x1C in this .bmp image wrong?

The value at 0x1C indicates the image is eight bits per pixel, but it isn't: each pixel is represented by 3 bytes (24 bits).
At first I thought Photoshop did this in error, but I found that this format was used for all greyscale images.
Instead of using four bytes per pixel, why don't .bmp images use a single value from 0x00 to 0xFF to describe the greyscale value of each pixel?
EDIT: I was able to answer my own question about the file structure
From Wikipedia: "The 8-bit per pixel (8bpp) format supports 256 distinct colors and stores 1 pixel per 1 byte. Each byte is an index into a table of up to 256 colors. This color table is in 32bpp 8.8.8.0.8 RGBAX format."
The color table shown in the hex editor is four bytes per entry.
Far below that is the actual pixel array, which is 8 bits per pixel.
I figured that out by calculation: the image is 64 x 64, i.e. 4096 pixels. The pixel array starts at 0x436 and ends at 0x1437; the difference between those two addresses is 4097 in decimal, so the pixel array is exactly one byte per pixel.
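For anyone repeating that check on their own file, here is a minimal sketch of the same arithmetic read straight from the header (it assumes a standard BITMAPINFOHEADER .bmp; "image.bmp" is a placeholder filename):

```python
import struct

# Read the 14-byte file header plus the 40-byte BITMAPINFOHEADER.
with open("image.bmp", "rb") as f:
    header = f.read(54)

data_offset = struct.unpack_from("<I", header, 0x0A)[0]  # start of pixel array
width       = struct.unpack_from("<i", header, 0x12)[0]
height      = struct.unpack_from("<i", header, 0x16)[0]
bpp         = struct.unpack_from("<H", header, 0x1C)[0]  # bits per pixel

# Each row of a BMP pixel array is padded up to a multiple of 4 bytes.
row_size = ((width * bpp + 31) // 32) * 4
print(f"{width}x{height}, {bpp} bpp, pixel array at 0x{data_offset:X}, "
      f"expected size {row_size * abs(height)} bytes")
```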
I am still curious as to why a color table is necessary for a greyscale image, though

It looks like BMP files have no special greyscale mode: you cannot indicate in the header that the image is greyscale, so you need the color table to define the colors you use, even if all of them are greys.
If you look at the .png format, you can declare that you are using a greyscale image, so you don't need a color table there (though it would also be possible to use a color table to create a greyscale image).
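To make that concrete, here is a sketch of the 256-entry color table an 8bpp greyscale BMP has to carry. Each entry is 4 bytes (blue, green, red, reserved), matching the 8.8.8.0.8 RGBAX layout quoted above, with B = G = R for grey:

```python
# Build the 1024-byte greyscale color table for an 8bpp BMP.
palette = bytearray()
for v in range(256):
    palette += bytes([v, v, v, 0])  # B, G, R, reserved

assert len(palette) == 1024  # 256 entries * 4 bytes each
```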

Related

How do I compare the quality of GIFs and PNGs using colors? Does calculating the bits per pixel work?

Since GIF uses 8-bit color depth and PNG uses 24, I can notice the difference between the two pictures.
I want to find a way to compare the colors of two images not by looking but with calculated data.
What I have done so far is calculate the BPP of both the GIF and the PNG image, assuming that would be the best option for comparing the two formats.
I'm not sure if finding the bpp will give me the absolute color difference, or if it is even the correct approach.
Although a GIF has a maximum of 256 colors, it can still be high quality as long as the original "raw" image has fewer than 256 colors (imagine a cartoon picture that uses only 8 colors, with no shading/blending, etc.). In your question you mentioned you could notice the difference between the two pictures, so I assume you are talking about some "raw" image that has more than 256 colors, such as a natural photograph where the sky color changes gradually.
In this case, I think you might check the histogram of the images. For the gif image it will have at most 256 entries in the histogram; for the higher quality png image, it should have a histogram that has a matching shape (peaks and valleys), but more than 256 non-zero entries. If this is true you can almost be certain the png has a higher quality (assuming they are indeed the same picture).
You may even be able to further find out how the gif reduced the number of entries by combining several neighboring entries in the png's histogram into one entry.
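As a rough sketch of that check (using Pillow; the filenames are placeholders), you can simply count distinct colors per image:

```python
from PIL import Image

def distinct_colors(path, cap=1 << 24):
    # getcolors() returns None if the image has more than `cap` distinct
    # colors, otherwise a list of (count, color) histogram entries.
    img = Image.open(path).convert("RGB")
    colors = img.getcolors(maxcolors=cap)
    return len(colors) if colors is not None else cap

print("GIF:", distinct_colors("photo.gif"))  # at most 256 for a GIF
print("PNG:", distinct_colors("photo.png"))  # often far more for 24-bit PNG
```

If the GIF reports at most 256 entries while the PNG reports far more, the PNG almost certainly retained more color information.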

Convert an image from sRGB to indexed. Color palette is ~400 colors; final image to use a max of 70 colors

What I want to do:
I would like to change an image to indexed, provide the color map, and set the max number of colors in the image to 70 of the provided 400. I would like this final image to be exported in some kind of text based format.
What I have so far:
I have the color palette as a text list of ~400 hex values and their RGB equivalents in an Excel spreadsheet. I can create a CSV or tab-separated file of whichever of the two formats is needed (hex or RGB). Using ImageMagick, the -remap argument will do the palette conversion, and the -colors argument will limit the number of colors to 70. If the indexed image is saved as BMP, I can easily import a BMP into C++/Python/MATLAB/Octave, do some operations on the array, and write that to a text file.
Where I'm stuck:
I'm struggling to efficiently get my text-based list of ASCII values into a colormap image that I can feed to the -remap argument. I know I could manually create one by painstakingly making an image with one pixel of each color in the colormap, but there must be a better way!
Other Stuff:
If you have any other advice about the process I've mentioned above or any suggestions as to how I can do this better/faster/more efficiently, I'm all ears!
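Not an official recipe, but one way to avoid hand-painting the colormap is to generate it programmatically. This sketch (using Pillow; "palette.csv" is a placeholder for the hex list described above) writes a one-pixel-per-color image:

```python
from PIL import Image

# "palette.csv" holds one hex value like "1A2B3C" per line (placeholder name).
with open("palette.csv") as f:
    hexes = [line.strip().lstrip("#") for line in f if line.strip()]

colors = [tuple(int(h[i:i + 2], 16) for i in (0, 2, 4)) for h in hexes]

# One pixel per palette color; -remap only cares about which colors appear.
img = Image.new("RGB", (len(colors), 1))
img.putdata(colors)
img.save("colormap.png")
```

You could then try something like `convert input.png -remap colormap.png -colors 70 output.bmp`; the exact option order may take some experimenting.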

Hold and modify

I have a graphics format where each pixel reuses the previous pixel's RGB value (for the first pixel on a line, black is used), and then red, green, or blue can be modified; alternatively, a pixel can be set to any gray value (in which case the previous pixel's value isn't used). All this has been implemented (the easy part).
What would be the best way to convert 24 bit images to this format in the highest possible quality?
Any thoughts are appreciated.
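One baseline worth trying is a greedy per-pixel encoder: for each target pixel, evaluate the four legal operations (replace R, G, or B of the previous pixel, or emit a gray) and keep whichever minimizes the error. A sketch, where the op names and the squared-error metric are my own assumptions:

```python
def encode_line(targets):
    """Greedy encoder for one scanline; `targets` is a list of (r, g, b).

    Each output op is either ('R'|'G'|'B', value), replacing one channel
    of the previous pixel, or ('GRAY', value), setting r = g = b = value.
    """
    prev = (0, 0, 0)  # first pixel on a line starts from black
    ops = []
    for (r, g, b) in targets:
        gray = round((r + g + b) / 3)  # least-squares best single gray
        candidates = [
            (('R', r), (r, prev[1], prev[2])),
            (('G', g), (prev[0], g, prev[2])),
            (('B', b), (prev[0], prev[1], b)),
            (('GRAY', gray), (gray, gray, gray)),
        ]
        def err(p):
            return (p[0] - r) ** 2 + (p[1] - g) ** 2 + (p[2] - b) ** 2
        op, best = min(candidates, key=lambda c: err(c[1]))
        ops.append(op)
        prev = best
    return ops
```

Error diffusion (pushing each pixel's residual error onto the next pixel before encoding it) would likely improve on the plain greedy choice.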

Change pixel color [duplicate]

I have spent more than a week reading about selective color change in an image. It means selecting a color from a color picker, then selecting a part of the image in which I want to change the color, and applying the change from the original color to the color chosen in the color picker.
E.g. if I select a blue color in the color picker and I also select a red part of the image, I should be able to change the red color to blue across the whole image.
Another example: if I have an image with red apples and oranges, and I select an apple in the image and a blue color in the color picker, then all apples should change color from red to blue.
I have some ideas, but of course I need something more concrete on how to do this.
Thank you for reading
As a starting point, consider clustering the colors of your image. If you don't know how many clusters you want, you will need a method for deciding whether or not to merge two given clusters. For the moment, let us suppose that we know that number. For example, given the following image at left, I mapped its colors to 3 clusters, whose mean colors are shown in the middle; representing each cluster by its mean color gives the figure at right.
With the output at right, what you need now is a method to replace colors. Suppose the user clicks a single point somewhere in your image; then you know the positions in the original image that you will need to modify. For the next image, the user (me) clicked on a point contained in the "orange" cluster, then clicked on some blue hue. From that, you build a mask representing the points in the "orange" cluster and work with it. I used a simple Gaussian filter followed by a flat 3x5 dilation. Then you replace the hues in the original image according to the produced mask (after the low-pass filtering, the values in the mask are also treated as an alpha value for compositing the images).
Not perfect at all, but you could do better clustering than I did, and also use a much less primitive color replacement method. I intentionally skipped the details of the clustering method, color space, and the rest, because I used only basic k-means on RGB without any pre-processing of the input. So you can consider the results above as a baseline for anything else you try.
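For reference, a minimal version of that baseline (plain k-means on raw RGB, using scikit-learn and Pillow; the filename and k=3 are placeholders) could look like:

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

img = np.asarray(Image.open("input.png").convert("RGB"))
pixels = img.reshape(-1, 3).astype(float)

km = KMeans(n_clusters=3, n_init=10).fit(pixels)
labels = km.labels_.reshape(img.shape[:2])               # cluster id per pixel
means = np.round(km.cluster_centers_).astype(np.uint8)   # mean color per cluster

clustered = means[labels]  # replace each pixel with its cluster's mean color
Image.fromarray(clustered).save("clustered.png")
```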
Given the image, a selected color, and a target new color, you can't do much that isn't ugly. You also need a range, some amount of variation in color, so you can say one pixel's color is "close enough" while another is clearly "different".
First step of processing: you create a mask image, which is grayscale, varies from 0.0 to 1.0 (or from zero to some maximum value we'll treat as 1.0), and is the same size as the input image. For each input pixel, test whether its color is sufficiently near the selected color. If it's "the same" or "close enough", put 1.0 in the mask. If it's different, put 0.0. If it's somewhat borderline, put an in-between value. Exactly how to do this depends on the details of the image.
This might work best in LAB space, testing for sameness according to the angle of the A,B coordinates relative to their origin.
Once you have the mask, put it aside. Now color-transform the whole image. This might be best done in HSV space. Don't touch the V channel. Add a constant to H, modulo 360 degrees (or mod 256, if H is stored as bytes), and multiply S by a constant, both chosen so that the HSV coordinates of the selected color move to the HSV coordinates of the target color. Convert the transformed H and S, with the unchanged V, back to RGB.
Finally, use the mask to blend the original image with the color-transformed one. Apply this to each channel - red, green, blue:
output = (1-mask)*original + mask*transformed
If you're doing it all in byte arrays, 0 is 0.0 and 255 is 1.0, and be careful of overflow and signed/unsigned problems.
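Here is a compact sketch of the whole pipeline (mask, hue rotation, blend) in NumPy/Matplotlib. Note it uses a simple RGB-distance mask and a pure hue shift rather than the LAB test suggested above, and `tol` is an assumed threshold:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def recolor(rgb, selected, target, tol=0.15):
    """rgb: float array (H, W, 3) in [0, 1]; selected/target: RGB triples in [0, 1]."""
    # 1. Soft mask: 1.0 where a pixel matches `selected`, falling off to 0.0.
    dist = np.linalg.norm(rgb - np.asarray(selected), axis=-1)
    mask = np.clip(1.0 - dist / tol, 0.0, 1.0)[..., None]

    # 2. Hue-rotate the whole image so `selected` lands on `target`
    #    (matplotlib stores hue in [0, 1], so the shift wraps mod 1.0).
    hsv = rgb_to_hsv(rgb)
    dh = (rgb_to_hsv(np.asarray(target).reshape(1, 1, 3))[0, 0, 0]
          - rgb_to_hsv(np.asarray(selected).reshape(1, 1, 3))[0, 0, 0])
    hsv[..., 0] = (hsv[..., 0] + dh) % 1.0
    transformed = hsv_to_rgb(hsv)

    # 3. Blend per channel: output = (1-mask)*original + mask*transformed
    return (1.0 - mask) * rgb + mask * transformed
```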

How is the organization of the 12 Bit RGB color format?

I know that the 12-bit RGB color palette format has 4 bits for each color: R, G, B.
But how is the raw data structured? I.e.:
1.) Does each color have a corresponding byte, which in turn has 4 padding bits and 4 data bits for the color data,
or
2.) Is it a packed format, i.e. Byte-1 = (padding bits + 4 R bits), Byte-2 = (4 G bits + 4 B bits)?
How is the packing done?
Thank You.
-AD
Where?
In memory, it can be anything: most likely it would be held in a 3-char array or a struct.
On disk, since space is at a premium, it would likely be held in an even tighter format: 3 bytes representing two adjacent pixels, [RG][BR][GB], packed/unpacked on write/read.
Still, it all depends on the specs of your format/platform.
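As a sketch of that tighter on-disk layout, assuming the [RG][BR][GB] ordering above (two 12-bit pixels packed into 3 bytes):

```python
def pack_two(p1, p2):
    # p1, p2 are (r, g, b) tuples with each channel in 0..15 (4 bits).
    (r1, g1, b1), (r2, g2, b2) = p1, p2
    return bytes([(r1 << 4) | g1, (b1 << 4) | r2, (g2 << 4) | b2])

def unpack_two(data):
    b0, b1, b2 = data
    return ((b0 >> 4, b0 & 0xF, b1 >> 4), (b1 & 0xF, b2 >> 4, b2 & 0xF))

assert unpack_two(pack_two((15, 0, 7), (1, 2, 3))) == ((15, 0, 7), (1, 2, 3))
```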
I know that the 12-bit RGB color palette format has 4 bits for each color: R, G, B.
Are you sure it's a palette format?
Generally, a palette format is made up of two distinct parts: the palette itself and the image. The palette is a look-up table of colour values, the format of which is implementation-specific. The image is then a list of index values into the palette.
Palette formats are normally used to save memory, or sometimes to do neon-style animation (e.g. the Windows 95 loading screen had that blue strip at the bottom: the image was written to the screen once, and then some of the colours in the palette were rotated every few ms).
On a CD+G, each twelve-bit palette entry is stored using the six LSBs of two consecutive bytes; there is no wasted space, because the upper two bits are used for storing timing information. I think the Amiga just used three of the four nybbles in a 16-bit word. I'm not sure what other formats you might be thinking of.
On the Amiga the system is very simple:
$0fff = white... in decimal that is 0 (unused), 15 (max red), 15 (max green), 15 (max blue).
$0a04 = red with a hint of blue, i.e. red-violet: a mix of red at strength 10 and blue at strength 4, while green isn't added at all.
R, G, B are each 4-bit numbers, and $FFF = 4095 (4096 values counting black).
Each colour is 4 bits. Three times 4 bits = 12 bits, hence 12-bit color range.
Link to 12-bit RGB on Wiki listing systems using 12-bit RGB
