How do I compare the quality of GIF and PNG images using colors? Does calculating the bits per pixel work?

Since GIF uses 8-bit color depth and PNG uses 24-bit, I can notice the difference between the two pictures.
I want to find a way to compare the colors of two images not by looking but with calculated data.
What I have done so far is calculate the BPP of both the GIF and the PNG image, assuming that would be the best way to compare the two formats.
I'm not sure whether finding the BPP gives me the actual color difference, or whether it is even the correct approach.

Although a GIF has a maximum of 256 colors, it can still have high quality as long as the original "raw" image has fewer than 256 colors (imagine a cartoon picture that only uses 8 colors, with no shading/blending etc.). In your question you mentioned you could notice the difference between the two pictures, so I assume you are talking about a "raw" image that has more than 256 colors, such as a natural photograph where the sky color changes gradually.
In this case, I think you might check the histogram of the images. For the gif image it will have at most 256 entries in the histogram; for the higher quality png image, it should have a histogram that has a matching shape (peaks and valleys), but more than 256 non-zero entries. If this is true you can almost be certain the png has a higher quality (assuming they are indeed the same picture).
You may even be able to further find out how the gif reduced the number of entries by combining several neighboring entries in the png's histogram into one entry.
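As a minimal sketch of the histogram idea above: count the non-zero histogram entries (i.e. distinct colors) of each image and compare. The pixel lists here are synthetic stand-ins; in practice you would read them from the decoded files (for example with Pillow's `Image.getdata()`, which is an assumption, not something the question specifies):

```python
from collections import Counter

# Synthetic stand-ins for the two decoded images: a smooth gradient
# (what the 24-bit PNG might hold) and the same gradient quantized
# to a small palette (what a GIF encoder might have produced).
truecolor = [(r, r // 2, 255 - r) for r in range(256)]
palettized = [(r // 16 * 16, r // 32 * 16, (255 - r) // 16 * 16)
              for r in range(256)]

def distinct_colors(pixels):
    """Number of non-zero histogram entries, i.e. distinct colors used."""
    return len(Counter(pixels))

print(distinct_colors(truecolor))   # 256 distinct colors
print(distinct_colors(palettized))  # 16 distinct colors
```

If one image has at most 256 distinct colors and the other has a similarly shaped histogram with far more non-zero entries, that supports the GIF-versus-PNG quality conclusion described above.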

Related

How to choose a range for filtering points by RGB color?

I have an image and I am picking colors by RGB (data sampling). I select N points from a specific region in the image which has the "same" color. By "same" I mean that part of the image belongs to an object (let's say a yellow object). Each picked point in the RGB case has three values [R,G,B], for example [120,150,225]. The maximum and minimum for each field are 255 and 0 respectively.
Let's assume that I picked N points from the region of the object in the image. The points obviously have different RGB values but from the same family (a gradient of the specific color).
Question:
I want to find a range for each RGB field such that, when I apply a color filter to the image, the pixels belonging to that specific object remain (are considered inliers). Is it correct to take the maximum and minimum of the sampled points and use them as the filter range? For example, if the min and max of the R field are 120 and 170 respectively, can [120, 170] be used as the range of values to keep?
In my opinion, this approach is not correct: when you take the max and min of a set of sampled points, some object pixels will still fall outside that range, so parts of the object won't pass the filter.
What is a better solution to include more points as inliers?
If anybody needs to see collected data samples, please let me know.
I am not sure I fully grasp what you are asking for, but in my opinion filtering in RGB is not the way to go. You should use a different color space than RGB if you want to compare pixels of similar color. RGB is good for representing colors on a screen, but you actually want to look at the hue, saturation and intensity (lightness, or luminance) for analysing visible similarities in colors.
For example, you should convert your pixels to HSI or HSL color space first, then compare the different parameters you get. At that point, it is more natural to compare the resulting hue in a hue range, saturation in a saturation range, and so on.
Go here for further information on how to convert to and from RGB.
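To make the suggestion above concrete, here is a small sketch using Python's standard `colorsys` module (my choice for illustration; the thresholds and the example pixels are made up, not from the question). It converts RGB to hue/saturation/lightness and keeps a pixel if its hue falls in a target range and it is saturated enough for the hue to be meaningful:

```python
import colorsys

def rgb_to_hsl(r, g, b):
    # colorsys works on floats in [0, 1] and returns (h, l, s)
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, l

def matches(pixel, hue_min, hue_max, sat_min=0.2):
    """Keep a pixel if its hue falls in [hue_min, hue_max] degrees
    and it is saturated enough to have a meaningful hue."""
    h, s, _l = rgb_to_hsl(*pixel)
    return hue_min <= h <= hue_max and s >= sat_min

# A yellowish pixel passes a yellow hue range; a blue one does not,
# and a gray pixel is rejected by the saturation check.
print(matches((230, 200, 40), 40, 70))   # True
print(matches((40, 60, 230), 40, 70))    # False
```

The point of the saturation check is that near-gray pixels have an essentially arbitrary hue, so filtering on hue alone would misclassify them.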
What happens here is that you are implicitly trying to reinvent either color indexing or histogram back-projection. You call it a color filter, but it is better to focus on probabilities than on colors and color spaces. Colors are of course not very reliable and change with lighting (though hue tends to stay the same under non-colored illumination), which is why some color spaces are better than others. You can handle that separately, but it seems you are more interested in the principle of the "filtering operation" that segments the foreground object from the background.
In short, histogram back-projection works by first creating a histogram of R, G, B within the object area and then back-projecting it into the image as follows: for each pixel in the image, find its bin in the histogram, calculate its relative weight (probability) given the overall sum of the bins, and write that probability into the output image. In this way each pixel gets the probability that it belongs to the object. You can improve this by also dividing by the probability of the background, if you want to model the background too.
The result will be messy but should somewhat resemble the object segment plus some background noise. It has to be cleaned up and then reconnected into an object using separate methods such as connected components, GrabCut, morphological operations, blur, etc.
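The back-projection step described above can be sketched in a few lines of pure Python (the bin count, the toy pixel values, and the nested-list image representation are all my assumptions for illustration; real code would typically use OpenCV or NumPy):

```python
from collections import Counter

def backproject(image, object_pixels, bins=8):
    """Probability map: for each image pixel, the relative weight of its
    bin in the R,G,B histogram built from the object samples."""
    step = 256 // bins
    def bin_of(p):
        return (p[0] // step, p[1] // step, p[2] // step)
    hist = Counter(bin_of(p) for p in object_pixels)
    total = sum(hist.values())
    return [[hist.get(bin_of(p), 0) / total for p in row] for row in image]

# Toy example: a two-pixel image; the object samples are yellowish,
# so the yellow pixel scores high and the blue pixel scores zero.
obj = [(250, 240, 10), (245, 235, 20)]
img = [[(248, 238, 15), (10, 20, 240)]]
print(backproject(img, obj))  # [[1.0, 0.0]]
```

Coarse bins (here 8 per channel) are what make the method tolerant of the gradient within the object that the question describes: nearby shades land in the same bin.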

Can images be combined by weighted averaging to obtain one image in RGB color space?

Hi all,
I have a few images of one object taken from different perspectives, so some parts of the object may be in shadow. I hope to stitch the images together to get one big image, but I find the colors in the resulting image don't appear correct. Maybe I should average the images in HSV color space instead. Can colors be averaged in RGB color space? In my case, where some parts may be in shadow, can the images still be averaged in RGB?
If you are familiar with color theory, please give me some information. Thanks.
Regards,
Jogging
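For reference, per-pixel weighted averaging in RGB looks like the sketch below (pure Python on nested lists of RGB tuples; real code would use NumPy arrays, and whether RGB is the right space for shadowed regions is exactly what the question asks, so this only illustrates the mechanics):

```python
def weighted_average(images, weights):
    """Per-pixel weighted average of same-sized RGB images.

    images: list of images, each a list of rows of (r, g, b) tuples.
    weights: one weight per image; a higher weight gives that image
    more influence (e.g. to favor the less-shadowed exposure).
    """
    total = sum(weights)
    out = []
    for rows in zip(*images):            # corresponding rows
        out_row = []
        for pixels in zip(*rows):        # corresponding pixels
            out_row.append(tuple(
                round(sum(w * p[c] for w, p in zip(weights, pixels)) / total)
                for c in range(3)))
        out.append(out_row)
    return out

# Two 1x1 "images"; the second gets three times the weight.
print(weighted_average([[[(100, 0, 0)]], [[(200, 0, 0)]]], [1, 3]))
# [[(175, 0, 0)]]
```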

Convert an image from sRGB to indexed; the color palette is ~400 colors, and the final image should use at most 70 of them

What I want to do:
I would like to convert an image to indexed color, provide the color map, and limit the image to at most 70 of the 400 provided colors. I would then like this final image to be exported in some kind of text-based format.
What I have so far:
I have the color palette as a text list of ~400 hex values, with their RGB equivalents in an Excel spreadsheet. I can create a CSV or tab-separated file in whichever of the two formats (hex or RGB) is needed. Using ImageMagick, the -remap argument will do the palette conversion, and the -colors argument will limit the number of colors to 70. If the indexed image is saved as BMP, I can easily import it into C++/Python/MATLAB/Octave, do some operations on the array, and write the result to a text file.
Where I'm stuck:
I'm struggling to efficiently turn my text-based list of values into a colormap image that I can feed to the -remap argument. I know I could create one by painstakingly making an image and setting one pixel to each color in the colormap, but there must be a better way!
Other Stuff:
If you have any other advice about the process I've mentioned above or any suggestions as to how I can do this better/faster/more efficiently, I'm all ears!
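One low-tooling way to build that one-pixel-per-color palette image is to write it as a plain-text PPM (P3) file, which ImageMagick reads directly. A sketch (the hex values and the exact ImageMagick invocation in the comment are illustrative assumptions, not tested against your palette):

```python
def hex_palette_to_ppm(hex_colors):
    """Build a plain-text PPM (P3) image, one pixel per palette color."""
    rgb = [tuple(int(h.lstrip('#')[i:i + 2], 16) for i in (0, 2, 4))
           for h in hex_colors]
    lines = ["P3", f"{len(rgb)} 1", "255"]          # header: N x 1 image
    lines += [f"{r} {g} {b}" for r, g, b in rgb]    # one pixel per color
    return "\n".join(lines) + "\n"

# Write palette.ppm, then (assuming ImageMagick is installed) something like
#   convert input.png -remap palette.ppm -colors 70 output.png
# can use it as the remap palette.
print(hex_palette_to_ppm(["#FF0000", "#00FF00", "#4080C0"]))
```

Since your list is already a CSV of hex values, the whole pipeline reduces to parsing one text file and writing another.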

Hold and modify

I have a graphics format where each pixel starts from the previous pixel's RGB value (the first pixel on a line starts from black); then either the red, green, or blue component can be modified, or the pixel can be set to any gray value (in which case the previous pixel's value isn't used). All of this has been implemented (the easy part).
What would be the best way to convert 24 bit images to this format in the highest possible quality?
Any thoughts are appreciated.
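A simple baseline for the conversion is a greedy encoder: for each pixel, try all four operations (set R, set G, set B, or set a gray) and pick the one with the smallest error against the target color. This is a sketch under stated assumptions, not the optimal answer: squared RGB error is my choice of metric, and a real encoder could do much better with lookahead or error-diffusion dithering.

```python
def encode_line(pixels):
    """Greedily encode one scanline of (r, g, b) targets into ops:
    ('mod', channel, value) or ('gray', value)."""
    prev = (0, 0, 0)                      # line starts from black
    ops = []
    for target in pixels:
        candidates = []
        for ch in range(3):               # modify exactly one of R, G, B
            c = list(prev)
            c[ch] = target[ch]
            candidates.append((('mod', ch, target[ch]), tuple(c)))
        gray = round(sum(target) / 3)     # or set the pixel to a gray
        candidates.append((('gray', gray), (gray, gray, gray)))
        err = lambda c: sum((a - b) ** 2 for a, b in zip(c, target))
        op, best = min(candidates, key=lambda oc: err(oc[1]))
        ops.append(op)
        prev = best                       # decoder state after this op
    return ops

print(encode_line([(255, 0, 0), (255, 255, 0)]))
# [('mod', 0, 255), ('mod', 1, 255)]
```

The weakness of pure greedy encoding is that an early choice can strand later pixels far from their targets; that is where lookahead or dithering would improve quality.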

Why is the bpp information at 0x1C in this .bmp image wrong?

The field at offset 0x1C indicates the image is eight bits per pixel, but it isn't: each pixel is represented by 3 bytes (24 bits).
At first, I thought Photoshop did this in error, but I found that this format was used for all greyscale images.
Instead of using four bytes per pixel, why don't .bmp images use a single value from 0 to FF to describe the greyscale value of each pixel?
EDIT: I was able to answer my own question about the file structure
from Wikipedia:
"The 8-bit per pixel (8bpp) format supports 256 distinct colors and stores 1 pixel per 1 byte. Each byte is an index into a table of up to 256 colors. This Color Table is in 32bpp 8.8.8.0.8 RGBAX format."
The color table shown in the hex editor is four bytes per entry.
Far below that is the actual pixel array, which is 8 bits per pixel.
I figured that out by calculation: the image is 64 x 64, i.e. 4096 pixels.
The pixel array starts at address 0x436 and ends at 0x1437; the difference between those two numbers is 4097 in decimal, so the pixel array is one byte per pixel.
I am still curious as to why a color table is necessary for a greyscale image, though
It looks like BMP files have no special greyscale mode: you cannot indicate in the header that the image is greyscale, so you need the color table to define the colors you use, even if all of those colors are greys.
The PNG format, by contrast, lets you declare a greyscale image, so no color table is needed there (though it is also possible to use a color table to create a greyscale PNG).
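The layout discussed above is easy to verify programmatically. This sketch reads the biBitCount field (2 bytes, little-endian, at offset 0x1C) with Python's `struct` module and computes the color-table size it implies; the fabricated 54-byte header exists only to exercise the code, since no real file is available here:

```python
import struct

def bmp_bits_per_pixel(data):
    """biBitCount: a 2-byte little-endian field at offset 0x1C of a BMP
    (14-byte file header followed by the BITMAPINFOHEADER)."""
    (bpp,) = struct.unpack_from("<H", data, 0x1C)
    return bpp

def color_table_bytes(bpp):
    """For indexed formats (<= 8 bpp), up to 2**bpp entries of 4 bytes
    each precede the pixel array; truecolor formats need no table."""
    return (1 << bpp) * 4 if bpp <= 8 else 0

# Minimal fake header just to exercise the functions:
header = bytearray(54)                    # file header + info header
header[0:2] = b"BM"                       # signature
struct.pack_into("<H", header, 0x1C, 8)   # declare 8 bits per pixel

print(bmp_bits_per_pixel(bytes(header)))  # 8
print(color_table_bytes(8))               # 1024
```

The 1024-byte table (256 entries x 4 bytes) matches what the question observed in the hex editor between the headers and the one-byte-per-pixel array.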
