I am attempting to make 4-bit, 16-color video with an adaptive color palette. I can convert each frame into a 16-color format, but I can't find an algorithm to automatically generate a 16-color palette for each frame.
I plan to change the color palette each frame to make the most use of the 16 colors, similar to Photoshop's algorithm for generating the best 16 or 256 colors for a GIF. Can someone show me, or point me towards, an algorithm I can use to generate the adaptive palettes?
I am coding my project in Java since importing images and libraries was easier in it, but I can also code in C and C++. My goal for this project is to play videos on a TI-84 Plus CE, which uses a 4-bit, 16-color palette. The video linked below shows what the quality would look like.
Here is what my program can generate so far with a fixed color palette. The fixed palette does not match the source video well; most colors become grey: https://media.giphy.com/media/m20vIaWvnYoCaqqfLL/giphy.gif
I know dithering is an option, but I will implement it after I can get adaptive palettes to work.
I tried searching the internet for palette-generation algorithms, but all the resources lead me to Photoshop tutorials instead of coding algorithms.
I also tried generating random colors for the palettes, but the results were not much better. I don't want to hand-pick the palettes since that's too tedious.
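Photoshop's exact method is proprietary, but the classic algorithm for generating an adaptive palette is median cut; it is what many GIF/PNG quantizers use. Below is a minimal sketch in Python (chosen for brevity; the structure ports directly to Java). The function name and details are illustrative, not taken from any particular library:

```python
def median_cut(pixels, n_colors=16):
    """Quantize a list of (r, g, b) tuples down to n_colors palette entries
    using median cut: repeatedly split the bucket with the widest channel
    range at its median, then average each bucket into a palette color."""
    def channel_range(bucket, c):
        vals = [p[c] for p in bucket]
        return max(vals) - min(vals)

    buckets = [list(pixels)]
    while len(buckets) < n_colors:
        # Only buckets with more than one pixel can be split further.
        splittable = [b for b in buckets if len(b) > 1]
        if not splittable:
            break
        bucket = max(splittable,
                     key=lambda b: max(channel_range(b, c) for c in range(3)))
        c = max(range(3), key=lambda c: channel_range(bucket, c))
        bucket.sort(key=lambda p: p[c])
        mid = len(bucket) // 2
        buckets.remove(bucket)
        buckets += [bucket[:mid], bucket[mid:]]
    # Each palette entry is the mean color of one bucket.
    return [tuple(sum(p[c] for p in b) // len(b) for c in range(3))
            for b in buckets]
```

Run it on each frame's pixels to get that frame's 16-color palette, then map every pixel to its nearest palette entry. K-means clustering in RGB space is a common refinement of the same idea.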
If we have an RGB image, most browsers and, in fact, monitors only support sRGB space. I am trying to understand something important. Does the monitor/web then convert each of the pixels in the image to sRGB and then display it? Meaning we are actually seeing the sRGB version of the image.
Also, if that is the case, which formula can we use to do the conversion? And if we did the conversion ourselves, I assume we would get an image that 'looks' exactly the same as the original?
The pixel values in an image file are just numbers. If the image is "in" a space larger than sRGB (such as ProPhoto), the pixel values will only be "converted" to sRGB if you have color management enabled, OR you perform the conversion yourself.
A browser or device will only convert tagged images of a non-sRGB colorspace TO sRGB IF there is a color management engine.
With no color management, and a standard sRGB monitor, all images will display "as if" they were sRGB, regardless of their colorspace. I.e. they may display incorrectly.
Even with color management, if the image is untagged, it will be displayed as whatever default (usually sRGB) the system is set to use.
As for formulas: the conversion is known generally as "gamut mapping" — you are literally mapping the chromaticity components from one space to another. There are multiple techniques and methods which I will discuss below with links.
If you want to do your own colorspace conversions, take a look at Bruce Lindbloom's site. If you want a color management engine you can play around with, check out Argyll, and here is a list of open source tools.
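As a concrete starting point for doing conversions yourself: most colorspace math begins by undoing the sRGB tone response curve. These are the piecewise functions from the sRGB spec (IEC 61966-2-1), also tabulated on Lindbloom's site; a small Python sketch:

```python
def srgb_to_linear(v):
    """Decode one gamma-encoded sRGB channel (0.0-1.0) to linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """Encode one linear-light channel (0.0-1.0) back to gamma-encoded sRGB."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055
```

Many people approximate this with a plain 2.2 gamma, but the spec's curve has a linear toe near black, which matters for dark values.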
EDIT TO ADD: Terms & Techniques
Because there are other answers here with some "alternate" (spurious) information, I'm editing my answer here to add and help clarify.
Corrections
sRGB uses the same primary and whitepoint chromaticities as Rec.709 (aka HDTV). The tone response curve is slightly different.
sRGB was developed by HP and Microsoft in 1996, and was set as an international standard by IEC circa 1998.
The W3C (World Wide Web Consortium) set sRGB as the standard for all web content, defined in CSS Color.
Side note, "HTML" is not a standards organization, it is a markup language. sRGB was added to the CSS 3 standard.
Profiles do nothing if there is no color management system in place.
And to be clear (in reference to another answer): RGB is a color MODEL, sRGB is a color SPACE, and a parsable description like an ICC profile of sRGB is a color PROFILE.
Also, sRGB is the STANDARD for web content, and RGB values with no profile displayed in a web browser are nominally assumed to be sRGB, and thus interpreted as sRGB by most browsers. HOWEVER, if the user has a wide-gamut (non-sRGB) monitor and no color management, then a non-color-managed browser typically displays in the display's own colorspace, which can have unexpected results.
Terms
RGB is an additive COLOR MODEL. It is so named because it is a tristimulus model that uses three narrow-band light "colors" (red, green, and blue), chosen to stimulate each of the eye's cone types as independently as possible.
sRGB is a color SPACE. A color space builds on a color model, adding specifics such as the chromaticities of the primary colors, the white point, and the tone response curve (aka TRC or gamma).
sRGB-IEC61966-2.1.icc is an ICC color PROFILE of the sRGB colorspace, used to inform color management software of these specifics so that appropriate conversions can take place.
Some profiles relate to a specific device like a printer.
Some profiles are used as a "working space".
An ICC profile includes some of the math related information to apply the profile to a given target.
Color Management is a system that uses information about device profiles and working color space to handle the conversion for output, viewing, soft proofing on a monitor, etc. See This Crash Course on CM
LUT or LookUp Table is another file type that can be used to convert images or apply color "looks".
Gamut mapping is the technique to convert, or map, the color coordinates of one color space to the coordinates of a different color space. The method for doing this depends on a number of factors, and the desired result or rendering intent.
Rendering intent means the approach used, as in, should it be perceptual? Absolute colorimetric? Relative with black point compensation? Saturation? See this discussion of mapping and Argyll.
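To make "mapping the chromaticity components" concrete, here is a sketch of the first step of most conversions: taking gamma-encoded sRGB to the device-independent CIE XYZ space, using the standard sRGB/D65 matrix published on Lindbloom's site (Python, illustrative only):

```python
# sRGB (D65) to CIE XYZ matrix, as published on Bruce Lindbloom's site.
M = [(0.4124564, 0.3575761, 0.1804375),
     (0.2126729, 0.7151522, 0.0721750),
     (0.0193339, 0.1191920, 0.9503041)]

def srgb_to_xyz(r, g, b):
    """Map gamma-encoded sRGB channels (0.0-1.0) to CIE XYZ (D65)."""
    def lin(v):  # undo the sRGB tone response curve first
        return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    return tuple(row[0] * rl + row[1] * gl + row[2] * bl for row in M)
```

Going from XYZ into a destination space is the same idea in reverse (that space's inverse matrix plus its own TRC); the gamut-mapping decisions come in when the result falls outside the destination gamut.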
Techniques
Colorspace transforms and conversions are non-trivial. It's not as if you can just stick image data through a nominal formula and have a good result.
A read through the several links above will help, and I'd also like to suggest Elle Stone's site, particularly regarding profiles.
Note: I use what I think is the most common notation (rather than the alternate notation used in another answer): RGB is a colour model, i.e. formulas to calculate various things, but without defined colorimetry, scale, or gamma; sRGB is a colour space, with a defined gamut, so we know which colours can be described and which cannot; and a profile is a characterisation of a device (it defines a device-specific colour space), an intent, and often some calculation methods to transform colours.
sRGB was defined by computer manufacturers and software companies to standardize colours, but with old screens and low resolutions it really didn't matter much. Note: they used the primary colours and white point of Rec.709 (HDTV), but with a different tone curve (viewing conditions differ: we watch TV and movies in darker rooms, while computers are used in brighter rooms).
So the normal way (before colour profiles): an image had 3 channels with values from 0 to 255 each, one for red, one for green, one for blue. This was sent directly to video memory, and the video card sent these values, unmodified (on digital RGB signals), to the screen. The screen used the 3 channel values as the intensities of the 3 sub-pixels. Note: the contrast and brightness controls [on CRT screens] permitted some correction.
Because the assumed colour space was sRGB (and screens were built to display sRGB), this became the standard, and it was standardized for the web as the default colour space (now specified in CSS). So if your browser has no explicit colour space (e.g. for an image), it will assume sRGB and will not change the values.
Screens improved, creation and modification of content moved to computers, and many media have different colour spaces, so images started to specify theirs: TV has a restricted range (16-235) and a different gamma (and white point); DCI-P3 (digital cinema) has a different gamma and wide-gamut primaries; printing often requires a wider gamut (forget small CMYK printers); and printing photos also requires different dynamic ranges, gammas, white points, and colour spaces.
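As an example, the TV "restricted range" is just a rescale of the 8-bit code values. A sketch of the studio-swing (16-235) to full-range (0-255) luma mapping, assuming 8-bit BT.601/709-style coding:

```python
def limited_to_full(y):
    """Expand an 8-bit studio-range ('TV') luma value, where black is 16 and
    white is 235, to full range 0-255, clamping out-of-range codes."""
    full = round((y - 16) * 255 / 219)
    return max(0, min(255, full))
```

Displaying TV-range values without this expansion is why unconverted video looks washed out: black sits at code 16 instead of 0.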
So now (assuming an RGB image, though note that many images are not RGB but YCC, e.g. JPEG), an image should have its own profile, which tells us the colour characteristics of the source (e.g. which actual red the value 255,0,0 represents). A colour-aware program will check the output profile and the input profile, and adapt the colours so that the final result is close to the intended colour.
So, if you have an unprofiled or sRGB image, and no profile for your screen (or a fake sRGB profile): the value 255,0,0 will display as the most intense and reddest red that your screen can display.
If you have an unprofiled image but a profile for the output screen: with the "absolute" intent, the screen matches the colours according to sRGB as best it can; out-of-gamut colours are rendered as the nearest in-gamut colour. With the "relative" intent, many values are scaled so that you do not lose the highlights (otherwise many out-of-gamut values would collapse to the same colour). Your eyes will correct for this, so you adapt (and we adapt quickly, e.g. to unsaturated colour spaces such as sRGB). The other intents are more about graphics: they keep the values distinct rather than faithful, which can be good for plots and comics.
If you have a profiled image it is nearly the same, just that you will find more differences.
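A toy illustration of the difference between those two intents, with channel values normalized to 0.0-1.0 and anything above 1.0 treated as "out of gamut" (grossly simplified: real gamut mapping works on 3-D colour coordinates, not single channels):

```python
def clip_absolute(v):
    """'Absolute'-style handling: out-of-range values are clipped, so many
    different out-of-gamut inputs collapse to the same output."""
    return max(0.0, min(1.0, v))

def scale_relative(values):
    """'Relative'-style handling: rescale everything so the peak fits,
    keeping out-of-gamut values distinct (highlights are preserved)."""
    peak = max(max(values), 1.0)
    return [v / peak for v in values]
```

Clipping collapses all out-of-gamut values to the same result; scaling keeps them distinct at the cost of also shifting values that were already in gamut.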
An AdobeRGB image without a profile will be displayed with the correct saturation on most wide-gamut screens (with wide gamut enabled), and it will be displayed undersaturated on an sRGB screen (if there is no profile; the "absolute" or "perceptual" intents could correct the lack of saturation).
Conversely, an sRGB image displayed on an AdobeRGB screen will look oversaturated. If the image has a profile, it will be displayed correctly.
In an RGB image (in the usual formats) you cannot have colours outside the image's own gamut: 255,0,0, 0,255,0, and 0,0,255 are the primary colours of the image's colour space, so you can only describe colours within that colour space (assume sRGB if none is specified). This is not true for some formats that allow negative values, or values above "white", e.g. formats with floating-point values (OpenEXR).
Note: wide-gamut screens often have a hardware button to switch colour space from native to sRGB (and back), because many applications were not compatible with colour profiles, but we still need browsers and mail clients.
If you are interested, the book by Giorgianni et al. is a good introduction: both authors worked at Kodak (so film [photo, movies]; they worked on creating the PhotoCD), and thus dealt with many problems around screens, colour spaces, and intents. ICC (the standard for profiles) is, in my opinion, the follow-up to that book: the ICC site has a lot of information about the topic.
In very simple terms: RGB is a color model, sRGB is a color space, and sRGB is usually delivered to software as a color profile.
A profile interprets the RGB values to a specific context e.g. device, software, browser etc. to make sure you have consistent colors across a wide range of devices where a user might see your picture.
RGB values without a profile are basically useless, because the displaying device or software has to guess how to map the RGB values to the display's color space. An analogy: imagine receiving 100 bills of an unknown currency and being asked to convert them to your home currency. It doesn't work; you need to know the exchange rate between the two.
So basically you don't have to worry about interpreting your images yourself. For the web, the soundest approach seems to be to always convert to the sRGB profile (the image's color model is still RGB) and let the browser do the interpretation.
You'll find seemingly up to date info with a graphic of the main browsers and their ability to correctly display the sRGB profile on this EIZO page.
PS: To add to the general confusion around color management – a color space might sometimes be called color model while a color profile might sometimes be called color space.
I have a PNG image with alpha values and need to reduce the number of colors. I need no more than 256 colors in the image, and so far everything I tried (from Paint Shop Pro to Leptonica, etc.) strips the image of the alpha channel and makes it unusable. Is there anything out there that does what I want?
Edit: I do not want to use an 8-bit palette. I just need to reduce the number of colors so that my own program can process the image.
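If your own program does the processing anyway, even a naive uniform quantization gets you under 256 colors while leaving the alpha channel untouched. A rough Python sketch (a proper quantizer like median cut or pngquant will pick better colors; this only shows that alpha can be carried through separately):

```python
def quantize_rgba(pixels, levels=6):
    """Snap each RGB channel of (r, g, b, a) pixels to `levels` evenly
    spaced values (6**3 = 216 possible colors), leaving alpha untouched."""
    step = 255 / (levels - 1)
    out = []
    for r, g, b, a in pixels:
        quantized = tuple(round(round(c / step) * step) for c in (r, g, b))
        out.append(quantized + (a,))
    return out
```

Because the alpha byte never enters the quantization, partial transparency survives regardless of how aggressively the colors are reduced.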
Have you tried ImageMagick?
http://www.imagemagick.org/script/index.php
8-bit PNGs with alpha transparency will only render alpha on newer web browsers.
Here are some tools and a website that do the conversion:
free pngquant
Adobe Fireworks
and website: http://www.8bitalpha.com/
Also, see similar question
The problem you describe is inherent in the PNG format. See the entry at Wikipedia and notice there's no entry in the color options table for Indexed & alpha. There's an ability to add an alpha value to each of the 256 colors, but typically only one palette entry will be made fully transparent and the rest will be fully opaque.
Paint Shop Pro has a couple of options for blending or simulating partial transparency in a paletted PNG - I know because I wrote it.
I am doing a project trying to simulate Google Analytic Map Overlays. Take a look at this link to see what I mean (you need to scroll down to where it says "Here is a geographical country-based visitor volume overview courtesy of Google Analytics"). The Flash mapping tool I have supports Hex Color codes (e.g. color='FFFFCC'). If I am not mistaken this is basically RGB coding?
I am looking for an algorithm where I can computationally create the color codes for a select number of shades of green.
It seems I really want an HSV-type calculation and not an RGB one.
I think the easiest way to accomplish this is to select a set of colors and then map them to different segments of your data. I suppose you may need more flexibility, though.
If you want to calculate the colors, you can use HSV internally and then convert to RGB using this algorithm:
http://www.cs.rit.edu/~ncs/color/t_convert.html
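With Python's standard library the HSV route is one call: hold the hue at green (120°) and full saturation, vary the value, and format the results as the hex codes the Flash tool expects. A sketch (the function name is mine):

```python
import colorsys

def green_shades(n):
    """Return n hex color codes from dark green to pure green by holding
    hue (120 degrees) and saturation fixed and varying HSV 'value'."""
    shades = []
    for i in range(n):
        v = (i + 1) / n                      # brightness: 1/n ... 1.0
        r, g, b = colorsys.hsv_to_rgb(120 / 360, 1.0, v)
        shades.append('%02X%02X%02X' % (round(r * 255),
                                        round(g * 255),
                                        round(b * 255)))
    return shades
```

Mapping your data values to an index into this list gives the Google-Analytics-style "darker means more visitors" effect.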
How can I see the color space of my image with OpenCV?
I would like to be sure it is RGB before converting it to another color space using the cvCvtColor() function.
Thanks.
Unfortunately, OpenCV doesn't provide any sort of indication as to the color space in the IplImage structure, so if you blindly pick up an IplImage from somewhere there is just no way to know how it was encoded. Furthermore, no algorithm can definitively tell you if an image should be interpreted as HSV vs. RGB - it's all just a bunch of bytes to the machine (should this be HSV or RGB?). I recommend you wrap your IplImages in another struct (or even a C++ class with templates!) to help you keep track of this information. If you're really desperate and you're dealing only with a certain type of images (outdoor scenes, offices, faces, etc.) you could try computing some statistics on your images (e.g. build histogram statistics for natural RGB images and some for natural HSV images), and then try to classify your totally unknown image by comparing which color space your image is closer to.
txandi makes an interesting point. OpenCV has a BGR colorspace which is used by default. This is similar to the RGB colorspace except that the B and R channels are physically switched in the image. If the physical channel ordering is important to you, you will need to convert your image with this function: cvCvtColor(defaultBGR, imageRGB, CV_BGR2RGB).
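As a sanity check on what that conversion actually does: BGR to RGB is nothing more than reversing the channel order of every pixel. A pure-Python sketch on a nested-list "image" (with the modern API this is cv2.cvtColor(img, cv2.COLOR_BGR2RGB), or simply img[..., ::-1] on a NumPy array):

```python
def bgr_to_rgb(image):
    """Reverse the channel order of every (b, g, r) pixel; this per-pixel
    swap is all that the CV_BGR2RGB conversion does."""
    return [[tuple(reversed(pixel)) for pixel in row] for row in image]
```

Because the swap is its own inverse, applying it twice returns the original image, which is also why accidentally mixing up BGR and RGB is so easy to miss until colors look wrong.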
As rcv said, there is no method to programmatically detect the color space by inspecting the three color channels, unless you have a priori knowledge of the image content (e.g., there is a marker in the image whose color is known). If you will be accepting images from unknown sources, you must allow the user to specify the color space of their image. A good default would be to assume RGB.
If you modify any of the pixel colors before display, and you are using a non-OpenCV viewer, you should probably use cvCvtColor(src,dst,CV_BGR2RGB) after you have finished running all of your color filters. If you are using OpenCV for the viewer or will be saving the images out to file, you should make sure they are in BGR color space.
The IplImage struct has a field named colorModel consisting of 4 chars. Unfortunately, OpenCV ignores this field. But you can use this field to keep track of different color models.
I basically split the channels and display each one to figure out the color space of the image I'm using. It may not be the best way, but it works for me.
For a detailed explanation, you can refer to the link below.
https://dryrungarage.wordpress.com/2018/03/11/image-processing-basics/