Can anyone show me a (language-agnostic) way to assign a colour value to a bit field so that comparatively similar bit fields have similar colours to each other? So, for example,
01100111
And
01110111
Are relatively close in colour. But
11011001
Is further away
By "further away" I mean distant in hue, saturation, brightness, etc...
If we have an array of all the bit fields, then it would be possible to compare them all and then produce a set of colours. But what if we don't know them in advance and we want one bit field to always be represented by one colour?
Or else we could precompute all possible colour values for a given number of bits. How would I go about doing that?
You cannot do this exactly, because (essentially) there are only three dimensions to your perceived colour space (hue, saturation, brightness), while treating the bits independently as you suggest makes a separate dimension for each bit; an 8-dimensional Hamming space cannot be embedded in 3 dimensions without distorting distances.
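That said, you can approximate the behaviour by projecting the 8 bit-dimensions down to the 3 colour channels and accepting that some Hamming distances will be distorted. A minimal sketch in Python (the fixed random projection is an arbitrary illustrative choice, not a canonical mapping):

```python
# Sketch: project an 8-bit field onto 3 colour channels so that bit
# fields with small Hamming distance tend to get similar colours.
# Any projection from 8 dimensions to 3 must distort some distances.
import random

random.seed(42)  # fixed seed: each bit field always maps to the same colour
# One weight per (bit, channel): 8 rows of (R, G, B) contributions.
PROJECTION = [[random.random() for _ in range(3)] for _ in range(8)]
# Per-channel totals, used to normalise into the 0..255 range.
TOTALS = [sum(PROJECTION[b][c] for b in range(8)) for c in range(3)]

def bits_to_colour(bits: int) -> tuple:
    """Deterministically map an 8-bit value to an (R, G, B) colour."""
    channels = [0.0, 0.0, 0.0]
    for bit in range(8):
        if bits & (1 << bit):
            for c in range(3):
                channels[c] += PROJECTION[bit][c]
    return tuple(int(255 * channels[c] / TOTALS[c]) for c in range(3))

print(bits_to_colour(0b01100111))  # close to the next one...
print(bits_to_colour(0b01110111))  # ...since they differ in one bit
print(bits_to_colour(0b11011001))  # differs in many bits, drifts further
```

Flipping one bit changes each channel by at most that bit's weight, so nearby bit fields get nearby colours; the distortion shows up as distinct bit fields occasionally colliding on similar colours.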
I was wondering if anyone knows whether there is a custom colour space transformation command in FFmpeg, so that I can specify where I want the corners of the colour space to move to, which might differ from situation to situation.
The problem I'm facing at the moment is that the colour always seems to come out slightly different. If I use gamma correction in FFmpeg, it translates the space but doesn't transform or stretch the colour space triangle to fit the new one, which means that if, for example, I match a colour by empirical binary search, it won't match the other colours.
Also, I can't seem to find a predefined colour space transformation in FFmpeg that matches the colour swatches I have.
I'd like to try to automate the colour space transformation in video by way of optimisation, so that it finds the corners of the colour space for me based on how the video renders out versus what colours are expected. Is something like this possible? A rough sketch of the kind of fit I have in mind follows.
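A minimal sketch of the optimisation I have in mind, assuming I can sample matching swatch pairs (rendered colour vs. expected colour) and that a 3x3 matrix is enough; the swatch values below are placeholders, not real measurements. The fitted matrix could then be applied with a matrix-style filter such as FFmpeg's colorchannelmixer:

```python
# Sketch: fit a 3x3 colour transform from measured swatch pairs by
# least squares. Assumes linear RGB in [0, 1]; swatch values below
# are made-up placeholders.
import numpy as np

rendered = np.array([[0.90, 0.10, 0.10],   # what currently renders out
                     [0.10, 0.85, 0.15],
                     [0.12, 0.12, 0.88],
                     [0.50, 0.50, 0.50]])
expected = np.array([[1.00, 0.00, 0.00],   # what the swatches should be
                     [0.00, 1.00, 0.00],
                     [0.00, 0.00, 1.00],
                     [0.50, 0.50, 0.50]])

# Solve rendered @ M ~= expected in the least-squares sense.
M, _, _, _ = np.linalg.lstsq(rendered, expected, rcond=None)

# Rows of M.T are the per-output-channel coefficients in the order a
# matrix filter like colorchannelmixer expects (rr rg rb / gr gg gb / ...).
print(M.T)
```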
Thanks in advance for any insight or suggestions regarding this.
I have to find the nearest color. For example, I have two colors, colorA1 and colorA2, which are nearly the same color. I also have another color, colorB1.
And I need a method like this:
Color getNearestColor(colorA1, colorA2, colorB1). This method should give me a colorB2 computed from the difference between colorA1 and colorA2: colorB2 should be at the same distance from colorB1 as colorA2 is from colorA1.
Can you give some ideas how to implement it?
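For concreteness, a minimal sketch of one interpretation of that signature, assuming colours are RGB tuples and reading "the difference of colorA1 and colorA2" as a per-channel offset applied to colorB1 (the helper and the interpretation are mine, not an existing API):

```python
# Sketch: apply the A1 -> A2 per-channel offset to B1, clamped to 0..255,
# so B2 is displaced from B1 the same way A2 is displaced from A1.
def get_nearest_color(color_a1, color_a2, color_b1):
    """Each colour is an (r, g, b) tuple with components in 0..255."""
    return tuple(
        max(0, min(255, b + (a2 - a1)))
        for a1, a2, b in zip(color_a1, color_a2, color_b1)
    )

# A2 is slightly brighter than A1, so B2 comes out slightly brighter than B1.
print(get_nearest_color((120, 150, 225), (130, 155, 230), (40, 200, 90)))
# -> (50, 205, 95)
```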
To find the nearest colour, you need a definition of "near", so a metric.
On Wikipedia you will find different metrics of color difference.
Personally I would use the weighted squared distance 2·ΔR² + 4·ΔG² + 3·ΔB², where ΔR, ΔG and ΔB are the per-channel differences between the two colours. There is no need for square roots, since you only compare values of the same metric. It is easy to calculate and you can use just integers (with 32-bit integers you will have no overflow).
Then pick the colour with the smallest difference from your target colour.
The other methods are more precise, but for those "RGB" is not enough: you need to know which colour space you are using (probably sRGB).
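A minimal sketch of that comparison (the palette values are arbitrary examples):

```python
# Sketch: nearest colour under the weighted metric 2*dR^2 + 4*dG^2 + 3*dB^2.
# Integer-only: no square roots are needed when only comparing distances.
def weighted_distance_sq(c1, c2):
    dr, dg, db = (a - b for a, b in zip(c1, c2))
    return 2 * dr * dr + 4 * dg * dg + 3 * db * db

def nearest_colour(target, palette):
    return min(palette, key=lambda c: weighted_distance_sq(target, c))

palette = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (128, 128, 128)]
print(nearest_colour((100, 110, 120), palette))  # -> (128, 128, 128)
```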
What is the relationship between color spaces (RGB, XYZ) and the color matching function? Let's say we have a color matching function for the XYZ color space (a matrix with 3 rows). We also have the transformation matrix which translates from XYZ coordinates to RGB coordinates.
My understanding is that there is some visual input, which is made up of the color spectrum S(λ). The human eye does not see the world directly; it only sees its interpretation of the world. The eye has 3 cone types (LMS), each of which is roughly responsible for processing RED, GREEN, or BLUE. The eye sees a spectral color because it sums over a RED, GREEN, BLUE vector, and this sum matches the color of the input. In order to match the color, there is a color matching function, which takes the input spectrum and produces the weights by which to multiply the primary RED, GREEN, BLUE color vectors. These then get added, and their output visually matches the spectral input, even though the spectrum had many, many frequencies in it while the eye was only adding 3 components. So we went from a HUGE space to a space where we can describe everything with 3 vectors, summed as dictated by the color matching function.
The spectral input, color primaries, and color matching functions behave as described above and can be summarized in this formula:

match(s) = Σ_{i=1..3} p_i ∫ c_i(λ) s(λ) dλ

where p_i is the 3D vector of primary colors, c (the color matching function) is also a vector of 3 components, and finally s is the spectral input; the integral ∫ c_i(λ) s(λ) dλ is the weight applied to the i-th primary.
We have XYZ color space, and a corresponding color matching function which does what is described above. We are then given matrix T, which transforms XYZ coordinates to RGB coordinates. We already know T, and we need to use it to produce a new color matching function for the RGB color space.
I do not understand how the color space relates to the choice of primaries pi(λ) and the choice of color matching functions ci(λ).
I have been trying to understand colours for months, and after some research I believe I have some insights which can probably help answer your question.
"I do not understand how the color space relates to choice of primaries pi(λ)"
Primaries are nothing but the wavelengths of the colors that we choose to use for making all the other colors in the space, and that choice also defines the gamut of the colour space. So if you play with the applet in the link given below, you can see that the whole gamut of the colour space changes when you change a primary.
Have a look at the "Alternative primaries and gamuts" section.
Now, I do not know how much you understand about RGB and XYZ, or what you mean when you say RGB here (I am assuming you are referring to sRGB gamut values). XYZ are actually tristimulus values (sometimes written rho, gamma and beta for the cone responses), and just for simplicity XYZ are converted to xy space, from which you get your standard sRGB gamut.
Please go through this if you are interested in understanding how colour sensors work and how sensor values are converted to an XYZ matrix.
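To tie this back to the matrix T from the question: once T (XYZ to RGB) is known, the RGB colour matching functions are obtained by applying T to the XYZ matching functions at every wavelength, i.e. c_RGB(λ) = T · c_XYZ(λ). A minimal sketch, using the standard XYZ-to-linear-sRGB matrix and a few approximate CIE 1931 samples:

```python
# Sketch: colour matching functions under a change of primaries.
# If T maps XYZ coordinates to RGB coordinates, then at each wavelength
# c_RGB(lambda) = T @ c_XYZ(lambda).
import numpy as np

# Standard XYZ -> linear sRGB matrix (IEC 61966-2-1).
T = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])

# A few samples of the CIE 1931 matching functions (x_bar, y_bar, z_bar);
# approximate values, one row per wavelength. Real work would use the
# full tables at 1 nm or 5 nm steps.
c_xyz = np.array([[0.3362, 0.0380, 1.7721],   # ~450 nm
                  [0.0049, 0.3230, 0.2720],   # ~500 nm
                  [1.0622, 0.6310, 0.0008]])  # ~600 nm

# Apply T at every sampled wavelength: rows become (r_bar, g_bar, b_bar).
c_rgb = c_xyz @ T.T
print(c_rgb)
```

Note that the resulting RGB matching functions go negative at some wavelengths, which is exactly why some spectral colours lie outside the sRGB gamut and cannot be reproduced by those primaries.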
Please comment if I have missed any information or answer needs editing.
I think lots of issues with color selection are due to technical problems people had to solve. Usually you are not trying to reproduce colors as accurately as possible, but to make them pleasant looking, cheap, and fast to calculate on a CPU. If someone watches the plains of New Zealand on TV, he is very unlikely to know what they really look like, but he almost certainly wants to enjoy the picture and pay little for it.
Several reasons why you might want to use different color matching functions include:
You are taking pictures under non-white light and you want your picture to look natural.
You are taking underwater pictures and want to compensate for the fact that water attenuates different frequencies at different speeds.
Your sensor is not perfect and you want to compensate for that.
On the other hand, you might want to change your primaries for some reason. For example, you might be taking a picture of a scene with a limited range of colors; by nudging your primaries a little you might get a "fuller" picture.
Finally, sometimes you just have to compensate for the limitations of your devices. The phosphors on a CRT TV will impose some restrictions, and so will noise in the air when transmitting over PAL. On the other hand, if you go digital you might be forced to use fewer than 36 bits per pixel. In that case you will have to make compromises, and choosing the matching functions carefully gives you the opportunity to lose as little as possible.
If you want a short tutorial, visit Cambridge in Colour.
Here is Szeliski's computer vision textbook; look at chapters 1, 2 and 10.
Poynton has a list of common transformations.
I have an image and I am picking colors from it by RGB (data sampling). I select N points from a specific region of the image which has the "same" color. By "same" I mean that that part of the image belongs to one object (let's say a yellow object). Each picked point in the RGB case has three values [R, G, B], for example [120, 150, 225], and the maximum and minimum for each field are 255 and 0 respectively.
Let's assume that I picked N points from the region of the object in the image. The points obviously have different RGB values, but they come from the same family (a gradient of the specific color).
Question:
I want to find a range for each RGB field such that, when I apply a color filter to the image, the pixels belonging to that specific object remain (they are considered the inliers). Is it correct to take the maximum and minimum of the sampled points and use them as the filter range? For example, if the min and max of the R field are 120 and 170 respectively, can [120, 170] be used as the range to keep?
In my opinion the idea is not correct, because the samples are finite: there will be points on the object whose values fall outside the min/max range of the sampled points.
What is a better solution to include more points as inliers?
If anybody needs to see collected data samples, please let me know.
I am not sure I fully grasp what you are asking for, but in my opinion filtering in RGB is not the way to go. You should use a different color space than RGB if you want to compare pixels of similar color. RGB is good for representing colors on a screen, but you actually want to look at the hue, saturation and intensity (lightness, or luminance) for analysing visible similarities in colors.
For example, you should convert your pixels to HSI or HSL color space first, then compare the different parameters you get. At that point, it is more natural to compare the resulting hue in a hue range, saturation in a saturation range, and so on.
Go here for further information on how to convert to and from RGB.
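A minimal sketch of that idea with OpenCV (the file name and sample coordinates are placeholders; the 2.5-sigma padding is an arbitrary choice):

```python
# Sketch: filter an image by an HSV range learned from sampled points,
# instead of a strict RGB min/max. Uses OpenCV.
import cv2
import numpy as np

img = cv2.imread("image.png")                  # BGR, as OpenCV loads it
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Sampled object pixels as (x, y) coordinates - placeholders here.
samples = [(120, 80), (125, 82), (130, 90)]
sample_hsv = np.array([hsv[y, x] for x, y in samples], dtype=np.float32)

# Mean +/- 2.5 standard deviations per channel tolerates unsampled
# object pixels better than the min/max of the samples would.
mean, std = sample_hsv.mean(axis=0), sample_hsv.std(axis=0)
lower = np.clip(mean - 2.5 * std, 0, 255).astype(np.uint8)
upper = np.clip(mean + 2.5 * std, 0, 255).astype(np.uint8)

# Note: OpenCV hue lives in 0..179 and wraps around, so a red object
# may need two ranges.
mask = cv2.inRange(hsv, lower, upper)          # 255 where the pixel fits
result = cv2.bitwise_and(img, img, mask=mask)
cv2.imwrite("filtered.png", result)
```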
What happens here is that you are implicitly trying to reinvent either color indexing or histogram back-projection. You call it a color filter, but it is better to focus on probabilities than on colors and color spaces. Colors are of course not super reliable and change with lighting (though hue tends to stay the same under non-colored illumination), which is why some color spaces are better than others. You can handle this separately, but it seems that you are more interested in the principles of a "filtering operation" that will segment the foreground object from the background. Hopefully.
In short, histogram back-projection works by first creating a histogram of R, G, B within the object area and then back-projecting it into the image in the following way: for each pixel in the image, find its bin in the histogram, calculate its relative weight (probability) given the overall sum of the bins, and put this probability into the image. In this way each pixel gets the probability that it belongs to the object. You can improve it by dividing by the probability of the background if you want to model the background too.
The result will be messy but will somewhat resemble an object segment plus some background noise. It has to be cleaned up and then reconnected into an object using separate methods such as connected components, GrabCut, morphological operations, blur, etc.
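OpenCV ships an implementation of back-projection; a minimal sketch, with the file name and object rectangle as placeholders:

```python
# Sketch: histogram back-projection with OpenCV, followed by the kind of
# clean-up pass described above (threshold + morphology).
import cv2
import numpy as np

img = cv2.imread("image.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Hue/saturation histogram inside the sampled object region.
roi = hsv[50:100, 120:180]                     # placeholder object area
hist = cv2.calcHist([roi], [0, 1], None, [30, 32], [0, 180, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Back-project: each pixel gets a value proportional to how often its
# hue/saturation bin occurred inside the object region.
prob = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], scale=1)

# Clean up the messy result: threshold, then close small holes.
_, mask = cv2.threshold(prob, 50, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
cv2.imwrite("mask.png", mask)
```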
Let's say that I have a list of valid color values like [0x67FF82, 0x808080, 0xffffff, ...] and, given an input color in hex, I want to find which color in the list of acceptable colors the input color is closest to.
My thought is that I'd find the color for which the absolute differences of the red, green, and blue values are smallest. Is this correct?
It sounds like you're looking for a way to quantify the "distance" between colors - in math, they'd call it a metric. Many people are intuitively pretty comfortable with the Euclidean metric for example - it's simply the distance between two points as measured with a ruler. In the case of colors, things are more complicated because of subjective perception of different colors.
There's a pretty mathy Wikipedia article about color difference, which includes links to different implementations.
The difference or distance between two colors is a metric of interest in color science. It allows people to quantify a notion that would otherwise be described with adjectives, to the detriment of anyone whose work is color critical. Common definitions make use of the Euclidean distance in a device independent color space.
In particular, there's Python Colormath, an implementation in Python that converts between different color encodings and also seems to have a function for calculating the distance between two colors. If you happen to be coding in Python, that sounds helpful, although I unfortunately don't have any personal experience with that tool. There are also similar resources available for MATLAB and Excel provided by the authors of CIEDE2000, a leading color-difference formula.
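A minimal sketch of the simple approach from the question, treating each 0xRRGGBB value as an (R, G, B) point and using squared Euclidean distance (for perceptually better matches, a formula like CIEDE2000 via colormath could be swapped in):

```python
# Sketch: nearest acceptable colour to an input hex colour, using squared
# Euclidean distance in RGB. Simple, but not perceptually uniform.
VALID = [0x67FF82, 0x808080, 0xFFFFFF]

def to_rgb(color: int):
    """Split 0xRRGGBB into an (r, g, b) tuple."""
    return (color >> 16) & 0xFF, (color >> 8) & 0xFF, color & 0xFF

def distance_sq(c1: int, c2: int) -> int:
    return sum((a - b) ** 2 for a, b in zip(to_rgb(c1), to_rgb(c2)))

def closest(color: int, valid=VALID) -> int:
    return min(valid, key=lambda v: distance_sq(color, v))

print(hex(closest(0x70F080)))  # -> 0x67ff82
```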