How does ncurses' init_color function translate to traditional RGB colors? - ncurses

Sorry for the oddly worded title. I'd like to know how the ncurses init_color function maps its input to colors. Essentially, most developers are used to colors being represented by red, green, and blue on a 0-255 scale, but init_color takes ints on a 0-1000 scale.
For example:
If I wanted to get the color (75, 0, 130) in ncurses, would I call init_color(COLOR_NAME, 300, 0, 520)?

Short answer:
(n) * 1000 / 256
which is a little different from your numbers: 293, 0, 508.
Long answer: that of course assumes the terminal description is written to match ncurses' documentation. The assumption itself comes from X/Open Curses:
The init_color() function redefines colour number color, on terminals that support the redefinition of colours, to have the red, green, and blue intensity components specified by red, green, and blue, respectively. Calling init_color() also changes all occurrences of the specified colour on the screen to the new definition.
The color_content() function identifies the intensity components of colour number color. It stores the red, green, and blue intensity components of this colour in the addresses pointed to by red, green, and blue, respectively.
For both functions, the color argument must be in the range from 0 to and including COLORS-1. Valid intensity values range from 0 (no intensity component) up to and including 1000 (maximum intensity in that component).
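For concreteness, here is a small sketch of that scaling using Python's curses binding (the colour slot number 100 and the choice of rounding are illustrative assumptions, not something the documentation mandates):
import curses

def rgb255_to_curses(n):
    # Scale a 0-255 component to the 0-1000 range init_color expects
    return round(n * 1000 / 256)

def demo(stdscr):
    curses.start_color()
    if curses.can_change_color():
        NEW_COLOR = 100  # arbitrary redefinable colour slot (must be < COLORS)
        r, g, b = (rgb255_to_curses(c) for c in (75, 0, 130))
        curses.init_color(NEW_COLOR, r, g, b)  # roughly (293, 0, 508)

curses.wrapper(demo)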

Related

How to define BGR color range? Map color code to color name

I want to create a color mapping: define a few color names and the boundaries within which those colors should fall. For example (BGR format),
colors = {
    'red': ((0, 0, 255), (125, 125, 255)),
    'blue': ((255, 0, 0), (255, 125, 125)),
    'yellow' ....
}
So if I receive a color, let's say (255, 50, 119), I can call it blue. I want to make such a mapping for at least the colors of the rainbow plus gray, black, and white, using Python and OpenCV.
The problem is that I don't really understand where to get those boundary values - is there some kind of lowest/highest value for blue, red, and so on?
I would suggest using HSV colourspace for comparing colours because it is less sensitive to variable lighting than RGB, where green in the sunlight might be rgb(20,255,10), but green in a shadow might be rgb(3,45,2), whereas both will have a very similar Hue in HSV colourspace.
So, to get started...
Create a little 10x1 numpy array and make the first pixel red, the second orange, then yellow, green, blue, indigo, violet then black, mid-grey and white. There's a table here.
Then convert to HSV colourspace and note the Hue values.
I have started some code...
#!/usr/local/bin/python3
import numpy as np
import imageio
import cv2
# Create black image 10x1
im = np.zeros([1,10,3], dtype=np.uint8)
# Fill with colours of rainbow and greys
im[0,0,:]=[255,0,0] # red
im[0,1,:]=[255,165,0] # orange
im[0,2,:]=[255,255,0] # yellow
im[0,3,:]=[0,255,0] # green
im[0,4,:]=[0,0,255] # blue
im[0,5,:]=[75,0,130] # indigo
im[0,6,:]=[238,130,238] # violet
im[0,7,:]=[0,0,0] # black
im[0,8,:]=[127,127,127] # grey
im[0,9,:]=[255,255,255] # white
imageio.imwrite("result.png",im)
hsv=cv2.cvtColor(im,cv2.COLOR_RGB2HSV)
print(hsv)
Check image:
Check colours with Imagemagick too:
convert result.png txt:
# ImageMagick pixel enumeration: 10,1,65535,srgb
0,0: (65535,0,0) #FF0000 red
1,0: (65535,42405,0) #FFA500 orange
2,0: (65535,65535,0) #FFFF00 yellow
3,0: (0,65535,0) #00FF00 lime
4,0: (0,0,65535) #0000FF blue
5,0: (19275,0,33410) #4B0082 indigo
6,0: (61166,33410,61166) #EE82EE violet
7,0: (0,0,0) #000000 black
8,0: (32639,32639,32639) #7F7F7F grey50
9,0: (65535,65535,65535) #FFFFFF white
Now look at the HSV array below - specifically the first column (Hue). You can see Red has a Hue=0, Orange is 19, Yellow is 30 and so on. Note too that the Black, Grey and White all have zero Saturation and Black has a low Value, Grey has a medium Value and White has a high Value.
[[[ 0 255 255]
[ 19 255 255]
[ 30 255 255]
[ 60 255 255]
[120 255 255]
[137 255 130]
[150 116 238]
[ 0 0 0]
[ 0 0 127]
[ 0 0 255]]]
Now you can make a data-structure in Python that stores, for each colour:
Lowest included Hue
Highest included Hue
Name
So, you might use:
... see note at bottom for Red
14,23,"Orange"
25,35,"Yellow"
55,65,"Green"
115,125,"Blue"
...
and so on - omit Black, Grey and White from the table.
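One simple way to hold that table in Python is a list of (lowest Hue, highest Hue, name) tuples - the boundaries below are just the illustrative ones above, and the name hueTable is a placeholder:
# (lowest included Hue, highest included Hue, name) on OpenCV's 0-179 Hue scale
hueTable = [
    (14, 23, "orange"),
    (25, 35, "yellow"),
    (55, 65, "green"),
    (115, 125, "blue"),
    # Red is deliberately absent - it wraps around 0/180, see the notes below
]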
So, how do you use this?
Well, When you get a colour to check, first convert the R, G and B values to HSV and look at the resulting Saturation - which is a measure of vividness of the colour. Garish colours will have high saturation, whereas lacklustre, greyish colours will have low saturation.
So, see if the Saturation is more than say 10% of the max possible, e.g. more than 25 on a scale of 0-255.
If the Saturation is below the limit, check the Value and assign Black if Value low, Grey if middling and White if Value is high.
If the Saturation is above the limit, check if it is within the lower and upper limits of one of your recorded Hues and name it accordingly.
So the code is something like this:
def ColorNameFromRGB(R, G, B):
    # Make a single pixel from the parameters
    onepx = np.reshape(np.array([R, G, B], dtype=np.uint8), (1, 1, 3))
    # Convert it to HSV
    onepxHSV = cv2.cvtColor(onepx, cv2.COLOR_RGB2HSV)
    H, S, V = onepxHSV[0, 0]
    # Low saturation means a greyish colour - decide by Value alone
    if S < 25:
        if V < 85:
            return "black"
        elif V < 170:
            return "grey"
        return "white"
    # This is a saturated colour - iterate through the colour names table
    # (hueTable above) and return the name of the entry with a matching Hue
    for lo, hi, name in hueTable:
        if lo <= H <= hi:
            return name
    return None  # no match - Red needs the wrap-around check described below
There are 2 things to be aware of:
There is a discontinuity in the Hue values for Red, because the HSV colour wheel is circular and the Hue for Red sits at an angle of 0, so values above 350 and below 10 are all Reds. It so happens that OpenCV scales the 0-360 range by dividing by 2, meaning it comes out as 0-180... which neatly fits in a single unsigned byte. So, for Red, you need to check for Hue greater than 175 or less than 5, say - see the small check sketched after these notes.
Be careful to always generate an 8-bit image when looking up colours, as the Hue values are scaled differently on 16-bit and float images.
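For the Red wrap-around specifically, a small check along the lines suggested above (the 175/5 thresholds are the suggested ones, not fixed constants):
def is_red(H):
    # OpenCV Hue runs 0-179 and Red straddles the wrap-around point,
    # so both ends of the scale count as Red
    return H > 175 or H < 5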
Define a distance between two colors. Then find the "closest" color name for the given color. Which definition of distance you will choose has to be guided by your requirements, because there is no "best" definition, as far as I know.
One possibility is distance in RGB space. The distance between two colors can be defined, for example, as the Euclidean (L2) distance between the colors represented as vectors in three-dimensional space: distance(a, b) = (a - b).length(). Alternatively, try the Manhattan (L1) metric if the result makes sense, because the Euclidean distance in RGB space is more of a heuristic than a perceptually valid measurement.
Another possibility is to first convert to HSV space. Then the closest color will be the one with the closest hue to the given color - unless the given color has insufficient saturation, in which case it is white, gray, or black, depending on its lightness.
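A minimal sketch of the RGB-distance approach (the palette dict and the squared Euclidean metric are illustrative choices, not the only reasonable ones):
def closest_color_name(rgb, palette):
    # palette maps names to representative (R, G, B) tuples
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(palette, key=lambda name: sq_dist(rgb, palette[name]))

palette = {"red": (255, 0, 0), "blue": (0, 0, 255), "yellow": (255, 255, 0)}
print(closest_color_name((200, 30, 40), palette))  # "red"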

How are CIE xyY Luminance Values for Color Primaries Determined?

In the sRGB color space, the luminance values for the red, green, and blue primaries are specified as 0.21216, 0.7152, and 0.0722, respectively. The white point is defined to have luminance 1. In other words, the sRGB values <1,0,0>, <0,1,0>, <0,0,1>, and <1,1,1> map to xyY values <0.64, 0.33, 21.216>, <0.3, 0.6, 71.52>, <0.15, 0.06, 7.217>, and <0.31273, 0.32902, 100> (with Y scaled by 100 by convention).
How are the luminance values for the primaries determined? Are they purely a function of the xy primaries, or a combination of the primaries and the illuminant (e.g. D65)? If so, what is the relationship? More generally, how can I determine the luminance values for an arbitrary set of primaries?
The RGB-to-XYZ matrix is determined by the chromaticities (xy values) of the red, green, and blue primaries and by the chromaticity of the white point. The white point, in turn, is determined, at least in part, by the light source and by the color matching functions in use (for example, the D65 illuminant and the CIE 1931 standard observer, respectively).
The conversion is explained in further detail on Bruce Lindbloom's Web site:
http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html
After generating the matrix, the luminances (Y values) of the three primaries are given in the second row of that matrix (see the pregenerated matrices further down on that page). Note that the formula given there takes the xy form of the primaries and the XYZ form of the white point, which can be converted from xy form by [x/y, 1, (1-(y+x))/y].
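A sketch of that computation with numpy, following the derivation on Lindbloom's page (the sRGB primaries and D65 white point in the example are the commonly published chromaticities):
import numpy as np

def primary_luminances(xy_r, xy_g, xy_b, xy_white):
    # xy chromaticity to XYZ with Y = 1
    def xy_to_XYZ(x, y):
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    # Columns are the XYZ of the primaries, each scaled to Y = 1 for now
    M = np.column_stack([xy_to_XYZ(*xy_r), xy_to_XYZ(*xy_g), xy_to_XYZ(*xy_b)])
    # Scale factors that make R = G = B = 1 reproduce the white point
    S = np.linalg.solve(M, xy_to_XYZ(*xy_white))
    # RGB-to-XYZ matrix; its second row holds the luminances of the primaries
    rgb_to_xyz = M * S
    return rgb_to_xyz[1]

print(primary_luminances((0.64, 0.33), (0.30, 0.60), (0.15, 0.06), (0.3127, 0.3290)))
# roughly [0.2126, 0.7152, 0.0722]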

Getting alpha from normal and darken color

Given
darkColor = darken(normalColor, alpha)
darkColor and normalColor are known, alpha is unknown.
How can I calculate alpha?
How should I interpolate alpha if multiple color tuples (normalColor, darkColor) are presented?
As per the Less docs, this is how the darken() function is defined:
Decrease the lightness of a color in the HSL color space by an absolute amount.
So, given the normal color and its darkened version, the logic for finding the percentage is to get the lightness of both the normal color and the dark color and then subtract the latter from the former. Less also has a built-in lightness() function for getting the lightness of a given color, so it can be used directly.
@normalColor: #AAAAAA;
@darkColor: #6A6A6A; /* this is darken(@normalColor, 25%) */
#dummy {
  percentage: lightness(@normalColor) - lightness(@darkColor);
}
Notes:
Calculation is lightness of normal color - lightness of dark color as darken decreases lightness.
The output is an approximate value, not an exact one. For example, in the above case the output is 25.09803922% and not 25%. We cannot simply round the output either, because the deviation can be positive or negative. For example, if the dark color is #919191 (= darken(@normalColor, 10%)), the calculated output is 9.80392157%.
This method works only when the dark color is actually a darkened version of the normal color. That is, the hue and saturation of the two colors should be the same as the darken function modifies only the lightness.
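The same check can also be done outside Less; here is a minimal Python sketch using the standard colorsys module (the hex parsing assumes plain #RRGGBB input):
import colorsys

def lightness(hex_color):
    # HSL lightness of a #RRGGBB colour, as a percentage
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    return colorsys.rgb_to_hls(r, g, b)[1] * 100

print(lightness("#AAAAAA") - lightness("#6A6A6A"))  # about 25.098, as noted above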

Create a false color palette and associate pixel values with it

I have raw pixel data (640x480 pixels) from an infrared camera, where each pixel value stands for a specific measured temperature. These pixel values have a 16 bit range from 0 to 65535.
I can display the pixel values as 8 bit greyscale, which works very well.
But now I want to display those pixels by using a false color palette.
I noticed 2 challenges here:
1.) Creating a false color palette. This means not just a simple RGB or HSV palette... I am thinking of a transition from black to yellow, to orange, to red, and finally to purple.
2.) Associating the pixel values to a color on my palette (e.g. 0 = black, 65535 = purple, but 31521 = ???)
Do you have an idea how I should approach this problem? I use Qt4 and Python (PyQt) but also I would be very happy if you just share the way for a solution.
One simple way would be to define colors at certain points in your range - as in your example, 0 is black, 65535 is purple, maybe 10000 is red, whatever you want to do. Set up a table with those key rgb values, and then simply interpolate between the rgb values of the key values above and below your input value to find the rgb color for any given value.
eg. if you're looking up the color for the value 1000, and your table has
value=0, color=(0,0,0)
value=5000, color=(255, 0, 255)
Then you would interpolate between these values to get the color (51, 0, 51)
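A minimal sketch of that lookup-and-interpolate idea (the key table is just the two entries from the example plus an assumed purple endpoint at 65535):
def value_to_rgb(value, keys):
    # keys: sorted list of (value, (r, g, b)) control points
    if value <= keys[0][0]:
        return keys[0][1]
    for (v0, c0), (v1, c1) in zip(keys, keys[1:]):
        if value <= v1:
            t = (value - v0) / (v1 - v0)
            return tuple(int(a + t * (b - a)) for a, b in zip(c0, c1))
    return keys[-1][1]  # clamp above the last key

keys = [(0, (0, 0, 0)), (5000, (255, 0, 255)), (65535, (128, 0, 128))]
print(value_to_rgb(1000, keys))  # (51, 0, 51), matching the example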
The easiest method is as follows:
Cast your unsigned short to a QRgb type, and use that in the QColor constructor.
unsigned short my_temp=...;
QColor my_clr((QRgb)my_temp);
This will make your values the colors between black and cyan.

Generate next color in spectrum

How would I generate the next color in the color spectrum? Like, a function that takes a red value, a green value, and a blue value for input and output. I could input solid red (RGB 255, 0, 0) and it would output an orangish-red.
EDIT: Some more background info: I'm assuming the H, S, and V values have numeric ranges from 0-255. The C program I'm writing increments the hue value if it is less than 256, resets it to 0 if it's not, converts the HSV to RGB, displays the color on the screen, and loops. I've tried a couple HSV-to-RGB functions, but they're not working.
Instead of the RGB domain for colors, you should work with HSV values. This way, you can modify the H value to move around the spectrum.
Do you have to work with RGB values? If you don't, then use HSL as #sukru suggested, otherwise, try to convert it into HSL by following the instructions here, then increment the H value by 1/12, and convert to RGB.
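Not the asker's C program, but a sketch of the same hue-stepping idea in Python with the standard colorsys module (the step of 1/12 of a hue turn follows the suggestion above):
import colorsys

def next_spectrum_color(r, g, b, step=1.0 / 12):
    # Convert 0-255 RGB to HSV, advance the hue, convert back
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    h = (h + step) % 1.0
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

print(next_spectrum_color(255, 0, 0))  # solid red -> an orange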
