Is converting YCbCr to RGB reversible?

I was playing around with some different image formats and ran across something I found odd. When converting from RGB to YCbCr and then back to RGB, the results are very similar to what I started with (the difference in pixel values is almost always less than 4). However, when I convert from YCbCr to RGB and then back to YCbCr, I often get vastly different values; sometimes a value differs by over 40.
I'm not sure why this is. I was under the impression that the colors which can be expressed in YCbCr are a subset of those in RGB, but it looks like that is completely wrong. Is there some known subset of YCbCr colors that can be converted to RGB and back to their original values?
The code I'm using to convert (based on this site):
def yuv2rgb(yuv):
    ret = []
    for rows in yuv:
        row = []
        for y, u, v in rows:
            c = y - 16
            d = u - 128
            e = v - 128
            r = clamp(1.164*c + 1.596*e, 16, 235)
            g = clamp(1.164*c - 0.392*d - 0.813*e, 16, 240)
            b = clamp(1.164*c + 2.017*d, 16, 240)
            row.append([r, g, b])
        ret.append(row)
    return ret
def rgb2yuv(rgb):
    ret = []
    for rows in rgb:
        row = []
        for r, g, b in rows:
            y = int( 0.257*r + 0.504*g + 0.098*b + 16)
            u = int(-0.148*r - 0.291*g + 0.439*b + 128)
            v = int( 0.439*r - 0.368*g - 0.071*b + 128)
            row.append([y, u, v])
        ret.append(row)
    return ret
EDIT:
I created a rudimentary 3D graph of this issue. All the points are spots with a value difference less than 10. It makes a pretty interesting shape. X is Cb, Y is Cr, and Z is Y.

No, it is not, at all. [Everything discussed here is for 8 bit.] It is obvious in the case of full-range R'G'B' to limited-range YCbCr: there is simply no bijection. For example, you can test it here:
https://res18h39.netlify.app/color
Full-range R'G'B' values (238, 77, 45) encode to limited-range YCbCr with the BT.601 matrix as (120, 90, 201) after conventional rounding; but converting that back yields (238, 77, 44) in R'G'B'. And (238, 77, 44) encodes to the same YCbCr value. Two distinct RGB triples collapse to one YCbCr triple: game over.
In the case of full-range RGB to full-range YCbCr, there are some values in YCbCr that correspond to negative R', G', B'. (For example, the limited-range BT.709 YCbCr values (139, 151, 24) correspond to RGB (-21, 182, 181); the same happens for full-range YCbCr.) So again, no bijection.
Next, limited-range R'G'B' to limited-range YCbCr: again, no bijection. Black in YCbCr is exactly (16, 128, 128) and only that; all other (16, x, y) values are not allowed [they exist in xvYCC, which is non-standard], yet they are all distinct values in R', G', B'. The same goes for white, (235, 128, 128). And the negative R', G', B' problem above applies as well, of course.
And with limited range to full range, I do not know.
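The collision in the example above is easy to reproduce; a minimal sketch using the BT.601 limited-range encoding matrix from the question:

```python
# Two distinct full-range RGB triples that land on the same limited-range
# YCbCr triple under the BT.601 matrix from the question (8-bit, rounded).
def rgb_to_ycbcr601(r, g, b):
    y  = round( 0.257*r + 0.504*g + 0.098*b + 16)
    cb = round(-0.148*r - 0.291*g + 0.439*b + 128)
    cr = round( 0.439*r - 0.368*g - 0.071*b + 128)
    return (y, cb, cr)

print(rgb_to_ycbcr601(238, 77, 45))  # (120, 90, 201)
print(rgb_to_ycbcr601(238, 77, 44))  # (120, 90, 201) -- same triple, so no bijection
```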

As I said in my comment, your first problem is that you're using y rather than c for calculations inside the loop in yuv2rgb.
The second problem is that you're clamping the RGB values to the wrong range: RGB should be 0..255.
The RGB calculations should look like this:
r = clamp(1.164*c + 1.596*e, 0, 255)
g = clamp(1.164*c - 0.392*d - 0.813*e, 0, 255)
b = clamp(1.164*c + 2.017*d , 0, 255)
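Putting both fixes together (and adding a minimal clamp helper, since one isn't shown in the question), the corrected function might look like this sketch:

```python
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

def yuv2rgb(yuv):
    ret = []
    for rows in yuv:
        row = []
        for y, u, v in rows:
            c, d, e = y - 16, u - 128, v - 128
            # clamp to the full RGB range 0..255, not 16..235/240
            r = clamp(1.164*c + 1.596*e, 0, 255)
            g = clamp(1.164*c - 0.392*d - 0.813*e, 0, 255)
            b = clamp(1.164*c + 2.017*d, 0, 255)
            row.append([r, g, b])
        ret.append(row)
    return ret

# Limited-range "white" (235, 128, 128) now maps to roughly (254.9, 254.9, 254.9)
print(yuv2rgb([[(235, 128, 128)]]))
```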

As far as I know, you should be able to convert to and from both formats with minimal precision loss.
The site you mentioned has another set of conversion formulas, "RGB to full-range YCbCr" and "Full-range YCbCr to RGB"; I believe those are the ones you should use, and they should let you convert forward and back without any problems.
EDIT:
Since those formulas haven't worked for you, I'll share the formulas that I use for conversion between RGB and YUV in android:
R = clamp(1 * Y + 0 * (U - 128) + 1.13983 * (V - 128), 0, 255);
G = clamp(1 * Y + -0.39465 * (U - 128) + -0.5806 * (V - 128), 0, 255);
B = clamp(1 * Y + 2.03211 * (U - 128) + 0 * (V - 128), 0, 255);
Y = clamp(0.299 * R + 0.587 * G + 0.114 * B, 0, 255);
U = clamp(-0.14713 * R + -0.28886 * G + 0.436 * B + 128, 0, 255);
V = clamp(0.615 * R + -0.51499 * G + -0.10001 * B + 128, 0, 255);
I've just tried it and it seems to work back and forth. Notice the additions and subtractions of 128: this YUV representation is composed of unsigned byte ranges (0..255), and so is RGB (as usual), so if you really need the (16..235) and (16..240) ranges for your YCbCr you will need another formula.
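For reference, here is a quick sketch of that round trip in Python, keeping float intermediates and rounding only at the end:

```python
# Full-range RGB <-> YUV using the coefficients above (no clamping needed
# here because the sample stays in range).
def rgb_to_yuv(r, g, b):
    y = 0.299*r + 0.587*g + 0.114*b
    u = -0.14713*r - 0.28886*g + 0.436*b + 128
    v = 0.615*r - 0.51499*g - 0.10001*b + 128
    return y, u, v

def yuv_to_rgb(y, u, v):
    r = y + 1.13983*(v - 128)
    g = y - 0.39465*(u - 128) - 0.5806*(v - 128)
    b = y + 2.03211*(u - 128)
    return r, g, b

orig = (200, 100, 50)
back = yuv_to_rgb(*rgb_to_yuv(*orig))
print([round(c) for c in back])  # [200, 100, 50]
```

The matrices are near-exact inverses, so as long as you don't round the intermediate YUV values the round trip recovers the original bytes.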

Related

should I use basinhopping instead of minimize to find colors based on a small set of transforms?

question up front
def shade_func(color, offset):
return tuple([int(c * (1 - offset)) for c in color])
def tint_func(color, offset):
return tuple([int(c + (255 - c) * offset) for c in color])
def tone_func(color, offset):
return tuple([int(c * (1 - offset) + 128 * offset) for c in color])
given an objective over a collection of colors that returns the least distance to a target color, how do I ensure that basinhopping isn't better than plain minimize in scipy?
I was thinking that, for any one color, there will be up to 4 turning points in a V-shaped curve, and so only one minimum. If the value at offset zero is itself a minimum, maybe it could be 5. Am I wrong? In any case each is a single optimum, so if we are only searching one color at a time, there's no reason to use basinhopping.
If we instead use basinhopping to scan all colors at once (we can scale the two different dimensions; in fact this is where the idea of a preprocessor function first came from), it scans them, but does not do such a compelling job of scanning all colors. Some colors it only tries once. I think it might completely skip some colors with large enough sets.
details
I was inspired by the way artyclick shows colors and allows searching for them. If you look at an individual color, for example mauve, you'll notice that it prominently displays the shades, tints, and tones of the color, rather as an artist might like. If you ask it for the name of a color, it will use a hidden unordered list of about a thousand color names and some JavaScript to find the nearest color name to the color you chose. In fact it will also show alternatives.
I noticed that quite often a shade, tint or tone of an alternative (or even of the best match) was a better match than the color it provided. For those who don't know about shade, tint and tone, there's a nice write-up at Dunn-Edwards Paints. It looks like shade and tint are the same but with signs reversed, if doing this on tuples representing colors. For tone it is different; a negative value would, I think, saturate the result.
I felt like there must be authoritative (or at least well sourced) colorname sources it could be using.
In terms of the results, since I want any color or its shade/tint/tone, I want a result like this:
{'color': '#aabbcc',
'offset': {'type': 'tint', 'value': 0.31060384614807254}}
So I can return the actual color name from the color, plus the type of color transform to get there and the amount you have to offset.
For distance of colors, there is a great algorithm that is meant to model human perception that I am using, called CIEDE 2000. Frankly, I'm just using a snippet I found that implements this, it could be wrong.
So now I want to take in two colors, compare their shade, tint, and tone to a target color, and return the one with the least distance. After I am done, I can reconstruct if it was a shade, tint or tone transform from the result just by running all three once and choosing the best fit. With that structure, I can iterate over every color, and that should do it. I use optimization because I don't want to hard code what offsets it should consider (though I am reconsidering this choice now!).
Because I want to consider negatives for tone but not for shade/tint, my objective will have to transform that. I have to include two values to optimize, since the objective function will need to know which color to transform (or else the result will give me no way of knowing which color to use the offset with).
so my call should look something like the following:
result = min(minimize(objective, (i,0), bounds=[(i, i), (-1, 1)]) for i in range(len(colors)))
offset_type = resolve_offset_type(result)
with that in mind, I implemented this solution, over the past couple of days.
current solution
from scipy.optimize import minimize
import numpy as np
import math

def clamp(low, x, high):
    return max(low, min(x, high))

def hex_to_rgb(hex_color):
    hex_color = hex_color.lstrip('#')
    return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))

def rgb_to_hex(rgb):
    return '#{:02x}{:02x}{:02x}'.format(*rgb)

def rgb_to_lab(color):
    # Convert RGB to XYZ color space
    R = color[0] / 255.0
    G = color[1] / 255.0
    B = color[2] / 255.0
    R = ((R + 0.055) / 1.055) ** 2.4 if R > 0.04045 else R / 12.92
    G = ((G + 0.055) / 1.055) ** 2.4 if G > 0.04045 else G / 12.92
    B = ((B + 0.055) / 1.055) ** 2.4 if B > 0.04045 else B / 12.92
    X = R * 0.4124 + G * 0.3576 + B * 0.1805
    Y = R * 0.2126 + G * 0.7152 + B * 0.0722
    Z = R * 0.0193 + G * 0.1192 + B * 0.9505
    return (X, Y, Z)

def shade_func(color, offset):
    return tuple([int(c * (1 - offset)) for c in color])

def tint_func(color, offset):
    return tuple([int(c + (255 - c) * offset) for c in color])

def tone_func(color, offset):
    return tuple([int(c * (1 - offset) + 128 * offset) for c in color])

class ColorNameFinder:
    def __init__(self, colors, distance=None):
        if distance is None:
            distance = ColorNameFinder.ciede2000
        self.distance = distance
        self.colors = [hex_to_rgb(color) for color in colors]

    @classmethod
    def euclidean(cls, left, right):
        return (left[0] - right[0]) ** 2 + (left[1] - right[1]) ** 2 + (left[2] - right[2]) ** 2

    @classmethod
    def ciede2000(cls, color1, color2):
        # Convert colors to LAB color space
        lab1 = rgb_to_lab(color1)
        lab2 = rgb_to_lab(color2)
        # Compute CIE 2000 color difference
        C1 = math.sqrt(lab1[1] ** 2 + lab1[2] ** 2)
        C2 = math.sqrt(lab2[1] ** 2 + lab2[2] ** 2)
        a1 = math.atan2(lab1[2], lab1[1])
        a2 = math.atan2(lab2[2], lab2[1])
        dL = lab2[0] - lab1[0]
        dC = C2 - C1
        dA = a2 - a1
        dH = 2 * math.sqrt(C1 * C2) * math.sin(dA / 2)
        L = 1
        C = 1
        H = 1
        LK = 1
        LC = math.sqrt(math.pow(C1, 7) / (math.pow(C1, 7) + math.pow(25, 7)))
        LH = math.sqrt(lab1[0] ** 2 + lab1[1] ** 2)
        CB = math.sqrt(lab2[1] ** 2 + lab2[2] ** 2)
        CH = math.sqrt(C2 ** 2 + dH ** 2)
        SH = 1 + 0.015 * CH * LC
        SL = 1 + 0.015 * LH * LC
        SC = 1 + 0.015 * CB * LC
        if (a1 >= a2 and a1 - a2 <= math.pi) or (a2 >= a1 and a2 - a1 > math.pi):
            T = 1
        else:
            T = 0
        dE = math.sqrt((dL / L) ** 2 + (dC / C) ** 2 + (dH / H) ** 2 + T * (dC / SC) ** 2)
        return dE

    def __factory_objective(self, target, preprocessor=lambda x: x):
        def fn(x):
            print(x, preprocessor(x))
            x = preprocessor(x)
            color = self.colors[x[0]]
            offset = x[1]
            bound_offset = abs(offset)
            offsets = [
                shade_func(color, bound_offset),
                tint_func(color, bound_offset),
                tone_func(color, offset)]
            least_error = min([(right, self.distance(target, right))
                               for right in offsets], key=lambda x: x[1])[1]
            return least_error
        return fn

    def __resolve_offset_type(self, sample, target, offset):
        bound_offset = abs(offset)
        shade = shade_func(sample, bound_offset)
        tint = tint_func(sample, bound_offset)
        tone = tone_func(sample, offset)
        lookup = {shade: "shade", tint: "tint", tone: "tone"}
        offsets = [shade, tint, tone]
        least_error = min([(right, self.distance(target, right)) for right in offsets], key=lambda x: x[1])[0]
        return lookup[least_error]

    def nearest_color(self, target):
        target = hex_to_rgb(target)
        preprocessor = lambda x: (int(x[0]), x[1])
        result = min(
            [minimize(self.__factory_objective(target, preprocessor=preprocessor),
                      (i, 0),
                      bounds=[(i, i), (-1, 1)],
                      method='Powell')
             for i, color in enumerate(self.colors)], key=lambda x: x.fun)
        color_index = int(result.x[0])
        nearest_color = self.colors[color_index]
        offset = preprocessor(result.x)[1]
        offset_type = self.__resolve_offset_type(nearest_color, target, offset)
        return {
            "color": rgb_to_hex(nearest_color),
            "offset": {
                "type": offset_type,
                "value": offset if offset_type == 'tone' else abs(offset)
            }
        }
let's demonstrate this with mauve. We'll define a target that is similar to a shade of mauve, include mauve in a list of colors, and ideally we'll get mauve back from our test.
colors = ['#E0B0FF', '#FF0000', '#000000', '#0000FF']
target = '#DFAEFE'
agent = ColorNameFinder(colors)
agent.nearest_color(target)
we do get mauve back:
{'color': '#e0b0ff',
'offset': {'type': 'shade', 'value': 0.0031060384614807254}}
the distance is 0.004991238317138219
agent.distance(hex_to_rgb(target), shade_func(hex_to_rgb(colors[0]), 0.0031060384614807254))
why use Powell's method?
In this arrangement, it is simply the best: no other method that supports bounds did a good job of scanning positives and negatives, and I had mixed results using the preprocessor to scale the values back to negative with bounds of (0, 2).
I do notice that in the sample test, a range of offsets between about 0.0008 and 0.003 produces the same distance, and the values my approach considers include a large number of these. Is there a more efficient solution?
If I'm wrong, please let me know.
correctness of the color transformations
what is adding a negative amount of white? (in the case of a tint) I was thinking it is like adding a positive amount of black -- ie a shade, with signs reversed.
my implementation is not correct:
agent.distance(hex_to_rgb(target), shade_func(hex_to_rgb(colors[0]), 0.1)) - agent.distance(hex_to_rgb(target), tint_func(hex_to_rgb(colors[0]), -0.1))
produces 0.3239904390784106 instead of 0.
I'll probably be fixing that soon
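One way to see why the two disagree: a shade subtracts offset * c from each channel, while a negative tint subtracts offset * (255 - c), so they only coincide where c = 127.5. A quick check using the shade/tint functions from the listing above, with mauve (#E0B0FF = (224, 176, 255)) as the sample:

```python
def shade_func(color, offset):
    return tuple(int(c * (1 - offset)) for c in color)

def tint_func(color, offset):
    return tuple(int(c + (255 - c) * offset) for c in color)

mauve = (224, 176, 255)
print(shade_func(mauve, 0.1))   # (201, 158, 229): each channel loses 10% of c
print(tint_func(mauve, -0.1))   # (220, 168, 255): each channel loses 10% of (255 - c)
```

So a negative tint is not a shade with the sign flipped; they are different transforms except at mid-gray.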

Linearly evolutive color map

I am trying to create a colormap that varies linearly according to a "w" value, from white-red to white-purple.
So...
For w = 1, the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be red.
For w = 10 (example), the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be orange.
For w = 30 (example), the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be yellow.
and so on, until...
For w = 100 (example), the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be purple.
I used this website to generate the image : https://g.co/kgs/utJPmw
I can get the first (w = 1) color map by using this code, but no idea on how to make it vary according to what I would like to :
import matplotlib.cm as cm
from matplotlib.colors import ListedColormap, LinearSegmentedColormap
color_map_1 = cm.get_cmap('Reds', 256)
newcolors_1 = color_map_1(np.linspace(0, 1, 256))
color_map_1 = ListedColormap(newcolors_1)
Any idea on how to do such a thing in Python would be very welcome.
Thank you guys!
I finally found the solution. Maybe this is not the cleanest way, but it works very well for what I want to do. The colormaps I create can vary from white-red to white-purple (color spectrum). 765 variations are possible here, but by adding some small changes to the code, it could vary much more or less, depending on what you want.
In the following code, the create_custom_colormap function gives two outputs, cmap and color_map: cmap is the matrix containing the (r, g, b) values, and color_map is the object that can be used in matplotlib (imshow) as an actual colormap, on any image.
Using the following code, define the function we will need for this job:
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import ListedColormap, LinearSegmentedColormap

def create_image():
    '''
    Create some random image on which we will apply the colormap. Any other image could replace this one, with or without extent.
    '''
    dx, dy = 0.015, 0.05
    x = np.arange(-4.0, 4.0, dx)
    y = np.arange(-4.0, 4.0, dy)
    X, Y = np.meshgrid(x, y)
    extent = np.min(x), np.max(x), np.min(y), np.max(y)
    def z_fun(x, y):
        return (1 - x / 2 + x**5 + y**6) * np.exp(-(x**2 + y**2))
    Z2 = z_fun(X, Y)
    return (extent, Z2)

def create_cmap(**kwargs):
    '''
    Create a color matrix and a color map using 3 lists of r (red), g (green) and b (blue) values.
    Parameters:
    - r (list of floats): red value, between 0 and 1
    - g (list of floats): green value, between 0 and 1
    - b (list of floats): blue value, between 0 and 1
    Returns:
    - color_matrix (numpy 2D array): contains all the rgb values for a given colormap
    - color_map (matplotlib object): the color_matrix transformed into an object that matplotlib can use on figures
    '''
    color_matrix = np.zeros([256, 3])
    color_matrix[:, 0] = kwargs["r"]
    color_matrix[:, 1] = kwargs["g"]
    color_matrix[:, 2] = kwargs["b"]
    color_map = ListedColormap(color_matrix)
    return (color_matrix, color_map)

def standardize_timeseries_between(timeseries, borne_inf=0, borne_sup=1):
    '''
    For readability reasons, I defined r, g, b values between 0 and 255. But the matplotlib ListedColormap function expects values between 0 and 1.
    Parameters:
    timeseries (list of floats): can be one color vector in our case (either r, g or b)
    borne_inf (int): the minimum value in our timeseries will be replaced by this value
    borne_sup (int): the maximum value in our timeseries will be replaced by this value
    '''
    timeseries_standardized = []
    for i in range(len(timeseries)):
        a = (borne_sup - borne_inf) / (max(timeseries) - min(timeseries))
        b = borne_inf - a * min(timeseries)
        timeseries_standardized.append(a * timeseries[i] + b)
    return np.array(timeseries_standardized)

def create_custom_colormap(weight):
    '''
    This function is at the heart of the process. It takes only one < weight > parameter, which you can choose.
    - For weight between 0 and 255, the colormaps that are created will vary between white-red (min-max) and white-yellow (min-max).
    - For weight between 256 and 510, the colormaps that are created will vary between white-green (min-max) and white-cyan (min-max).
    - For weight between 511 and 765, the colormaps that are created will vary between white-blue (min-max) and white-purple (min-max).
    '''
    if weight <= 255:
        ### 0 < w <= 255
        r = np.repeat(1, 256)
        g = np.arange(0, 256, 1)
        g = standardize_timeseries_between(g, weight/256, 1)
        g = g[::-1]
        b = np.arange(0, 256, 1)
        b = standardize_timeseries_between(b, 1/256, 1)
        b = b[::-1]
    if weight > 255 and weight <= 255*2:
        weight = weight - 255
        ### 255 < w <= 510
        g = np.repeat(1, 256)
        r = np.arange(0, 256, 1)
        r = standardize_timeseries_between(r, 1/256, 1)
        r = r[::-1]
        b = np.arange(0, 256, 1)
        b = standardize_timeseries_between(b, weight/256, 1)
        b = b[::-1]
    if weight > 255*2 and weight <= 255*3:
        weight = weight - 255*2
        ### 510 < w <= 765
        b = np.repeat(1, 256)
        r = np.arange(0, 256, 1)
        r = standardize_timeseries_between(r, weight/256, 1)
        r = r[::-1]
        g = np.arange(0, 256, 1)
        g = standardize_timeseries_between(g, 1/256, 1)
        g = g[::-1]
    cmap, color_map = create_cmap(r=r, g=g, b=b)
    return (cmap, color_map)
Use the function create_custom_colormap to get the colormap you want, by giving as argument to the function a value between 0 and 765 (see 5 examples in the figure below):
### Let us create some image (any other could be used).
extent, Z2 = create_image()
### Now create a color map, using the w value you want 0 = white-red, 765 = white-purple.
cmap, color_map = create_custom_colormap(weight=750)
### Plot the result
plt.imshow(Z2, cmap=color_map, alpha=0.7,
           interpolation='bilinear', extent=extent)
plt.colorbar()

How do I calculate the color and opacity of a transparent overlay given the background and the overlaid background?

I'm trying to reverse engineer the colors used in a popular game.
See this low quality screenshot for an example:
The left side is the 'overlayed side' which has the background color and the overlay, and the right side is just the background.
My platform is technically simple HTML & CSS and I use Firefox, but I hope whatever technique I can use will transfer to most platforms.
I looked around on the internet and found 'color blend modes' and 'color mix modes', but I wasn't able to find information on the math behind each of these modes besides the options available.
Additionally, my simple test to see if it was a simple addition problem didn't work either:
Given RGBA values X, Y and Z, mix(X, Y) = Z.
X = rgba(32, 34, 35, 1.0)
Y = rgba(124, 123, 123, 0.65)
Z = rgba(93, 92, 93, 1.0)
Test: (X.r)(X.a) + (Y.r)(Y.a) = Z.r
32 * 1 + 124 * 0.65 = 112.6, which is not 93
We can also rule out that it isn't multiplication, division or any sort of basic operation here.
There's either a non-simple formula or a coefficient being used here that I'm not aware of.
I then looked into what Wikipedia had on the subject of blend modes and found an article on Alpha compositing. It looked to be describing pretty much what I wanted and had equations for me. I developed a little python script to see if it would work, but it didn't.
from typing import List

class Color(object):
    def __init__(self, r: float, g: float, b: float, a: float = 1.0) -> None:
        self.r, self.g, self.b, self.a = r, g, b, a

    def __str__(self) -> str:
        return f'Color({self.r}, {self.g}, {self.b}, {self.a:.2%})'

    def __repr__(self) -> str:
        return self.__str__()

def mix(a: Color, b: Color) -> Color:
    Ao = a.a + (b.a * (1 - a.a))
    Co: List[float] = []
    for Ca, Cb in zip([a.r, a.g, a.b], [b.r, b.g, b.b]):
        pre = (Ca + a.a) + ((Cb * b.a) * (1 - a.a))
        Co.append(pre / Ao)
    return Color(r=Co[0], g=Co[1], b=Co[2], a=Ao)

bg = Color(32, 34, 35)
overlay = Color(124, 123, 123, 0.65)
c = mix(overlay, bg)
# Color(32, 34, 35, 100.00%) + Color(124, 123, 123, 65.00%) = Color(135.85, 135.55, 135.9, 100.00%)
print(f'{bg} + {overlay} = ')
print(c)
Given these two colors, how would I determine what would be a suitable overlay I could use to reach the same 'overlayed' color? I assume there would be a very large number (but not infinite, as long as we're using integers and integer percentages).
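Assuming the game uses the standard source-over operator, Z = alpha*Y + (1 - alpha)*X per channel (this is what the alpha compositing article describes; note the script above computes `Ca + a.a` where the formula calls for `Ca * a.a`), you can solve for the overlay color directly for any candidate opacity; sweeping alpha then enumerates the family of suitable overlays. A sketch with the sampled colors from the question:

```python
# Solve Z = alpha*Y + (1 - alpha)*X for the overlay color Y, channel-wise.
def solve_overlay(bg, result, alpha):
    return tuple((z - (1 - alpha) * x) / alpha for x, z in zip(bg, result))

bg = (32, 34, 35)        # background sampled on the right side
result = (93, 92, 93)    # overlaid color sampled on the left side
print(solve_overlay(bg, result, 0.65))  # roughly (125.8, 123.2, 124.2)
```

With alpha = 0.65 this nearly recovers the suspected rgba(124, 123, 123, 0.65) overlay, so source-over compositing (with a little rounding error from the screenshot) is a plausible model.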

Grayscale CMYK pixel

I need to convert a CMYK image to a grayscaled CMYK image. At first I thought I could just use the same methods as for RGB -> grayscale conversion, like (R + G + B) / 3 or (max(r, g, b) + min(r, g, b)) / 2. But unfortunately these work badly for CMYK, because CMY and K are redundant, and black in CMYK, (0, 0, 0, 255), would become 64. Should I just use
int grayscaled = max((c + m + y) / 3, k)
or could it be more tricky? I am not sure, because CMYK is actually a redundant color model, and the same color can be encoded in different ways.
There are several ways to convert RGB to grayscale; the average of the channels you mentioned is one of them. Using the Y channel from the YCbCr colorspace is also a common option, as well as the L channel from LAB, among others.
Converting CMYK to RGB is cheap according to the formulas at http://www.easyrgb.com/index.php?X=MATH, and then you can convert to grayscale using well-known approaches. Supposing c, m, y, and k are in the range [0, 1], we can convert to grayscale luma as in:
def cmyk_to_luminance(c, m, y, k):
    # fold K into the CMY channels, convert to RGB, then take Rec. 601 luma
    c = c * (1 - k) + k
    m = m * (1 - k) + k
    y = y * (1 - k) + k
    r, g, b = (1 - c), (1 - m), (1 - y)
    return 0.299 * r + 0.587 * g + 0.114 * b
See also http://en.wikipedia.org/wiki/Grayscale for a bit more about this conversion to grayscale.

Selecting colors that are furthest apart

I'm working on a project that requires me to select "unique" colors for each item. At times there could be upwards of 400 items. Is there some way out there of selecting the 400 colors that differ the most? Is it as simple as just changing the RGB values by a fixed increment?
You could come up with an equal distribution of 400 colours by incrementing red, green and blue in turn by 34.
That is:
You know you have three colour channels: red, green and blue
You need 400 distinct combinations of R, G and B
So on each channel the number of increments you need is the cube root of 400, i.e. about 7.36
To span the range 0..255 with 7.36 increments, each increment must be about 255/7.36, i.e. about 34
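The steps above can be sketched directly: stepping each channel by 34 gives 8 levels per channel, 8**3 = 512 grid points, from which the first 400 can be taken.

```python
# Evenly spaced RGB grid with a step of 34 per channel (8 levels each).
step = 34
grid = [(r, g, b)
        for r in range(0, 256, step)
        for g in range(0, 256, step)
        for b in range(0, 256, step)]
colors = grid[:400]
print(len(grid), len(colors))  # 512 400
```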
Probably HSL or HSV would be better representations than RGB for this task.
You may find that changing the hue gives better variability perception to the eye, so adjust your increments in a way that for every X units changed in S and L you change Y (with Y < X) units of hue, and adjust X and Y so you cover the spectrum with your desired amount of samples.
Here is my final code. Hopefully it helps someone down the road.
from PIL import Image, ImageDraw
import math, colorsys, os.path

# number of color circles needed
qty = 400
# the lowest value (V in HSV) can go
vmin = 30
# calculate how much to increment value by
vrange = 100 - vmin
if qty >= 72:
    vdiff = math.floor(vrange / (qty / 72))
else:
    vdiff = 0
# set options
sizes = [16, 24, 32]
border_color = '000000'
border_size = 3
# initialize variables
hval = 0
sval = 50
vval = vmin
count = 0
while count < qty:
    im = Image.new('RGBA', (100, 100), (0, 0, 0, 0))
    draw = ImageDraw.Draw(im)
    draw.ellipse((5, 5, 95, 95), fill='#'+border_color)
    r, g, b = colorsys.hsv_to_rgb(hval/360.0, sval/100.0, vval/100.0)
    r = int(r*255)
    g = int(g*255)
    b = int(b*255)
    draw.ellipse((5+border_size, 5+border_size, 95-border_size, 95-border_size), fill=(r, g, b))
    del draw
    hexval = '%02x%02x%02x' % (r, g, b)
    for size in sizes:
        result = im.resize((size, size), Image.ANTIALIAS)
        result.save(str(qty)+'/'+hexval+'_'+str(size)+'.png', 'PNG')
    if hval + 10 < 360:
        hval += 10
    else:
        if sval == 50:
            hval = 0
            sval = 100
        else:
            hval = 0
            sval = 50
            vval += vdiff
    count += 1
Hey, I came across this problem a few times in my projects where I wanted to display, say, clusters of points. I found that the best way to go is to use the colormaps from matplotlib (https://matplotlib.org/stable/tutorials/colors/colormaps.html):
colors = plt.get_cmap("hsv")(np.linspace(0, 1, n_colors))
This will output RGBA colors, so you can get the RGB with just
rgb = colors[:, :3]
