Linearly evolving colormap - python-3.x

I am trying to create a colormap that should linearly vary according to a "w" value, from white-red to white-purple.
So...
For w = 1, the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be red.
For w = 10 (example), the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be orange.
For w = 30 (example), the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be yellow.
and so on, until...
For w = 100 (example), the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be purple.
I used this website to generate the image: https://g.co/kgs/utJPmw
I can get the first (w = 1) colormap with the code below, but I have no idea how to make it vary the way I would like:
import numpy as np
import matplotlib.cm as cm
from matplotlib.colors import ListedColormap, LinearSegmentedColormap

color_map_1 = cm.get_cmap('Reds', 256)
newcolors_1 = color_map_1(np.linspace(0, 1, 256))
color_map_1 = ListedColormap(newcolors_1)
Any idea on how to do such a thing in Python would be very welcome.
Thank you guys

I finally found the solution. Maybe this is not the cleanest way, but it works very well for what I want to do. The colormaps I create can vary from white-red to white-purple (color spectrum). 765 variations are possible here, but with some small changes to the code it could vary much more or less, depending on what you want.
In the following code, the create_custom_colormap function returns cmap and color_map. cmap is the matrix containing the (r, g, b) values; color_map is the object that can be used in matplotlib (imshow) as an actual colormap, on any image.
Using the following code, define the functions we will need for this job:
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import ListedColormap, LinearSegmentedColormap

def create_image():
    '''
    Create some random image on which we will apply the colormap.
    Any other image could replace this one, with or without extent.
    '''
    dx, dy = 0.015, 0.05
    x = np.arange(-4.0, 4.0, dx)
    y = np.arange(-4.0, 4.0, dy)
    X, Y = np.meshgrid(x, y)
    extent = np.min(x), np.max(x), np.min(y), np.max(y)

    def z_fun(x, y):
        return (1 - x / 2 + x**5 + y**6) * np.exp(-(x**2 + y**2))

    Z2 = z_fun(X, Y)
    return(extent, Z2)
def create_cmap(**kwargs):
    '''
    Create a color matrix and a color map using 3 lists of r (red), g (green) and b (blue) values.
    Parameters:
        - r (list of floats): red value, between 0 and 1
        - g (list of floats): green value, between 0 and 1
        - b (list of floats): blue value, between 0 and 1
    Returns:
        - color_matrix (numpy 2D array): contains all the rgb values for a given colormap
        - color_map (matplotlib object): the color_matrix transformed into an object that matplotlib can use on figures
    '''
    color_matrix = np.empty([256, 3])
    color_matrix.fill(0)
    color_matrix[:, 0] = kwargs["r"]
    color_matrix[:, 1] = kwargs["g"]
    color_matrix[:, 2] = kwargs["b"]
    color_map = ListedColormap(color_matrix)
    return(color_matrix, color_map)
def standardize_timeseries_between(timeseries, borne_inf = 0, borne_sup = 1):
    '''
    For readability reasons, I defined r, g, b values between 0 and 255.
    But the matplotlib ListedColormap function expects values between 0 and 1.
    Parameters:
        timeseries (list of floats): can be one color vector in our case (either r, g or b)
        borne_inf (int): the minimum value in our timeseries will be replaced by this value
        borne_sup (int): the maximum value in our timeseries will be replaced by this value
    '''
    timeseries_standardized = []
    for i in range(len(timeseries)):
        a = (borne_sup - borne_inf) / (max(timeseries) - min(timeseries))
        b = borne_inf - a * min(timeseries)
        timeseries_standardized.append(a * timeseries[i] + b)
    timeseries_standardized = np.array(timeseries_standardized)
    return(timeseries_standardized)
def create_custom_colormap(weight):
    '''
    This function is at the heart of the process. It takes a single < weight > parameter that you can choose.
    - For weight between 0 and 255, the colormaps that are created vary between white-red (min-max) and white-yellow (min-max).
    - For weight between 256 and 510, the colormaps that are created vary between white-green (min-max) and white-cyan (min-max).
    - For weight between 511 and 765, the colormaps that are created vary between white-blue (min-max) and white-purple (min-max).
    '''
    if weight <= 255:
        ### 0 <= w <= 255
        r = np.repeat(1, 256)
        g = np.arange(0, 256, 1)
        g = standardize_timeseries_between(g, weight/256, 1)
        g = g[::-1]
        b = np.arange(0, 256, 1)
        b = standardize_timeseries_between(b, 1/256, 1)
        b = b[::-1]
    elif weight <= 255*2:
        ### 255 < w <= 510
        weight = weight - 255
        g = np.repeat(1, 256)
        r = np.arange(0, 256, 1)
        r = standardize_timeseries_between(r, 1/256, 1)
        r = r[::-1]
        b = np.arange(0, 256, 1)
        b = standardize_timeseries_between(b, weight/256, 1)
        b = b[::-1]
    elif weight <= 255*3:
        ### 510 < w <= 765
        weight = weight - 255*2
        b = np.repeat(1, 256)
        r = np.arange(0, 256, 1)
        r = standardize_timeseries_between(r, weight/256, 1)
        r = r[::-1]
        g = np.arange(0, 256, 1)
        g = standardize_timeseries_between(g, 1/256, 1)
        g = g[::-1]

    cmap, color_map = create_cmap(r=r, g=g, b=b)
    return(cmap, color_map)
Use the create_custom_colormap function to get the colormap you want, by passing it a value between 0 and 765 (see 5 examples in the figure below):
### Let us create some image (any other could be used).
extent, Z2 = create_image()

### Now create a color map, using the w value you want: 0 = white-red, 765 = white-purple.
cmap, color_map = create_custom_colormap(weight=750)

### Plot the result
plt.imshow(Z2, cmap=color_map, alpha=0.7,
           interpolation='bilinear', extent=extent)
plt.colorbar()
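As a side note, a more compact way to get a similar family of colormaps is to build each one directly from white to a fully saturated HSV hue chosen by the weight. This is only a hedged sketch of an alternative, not the method above; the hue range 0-0.8 and the 0-765 scaling are assumptions chosen to mimic the red-to-purple spectrum described in the question.
import colorsys
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

def white_to_hue_cmap(weight, w_max=765):
    # assumption: map weight in [0, w_max] onto a hue in [0, 0.8] (red -> purple)
    hue = 0.8 * weight / w_max
    end_color = colorsys.hsv_to_rgb(hue, 1.0, 1.0)  # fully saturated endpoint
    return LinearSegmentedColormap.from_list('white_to_hue_%d' % weight, ['white', end_color])

extent, Z2 = create_image()
plt.imshow(Z2, cmap=white_to_hue_cmap(750), alpha=0.7,
           interpolation='bilinear', extent=extent)
plt.colorbar()
plt.show()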

Related

should I use basinhopping instead of minimize to find colors based on a small set of transforms?

question up front
def shade_func(color, offset):
    return tuple([int(c * (1 - offset)) for c in color])

def tint_func(color, offset):
    return tuple([int(c + (255 - c) * offset) for c in color])

def tone_func(color, offset):
    return tuple([int(c * (1 - offset) + 128 * offset) for c in color])
given an objective over a collection of colors that returns the least distance to a target color, how do I ensure that basinhopping isn't better than a plain scipy minimize?
I was thinking that, for any one color, there will be up to 4 moments in a v-shaped curve, and so only one minimum. If the value with offset zero is itself a minimum, maybe it could be 5. Am I wrong? In any case each is a single optimum, so if we are only searching one color at a time, there is no reason to use basinhopping.
If we instead use basinhopping to scan all colors at once (we can scale the two different dimensions, in fact this is where the idea of a preprocessor function first came from), it scans them, but does not do such a compelling job of scanning all colors. Some colors it only tries once. I think it might completely skip some colors with large enough sets.
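As a quick, hypothetical sanity check of the single-minimum intuition (this is not part of the original question, and the Euclidean distance here is only a stand-in for CIEDE 2000), one could brute-force the per-color objective over a grid of offsets and compare it with a single bounded local search:
import numpy as np
from scipy.optimize import minimize_scalar

def shade_func(color, offset):
    return tuple(int(c * (1 - offset)) for c in color)

def tint_func(color, offset):
    return tuple(int(c + (255 - c) * offset) for c in color)

def tone_func(color, offset):
    return tuple(int(c * (1 - offset) + 128 * offset) for c in color)

def objective(offset, color, target):
    # same structure as the real objective: best of shade/tint/tone at this offset
    candidates = [shade_func(color, abs(offset)),
                  tint_func(color, abs(offset)),
                  tone_func(color, offset)]
    return min(sum((a - b) ** 2 for a, b in zip(c, target)) for c in candidates)

color, target = (224, 176, 255), (150, 100, 200)  # made-up sample pair
grid = np.linspace(-1, 1, 2001)
brute_best = min(grid, key=lambda t: objective(t, color, target))
local = minimize_scalar(objective, bounds=(-1, 1), args=(color, target), method='bounded')
print(brute_best, local.x)  # if these agree, one local search per color is enough
If the two results agree for representative colors, a plain per-color minimize is enough and basinhopping adds nothing.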
details
I was inspired by the way artyclick shows colors and allows searching for them. If you look at an individual color, for example mauve, you'll notice that it prominently displays the shades, tints, and tones of the color, rather like an artist might like. If you ask it for the name of a color, it will use a hidden unordered list of about a thousand color names, and some JavaScript to find the nearest color name to the color you chose. In fact it will also show alternatives.
I noticed that quite often a shade, tint or tone of an alternative (or even the best match) was a better match than the color it provided. For those who don't know about shade, tint and tone, there's a nice write-up at Dunn Edward's Paints. It looks like shade and tint are the same but with signs reversed, if doing this on tuples representing colors. For tone it is different; a negative value would, I think, saturate the result.
I felt like there must be authoritative (or at least well sourced) colorname sources it could be using.
In terms of the results, since I want any color or its shade/tint/tone, I want a result like this:
{'color': '#aabbcc',
'offset': {'type': 'tint', 'value': 0.31060384614807254}}
So I can return the actual color name from the color, plus the type of color transform to get there and the amount you have to go.
For distance of colors, there is a great algorithm that is meant to model human perception that I am using, called CIEDE 2000. Frankly, I'm just using a snippet I found that implements this, it could be wrong.
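If a vetted implementation is preferred over the hand-rolled snippet below, scikit-image ships one. This is only a sketch assuming scikit-image is installed (the helper name ciede2000_distance is mine); it could be passed to ColorNameFinder below through its distance parameter:
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def ciede2000_distance(rgb1, rgb2):
    # rgb1, rgb2 are 0-255 tuples; rgb2lab expects floats in [0, 1]
    lab1 = rgb2lab(np.array([[rgb1]], dtype=float) / 255.0)
    lab2 = rgb2lab(np.array([[rgb2]], dtype=float) / 255.0)
    return float(deltaE_ciede2000(lab1, lab2)[0, 0])

print(ciede2000_distance((224, 176, 255), (223, 174, 254)))  # small perceptual difference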
So now I want to take in two colors, compare their shade, tint, and tone to a target color, and return the one with the least distance. After I am done, I can reconstruct if it was a shade, tint or tone transform from the result just by running all three once and choosing the best fit. With that structure, I can iterate over every color, and that should do it. I use optimization because I don't want to hard code what offsets it should consider (though I am reconsidering this choice now!).
because I want to consider negatives for tone but not for shade/tint, my objective will have to transform that. I have to include two values to optimize, since the objective function will need to know which color to transform (or else the result will give me no way of knowing which color to use the offset with).
so my call should look something like the following:
result = min((minimize(objective, (i, 0), bounds=[(i, i), (-1, 1)]) for i in range(len(colors))),
             key=lambda r: r.fun)
offset_type = resolve_offset_type(result)
with that in mind, I implemented this solution over the past couple of days.
current solution
from scipy.optimize import minimize
import numpy as np
import math

def clamp(low, x, high):
    return max(low, min(x, high))

def hex_to_rgb(hex_color):
    hex_color = hex_color.lstrip('#')
    return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))

def rgb_to_hex(rgb):
    return '#{:02x}{:02x}{:02x}'.format(*rgb)

def rgb_to_lab(color):
    # Convert RGB to XYZ color space
    R = color[0] / 255.0
    G = color[1] / 255.0
    B = color[2] / 255.0

    R = ((R + 0.055) / 1.055) ** 2.4 if R > 0.04045 else R / 12.92
    G = ((G + 0.055) / 1.055) ** 2.4 if G > 0.04045 else G / 12.92
    B = ((B + 0.055) / 1.055) ** 2.4 if B > 0.04045 else B / 12.92

    X = R * 0.4124 + G * 0.3576 + B * 0.1805
    Y = R * 0.2126 + G * 0.7152 + B * 0.0722
    Z = R * 0.0193 + G * 0.1192 + B * 0.9505

    return (X, Y, Z)

def shade_func(color, offset):
    return tuple([int(c * (1 - offset)) for c in color])

def tint_func(color, offset):
    return tuple([int(c + (255 - c) * offset) for c in color])

def tone_func(color, offset):
    return tuple([int(c * (1 - offset) + 128 * offset) for c in color])

class ColorNameFinder:
    def __init__(self, colors, distance=None):
        if distance is None:
            distance = ColorNameFinder.ciede2000

        self.distance = distance
        self.colors = [hex_to_rgb(color) for color in colors]

    @classmethod
    def euclidean(self, left, right):
        return (left[0] - right[0]) ** 2 + (left[1] - right[1]) ** 2 + (left[2] - right[2]) ** 2

    @classmethod
    def ciede2000(self, color1, color2):
        # Convert color to LAB color space
        lab1 = rgb_to_lab(color1)
        lab2 = rgb_to_lab(color2)

        # Compute CIE 2000 color difference
        C1 = math.sqrt(lab1[1] ** 2 + lab1[2] ** 2)
        C2 = math.sqrt(lab2[1] ** 2 + lab2[2] ** 2)
        a1 = math.atan2(lab1[2], lab1[1])
        a2 = math.atan2(lab2[2], lab2[1])
        dL = lab2[0] - lab1[0]
        dC = C2 - C1
        dA = a2 - a1
        dH = 2 * math.sqrt(C1 * C2) * math.sin(dA / 2)
        L = 1
        C = 1
        H = 1
        LK = 1
        LC = math.sqrt(math.pow(C1, 7) / (math.pow(C1, 7) + math.pow(25, 7)))
        LH = math.sqrt(lab1[0] ** 2 + lab1[1] ** 2)
        CB = math.sqrt(lab2[1] ** 2 + lab2[2] ** 2)
        CH = math.sqrt(C2 ** 2 + dH ** 2)
        SH = 1 + 0.015 * CH * LC
        SL = 1 + 0.015 * LH * LC
        SC = 1 + 0.015 * CB * LC

        T = 0.0
        if (a1 >= a2 and a1 - a2 <= math.pi) or (a2 >= a1 and a2 - a1 > math.pi):
            T = 1
        else:
            T = 0

        dE = math.sqrt((dL / L) ** 2 + (dC / C) ** 2 + (dH / H) ** 2 + T * (dC / SC) ** 2)

        return dE

    def __factory_objective(self, target, preprocessor=lambda x: x):
        def fn(x):
            print(x, preprocessor(x))
            x = preprocessor(x)
            color = self.colors[x[0]]
            offset = x[1]
            bound_offset = abs(offset)
            offsets = [
                shade_func(color, bound_offset),
                tint_func(color, bound_offset),
                tone_func(color, offset)]
            least_error = min([(right, self.distance(target, right))
                               for right in offsets], key=lambda x: x[1])[1]
            return least_error
        return fn

    def __resolve_offset_type(self, sample, target, offset):
        bound_offset = abs(offset)

        shade = shade_func(sample, bound_offset)
        tint = tint_func(sample, bound_offset)
        tone = tone_func(sample, offset)

        lookup = {}
        lookup[shade] = "shade"
        lookup[tint] = "tint"
        lookup[tone] = "tone"

        offsets = [shade, tint, tone]

        least_error = min([(right, self.distance(target, right)) for right in offsets], key=lambda x: x[1])[0]

        return lookup[least_error]

    def nearest_color(self, target):
        target = hex_to_rgb(target)

        preprocessor = lambda x: (int(x[0]), x[1])
        result = min(
            [minimize(self.__factory_objective(target, preprocessor=preprocessor),
                      (i, 0),
                      bounds=[(i, i), (-1, 1)],
                      method='Powell')
             for i, color in enumerate(self.colors)], key=lambda x: x.fun)
        color_index = int(result.x[0])
        nearest_color = self.colors[color_index]
        offset = preprocessor(result.x)[1]
        offset_type = self.__resolve_offset_type(nearest_color, target, offset)
        return {
            "color": rgb_to_hex(nearest_color),
            "offset": {
                "type": offset_type,
                "value": offset if offset_type == 'tone' else abs(offset)
            }
        }
let's demonstrate this with mauve. We'll define a target that is similar to a shade of mauve, include mauve in a list of colors, and ideally we'll get mauve back from our test.
colors = ['#E0B0FF', '#FF0000', '#000000', '#0000FF']
target = '#DFAEFE'
agent = ColorNameFinder(colors)
agent.nearest_color(target)
we do get mauve back:
{'color': '#e0b0ff',
'offset': {'type': 'shade', 'value': 0.0031060384614807254}}
the distance is 0.004991238317138219
agent.distance(hex_to_rgb(target), shade_func(hex_to_rgb(colors[0]), 0.0031060384614807254))
why use Powell's method?
in this arrangement, it is simply the best. No other method that uses bounds did a good job of scanning positives and negatives, and I had mixed results using the preprocessor to scale the values back to negative with bounds of (0,2).
I do notice that in the sample test, a range between about 0.003 and 0.0008 seems to produce the same distance, and that the values my approach considers include a large number of these. is there a more efficient solution?
If I'm wrong, please let me know.
correctness of the color transformations
what is adding a negative amount of white? (in the case of a tint) I was thinking it is like adding a positive amount of black -- ie a shade, with signs reversed.
my implementation is not correct:
agent.distance(hex_to_rgb(target), shade_func(hex_to_rgb(colors[0]), 0.1)) - agent.distance(hex_to_rgb(target), tint_func(hex_to_rgb(colors[0]), -0.1))
produces 0.3239904390784106 instead of 0.
I'll probably be fixing that soon

Mapping RGB data to values in legend

This is a follow-up to my previous question here
I've been trying to convert the color data in a heatmap to RGB values.
source image
In the image below, the left side is a subplot present in panel D of the source image. It has 6 x 6 cells (6 rows and 6 columns). On the right is the binarized image, with the clicked cell highlighted in white after running the code below. The input for running the code is the image below. The output (mean = [ 27.72 26.83 144.17]) is the mean of the BGR color in the cell that is highlighted in white in the right image.
A really nice solution that was provided as an answer to my previous question is the following (ref)
import cv2
import numpy as np

# print pixel value on click
def mouse_callback(event, x, y, flags, params):
    if event == cv2.EVENT_LBUTTONDOWN:
        # get specified color
        row = y
        column = x
        color = image[row, column]
        print('color = ', color)

        # calculate range
        thr = 20  # ± color range
        up_thr = color + thr
        up_thr[up_thr < color] = 255
        down_thr = color - thr
        down_thr[down_thr > color] = 0

        # find points in range
        img_thr = cv2.inRange(image, down_thr, up_thr)  # accepted range
        height, width, _ = image.shape
        left_bound = x - (x % round(width/6))
        right_bound = left_bound + round(width/6)
        up_bound = y - (y % round(height/6))
        down_bound = up_bound + round(height/6)
        img_rect = np.zeros((height, width), np.uint8)  # bounded by rectangle
        cv2.rectangle(img_rect, (left_bound, up_bound), (right_bound, down_bound), (255, 255, 255), -1)
        img_thr = cv2.bitwise_and(img_thr, img_rect)

        # get points around specified point
        img_spec = np.zeros((height, width), np.uint8)  # specified mask
        last_img_spec = np.copy(img_spec)
        img_spec[row, column] = 255
        kernel = np.ones((3, 3), np.uint8)  # dilation structuring element

        while cv2.bitwise_xor(img_spec, last_img_spec).any():
            last_img_spec = np.copy(img_spec)
            img_spec = cv2.dilate(img_spec, kernel)
            img_spec = cv2.bitwise_and(img_spec, img_thr)

        cv2.imshow('mask', img_spec)
        cv2.waitKey(10)

        avg = cv2.mean(image, img_spec)[:3]
        mean.append(np.around(np.array(avg), 2))
        print('mean = ', np.around(np.array(avg), 2))
        # print(mean)  # appends data to variable mean

if __name__ == '__main__':
    mean = []  # np.zeros((6, 6))

    # create window and callback
    winname = 'img'
    cv2.namedWindow(winname)
    cv2.setMouseCallback(winname, mouse_callback)

    # read & display image
    image = cv2.imread('ip2.png', 1)
    # image = image[3:62, 2:118]  # crop the image to 6x6 cells

    # ---- resize image --------------------------------------------------
    # appended this to the original code
    print('Original Dimensions : ', image.shape)
    scale_percent = 220  # percent of original size
    width = int(image.shape[1] * scale_percent / 100)
    height = int(image.shape[0] * scale_percent / 100)
    dim = (width, height)

    # resize image
    image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)
    # ---------------------------------------------------------------------

    cv2.imshow(winname, image)
    cv2.waitKey()  # press any key to exit
    cv2.destroyAllWindows()
What do I want to do next?
The mean of the RGB values thus obtained has to be mapped to the values in the following legend provided in the source image,
I would like to ask for suggestions on how to map the RGB data to the values in the legend.
Note: In my previous post it has been suggested that one could
fit the RGB values into an equation which gives continuous results.
Any suggestions in this direction will also be helpful.
EDIT:
Answering the comment below, I did the following to measure the RGB values of the legend.
Input image:
This image has 8 cells across its width and 1 cell in height.
Changed these lines of code:
left_bound = x - (x % round(width/8)) # 6 replaced with 8
right_bound = left_bound + round(width/8) # 6 replaced with 8
up_bound = y - (y % round(height/1)) # 6 replaced with 1
down_bound = up_bound + round(height/1) # 6 replaced with 1
Mean obtained for each cell/ each color in legend from left to right:
mean = [ 82.15 174.95 33.66]
mean = [45.55 87.01 17.51]
mean = [8.88 8.61 5.97]
mean = [16.79 17.96 74.46]
mean = [ 35.59 30.53 167.14]
mean = [ 37.9 32.39 233.74]
mean = [120.29 118. 240.34]
mean = [238.33 239.56 248.04]
You can try to apply a piecewise approach: make pairwise transitions between colors:
c[i->i+1](t) = t*(R[i+1], G[i+1], B[i+1]) + (1-t)*(R[i], G[i], B[i])
Do the same for the values:
val[i->i+1](t) = t*val[i+1] + (1-t)*val[i]
where i is the index of a color in the legend scale and t is a parameter in the [0:1] range.
So you have a continuous mapping of both quantities, and you just need to find the color parameters i and t closest to the sample and read the value from the mapping.
Update:
To find the color parameters you can think of every pair of neighbouring legend colors as a pair of 3D points, and your queried color as an external 3D point. Now you just need to find the length of the perpendicular from the external point to the line; then, iterating over legend color pairs, find the shortest perpendicular (now you have i).
Then find the intersection point of the perpendicular and the line. This point is located at distance A from the line start, and if the line length is L then the parameter value is t = A/L.
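Here is a minimal numpy sketch of that projection idea (not from the original answer: it reuses the legend BGR means and values measured above, and clamping t to [0, 1] is an added assumption so queries beyond the ends of the scale still map somewhere sensible):
import numpy as np

legend = np.array([(82.15, 174.95, 33.66), (45.55, 87.01, 17.51), (8.88, 8.61, 5.97),
                   (16.79, 17.96, 74.46), (35.59, 30.53, 167.14), (37.9, 32.39, 233.74),
                   (120.29, 118., 240.34), (238.33, 239.56, 248.04)])
values = np.array([-4, -2, 0, 2, 4, 8, 16, 32], dtype=float)

def map_color_to_value(query):
    query = np.asarray(query, dtype=float)
    best = (np.inf, None)
    for i in range(len(legend) - 1):
        p0, p1 = legend[i], legend[i + 1]
        seg = p1 - p0
        # parameter of the foot of the perpendicular, clamped to the segment
        t = np.clip(np.dot(query - p0, seg) / np.dot(seg, seg), 0.0, 1.0)
        foot = p0 + t * seg
        dist = np.linalg.norm(query - foot)   # length of the perpendicular
        if dist < best[0]:
            best = (dist, values[i] * (1 - t) + values[i + 1] * t)
    return best[1]

print(map_color_to_value((5, 0, 200)))  # same query color as in the brute-force example below
For a query color that sits close to the smoothed scale, this should agree closely with the brute-force search below.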
Update2:
A simple brute-force solution to illustrate the piecewise approach:
#include "opencv2/opencv.hpp"
#include <string>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char* argv[])
{
Mat Image=cv::Mat::zeros(100,250,CV_32FC3);
std::vector<cv::Scalar> Legend;
Legend.push_back(cv::Scalar(82.15,174.95,33.66));
Legend.push_back(cv::Scalar(45.55, 87.01, 17.51));
Legend.push_back(cv::Scalar(8.88, 8.61, 5.97));
Legend.push_back(cv::Scalar(16.79, 17.96, 74.46));
Legend.push_back(cv::Scalar(35.59, 30.53, 167.14));
Legend.push_back(cv::Scalar(37.9, 32.39, 233.74));
Legend.push_back(cv::Scalar(120.29, 118., 240.34));
Legend.push_back(cv::Scalar(238.33, 239.56, 248.04));
std::vector<float> Values;
Values.push_back(-4);
Values.push_back(-2);
Values.push_back(0);
Values.push_back(2);
Values.push_back(4);
Values.push_back(8);
Values.push_back(16);
Values.push_back(32);
int w = 30;
int h = 10;
for (int i = 0; i < Legend.size(); ++i)
{
cv::rectangle(Image, Rect(i * w, 0, w, h), Legend[i]/255, -1);
}
std::vector<cv::Scalar> Smooth_Legend;
std::vector<float> Smooth_Values;
for (int i = 0; i < Legend.size()-1; ++i)
{
cv::Scalar c1 = Legend[i];
cv::Scalar c2 = Legend[i + 1];
float v1 = Values[i];
float v2 = Values[i+1];
for (int j = 0; j < w; ++j)
{
float t = (float)j / (float)w;
Scalar c = c2 * t + c1 * (1 - t);
float v = v2 * t + v1 * (1 - t);
float x = i * w + j;
line(Image, Point(x, h), Point(x, h + h), c/255, 1);
Smooth_Values.push_back(v);
Smooth_Legend.push_back(c);
}
}
Scalar qp = cv::Scalar(5, 0, 200);
float d_min = FLT_MAX;
int ind = -1;
for (int i = 0; i < Smooth_Legend.size(); ++i)
{
float d = cv::norm(qp- Smooth_Legend[i]);
if (d < d_min)
{
ind = i;
d_min = d;
}
}
std::cout << Smooth_Values[ind] << std::endl;
line(Image, Point(ind, 3 * h), Point(ind, 4 * h), Scalar::all(255), 2);
circle(Image, Point(ind, 4 * h), 3, qp/255,-1);
putText(Image, std::to_string(Smooth_Values[ind]), Point(ind, 70), FONT_HERSHEY_DUPLEX, 1, Scalar(0, 0.5, 0.5), 0.002);
cv::imshow("Legend", Image);
cv::imwrite("result.png", Image*255);
cv::waitKey();
}
The result:
Python:
import cv2
import numpy as np

height = 100
width = 250
Image = np.zeros((height, width, 3), np.float32)
legend = np.array([(82.15, 174.95, 33.66),
                   (45.55, 87.01, 17.51),
                   (8.88, 8.61, 5.97),
                   (16.79, 17.96, 74.46),
                   (35.59, 30.53, 167.14),
                   (37.9, 32.39, 233.74),
                   (120.29, 118., 240.34),
                   (238.33, 239.56, 248.04)], np.float32)
values = np.array([-4, -2, 0, 2, 4, 8, 16, 32], np.float32)

# width of a cell, which also defines the number
# of subdivisions of one segment transition.
# Larger values give more accuracy, but work slower.
w = 30
# For display purposes only: height of the bars in the result image.
h = 10

# Plot legend cells (to check correctness only)
for i in range(len(legend)):
    col = legend[i]
    cv2.rectangle(Image, (i * w, 0, w, h), col/255, -1)

# Build smoothed scales for colors and the corresponding values
Smooth_Legend = []
Smooth_Values = []

for i in range(len(legend)-1):  # iterate known knots
    c1 = legend[i]      # start color point
    c2 = legend[i + 1]  # end color point
    v1 = values[i]      # start value
    v2 = values[i + 1]  # end value
    for j in range(w):  # slide inside the [start:end] interval
        t = float(j) / float(w)    # map it to the [0:1] interval
        c = c2 * t + c1 * (1 - t)  # transition between c1 and c2
        v = v2 * t + v1 * (1 - t)  # transition between v1 and v2
        x = i * w + j              # global scale coordinate (for drawing)
        cv2.line(Image, (x, h), (x, h + h), c/255, 1)  # draw one tick of the smoothed scale
        Smooth_Values.append(v)  # append smoothed values for the next step
        Smooth_Legend.append(c)  # append smoothed colors for the next step

# queried color
qp = np.array([5, 0, 200])

# initial value for the minimal distance, set to a large value
d_min = 1e7
# index for the color search
ind = -1
# search for the minimal distance from the queried color to the smoothed scale colors
for i in range(len(Smooth_Legend)):
    # distance
    d = cv2.norm(qp - Smooth_Legend[i])
    if d < d_min:
        ind = i
        d_min = d

# ind contains the index of the closest color in the smoothed scale,
# and now we can extract the corresponding value from the smoothed values scale
print(Smooth_Values[ind])  # value mapped to the queried color

# plot a pointer (to check ourselves)
cv2.line(Image, (ind, 3 * h), (ind, 4 * h), (255, 255, 255), 2)
cv2.circle(Image, (ind, 4 * h), 3, qp/255, -1)
cv2.putText(Image, str(Smooth_Values[ind]), (ind, 70), cv2.FONT_HERSHEY_DUPLEX, 1, (0, 0.5, 0.5), 1)

# show window
cv2.imshow("Legend", Image)
# save to file
cv2.imwrite("result.png", Image*255)
cv2.waitKey()

Magnetic dipole in python

I looked at this code:
import numpy as np
from matplotlib import pyplot as plt

def dipole(m, r, r0):
    """
    Calculation of field B in point r. B is created by a dipole moment m located in r0.
    """
    # R = r - r0: subtraction of the elements of vectors r and r0, transposition of array
    R = np.subtract(np.transpose(r), r0).T
    # Spatial components of r are the outermost axis
    norm_R = np.sqrt(np.einsum("i...,i...", R, R))  # einsum - Einstein summation
    # Dot product of R and m
    m_dot_R = np.tensordot(m, R, axes=1)
    # Computation of B
    B = 3 * m_dot_R * R / norm_R**5 - np.tensordot(m, 1 / norm_R**3, axes=0)
    B *= 1e-7  # shorthand for B = B * 1e-7; the prefactor mu_0/(4*pi) with vacuum permeability mu_0 = 4*pi*1e-7
    # The result is the magnetic field B
    return B

X = np.linspace(-1, 1)
Y = np.linspace(-1, 1)
Bx, By = dipole(m=[0, 1], r=np.meshgrid(X, Y), r0=[-0.2, 0.8])

plt.figure(figsize=(8, 8))
plt.streamplot(X, Y, Bx, By)
plt.margins(0, 0)
plt.show()
It shows the following figure:
Is it possible to get coordinates of one line of force? I don't understand how it is plotted.
The streamplot returns a container object 'StreamplotSet' with two parts:
lines: a LineCollection of the streamlines
arrows: a PatchCollection containing FancyArrowPatch objects (these are the triangular arrows)
c.lines.get_paths() gives all the segments. Iterating through these segments, their vertices can be examined. When a segment starts where the previous ended, both belong to the same curve. Note that each segment is a short straight line; many segments are used together to form a streamline curve.
The code below demonstrates iterating through the segments. To show what's happening, each segment is converted to an array of 2D points suitable for plt.plot. By default, plt.plot colors each curve with a new color (repeating every 10). The dots show where each of the short straight segments is located.
To find one particular curve, you could hover with the mouse over the starting point, and note the x coordinate of that point. And then test for that coordinate in the code. As an example, the curve that starts near x=0.48 is drawn in a special way.
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import patches

def dipole(m, r, r0):
    R = np.subtract(np.transpose(r), r0).T
    norm_R = np.sqrt(np.einsum("i...,i...", R, R))
    m_dot_R = np.tensordot(m, R, axes=1)
    B = 3 * m_dot_R * R / norm_R**5 - np.tensordot(m, 1 / norm_R**3, axes=0)
    B *= 1e-7
    return B

X = np.linspace(-1, 1)
Y = np.linspace(-1, 1)
Bx, By = dipole(m=[0, 1], r=np.meshgrid(X, Y), r0=[-0.2, 0.8])

plt.figure(figsize=(8, 8))
c = plt.streamplot(X, Y, Bx, By)
c.lines.set_visible(False)

paths = c.lines.get_paths()
prev_end = None
start_indices = []
for index, segment in enumerate(paths):
    if not np.array_equal(prev_end, segment.vertices[0]):  # a new curve starts here
        start_indices.append(index)
    prev_end = segment.vertices[-1]

special_x_coord = 0.48
for i0, i1 in zip(start_indices, start_indices[1:] + [len(paths)]):
    # get all the points of the curve that starts at index i0
    curve = np.array([paths[i].vertices[0] for i in range(i0, i1)] + [paths[i1 - 1].vertices[-1]])
    if abs(curve[0, 0] - special_x_coord) < 0.01:  # draw one curve in a special way
        plt.plot(curve[:, 0], curve[:, 1], '-', lw=10, alpha=0.3)
    else:
        plt.plot(curve[:, 0], curve[:, 1], '.', ls='-')

plt.margins(0, 0)
plt.show()

Create Image using Matplotlib imshow meshgrid and custom colors

I am trying to create an image where the x axis is the width and the y axis is the height of the image, and where each point can be given a color based on an RGB mapping. From looking at imshow() from Matplotlib, I guess I need to create a meshgrid of the form (N x M x 3), where the 3 holds the RGB colors.
But so far I have not managed to understand how to do that. Let's say I have this example:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

x_min = 1
x_max = 5
y_min = 1
y_max = 5
Nx = 5  # number of steps for x axis
Ny = 5  # number of steps for y axis
x = np.linspace(x_min, x_max, Nx)
y = np.linspace(y_min, y_max, Ny)

# Can then create a meshgrid using this to get the x and y axis system
xx, yy = np.meshgrid(x, y)

# imagine I have some function that does something based on the x and y values
def somefunc(x_value, y_value):
    # do something and return rgb based on that
    return x_value + y_value

res = somefunc(xx, yy)
cmap = LinearSegmentedColormap.from_list('mycmap', ['white', 'blue', 'black'])

plt.figure(dpi=100)
plt.imshow(res, cmap=cmap, interpolation='bilinear')
plt.show()
And this creates a plot, but what would I have to do if my goal was to give specific RGB values based on the x and y values inside somefunc, and make the resulting numpy array an N x M x 3 array?
I tried to make the somefunc function return a tuple of RGB values to use (r, g, b), but that does not seem to work.
It will of course completely depend on what you want to do with the values you supply to the function. So let's assume you just want to put the x values as the red channel and the y values as the blue channel, this could look like
def somefunc(x_value, y_value):
    return np.dstack((x_value/5., np.zeros_like(x_value), y_value/5.))
Complete example:
import numpy as np
import matplotlib.pyplot as plt

x_min = 1
x_max = 5
y_min = 1
y_max = 5
Nx = 5  # number of steps for x axis
Ny = 5  # number of steps for y axis
x = np.linspace(x_min, x_max, Nx)
y = np.linspace(y_min, y_max, Ny)

# Can then create a meshgrid using this to get the x and y axis system
xx, yy = np.meshgrid(x, y)

# some function that does something based on the x and y values
def somefunc(x_value, y_value):
    return np.dstack((x_value/5., np.zeros_like(x_value), y_value/5.))

res = somefunc(xx, yy)

plt.figure(dpi=100)
plt.imshow(res)
plt.show()
If you already have a (more complicated) function that returns an RGB tuple you may loop over the grid to fill an empty array with the values of the function.
# If you already have some function that returns an RGB tuple
def somefunc(x_value, y_value):
    if x_value > 2 and y_value < 3:
        return np.array(((y_value+1)/4., (y_value+2)/5., 0.43))
    elif x_value <= 2:
        return np.array((y_value/5., (x_value+3)/5., 0.0))
    else:
        return np.array((x_value/5., (y_value+5)/10., 0.89))

# you may loop over the grid to fill a new array with those values
res = np.zeros((xx.shape[0], xx.shape[1], 3))
for i in range(xx.shape[0]):
    for j in range(xx.shape[1]):
        res[i, j, :] = somefunc(xx[i, j], yy[i, j])

plt.figure(dpi=100)
plt.imshow(res)

Selecting colors that are furthest apart

I'm working on a project that requires me to select "unique" colors for each item. At times there could be upwards of 400 items. Is there some way out there of selecting the 400 colors that differ the most? Is it as simple as just changing the RGB values by a fixed increment?
You could come up with an equal distribution of 400 colours by incrementing red, green and blue in turn by 34.
That is:
You know you have three colour channels: red, green and blue
You need 400 distinct combinations of R, G and B
So on each channel the number of increments you need is the cube root of 400, i.e. about 7.36
To span the range 0..255 with 7.36 increments, each increment must be about 255/7.36, i.e. about 34
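A minimal sketch of that idea (taking the first 400 of the 512 combinations is an assumption; one could also subsample them more evenly):
from itertools import product

# 0, 34, 68, ..., 238 gives 8 levels per channel, i.e. 8**3 = 512 combinations
levels = range(0, 256, 34)
palette = list(product(levels, levels, levels))[:400]  # keep the first 400

print(len(palette), palette[:3])  # 400 (r, g, b) tuples: (0, 0, 0), (0, 0, 34), (0, 0, 68), ...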
Probably HSL or HSV would be a better representation than RGB for this task.
You may find that changing the hue gives better perceived variability, so adjust your increments so that for every X units changed in S and L you change Y (with Y < X) units of hue, and adjust X and Y so you cover the spectrum with your desired number of samples.
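A hedged sketch of that HSV idea (the 25 x 4 x 4 grid below is an arbitrary assumption, chosen only so the hue varies the most and the product reaches 400):
import colorsys
from itertools import product

n_hues, n_sats, n_vals = 25, 4, 4  # 25 * 4 * 4 = 400 combinations
hues = [i / n_hues for i in range(n_hues)]                       # vary hue the most
sats = [0.4 + 0.6 * i / (n_sats - 1) for i in range(n_sats)]     # keep colors reasonably saturated
vals = [0.5 + 0.5 * i / (n_vals - 1) for i in range(n_vals)]     # and reasonably bright

palette = [tuple(int(round(255 * c)) for c in colorsys.hsv_to_rgb(h, s, v))
           for h, s, v in product(hues, sats, vals)]

print(len(palette), palette[:2])  # 400 RGB tuples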
Here is my final code. Hopefully it helps someone down the road.
from PIL import Image, ImageDraw
import math, colorsys, os.path

# number of color circles needed
qty = 400

# the lowest value (V in HSV) can go
vmin = 30

# calculate how much to increment value by
vrange = 100 - vmin
if (qty >= 72):
    vdiff = math.floor(vrange / (qty / 72))
else:
    vdiff = 0

# set options
sizes = [16, 24, 32]
border_color = '000000'
border_size = 3

# initialize variables
hval = 0
sval = 50
vval = vmin
count = 0

while count < qty:
    im = Image.new('RGBA', (100, 100), (0, 0, 0, 0))
    draw = ImageDraw.Draw(im)
    draw.ellipse((5, 5, 95, 95), fill='#'+border_color)
    r, g, b = colorsys.hsv_to_rgb(hval/360.0, sval/100.0, vval/100.0)
    r = int(r*255)
    g = int(g*255)
    b = int(b*255)
    draw.ellipse((5+border_size, 5+border_size, 95-border_size, 95-border_size), fill=(r, g, b))
    del draw

    hexval = '%02x%02x%02x' % (r, g, b)
    for size in sizes:
        result = im.resize((size, size), Image.ANTIALIAS)
        result.save(str(qty)+'/'+hexval+'_'+str(size)+'.png', 'PNG')

    if hval + 10 < 360:
        hval += 10
    else:
        if sval == 50:
            hval = 0
            sval = 100
        else:
            hval = 0
            sval = 50
            vval += vdiff

    count += 1
Hey I came across this problem a few times in my projects where I wanted to display, say, clusters of points. I found that the best way to go was to use the colormaps from matplotlib (https://matplotlib.org/stable/tutorials/colors/colormaps.html) and
colors = plt.get_cmap("hsv")(np.linspace(0, 1, n_colors))
this will output rgba colors so you can get the rgb with just
rgb = colors[:,:3]
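For example, a usage sketch (the cluster data here is made up) that assigns one of n_colors distinct colors to each cluster of points:
import numpy as np
import matplotlib.pyplot as plt

n_colors = 400
colors = plt.get_cmap("hsv")(np.linspace(0, 1, n_colors))  # (n_colors, 4) RGBA array
rgb = colors[:, :3]

# made-up example: 1000 points assigned to 400 clusters, one color per cluster
rng = np.random.default_rng(0)
points = rng.random((1000, 2))
labels = rng.integers(0, n_colors, size=1000)
plt.scatter(points[:, 0], points[:, 1], c=rgb[labels], s=10)
plt.show()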
