Trying to "manually" convert to hsv a bgr image in opencv-python - python-3.x

The algorithm I'm using is:
import numpy as np

def rgb2hsv(r, g, b):
    r, g, b = r/255.0, g/255.0, b/255.0
    mx = max(r, g, b)
    mn = min(r, g, b)
    df = mx-mn
    if mx == mn:
        h = 0
    elif mx == r:
        h = (60 * ((g-b)/df) + 360) % 360
    elif mx == g:
        h = (60 * ((b-r)/df) + 120) % 360
    elif mx == b:
        h = (60 * ((r-g)/df) + 240) % 360
    if mx == 0:
        s = 0
    else:
        s = df/mx
    v = mx
    return h, s, v

def my_bgr_to_hsv(bgr_image):
    height, width, c = bgr_image.shape
    hsv_image = np.zeros(shape=bgr_image.shape)
    # The R,G,B values are divided by 255 to change the range from 0..255 to 0..1:
    for h in range(height):
        for w in range(width):
            b, g, r = bgr_image[h, w]
            hsv_image[h, w] = rgb2hsv(r, g, b)
    return hsv_image
The problem I'm getting is that when I want to display the image, I get only a black screen.
This is how I'm trying to display the image:
cv.imshow("hello", cv.cvtColor(np.uint8(hsv_image), cv.COLOR_HSV2BGR))
As you can see, I convert it back to BGR first, since cv.imshow expects BGR images.
I don't think I understand enough of OpenCV or NumPy to debug it.
Simply calling imshow on the converted image shows the original picture in the wrong colors, which makes me think the conversion can't be completely wrong.

Your Hue values are scaled on the range 0 to 359, which is not going to fit into an unsigned 8-bit number. Your Saturations and Values are scaled on the range 0 to 1, which doesn't match your Hues for scale and is going to result in everything rounding to zero (black) when you convert it to an unsigned 8-bit number.
I suggest you multiply Saturation and Value by 255 and divide your Hue by 2, which matches OpenCV's own 8-bit HSV convention (H in 0..179, S and V in 0..255).
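As a minimal sketch (the wrapper name is mine), the rescaling could be wired into the conversion like this, together with a uint8 output buffer so imshow and cvtColor interpret the result correctly:
import numpy as np

def rgb2hsv_8u(r, g, b):
    # rgb2hsv as defined in the question: h in [0, 360), s and v in [0, 1]
    h, s, v = rgb2hsv(r, g, b)
    # rescale to OpenCV's 8-bit HSV ranges: H in [0, 179], S and V in [0, 255]
    return h / 2, s * 255, v * 255

# and allocate the output as uint8 rather than the float64 default:
# hsv_image = np.zeros(shape=bgr_image.shape, dtype=np.uint8)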

Related

Should I use basinhopping instead of minimize to find colors based on a small set of transforms?

question up front
def shade_func(color, offset):
    return tuple([int(c * (1 - offset)) for c in color])

def tint_func(color, offset):
    return tuple([int(c + (255 - c) * offset) for c in color])

def tone_func(color, offset):
    return tuple([int(c * (1 - offset) + 128 * offset) for c in color])
Given an objective over a collection of colors that returns the least distance to a target color, how do I ensure that basinhopping isn't better than minimize in scipy?
I was thinking that, for any one color, there will be up to 4 turning points in a v-shaped curve, and so only one minimum. If the value with offset zero is itself a minimum, maybe it could be 5. Am I wrong? In any case each is a single optimum, so if we are only searching one color at a time, there is no reason to use basinhopping.
If we instead use basinhopping to scan all colors at once (we can scale the two different dimensions; in fact this is where the idea of a preprocessor function first came from), it scans them, but does not do such a compelling job of scanning all colors. Some colors it only tries once. I think it might completely skip some colors with large enough sets.
details
I was inspired by the way artyclick shows colors and allows searching for them. If you look at an individual color, for example mauve, you'll notice that it prominently displays the shades, tints, and tones of the color, rather as an artist might like. If you ask it for the name of a color, it will use a hidden unordered list of about a thousand color names, and some javascript to find the nearest color name to the color you chose. In fact it will also show alternatives.
I noticed that quite often a shade, tint or tone of an alternative (or even of the best match) was a better match than the color it provided. For those who don't know about shade, tint and tone, there's a nice write-up at Dunn Edward's Paints. It looks like shade and tint are the same but with signs reversed, if doing this on tuples representing colors. For tone it is different; a negative value would, I think, saturate the result.
I felt like there must be authoritative (or at least well sourced) colorname sources it could be using.
In terms of the results, since I want any color or its shade/tint/tone, I want a result like this:
{'color': '#aabbcc',
'offset': {'type': 'tint', 'value': 0.31060384614807254}}
So I can return the actual color name from the color, plus the type of color transform to get there and the amount you have to go.
For distance of colors, there is a great algorithm that is meant to model human perception that I am using, called CIEDE 2000. Frankly, I'm just using a snippet I found that implements this, it could be wrong.
So now I want to take in two colors, compare their shade, tint, and tone to a target color, and return the one with the least distance. After I am done, I can reconstruct if it was a shade, tint or tone transform from the result just by running all three once and choosing the best fit. With that structure, I can iterate over every color, and that should do it. I use optimization because I don't want to hard code what offsets it should consider (though I am reconsidering this choice now!).
Because I want to consider negatives for tone but not for shade/tint, my objective will have to transform that. I have to include two values to optimize, since the objective function will need to know which color to transform (or else the result will give me no way of knowing which color to use the offset with).
So my call should look something like the following:
result = min(minimize(objective, (i,0), bounds=[(i, i), (-1, 1)]) for i in range(len(colors)))
offset_type = resolve_offset_type(result)
With that in mind, I implemented this solution over the past couple of days.
current solution
from scipy.optimize import minimize
import numpy as np
import math

def clamp(low, x, high):
    return max(low, min(x, high))

def hex_to_rgb(hex_color):
    hex_color = hex_color.lstrip('#')
    return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))

def rgb_to_hex(rgb):
    return '#{:02x}{:02x}{:02x}'.format(*rgb)

def rgb_to_lab(color):
    # Convert RGB to XYZ color space
    R = color[0] / 255.0
    G = color[1] / 255.0
    B = color[2] / 255.0
    R = ((R + 0.055) / 1.055) ** 2.4 if R > 0.04045 else R / 12.92
    G = ((G + 0.055) / 1.055) ** 2.4 if G > 0.04045 else G / 12.92
    B = ((B + 0.055) / 1.055) ** 2.4 if B > 0.04045 else B / 12.92
    X = R * 0.4124 + G * 0.3576 + B * 0.1805
    Y = R * 0.2126 + G * 0.7152 + B * 0.0722
    Z = R * 0.0193 + G * 0.1192 + B * 0.9505
    return (X, Y, Z)

def shade_func(color, offset):
    return tuple([int(c * (1 - offset)) for c in color])

def tint_func(color, offset):
    return tuple([int(c + (255 - c) * offset) for c in color])

def tone_func(color, offset):
    return tuple([int(c * (1 - offset) + 128 * offset) for c in color])
class ColorNameFinder:
    def __init__(self, colors, distance=None):
        if distance is None:
            distance = ColorNameFinder.ciede2000
        self.distance = distance
        self.colors = [hex_to_rgb(color) for color in colors]

    @classmethod
    def euclidean(self, left, right):
        return (left[0] - right[0]) ** 2 + (left[1] - right[1]) ** 2 + (left[2] - right[2]) ** 2

    @classmethod
    def ciede2000(self, color1, color2):
        # Convert color to LAB color space
        lab1 = rgb_to_lab(color1)
        lab2 = rgb_to_lab(color2)

        # Compute CIE 2000 color difference
        C1 = math.sqrt(lab1[1] ** 2 + lab1[2] ** 2)
        C2 = math.sqrt(lab2[1] ** 2 + lab2[2] ** 2)
        a1 = math.atan2(lab1[2], lab1[1])
        a2 = math.atan2(lab2[2], lab2[1])
        dL = lab2[0] - lab1[0]
        dC = C2 - C1
        dA = a2 - a1
        dH = 2 * math.sqrt(C1 * C2) * math.sin(dA / 2)
        L = 1
        C = 1
        H = 1
        LK = 1
        LC = math.sqrt(math.pow(C1, 7) / (math.pow(C1, 7) + math.pow(25, 7)))
        LH = math.sqrt(lab1[0] ** 2 + lab1[1] ** 2)
        CB = math.sqrt(lab2[1] ** 2 + lab2[2] ** 2)
        CH = math.sqrt(C2 ** 2 + dH ** 2)
        SH = 1 + 0.015 * CH * LC
        SL = 1 + 0.015 * LH * LC
        SC = 1 + 0.015 * CB * LC
        T = 0.0
        if (a1 >= a2 and a1 - a2 <= math.pi) or (a2 >= a1 and a2 - a1 > math.pi):
            T = 1
        else:
            T = 0
        dE = math.sqrt((dL / L) ** 2 + (dC / C) ** 2 + (dH / H) ** 2 + T * (dC / SC) ** 2)
        return dE

    def __factory_objective(self, target, preprocessor=lambda x: x):
        def fn(x):
            print(x, preprocessor(x))
            x = preprocessor(x)
            color = self.colors[x[0]]
            offset = x[1]
            bound_offset = abs(offset)
            offsets = [
                shade_func(color, bound_offset),
                tint_func(color, bound_offset),
                tone_func(color, offset)]
            least_error = min([(right, self.distance(target, right))
                               for right in offsets], key=lambda x: x[1])[1]
            return least_error
        return fn

    def __resolve_offset_type(self, sample, target, offset):
        bound_offset = abs(offset)
        shade = shade_func(sample, bound_offset)
        tint = tint_func(sample, bound_offset)
        tone = tone_func(sample, offset)
        lookup = {}
        lookup[shade] = "shade"
        lookup[tint] = "tint"
        lookup[tone] = "tone"
        offsets = [shade, tint, tone]
        least_error = min([(right, self.distance(target, right)) for right in offsets], key=lambda x: x[1])[0]
        return lookup[least_error]

    def nearest_color(self, target):
        target = hex_to_rgb(target)
        preprocessor = lambda x: (int(x[0]), x[1])
        result = min(
            [minimize(self.__factory_objective(target, preprocessor=preprocessor),
                      (i, 0),
                      bounds=[(i, i), (-1, 1)],
                      method='Powell')
             for i, color in enumerate(self.colors)], key=lambda x: x.fun)
        color_index = int(result.x[0])
        nearest_color = self.colors[color_index]
        offset = preprocessor(result.x)[1]
        offset_type = self.__resolve_offset_type(nearest_color, target, offset)
        return {
            "color": rgb_to_hex(nearest_color),
            "offset": {
                "type": offset_type,
                "value": offset if offset_type == 'tone' else abs(offset)
            }
        }
Let's demonstrate this with mauve. We'll define a target that is similar to a shade of mauve, include mauve in a list of colors, and ideally we'll get mauve back from our test.
colors = ['#E0B0FF', '#FF0000', '#000000', '#0000FF']
target = '#DFAEFE'
agent = ColorNameFinder(colors)
agent.nearest_color(target)
We do get mauve back:
{'color': '#e0b0ff',
'offset': {'type': 'shade', 'value': 0.0031060384614807254}}
The distance is 0.004991238317138219:
agent.distance(hex_to_rgb(target), shade_func(hex_to_rgb(colors[0]), 0.0031060384614807254))
why use Powell's method?
In this arrangement, it is simply the best. No other method that uses bounds did a good job of scanning positives and negatives, and I had mixed results using the preprocessor to scale the values back to negative with bounds of (0, 2).
I do notice that in the sample test, a range between about 0.003 and 0.0008 seems to produce the same distance, and the values my approach considers include a large number of these. Is there a more efficient solution?
If I'm wrong, please let me know.
correctness of the color transformations
What is adding a negative amount of white (in the case of a tint)? I was thinking it is like adding a positive amount of black, i.e. a shade with signs reversed.
My implementation is not correct:
agent.distance(hex_to_rgb(target), shade_func(hex_to_rgb(colors[0]), 0.1)) - agent.distance(hex_to_rgb(target), tint_func(hex_to_rgb(colors[0]), -0.1))
produces 0.3239904390784106 instead of 0.
I'll probably be fixing that soon.
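A quick check with the functions above shows where the difference comes from: shade_func(c, t) subtracts c*t from each channel, while tint_func(c, -t) subtracts (255 - c)*t, so the two only coincide on channels where c = 127.5.
c, t = hex_to_rgb('#E0B0FF'), 0.1
print(shade_func(c, t))    # (201, 158, 229): each channel loses c*t
print(tint_func(c, -t))    # (220, 168, 255): each channel loses (255 - c)*t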

Linearly evolving colormap

I am trying to create a colormap that should linearly vary according to a "w" value, from white-red to white-purple.
So...
For w = 1, the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be red.
For w = 10 (example), the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be orange.
For w = 30 (example), the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be yellow.
and so on, until...
For w = 100 (example), the minimum value's color (0 for example) would be white and the maximum value's color (+ inf) would be purple.
I used this website to generate the image: https://g.co/kgs/utJPmw
I can get the first (w = 1) colormap by using this code, but I have no idea how to make it vary the way I would like:
import numpy as np
import matplotlib.cm as cm
from matplotlib.colors import ListedColormap, LinearSegmentedColormap

color_map_1 = cm.get_cmap('Reds', 256)
newcolors_1 = color_map_1(np.linspace(0, 1, 256))
color_map_1 = ListedColormap(newcolors_1)
Any idea on how to do such a thing in Python would be very welcome.
Thank you guys
I finally found the solution. Maybe this is not the cleanest way, but it works very well for what I want to do. The colormaps I create can vary from white-red to white-purple (color spectrum). 765 variations are possible here, but with some small changes to the code it could vary much more or less, depending on what you want.
In the following code, the create_custom_colormap function gives you two outputs, cmap and color_map. cmap is the matrix containing the (r,g,b) values, and color_map is the object that can be used in matplotlib (imshow) as an actual colormap, on any image.
Using the following code, define the functions we will need for this job:
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import ListedColormap, LinearSegmentedColormap

def create_image():
    '''
    Create some random image on which we will apply the colormap. Any other image could replace this one, with or without extent.
    '''
    dx, dy = 0.015, 0.05
    x = np.arange(-4.0, 4.0, dx)
    y = np.arange(-4.0, 4.0, dy)
    X, Y = np.meshgrid(x, y)
    extent = np.min(x), np.max(x), np.min(y), np.max(y)
    def z_fun(x, y):
        return (1 - x / 2 + x**5 + y**6) * np.exp(-(x**2 + y**2))
    Z2 = z_fun(X, Y)
    return (extent, Z2)

def create_cmap(**kwargs):
    '''
    Create a color matrix and a color map using 3 lists of r (red), g (green) and b (blue) values.
    Parameters:
        - r (list of floats): red values, between 0 and 1
        - g (list of floats): green values, between 0 and 1
        - b (list of floats): blue values, between 0 and 1
    Returns:
        - color_matrix (numpy 2D array): contains all the rgb values for a given colormap
        - color_map (matplotlib object): the color_matrix transformed into an object that matplotlib can use on figures
    '''
    color_matrix = np.empty([256, 3])
    color_matrix.fill(0)
    color_matrix[:, 0] = kwargs["r"]
    color_matrix[:, 1] = kwargs["g"]
    color_matrix[:, 2] = kwargs["b"]
    color_map = ListedColormap(color_matrix)
    return (color_matrix, color_map)

def standardize_timeseries_between(timeseries, borne_inf=0, borne_sup=1):
    '''
    For readability reasons, I defined r,g,b values between 0 and 255. But the matplotlib ListedColormap function expects values between 0 and 1.
    Parameters:
        timeseries (list of floats): can be one color vector in our case (either r, g or b)
        borne_inf (int): the minimum value in our timeseries will be replaced by this value
        borne_sup (int): the maximum value in our timeseries will be replaced by this value
    '''
    timeseries_standardized = []
    for i in range(len(timeseries)):
        a = (borne_sup - borne_inf) / (max(timeseries) - min(timeseries))
        b = borne_inf - a * min(timeseries)
        timeseries_standardized.append(a * timeseries[i] + b)
    timeseries_standardized = np.array(timeseries_standardized)
    return (timeseries_standardized)

def create_custom_colormap(weight):
    '''
    This function is at the heart of the process. It takes only one <weight> parameter, that you can choose.
    - For weight between 0 and 255, the colormaps that are created will vary between white-red (min-max) to white-yellow (min-max).
    - For weight between 256 and 510, the colormaps that are created will vary between white-green (min-max) to white-cyan (min-max).
    - For weight between 511 and 765, the colormaps that are created will vary between white-blue (min-max) to white-purple (min-max).
    '''
    if weight <= 255:
        ### 0 <= w <= 255
        r = np.repeat(1, 256)
        g = np.arange(0, 256, 1)
        g = standardize_timeseries_between(g, weight/256, 1)
        g = g[::-1]
        b = np.arange(0, 256, 1)
        b = standardize_timeseries_between(b, 1/256, 1)
        b = b[::-1]
    if weight > 255 and weight <= 255*2:
        weight = weight - 255
        ### 255 < w <= 510
        g = np.repeat(1, 256)
        r = np.arange(0, 256, 1)
        r = standardize_timeseries_between(r, 1/256, 1)
        r = r[::-1]
        b = np.arange(0, 256, 1)
        b = standardize_timeseries_between(b, weight/256, 1)
        b = b[::-1]
    if weight > 255*2 and weight <= 255*3:
        weight = weight - 255*2
        ### 510 < w <= 765
        b = np.repeat(1, 256)
        r = np.arange(0, 256, 1)
        r = standardize_timeseries_between(r, weight/256, 1)
        r = r[::-1]
        g = np.arange(0, 256, 1)
        g = standardize_timeseries_between(g, 1/256, 1)
        g = g[::-1]
    cmap, color_map = create_cmap(r=r, g=g, b=b)
    return (cmap, color_map)
Use the function create_custom_colormap to get the colormap you want, by giving the function a value between 0 and 765 as argument (see 5 examples in the figure below):
### Let us create some image (any other could be used).
extent, Z2 = create_image()

### Now create a colormap, using the w value you want: 0 = white-red, 765 = white-purple.
cmap, color_map = create_custom_colormap(weight=750)

### Plot the result
plt.imshow(Z2, cmap=color_map, alpha=0.7,
           interpolation='bilinear', extent=extent)
plt.colorbar()

Mapping RGB data to values in legend

This is a follow-up to my previous question here.
I've been trying to convert the color data in a heatmap to RGB values.
source image
In the image below, to the left is a subplot present in panel D of the source image. It has 6 x 6 cells (6 rows and 6 columns). On the right, we see the binarized image, with white highlighting the cell that is clicked after running the code below. The input for running the code is the image below. The output (mean = [ 27.72 26.83 144.17]) is the mean BGR color of the cell highlighted in white in the right image.
A really nice solution that was provided as an answer to my previous question is the following (ref):
import cv2
import numpy as np

# print pixel value on click
def mouse_callback(event, x, y, flags, params):
    if event == cv2.EVENT_LBUTTONDOWN:
        # get specified color
        row = y
        column = x
        color = image[row, column]
        print('color = ', color)

        # calculate range
        thr = 20  # ± color range
        up_thr = color + thr
        up_thr[up_thr < color] = 255
        down_thr = color - thr
        down_thr[down_thr > color] = 0

        # find points in range
        img_thr = cv2.inRange(image, down_thr, up_thr)  # accepted range
        height, width, _ = image.shape
        left_bound = x - (x % round(width/6))
        right_bound = left_bound + round(width/6)
        up_bound = y - (y % round(height/6))
        down_bound = up_bound + round(height/6)
        img_rect = np.zeros((height, width), np.uint8)  # bounded by rectangle
        cv2.rectangle(img_rect, (left_bound, up_bound), (right_bound, down_bound), (255,255,255), -1)
        img_thr = cv2.bitwise_and(img_thr, img_rect)

        # get points around specified point
        img_spec = np.zeros((height, width), np.uint8)  # specified mask
        last_img_spec = np.copy(img_spec)
        img_spec[row, column] = 255
        kernel = np.ones((3,3), np.uint8)  # dilation structuring element
        while cv2.bitwise_xor(img_spec, last_img_spec).any():
            last_img_spec = np.copy(img_spec)
            img_spec = cv2.dilate(img_spec, kernel)
            img_spec = cv2.bitwise_and(img_spec, img_thr)
            cv2.imshow('mask', img_spec)
            cv2.waitKey(10)

        avg = cv2.mean(image, img_spec)[:3]
        mean.append(np.around(np.array(avg), 2))
        print('mean = ', np.around(np.array(avg), 2))
        # print(mean)  # appends data to variable mean

if __name__ == '__main__':
    mean = []  # np.zeros((6, 6))

    # create window and callback
    winname = 'img'
    cv2.namedWindow(winname)
    cv2.setMouseCallback(winname, mouse_callback)

    # read & display image
    image = cv2.imread('ip2.png', 1)
    # image = image[3:62, 2:118]  # crop the image to 6x6 cells

    # ---- resize image --------------------------------------------------
    # appended this to the original code
    print('Original Dimensions : ', image.shape)
    scale_percent = 220  # percent of original size
    width = int(image.shape[1] * scale_percent / 100)
    height = int(image.shape[0] * scale_percent / 100)
    dim = (width, height)

    # resize image
    image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)
    # ----------------------------------------------------------------------
    cv2.imshow(winname, image)
    cv2.waitKey()  # press any key to exit
    cv2.destroyAllWindows()
What do I want to do next?
The mean of the RGB values thus obtained has to be mapped to the values in the following legend provided in the source image,
I would like to ask for suggestions on how to map the RGB data to the values in the legend.
Note: In my previous post it has been suggested that one could
fit the RGB values into an equation which gives continuous results.
Any suggestions in this direction will also be helpful.
EDIT:
Answering the comment below
I did the following to measure the RGB values of the legend.
Input image:
This image has 8 cells across its width (columns) and 1 cell in height (rows).
Changed these lines of code:
left_bound = x - (x % round(width/8)) # 6 replaced with 8
right_bound = left_bound + round(width/8) # 6 replaced with 8
up_bound = y - (y % round(height/1)) # 6 replaced with 1
down_bound = up_bound + round(height/1) # 6 replaced with 1
Mean obtained for each cell/color in the legend, from left to right:
mean = [ 82.15 174.95 33.66]
mean = [45.55 87.01 17.51]
mean = [8.88 8.61 5.97]
mean = [16.79 17.96 74.46]
mean = [ 35.59 30.53 167.14]
mean = [ 37.9 32.39 233.74]
mean = [120.29 118. 240.34]
mean = [238.33 239.56 248.04]
You can try to apply a piecewise approach: make pairwise transitions between colors:
c[i->i+1](t) = t*(R[i+1], G[i+1], B[i+1]) + (1-t)*(R[i], G[i], B[i])
Do the same for the values:
val[i->i+1](t) = t*val[i+1] + (1-t)*val[i]
where i is the index of a color in the legend scale and t is a parameter in the [0:1] range.
So you have a continuous mapping for both, and you just need to find the color parameters i and t closest to the sample and read the value off the mapping.
Update:
To find the color parameters, you can think of every pair of neighbouring legend colors as a pair of 3D points, and your queried color as an external 3D point. Now you just need to find the length of the perpendicular from the external point to each such line; iterating over the legend color pairs, the shortest perpendicular gives you i.
Then find the intersection point of the perpendicular and the line. This point is located at distance A from the line start, and if the line length is L, the parameter value is t = A/L.
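A minimal Python sketch of that projection (the function names are mine; legend and values would hold the measured BGR means and the legend numbers, as in the brute-force code below):
import numpy as np

def project_on_segment(q, p1, p2):
    # project color q onto the 3D segment p1 -> p2; return (t, distance)
    q, p1, p2 = (np.asarray(v, dtype=float) for v in (q, p1, p2))
    d = p2 - p1
    t = np.clip(np.dot(q - p1, d) / np.dot(d, d), 0.0, 1.0)  # clamp to the segment
    return t, np.linalg.norm(q - (p1 + t * d))

def color_to_value(q, legend, values):
    best_d, best_v = float('inf'), None
    for i in range(len(legend) - 1):  # iterate neighbouring legend pairs
        t, d = project_on_segment(q, legend[i], legend[i + 1])
        if d < best_d:  # keep the closest segment and interpolate its value
            best_d = d
            best_v = (1 - t) * values[i] + t * values[i + 1]
    return best_v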
Update 2:
A simple brute-force solution to illustrate the piecewise approach:
#include "opencv2/opencv.hpp"
#include <string>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char* argv[])
{
Mat Image=cv::Mat::zeros(100,250,CV_32FC3);
std::vector<cv::Scalar> Legend;
Legend.push_back(cv::Scalar(82.15,174.95,33.66));
Legend.push_back(cv::Scalar(45.55, 87.01, 17.51));
Legend.push_back(cv::Scalar(8.88, 8.61, 5.97));
Legend.push_back(cv::Scalar(16.79, 17.96, 74.46));
Legend.push_back(cv::Scalar(35.59, 30.53, 167.14));
Legend.push_back(cv::Scalar(37.9, 32.39, 233.74));
Legend.push_back(cv::Scalar(120.29, 118., 240.34));
Legend.push_back(cv::Scalar(238.33, 239.56, 248.04));
std::vector<float> Values;
Values.push_back(-4);
Values.push_back(-2);
Values.push_back(0);
Values.push_back(2);
Values.push_back(4);
Values.push_back(8);
Values.push_back(16);
Values.push_back(32);
int w = 30;
int h = 10;
for (int i = 0; i < Legend.size(); ++i)
{
cv::rectangle(Image, Rect(i * w, 0, w, h), Legend[i]/255, -1);
}
std::vector<cv::Scalar> Smooth_Legend;
std::vector<float> Smooth_Values;
for (int i = 0; i < Legend.size()-1; ++i)
{
cv::Scalar c1 = Legend[i];
cv::Scalar c2 = Legend[i + 1];
float v1 = Values[i];
float v2 = Values[i+1];
for (int j = 0; j < w; ++j)
{
float t = (float)j / (float)w;
Scalar c = c2 * t + c1 * (1 - t);
float v = v2 * t + v1 * (1 - t);
float x = i * w + j;
line(Image, Point(x, h), Point(x, h + h), c/255, 1);
Smooth_Values.push_back(v);
Smooth_Legend.push_back(c);
}
}
Scalar qp = cv::Scalar(5, 0, 200);
float d_min = FLT_MAX;
int ind = -1;
for (int i = 0; i < Smooth_Legend.size(); ++i)
{
float d = cv::norm(qp- Smooth_Legend[i]);
if (d < d_min)
{
ind = i;
d_min = d;
}
}
std::cout << Smooth_Values[ind] << std::endl;
line(Image, Point(ind, 3 * h), Point(ind, 4 * h), Scalar::all(255), 2);
circle(Image, Point(ind, 4 * h), 3, qp/255,-1);
putText(Image, std::to_string(Smooth_Values[ind]), Point(ind, 70), FONT_HERSHEY_DUPLEX, 1, Scalar(0, 0.5, 0.5), 0.002);
cv::imshow("Legend", Image);
cv::imwrite("result.png", Image*255);
cv::waitKey();
}
The result:
Python:
import cv2
import numpy as np

height = 100
width = 250
Image = np.zeros((height, width, 3), np.float64)

legend = np.array([(82.15, 174.95, 33.66),
                   (45.55, 87.01, 17.51),
                   (8.88, 8.61, 5.97),
                   (16.79, 17.96, 74.46),
                   (35.59, 30.53, 167.14),
                   (37.9, 32.39, 233.74),
                   (120.29, 118., 240.34),
                   (238.33, 239.56, 248.04)], np.float64)
values = np.array([-4, -2, 0, 2, 4, 8, 16, 32], np.float64)

# width of a cell, which also defines the number
# of subdivisions of one segment transition.
# Larger values will give more accuracy, but will work slower.
w = 30
# Only for display purposes: height of bars in the result image.
h = 10

# Plot legend cells (to check correctness only)
for i in range(len(legend)):
    col = legend[i]
    cv2.rectangle(Image, (i * w, 0, w, h), col/255, -1)

# Build smoothed scales for colors and corresponding values
Smooth_Legend = []
Smooth_Values = []
for i in range(len(legend)-1):  # iterate known knots
    c1 = legend[i]      # start color point
    c2 = legend[i + 1]  # end color point
    v1 = values[i]      # start value
    v2 = values[i+1]    # end value
    for j in range(w):  # slide inside the [start:end] interval
        t = float(j) / float(w)    # map it to the [0:1] interval
        c = c2 * t + c1 * (1 - t)  # transition between c1 and c2
        v = v2 * t + v1 * (1 - t)  # transition between v1 and v2
        x = i * w + j              # global scale coordinate (for drawing)
        cv2.line(Image, (x, h), (x, h + h), c/255, 1)  # draw one tick of the smoothed scale
        Smooth_Values.append(v)  # append smoothed values for the next step
        Smooth_Legend.append(c)  # append smoothed colors for the next step

# queried color
qp = np.array([5, 0, 200])
# initial value for minimal distance, set to a large value
d_min = 1e7
# index for color search
ind = -1
# search for the minimal distance from the queried color to the smoothed scale colors
for i in range(len(Smooth_Legend)):
    # distance
    d = cv2.norm(qp - Smooth_Legend[i])
    if d < d_min:
        ind = i
        d_min = d
# ind contains the index of the closest color in the smoothed scale,
# and now we can extract the corresponding value from the smoothed value scale
print(Smooth_Values[ind])  # value mapped to the queried color

# plot a pointer (to check ourselves)
cv2.line(Image, (ind, 3 * h), (ind, 4 * h), (255, 255, 255), 2)
cv2.circle(Image, (ind, 4 * h), 3, qp/255, -1)
cv2.putText(Image, str(Smooth_Values[ind]), (ind, 70), cv2.FONT_HERSHEY_DUPLEX, 1, (0, 0.5, 0.5), 1)

# show window
cv2.imshow("Legend", Image)
# save to file
cv2.imwrite("result.png", Image*255)
cv2.waitKey()

OpenCV: Segment each digit from the given image. Digits are written in each cell of a row matrix. Each cell is bounded by margins

I have been trying to recognise handwritten characters (digits/alphabet) from a form document. Form documents have one-dimensional rows of cells, and the applicant has to fill in their information within those bounded cells. However, I'm unable to segment the digits (currently my input consists only of digits) from their bounding boxes.
I went through the following steps:
Reading the image (as a grayscale image) via the "imread" method of cv2. Initial image size: 19 x 209 (in pixels).
pic = "crop/cropped000.jpg"
newImg = cv2.imread(pic, 0)
Resizing the image to 200% of its original size via the "resize" method of cv2, using INTER_AREA interpolation. Resized image size: 38 x 418 (in pixels).
h,w = newImg.shape
resizedImg = cv2.resize(newImg, (2*w,2*h), interpolation=cv2.INTER_AREA)
Applied Canny edge detection.
v = np.median(resizedImg)
sigma = 0.33
lower = int(max(0, (1.0 - sigma) * v))
upper = int(min(255, (1.0 + sigma) * v))
edgedImg = cv2.Canny(resizedImg, lower, upper)
Cropped the contours and saved them as images in the 'BB' directory.
# OpenCV 3.x signature; OpenCV 4.x returns only (contours, hierarchy)
im2, contours, hierarchy = cv2.findContours(edgedImg.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
num = 0
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    num += 1
    new_img = resizedImg[y:y+h, x:x+w]
    cv2.imwrite('BB/'+str(num).zfill(3) + '.jpg', new_img)
The entire code in summary:
import cv2
import numpy as np

pic = "crop/cropped000.jpg"
newImg = cv2.imread(pic, 0)
h, w = newImg.shape
print(newImg.shape)
resizedImg = cv2.resize(newImg, (2*w, 2*h), interpolation=cv2.INTER_AREA)
print(resizedImg.shape)
v = np.median(resizedImg)
sigma = 0.33
lower = int(max(0, (1.0 - sigma) * v))
upper = int(min(255, (1.0 + sigma) * v))
edgedImg = cv2.Canny(resizedImg, lower, upper)
# OpenCV 3.x signature; OpenCV 4.x returns only (contours, hierarchy)
im2, contours, hierarchy = cv2.findContours(edgedImg.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
num = 0
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    num += 1
    new_img = resizedImg[y:y+h, x:x+w]
    cv2.imwrite('BB/'+str(num).zfill(3) + '.jpg', new_img)
Images produced are posted here:
https://imgur.com/a/GStIcdj
I had to double the image size because Canny edge detection was producing double edges for an object (however, it still does). I have also played with other OpenCV functionality like thresholding, Gaussian blur, dilation and erosion, but all in vain.
# we need one more parameter for the date cell width, as this could differ between banks
def crop_image_data_from_date_field(image, new_start_h, new_end_h, new_start_w, new_end_w, cell_width):
    # for a date, each cell has the same height and width (here width is 25 px), so coordinates are derived from the width
    cropped_image_list = []
    starting_width = new_start_w
    for i in range(1, 9):  # a date has only 8 digits: DD/MM/YYYY
        cropped_img = image[new_start_h:new_end_h, new_start_w + 1:new_start_w + 22]
        new_start_w = starting_width + (i * cell_width)
        cropped_img = cv2.resize(cropped_img, (28, 28))
        image_name = 'cropped_date/cropped_' + str(i) + '.png'
        cv2.imwrite(image_name, cropped_img)
        cropped_image_list.append(image_name)
    # print('cropped_image_list : ', cropped_image_list, len(cropped_image_list))
    # rec_value = handwritten_digit_recog.recog_digits(cropped_image_list)
    recvd_value = custom_predict.predict_digit(cropped_image_list)
    # print('recvd val : ', recvd_value)
    return recvd_value
You need to specify each cell's width and its x, y, w, h; a sketch of that fixed-width slicing follows.
I think this will help you.
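For illustration, here is a minimal, self-contained version of the slicing loop above; the coordinates and cell width are hypothetical and have to be measured once for your own form layout:
import cv2

# hypothetical layout values -- measure these for your own form
cell_width = 25
start_h, end_h = 3, 62   # vertical extent of the row of cells
start_w = 2              # left edge of the first cell

image = cv2.imread("crop/cropped000.jpg", 0)
for i in range(8):       # 8 digit cells: DD/MM/YYYY
    x0 = start_w + i * cell_width
    cell = image[start_h:end_h, x0 + 1:x0 + cell_width - 3]  # trim the cell margins
    cell = cv2.resize(cell, (28, 28))                        # MNIST-style input size
    cv2.imwrite('cropped_date/cropped_' + str(i + 1) + '.png', cell)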

Selecting colors that are furthest apart

I'm working on a project that requires me to select "unique" colors for each item. At times there could be upwards of 400 items. Is there some way out there of selecting the 400 colors that differ the most? Is it as simple as just changing the RGB values by a fixed increment?
You could come up with an equal distribution of 400 colours by incrementing red, green and blue in turn by 34, as sketched below.
That is:
You know you have three colour channels: red, green and blue
You need 400 distinct combinations of R, G and B
So on each channel the number of increments you need is the cube root of 400, i.e. about 7.36
To span the range 0..255 with 7.36 increments, each increment must be about 255/7.36, i.e. about 34
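A minimal sketch of that grid (34-unit steps give 8 levels per channel, i.e. 8**3 = 512 combinations, of which you keep the first 400):
from itertools import product

# 8 levels per channel: 0, 34, 68, ..., 238
palette = list(product(range(0, 256, 34), repeat=3))[:400]
print(len(palette))  # 400 colors spanning the RGB cube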
Probably HSL or HSV would be a better representations than RGB for this task.
You may find that changing the hue gives better variability perception to the eye, so adjust your increments in a way that for every X units changed in S and L you change Y (with Y < X) units of hue, and adjust X and Y so you cover the spectrum with your desired amount of samples.
Here is my final code. Hopefully it helps someone down the road.
from PIL import Image, ImageDraw
import math, colorsys, os.path

# number of color circles needed
qty = 400

# the lowest the value (V in HSV) can go
vmin = 30

# calculate how much to increment value by
vrange = 100 - vmin
if qty >= 72:
    vdiff = math.floor(vrange / (qty / 72))
else:
    vdiff = 0

# set options
sizes = [16, 24, 32]
border_color = '000000'
border_size = 3

# initialize variables
hval = 0
sval = 50
vval = vmin
count = 0

while count < qty:
    im = Image.new('RGBA', (100, 100), (0, 0, 0, 0))
    draw = ImageDraw.Draw(im)
    draw.ellipse((5, 5, 95, 95), fill='#'+border_color)
    r, g, b = colorsys.hsv_to_rgb(hval/360.0, sval/100.0, vval/100.0)
    r = int(r*255)
    g = int(g*255)
    b = int(b*255)
    draw.ellipse((5+border_size, 5+border_size, 95-border_size, 95-border_size), fill=(r, g, b))
    del draw
    hexval = '%02x%02x%02x' % (r, g, b)
    for size in sizes:
        result = im.resize((size, size), Image.ANTIALIAS)
        result.save(str(qty)+'/'+hexval+'_'+str(size)+'.png', 'PNG')  # assumes a '400/' output directory exists
    if hval + 10 < 360:
        hval += 10
    else:
        if sval == 50:
            hval = 0
            sval = 100
        else:
            hval = 0
            sval = 50
            vval += vdiff
    count += 1
Hey, I came across this problem a few times in my projects where I wanted to display, say, clusters of points. I found that the best way to go was to use the colormaps from matplotlib (https://matplotlib.org/stable/tutorials/colors/colormaps.html):
import matplotlib.pyplot as plt
import numpy as np

n_colors = 400
colors = plt.get_cmap("hsv")(np.linspace(0, 1, n_colors))
Note that the colormap object is called, not indexed. This will output rgba colors, so you can get the rgb with just
rgb = colors[:, :3]
