I am trying to use the following code to calculate the delta_e.
from colormath.color_objects import sRGBColor, LabColor
from colormath.color_conversions import convert_color
from colormath.color_diff import delta_e_cie2000
# Red color
color1_rgb = sRGBColor(1.0, 0.0, 0.0)
# Blue color
color2_rgb = sRGBColor(0.0, 0.0, 1.0)
# Convert both colors from RGB to the Lab color space
color1_lab = convert_color(color1_rgb, LabColor)
color2_lab = convert_color(color2_rgb, LabColor)
# Find the color difference
delta_e = delta_e_cie2000(color1_lab, color2_lab)
print("The difference between the 2 colors =", delta_e)
However, the input that I have is the name of the color, such as 'Black', 'White', 'Red', etc. I originally thought of using webcolors.name_to_rgb to convert the name to RGB, but this gives me an IntegerRGB tuple and not the sRGB value that I would need. How should I go about converting from IntegerRGB to sRGB? Alternatively, is it possible to get the sRGB from the name directly? Many thanks!
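A sketch of one way to bridge the two libraries: webcolors.name_to_rgb returns integer components in the 0–255 range, so dividing each by 255 gives the 0.0–1.0 floats that sRGBColor expects. The helper name_to_srgb below is just for illustration (some colormath versions also accept 0–255 values directly via an is_upscaled flag):
import webcolors
from colormath.color_objects import sRGBColor

def name_to_srgb(name):
    # name_to_rgb returns an IntegerRGB namedtuple with 0-255 components
    rgb = webcolors.name_to_rgb(name)
    # sRGBColor expects floats in the 0.0-1.0 range
    return sRGBColor(rgb.red / 255, rgb.green / 255, rgb.blue / 255)

color1_rgb = name_to_srgb('red')
color2_rgb = name_to_srgb('blue')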
I want to display sRGB values based on CIE LCHab values. I don't really know the topic around colour theory, but here is my code; I use the colour library.
Did I miss something?
import colour
import numpy as np

# Use illuminant D65
d65 = [0.31382, 0.33100]
# Maximum lightness of 100
lightness = 100
# Maximum chroma of 90
chroma = 90
# Create the primary hues
hue = np.arange(0, 360, 45)
# Create an np array of LCHab values
primary_lchab = np.array([[lightness, chroma, x] for x in hue])
# Convert to CIE L*a*b*
primary_lab = colour.LCHab_to_Lab(primary_lchab)
# Convert to XYZ
primary_xyz = colour.Lab_to_XYZ(primary_lab)
# Convert to sRGB
primary_rgb = colour.XYZ_to_sRGB(primary_xyz, d65, 'Bradford')
# Denormalize the values
primary_rgb * 255
The output is out of range, with negative values:
array([[ 409.91335532, 170.93938038, 260.71868158],
[ 393.03002494, 198.83037084, 134.96104706],
[ 300.27298956, 250.59731666, 58.49528246],
[ 157.31758891, 283.79165255, 123.85945153],
[-1256.38350547, 296.51665099, 254.2577884 ],
[-2417.70063864, 292.21019209, 380.58920247],
[ -374.81508589, 264.85047515, 434.59056034],
[ 315.68646752, 211.99574857, 383.26874897]])
I want a correct output.
The problem here is that you are constructing a hue sweep that covers a significant portion of the CIE L*a*b* space; in doing so, some of the colours, i.e. the ones with negative component values, fall outside the sRGB gamut:
import colour
import numpy as np
D65 = colour.CCS_ILLUMINANTS["CIE 1964 10 Degree Standard Observer"]["D65"]
hue = np.arange(0, 360, 45)
LCHab = colour.utilities.tstack([np.full(hue.shape, 100), np.full(hue.shape, 90), hue])
Lab = colour.LCHab_to_Lab(LCHab)
XYZ = colour.Lab_to_XYZ(Lab, D65)
sRGB = (
colour.cctf_encoding(
np.clip(colour.XYZ_to_sRGB(XYZ, apply_cctf_encoding=False), 0, 1)
)
* 255
)
print(sRGB)
figure, axes = colour.plotting.plot_RGB_colourspaces_in_chromaticity_diagram_CIE1976UCS(
"sRGB", diagram_opacity=0.25, standalone=False
)
uv = colour.Luv_to_uv(colour.XYZ_to_Luv(XYZ, D65))
axes.scatter(uv[..., 0], uv[..., 1])
colour.plotting.render()
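For reference, a small follow-up sketch (assuming the snippet above has been run) that flags which of the eight colours are out of gamut before clipping:
# Linear sRGB values before encoding; any component outside [0, 1] is out of gamut
RGB_linear = colour.XYZ_to_sRGB(XYZ, apply_cctf_encoding=False)
out_of_gamut = np.any((RGB_linear < 0) | (RGB_linear > 1), axis=-1)
print(out_of_gamut)  # one boolean flag per colour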
I was working on the "Discrete distribution as horizontal bar chart" example, found here: LINK, using Matplotlib 3.1.1.
I've been circling around the question for a while, but I still can't figure it out: what's the meaning of the instruction: category_colors = plt.get_cmap('RdYlGn')(np.linspace(0.15, 0.85, data.shape[1])) ?
As np.linspace(0.15, 0.85, data.shape[1]) resolves to array([0.15 , 0.325, 0.5 , 0.675, 0.85 ]), I first thought that the program was using the colormap RdYlGn (supposed to go from color=0.0 to color=1.0) and was then taking the 5 specific colors located at the points 0.15, ..., 0.85.
But, printing category_colors resolves to a (5, 4) array:
array([[0.89888504, 0.30549789, 0.20676663, 1. ],
[0.99315648, 0.73233372, 0.42237601, 1. ],
[0.99707805, 0.9987697 , 0.74502115, 1. ],
[0.70196078, 0.87297193, 0.44867359, 1. ],
[0.24805844, 0.66720492, 0.3502499 , 1. ]])
I don't understand what these numbers refer to.
plt.get_cmap('RdYlGn') returns a function which maps a number between 0 and 1 to a corresponding color, where 0 gets mapped to red, 0.5 to yellow and 1 to green. Often, this function gets the name cmap = plt.get_cmap('RdYlGn'). Then cmap(0) (which is the same as plt.get_cmap('RdYlGn')(0)) would be the rgba value (0.6470588235294118, 0.0, 0.14901960784313725, 1.0) for (red, green, blue, alpha). In hexadecimal, this color would be #a50026.
By numpy's broadcasting magic, cmap(np.array([0.15 , 0.325, 0.5 , 0.675, 0.85 ])) gets the same result as np.array([cmap(0.15), cmap(0.325), ..., cmap(0.85)]). (In other words, many numpy functions applied to an array return an array of that function applied to the individual elements.)
So, the first row of category_colors = cmap(np.linspace(0.15, 0.85, 5)) will be the rgba values of the color corresponding to the value 0.15, i.e. (0.89888504, 0.30549789, 0.20676663, 1.0). This is a color with 90% red, 31% green and 21% blue (and alpha=1 for completely opaque), so quite reddish. The next row holds the rgba values corresponding to 0.325, and so on.
Here is some code to illustrate the concepts:
import matplotlib.pyplot as plt
from matplotlib.colors import to_hex # convert a color to hexadecimal format
from matplotlib.cm import ScalarMappable # needed to create a custom colorbar
import numpy as np
cmap = plt.get_cmap('RdYlGn')
color_values = np.linspace(0.15, 0.85, 5)
category_colors = cmap(color_values)
plt.barh(color_values, 1, height=0.15, color=category_colors)
plt.yticks(color_values)
plt.colorbar(ScalarMappable(cmap=cmap), ticks=color_values)
plt.ylim(0, 1)
plt.xlim(0, 1.1)
plt.xticks([])
for val, color in zip(color_values, category_colors):
r, g, b, a = color
plt.text(0.1, val, f'r:{r:0.2f} g:{g:0.2f} b:{b:0.2f} a:{a:0.1f}\nhex:{to_hex(color)}', va='center')
plt.show()
PS: You might also want to read about norms, which map an arbitrary range onto the 0–1 range used by colormaps.
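A minimal sketch of that idea (the vmin/vmax values here are arbitrary, purely for illustration):
import matplotlib.pyplot as plt

cmap = plt.get_cmap('RdYlGn')
norm = plt.Normalize(vmin=10, vmax=50)  # maps the data range 10..50 onto 0..1
print(cmap(norm(30)))  # rgba tuple for the midpoint of that range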
Following this example of K-means clustering, I want to recreate the same result, only I'm very keen for the final image to contain just the quantized colours (plus a white background). As it is, the colour bars get smooshed together into a one-pixel line of blended colours.
Whilst they look very similar, the image (top half) is what I've got from CV2; it contains 38 colours in total.
The lower image only has 10 colours and is what I'm after.
Let's look at a bit of that with 6 times magnification:
I've tried:
# OpenCV and Python K-Means Color Clustering
# build a histogram of clusters and then create a figure
# representing the number of pixels labeled to each color
hist = colour_utils.centroid_histogram(clt)
bar = colour_utils.plot_colors(hist, clt.cluster_centers_)
bar = cv2.resize(bar, (460, 345), 0, 0, interpolation = cv2.INTER_NEAREST)
However, the resize seems to have no effect on the size, nor does changing the interpolation type. I don't know what controls the initial image size either.
Confused.
Any ideas?
I recommend showing the image using cv2.imshow instead of matplotlib.
cv2.imshow shows the image "pixel to pixel" by default, while matplotlib.pyplot matches the image dimensions to the size of the axes.
bar_bgr = cv2.cvtColor(bar, cv2.COLOR_RGB2BGR) # Convert RGB to BGR
cv2.imshow('bar', bar_bgr)
cv2.waitKey()
cv2.destroyAllWindows()
In case you want to use matplotlib, take a look at: Display image with a zoom = 1 with Matplotlib imshow() (how to?).
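If you do want to stay with matplotlib, the common approach from that link is to size the figure from the image dimensions so that one array pixel maps to one screen pixel; a sketch, assuming bar is the RGB array from above:
import matplotlib.pyplot as plt

dpi = 100
height, width = bar.shape[:2]
fig = plt.figure(figsize=(width / dpi, height / dpi), dpi=dpi)
fig.add_axes([0, 0, 1, 1])  # axes spanning the whole figure, no margins
plt.axis('off')
plt.imshow(bar)
plt.show()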
Code used for testing:
# import the necessary packages
import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
import argparse
#import utils
import cv2
def centroid_histogram(clt):
# grab the number of different clusters and create a histogram
# based on the number of pixels assigned to each cluster
numLabels = np.arange(0, len(np.unique(clt.labels_)) + 1)
(hist, _) = np.histogram(clt.labels_, bins = numLabels)
# normalize the histogram, such that it sums to one
hist = hist.astype("float")
hist /= hist.sum()
# return the histogram
return hist
def plot_colors(hist, centroids):
# initialize the bar chart representing the relative frequency
# of each of the colors
bar = np.zeros((50, 300, 3), dtype = "uint8")
startX = 0
# loop over the percentage of each cluster and the color of
# each cluster
for (percent, color) in zip(hist, centroids):
# plot the relative percentage of each cluster
endX = startX + (percent * 300)
cv2.rectangle(bar, (int(startX), 0), (int(endX), 50),
color.astype("uint8").tolist(), -1)
startX = endX
# return the bar chart
return bar
# load the image and convert it from BGR to RGB so that
# we can display it with matplotlib
image = cv2.imread('chelsea.png')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# show our image
plt.figure()
plt.axis("off")
plt.imshow(image)
# reshape the image to be a list of pixels
image = image.reshape((image.shape[0] * image.shape[1], 3))
# cluster the pixel intensities
clt = KMeans(n_clusters = 5)
clt.fit(image)
# build a histogram of clusters and then create a figure
# representing the number of pixels labeled to each color
hist = centroid_histogram(clt)
bar = plot_colors(hist, clt.cluster_centers_)
# show our color bar
#plt.figure()
#plt.axis("off")
#plt.imshow(bar)
#plt.show()
bar = cv2.resize(bar, (460, 345), 0, 0, interpolation = cv2.INTER_NEAREST)
bar_bgr = cv2.cvtColor(bar, cv2.COLOR_RGB2BGR) # Convert RGB to BGR
cv2.imshow('bar', bar_bgr)
cv2.waitKey()
cv2.destroyAllWindows()
I am trying to figure out whether a particular color exists in an image or not. I want to write Python code to compare a given color value with the colors at certain location coordinates of the image. I already tried to get a solution with segmentation of the image in color space, but I could not make it work.
I am using OpenCV in Python.
I want to make a program like:
given_color = Blue (Color Values)
if Blue == Color_values_detected_from_image:
print("Blue Color is present at your given area")
else:
print("Given Color Not Found")
Could you please advise me on where I should start?
I am expecting that if I give the coordinates of a rectangle in a certain area of the image, then the colors there will be compared with my given color values.
This can be done by simple pixel-wise comparison and NumPy's all method.
Let's have a look at the following code:
import cv2
import numpy as np
# Read input image
img = cv2.imread('images/colors.png', cv2.IMREAD_COLOR)
cv2.imshow('img', img)
# Region of interest (x1, x2, y1, y2)
roi = (200, 700, 0, 100)
imgRoi = img[roi[2]:roi[3], roi[0]:roi[1]]
cv2.imshow('imgRoi', imgRoi)
# Color of interest [B, G, R]
coi = [0, 255, 0]
# Compare each pixel with color; logical AND over all colors (axis=2)
cmp = np.all(imgRoi == coi, axis=2)
# From here, do whatever you like with this information...
# For example, show mask where color of interest was found
out = np.zeros((imgRoi.shape[0], imgRoi.shape[1], 1), np.uint8)
out[cmp] = 255
cv2.imshow('out', out)
cv2.waitKey(0)
The input image looks like this:
The region of interest (ROI) looks like this:
As an exemplary output, here's the mask where the color of interest #00ff00 was found:
Hope that helps!
P.S. Perhaps the Python/NumPy masters can suggest a more elegant way to "translate" the two points (x1, y1), (x2, y2) to the indices x1:x2, y1:y2. Right now, this notation looks quite cumbersome...
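One slightly tidier option, as a sketch: np.s_ builds slice objects, so the ROI can be stored once and applied directly (values taken from the example above):
# np.s_ returns the slice objects written inside its brackets
roi = np.s_[0:100, 200:700]  # rows y1:y2 first, then columns x1:x2
imgRoi = img[roi]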
I have an input image similar to this:
I am referring to:
How to fill the gaps in letters after Canny edge detection
I want to plot black pixels on this image. The proposed solution at the above URL is to first find all black pixels using
import matplotlib.pyplot as pp
import numpy as np
image = pp.imread(r'/home/cris/tmp/Zuv3p.jpg')
bin = np.all(image<100, axis=2)
My question is: how do I plot these black pixels (the data stored in bin) on the image while ignoring all other colour channels?
In the answer it is stated that np.all(image<100, axis=2) is used to select pixels where R, G and B are all lower than 100, which is basically color separation. Personally, I like to use the HSV color space for that.
Result:
Note: if you want to improve the green letters, it is best to create a separate mask for that and tweak the HSV values for green (a sketch of such a mask follows after the code below).
Code:
import numpy as np
import cv2
# load image
img = cv2.imread("img.jpg")
# Convert BGR to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# define range of black color in HSV
lower_val = np.array([0,0,0])
upper_val = np.array([179,255,127])
# Threshold the HSV image to get only black colors
mask = cv2.inRange(hsv, lower_val, upper_val)
# invert mask to get black symbols on white background
mask_inv = cv2.bitwise_not(mask)
# display image
cv2.imshow("Mask", mask_inv)
cv2.imshow("Img", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
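Regarding the note about the green letters, here is a sketch of a separate green mask merged with the black one; the hue bounds 35–85 are an assumption and will need tweaking for the actual image:
# define an assumed range of green in HSV (tune these bounds for your image)
lower_green = np.array([35, 50, 50])
upper_green = np.array([85, 255, 255])
# threshold for green and merge it with the black mask
mask_green = cv2.inRange(hsv, lower_green, upper_green)
mask_combined = cv2.bitwise_or(mask, mask_green)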