I am trying to figure out whether a particular color exists in an image. I want to write Python code that compares a given color value with the color found at certain coordinates of the image. I already tried to get a solution with segmentation of the image in color space, but I could not make it work.
I am using Python "OpenCV".
I want to make a program like this:
given_color = Blue (color values)
if given_color == color_values_detected_from_image:
    print("Blue color is present at your given area")
else:
    print("Given color not found")
Could you please advise me on where I should start?
I am expecting that if I give the coordinates of a rectangular area of the image, the colors in that area will be compared with my given color values.
This can be done by simple pixel-wise comparison and NumPy's all method.
Let's have a look at the following code:
import cv2
import numpy as np
# Read input image
img = cv2.imread('images/colors.png', cv2.IMREAD_COLOR)
cv2.imshow('img', img)
# Region of interest (x1, x2, y1, y2)
roi = (200, 700, 0, 100)
imgRoi = img[roi[2]:roi[3], roi[0]:roi[1]]
cv2.imshow('imgRoi', imgRoi)
# Color of interest [B, G, R]
coi = [0, 255, 0]
# Compare each pixel with color; logical AND over all colors (axis=2)
cmp = np.all(imgRoi == coi, axis=2)
# From here, do whatever you like with this information...
# For example, show mask where color of interest was found
out = np.zeros((imgRoi.shape[0], imgRoi.shape[1], 1), np.uint8)
out[cmp] = 255
cv2.imshow('out', out)
cv2.waitKey(0)
The input image looks like this:
The region of interest (ROI) looks like this:
As an exemplary output, here's the mask where the color of interest #00ff00 was found:
Hope that helps!
P.S. Python/NumPy masters, please feel free to suggest a more elegant way to "translate" the two points (x1, y1), (x2, y2) to the indices x1:x2, y1:y2. Right now, this notation looks quite cumbersome...
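One small option, just as a sketch, is to unpack the ROI tuple once so the slice reads explicitly as rows (y) then columns (x):
# Same ROI as above, unpacked once; the slice is rows (y1:y2) then columns (x1:x2)
x1, x2, y1, y2 = roi
imgRoi = img[y1:y2, x1:x2]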
I have a lot of pictures that I have to perform some data analysis on. Currently all of the pictures look like the one below:
Before the images can be processed I must remove the vertical black border on the left and right side of the image. So far I've been able to achieve this in Python by converting the image to grayscale and then to a NumPy array, so that darker pixels are closer to 0. Then I can select the region where the entire column is above a threshold value:
import numpy as np
from PIL import Image
file = "file_name.png"
img = Image.open(file).convert('L')  # open as grayscale
img.show()
array = np.array(img)
print(array.shape)
# keep only the columns where every pixel is brighter than the threshold
array_crop = array[:, np.all(array > 10, axis=0)]
im = Image.fromarray(array_crop, mode='L')
im.show()
print(array_crop.shape)
The cropped image then looks like this:
where the initial shape is (240, 320) and the cropped shape is (240, 250), which is exactly what I want. The problem is that with this same code, the resulting cropped shape is not always the same for every image (for example (240, 251)), because the black border is not perfectly black and has a slight gradient. Is there a way in Python to determine the slice indices that correspond to the np.all condition result, so that I can apply them to all of my images consistently?
So far the only way I have thought of to achieve this is trial and error. Then, as a test, I could subtract the two cropped arrays from each other to make sure all resulting values are 0, but I'd rather not have to do it this way.
You can work out the boundaries of the image like this:
# indices of all pixels brighter than the threshold
w = np.argwhere(array > 10)
# smallest and largest row / column containing such a pixel
amin = w[:, 0].min()
amax = w[:, 0].max()
bmin = w[:, 1].min()
bmax = w[:, 1].max()
array_crop = array[amin:amax + 1, bmin:bmax + 1]
which should always crop it to a rectangle.
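If the border sits in the same place in every picture, one follow-up sketch is to compute these indices once on a reference image and reuse the same slice for all other images (image_files below is a hypothetical list of your file paths):
# Reuse the indices found above for every file (assumes the borders are identical across images)
for file in image_files:
    arr = np.array(Image.open(file).convert('L'))
    cropped = arr[amin:amax + 1, bmin:bmax + 1]
    Image.fromarray(cropped, mode='L').save(file.replace('.png', '_cropped.png'))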
I am new to OpenCV and trying to see if I can find a way to detect vertical text for the image attached.
In this case, on row 3, I would like to get the bounding box around "Original Cost" and the amount below it ($200,000.00).
Similarly, I would like to get the bounding box around "Amount Existing Liens" and the associated amount below it. I would then use this data to send to an OCR engine to read the text. Traditional OCR engines go line by line, extract the text, and lose the context.
Here is what I have tried so far -
import cv2
import numpy as np
img = cv2.imread('Test3.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,100,100,apertureSize = 3)
cv2.imshow('edges',edges)
cv2.waitKey(0)
minLineLength = 20
maxLineGap = 10
lines = cv2.HoughLinesP(edges,1,np.pi/180,15,minLineLength=minLineLength,maxLineGap=maxLineGap)
for x in range(0, len(lines)):
    for x1, y1, x2, y2 in lines[x]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imshow('hough',img)
cv2.waitKey(0)
Here is my solution, based on work by Kanan Vyas and Adrian Rosebrock.
It's probably not as "canonical" as you'd wish.
But it seems to work (more or less...) with the image you provided.
Just a word of CAUTION: the code looks, in the directory from which it is run, for a folder named "Cropped", where the cropped images will be stored. Don't run it in a directory that already contains a folder named "Cropped", because the code deletes everything in that folder on each run. If you're unsure, run it in a separate folder.
The code:
# Import required packages
import cv2
import numpy as np
import pathlib
###################################################################################################################################
# https://www.pyimagesearch.com/2015/04/20/sorting-contours-using-python-and-opencv/
###################################################################################################################################
def sort_contours(cnts, method="left-to-right"):
    # initialize the reverse flag and sort index
    reverse = False
    i = 0
    # handle if we need to sort in reverse
    if method == "right-to-left" or method == "bottom-to-top":
        reverse = True
    # handle if we are sorting against the y-coordinate rather than
    # the x-coordinate of the bounding box
    if method == "top-to-bottom" or method == "bottom-to-top":
        i = 1
    # construct the list of bounding boxes and sort them from top to
    # bottom
    boundingBoxes = [cv2.boundingRect(c) for c in cnts]
    (cnts, boundingBoxes) = zip(*sorted(zip(cnts, boundingBoxes),
                                        key=lambda b: b[1][i], reverse=reverse))
    # return the list of sorted contours and bounding boxes
    return (cnts, boundingBoxes)
###################################################################################################################################
# https://medium.com/coinmonks/a-box-detection-algorithm-for-any-image-containing-boxes-756c15d7ed26 (with a few modifications)
###################################################################################################################################
def box_extraction(img_for_box_extraction_path, cropped_dir_path):
    img = cv2.imread(img_for_box_extraction_path, 0)  # Read the image
    (thresh, img_bin) = cv2.threshold(img, 128, 255,
                                      cv2.THRESH_BINARY | cv2.THRESH_OTSU)  # Threshold the image
    img_bin = 255 - img_bin  # Invert the image
    # cv2.imwrite("Image_bin.jpg", img_bin)
    # Defining a kernel length
    kernel_length = np.array(img).shape[1] // 200
    # A vertical kernel of (1 x kernel_length), which will detect all the vertical lines in the image.
    verticle_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, kernel_length))
    # A horizontal kernel of (kernel_length x 1), which will help to detect all the horizontal lines in the image.
    hori_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_length, 1))
    # A kernel of (3 x 3) ones.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    # Morphological operation to detect vertical lines in the image
    img_temp1 = cv2.erode(img_bin, verticle_kernel, iterations=3)
    verticle_lines_img = cv2.dilate(img_temp1, verticle_kernel, iterations=3)
    # cv2.imwrite("verticle_lines.jpg", verticle_lines_img)
    # Morphological operation to detect horizontal lines in the image
    img_temp2 = cv2.erode(img_bin, hori_kernel, iterations=3)
    horizontal_lines_img = cv2.dilate(img_temp2, hori_kernel, iterations=3)
    # cv2.imwrite("horizontal_lines.jpg", horizontal_lines_img)
    # Weighting parameters; these decide how much of each image is added to form the combined image.
    alpha = 0.5
    beta = 1.0 - alpha
    # Add the two images with the given weights to get a third image as the combination of the two.
    img_final_bin = cv2.addWeighted(verticle_lines_img, alpha, horizontal_lines_img, beta, 0.0)
    img_final_bin = cv2.erode(~img_final_bin, kernel, iterations=2)
    (thresh, img_final_bin) = cv2.threshold(img_final_bin, 128, 255,
                                            cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # For debugging: enable this line to see the vertical and horizontal lines used to find the boxes
    # cv2.imwrite("img_final_bin.jpg", img_final_bin)
    # Find contours in the image, which will detect all the boxes
    contours, hierarchy = cv2.findContours(
        img_final_bin, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # Sort all the contours from top to bottom.
    (contours, boundingBoxes) = sort_contours(contours, method="top-to-bottom")
    idx = 0
    for c in contours:
        # Returns the location and width, height for every contour
        x, y, w, h = cv2.boundingRect(c)
        # If the box height is greater than 20 and the width greater than 50, save it to the "Cropped/" folder.
        if (w > 50 and h > 20):  # and w > 3*h:
            idx += 1
            new_img = img[y:y+h, x:x+w]
            cv2.imwrite(cropped_dir_path + str(x) + '_' + str(y) + '.png', new_img)
###########################################################################################################################################################
def prepare_cropped_folder():
    p = pathlib.Path('./Cropped')
    if p.exists():  # Cropped folder is non-empty; let's clean it up
        files = [x for x in p.glob('*.*') if x.is_file()]
        for f in files:
            f.unlink()
    else:
        p.mkdir()
###########################################################################################################################################################
# MAIN
###########################################################################################################################################################
prepare_cropped_folder()
# Read image from which text needs to be extracted
img = cv2.imread("dkesg.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Performing OTSU threshold
ret, thresh1 = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY_INV)
thresh1=255-thresh1
bin_y = np.zeros(thresh1.shape[0])
for x in range(0, len(bin_y)):
    bin_y[x] = sum(thresh1[x, :])
bin_y = bin_y / max(bin_y)
ry = np.where(bin_y > 0.995)[0]
for i in range(0, len(ry)):
    cv2.line(img, (0, ry[i]), (thresh1.shape[1], ry[i]), (0, 0, 0), 1)
# We need to draw a box around the picture with a white border in order for box_extraction to work
cv2.line(img,(0,0),(0,img.shape[0]-1),(255,255,255),2)
cv2.line(img,(img.shape[1]-1,0),(img.shape[1]-1,img.shape[0]-1),(255,255,255),2)
cv2.line(img,(0,0),(img.shape[1]-1,0),(255,255,255),2)
cv2.line(img,(0,img.shape[0]-1),(img.shape[1]-1,img.shape[0]-1),(255,255,255),2)
cv2.line(img,(0,0),(0,img.shape[0]-1),(0,0,0),1)
cv2.line(img,(img.shape[1]-3,0),(img.shape[1]-3,img.shape[0]-1),(0,0,0),1)
cv2.line(img,(0,0),(img.shape[1]-1,0),(0,0,0),1)
cv2.line(img,(0,img.shape[0]-2),(img.shape[1]-1,img.shape[0]-2),(0,0,0),1)
cv2.imwrite('out.png',img)
box_extraction("out.png", "./Cropped/")
Now... it puts the cropped regions in the Cropped folder. They are named x_y.png, with (x, y) the position in the original image.
Here are two examples of the outputs
Then, in a terminal, I used pytesseract on these two images.
The results are the following:
1)
Original Cost
$200,000.00
2)
Amount Existing Liens
$494,215.00
As you can see, pytesseract got the amount wrong in the second case... So, be careful.
Best regards,
Stéphane
I assume the bounding box is fixed (a rectangle that is able to fit "Original Amount" and the amount below it). You can use text detection to find "Original Amount" and "Amount Existing Liens" using OCR, then crop out the image based on the detected locations for further OCR on the amounts. You can refer to this link for text detection.
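A rough sketch of that idea using pytesseract (this assumes pytesseract is installed; the header word to match and the crop height/width below are placeholders you would tune for your form):
import cv2
import pytesseract
from pytesseract import Output

img = cv2.imread('Test3.png')
# Get word-level boxes from Tesseract
data = pytesseract.image_to_data(img, output_type=Output.DICT)
for i, word in enumerate(data['text']):
    if word.strip() == 'Original':  # look for the header text
        x, y, w, h = data['left'][i], data['top'][i], data['width'][i], data['height'][i]
        # Crop a region just below the header where the amount is expected (sizes are guesses)
        amount_roi = img[y + h:y + h + 60, x:x + 300]
        print(pytesseract.image_to_string(amount_roi))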
Try to divide the image into different cells using the lines in the image.
For example, first divide the input into rows by detecting the horizontal lines. This can be done by using cv2.HoughLinesP and checking for each line whether the difference between the y-coordinates of its start and end points is smaller than a certain threshold, abs(y2 - y1) < 10. If you have a horizontal line, it's a separator for a new row. You can use the y-coordinates of this line to split the input horizontally.
Next, for the row you're interested in, divide the region into columns using the same technique, but now make sure the difference between the x-coordinates of the start and end points is smaller than a certain threshold, since you're now looking for vertical lines.
You can now crop the image to different cells using the y-coordinates of the horizontal lines and the x-coordinates of the vertical lines. Pass these cropped regions one by one to the OCR engine and you'll have for each cell the corresponding text.
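A minimal sketch of the row-splitting part of this idea (assuming an input file of your own, here called table.png, and placeholder Canny/Hough thresholds you would tune):
import cv2
import numpy as np

img = cv2.imread('table.png')  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80, minLineLength=100, maxLineGap=10)

# Collect y-coordinates of (nearly) horizontal lines; these act as row separators
row_ys = []
for line in lines:
    x1, y1, x2, y2 = line[0]
    if abs(y2 - y1) < 10:
        row_ys.append(min(y1, y2))
row_ys = sorted(set(row_ys))

# Crop the image into rows between consecutive separators (skipping tiny gaps)
rows = [img[a:b] for a, b in zip(row_ys, row_ys[1:]) if b - a > 5]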
As the question states, I want to apply a two-way adaptive thresholding technique to my image. That is to say, for each pixel I want to set it to 255 if it is less than or greater than the mean of its neighborhood minus a constant c.
Take this image, for example, as the neighborhood of pixels. The desired pixel areas to keep are the darker areas on the third and sixth squares' upper halves (counting from left to right and top to bottom), as well as the eighth and twelfth squares' upper halves.
Obviously, this all depends on the set constant value, but ideally areas that are significantly different than the mean pixel value of the neighborhood will be kept. I can worry about the tuning myself though.
Your question and your comment are contradictory: keep everything (significantly) brighter/darker than the mean (+/- constant) of the neighbourhood (question) vs. keep everything within mean +/- constant (comment). I assume the first one to be correct, and I'll try to give an answer.
Using cv2.adaptiveThreshold is certainly useful; parameterization might be tricky, especially given the example image. First, let's have a look at the output:
We see that the intensity value range in the given image is small. The upper halves of the third and sixth squares don't really differ from their neighbourhood, so it's quite unlikely to find a proper difference there. The upper halves of squares #8 and #12 (or also the lower half of square #10) are more likely to be found.
The top row now shows some more "global" parameters (blocksize = 151, c = 25), the bottom row more "local" parameters (blocksize = 51, c = 5). The middle column is everything darker than the neighbourhood (with respect to the parameters), the right column is everything brighter than the neighbourhood. We see that in the more "global" case, we get the proper upper halves, but there are mostly no "significant" darker areas. Looking at the more "local" case, we see some darker areas, but we won't find the complete upper/lower halves in question. That's just because of how the different triangles are arranged.
On the technical side: You need two calls of cv2.adaptiveThreshold, one using the cv2.THRESH_BINARY_INV mode to find everything darker and one using the cv2.THRESH_BINARY mode to find everything brighter. Also, you have to provide c or -c for the two different cases.
Here's the full code:
import cv2
from matplotlib import pyplot as plt
from skimage import io # Only needed for web grabbing images
plt.figure(1, figsize=(15, 10))
img = cv2.cvtColor(io.imread('https://i.stack.imgur.com/dA1Vt.png'), cv2.COLOR_RGB2GRAY)
plt.subplot(2, 3, 1), plt.imshow(img, cmap='gray'), plt.colorbar()
# More "global" parameters
bs = 151
c = 25
img_le = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, bs, c)
img_gt = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, bs, -c)
plt.subplot(2, 3, 2), plt.imshow(img_le, cmap='gray')
plt.subplot(2, 3, 3), plt.imshow(img_gt, cmap='gray')
# More "local" parameters
bs = 51
c = 5
img_le = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, bs, c)
img_gt = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, bs, -c)
plt.subplot(2, 3, 5), plt.imshow(img_le, cmap='gray')
plt.subplot(2, 3, 6), plt.imshow(img_gt, cmap='gray')
plt.tight_layout()
plt.show()
Hope that helps – somehow!
-----------------------
System information
-----------------------
Python: 3.8.1
Matplotlib: 3.2.0rc1
OpenCV: 4.1.2
-----------------------
Another way to look at this is that where abs(mean - image) <= c, you want that to become white, otherwise you want that to become black. In Python/OpenCV/Scipy/Numpy, I first compute the local uniform mean (average) using a uniform 51x51 pixel block averaging filter (boxcar average). You could use some weighted averaging method such as the Gaussian average, if you want. Then I compute the abs(mean - image). Then I use Numpy thresholding. Note: You could also just use one simple threshold (cv2.threshold) on the abs(mean-image) result in place of two numpy thresholds.
Input:
import cv2
import numpy as np
from scipy import ndimage
# read image as grayscale
# convert to floats in the range 0 to 1 so that the difference keeps negative values
img = cv2.imread('squares.png',0).astype(np.float32)/255.0
# get uniform (51x51 block) average
ave = ndimage.uniform_filter(img, size=51)
# get abs difference between ave and img and convert back to integers in the range 0 to 255
diff = 255*np.abs(ave - img)
diff = diff.astype(np.uint8)
# threshold
# Note: could also just use one simple cv2.Threshold on diff
c = 5
diff_thresh = diff.copy()
diff_thresh[ diff_thresh <= c ] = 255
diff_thresh[ diff_thresh != 255 ] = 0
# view result
cv2.imshow("img", img)
cv2.imshow("ave", ave)
cv2.imshow("diff", diff)
cv2.imshow("threshold", diff_thresh)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save result
cv2.imwrite("squares_2way_thresh.jpg", diff_thresh)
Result:
I currently have a document that needs to be smart-scanned.
For that, I need to find proper contours of the document in any background so that I can do a warped perspective projection and detection with that image.
The main issue faced while doing this is that the document edge detection picks up any kind of background.
So far I have tried to use the function HoughLinesP and tried to find contours on the blurred grayscale image passed through Canny edge detection.
MORPH = 9
CANNY = 84
HOUGH = 25
IM_HEIGHT, IM_WIDTH, _ = rescaled_image.shape
# convert the image to grayscale and blur it slightly
gray = cv2.cvtColor(rescaled_image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7,7), 0)
#dilate helps to remove potential holes between edge segments
kernel = cv2.getStructuringElement(cv2.MORPH_RECT,(MORPH,MORPH))
dilated = cv2.dilate(gray, kernel)
# find edges and mark them in the output map using the Canny algorithm
edged = cv2.Canny(dilated, 0, CANNY)
test_corners = self.get_corners(edged)
approx_contours = []
(_, cnts, hierarchy) = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:5]
# loop over the contours
for c in cnts:
    # approximate the contour
    approx = cv2.approxPolyDP(c, 80, True)
    if self.is_valid_contour(approx, IM_WIDTH, IM_HEIGHT):
        approx_contours.append(approx)
        break
How can I find a proper bounding box around the document with OpenCV?
Any help will be much appreciated.
(The document is captured by the camera at any angle and against any colored background.)
The following code might help you to detect/segment the page in the image...
import cv2
import matplotlib.pyplot as plt
import numpy as np
image = cv2.imread('test_p.jpg')
print(image.shape)
ori = image.copy()
image = cv2.resize(image, (image.shape[1]//10,image.shape[0]//10))
Resize the image to make the operations faster, so that we can work in real time.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (11,11), 0)
edged = cv2.Canny(gray, 75, 200)
print("STEP 1: Edge Detection")
plt.imshow(edged)
plt.show()
cnts = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnts = sorted(cnts[1], key = cv2.contourArea, reverse = True)[:5]
Here we will consider only the first 5 contours from the list, sorted by area.
Here the size of the Gaussian blur is a bit sensitive, so choose it accordingly based on the image size.
After the above operations, the image may look like this:
for c in cnts:
    ### Approximating the contour
    # Calculates a contour perimeter or a curve length
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.01 * peri, True)
    # if our approximated contour has four points, then we
    # can assume that we have found our screen
    screenCnt = approx
    if len(approx) == 4:
        screenCnt = approx
        break
# show the contour (outline)
print("STEP 2: Finding Boundary")
cv2.drawContours(image, [screenCnt], -1, (0, 255, 0), 2)
image_e = cv2.resize(image,(image.shape[1],image.shape[0]))
cv2.imwrite('image_edge.jpg',image_e)
plt.imshow(image_e)
plt.show()
The final image may look like this...
The rest of the processing can be handled after getting the final image...
Code Reference :- Git Repository
I guess this answer would be helpful...
There is a similar problem which is called orthographic projection.
Orthographic approaches
Rather than doing a Gaussian blur + morphological operation to get the edge of the document, try to do an orthographic projection first and then find contours via your method.
For finding a proper bounding box, try some preset values or a reference letter, after which an orthographic projection will allow you to compute the height and hence the dimensions of the bounding box.
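A minimal sketch of such a projection with OpenCV (this assumes you already have four approximate corner points of the document; the file name, corner coordinates, and target size below are placeholders):
import cv2
import numpy as np

img = cv2.imread('document.jpg')  # placeholder file name

# Four corner points of the document (top-left, top-right, bottom-right, bottom-left);
# these values are placeholders - in practice they come from your contour detection
src = np.float32([[120, 80], [900, 95], [930, 1250], [100, 1230]])

# Target rectangle at a chosen resolution (roughly A4-like aspect ratio here)
w, h = 800, 1130
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (w, h))
cv2.imwrite('document_warped.jpg', warped)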
I have obtained an image after applying k-means with clusters = 3. Now I want to obtain 3 separate images on the basis of colours obtained after k-means.
For example, consider the attached image. Now I need
one image such that it contains only the blue square,
one having the letter v, and one with just the background.
Is there any possible way to do that using OpenCV and Python?
The most general and simplest way to do it is to use the three unique gray values, one for each region. (Although I could find more than three gray levels in the above image, maybe due to variation introduced by imgur's compression. At the end of the day, k-means should give exactly three BGR values.)
import cv2
import numpy as np

# 'image' is the 3-color k-means result; convert it to grayscale so each
# cluster maps to one unique gray value
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
unique = np.unique(gray)
c1, c2, c3 = unique[0], unique[1], unique[2]
# build one binary mask per cluster
mask1 = np.zeros_like(gray)
mask1[gray == c1] = 255
mask2 = np.zeros_like(gray)
mask2[gray == c2] = 255
mask3 = np.zeros_like(gray)
mask3[gray == c3] = 255
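To get the three separate images from these masks (one possible follow-up, assuming image is the k-means result from above), each mask can be applied to the original with cv2.bitwise_and:
# Keep only the pixels belonging to each cluster; everything else becomes black
region1 = cv2.bitwise_and(image, image, mask=mask1)
region2 = cv2.bitwise_and(image, image, mask=mask2)
region3 = cv2.bitwise_and(image, image, mask=mask3)
cv2.imwrite('region1.png', region1)
cv2.imwrite('region2.png', region2)
cv2.imwrite('region3.png', region3)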
You can solve the problem by calculating the histogram of the image.
The below plot shows the peaks of the image.
From this, you can threshold the colors. The code and result:
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread("inputs/hist.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
hist = cv2.calcHist([gray],[0],None,[256],[0,256])
colors = np.where(hist>5000)
img_number = 0
for color in colors[0]:
    print(color)
    split_image = img.copy()
    split_image[np.where(gray != color)] = 0
    cv2.imwrite(str(img_number) + ".jpg", split_image)
    img_number += 1
plt.hist(gray.ravel(),256,[0,256])
plt.savefig('plt')
plt.show()
Results: