OpenCV - Ellipse contour not fitting correctly - python-3.x

I want to draw contours around the concentric ellipses shown in the image appended below, but I am not getting the expected result.
I have tried the following steps:
Read the image.
Convert the image to grayscale.
Apply a Gaussian blur.
Get the Canny edges.
Draw the ellipse contours.
Here is the source code:
import cv2

target = cv2.imread('./source image.png')
targetgs = cv2.cvtColor(target, cv2.COLOR_BGRA2GRAY)
targetGaussianBlurGreyScale = cv2.GaussianBlur(targetgs, (3, 3), 0)
canny = cv2.Canny(targetGaussianBlurGreyScale, 30, 90)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
close = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, kernel)
_, contours, _ = cv2.findContours(close, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
if len(contours) != 0:
    for c in contours:
        if len(c) >= 50:
            hull = cv2.convexHull(c)
            cv2.ellipse(target, cv2.fitEllipse(hull), (0, 255, 0), 2)
cv2.imshow('mask', target)
cv2.waitKey(0)
cv2.destroyAllWindows()
The image below shows the Expected & Actual result:
Source Image:

The algorithm can be simple:
Convert RGB to HSV, split the channels, and work with the V channel.
Threshold to remove all the colored lines.
HoughLinesP to remove the non-colored (straight) lines.
Dilate + erode to close the holes in the ellipses.
findContours + fitEllipse.
Result:
With the new image (with the added black curve) my approach does not work. It seems that you need to use Hough ellipse detection instead of "findContours + fitEllipse".
OpenCV doesn't have an implementation, but you can find one here or here.
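If you prefer to stay in Python, scikit-image ships a Hough ellipse transform. Here is a minimal sketch; the parameter values are illustrative only and need tuning per image, and note that hough_ellipse is quite slow on large images:
from skimage import color, feature, io, transform

image = color.rgb2gray(io.imread("sqOOE.jpg"))
edges = feature.canny(image, sigma=2.0)
# accuracy, threshold, and min_size are illustrative values only:
result = transform.hough_ellipse(edges, accuracy=20, threshold=250, min_size=100)
result.sort(order='accumulator')  # best candidates last
# Each record is (accumulator, yc, xc, a, b, orientation):
yc, xc, a, b = (int(round(v)) for v in list(result[-1])[1:5])
print("center:", (xc, yc), "axes:", (a, b))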
If you aren't afraid of C++ code (the OpenCV C++ API is more expressive), then:
cv::Mat rgbImg = cv::imread("sqOOE.jpg", cv::IMREAD_COLOR);
cv::Mat hsvImg;
cv::cvtColor(rgbImg, hsvImg, cv::COLOR_BGR2HSV);
std::vector<cv::Mat> chans;
cv::split(hsvImg, chans);
cv::threshold(255 - chans[2], chans[2], 200, 255, cv::THRESH_BINARY);
std::vector<cv::Vec4i> linesP;
cv::HoughLinesP(chans[2], linesP, 1, CV_PI/180, 50, chans[2].rows / 4, 10);
for (auto l : linesP)
{
    cv::line(chans[2], cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar::all(0), 3, cv::LINE_AA);
}
cv::dilate(chans[2], chans[2], cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 4);
cv::erode(chans[2], chans[2], cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 3);
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(chans[2], contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); i++)
{
    if (contours[i].size() > 4)
    {
        cv::ellipse(rgbImg, cv::fitEllipse(contours[i]), cv::Scalar(255, 0, 255), 2);
    }
}
cv::imshow("rgbImg", rgbImg);
cv::waitKey(0);
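Since the question is tagged python-3.x, here is a rough Python sketch of the same pipeline, assuming OpenCV 4.x (where findContours returns two values) and keeping the C++ version's hand-tuned thresholds:
import cv2
import numpy as np

# Load and convert to HSV; work with the V (value) channel.
rgbImg = cv2.imread("sqOOE.jpg", cv2.IMREAD_COLOR)
hsvImg = cv2.cvtColor(rgbImg, cv2.COLOR_BGR2HSV)
v = cv2.split(hsvImg)[2]

# Invert and threshold so only the dark (non-colored) strokes remain.
_, v = cv2.threshold(255 - v, 200, 255, cv2.THRESH_BINARY)

# Detect straight segments and paint them out.
linesP = cv2.HoughLinesP(v, 1, np.pi / 180, 50, minLineLength=v.shape[0] // 4, maxLineGap=10)
if linesP is not None:
    for x1, y1, x2, y2 in linesP[:, 0]:
        cv2.line(v, (x1, y1), (x2, y2), 0, 3, cv2.LINE_AA)

# Dilate + erode to close the holes left by the removed lines.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
v = cv2.dilate(v, kernel, iterations=4)
v = cv2.erode(v, kernel, iterations=3)

# Fit an ellipse to every remaining contour with enough points.
contours, _ = cv2.findContours(v, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if len(c) > 4:  # fitEllipse needs at least 5 points
        cv2.ellipse(rgbImg, cv2.fitEllipse(c), (255, 0, 255), 2)

cv2.imshow("rgbImg", rgbImg)
cv2.waitKey(0)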

Related

Masking colors using opencv

I tried to mask an image by its color using OpenCV.
import cv2
import numpy as np
import matplotlib.pyplot as plt
After importing the libraries, I load the image:
img = cv2.imread('gmaps.jpg')
image = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
plt.imshow(image);
Convert the colors to HSV:
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
plt.imshow(hsv);
Masking process
low_orange = np.array([44, 6, 100])
high_orange = np.array([44, 24, 99])
masking = cv2.inRange(hsv,low_orange, high_orange)
plt.imshow(masking);
The result isn't what I expected.
Image :
Result :
EDIT: I want to mask the building only. Instead, I got a mask of the whole frame.
Using my answer from here, I managed to extract the right values for you.
Code:
import cv2
import numpy as np

frame = cv2.imread("Xv6gx.png")
blurred_frame = cv2.GaussianBlur(frame, (5, 5), 0)
hsv = cv2.cvtColor(blurred_frame, cv2.COLOR_BGR2HSV)

lower = np.array([4, 0, 7])
upper = np.array([87, 240, 255])
mask = cv2.inRange(hsv, lower, upper)

contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
for contour in contours:
    area = cv2.contourArea(contour)
    if area > 5000:
        # -- Draw Option 1 --
        cv2.drawContours(frame, [contour], -1, (0, 255, 0), 3)
        # -- Draw Option 2 --
        # rect = cv2.boundingRect(contour)
        # x, y, w, h = rect
        # cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imshow("Mask", mask)
cv2.imshow("Frame", frame)
cv2.waitKey(0)
Final Results:
I wouldn't expect the low Value (100) to exceed the high Value (99).
Also, OpenCV uses a range of 0..180 for Hue rather than 0..360, so you likely need to divide your 44 by 2.
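As a minimal sketch of what corrected bounds might look like, assuming the 44/6/100 values came from a color picker that reports H in 0..360 degrees and S/V as percentages (an assumption, not confirmed by the question):
import cv2
import numpy as np

# OpenCV HSV ranges: H is 0..180 (degrees / 2), S and V are 0..255.
# H = 44 deg -> 22; S = 6%..24% -> 15..61; V = 99%..100% -> 252..255.
# A small tolerance around the target hue usually helps:
low_orange = np.array([22 - 5, int(0.06 * 255), int(0.99 * 255)])
high_orange = np.array([22 + 5, int(0.24 * 255), 255])
masking = cv2.inRange(hsv, low_orange, high_orange)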

How can i get the inner contour points without redundancy in OpenCV - Python

I'm new to OpenCV, and the thing is that I need to get all the contour points. This is easy by setting the cv2.RETR_TREE mode in the findContours method. The problem is that, this way, it returns redundant coordinates. So, for example, in this polygon, I don't want to get the contour points like this:
But like this:
So, according to the first image, the green color marks the contours detected with RETR_TREE mode, and points 1-2, 3-5, 4-6, ... are redundant because they are so close to each other. I need to merge those redundant points into one and append it to the customContours array.
For the moment, I only have the code corresponding to the first picture, which sets up the distance between the points and the point coordinates:
import cv2
import numpy as np

def getContours(img, minArea=20000, cThr=[100, 100]):
    font = cv2.FONT_HERSHEY_COMPLEX
    imgColor = img
    imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    imgBlur = cv2.GaussianBlur(imgGray, (5, 5), 1)
    imgCanny = cv2.Canny(imgBlur, cThr[0], cThr[1])
    kernel = np.ones((5, 5))
    imgDial = cv2.dilate(imgCanny, kernel, iterations=3)
    imgThre = cv2.erode(imgDial, kernel, iterations=2)
    cv2.imshow('threshold', imgThre)
    contours, hierarchy = cv2.findContours(imgThre, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    customContours = []
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > minArea:
            peri = cv2.arcLength(cnt, True)
            approx = cv2.approxPolyDP(cnt, 0.009 * peri, True)
            bbox = cv2.boundingRect(approx)
            customContours.append([len(approx), area, approx, bbox, cnt])
            print('points: ', len(approx))
            n = approx.ravel()
            i = 0
            for j in n:
                if i % 2 == 0:
                    x = n[i]
                    y = n[i + 1]
                    string = str(x) + " " + str(y)
                    cv2.putText(imgColor, str(i // 2 + 1) + ': ' + string, (x, y), font, 2, (0, 0, 0), 2)
                i = i + 1
    customContours = sorted(customContours, key=lambda x: x[1], reverse=True)
    for cnt in customContours:
        cv2.drawContours(imgColor, [cnt[2]], 0, (0, 0, 255), 5)
    return imgColor, customContours
Could you help me get the real points, as in the second picture?
(EDIT 01/07/21)
I want a generic solution, because the image could be more complex, such as the following picture:
NOTE: notice that the middle arrow (points 17 and 18) doesn't enclose a closed area, so it isn't a polygon to study, and that region's points are not of interest. Also, notice that the order of the points isn't important, but if the input is the whole image, it should recognize that there are 4 polygons, so for each polygon the point numbering starts at 0, then 1, etc.
Here's my approach. It is mainly morphology-based. It involves convolving the image with a special kernel. This convolution identifies the end-points of the triangle as well as the intersection points where the middle line is present. This will result in a points mask containing the pixels that match the points you are looking for. After that, we can apply a little bit of morphology to join possibly duplicated points. What remains is to get a list of the coordinates of these points for further processing.
These are the steps:
Get a binary image of the input via Otsu's thresholding
Get the skeleton of the binary image
Define the special kernel and convolve the skeleton image
Apply a morphological dilate to join possible duplicated points
Get the centroids of the points and store them in a list
Here's the code:
# Imports:
import numpy as np
import cv2
# image path
path = "D://opencvImages//"
fileName = "triangle.png"
# Reading an image in default mode:
inputImage = cv2.imread(path + fileName)
# Prepare a deep copy for results:
inputImageCopy = inputImage.copy()
# Convert BGR to Grayscale
grayImage = cv2.cvtColor(inputImage, cv2.COLOR_BGR2GRAY)
# Threshold via Otsu:
_, binaryImage = cv2.threshold(grayImage, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
The first bit computes the binary image. Very straightforward. I'm using this image as base, which is just a cleaned-up version of what you posted without the annotations. This is the resulting binary image:
Now, to perform the convolution we must first get the image "skeleton". The skeleton is a version of the binary image where lines have been normalized to have a width of 1 pixel. This is useful because we can then convolve the image with a 3 x 3 kernel and look for specific pixel patterns. Let's compute the skeleton using OpenCV's extended image processing module:
# Get image skeleton:
skeleton = cv2.ximgproc.thinning(binaryImage, None, 1)
This is the image obtained:
We can now apply the convolution. The approach is based on Mark Setchell's info in this post. The post mainly shows the method for finding end-points of a shape, but I extended it to also identify line intersections, such as the middle portion of the triangle. The main idea is that the convolution yields a very specific value where patterns of black and white pixels are found in the input image. Refer to the post for the theory behind this idea, but here we are looking for two values: 110 and 40. The first one occurs when an end-point has been found. The second one occurs when a line intersection is found. Let's set up the convolution:
# Threshold the image so that white pixels get a value of 10 and
# black pixels a value of 0:
_, binaryImage = cv2.threshold(skeleton, 128, 10, cv2.THRESH_BINARY)
# Set the convolution kernel:
h = np.array([[1, 1, 1],
[1, 10, 1],
[1, 1, 1]])
# Convolve the image with the kernel:
imgFiltered = cv2.filter2D(binaryImage, -1, h)
# Create list of thresholds:
thresh = [110, 40]
The first part is done. We are going to detect end-points and intersections in two separate steps. Each step will produce a partial result, and we can OR both results to get the final mask:
# Prepare the final mask of points:
(height, width) = binaryImage.shape
pointsMask = np.zeros((height, width, 1), np.uint8)
# Perform convolution and create points mask:
for t in range(len(thresh)):
    # Get current threshold:
    currentThresh = thresh[t]
    # Locate the threshold in the filtered image:
    tempMat = np.where(imgFiltered == currentThresh, 255, 0)
    # Convert and shape the image to a uint8 height x width x channels
    # numpy array:
    tempMat = tempMat.astype(np.uint8)
    tempMat = tempMat.reshape(height, width, 1)
    # Accumulate mask:
    pointsMask = cv2.bitwise_or(pointsMask, tempMat)
This is the final mask of points:
Note that the white pixels are the locations that matched our target patterns. Those are the points we are looking for. As the shape is not a perfect triangle, some points could be duplicated. We can "merge" neighboring blobs by applying a morphological dilation:
# Set kernel (structuring element) size:
kernelSize = 7
# Set operation iterations:
opIterations = 3
# Get the structuring element:
morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform Dilate:
morphoImage = cv2.morphologyEx(pointsMask, cv2.MORPH_DILATE, morphKernel, None, None, opIterations, cv2.BORDER_REFLECT101)
This is the result:
Very nice, we have now big clusters of pixels (or blobs). To get their coordinates, one possible approach would be to get the bounding rectangles of these contours and compute their centroids:
# Look for the outer contours (no children):
contours, _ = cv2.findContours(morphoImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Store the points here:
pointsList = []
# Loop through the contours:
for i, c in enumerate(contours):
    # Get the contour's bounding rectangle:
    boundRect = cv2.boundingRect(c)
    # Get the centroid of the rectangle:
    cx = int(boundRect[0] + 0.5 * boundRect[2])
    cy = int(boundRect[1] + 0.5 * boundRect[3])
    # Store centroid into list:
    pointsList.append((cx, cy))
    # Set centroid circle and text:
    color = (0, 0, 255)
    cv2.circle(inputImageCopy, (cx, cy), 3, color, -1)
    font = cv2.FONT_HERSHEY_COMPLEX
    string = str(cx) + ", " + str(cy)
    cv2.putText(inputImageCopy, str(i) + ':' + string, (cx, cy), font, 0.5, (255, 0, 0), 1)
# Show image:
cv2.imshow("Circles", inputImageCopy)
cv2.waitKey(0)
These are the points located in the original input:
Note also that I've stored their coordinates in the pointsList list:
# Print the list of points:
print(pointsList)
This prints the centroids as the tuple (centroidX, centroidY):
[(717, 971), (22, 960), (183, 587), (568, 586), (388, 98)]

Crop the rectangular paper from the image

from the discussion : Crop exactly document paper from image
I'm trying to get the white paper from the image, and I'm using the following code, which does not crop an exact rectangle.
def crop_image(image):
    image = cv2.imread(image)
    # convert to grayscale image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # threshold
    thresh = cv2.threshold(gray, 190, 255, cv2.THRESH_BINARY)[1]
    # apply morphology
    kernel = np.ones((7, 7), np.uint8)
    morph = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    kernel = np.ones((9, 9), np.uint8)
    morph = cv2.morphologyEx(morph, cv2.MORPH_ERODE, kernel)
    # get largest contour
    contours = cv2.findContours(morph, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours = contours[0] if len(contours) == 2 else contours[1]
    area_thresh = 0
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > area_thresh:
            area_thresh = area
            big_contour = cnt
    # get bounding box
    x, y, w, h = cv2.boundingRect(big_contour)
    # draw filled contour on black background
    mask = np.zeros_like(gray)
    mask = cv2.merge([mask, mask, mask])
    cv2.drawContours(mask, [big_contour], -1, (255, 255, 255), cv2.FILLED)
    # apply mask to input
    result = image.copy()
    result = cv2.bitwise_and(result, mask)
    # crop result
    img_result = result[y:y+h, x:x+w]
    filename = generate_filename()
    cv2.imwrite(filename, img_result)
    logger.info('Successfully saved cropped file : %s' % filename)
    return img_result, filename
I'm able to get the desired result but not a rectangular image.
Here is the image I'm attaching, and here is what I'm getting after cropping:
I want a rectangular image of the paper.
Please help me with this.
Thanks in advance
The first problem I can see is that the threshold value is not low enough, so the bottom part of the paper is not correctly captured (it's too dark to be caught by the threshold).
The second problem, as far as I can understand, is fitting the square to the image. What you need to do is warp the perspective.
To do that, you can find more information in this amazing post from PyImageSearch.
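As a minimal sketch of that perspective warp, assuming the paper outline can be approximated by four corners (the order_points helper, the 0.02 epsilon, and reusing big_contour from the question's code are illustrative assumptions, not part of the original code):
import cv2
import numpy as np

def order_points(pts):
    # Order corners as top-left, top-right, bottom-right, bottom-left.
    rect = np.zeros((4, 2), dtype=np.float32)
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1)
    rect[0] = pts[np.argmin(s)]  # top-left: smallest x + y
    rect[2] = pts[np.argmax(s)]  # bottom-right: largest x + y
    rect[1] = pts[np.argmin(d)]  # top-right: smallest y - x
    rect[3] = pts[np.argmax(d)]  # bottom-left: largest y - x
    return rect

def warp_paper(image, contour):
    # Approximate the contour with a quadrilateral.
    peri = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.02 * peri, True)
    if len(approx) != 4:
        raise ValueError("paper outline is not a quadrilateral")
    rect = order_points(approx.reshape(4, 2).astype(np.float32))
    tl, tr, br, bl = rect
    # Output size taken from the longest opposing edges.
    w = int(max(np.linalg.norm(br - bl), np.linalg.norm(tr - tl)))
    h = int(max(np.linalg.norm(tr - br), np.linalg.norm(tl - bl)))
    dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype=np.float32)
    M = cv2.getPerspectiveTransform(rect, dst)
    return cv2.warpPerspective(image, M, (w, h))

# e.g. warped = warp_paper(image, big_contour)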

How to crop the green colored rectangle from an image

I need to crop an image for further processing on it using OpenCV and Python. I need to crop the region inside the green rectangle in the image. The rectangle is drawn using a "haar_cascade_fullbody_Detector".
The code is as follows :
import numpy as np
import cv2
bodydetection = cv2.CascadeClassifier('haarcascade_fullbody.xml')
img = cv2.imread('original.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
body = bodydetection.detectMultiScale(gray, 1.009, 5)
for x, y, w, h in body:
    # so we slightly shrink the rectangles to get a nicer output.
    pad_w, pad_h = int(0.15 * w), int(0.02 * h)
    cv2.rectangle(img, (x + pad_w + 10, y + pad_h + 10), (x + w - pad_w, y + h - pad_h), (0, 255, 0), 2)
    cv2.imshow('img', img)
    crop_img = img[x:x+w, y:y+h]
    cv2.imshow('crop', crop_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
The input image is :
The output of haar cascade is :
after the crop the image is :
Please suggest any solution. Thanks!
OK! I got the answer. The crop_img line should be modified to:
crop_img = img[y:y+h, x:x+w]
This produces the required output. This is because in OpenCV the image coordinate order is y-coordinates (rows) first and then x-coordinates (columns).
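A quick way to see that row-first convention, using a hypothetical blank image purely for illustration:
import numpy as np

img = np.zeros((480, 640, 3), np.uint8)  # shape is (height, width, channels)
x, y, w, h = 300, 100, 120, 80
roi = img[y:y+h, x:x+w]  # rows (y) first, then columns (x)
print(roi.shape)  # (80, 120, 3): h rows by w columns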

Adjusting set_line_width() to correct ratio?

I'm using Cairo to draw figures. I found that Cairo uses "absolute coordinates" (user space) when drawing. It is a flexible and comfortable way to work, except for specifying the line_width. Because the aspect ratio of the image below is not 1:1, when the "absolute coordinates" are converted to "real coordinates" (device space), the widths of the two lines are not the same.
WIDTH = 960
HEIGHT = 640
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT)
ctx = cairo.Context(surface)
ctx.scale(WIDTH, HEIGHT)
ctx.rectangle(0, 0, 1, 1)
ctx.set_source_rgb(255, 255, 255)
ctx.fill()
ctx.set_source_rgb(0, 0, 0)
ctx.move_to(0.5, 0)
ctx.line_to(0.5, 1)
ctx.move_to(0, 0.5)
ctx.line_to(1, 0.5)
ctx.set_line_width(0.01)
ctx.stroke()
What is the correct way to make the line_width render the same in both directions in the output image?
Undo your call to ctx.scale() before calling stroke(), for example via:
ctx.save()
ctx.identity_matrix()
ctx.set_line_width(2)
ctx.stroke()
ctx.restore()
(The save()/restore() pair brings all your transformations back afterwards. Cairo evaluates line_width in the user space in effect at the time of the stroke, so with the identity matrix the width is 2 device pixels in both directions.)
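Applied to the question's code, a minimal sketch might look like this (the 2-pixel width is an arbitrary choice):
ctx.set_source_rgb(0, 0, 0)
ctx.move_to(0.5, 0)
ctx.line_to(0.5, 1)
ctx.move_to(0, 0.5)
ctx.line_to(1, 0.5)
# Stroke with the identity matrix so the width is in device pixels:
ctx.save()
ctx.identity_matrix()
ctx.set_line_width(2)
ctx.stroke()
ctx.restore()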
