I'm using Cairo to draw figures. I found that Cairo uses "absolute coordinates" when drawing. That is a flexible and comfortable way to work, except for specifying the line width: because the aspect ratio of the image below is not 1:1, when the absolute coordinates are converted to real coordinates, the widths of the horizontal and vertical lines come out different.
import cairo

WIDTH = 960
HEIGHT = 640
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT)
ctx = cairo.Context(surface)
ctx.scale(WIDTH, HEIGHT)  # work in normalized 0..1 coordinates
# white background
ctx.rectangle(0, 0, 1, 1)
ctx.set_source_rgb(1, 1, 1)
ctx.fill()
# one vertical and one horizontal line through the center
ctx.set_source_rgb(0, 0, 0)
ctx.move_to(0.5, 0)
ctx.line_to(0.5, 1)
ctx.move_to(0, 0.5)
ctx.line_to(1, 0.5)
ctx.set_line_width(0.01)
ctx.stroke()
What is the correct way to make the line width come out the same in the output image regardless of direction?
Undo your call to ctx.scale() before calling stroke(), for example via:
ctx.save()
ctx.identity_matrix()
ctx.set_line_width(2)
ctx.stroke()
ctx.restore()
With the identity matrix in effect at stroke time, the line width is measured in device pixels, so it is the same in both directions. (The save()/restore() pair restores all your transformations afterwards.)
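Putting that together with the question's code, a minimal sketch (the output filename is illustrative):
import cairo

WIDTH, HEIGHT = 960, 640
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT)
ctx = cairo.Context(surface)
ctx.scale(WIDTH, HEIGHT)

# white background
ctx.rectangle(0, 0, 1, 1)
ctx.set_source_rgb(1, 1, 1)
ctx.fill()

# build the path in normalized coordinates
ctx.set_source_rgb(0, 0, 0)
ctx.move_to(0.5, 0)
ctx.line_to(0.5, 1)
ctx.move_to(0, 0.5)
ctx.line_to(1, 0.5)

# stroke with the identity matrix so the width is in device pixels
ctx.save()
ctx.identity_matrix()
ctx.set_line_width(2)
ctx.stroke()
ctx.restore()

surface.write_to_png("output.png")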
From the discussion: Crop exactly document paper from image
I'm trying to extract the white paper from the image, and I'm using the following code, which does not crop an exact rectangle.
import cv2
import numpy as np

def crop_image(image):
    image = cv2.imread(image)
    # convert to grayscale image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # threshold
    thresh = cv2.threshold(gray, 190, 255, cv2.THRESH_BINARY)[1]
    # apply morphology
    kernel = np.ones((7, 7), np.uint8)
    morph = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    kernel = np.ones((9, 9), np.uint8)
    morph = cv2.morphologyEx(morph, cv2.MORPH_ERODE, kernel)
    # get largest contour
    contours = cv2.findContours(morph, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours = contours[0] if len(contours) == 2 else contours[1]
    area_thresh = 0
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > area_thresh:
            area_thresh = area
            big_contour = cnt
    # get bounding box
    x, y, w, h = cv2.boundingRect(big_contour)
    # draw filled contour on black background
    mask = np.zeros_like(gray)
    mask = cv2.merge([mask, mask, mask])
    cv2.drawContours(mask, [big_contour], -1, (255, 255, 255), cv2.FILLED)
    # apply mask to input
    result = image.copy()
    result = cv2.bitwise_and(result, mask)
    # crop result
    img_result = result[y:y+h, x:x+w]
    # generate_filename() and logger come from elsewhere in the application
    filename = generate_filename()
    cv2.imwrite(filename, img_result)
    logger.info('Successfully saved cropped file : %s' % filename)
    return img_result, filename
I'm able to get the desired content, but not as a rectangular image.
Here is the image I'm attaching, and here is what I'm getting after cropping.
I want a rectangular image of the paper.
Please help me with this.
Thanks in advance
The first problem I can see is that the threshold value is not low enough, so the bottom part of the paper is not correctly captured (it's too dark to be caught by the threshold).
The second problem, as far as I can understand, is being able to fit the square to the image. What you need to do is warp the perspective; see the sketch after this answer.
To do that, you can find more information in this post from PyImageSearch.
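A minimal sketch of that perspective warp, following the four-point-transform idea from that post; it assumes the paper contour (big_contour from the code above) can be approximated by four corners, and the helper name warp_paper is illustrative:
import cv2
import numpy as np

def warp_paper(image, contour):
    # approximate the paper contour with four corners
    peri = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.02 * peri, True)
    if len(approx) != 4:
        raise ValueError('expected a quadrilateral')
    pts = approx.reshape(4, 2).astype('float32')

    # order corners: top-left, top-right, bottom-right, bottom-left
    s = pts.sum(axis=1)          # smallest sum -> top-left, largest -> bottom-right
    d = np.diff(pts, axis=1).ravel()  # smallest y-x -> top-right, largest -> bottom-left
    ordered = np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                        pts[np.argmax(s)], pts[np.argmax(d)]], dtype='float32')

    (tl, tr, br, bl) = ordered
    w = int(max(np.linalg.norm(br - bl), np.linalg.norm(tr - tl)))
    h = int(max(np.linalg.norm(tr - br), np.linalg.norm(tl - bl)))
    dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype='float32')

    # warp so the paper fills a rectangular output image
    M = cv2.getPerspectiveTransform(ordered, dst)
    return cv2.warpPerspective(image, M, (w, h))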
I want to draw contours around the concentric ellipses shown in the image appended below, but I am not getting the expected result.
I have tried the following steps:
Read the image.
Convert the image to grayscale.
Apply a Gaussian blur.
Get the Canny edges.
Draw the ellipse contour.
Here is the Source code:
import cv2

target = cv2.imread('./source image.png')
targetgs = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)
targetGaussianBlurGreyScale = cv2.GaussianBlur(targetgs, (3, 3), 0)
canny = cv2.Canny(targetGaussianBlurGreyScale, 30, 90)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
close = cv2.morphologyEx(canny, cv2.MORPH_CLOSE, kernel)
_, contours, _ = cv2.findContours(close, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
if len(contours) != 0:
    for c in contours:
        if len(c) >= 50:
            hull = cv2.convexHull(c)
            cv2.ellipse(target, cv2.fitEllipse(hull), (0, 255, 0), 2)
cv2.imshow('mask', target)
cv2.waitKey(0)
cv2.destroyAllWindows()
The image below shows the Expected & Actual result:
Source Image:
The algorithm can be simple:
Convert RGB to HSV, split the channels, and work with the V channel.
Threshold to remove all the colored lines.
HoughLinesP to erase the remaining straight (non-colored) lines.
Dilate + erode to close the holes in the ellipses.
findContours + fitEllipse.
Result:
With the new image (with the added black curve) my approach does not work. It seems that you need to use Hough ellipse detection instead of findContours + fitEllipse.
OpenCV doesn't have an implementation, but you can find one here or here.
If you aren't afraid of C++ code (for the OpenCV library, C++ is more expressive), then:
// convert to HSV and work with the V channel
cv::Mat rgbImg = cv::imread("sqOOE.jpg", cv::IMREAD_COLOR);
cv::Mat hsvImg;
cv::cvtColor(rgbImg, hsvImg, cv::COLOR_BGR2HSV);
std::vector<cv::Mat> chans;
cv::split(hsvImg, chans);
// threshold the inverted V channel: keeps dark (black) strokes, drops colored lines
cv::threshold(255 - chans[2], chans[2], 200, 255, cv::THRESH_BINARY);
// detect the remaining straight lines and erase them
std::vector<cv::Vec4i> linesP;
cv::HoughLinesP(chans[2], linesP, 1, CV_PI/180, 50, chans[2].rows / 4, 10);
for (auto l : linesP)
{
    cv::line(chans[2], cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar::all(0), 3, cv::LINE_AA);
}
// dilate + erode to close the holes left in the ellipses
cv::dilate(chans[2], chans[2], cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 4);
cv::erode(chans[2], chans[2], cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)), cv::Point(-1, -1), 3);
// fit an ellipse to each remaining contour
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(chans[2], contours, hierarchy, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); i++)
{
    if (contours[i].size() > 4)
    {
        cv::ellipse(rgbImg, cv::fitEllipse(contours[i]), cv::Scalar(255, 0, 255), 2);
    }
}
cv::imshow("rgbImg", rgbImg);
cv::waitKey(0);
First, consider this code:
from PIL import Image

im = Image.open("best_tt.jpg")
im2 = Image.new("RGB", im.size, (255, 255, 255))
b = 200  # threshold below which a pixel is treated as black
for i in range(im.size[0]):
    for j in range(im.size[1]):
        rgb = im.getpixel((i, j))
        if rgb[0] <= b and rgb[1] <= b and rgb[2] <= b:
            im2.putpixel((i, j), (0, 0, 0))  # keep black pixels black
        else:
            im2.putpixel((i, j), (0, rgb[1], rgb[2]))  # drop the red component
im2.save("tmp.jpg")
What I am doing is simply removing the RED component from each pixel (other than black pixels: the if statement checks for pixels that look black). In other words, I'm converting the given image to a yellow scale (since G+B = Y).
In that way, every pixel should have an RGB value like (0, G, B).
However, certain pixels of the new image returned values like:
(1, 255, 203)
(3, 205, 243)
(16, 242, 47)
though some had the red component as 0.
What causes this arbitrary adjustment of the RGB values?
The save() function determines the format from the filename, here JPEG, which has a default compression quality of 75. The way the file is encoded and compressed can end up changing pixel values after the fact.
See the PIL documentation for save() below:
https://pillow.readthedocs.io/en/3.1.x/handbook/image-file-formats.html
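If the exact values matter, one workaround is to save in a lossless format such as PNG, so the pixels survive encoding unchanged; a minimal sketch against the code above:
im2.save("tmp.png")  # PNG is lossless: the (0, G, B) values are stored exactly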
The problem I have at hand is to draw a boundary around a white ball, but the ball appears under different illuminations. Using Canny edge detection and the Hough transform for circles, I am able to detect the ball in bright light/partial bright light, but not in low illumination.
Can anyone help with this problem?
The code that I have tried is below.
import cv2
import numpy as np

img = cv2.imread('14_04_2018_10_38_51_.8242_P_B_142_17197493.png.png')
cimg = img.copy()
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.medianBlur(img, 5)
edges = cv2.Canny(edges, 200, 200)
circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, 1, 20,
                           param1=25, param2=10, minRadius=0, maxRadius=0)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        # draw the outer circle
        cv2.circle(cimg, (i[0], i[1]), i[2], (255, 255, 255), 2)
        # draw the center of the circle
        cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
    cv2.imwrite('segmented_out.png', cimg)
else:
    print("no circles")
    cv2.imwrite('edges_out.png', edges)
In the image below, we need to segment the ball even when it is in the shadow region.
The output should be something like the images below.
Well, I am not very experienced in OpenCV or Python, but I am learning as well. It's probably not a very Pythonic piece of code, but you could try this:
import cv2
import math

circ = 0
n = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210, 220]
img = cv2.imread("ball1.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for i in n:
    ret, threshold = cv2.threshold(gray, i, 255, cv2.THRESH_BINARY)
    im, contours, hierarchy = cv2.findContours(threshold, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    for j in range(0, len(contours)):
        size = cv2.contourArea(contours[j])
        if 500 < size < 5000:
            if circ > 0:
                (x, y), radius = cv2.minEnclosingCircle(contours[j])
                radius = int(radius)
                area = cv2.contourArea(contours[j])
                circif = 4 * area / (math.pi * (radius * 2) ** 2)
                if circif > circ:
                    circ = float(circif)
                    radiusx = radius
                    center = (int(x), int(y))
            elif circ == 0:
                # remember the first candidate as well
                (x, y), radius = cv2.minEnclosingCircle(contours[j])
                radius = int(radius)
                area = cv2.contourArea(contours[j])
                circ = 4 * area / (math.pi * (radius * 2) ** 2)
                radiusx = radius
                center = (int(x), int(y))
            else:
                pass
cv2.circle(img, center, radiusx, (0, 255, 0), 2)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
What it actually does: you convert your picture to grayscale and apply different threshold settings to it. Then you eliminate noise by requiring a minimum size for a contour. When you find one, you check its circularity (NOTE: it is not a scientific formula) and compare it to the best one so far. A perfect circle should return a result of 1, since for a circle of radius r the area is pi*r^2 and the enclosing diameter is 2r, giving 4*area/(pi*(2r)^2) = 1; the contour with the highest score among all the contours will be your ball.
Result:
NOTE: I haven't tried increasing the size limit, so maybe a higher limit could return a better result if you have a high-resolution picture.
Working with a grayscale image makes you subject to varying light conditions.
To be free from this, I suggest working in the HSV color space, then using the Hue component instead of the grayscale image.
Hue is independent of the light conditions, since it gives you information about the color, regardless of its Saturation or Value (a value bound to the brightness of the image).
This might bring you some clarity about color spaces and which is best to use for image segmentation.
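A minimal sketch of that suggestion (the filename is illustrative):
import cv2

img = cv2.imread('ball.png')  # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hue = hsv[:, :, 0]  # segment on this channel instead of the grayscale image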
In your case here, we have a white ball. White is not a color by itself; the main factor is what kind of light actually falls on the white ball, since that has a direct influence on the kind of extraction you might plan to do using a color space like HSV, as mentioned above by @magicleon.
HSV is your best bet for segmentation here. Using
whiteObject = cv2.inRange(hsvImage, lowerHSVLimit, upperHSVLimit)
where lowerHSVLimit and upperHSVLimit define the HSV color range.
Keep in mind the conditions that:
1) The images were taken under similar conditions.
2) You cover all the required ranges of HSV before extraction.
Hope you get the idea.
Consider this example, selecting a particular hue range from 45 to 60.
Code:
import cv2
import numpy as np
from matplotlib import pyplot as plt

image = cv2.imread('allcolors.png')
hsvImg = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
lowerHSVLimit = np.array([45, 0, 0])
upperHSVLimit = np.array([60, 255, 255])
colour = cv2.inRange(hsvImg, lowerHSVLimit, upperHSVLimit)
plt.subplot(111), plt.imshow(colour, cmap="gray")
plt.title('hue range from 45 to 60'), plt.xticks([]), plt.yticks([])
plt.show()
Here is the result with the hue selected from 45 to 60:
Check out this Python code:
import cv2
import numpy as np

degrees = 90
center = (24, 24)
img = np.ones((48, 48, 3)) * 255
mat = cv2.getRotationMatrix2D(center, degrees, 1.0)
img = cv2.warpAffine(img, mat, (48, 48))
My expectation is that a 3-channel, fully saturated, white square will be created and stored in img, after which it'll be rotated by 90 degrees. Rotating a white square by 90 degrees should result in ... an indistinguishable white square. But when I:
plt.imshow(img)
plt.show()
I see an erroneous black border:
Is there any way to get warpAffine working as expected, i.e. rotate the image without an erroneous border? I've tried the following modifications to no avail:
center = (23, 23)
center = (24, 23)
center = (23, 24)
center = (25, 25)
center = (24, 25)
center = (25, 24)
You should be using the exact center of the image rather than the next closest thing. The rotation is slightly off-center when using (24, 24).
Since getRotationMatrix2D accepts a Point2f, you should pass the center as (23.5, 23.5), as it is the midway point between 0 and 47.
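A minimal sketch of that fix applied to the question's code (using uint8 pixels for display):
import cv2
import numpy as np

img = np.full((48, 48, 3), 255, dtype=np.uint8)  # white 48x48 square
center = (23.5, 23.5)  # the true center: midway between pixel 0 and pixel 47
mat = cv2.getRotationMatrix2D(center, 90, 1.0)
rotated = cv2.warpAffine(img, mat, (48, 48))
# rotated stays uniformly white; no black border appears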