Using OpenCV LineSegmentDetector to find lines in an image - python-3.x

I have a color image and I need to use the OpenCV LineSegmentDetector algorithm to detect the lines of the rectangles in the image.
Here is my image:
I'm using this code:
import cv2
img = cv2.imread("rectangles.jpg",0)
#Create default parametrization LSD
lsd = cv2.createLineSegmentDetector(0)
#Detect lines in the image
lines = lsd.detect(img)[0]
#Draw detected lines in the image
drawn_img = lsd.drawSegments(img,lines)
#Show image
cv2.imshow("LSD",drawn_img )
cv2.waitKey(0)
and I'm getting this error:
<ipython-input-18-93ae667b0648> in <module>()
3
4 #Create default parametrization LSD
----> 5 lsd = cv2.createLineSegmentDetector(0)
6
7 #Detect lines in the image
error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\imgproc\src\lsd.cpp:143: error: (-213:The function/feature is not implemented) Implementation has been removed due original code license issues in function 'cv::LineSegmentDetectorImpl::LineSegmentDetectorImpl'
I checked the OpenCV 4.1 documentation for this method, and here is the page, but I don't understand how I should use it.
Any help is appreciated.

Did you read the error message?
error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\imgproc\src\lsd.cpp:143: error: (-213:The function/feature is not implemented)
Implementation has been removed due original code license issues in function 'cv::LineSegmentDetectorImpl::LineSegmentDetectorImpl'
The class is not available due to license issues.
You can see that here in the original source.
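If you want to stay on a plain opencv-python build without the contrib modules, the probabilistic Hough transform is one stock alternative. This is only a rough sketch, assuming the same rectangles.jpg input; the thresholds are guesses you will need to tune:
import cv2
import numpy as np

img = cv2.imread("rectangles.jpg", 0)
edges = cv2.Canny(img, 50, 150)
# threshold, minLineLength and maxLineGap are starting points, not tuned values
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=5)
out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(out, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imshow("HoughLinesP", out)
cv2.waitKey(0)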

You can also use the Fast Line Detector, which is available in OpenCV 4.1 through the ximgproc contrib module (the opencv-contrib-python package).
import cv2
img = cv2.imread("rectangles.jpg",0)
#Create default Fast Line Detector (FLD)
fld = cv2.ximgproc.createFastLineDetector()
#Detect lines in the image
lines = fld.detect(img)
#Draw detected lines in the image
drawn_img = fld.drawSegments(img,lines)
#Show image
cv2.imshow("FLD", drawn_img)
cv2.waitKey(0)
Result:

Related

How to solve problem running MTCNN in JupyterLab

The function detect_faces() fails in JupyterLab:
from PIL import Image
from numpy import asarray
from mtcnn import MTCNN

image = Image.open(filename)
imageRGB = image.convert('RGB')
pixels = asarray(imageRGB)
detector = MTCNN()
results = detector.detect_faces(pixels)
mtcnn version 0.1.0
The error:
AbortedError: Operation received an exception:Status: 2, message:
could not create a descriptor for a softmax forward propagation
primitive, in file tensorflow/core/kernels/mkl/mkl_softmax_op.cc:306
[[node model/softmax/Softmax (defined at
/home/rikkatti/anaconda3/envs/poi/lib/python3.9/site-packages/mtcnn/mtcnn.py:342)
]] [Op:__inference_predict_function_828]
Function call stack: predict_function
That's probably due to a conflict between the Keras and TensorFlow versions.
I solved it by doing these steps:
Uninstall tensorflow from anaconda venv
pip install tensorflow==2.9.0
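As a quick sanity check after reinstalling, this minimal sketch just prints the versions the packages report (expecting tensorflow 2.9.0 from the fix above and mtcnn 0.1.0 from the question):
import tensorflow as tf
import mtcnn

print(tf.__version__)
print(mtcnn.__version__)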
Try it this way using cv2:
from mtcnn import MTCNN
import os
import cv2
detector = MTCNN()
dest_dir=r'C:\Temp\people\storage\cropped' # specify where to save the images
filename=r'C:\Temp\people\storage\34.png' # specify the file name full path
try:
    img = cv2.imread(filename)  # filename must be the full path to the image
    shape = img.shape  # will raise an exception if the image was not read properly
    data = detector.detect_faces(img)
    if data == []:
        print('no faces were detected for file ', filename)
    else:
        for i, faces in enumerate(data):
            box = faces['box']
            if box != []:
                box[0] = 0 if box[0] < 0 else box[0]
                box[1] = 0 if box[1] < 0 else box[1]
                cropped_img = img[box[1]: box[1] + box[3], box[0]: box[0] + box[2]]
                fname = os.path.split(filename)[1]
                index = fname.rfind('.')
                fileid = fname[:index]
                fext = fname[index:]
                fname = fileid + '-' + str(i) + fext
                save_path = os.path.join(dest_dir, fname)
                cv2.imwrite(save_path, cropped_img)
except:
    print('an error occurred')
This will detect all faces in the image and store them as cropped images in dest_dir. I tested it with an image containing multiple faces and it works fine.

How to resize an image according to a reference point in another

I currently have two images (im1 and im2) with pixel dimensions of (1725, 1580). im2, however, has a large border around it that I had to create to ensure that the images were of the same size; im2 was originally (1152, 864).
As such, when I overlay im2 on top of im1 using PIL.Image.blend, im2 appears overlaid onto im1, but is a lot smaller. I have 2 distinct reference points (present on im1 and im2) that I think I could use to rescale im2 (zoom it in somehow?) so that it overlays properly onto im1.
My issue is that I have been looking through various Python modules (PIL, scipy, matplotlib etc.) but can't seem to get anywhere or find a solution to this issue.
#im1 https://i.imgur.com/dF8uyPw.jpg
#im2 https://i.imgur.com/o4RAhOQ.png
#im2_resized https://i.imgur.com/jfWz1LE.png
from PIL import Image

im1 = Image.open("pit5Film/Pit_5_5mm_inf.tif")
im2 = Image.open("pit5Overlay/overlay_132.png")
old_size = im2.size
new_size = im1.size
im2_resized = Image.new("RGB", new_size)
im2_resized.paste(im2,((round((new_size[0]-old_size[0])/2)),round(((new_size[1]-old_size[1])/2))))
Image.blend(im1,im2_resized,0.2)
I think you are trying to do an "affine distortion". I can maybe work out how to do it in OpenCV or PIL, but for the minute, here's what I did with ImageMagick.
First, I located the centre of the registration hole (?) on both the left and right side of the first image. I got these coordinates:
422,775 # left hole centre 1st picture
1246,799 # right hole centre 1st picture
Then I found these same features in the second picture at:
514,426 # left hole centre 2nd picture
668,426 # right hole centre 2nd picture
Then I ran this in Terminal to do the 2-point affine transformation:
convert imageA.jpg -virtual-pixel white \
-distort affine '422,775 514,426 1246,799 668,426' +repage \
imageB.png -compose overlay -composite result.jpg
There is loads of great information from Anthony Thyssen here if you fancy a read.
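For completeness, here is a rough OpenCV sketch of the same 2-point alignment, assuming the coordinates and filenames above; cv2.estimateAffinePartial2D fits a similarity transform (rotation, scale, translation), which two point pairs are enough to determine:
import cv2
import numpy as np

# Hole centres from above: first picture -> second picture
src_pts = np.float32([[422, 775], [1246, 799]])
dst_pts = np.float32([[514, 426], [668, 426]])

imgA = cv2.imread('imageA.jpg')
imgB = cv2.imread('imageB.png')

# Fit rotation + scale + translation from the two correspondences
M, _ = cv2.estimateAffinePartial2D(src_pts, dst_pts)
warped = cv2.warpAffine(imgA, M, (imgB.shape[1], imgB.shape[0]),
                        borderValue=(255, 255, 255))

# Plain alpha blend as a stand-in for ImageMagick's overlay compose
result = cv2.addWeighted(warped, 0.5, imgB, 0.5, 0)
cv2.imwrite('result.jpg', result)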
This is how to do it in Python Wand, which is based upon ImageMagick. I use Mark Setchell's images and the Python Wand equivalent command. The distort command needs ImageMagick 7, according to the documentation. I am using Python Wand 0.5.5, the current version.
Script:
#!/bin/python3.7
from wand.image import Image
from wand.color import Color
from wand.display import display
with Image(filename='imageA.jpg') as Aimg:
    with Image(filename='imageB.jpg') as Bimg:
        Aimg.virtual_pixel = 'background'
        Aimg.background_color = Color('white')
        arguments = (422, 775, 514, 426, 1246, 799, 668, 426)
        Aimg.distort('affine', arguments)
        Aimg.composite(Bimg, 0, 0, 'overlay')
        Aimg.save(filename='image_BoverlayA_composite.png')
        display(Aimg)
Calling Command:
python3.7 wand_affine_overlay.py
Result:
ADDITION:
If you want to trim the image to its minimum bounding box, then add trim to the command as follows, where the fuzz value is in the range 0 to the quantum range.
#!/bin/python3.7
from wand.image import Image
from wand.color import Color
from wand.display import display

with Image(filename='imageA.jpg') as Aimg:
    with Image(filename='imageB.jpg') as Bimg:
        Aimg.virtual_pixel = 'background'
        Aimg.background_color = Color('white')
        arguments = (422, 775, 514, 426, 1246, 799, 668, 426)
        Aimg.distort('affine', arguments)
        Aimg.composite(Bimg, 0, 0, 'overlay')
        Aimg.trim(fuzz=10000)
        Aimg.save(filename='image_BoverlayA_composite.png')
        display(Aimg)
A better way to resize an image using OpenCV:
import cv2

# img is your input image; width and height are the target size
dim = (width, height)
# resize image
resized = cv2.resize(img, dim, interpolation=cv2.INTER_AREA)
To blend two images using OpenCV:
1: load both images
import cv2 as cv
src1 = cv.imread(cv.samples.findFile('img1.jpg'))
src2 = cv.imread(cv.samples.findFile('img2.jpg'))
2: blend both images (alpha is any float between 0 and 1)
alpha = 0.5
beta = (1.0 - alpha)
dst = cv.addWeighted(src1, alpha, src2, beta, 0.0)
3: display the result
cv.imshow('dst', dst)
cv.waitKey(0)
If you want to resize an image according to a reference point, consider this awesome blog by PyImageSearch: PYIMAGESEARCH 4 POINT

detecting similar objects and cropping them from the image

I have to extract this:
from the given image:
I tried contour detection but that gives all the contours. But I specifically need that object in that image.
My idea is to:
Find the objects in the image
Draw bounding box around them
Crop them and save them individually.
I am working with opencv and using python3, which I am fairly new to.
As seen, there are three objects similar to the given template but of different sizes. There are also other boxes which are not in the area of interest. After cropping, I want to save them as three separate images. Is there a solution to this situation?
I tried multi-scale template matching with the cropped template.
Here is an attempt:
# import the necessary packages
import numpy as np
import argparse
import imutils
import glob
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-t", "--template", required=True, help="Path to template image")
args = vars(ap.parse_args())
# load the template image, convert it to grayscale, and detect edges
template = cv2.imread(args["template"])
template = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
template = cv2.Canny(template, 50, 200)
(tH, tW) = template.shape[:2]
image = cv2.imread('input.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# loop over the scales of the image
for scale in np.linspace(0.2, 1.0, 20)[::-1]:
    # resize the image according to the scale, and keep track
    # of the ratio of the resizing
    resized = imutils.resize(gray, width=int(gray.shape[1] * scale))
    r = gray.shape[1] / float(resized.shape[1])
    # if the resized image is smaller than the template, then break
    # from the loop
    if resized.shape[0] < tH or resized.shape[1] < tW:
        break
    # detect edges in the resized, grayscale image and apply template
    # matching to find the template in the image
    edged = cv2.Canny(resized, 50, 200)
    # use the normalized method so the 0.95 threshold is meaningful
    res = cv2.matchTemplate(edged, template, cv2.TM_CCOEFF_NORMED)
    loc = np.where(res >= 0.95)
    for pt in zip(*loc[::-1]):
        # scale the match coordinates back to the original image
        cv2.rectangle(image,
                      (int(pt[0] * r), int(pt[1] * r)),
                      (int((pt[0] + tW) * r), int((pt[1] + tH) * r)),
                      (0, 255, 0), 2)
cv2.imwrite('re.png', image)
Result that I am getting:
The expected result is bounding boxes around all the post-it boxes.
I'm currently on mobile so I can't really write code, but this link does exactly what you're looking for!
If anything isn't clear I can adapt the code to your example later this evening, when I have access to a laptop.
In your case I would crop out the content of the shape (the post-it) and template match just on the edges. That'll make sure it's not thrown off by the text inside.
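A rough sketch of that idea, assuming hypothetical template.png and input.png filenames: blank out the template's interior with a mask so matchTemplate scores only the border, not the text inside (TM_CCORR_NORMED is one of the methods that accepts a mask):
import cv2
import numpy as np

template = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE)
image = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)

# keep only a thin border of the template; zero-mask pixels are ignored
mask = np.full(template.shape, 255, dtype=np.uint8)
border = 8
mask[border:-border, border:-border] = 0

res = cv2.matchTemplate(image, template, cv2.TM_CCORR_NORMED, mask=mask)
_, max_val, _, max_loc = cv2.minMaxLoc(res)
print(max_val, max_loc)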
Good luck!

Issue with T-API(with OpenCL) python3

I'm experimenting with the T-API of OpenCV in Python. Currently I'm trying to convert a normal RGB image to grayscale and save it with the T-API enabled. Here's the snippet:
import cv2
import dlib
im = cv2.UMat(cv2.imread('input.png',1))
print(im)
imMat = cv2.UMat(im)
gray = cv2.cvtColor(imMat,cv2.COLOR_BGR2GRAY)
cv2.imwrite('gray.png',gray)
The output is as follows along with the error
<cv2.UMat object at 0x7fd1e400e378>
<cv2.UMat object at 0x7fc97d0ec390>
Segmentation fault (core dumped)
I have tried this before the above operations
cv2.ocl.setUseOpenCL(True)
print(cv2.ocl.haveOpenCL())
The above print statement outputs False. I was thinking I need to compile OpenCV with OpenCL support in order for cv2.UMat to work, but I'm able to print out both im outputs. I copied the code from this example, and it seems to work fine:
import cv2
img = cv2.UMat(cv2.imread("image.jpg", cv2.IMREAD_COLOR))
imgUMat = cv2.UMat(img)
gray = cv2.cvtColor(imgUMat, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 1.5)
gray = cv2.Canny(gray, 0, 50)
cv2.imshow("edges", gray)
cv2.waitKey()
Where exactly am I going wrong?
You should rebuild OpenCV with CMake using the flag WITH_OPENCL=ON, plus WITH_OPENCLAMDFFT=ON and WITH_OPENCLAMDBLAS=ON if you have AMD's FFT and BLAS libraries. source
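After rebuilding, a quick runtime check (a minimal sketch using the standard cv2.ocl calls) confirms whether the T-API can actually use OpenCL; if haveOpenCL() is still False, UMat work will run on the CPU path or fail as above:
import cv2

# True only if the build found an OpenCL runtime
print(cv2.ocl.haveOpenCL())
cv2.ocl.setUseOpenCL(True)
# True only if OpenCL is both available and enabled
print(cv2.ocl.useOpenCL())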

matrix.cpp:310: error: (-215) s >= 0 in function cv::setSize

I am very new to Python. I'm using
face_recognizer = cv2.face.LBPHFaceRecognizer_create()
and defined my predict function as
#this function recognizes the person in the image passed
#and draws a rectangle around the detected face with the name of the
#subject
def predict(test_img):
    #make a copy of the image as we don't want to change the original image
    img = test_img.copy()
    #detect face from the image
    face, rect = detect_face(img)
    #predict the image using our face recognizer
    label = face_recognizer.predict(face)
    #get name of respective label returned by face recognizer
    label_text = subjects[label]
    #draw a rectangle around face detected
    draw_rectangle(img, rect)
    #draw name of predicted person
    draw_text(img, label_text, rect[0], rect[1]-5)
    return img
and I get the following error while predicting the face using the predict function:
---------------------------------------------------------------------------
error Traceback (most recent call last)
<ipython-input-13-d6517b4e38bd> in <module>()
6
7 #perform a prediction
----> 8 predicted_img1 = predict(test_img1)
9 #predicted_img2 = predict(test_img2)
10 print("Prediction complete")
<ipython-input-12-b46266ecb9d5> in predict(test_img)
9
10 #predict the image using our face recognizer
---> 11 label= face_recognizer.predict(face)
12 #get name of respective label returned by face recognizer
13 label_text = subjects[label]
error: C:\projects\opencv-python\opencv\modules\core\src\matrix.cpp:310:
error: (-215) s >= 0 in function cv::setSize
Many thanks in advance
I was testing the same GitHub project and encountered the same error when adding new training images and a new test image. The problem goes away when using one of the images from the training data as the test image. So the problem is that there's nothing in place to deal with the case where no match is found, as @DaveW.Smith suggests.
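A minimal sketch of such a guard, reusing the question's helpers (detect_face, face_recognizer, subjects, draw_rectangle and draw_text are assumed to exist); detect_face typically returns (None, None) when no face is found, and calling predict() on None is what raises the (-215) s >= 0 error:
def predict(test_img):
    img = test_img.copy()
    face, rect = detect_face(img)
    # bail out early instead of passing None to the recognizer
    if face is None:
        print('no face detected in the test image')
        return img
    # the OpenCV face module returns a (label, confidence) tuple
    label, confidence = face_recognizer.predict(face)
    draw_rectangle(img, rect)
    draw_text(img, subjects[label], rect[0], rect[1] - 5)
    return img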
