Remove lines from this image

I'm trying to remove these lines after stitching multiple images, but they just aren't going away.
I tried OpenCV's morphological transformations, but nothing worked. Any help would be great.

You can use Gaussian blurring for this. Since you want to remove vertical lines, use an m x 1 kernel (m pixels wide, 1 pixel tall): the blur then acts only horizontally across the lines and does not smear the image vertically.
import cv2
img = cv2.imread('vertical_noise.jpg')
# an 11 x 1 kernel: 11 px wide, 1 px tall, so the blur is horizontal only
img = cv2.GaussianBlur(img, (11, 1), 0)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
You can increase the kernel width to remove the lines completely, but the image will also get blurrier.
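For example, widening the kernel (the 21 here is an arbitrary choice) removes more of the lines at the cost of extra horizontal smearing:
# wider horizontal kernel: stronger line removal, but more horizontal blur
img = cv2.GaussianBlur(img, (21, 1), 0)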
EDIT
Sorry for the extremely late edit. If you don't want to blur, I don't know how to remove the lines inside the car. However, the lines outside the car can be removed. For this, create a threshold image using cv2.threshold with a high threshold value and then find contours on it. Then, using cv2.drawContours, the blank spaces in the threshold image can be filled in, and the result can be treated as a mask. Using cv2.bitwise_and, obtain two images: one from a blank white image with that mask, and one from the original image with the mask's inverse. Then combine them with cv2.bitwise_or.
import cv2
import numpy as np
img = cv2.imread('vertical_noise.jpg')
height, width, ch = img.shape
# optional horizontal blur (not used in the masking steps below)
blur = cv2.GaussianBlur(img, (11, 1), 0)
# threshold with a high value so the bright lines turn white
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
# findContours returns 3 values in OpenCV 3.x and 2 in 4.x, so take [-2]
cnts = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)[-2]
# draw the contour outlines in black (thickness 3) on the threshold image, which serves as the mask
cv2.drawContours(thresh, cnts, -1, 0, 3)
# keep the original image where the mask is black ...
res = cv2.bitwise_and(img, img, mask=cv2.bitwise_not(thresh))
# ... and a blank white image where the mask is white, then combine the two
blank = np.ones((height, width, ch), np.uint8) * 255
inv_res = cv2.bitwise_and(blank, blank, mask=thresh)
res = cv2.bitwise_or(res, inv_res)
cv2.imshow('image', res)
cv2.waitKey(0)
cv2.destroyAllWindows()
Now, if you want, you can blur the result a little for the lines inside the car.
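A minimal sketch of that optional step, reusing the horizontal kernel from above on the combined result (the kernel width is just an example):
# light horizontal blur on the combined result to soften the lines inside the car
res = cv2.GaussianBlur(res, (11, 1), 0)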

Related

Bounding boxes for individual contours except one color

I have an image as below
I want to add bounding boxes for each of the regions as shown in the pic below using OpenCV & Python
I know how to find contours if the region is one colour. However, here I want to find contours for all non-black regions. I am just not able to figure it out. Can anyone help?
Regarding some regions being non-continuous (the 2 vertical lines on the left), you can ignore that; I will dilate and make them continuous.
If I understand what you want, here is one way in Python/OpenCV.
Read the input
Convert to gray
Threshold to black and white
Find external contours and their bounding boxes
Draw the bounding box rectangles on a copy of the input
Save the results
Input:
import cv2
import numpy as np
# read image
img = cv2.imread('white_green.png')
# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# threshold
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)[1]
# get contour bounding boxes and draw on copy of input
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours = contours[0] if len(contours) == 2 else contours[1]
result = img.copy()
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(result, (x, y), (x+w-1, y+h-1), (0, 0, 255), 1)
# view result
cv2.imshow("threshold", thresh)
cv2.imshow("result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
# save result
cv2.imwrite("white_green_thresh.jpg", thresh)
cv2.imwrite("white_green_bboxes.jpg", result)
Thresholded image:
Bounding Boxes:

Crop the rectangular paper from the image

From the discussion: Crop exactly document paper from image
I'm trying to get the white paper from the image, and I'm using the following code, which is not cropping it as an exact rectangle.
import cv2
import numpy as np
def crop_image(image):
    image = cv2.imread(image)
    # convert to grayscale image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # threshold
    thresh = cv2.threshold(gray, 190, 255, cv2.THRESH_BINARY)[1]
    # apply morphology
    kernel = np.ones((7, 7), np.uint8)
    morph = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    kernel = np.ones((9, 9), np.uint8)
    morph = cv2.morphologyEx(morph, cv2.MORPH_ERODE, kernel)
    # get largest contour
    contours = cv2.findContours(morph, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours = contours[0] if len(contours) == 2 else contours[1]
    area_thresh = 0
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area > area_thresh:
            area_thresh = area
            big_contour = cnt
    # get bounding box
    x, y, w, h = cv2.boundingRect(big_contour)
    # draw filled contour on black background
    mask = np.zeros_like(gray)
    mask = cv2.merge([mask, mask, mask])
    cv2.drawContours(mask, [big_contour], -1, (255, 255, 255), cv2.FILLED)
    # apply mask to input
    result = image.copy()
    result = cv2.bitwise_and(result, mask)
    # crop result
    img_result = result[y:y+h, x:x+w]
    filename = generate_filename()
    cv2.imwrite(filename, img_result)
    logger.info('Successfully saved cropped file : %s' % filename)
    return img_result, filename
I'm able to get a result, but not a rectangular image.
I'm attaching the input here, and here is what I'm getting after cropping.
I want a rectangular image of the paper.
Please help me with this.
Thanks in advance
The first problem I can see is that the threshold value is not low enough, so the bottom part of the paper is not correctly captured (it's too dark to be picked up by the threshold).
The second problem, as far as I can understand, is fitting the page to a rectangle. What you need to do is warp the perspective.
To do that, you can find more information in this amazing post from PyImageSearch.
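As a rough sketch of that perspective warp (not the poster's code): assuming big_contour and image from the crop_image function above, approximate the paper outline with four points, order the corners, and map them to an upright rectangle with cv2.getPerspectiveTransform and cv2.warpPerspective. The order_corners helper below is illustrative.
import cv2
import numpy as np
def order_corners(pts):
    # order the 4 corners as top-left, top-right, bottom-right, bottom-left
    pts = pts.reshape(4, 2).astype(np.float32)
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1).ravel()
    return np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                     pts[np.argmax(s)], pts[np.argmax(d)]], dtype=np.float32)
# approximate the paper contour with 4 points
peri = cv2.arcLength(big_contour, True)
quad = cv2.approxPolyDP(big_contour, 0.02 * peri, True)
if len(quad) == 4:
    src = order_corners(quad)
    w = int(max(np.linalg.norm(src[0] - src[1]), np.linalg.norm(src[3] - src[2])))
    h = int(max(np.linalg.norm(src[0] - src[3]), np.linalg.norm(src[1] - src[2])))
    dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype=np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(image, M, (w, h))
If approxPolyDP does not return exactly 4 points, lowering the threshold value as suggested above usually yields a cleaner paper contour first.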

How to apply threshold contours to only a specified/masked region of the image using opencv

I am new to OpenCV and stuck at a point. I have studied lots of Q&As, but none fully satisfied my requirement.
Suppose I have an image and I want to find contours only in a specified area.
It is easy to find contours over the complete image, but I want contours only in the masked area.
For the mask I have the x,y coordinates with me.
So my question is: how do I find contours only in the masked area of the image using OpenCV?
This is the one solution matching my question: Stackoverflow suggestion.
But it does not solve my problem completely.
The given solution is quite complicated and does not solve my problem, as I want my contour ROI to be shown in the same image using the mask coordinates (xmin, ymin, xmax, ymax).
That means I first draw a rectangle mask onto the image, and then I want to draw the contour boundaries only inside that rectangle mask, in the same image.
This is my Demo.py file:
import cv2
img = cv2.imread("F:/mycar.jpg", 1)
# draw the rectangular mask region on the image
crop_img = cv2.rectangle(img, (631, 181), (698, 386), (0, 255, 0), 3)
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
retval, thresh = cv2.threshold(gray_img, 127, 255, 0)
# contours are currently found over the whole thresholded image
img_contours, _ = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, img_contours, -1, (255, 0, 0))
cv2.imshow("dflj", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Output:
From this image, you can see that contours are drawn over the complete image.
But I want contours only in that green-rectangle part of the image.
That means the blue contour boundary lines are visible across the whole image, but I want to show them only inside the green rectangle box.
I have the rectangle coordinates with me (e.g. xmin=631, ymin=181, xmax=698, ymax=386).
Please help.
You can apply functions to ROIs using NumPy indexing, as explained in this tutorial:
# img is loaded as in the question's code
top, left = 338, 282
h, w = 216, 106
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
crop_img = cv2.rectangle(img, (left, top), (left+w, top+h), (0, 255, 0), 3)
retval, thresh = cv2.threshold(gray_img, 127, 255, 0)
# find contours only inside the ROI of the thresholded image
img_contours, _ = cv2.findContours(thresh[top:top+h, left:left+w], cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# draw them onto the same ROI of the original image
cv2.drawContours(img[top:top+h, left:left+w], img_contours, -1, (255, 0, 0))
Please note that with Numpy indexing y comes first then x, as opposed to point coordinates with (x,y).
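As a small aside (not part of the answer above), the ROI slice really is a view of the underlying array, which is why drawing into img[top:top+h, left:left+w] marks the contours on img itself:
import numpy as np
roi = thresh[top:top+h, left:left+w]    # NumPy slicing: rows (y) first, then columns (x)
print(roi.shape)                        # (216, 106), i.e. (h, w)
print(np.shares_memory(roi, thresh))    # True: the slice is a view, not a copy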
With this image we get:

Pupil Detection in eye images using python

I need to mark the pupil in an eye image like this one. I have written this code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
img_name = '6.jpg'
image = cv2.imread(img_name)
image_copy_new = cv2.imread(img_name)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
retval, thresholded = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY_INV)
plt.imshow(thresholded, cmap="gray")
This produces output like this:
Then I searched for the contours in the image and tried to keep only the most circular one with this code:
contours, hierarchy = cv2.findContours(thresholded, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
image_copy = np.zeros_like(image)  # create a new empty image
for cnt in contours:
    peri = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.04 * peri, True)
    (x, y, w, h) = cv2.boundingRect(cnt)
    ar = w / float(h)
    if w*h > 20 and 0.9 < ar < 1.1:  # filtering condition
        cv2.drawContours(image, [cnt], 0, 255, -1)
While this produces great results in cases where the eyes face forward, in other cases (like this one) it completely fails. I have tried many other things, like Hough transforms and different morphological operations, but I'm not able to tackle this problem.
The images are of only the eyes and not the whole face; otherwise dlib's face detection would have worked.
Here are the cases where this code works:
Thanks for taking the time to help me out.
Adding some blurring, erosion and dilation may help. Eroding will remove very small features, like the noise around the eyelashes, and dilation will bring any surviving points back up to size. By tweaking the erosion and dilation sizes, you should be able to get rid of most of the noise and make that center pupil look much better.
Here's an example of how I would do this:
import cv2
# frame_in is the input eye image (BGR)
gray = cv2.cvtColor(frame_in, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
thresh = cv2.threshold(blurred, 30, 255, cv2.THRESH_BINARY)[1]
# erode to remove small features (eyelash noise), then dilate to restore size
erosion_size = 10
dilate_size = 8
thresh = cv2.erode(thresh, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (erosion_size, erosion_size)))
thresh = cv2.dilate(thresh, cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (dilate_size, dilate_size)))
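If you then want to actually mark the pupil, one possible follow-up (a sketch, not part of the answer above) is to take the largest contour of the cleaned mask and draw its minimum enclosing circle. This assumes the pupil shows up as a white blob in thresh; if it comes out dark, use cv2.THRESH_BINARY_INV in the threshold step, as in the question.
# find contours on the cleaned mask and keep the largest blob
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
if cnts:
    pupil = max(cnts, key=cv2.contourArea)
    (cx, cy), r = cv2.minEnclosingCircle(pupil)
    marked = frame_in.copy()
    cv2.circle(marked, (int(cx), int(cy)), int(r), (0, 0, 255), 2)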

How to crop the green colored rectangle from an image

I need to crop an image for further processing using OpenCV and Python. I need to crop the region inside the green-colored rectangle in the image. The rectangle is drawn using the "haar_cascade_fullbody_Detector".
The code is as follows :
import numpy as np
import cv2
bodydetection = cv2.CascadeClassifier('haarcascade_fullbody.xml')
img = cv2.imread('original.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
body = bodydetection.detectMultiScale(gray, 1.009, 5)
for x, y, w, h in body:
# so we slightly shrink the rectangles to get a nicer output.
pad_w, pad_h = int(0.15*w), int(0.02*h)
cv2.rectangle(img, (x+pad_w+10, y+pad_h+10), (x+w-pad_w, y+h-pad_h), (0, 255, 0), 2)
cv2.imshow('img',img)
crop_img = img[x:x+w, y:y+h]
cv2.imshow('crop',crop_img)
cv2.waitKey(0)
cv2.destroyAllWindows()
The input image is:
The output of the Haar cascade is:
After the crop, the image is:
Please suggest any solution. Thanks!
OK! I got the answer. The crop_img line should be modified to:
crop_img = img[y:y+h, x:x+w]
This produces the required output. This is because in OpenCV (NumPy) image indexing, the y-coordinates (rows) come first and then the x-coordinates (columns).
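A short illustration of that indexing order (the values here are just an example):
import numpy as np
img = np.zeros((480, 640, 3), np.uint8)   # shape is (rows, cols, channels) = (height, width, 3)
x, y, w, h = 100, 50, 200, 300
crop = img[y:y+h, x:x+w]                  # rows (y) first, then columns (x)
print(crop.shape)                         # (300, 200, 3): height 300, width 200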
