OpenCV rectangle gets drawn on original image (unable to remove) - python-3.x

I used the cv2.rectangle() function to draw a rectangle on my image, and that works fine. However, the rectangle also appears on my original copy. (I made two copies of the same image from my original image file.) Can someone please point out where I went wrong?
Here's the code (left out chunks to reduce length):
img=cv2.imread(file,1)
height, width = img.shape[:2]
max_height = 800
max_width = 800
#some image resizing function
#...
resized=img
originalImg=resized
height0, width0=resized.shape[:2]
image=resized
#image manipulation on (image)
#...
imageRect=cv2.rectangle(image,(xCoor,yCoor),(xCoor+width, yCoor+height), (255,0,255), 3)
#This shows the drawn on rectangle fine
cv2.imshow("imageRect", imageRect)
#problem is, this originalImg still has the rectangle drawn on it
cv2.imshow("originalImg", originalImg)

Related

Transform Plates into Horizontal Using Hough transform

I am trying to transform images that are not horizontal, because they may be slanted.
I tested two images: this photo, which is horizontal, and this one, which is not. It gives me good results with the horizontal photo; however, when trying to correct the second, tilted photo, it does not do what was expected.
The first image works fine, as shown below, with a theta of 1.6406095. For now the result looks bad because I'm trying to make both photos horizontally correct.
For the second image, theta is just 1.9198622.
I think the error is at this line:
lines= cv2.HoughLines(edges, 1, np.pi/90.0, 60, np.array([]))
I have done a little simulation in Colab at this link.
Any help is welcome.
So far this is what I got.
import cv2
import numpy as np
img=cv2.imread('test.jpg',1)
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
imgBlur=cv2.GaussianBlur(imgGray,(5,5),0)
imgCanny=cv2.Canny(imgBlur,90,200)
contours,hierarchy =cv2.findContours(imgCanny,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
rectCon=[]
for cont in contours:
    area = cv2.contourArea(cont)
    if area > 100:
        #print(area) # prints the area of each contour
        peri = cv2.arcLength(cont, True)
        approx = cv2.approxPolyDP(cont, 0.01*peri, True)
        #print(len(approx)) # prints how many corner points the contour has
        if len(approx) == 4:
            rectCon.append(cont)
#print(len(rectCon))
rectCon = sorted(rectCon, key=cv2.contourArea, reverse=True) # sort the contours from largest to smallest area
bigPeri = cv2.arcLength(rectCon[0], True)
cornerPoints = cv2.approxPolyDP(rectCon[0], 0.01*bigPeri, True) # use bigPeri here, not the loop's peri
# Reorder bigCornerPoints so I can prepare it for warp transform (bird eyes view)
cornerPoints=cornerPoints.reshape((4,2))
mynewpoints=np.zeros((4,1,2),np.int32)
add=cornerPoints.sum(1)
mynewpoints[0]=cornerPoints[np.argmin(add)]
mynewpoints[3]=cornerPoints[np.argmax(add)]
diff=np.diff(cornerPoints,axis=1)
mynewpoints[1]=cornerPoints[np.argmin(diff)]
mynewpoints[2]=cornerPoints[np.argmax(diff)]
# Draw my corner points
#cv2.drawContours(img,mynewpoints,-1,(0,0,255),10)
##cv2.imshow('Corner Points in Red',img)
##print(mynewpoints)
# Bird Eye view of your region of interest
pt1=np.float32(mynewpoints) #What are your corner points
pt2=np.float32([[0,0],[300,0],[0,200],[300,200]])
matrix=cv2.getPerspectiveTransform(pt1,pt2)
imgWarpPers=cv2.warpPerspective(img,matrix,(300,200))
cv2.imshow('Result',imgWarpPers)
Now you just have to fix the tilt (OpenCV can correct skew), then apply some thresholding to detect the letters, and then recognise each letter.
For general-purpose use, I think the images need to be normalised first so that the edges can be detected easily.
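As a hedged sketch of the tilt-fixing step (this is not part of the answer above, and cv2.minAreaRect's angle convention differs between OpenCV versions, so the normalisation may need adjusting): rotate the image by the angle of the largest rectangular contour, rectCon[0] from the code above.
(cx, cy), (w, h), angle = cv2.minAreaRect(rectCon[0])
if angle < -45:   # in older OpenCV, angle lies in (-90, 0]; fold it to the small tilt
    angle += 90
M = cv2.getRotationMatrix2D((img.shape[1] / 2, img.shape[0] / 2), angle, 1.0)
deskewed = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
cv2.imshow('Deskewed', deskewed)
cv2.waitKey(0)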

Detect rectangles in OpenCV (4.2.0) using Python (3.7)

I am working on a personal project where I detect rectangles (all the same dimensions) and then place those rectangles inside a list in the same order (top-bottom) and then process the information inside each rectangle using some function. Below is my test image.
I have managed to detect the rectangle of interest, but I keep getting other rectangles that I don't want. As you can see, I only want the three rectangles with the information (6, 9, 3) in the list.
My code
import cv2
width=700
height=700
y1=0
y2=700
x1=500
x2=700
img=cv2.imread('test.jpg') #read image
img=cv2.resize(img,(width,height)) #resize image
roi = img[y1:y2, x1:x2] #region of interest i.e where the rectangles will be
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY) #convert roi into gray
Blur=cv2.GaussianBlur(gray,(5,5),1) #apply blur to roi
Canny=cv2.Canny(Blur,10,50) #apply canny to roi
#Find my contours
contours =cv2.findContours(Canny,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)[0]
#Loop through my contours to find rectangles and put them in a list, so i can view them individually later.
cntrRect = []
for i in contours:
    epsilon = 0.05*cv2.arcLength(i, True)
    approx = cv2.approxPolyDP(i, epsilon, True)
    if len(approx) == 4:
        cv2.drawContours(roi, cntrRect, -1, (0,255,0), 2)
        cv2.imshow('Roi Rect ONLY', roi)
        cntrRect.append(approx)
        cv2.waitKey(0)
cv2.destroyAllWindows()
OpenCV provides cv2.contourArea(), which takes a single contour and returns its area. Inside your loop you can use it as a condition:
if cv2.contourArea(i) > rectangleArea: # rectangleArea = the minimum area a wanted rectangle can have
Filtering on area this way should solve your problem.
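A minimal sketch of that filter applied to your loop (the threshold value is an illustrative guess and needs tuning to the size of your rectangles):
minRectArea = 5000  # illustrative; tune to the size of your three rectangles
cntrRect = []
for i in contours:
    if cv2.contourArea(i) < minRectArea:
        continue  # drop the small, unwanted contours
    epsilon = 0.05*cv2.arcLength(i, True)
    approx = cv2.approxPolyDP(i, epsilon, True)
    if len(approx) == 4:
        cntrRect.append(approx)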
I'd suggest that you get the bounding rectangles of the contours and then sort the rectangles by area descending. Crop the first rectangle by default, then loop through the remaining rectangles and crop them if they're, let's say, >=90% of the first rectangle's area. This will ensure that you have the larger rectangles and the smaller ones are ignored.
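A hedged sketch of that approach (cv2.boundingRect and the sort are standard OpenCV; the 90% ratio follows the suggestion above):
# Get the bounding rectangles and sort them by area, largest first
rects = [cv2.boundingRect(c) for c in contours]     # each is (x, y, w, h)
rects.sort(key=lambda r: r[2] * r[3], reverse=True)
bx, by, bw, bh = rects[0]
crops = [roi[by:by + bh, bx:bx + bw]]               # crop the largest by default
for (x, y, w, h) in rects[1:]:
    if w * h >= 0.9 * bw * bh:                      # keep only near-equal-sized rectangles
        crops.append(roi[y:y + h, x:x + w])
# crops can then be re-sorted top-to-bottom by their y coordinate if needed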

grayscale image rotation with cv2

I'm trying to crop and rotate a grayscale image.
The image is being rotated successfully according to my defined dimensions, but the intensity channel seems to get zeroed out across the entire rotated image.
image - the original 32,000X1024X1 grayscale image.
i - an index from which I want to crop the image.
windowWidth - a size constant, which defines the number of pixels I wish to crop (e.g in our case, windowWidth = 5000).
cropped - the piece from the original image I wish to rotate.
code example:
cropped = image[i:i+windowWidth, :]
ch, cw = cropped.shape[:2]
rotation_matrix = cv2.getRotationMatrix2D((cw/2,ch/2),90,1)
return cv2.warpAffine(cropped,rotation_matrix, (ch,cw))
The returned 1024X5000X1 matrix contains only 0's, although the original image does not.
It is possible that you are cropping along the wrong axis (using the width instead of the height); if so, this might solve your problem:
cropped = image[:, i:i+windowWidth]
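Another possibility, offered as a hedged aside: with a rotation centre of (cw/2, ch/2) but an output size of (ch, cw), cv2.warpAffine can map the rotated content outside the visible canvas, which would also produce an all-zero result. For a pure 90-degree turn, cv2.rotate sidesteps the bookkeeping; otherwise the matrix's translation terms can be shifted:
# Option A: simple 90-degree rotation
rotated = cv2.rotate(cropped, cv2.ROTATE_90_CLOCKWISE)

# Option B: keep warpAffine, but move the rotated content into the (ch, cw) canvas
rotation_matrix = cv2.getRotationMatrix2D((cw / 2, ch / 2), 90, 1)
rotation_matrix[0, 2] += ch / 2 - cw / 2   # shift the old centre to the new canvas centre (x)
rotation_matrix[1, 2] += cw / 2 - ch / 2   # shift the old centre to the new canvas centre (y)
rotated = cv2.warpAffine(cropped, rotation_matrix, (ch, cw))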

Change background and pixel color of image

For an experiment I want to show participants drawings from a database of black lines drawn on a white background. Eventually I only want to show the 'drawn part' of each image in a certain color. So I want the white parts of the image to be made gray, so they are indistinguishable from the gray background, and I want to show the black parts of the image (the actual drawing) in another color, for example red.
I am quite new to programming and so far I couldn't find an answer. I have tried several things, including the 2 options below.
Could anyone maybe show me an example of how to change the colors of the image I have attached to this message?
It would be very much appreciated!
####### OPTION 1, not working
#picture = Image.open(fname)
fname = exp.get_file('PICTURE_1.png')
picture = Image.open(fname)
# Get the size of the image
width, height = picture.size
# Process every pixel
for x in range(width):
    for y in range(height):
        current_color = picture.getpixel( (x,y) )
        if current_color == (255,255,255):
            new_color = (255,0,0)
            picture.putpixel( (x,y), new_color)
        elif current_color == (0,0,0):
            new_color2 = (115,115,115)
            picture.putpixel( (x,y), new_color2)
picture.show()
win.flip()
clock.sleep(1000)
Implementing the changes as you suggested gives: TypeError: 'int' object has no attribute '__getitem__'
for x in range(width):
    for y in range(height):
        current_color = picture.getpixel( (x,y) )
        if (current_color[0]<200) and (current_color[1]<200) and (current_color[2]<200):
            new_color = (255,0,0)
            picture.putpixel( (x,y), new_color)
        elif (current_color[0]>200) and (current_color[1]>200) and (current_color[2]>200):
            new_color2 = (115,115,115)
            picture.putpixel( (x,y), new_color2)
picture.show()
Your approach in option one is basically correct, but here are a few tips to help you get it working properly:
Instead of writing if current_color == (255,255,255):, use
if (current_color[0]>200) and (current_color[1]>200) and (current_color[2]>200):
because even though the white parts of the image look white, the pixels may not be exactly (255,255,255).
I thought you wanted to turn the white parts grey and the black parts red? In your code for option one, the lines
if current_color == (255,255,255):
new_color = (255,0,0)
will turn white pixels red. To turn black pixels red, it should be if current_color == (0,0,0).
If your code is still not working when these changes are made, you could try creating a new image with the same dimensions as the original one, and adding pixels to the new image rather than editing the pixels in the original one.
Also, it would help if you could tell us what actually happens when you run your code. Is there an error message, or is an image shown but the image is not correct? Could you please attach an example output?
Update:
I fiddled around with your code, and got it to do what you want it to do. Here is the code I ended up with:
import PIL
from PIL import Image
picture = Image.open('image_one.png')
# If the image opens in palette ('P') or greyscale ('L') mode, getpixel returns an int
# and current_color[0] raises the TypeError seen above; converting to RGB avoids that.
picture = picture.convert('RGB')
# Get the size of the image
width, height = picture.size
for x in range(width):
    for y in range(height):
        current_color = picture.getpixel( (x,y) )
        if (current_color[0]<200) and (current_color[1]<200) and (current_color[2]<200):
            new_color = (255,0,0)
            picture.putpixel( (x,y), new_color)
        elif (current_color[0]>200) and (current_color[1]>200) and (current_color[2]>200):
            new_color2 = (115,115,115)
            picture.putpixel( (x,y), new_color2)
picture.show()
If you copy and paste this code into a script and run it in the same folder as your image, it should work.
There are much more efficient ways to do this than looping through each pixel and changing its value.
Since it looks like you're using PsychoPy, you can save your images as greyscale with a transparent background. By using the greyscale image format you allow PsychoPy to change the color of the lines to anything you want simply by altering the stimulus color setting. By using a transparent background, whatever you see behind your lines will show through, so you can choose to have a white square, a different square or no square at all. By this method, all the calculations for the colors are being done on the graphics card and can be changed every frame with no problems.
If for some reason you need to alter the image in ways that PsychoPy doesn't inherently allow (and if speed of processing matters) then you should try to change all the pixels in a single operation (using the numpy arrays) rather than one pixel at a time in a for-loop.
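A minimal sketch of that vectorised approach, assuming a PIL image as above (the 200 threshold mirrors the per-pixel version):
import numpy as np
from PIL import Image

picture = Image.open('image_one.png').convert('RGB')
pixels = np.array(picture)                 # H x W x 3 uint8 array
dark = (pixels < 200).all(axis=2)          # mask of the drawn (blackish) pixels
light = (pixels > 200).all(axis=2)         # mask of the whitish background
pixels[dark] = (255, 0, 0)                 # drawing becomes red
pixels[light] = (115, 115, 115)            # background becomes grey
Image.fromarray(pixels).show()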

Find contour of a sock that contains patterns

I am trying to figure out how I can isolate a non-uniform sock in a picture.
For now I am mainly using edge detection, as you can see in my code:
main:
# We import the image
image = importImage(filename)
# Save the shapes variables
height, width, _ = np.shape(image)
# Get the gray scale image in a foot shape
grayImage, bigContourArea = getFootShapeImage(image, True)
minArea = width * height / 50
# Extract all contours
contours = getAllContours(grayImage)
# Keep only the contours that are not too big nor too small
relevantContours = getRelevantContours(contours, minArea, maxArea)
And getAllContours does the following :
kernel = np.ones((5, 5), np.uint8)
# Apply Canny Edge detection algorithm
# We apply a Gaussian blur first
edges = cv2.GaussianBlur(grayIm, (5, 5), 0)
# Then we apply Edge detection
edges = cv2.Canny(edges, 10, 100)
# And we do a dilatation followed by erosion to fill gaps
edges = cv2.dilate(edges, kernel, iterations=2)
edges = cv2.erode(edges, kernel, iterations=2)
_, contours, _ = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
Here are some pictures resulting from my code:
Original picture, with the foot on the drawn shape
Only the bigger contours
All contours
So as you can see, some parts of the sock are not included in the sock contour, and I have tried to include the whole sock with several techniques but never succeeded.
I tried the following:
Segmentation using Otsu thresholding and Itti's saliency (in order to get a mask of the sock and discard everything else)
Regrouping the smaller contours with the big one to create an even bigger one (but then I can't avoid picking up contours outside the sock)
Do you have an idea of how I can proceed?
Thanks in advance! I hope it is clear enough; if you need clarification, just ask.
In order to solve this I had to perform some color detection to find the white sheet of paper that is there for this special purpose. I did so with the following:
# Define a mask for color I want to isolate
mask = cv2.inRange(image, lowerWhiteVals, upperWhiteVals)
# I also applied some morphological operations on the mask to make it cleaner
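A hedged sketch of that mask step with illustrative values (lowerWhiteVals, upperWhiteVals, and the kernel size are not given in the answer; the bounds below are guesses to be tuned to the actual lighting):
lowerWhiteVals = np.array([180, 180, 180])   # illustrative BGR lower bound for "white-ish"
upperWhiteVals = np.array([255, 255, 255])
mask = cv2.inRange(image, lowerWhiteVals, upperWhiteVals)
# Morphological open then close: remove speckle noise, then fill small holes
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)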
Here is the mask image obtained this way, before and after the morphological operations:
Then I detect the paper in the image by taking the left-most contour on the mask and use it as a left boundary; I also split the paper contour to get a representative bottom boundary.
For the top and right boundaries I used the first sock contour I had, assuming it will always have at least these two edges because of how the socks are positioned.
Once this was done, I just took all the contours within my boundaries and created a new contour from them by drawing them all onto a blank image and finding the contour of the result (thanks to @Alexander Reynolds).
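A minimal sketch of that merge step, assuming keptContours holds the contours inside the boundaries (the variable name is illustrative, and the three-value findContours matches the OpenCV 3 API used above):
# Draw the kept contours filled onto a blank canvas, then take the outer contour of the union
canvas = np.zeros(grayImage.shape, np.uint8)
cv2.drawContours(canvas, keptContours, -1, 255, thickness=cv2.FILLED)
_, merged, _ = cv2.findContours(canvas, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
sockContour = max(merged, key=cv2.contourArea)   # the biggest merged blob is the sock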
I also had to fine-tune my algorithm a bit to get the most representative contour of the sock at the end; you can see my final result in the following image. Even if it's not perfect, it's more than enough for this small trial with OpenCV.
Thanks @Alexander for your help, and I hope this will help others someday!
