Find contour of a sock that contains patterns - python-3.x

I am trying to figure out how I can isolate a non-uniform sock in a picture.
For now I am mainly using edge detection, as you can see in my code:
main:
# We import the image
image = importImage(filename)
# Save the shape variables
height, width, _ = np.shape(image)
# Get the gray scale image in a foot shape
grayImage, bigContourArea = getFootShapeImage(image, True)
minArea = width * height / 50
# Extract all contours
contours = getAllContours(grayImage)
# Keep only the contours that are neither too big nor too small
# (maxArea is defined elsewhere in the full script)
relevantContours = getRelevantContours(contours, minArea, maxArea)
And getAllContours does the following:
kernel = np.ones((5, 5), np.uint8)
# Apply Canny Edge detection algorithm
# We apply a Gaussian blur first
edges = cv2.GaussianBlur(grayIm, (5, 5), 0)
# Then we apply Edge detection
edges = cv2.Canny(edges, 10, 100)
# And we do a dilation followed by an erosion to fill gaps
edges = cv2.dilate(edges, kernel, iterations=2)
edges = cv2.erode(edges, kernel, iterations=2)
_, contours, _ = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
Here are some pictures resulting from my code:
Original picture with the foot on the drawn shape
Only the biggest contours
All contours
So as you can see, some parts of the sock are not included in the sock contour, and I tried several techniques to include the whole sock but never succeeded.
I tried the following:
Segmentation using Otsu thresholding and Itti's saliency (in order to have a mask of the sock in the image and discard everything else)
Regrouping the smaller contours with the big one to create an even bigger one (but then I can't avoid picking up others that are outside the sock)
Do you have an idea of how I can proceed?
Thanks in advance! I hope it is clear enough; if you need clarifications, just ask.

In order to solve this I had to run a color detection algorithm to detect the white sheet of paper that is there for this exact purpose. I did so with the following:
# Define a mask for the color I want to isolate
mask = cv2.inRange(image, lowerWhiteVals, upperWhiteVals)
# I also applied some morphological operations on the mask to make it cleaner
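# A plausible cleanup step (assumed; the exact kernel size and
# operations were not given in the original):
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small specks
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes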
Here are the mask images obtained this way, before and after the morphological operations:
Then I detect the paper in the image by taking the left-most contour on the mask and using it as a left boundary; I also split the paper contour to get a representative bottom boundary.
For the top and right I used the first sock contour I had, assuming it will always at least have these two boundaries because of how the socks are positioned.
Once this was done, I just took all the contours within my boundaries and created a new contour from them by drawing them all onto a blank image and finding the new contour again (thanks to @Alexander Reynolds).
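A minimal sketch of this merge step, assuming contoursInBounds is the list of contours already filtered by the boundaries (the name is mine, not from the original code):
blank = np.zeros(grayImage.shape, np.uint8)
cv2.drawContours(blank, contoursInBounds, -1, 255, -1)  # draw all contours filled
# Same OpenCV 3 findContours signature as in getAllContours above
_, merged, _ = cv2.findContours(blank, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
sockContour = max(merged, key=cv2.contourArea)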
I also had to fine-tune my algorithm a bit to get the most representative contour of the sock at the end, and you can see my final result in the following image; even if it's not perfect, it's more than enough for this small trial with OpenCV.
Thanks @Alexander for your help. I hope it will help others someday!

Related

Generate an Image Dataset from a Single Image

I have a single image that looks like this:
And I need to generate an image dataset that keeps the basic characteristics of this image but adds some noise, such as the line we see at the 1:30 position in the image.
Mainly, there's the pink part of the image (vertical lines), the blue part (central bluish hue), and the yellow/green part at the edges. I'm looking to "learn" the image in a way that lets me control these three things and randomly generate:
small color changes and size of the central bluish hue
thickness and color of the vertical pink lines
the yellow/green edges and their size (I could expand them at the expense of the blue in the middle, or vice versa)
CONSTRAINT: The yellowish circle (which is an image of a semiconductor wafer) cannot change in size or shape. It can move on top of the black square, though. Structures inside it can change as well, as mentioned in the three points above.
This might be an easy question for people with experience in computer vision, but I unfortunately don't have a lot of experience in this domain. So I'd love to get any ideas on making progress in this direction. Thanks.
Changing the shape of your inner structures while safely keeping all possible characteristics seems non-trivial to me. There are, however, a number of simple transformations you could do to create an augmented dataset, such as:
Mirroring: horizontally, vertically, diagonally - will keep all of your line characteristics
Rotation: normally you would also do some rotations, but this will obviously change the orientation of your lines, which you want to preserve, so it does not apply in your case
Shearing: might still apply and work nicely to add some robustness, as long as you don't overdo it and end up bending your features too much
Other than that, you might also want to add some noise, such as Gaussian noise or salt-and-pepper noise, to your image or to the transformed versions of it listed above.
You could also play around with the color values, e.g. by slightly shifting the saturation of different hue values in HSV space.
You can combine any of these methods in different combinations; if you try all possible permutations with different amounts and types of noise, you will get quite a big dataset. A rough sketch of a few of these augmentations follows below.
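A rough OpenCV/NumPy sketch of a few of these augmentations (the noise and shift magnitudes are placeholder values, not from the original answer; the file name is taken from the answer below):
import cv2
import numpy as np

img = cv2.imread("xIzEG.png")

# Mirroring: flip horizontally (1), vertically (0), or both (-1)
mirrored = cv2.flip(img, 1)

# Gaussian noise with an arbitrary standard deviation of 10
noise = np.random.normal(0, 10, img.shape).astype(np.float32)
noisy = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

# Slight saturation shift in HSV space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.int16)
hsv[..., 1] = np.clip(hsv[..., 1] + 10, 0, 255)
shifted = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)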
One approach is using Keras's ImageDataGenerator.
Decide how many samples you want; assume 5.
total_number = 5
Initialize the ImageDataGenerator class, for instance:
data_gen = ImageDataGenerator(rescale=1. / 255, shear_range=0.2,
                              zoom_range=0.2, horizontal_flip=True)
Turn your image into a tensor.
img = load_img("xIzEG.png", grayscale=False) # You can also create gray images.
arr = img_to_array(img)
tensor_img = arr.reshape((1, ) + arr.shape)
Create a folder where you want to store the results, e.g. populated, then populate it:
for i, _ in enumerate(data_gen.flow(x=tensor_img,
                                    batch_size=1,
                                    save_to_dir="populated",
                                    save_prefix="generated",
                                    save_format="png")):
    if i > total_number:
        break
Now, if you look at your populated folder:
Code
from keras.preprocessing.image import load_img, img_to_array
from keras.preprocessing.image import ImageDataGenerator

# Total generated number
total_number = 5

data_gen = ImageDataGenerator(rescale=1. / 255, shear_range=0.2,
                              zoom_range=0.2, horizontal_flip=True)

# Create image to tensor
img = load_img("xIzEG.png", grayscale=False)
arr = img_to_array(img)
tensor_image = arr.reshape((1, ) + arr.shape)

for i, _ in enumerate(data_gen.flow(x=tensor_image,
                                    batch_size=1,
                                    save_to_dir="populated",
                                    save_prefix="generated",
                                    save_format="png")):
    if i > total_number:
        break

Transform Plates to Horizontal Using the Hough Transform

I am trying to straighten images that are not horizontal, because they may be slanted.
When testing two images, this photo that is horizontal and this one that is not, I get good results with the horizontal photo; however, when trying to correct the second, tilted photo, it does not do what was expected.
The first image works fine, as shown below, with a theta of 1.6406095. For now it looks rough because I'm trying to make both photos horizontally correct.
For the second image, theta is just 1.9198622.
I think the error is at this line:
lines = cv2.HoughLines(edges, 1, np.pi / 90.0, 60, np.array([]))
I have put together a small simulation at this link in Colab.
Any help is welcome.
So far, this is what I've got.
import cv2
import numpy as np

img = cv2.imread('test.jpg', 1)
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
imgBlur = cv2.GaussianBlur(imgGray, (5, 5), 0)
imgCanny = cv2.Canny(imgBlur, 90, 200)
contours, hierarchy = cv2.findContours(imgCanny, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rectCon = []
for cont in contours:
    area = cv2.contourArea(cont)
    if area > 100:
        #print(area) # prints all the areas of the contours
        peri = cv2.arcLength(cont, True)
        approx = cv2.approxPolyDP(cont, 0.01 * peri, True)
        #print(len(approx)) # prints how many corner points the contour has
        if len(approx) == 4:
            rectCon.append(cont)
#print(len(rectCon))
rectCon = sorted(rectCon, key=cv2.contourArea, reverse=True) # Sort the contours from largest area to smallest
bigPeri = cv2.arcLength(rectCon[0], True)
cornerPoints = cv2.approxPolyDP(rectCon[0], 0.01 * bigPeri, True)
# Reorder cornerPoints to prepare for the warp transform (bird's eye view)
cornerPoints = cornerPoints.reshape((4, 2))
mynewpoints = np.zeros((4, 1, 2), np.int32)
add = cornerPoints.sum(1)
mynewpoints[0] = cornerPoints[np.argmin(add)]
mynewpoints[3] = cornerPoints[np.argmax(add)]
diff = np.diff(cornerPoints, axis=1)
mynewpoints[1] = cornerPoints[np.argmin(diff)]
mynewpoints[2] = cornerPoints[np.argmax(diff)]
# Draw the corner points
#cv2.drawContours(img, mynewpoints, -1, (0, 0, 255), 10)
##cv2.imshow('Corner Points in Red', img)
##print(mynewpoints)
# Bird's eye view of the region of interest
pt1 = np.float32(mynewpoints) # your corner points
pt2 = np.float32([[0, 0], [300, 0], [0, 200], [300, 200]])
matrix = cv2.getPerspectiveTransform(pt1, pt2)
imgWarpPers = cv2.warpPerspective(img, matrix, (300, 200))
cv2.imshow('Result', imgWarpPers)
Now you just have to fix the tilt (OpenCV can deskew) and then use some thresholding to detect the letters and recognise each one. A rough deskew sketch follows below.
As a general point, I think the images need to be normalised first so that the edges can be detected easily.
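A minimal deskew sketch, assuming plateContour is a hypothetical contour of the plate (the angle convention of cv2.minAreaRect differs between OpenCV versions, so the correction below may need adjusting):
(cx, cy), (w, h), angle = cv2.minAreaRect(plateContour)
if angle < -45:  # normalise minAreaRect's angle convention
    angle += 90
rows, cols = img.shape[:2]
M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
deskewed = cv2.warpAffine(img, M, (cols, rows))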

Detect rectangles in OpenCV (4.2.0) using Python (3.7)

I am working on a personal project where I detect rectangles (all of the same dimensions), place those rectangles in a list in the same order (top to bottom), and then process the information inside each rectangle with some function. Below is my test image.
I have managed to detect the rectangles of interest; however, I keep getting other rectangles that I don't want. As you can see, I only want the three rectangles with the information (6, 9, 3) in a list.
My code
import cv2

width = 700
height = 700
y1 = 0
y2 = 700
x1 = 500
x2 = 700
img = cv2.imread('test.jpg') # read image
img = cv2.resize(img, (width, height)) # resize image
roi = img[y1:y2, x1:x2] # region of interest, i.e. where the rectangles will be
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY) # convert roi to gray
Blur = cv2.GaussianBlur(gray, (5, 5), 1) # apply blur to roi
Canny = cv2.Canny(Blur, 10, 50) # apply canny to roi
# Find my contours
contours = cv2.findContours(Canny, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]
# Loop through my contours to find rectangles and put them in a list, so I can view them individually later.
cntrRect = []
for i in contours:
    epsilon = 0.05 * cv2.arcLength(i, True)
    approx = cv2.approxPolyDP(i, epsilon, True)
    if len(approx) == 4:
        cv2.drawContours(roi, cntrRect, -1, (0, 255, 0), 2)
        cv2.imshow('Roi Rect ONLY', roi)
        cntrRect.append(approx)
cv2.waitKey(0)
cv2.destroyAllWindows()
Contours have a feature called cv2.contourArea, which takes a single contour as input, like cv2.contourArea(cont). You can use the condition:
if cv2.contourArea(cont) > rectangleArea: # rectangleArea: your expected rectangle area (placeholder)
This should solve your problem.
I'd suggest that you get the bounding rectangles of the contours and then sort the rectangles by area, descending. Crop the first rectangle by default, then loop through the remaining rectangles and crop them if they're, let's say, >=90% of the first rectangle's area. This ensures that you keep the larger rectangles while the smaller ones are ignored. A sketch of this idea follows below.
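A minimal sketch of that idea, reusing the contours and roi variables from the question's code (the 0.9 factor is the suggested 90% threshold):
# Bounding rectangles of all contours, sorted by area, descending
rects = [cv2.boundingRect(c) for c in contours]
rects.sort(key=lambda r: r[2] * r[3], reverse=True)

crops = []
biggest_area = rects[0][2] * rects[0][3]
for x, y, w, h in rects:
    if w * h >= 0.9 * biggest_area: # keep only rectangles close in size to the biggest
        crops.append(roi[y:y + h, x:x + w])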

How to remove the white border from my heat map using Python

I've generated 2000 heat maps using seaborn in Python 3. The problem is that it renders a white border as well, and I only want to save the heat map itself. I want to remove these white borders because I will train my model on these heat maps, and I think the borders might mess up the result. Will the borders matter, given that every heat map has the same border?
The code I wrote to generate these heat maps:
for i in range(len(h1)):
    ax = sns.heatmap(h1[i], yticklabels=False, xticklabels=False, cbar=False)
    fig = ax.get_figure()
    fig.savefig(path.join(outpath, "neutral_{0}.png".format(i)))
Actual heat map
What I want:
If your heat map pictures are really all the same size, you can try trimming them with one more step.
Use the PIL (Pillow) module for this.
For example:
from PIL import Image

for i in range(len(h1)):
    im = Image.open("neutral_{0}.png".format(i))
    im = im.crop((left, upper, right, lower)) # You have to adjust the parameters here
    #im = im.crop((100, 75, 300, 150)) # e.g. this would give an image of width=200, height=75
    im.save("neutral_crop_{0}.png".format(i))
The parameters (left, upper, right, lower) are coordinates measured from the top-left corner of your input image.

Detect Color of particular area of Image Nodejs OpenCV

I'm trying to write code to detect the color of a particular area of an image.
So far I have found that we can do this with OpenCV, but I haven't found a tutorial that covers it.
I want to do this in JavaScript, but I can also use Python OpenCV to get the results.
Can anyone share a useful link or explain how I can detect the color of a particular area of an image?
For example:
The box in red will show a different color. I need to figure out which color it is showing.
What I have tried:
I have tried OpenCV Canny images; though I managed to separate areas with them, how to detect the color of a particular Canny area is still a challenge.
Also, I tried the inRange method from OpenCV, which works perfectly:
# find the colors within the specified boundaries and apply
# the mask
mask = cv2.inRange(image, lower, upper)
output = cv2.bitwise_and(image, image, mask = mask)
# show the images
cv2.imshow("images", np.hstack([image, output]))
It works well and extracts the color area from the image, but is there any callback that responds if the image contains a particular color, so that it can all be done automatically?
So I am assuming here that you already know the location of the rect, which is going to change dynamically, and that you need to find the single most dominant color in the desired ROI. There are a lot of ways to do this: one is to take the average of all the pixels in the ROI; another is to count all the distinct pixel values in the given ROI, with some tolerance.
Method 1:
import cv2
import numpy as np

img = cv2.imread("path/to/img.jpg")
region_of_interest = (356, 88, 495, 227) # left, top, right, bottom
cropped_img = img[region_of_interest[1]:region_of_interest[3], region_of_interest[0]:region_of_interest[2]]
print(cv2.mean(cropped_img))
>>> (53.430516018839604, 41.05708814243569, 244.54991977640907, 0.0)
Method 2:
To find out the various dominant clusters in the given image you can use cv2.kmeans() as:
import cv2
import numpy as np
img = cv2.imread("path/to/img.jpg")
region_of_interest = (356, 88, 495, 227)
cropped_img = img[region_of_interest[1]:region_of_interest[3], region_of_interest[0]:region_of_interest[2]]
Z = cropped_img.reshape((-1, 3))
Z = np.float32(Z)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 4
ret, label, center = cv2.kmeans(Z, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# Sort all the colors by their frequencies and print the most dominant:
print(center[sorted(range(K), key=lambda x: np.count_nonzero(label == [x]), reverse=True)[0]])
>>> [ 52.96525192 40.93861389 245.02325439]
@Prateek... nice to have the question narrowed down to the core. The code you provided does not address the issue at hand and remains just a question. I'll hint you towards a direction, but you have to code it yourself.
Steps that guide you towards a scripted result:
1) In your script, add two pixel lists (past and current) to store values (pixel type + occurrence).
2) Introduce a while loop with a true/stop statement (linked to step 3) for looping, because then it becomes a dynamic process.
3) Write a GUI with a flashy warning banner.
4) Compare the past pixel list with the current pixel list for a serious state change (threshold).
5) If the delta state change at step 4 meets the threshold, throw the alert from step 3. (A minimal illustration of steps 1, 2, and 4 follows below.)
When you've written the code and enjoyed the trouble of tracking the tracebacks, then edit your question, update it with the code, and reshape it (I can help with that if you want). Then we can pick it up from there. Does that sound like a plan?
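A minimal, hypothetical illustration of steps 1, 2, and 4 (all names and the threshold value are mine; the GUI alert from step 3 is left as a print):
import cv2
import numpy as np

past_hist = None
threshold = 0.25 # arbitrary delta threshold, tune as needed
cap = cv2.VideoCapture(0) # any dynamic image source

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Hue histogram as a cheap "pixel list" (pixel type + occurrence)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    current_hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    current_hist = cv2.normalize(current_hist, current_hist).flatten()
    if past_hist is not None:
        delta = np.abs(current_hist - past_hist).sum()
        if delta > threshold:
            print("ALERT: serious color state change") # stand-in for the GUI banner
    past_hist = current_hist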
I am not sure why you need a callback in this situation, but maybe this is what you mean?
def test_color(image, lower, upper):
    mask = cv2.inRange(image, lower, upper)
    return np.any(mask == 255)
Explanations:
cv2.inRange() will return 255 when a pixel is in the range (lower, upper), and 0 otherwise (see the docs)
Use np.any() to check if any element in the mask is actually 255
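A short usage example (the BGR bounds are placeholder values for a reddish range, not from the original answer):
import cv2
import numpy as np

image = cv2.imread("path/to/img.jpg")
lower = np.array([0, 0, 150])   # BGR lower bound (placeholder)
upper = np.array([80, 80, 255]) # BGR upper bound (placeholder)
if test_color(image, lower, upper):
    print("The image contains the target color")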
