OpenCV: compare two face embeddings - python-3.x

I went through the PyImageSearch face recognition tutorial,
but my application only needs to compare two faces.
I have the embeddings of two faces; how do I compare them using OpenCV?
The trained model used to extract the embedding from a face is described in the linked tutorial.
I want to know what methods I should try to compare two face embeddings.
(Note: I am new to this field.)

First of all, your case is similar to the given tutorial; instead of multiple known images you have a single image that you need to compare with a test image,
so you don't really need the training step here.
You can do
# cv2 and face_recognition are imported, and `args` parsed, as in the tutorial script
# read the 1st image and store its encodings
image = cv2.imread(args["image"])
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
boxes = face_recognition.face_locations(rgb, model=args["detection_method"])
encodings1 = face_recognition.face_encodings(rgb, boxes)
# read the 2nd image and store its encodings
# (assumes a second argument, e.g. args["image2"], pointing to the test image)
image = cv2.imread(args["image2"])
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
boxes = face_recognition.face_locations(rgb, model=args["detection_method"])
encodings2 = face_recognition.face_encodings(rgb, boxes)
# now you can compare the two encodings
# optionally you can pass a tolerance; by default it is 0.6
matches = face_recognition.compare_faces(encodings1, encodings2[0])
matches will be a list of True/False values, one per face found in the first image.
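If you also want a numeric similarity score rather than just True/False, the embeddings can be compared directly; this is a small sketch of mine (not part of the original answer) using face_recognition.face_distance, where a smaller distance means a closer match and 0.6 is the library's default tolerance:
import numpy as np
import face_recognition

# assumes encodings1 and encodings2 were computed as above
distance = face_recognition.face_distance([encodings1[0]], encodings2[0])[0]
# the same value as the plain Euclidean distance between the two 128-d vectors
euclidean = np.linalg.norm(np.asarray(encodings1[0]) - np.asarray(encodings2[0]))
print(distance, distance <= 0.6)  # True roughly corresponds to a compare_faces match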

Based on the article you mentioned, you can actually check whether two faces are the same using only the face_recognition library.
You can use compare_faces to determine whether two pictures contain the same face:
import face_recognition
known_image = face_recognition.load_image_file("biden.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")
biden_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]
results = face_recognition.compare_faces([biden_encoding], unknown_encoding)
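results is a list with one boolean per known encoding you passed in (a single element here), so a quick check of my own could look like:
if results[0]:
    print("Both pictures show the same person.")
else:
    print("The faces do not match.")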

Related

How to detect an object in an image rather than screen with pyautogui?

I am using the pyautogui.locateOnScreen() function to locate elements in Chrome, get their x, y coordinates, and click them. But at some point I need to take a screenshot of part of the screen and search for the object I want in that screenshot, then get its coordinates. Is it possible to do this with pyautogui?
My example code:
coord_one = pyautogui.locateOnScreen("first_image.png",confidence=0.95)
scshoot = pyautogui.screenshot(region=coord_one)
coord_two = # search the second image in scshoot and, if it is detected, get its coordinates
If it is not possible with pyautogui, can you advise the easiest and smartest way?
Thanks in advance.
I don't believe there is a built-in, direct way to do what you need, but the opencv-python library does the job.
The following code sample assumes you have a screen capture you just took, "capture.png", and you want to find "logo.png" in that capture, which you know is a subsection of "capture.png".
Minimal example
"""Get bounding box of cropped image from original image."""
import cv2 as cv
import numpy as np
img_rgb = cv.imread(r'res/original.png')
# the cropped image, expected to be smaller
target_img = cv.imread(r'res/crop.png')
_, w, h = target_img.shape[::-1]
res = cv.matchTemplate(img_rgb,target_img,cv.TM_CCOEFF_NORMED)
# with the method used, the date in res are top left pixel coords
min_val, max_val, min_loc, max_loc = cv.minMaxLoc(res)
top_left = max_loc
# if we add to it the width and height of the target, then we get the bbox.
bottom_right = (top_left[0] + w, top_left[1] + h)
cv.rectangle(img_rgb,top_left, bottom_right, 255, 2)
cv.imshow('', img_rgb)
MatchTemplate
From the docs, MatchTemplate "simply slides the template image over the input image (as in 2D convolution) and compares the template and patch of input image under the template image." Under the hood, this offers methods such as square difference to compare the images represented as arrays.
See more
For a more in-depth explanation, check the OpenCV docs; the code above is based directly on their example.
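To connect this back to the pyautogui snippet in the question: pyautogui.screenshot() returns a PIL image, which can be converted to an OpenCV array and fed into the same matchTemplate call instead of reading a capture from disk. A rough sketch, where "second_image.png" is a placeholder for the image you want to locate inside the screenshot:
import cv2 as cv
import numpy as np
import pyautogui

coord_one = pyautogui.locateOnScreen("first_image.png", confidence=0.95)
scshoot = pyautogui.screenshot(region=coord_one)            # PIL image
capture = cv.cvtColor(np.array(scshoot), cv.COLOR_RGB2BGR)  # OpenCV BGR array
target = cv.imread("second_image.png")
res = cv.matchTemplate(capture, target, cv.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv.minMaxLoc(res)
# max_loc is the top-left corner of the best match inside the screenshot
print(max_loc, max_val)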

combine overlapping labelled objects and modify label values

I have a Z-stack of 2D confocal microscopy images (2D slices) and I want to segment cells. The Z-stack of 2D images is effectively 3D data: the same cells appear in multiple slices along the Z-axis. I am interested in cell shape in the XY plane, so I want to keep the largest cell area across the different Z-axis slices. My plan was to combine consecutive 2D slices after converting them to labelled binary images, but I am running into a few issues and need some help to proceed further.
I have two images, img_a and img_b. I first converted them to binary images using Otsu thresholding, then applied some morphological operations, and then used cv2.connectedComponentsWithStats() to obtain labelled objects. After labelling the images, I combined them using cv2.bitwise_or(), but it messes up the labels. You can see this in the attached processed image (cells highlighted by red circles): overlapping cells end up with multiple labels. However, I want to assign one unique label to every combined overlapping object.
What I want in the end is that, when I combine two labelled images, each combined overlapping object gets one single, unique label, and the largest cell area from either image is kept. Does anyone know how to do this?
Here is the code:
from matplotlib import pyplot as plt
from skimage import io, color, measure
from skimage.util import img_as_ubyte
from skimage.segmentation import clear_border
import cv2
import numpy as np
cells_a=img_a[:,:,1] # get the green channel
#Threshold image to binary using OTSU.
ret_a, thresh_a = cv2.threshold(cells_a, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Morphological operations to remove small noise - opening
kernel = np.ones((3,3),np.uint8)
opening_a = cv2.morphologyEx(thresh_a,cv2.MORPH_OPEN,kernel, iterations = 2)
opening_a = clear_border(opening_a)  # remove edge-touching pixels
numlabels_a, labels_a, stats_a, centroids_a = cv2.connectedComponentsWithStats(opening_a)
img_a1 = color.label2rgb(labels_a, bg_label=0)
## now do the same with image_b
cells_b=img_b[:,:,1] # get the green channel
#Threshold image to binary using OTSU.
ret_b, thresh_b = cv2.threshold(cells_b, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
# Morphological operations to remove small noise - opening
opening_b = cv2.morphologyEx(thresh_b,cv2.MORPH_OPEN,kernel, iterations = 2)
opening_b = clear_border(opening_b)  # remove edge-touching pixels
numlabels_b, labels_b, stats_b, centroids_b = cv2.connectedComponentsWithStats(opening_b)
img_b1 = color.label2rgb(labels_b, bg_label=0)
## Now combine the two images
combined = cv2.bitwise_or(labels_a, labels_b)  ## combine both labelled images to keep the maximum area per cell
combined_img = color.label2rgb(combined, bg_label=0)
plt.imshow(combined_img)
Images can be found here:
Based on the comments from Christoph Rackwitz and beaker, I started looking around for 3D connected-components labelling. I found a Python library that can handle this, installed it, and gave it a try. It seems to work pretty well: it assigns labels in each slice and keeps the label the same for the same cell across different slices, which is exactly what I wanted.
Here is the link to the library I used to label objects in 3D:
https://pypi.org/project/connected-components-3d/
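For reference, a minimal sketch of how the two slices from the question could be fed to the connected-components-3d package; the variable names (opening_a, opening_b) come from the question's code, while the cc3d call itself is my assumption, so check the package documentation:
import numpy as np
import cc3d

# stack the per-slice binary masks into a small (z, y, x) volume
volume = np.stack([opening_a > 0, opening_b > 0]).astype(np.uint8)
# 3D labelling: a cell that overlaps in both slices gets one single label
labels_3d = cc3d.connected_components(volume, connectivity=26)
# collapse back to 2D, keeping the labelled footprint from either slice
combined = labels_3d.max(axis=0)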

How to change only the array of a DICOM file with SimpleITK in Python

I have a bunch of medical images in DICOM format that I want to correct for bias field inhomogeneity using SimpleITK in Python. The workflow is straightforward: I want to (1) open the DICOM image, (2) create a binary mask of the object in the image, (3) apply N4 bias field correction to the masked image, and (4) write the corrected image back in DICOM format. Note that no spatial transformation is applied to the image, only an intensity transformation, so I should be able to copy all spatial information and all metadata (except for the date/time of creation and the instance number) from the original to the corrected image.
I have written this function to achieve my goal:
import time
from pathlib import PurePath
from random import randint

import SimpleITK as sitk

def n4_dcm_correction(dcm_in_file):
    metadata_to_set = ["0008|0012", "0008|0013", "0020|0013"]
    filepath = PurePath(dcm_in_file)
    root_dir = str(filepath.parent)
    file_name = filepath.stem
    dcm_reader = sitk.ImageFileReader()
    dcm_reader.SetFileName(dcm_in_file)
    dcm_reader.LoadPrivateTagsOn()
    inputImage = dcm_reader.Execute()
    metadata_to_copy = [k for k in inputImage.GetMetaDataKeys() if k not in metadata_to_set]
    maskImage = sitk.OtsuThreshold(inputImage, 0, 1, 200)
    filledImage = sitk.BinaryFillhole(maskImage)
    floatImage = sitk.Cast(inputImage, sitk.sitkFloat32)
    corrector = sitk.N4BiasFieldCorrectionImageFilter()
    output = corrector.Execute(floatImage, filledImage)
    output.CopyInformation(inputImage)
    for k in metadata_to_copy:
        print("key is: {}; value is {}".format(k, inputImage.GetMetaData(k)))
        output.SetMetaData(k, inputImage.GetMetaData(k))
    output.SetMetaData("0008|0012", time.strftime("%Y%m%d"))
    output.SetMetaData("0008|0013", time.strftime("%H%M%S"))
    output.SetMetaData("0020|0013", str(int(inputImage.GetMetaData("0020|0013")) + randint(1, 999)))
    out_file = "{}/{}_biascorrected.dcm".format(root_dir, file_name)
    writer = sitk.ImageFileWriter()
    writer.KeepOriginalImageUIDOn()
    writer.SetFileName(out_file)
    writer.Execute(sitk.Cast(output, sitk.sitkUInt16))
    return
n4_dcm_correction("/path/to/my/dcm/image.dcm")
While the bias correction part works (the bias is removed), the writing part is a mess. I would expect my output DICOM to have exactly the same metadata as the original one; however, the tags are all missing, notably the patient name, the protocol name and the manufacturer. Similarly, something is very wrong with the spatial information: if I try to convert the DICOM to NIfTI format with dcm2niix, the directions are reversed (superior is down and inferior is up, forward is back and backward is front). What step am I missing?
I suspect you are working with an MRI series, not a single file. Likely this example does what you want: read, modify and write a volume stored in a set of files.
If the example does not resolve your issue, please post to the ITK Discourse, which is the primary location for ITK/SimpleITK-related discussions.
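In case it helps, here is a rough sketch of that read-modify-write pattern with the SimpleITK series reader and a per-slice writer; dicom_dir is a placeholder, and the N4 step is left where your existing code would go:
import time
import SimpleITK as sitk

dicom_dir = "/path/to/series"  # placeholder input directory
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
reader.MetaDataDictionaryArrayUpdateOn()  # keep the per-slice metadata
reader.LoadPrivateTagsOn()
image = reader.Execute()

# ... run the N4 bias field correction on `image` here, as in the question ...
corrected = image

writer = sitk.ImageFileWriter()
writer.KeepOriginalImageUIDOn()
for i in range(corrected.GetDepth()):
    image_slice = corrected[:, :, i]
    # copy the original per-slice tags onto the output slice
    for key in reader.GetMetaDataKeys(i):
        image_slice.SetMetaData(key, reader.GetMetaData(i, key))
    image_slice.SetMetaData("0008|0012", time.strftime("%Y%m%d"))  # creation date
    image_slice.SetMetaData("0008|0013", time.strftime("%H%M%S"))  # creation time
    writer.SetFileName("{}/corrected_{:03d}.dcm".format(dicom_dir, i))
    writer.Execute(image_slice)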

OpenCV and Matplotlib show an image in different ways

I am trying to plot an image after some processing, and I get three different results using the three options below. The image is obtained by applying the Sobel filter twice to a road-lane image.
sample_image.jpg
The three methods to plot are shown in the below Python code.
import cv2
from matplotlib import pyplot as plt

img = cv2.imread('sample_image.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gaussian = cv2.GaussianBlur(gray, (3,3), 0)
sobely = cv2.Sobel(gaussian, cv2.CV_64F, 1, 0, ksize=5) # y
sobelyy = cv2.Sobel(sobely, cv2.CV_64F, 1, 0, ksize=5) # y
# method 1
cv2.imshow('sobelyy', sobelyy)
# method 2
cv2.imwrite('filtered_img1.JPG', sobelyy)
s_img = cv2.imread('filtered_img1.JPG')
cv2.imshow('s_img', s_img)
cv2.waitKey(0)
# method 3
plt.figure()
plt.imshow(sobelyy, cmap='gray')
plt.title('Filtered sobelyy image, B(x,y)'), plt.xticks([]), plt.yticks([])
plt.show()
The images I get are:
method 1
method 2
method 3
The image I want to get is the one obtained in method 3.
Why are the images shown in different ways?
How can I get to save the output image like the result of method 3?
Thank you in advance!
Why are the images shown in different ways?
OpenCV and Matplotlib use different color spaces to display images - that's why they look different even when they are actually the same.
As for your first two methods, those should actually look the same, and they do when I try out your code.
How can I get to save the output image like the result of method 3?
Matplotlib has a built-in function to write plotted images to disk; just use:
plt.savefig('your_filename.png')
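If you only want the filtered image itself, without the figure axes and title, matplotlib can also write the array directly (my addition, not part of the original answer):
# saves just the array, using the same gray colormap as plt.imshow above
plt.imsave('filtered_sobelyy.png', sobelyy, cmap='gray')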

How to adaptively split an image into regions and set a different text orientation for each one?

Input-Sample
I am trying to pre-process my images in order to improve the OCR quality. However, I am stuck with a problem.
The images I am dealing with contain different text orientations within the same image (two pages, the first oriented vertically and the second horizontally, scanned into the same image).
The text direction is automatically detected for the first part; nevertheless, the rest of the text from the other page comes out completely messed up.
I was thinking of creating a zonal template to detect the regions of interest, but I don't know how.
Alternatively, I could automatically detect the border, split the image adaptively, and then flip the split part to achieve the required result.
I could set the split at a fixed pixel height, but that height is not constant either.
from tesserocr import PyTessBaseAPI, RIL
import cv2
from PIL import Image

with PyTessBaseAPI() as api:
    filePath = r'sample.jpg'
    img = Image.open(filePath)
    api.SetImage(img)
    boxes = api.GetComponentImages(RIL.TEXTLINE, True)
    print('Found {} textline image components.'.format(len(boxes)))
    for i, (im, box, _, _) in enumerate(boxes):
        # im is a PIL image object
        # box is a dict with x, y, w and h keys
        api.SetRectangle(box['x'], box['y'], box['w'], box['h'])
        ocrResult = api.GetUTF8Text()
        conf = api.MeanTextConf()
    for box in boxes:
        box = boxes[0][1]
        x = box.get('x')
        y = box.get('y')
        h = box.get('h')
        w = box.get('w')
        cimg = cv2.imread(filePath)
        crop_img = cimg[y:y+h, x:x+w]
        cv2.imshow("cropped", crop_img)
        cv2.waitKey(0)
Output image:
As you can see, I can apply orientation detection, but I won't get any meaningful text out of such an image.
Try the Tesseract API method GetComponentImages, and then DetectOrientationScript on each component image; see the sketch below.
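A rough sketch of that idea with tesserocr, assuming your version exposes DetectOrientationScript and PSM.OSD_ONLY (both are present in recent releases, but check yours); the file name follows the question:
from tesserocr import PyTessBaseAPI, RIL, PSM
from PIL import Image

filePath = r'sample.jpg'

with PyTessBaseAPI() as api:
    api.SetImage(Image.open(filePath))
    # one component image per detected block/region
    components = api.GetComponentImages(RIL.BLOCK, True)

with PyTessBaseAPI(psm=PSM.OSD_ONLY) as osd_api:
    for im, box, _, _ in components:
        osd_api.SetImage(im)
        osd = osd_api.DetectOrientationScript()
        if osd is not None:
            # rotate this block by osd['orient_deg'] before running OCR on it
            print("block {}: orientation {} deg (confidence {:.2f})".format(
                box, osd['orient_deg'], osd['orient_conf']))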
