The goal is to identify whether an input scanned image is a passport or a PAN card, using OpenCV.
I have used the structural_similarity (compare_ssim) method of skimage to compare the input scanned image with template images of a passport and a PAN card.
But in both cases I got a low score.
Here is the code that I have tried:
from skimage.measure import compare_ssim as ssim
# in newer scikit-image versions: from skimage.metrics import structural_similarity as ssim
import matplotlib.pyplot as plt
import numpy as np
import cv2

# Load template and sample as greyscale
img1 = cv2.imread('PAN_Template.jpg', 0)
img2 = cv2.imread('PAN_Sample1.jpg', 0)

def prepare_img(im):
    size = 300, 200
    return cv2.resize(im, size)

img1 = prepare_img(img1)
img2 = prepare_img(img2)

def compare_images(imageA, imageB):
    return ssim(imageA, imageB)

score = compare_images(img1, img2)  # avoid shadowing the imported ssim
print(score)
Comparing the PAN card template with a passport, I got an SSIM score of 0.12,
and comparing the PAN card template with a PAN card, the score was 0.20.
Since both scores were very close, I was not able to distinguish between them through the code.
If anyone has any other solution or approach, please help.
Here is a sample image
PAN Scanned Image
You can also compare two images by their mean squared error (MSE).
import numpy as np

def mse(imageA, imageB):
    # the 'Mean Squared Error' between the two images is the
    # sum of the squared differences between the two images;
    # NOTE: the two images must have the same dimensions
    err = np.sum((imageA.astype("float") - imageB.astype("float")) ** 2)
    err /= float(imageA.shape[0] * imageA.shape[1])
    # return the MSE; the lower the error, the more "similar"
    # the two images are
    return err
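To turn this into a classifier, you could compare the scan against both templates (resized to the same dimensions, reusing the prepare_img helper from the question) and pick the class with the lower error. A minimal sketch, assuming a hypothetical Passport_Template.jpg alongside the PAN template:

scan = prepare_img(cv2.imread('scan.jpg', 0))
pan_tpl = prepare_img(cv2.imread('PAN_Template.jpg', 0))
passport_tpl = prepare_img(cv2.imread('Passport_Template.jpg', 0))

# Lower MSE means the scan is closer to that template
label = 'PAN' if mse(scan, pan_tpl) < mse(scan, passport_tpl) else 'Passport'
print(label)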
As per my understanding, PAN card and passport images contain different text data, so I believe OCR can solve this problem.
All you need to do is extract the text data from the images using any OCR library like Tesseract, and look for a few predefined keywords in the text data to differentiate the images.
Here is a simple Python script showing the image pre-processing and OCR using the pytesseract module:
img = cv2.imread("D:/pan.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret,th1 = cv2.threshold(gray,127,255,cv2.THRESH_BINARY)
cv2.imwrite('filterImg.png', th1)
pilImg = Image.open('filterimg.png')
text = pytesseract.image_to_string(pilImg)
print(text.encode("utf-8"))
Below is the binary image used for OCR:
I got the below string data after doing the OCR on the above image:
esraax fram EP aca ae
~ INCOME TAX DEPARTMENT Ld GOVT. OF INDIA
wrtterterad sg
Permanent Account Number. Card \xe2\x80\x98yf
KFWPS6061C
PEF vom ; ae
Reviavs /Father's Name. e.
SUDHIR SINGH : . ,
Though this text data contains noise, I believe it is more than enough to get the job done.
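Once you have the text, the keyword check itself is a couple of lines. A minimal sketch (the keyword lists are assumptions you would tune for your own documents):

pan_keywords = ['INCOME TAX', 'PERMANENT ACCOUNT']
passport_keywords = ['PASSPORT', 'REPUBLIC OF INDIA']

upper = text.upper()
if any(k in upper for k in pan_keywords):
    print('PAN card')
elif any(k in upper for k in passport_keywords):
    print('Passport')
else:
    print('Unknown document')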
Another OCR solution is to use the TextCleaner ImageMagick script from Fred's Scripts. A tutorial which explains how to install and use it (on Windows) is available here.
Script used:
C:/cygwin64/bin/textcleaner -g -e normalize -f 20 -o 20 -s 20 C:/Users/Link/Desktop/id.png C:/Users/Link/Desktop/out.png
Result:
I applied OCR on this with Tesseract (I am using version 4) and this is the result:
fart
INCOME TAX DEPARTMENT : GOVT. OF INDIA
wort cra teat ears -
Permanent Account Number Card
KFWPS6061C
TT aa
MAYANK SUDHIR SINGH el
far aT ary /Father's Name
SUDHIR SINGH
Wa RT /Date of Birth den. +
06/01/1997 genge / Signature
Code for OCR:
import cv2
from PIL import Image
import tesserocr as tr
number_ok = cv2.imread("C:\\Users\\Link\\Desktop\\id.png")
blur = cv2.medianBlur(number_ok, 1)
cv2.imshow('ocr', blur)
pil_img = Image.fromarray(cv2.cvtColor(blur, cv2.COLOR_BGR2RGB))
api = tr.PyTessBaseAPI()
try:
api.SetImage(pil_img)
boxes = api.GetComponentImages(tr.RIL.TEXTLINE, True)
text = api.GetUTF8Text()
finally:
api.End()
print(text)
cv2.waitKey(0)
Now, this doesn't directly answer your question (passport or PAN card), but it's a good point from which to start.
Doing OCR might be a solution for this type of image classification, but it might fail on blurry or poorly exposed images, and it might be slower than newer deep learning methods.
You can use object detection (TensorFlow or any other library) to train on two separate classes of image, i.e. PAN and passport. For fine-tuning pre-trained models you don't need much data either. And as per my understanding, PAN cards and passports have different background colours, so I guess it will be really accurate.
Tensorflow Object Detection: Link
Nowadays OpenCV also supports object detection without installing any new libraries (e.g. TensorFlow, Caffe, etc.). You can refer to this article for YOLO-based object detection in OpenCV.
We can use:
Histogram Comparison - the simplest and fastest method; it gives a similarity measure between the histograms of two images (a minimal sketch follows below).
Template Matching - searching for and finding the location of a template image; with this we can find smaller image parts in a bigger one (like some common patterns in a PAN card).
Feature Matching - features extracted from one image are recognised in another image, even if the image is rotated or skewed.
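Here is a minimal histogram-comparison sketch, assuming hypothetical file names; cv2.compareHist with cv2.HISTCMP_CORREL returns a value near 1.0 for similar histograms:

import cv2

def hist_similarity(path_a, path_b):
    a = cv2.imread(path_a, 0)
    b = cv2.imread(path_b, 0)
    # 256-bin greyscale histograms, normalised so image sizes don't matter
    ha = cv2.calcHist([a], [0], None, [256], [0, 256])
    hb = cv2.calcHist([b], [0], None, [256], [0, 256])
    cv2.normalize(ha, ha)
    cv2.normalize(hb, hb)
    return cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL)

print(hist_similarity('PAN_Template.jpg', 'PAN_Sample1.jpg'))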
Related
I'm currently working on cloud removals from satellite data (I'm pretty new).
This is the image I'm working on (TIFF)
And this is the mask, where black pixels represent clouds (JPG)
I'm trying to remove the clouds from the TIFF, using the mask to identify the position of the cloud, and the cloudless image itself, like this (the area is the same, but the period is different):
May I kindly ask how I can achieve that? A Python solution, with libraries like Rasterio or skimage, would be particularly appreciated.
Thanks in advance.
You can read the images with rasterio, PIL, OpenCV or tifffile, so I'll use OpenCV:
import cv2
import numpy as np
# Load the 3 images
cloudy = cv2.imread('cloudy.png')
mask = cv2.imread('mask.jpg')
clear = cv2.imread('clear.png')
Then just use Numpy where() to choose whether you want the clear or cloudy image at each location according to the mask:
res = np.where(mask<128, clear, cloudy)
Note that if your mask was a single channel PNG rather than JPEG, or if it was read as greyscale like this:
mask = cv2.imread('mask.jpg', cv2.IMREAD_GRAYSCALE)
you would have to make it broadcastable to the 3 channels of the other two arrays by adding a new axis like this:
res = np.where(mask[...,np.newaxis]<128, clear, cloudy)
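Since you mentioned Rasterio: if you need the result to keep the GeoTIFF's georeferencing, you can do the same np.where replacement on arrays read with rasterio and write the result back with the source profile. A minimal sketch, assuming all three rasters share the same grid (the file names are hypothetical):

import numpy as np
import rasterio

with rasterio.open('cloudy.tif') as src:
    cloudy = src.read()            # shape: (bands, rows, cols)
    profile = src.profile          # keeps CRS, transform, dtype
with rasterio.open('clear.tif') as src:
    clear = src.read()
with rasterio.open('mask.tif') as src:
    mask = src.read(1)             # single-band mask, shape (rows, cols)

# Broadcast the 2-D mask across the band axis
res = np.where(mask[np.newaxis, ...] < 128, clear, cloudy)

with rasterio.open('decloud.tif', 'w', **profile) as dst:
    dst.write(res)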
I have pictures containing ArUco markers but I am unable to detect all of them with the detectMarkers function. Actually, I have many pictures: in some of them I can detect all the markers, in others I cannot, and I don't really understand why.
I thought it was because of the quality of the photo, but it seems not to be so simple. Here's an example of my code:
import cv2
import matplotlib.pyplot as plt
from cv2 import aruco

aruco_dict = aruco.Dictionary_get(aruco.DICT_4X4_1000)
inputfile = 'EOS-1D-X-Mark-II_1201-ConvertImage.jpg'
frame = cv2.imread(inputfile)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
parameters = aruco.DetectorParameters_create()
corners, ids, rejectedImgPoints = aruco.detectMarkers(frame, aruco_dict, parameters=parameters)
# Drawing the rejected candidates here, to inspect what almost matched
frame_markers = aruco.drawDetectedMarkers(frame.copy(), rejectedImgPoints)
plt.figure(figsize=(20, 10))
plt.imshow(frame_markers)
for i in range(len(ids)):
    c = corners[i][0]
    plt.plot([c[:, 0].mean()], [c[:, 1].mean()], "o", label="id={0}".format(ids[i]))
plt.legend()
plt.show()
In this picture, 1 marker is not detected and I don't understand why.
I tried to tune the parameters of the detectMarkers function manually with an interactive method in a Jupyter notebook (a sketch of that kind of tuning is shown below). There are many parameters, and I found nothing that really helped, except that in some photos reducing polygonalApproxAccuracyRate made a difference.
The photo is originally 5472 x 3648 pixels, but the one I posted here is 2189 x 1459 pixels. Note that it doesn't work with the higher resolution either. Actually, I found in some photos that reducing the resolution helps to detect the markers... It's a contradiction, but I think this is because the default parameters of the function are not adapted to my pictures; however, I found no solution when tuning the parameters.
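For reference, a minimal sketch of that kind of tuning (the values are starting points to experiment with, not recommendations):

parameters = aruco.DetectorParameters_create()
# Stricter polygonal approximation; lowering this helped on some of my photos
parameters.polygonalApproxAccuracyRate = 0.02   # default is 0.03
# Another knob worth trying: widen the adaptive-threshold window range
parameters.adaptiveThreshWinSizeMax = 45        # default is 23
corners, ids, rejectedImgPoints = aruco.detectMarkers(
    frame, aruco_dict, parameters=parameters)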
Another idea is to use the refineDetectedMarkers function after calling detectMarkers. It uses the candidates that were found by detectMarkers but failed to be identified, and tries to refine their identification. However, as far as I understood, I need to know where my markers should be in the picture and pass that to refineDetectedMarkers (as a board). In my situation, I don't know where the markers should be, otherwise I wouldn't take photos. The photos are used to observe precisely the evolution of their positions.
I am interested in any ideas you may have, thanks for reading !
I'm currently trying to perform a polar-to-Cartesian coordinate image transformation, to display a raw sonar image as a 'fan display'.
Initially I have a Numpy Array image of type np.float64, that can be seen below:
After doing some searching, I came across this StackOverflow post Inverse transform an image from Polar to Cartesian in OpenCV with a very similar problem, in which the poster seemed to have solved his/her issue by using the Python Wand library (http://docs.wand-py.org/en/0.5.9/index.html), specifically using their set of Distortion functions.
However, when I tried to use Wand to read the image in, I instead ended up with Wand producing the image below, which seems to be smaller than the original one. The weird thing is that img.size still gives the same size numbers as the original image's shape.
The code for this transformation can be seen below:
print(raw_img.shape)  #=> (369, 256)
wand_img = Image.from_array(raw_img.astype(np.uint8), channel_map="I")
display(wand_img)
print("Current image size", wand_img.size)  #=> "Current image size (369, 256)"
This is definitely quite problematic as Wand will automatically give the wrong 'fan image'. Is anybody familiar with this kind of problem with the Wand library previously, and if yes, may I ask what is the recommended solution to fix this issue?
If this issue isn't resolved soon, I have a backup alternative: OpenCV's cv::remap function (https://docs.opencv.org/4.1.2/da/d54/group__imgproc__transform.html#ga5bb5a1fea74ea38e1a5445ca803ff121). The problem with this is that I'm not sure what mapping arrays (i.e. map_x and map_y) to use to perform the polar-to-Cartesian transformation, as a mapping matrix that implements the transformation equations below:
r = polar_distances(raw_img)
x = r * cos(theta)
y = r * sin(theta)
didn't seem to work and instead threw out errors from OpenCV as well.
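For what it's worth, here is a minimal sketch of how the map_x/map_y arrays for cv2.remap could be built for a polar-to-Cartesian fan. Note that remap works backwards: for every output (Cartesian) pixel you compute which source (polar) pixel it came from. The geometry assumed here (rows = range bins, columns = beams, a hypothetical 120-degree field of view) would need adjusting to the actual sonar:

import cv2
import numpy as np

n_ranges, n_beams = raw_img.shape          # e.g. (369, 256)
fov = np.deg2rad(120.0)                    # assumed field of view

# Cartesian output canvas, apex of the fan at the top centre
out_w, out_h = 2 * n_ranges, n_ranges
X, Y = np.meshgrid(np.arange(out_w) - out_w / 2.0, np.arange(out_h))

# Back-map each output pixel to (range, angle) in the source
r = np.sqrt(X ** 2 + Y ** 2)
theta = np.arctan2(X, Y)                   # angle from the fan axis

# Fractional source coordinates: map_x = beam index, map_y = range bin
map_x = ((theta + fov / 2) / fov * (n_beams - 1)).astype(np.float32)
map_y = r.astype(np.float32)

fan = cv2.remap(raw_img.astype(np.float32), map_x, map_y,
                interpolation=cv2.INTER_LINEAR,
                borderMode=cv2.BORDER_CONSTANT, borderValue=0)

Newer OpenCV builds also have cv2.warpPolar with the cv2.WARP_INVERSE_MAP flag, which may cover the simple polar-to-Cartesian case without hand-built maps.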
Any kind of help and insight into this issue is greatly appreciated. Thank you!
- NickS
EDIT: I've tried with another example image as well, and it still shows a similar problem. First, I imported the image into Python using OpenCV, using these lines of code:
import matplotlib.pyplot as plt
from wand.image import Image
from wand.display import display
import cv2
img = cv2.imread("Test_Img.jpg")
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.figure()
plt.imshow(img_rgb)
plt.show()
which showed the following display as a result:
However, when I continued and tried to open the img_rgb array with Wand, using the code below:
wand_img = Image.from_array(img_rgb)
display(wand_img)
I'm getting the following result instead.
I tried to open the image file directly with wand.image.Image(), and that displays the image correctly with the display() function, so I believe there isn't anything wrong with the Wand installation on the system.
Is there a step I'm missing to convert the NumPy array into a Wand Image? If so, what would it be, and what is the suggested method to do so?
Please keep in mind that the conversion from NumPy to Wand Image is crucial here: the raw sonar images are stored as binary data, hence the need for NumPy to turn them into proper images.
Is there a step I'm missing to convert the NumPy array into a Wand Image?
No, but there is a bug in Wand's Numpy implementation in Wand 0.5.x. The shape of OpenCV's ndarray is (ROWS, COLUMNS, CHANNELS), but Wand's ndarray is (WIDTH, HEIGHT, CHANNELS). I believe this has been fixed for the future 0.6.x releases.
If so, what would it be and what is the suggested method to do so?
Swap the values in img_rgb.shape before passing to Wand.
img_rgb.shape = (img_rgb.shape[1], img_rgb.shape[0], img_rgb.shape[2],)
with Image.from_array(img_rgb) as img:
    display(img)
I'm trying to open a SAR image from Sentinel-1. I can view the TIFF file in QGIS, so I know the data is there, but when I open and view it in Python, every module I tried produces an area of NaNs, basically insinuating that there is no data in the image. Visualizing the image produces a completely black image, although the shape is correct.
Here is the code where I read in the image:
import skimage.io
import rasterio as rio

img = skimage.io.imread('NewData.tif', as_gray=True, plugin='tifffile')
with rio.open(r'NewData.tif') as src:
    img2 = src.read()
    imgMeta = src.profile
print(img)
skimage.io.imshow(img)
Any help would be appreciated.
Thank you.
The problem is not in the way rasterio or skimage imports the image, but in the way it is displayed. I am assuming you are working with calibrated SAR images that are NOT converted to the decibel (dB) scale. Here is the problem: the dynamic range of your data.
The issue is that, by default, the colour ramp is not stretched according to the distribution of values in the raster histogram. In QGIS, SNAP and many other EO-related software packages, the colour distribution is matched to the histogram to produce proper visualizations.
Solution: either you make that happen in your code, or you simply convert your backscatter values to decibels (which is a very common procedure when working with SAR data and produces an almost normal distribution of the data). The conversion can be done in EO software, or more directly on your imported image with:
srcdB = 10 * np.log10(img2)
Once done, you can properly display your image:
import rasterio as rio
from rasterio.plot import show
import numpy as np

with rio.open(r'/.../S1B_IW_GRDH_1SDV_20190319T161451_20190319T161520_015425_01CE3C_A401_Cal.tif') as src:
    img2 = src.read()
    imgMeta = src.profile

srcdB = 10 * np.log10(img2)  # convert backscatter to decibels
show(srcdB, cmap='gray')  # show using rasterio
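One caveat: pixels with a backscatter value of exactly zero (e.g. nodata borders) become -inf under log10 and can again break the display. A common precaution, sketched here, is to mask or floor them first:

safe = np.where(img2 > 0, img2, np.nan)  # or floor with np.clip(img2, 1e-6, None)
srcdB = 10 * np.log10(safe)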
Hi, I am trying to extract only the handwritten data from an image. For that, I took an empty form and a filled one, and I am using ImageChops.difference to get the data out of it.
The problem right now is the alignment of the images: the two scans are not equally aligned, so the results are not correct.
from PIL import Image, ImageChops

def compare_images(path_one, path_two, diff_save_location):
    """
    Compares two images and saves a diff image, if there
    is a difference.

    :param path_one: The path to the first image
    :param path_two: The path to the second image
    """
    image_one = Image.open(path_one).convert('LA')
    image_two = Image.open(path_two).convert('LA')
    diff = ImageChops.difference(image_one, image_two)
    if diff.getbbox():
        diff.convert('RGB').save(diff_save_location)

if __name__ == '__main__':
    compare_images('images/blank.jpg',
                   'images/filled.jpg',
                   'images/diff.jpg')
This is the result which I got.
the result which I am looking for:
Can anyone help me with this?
Thanks.
This site may be helpful: https://www.learnopencv.com/image-alignment-feature-based-using-opencv-c-python/ . The main idea is to first detect keypoints in both images using SIFT, SURF or another algorithm; then match the keypoints from the empty image with the keypoints from the handwritten image to get a homography matrix; then use this matrix to align the two images.
After image alignment, post-processing may be needed due to illumination differences or noise.
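A minimal sketch of that pipeline, using ORB (which ships with stock OpenCV; SIFT/SURF may require opencv-contrib). The file names are the ones from the question; the match ratio and RANSAC threshold are assumptions to tune:

import cv2
import numpy as np

blank = cv2.imread('images/blank.jpg', cv2.IMREAD_GRAYSCALE)
filled = cv2.imread('images/filled.jpg', cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and descriptors in both images
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(filled, None)
kp2, des2 = orb.detectAndCompute(blank, None)

# Match descriptors and keep only the best matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
good = matches[:int(len(matches) * 0.15)]

# Estimate a homography from the matched keypoints
src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

# Warp the filled form onto the blank form's coordinate frame
h, w = blank.shape
aligned = cv2.warpPerspective(filled, H, (w, h))

# Now the per-pixel difference isolates the handwriting
diff = cv2.absdiff(blank, aligned)
cv2.imwrite('images/diff_aligned.jpg', diff)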