Win32 Excel: Resize image to cell maintaining aspect ratio

I'm writing a script that, given an image and its containing cell, should resize the image to fit that cell while preserving its aspect ratio.
The function takes 3 parameters: the image, the width of the cell, and the height of the cell.
I've written the following function so far:
def fitToCell(img, width, height):
    ratioX = width / img.Width
    ratioY = height / img.Height
    if ratioX * img.Height > height:
        ratio = ratioY
    else:
        ratio = ratioX
    img.Width = img.Width * ratio
    img.Height = img.Height * ratio
    return img
My logging gives me the following result:
Original Image: (48.0, 35.26598358154297)
Desired Image: (100, 100)
Ratios: (2.0833333333333335, 2.8355936753834547)
Final Ratio: 2.0833333333333335
Final Image: (208.33331298828125, 153.0640869140625)
This doesn't seem to make any sense to me, as 2.0833 * 48 ≈ 100, not 208. The ratio should produce the output (100, 73).
I'm using the win32com API with Python on .xlsx files. I would really appreciate any suggestions.

This is my modified code, which achieves the desired result.
My build environment is Windows 10 with Python 3.7.
from PIL import Image

def fitToCell(img, width, height):
    try:
        ratioX = width / img.width
        ratioY = height / img.height
        if ratioX * img.height > height:
            ratio = ratioY
        else:
            ratio = ratioX
        Width = img.width * ratio
        Height = img.height * ratio
        print("Original Image :", img.width, img.height)
        print("Ratios :", ratioX, ratioY)
        print("Final Ratio :", ratio)
        print("Final Image :", Width, Height)
    except IOError:
        pass

if __name__ == "__main__":
    img = Image.open("panda.jpg")
    fitToCell(img, 400, 400)
Debug Result:
Original Image : 268 304
Ratios : 1.492537313432836 1.3157894736842106
Final Ratio : 1.3157894736842106
Final Image : 352.63157894736844 400.0
The difference is that I use PIL's Image module instead of the win32com shape object.
The Image module provides a class with the same name which is used to
represent a PIL image. The module also provides a number of factory
functions, including functions to load images from files, and to
create new images.
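A note on why the original win32com version produced (208.3, 153.1): those numbers are consistent with the shape's aspect-ratio lock being enabled. When Shape.LockAspectRatio is true, Excel rescales the other dimension on every assignment, so the two assignments compound: setting the width to 48 * 2.0833 = 100 also pushes the height to about 73.5, and then setting the height to 73.5 * 2.0833 ≈ 153.1 drags the width back up to about 208.3. A minimal sketch of a fix, assuming img is an Excel Shape obtained through win32com:

def fitToCell(img, width, height):
    # Disable the lock so the two assignments don't interact;
    # the aspect ratio is preserved by using a single scale factor.
    img.LockAspectRatio = False  # msoFalse
    ratio = min(width / img.Width, height / img.Height)
    img.Width = img.Width * ratio
    img.Height = img.Height * ratio
    return img

Taking min() of the two ratios is equivalent to the original if/else branch: it picks whichever scale factor keeps both dimensions inside the cell.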

Related

How to crop a square image from normalized vertices

I'm using this code to identify tops and bottoms of photographs:
(As of now I only have it working for tops; one thing at a time ;) )
import argparse
import io
import os

from google.cloud import vision
from google.cloud.vision import types
from PIL import Image

def get_file(path):
    client = vision.ImageAnnotatorClient()
    for images in os.listdir(path):
        # Loads the image into memory
        with io.open(images, "rb") as image_file:
            content = image_file.read()
        image = types.Image(content=content)
        objects = client.object_localization(image=image).localized_object_annotations
        im = Image.open(images)
        width, height = im.size
        print("Number of objects found: {}".format(len(objects)))
        for object_ in objects:
            if object_.name == "Top":
                print("Top")
                l1 = object_.bounding_poly.normalized_vertices[0].x
                l2 = object_.bounding_poly.normalized_vertices[0].y
                l3 = object_.bounding_poly.normalized_vertices[2].x
                l4 = object_.bounding_poly.normalized_vertices[3].y
                left = l1 * width
                top = l2 * height
                right = l3 * width
                bottom = l4 * height
                im = im.crop((left, top, right, bottom))
                im.save('new_test_cropped.tif', 'tiff')
                im.show()

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Script to automatically crop images based on google vision predictions of 'tops' and 'bottoms'")
    parser.add_argument('--path', help='Include the path to the images folder')
    args = parser.parse_args()
    get_file(args.path)
The images are opened, clothing is identified, and then the images are cropped and saved to a new file. (Granted, as of now they are being overwritten within the loop, but I'll fix that later.)
What I can't figure out is how to make the crop a 1:1 ratio. I need to save them out square-cropped to be put on our website.
I'll be honest, the normalized_vertices make no sense to me, hence why I'm having trouble.
Starting image:
Output:
Desired Output:
"Normalized" means the coordinates are divided by the width or height of the image, so normalized coordinates [1, 0.5] would indicate all the way (1) across the image and halfway down (0.5).
For a 1:1 aspect ratio you want right - left to be equal to bottom - top. So you want to find out which dimension (width or height) you need to increase, and by how much.
height = abs(top - bottom)
width = abs(right - left)
extrawidth = max(0, height - width)
extraheight = max(0, width - height)
If height > width, we want to increase width but not height. Since height - width > 0, the correct value will go into extrawidth. But because width - height < 0, extraheight will be 0.
Now let's say we want to increase the dimensions of our image symmetrically around the original crop rectangle.
top -= extraheight // 2
bottom += extraheight // 2
left -= extrawidth // 2
right += extrawidth // 2
And finally, do the crop:
im = im.crop((left, top, right, bottom))
For your image, let's say you get left = 93, right = 215, top = 49, and bottom = 205.
Before:
After:
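Putting the steps together, here's a minimal self-contained sketch (the helper name square_crop and the file names are mine; the box values are the example ones above):

from PIL import Image

def square_crop(im, left, top, right, bottom):
    # Expand the box symmetrically along its shorter side to a 1:1
    # ratio, then crop. Note Image.crop pads with black if the
    # expanded box runs past the edge of the image.
    height = abs(top - bottom)
    width = abs(right - left)
    extrawidth = max(0, height - width)
    extraheight = max(0, width - height)
    return im.crop((left - extrawidth // 2,
                    top - extraheight // 2,
                    right + extrawidth // 2,
                    bottom + extraheight // 2))

im = Image.open("new_test.tif")
square_crop(im, 93, 49, 215, 205).save("new_test_cropped.tif", "tiff")

With the example values this yields the 156-by-156 square crop (76, 49, 232, 205).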

How to find orientation of an object in image?

I have a bunch of images of gears, all in different orientations, and I need them all in the same orientation. That is, there is one reference image, and the rest of the images should be rotated so they match it. I followed these steps: first segment the gear, then try to find an angle using moments, but it isn't working correctly. I've attached the 3 images, with the first image as the reference, and here's the code so far:
import cv2
import imutils
import numpy as np
from math import atan2, pi

def adjust_gamma(image, gamma=1.0):
    invGamma = 1.0 / gamma
    table = np.array([((i / 255.0) ** invGamma) * 255
                      for i in np.arange(0, 256)]).astype("uint8")
    return cv2.LUT(image, table)

def unsharp_mask(image, kernel_size=(13, 13), sigma=1.0, amount=2.5, threshold=10):
    """Return a sharpened version of the image, using an unsharp mask."""
    blurred = cv2.GaussianBlur(image, kernel_size, sigma)
    sharpened = float(amount + 1) * image - float(amount) * blurred
    sharpened = np.maximum(sharpened, np.zeros(sharpened.shape))
    sharpened = np.minimum(sharpened, 255 * np.ones(sharpened.shape))
    sharpened = sharpened.round().astype(np.uint8)
    if threshold > 0:
        low_contrast_mask = np.absolute(image - blurred) < threshold
        np.copyto(sharpened, image, where=low_contrast_mask)
    return sharpened

def find_orientation(cont):
    m = cv2.moments(cont, True)
    cen_x = m['m10'] / m['m00']
    cen_y = m['m01'] / m['m00']
    m_11 = 2 * m['m11'] - m['m00'] * (cen_x * cen_x + cen_y * cen_y)
    m_02 = m['m02'] - m['m00'] * cen_y * cen_y
    m_20 = m['m20'] - m['m00'] * cen_x * cen_x
    theta = 0 if m_20 == m_02 else atan2(m_11, m_20 - m_02) / 2.0
    theta = theta * 180 / pi
    return (cen_x, cen_y, theta)

def rotate_image(img, angles):
    height, width = img.shape[:2]
    rotation_matrix = cv2.getRotationMatrix2D((width / 2, height / 2), angles, 1)
    rotated_image = cv2.warpAffine(img, rotation_matrix, (width, height))
    return rotated_image

img = cv2.imread('gear1.jpg')
resized_img = imutils.resize(img, width=540)
height, width = resized_img.shape[:2]
gamma_adjusted = adjust_gamma(resized_img, 2.5)
sharp = unsharp_mask(gamma_adjusted)
gray = cv2.cvtColor(sharp, cv2.COLOR_BGR2GRAY)
gauss_blur = cv2.GaussianBlur(gray, (13, 13), 2.5)
ret, thresh = cv2.threshold(gauss_blur, 250, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
thresh = cv2.morphologyEx(thresh, cv2.MORPH_DILATE, kernel, iterations=2)
kernel = np.ones((3, 3), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)
contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[0]
cen_x, cen_y, theta = find_orientation(contours[0])
reference_angle = -24.14141919602858
rot_angle = 0.0
if theta < reference_angle:
    rot_angle = -(theta - reference_angle)
else:
    rot_angle = (reference_angle - theta)
rot_img = rotate_image(resized_img, rot_angle)
Can anyone tell me where I went wrong? Any help would be appreciated.
Binarization of the gear and the holes seems easy. You should be able to discriminate the holes from noise and extra small features.
First find the geometric center and sort the holes by angle around it; also compute the area of each hole. Then you can try to match the holes to the model cyclically: there are 20 holes, so you only need to test 20 starting positions. Score each candidate match by some combination of the differences in angle and area; the best match tells you the orientation (a sketch follows below).
This should be very reliable.
You can obtain a very accurate value of the angle by computing the average error per hole and correcting to cancel that value (this is equivalent to least-squares fitting).
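A minimal sketch of that matching step, assuming you have already extracted an (angle, area) pair per hole for both the model and the test image, each list pre-sorted by angle around the center (the function name best_cyclic_match is mine):

import numpy as np

def best_cyclic_match(model_angles, model_areas, angles, areas, area_weight=1.0):
    # Try every cyclic alignment of the detected holes against the
    # model and return the estimated rotation (degrees) of the best one.
    n = len(model_angles)
    best_cost, best_rotation = np.inf, 0.0
    for shift in range(n):
        # angular offset of each hole from its model counterpart, wrapped to [-180, 180)
        diff = (np.roll(angles, -shift) - model_angles + 180.0) % 360.0 - 180.0
        rotation = diff.mean()  # average error per hole, i.e. the least-squares rotation
        cost = np.sum((diff - rotation) ** 2) \
             + area_weight * np.sum((np.roll(areas, -shift) - model_areas) ** 2)
        if cost < best_cost:
            best_cost, best_rotation = cost, rotation
    return best_rotation

The returned angle can then be passed to rotate_image to align the gear with the reference.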

Centering a rotated image using Reportlab

I'm trying to center a rotated image on Reportlab, but I'm having trouble getting the placement calculation right.
Here's the current code:
from reportlab.pdfgen import canvas
from reportlab.lib.utils import ImageReader
from PIL import Image as PILImage
import requests
import math

def main(rotation):
    # create a new PDF with Reportlab
    a4 = (595.275590551181, 841.8897637795275)
    c = canvas.Canvas('output.pdf', pagesize=a4)
    c.saveState()
    # loading the image:
    img = requests.get('https://i.stack.imgur.com/dI5Rj.png', stream=True)
    img = PILImage.open(img.raw)
    width, height = img.size
    # we calculate the bounding box of the rotated rectangle
    angle_radians = rotation * (math.pi / 180)
    bounding_height = abs(width * math.sin(angle_radians)) + abs(height * math.cos(angle_radians))
    bounding_width = abs(width * math.cos(angle_radians)) + abs(height * math.sin(angle_radians))
    a4_pixels = [x * (100 / 75) for x in a4]
    offset_x = (a4_pixels[0] / 2) - (bounding_width / 2)
    offset_y = (a4_pixels[1] / 2) - (bounding_height / 2)
    c.translate(offset_x, offset_y)
    c.rotate(rotation)
    c.drawImage(ImageReader(img), 0, 0, width, height, 'auto')
    c.restoreState()
    c.save()

if __name__ == '__main__':
    main(45)
So far, here's what I did:
Calculating the bounding box of the rotated rectangle (since it will be bigger than the image itself)
Using it to calculate the position of the image (page_size / 2 - bounding_size / 2) for width and height.
Two issues appear that I can't explain:
The "a4" variable is in points, while everything else is in pixels. If I convert it to pixels for the position calculation (which seems logical, using a4_pixels = [x * (100 / 75) for x in a4]), the placement is incorrect for a rotation of 0 degrees; yet if I keep a4 in points, it works...?
If I change the rotation, it breaks even more.
So my final question: How can I calculate the offset_x and offset_y values to ensure it's always centered regardless of the rotation?
Thank you! :)
When you translate the canvas, you are literally moving the origin (0,0) point and all draw operations will be relative to that.
So in the code below, I moved the origin to the middle of the page.
Then I rotated the "page" and drew the image on the "page". There is no need to pre-rotate the image, since the canvas axes themselves have rotated; drawing it at (-width/2, -height/2) centers it on the (rotated) origin.
from reportlab.pdfgen import canvas
from reportlab.lib.utils import ImageReader
from reportlab.lib.pagesizes import A4
from PIL import Image as PILImage
import requests

def main(rotation):
    c = canvas.Canvas('output.pdf', pagesize=A4)
    c.saveState()
    # loading the image:
    img = requests.get('https://i.stack.imgur.com/dI5Rj.png', stream=True)
    img = PILImage.open(img.raw)
    # the image dimensions in pixels, used directly as points on the canvas
    width, height = img.size
    # now move the canvas origin to the middle of the page
    c.translate(A4[0] / 2, A4[1] / 2)
    # and rotate it
    c.rotate(rotation)
    # now draw the image centered on the new origin
    c.drawImage(ImageReader(img), -width/2, -height/2, width, height, 'auto')
    c.restoreState()
    c.save()

if __name__ == '__main__':
    main(45)

Resizing image and its bounding box

I have an image with a bounding box in it, and I want to resize the image.
img = cv2.imread("img.jpg", 3)
x_ = img.shape[0]
y_ = img.shape[1]
img = cv2.resize(img, (416, 416))
Now I want to calculate the scale factor:
x_scale = 416 / x_
y_scale = 416 / y_
And draw the box; this is the original bounding box:
(xmin, ymin, xmax, ymax) = (128, 25, 447, 375)
x = int(np.round(128 * x_scale))
y = int(np.round(25 * y_scale))
xmax = int(np.round(447 * x_scale))
ymax = int(np.round(375 * y_scale))
However using this I get:
While the original is:
I don't see any flaw in this logic; what's wrong?
Whole code:
imageToPredict = cv2.imread("img.jpg", 3)
print(imageToPredict.shape)
x_ = imageToPredict.shape[0]
y_ = imageToPredict.shape[1]
x_scale = 416 / x_
y_scale = 416 / y_
print(x_scale, y_scale)
img = cv2.resize(imageToPredict, (416, 416))
img = np.array(img)
x = int(np.round(128 * x_scale))
y = int(np.round(25 * y_scale))
xmax = int(np.round(447 * x_scale))
ymax = int(np.round(375 * y_scale))
Box.drawBox([[1, 0, x, y, xmax, ymax]], img)
and drawBox:
def drawBox(boxes, image):
    for i in range(0, len(boxes)):
        cv2.rectangle(image, (boxes[i][2], boxes[i][3]), (boxes[i][4], boxes[i][5]), (0, 0, 120), 3)
    cv2.imshow("img", image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
The image and the data for the bounding box are loaded separately. I am drawing the bounding box inside the image. The image does not contain the box itself.
I believe there are two issues:
You should swap x_ and y_, because shape[0] is actually the y-dimension and shape[1] is the x-dimension.
You should use the same coordinates on the original and scaled image. On your original image the rectangle is (160, 35) - (555, 470) rather than (128,25) - (447,375) that you use in the code.
If I use the following code:
import cv2
import numpy as np

def drawBox(boxes, image):
    for i in range(0, len(boxes)):
        # changed color and width to make it visible
        cv2.rectangle(image, (boxes[i][2], boxes[i][3]), (boxes[i][4], boxes[i][5]), (255, 0, 0), 1)
    cv2.imshow("img", image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

def cvTest():
    # imageToPredict = cv2.imread("img.jpg", 3)
    imageToPredict = cv2.imread("49466033\\img.png", 3)
    print(imageToPredict.shape)
    # Note: flipped compared to your original code!
    # x_ = imageToPredict.shape[0]
    # y_ = imageToPredict.shape[1]
    y_ = imageToPredict.shape[0]
    x_ = imageToPredict.shape[1]
    targetSize = 416
    x_scale = targetSize / x_
    y_scale = targetSize / y_
    print(x_scale, y_scale)
    img = cv2.resize(imageToPredict, (targetSize, targetSize))
    print(img.shape)
    img = np.array(img)
    # original frame as named values
    (origLeft, origTop, origRight, origBottom) = (160, 35, 555, 470)
    x = int(np.round(origLeft * x_scale))
    y = int(np.round(origTop * y_scale))
    xmax = int(np.round(origRight * x_scale))
    ymax = int(np.round(origBottom * y_scale))
    # Box.drawBox([[1, 0, x, y, xmax, ymax]], img)
    drawBox([[1, 0, x, y, xmax, ymax]], img)

cvTest()
and use your "original" image as "49466033\img.png",
I get the following image.
And as you can see, my thinner blue line lies exactly inside your original red line, and it stays there whatever targetSize you choose (so the scaling actually works correctly).
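The same fix generalizes into a small helper; a minimal sketch (the function name scale_bbox is mine):

def scale_bbox(bbox, orig_size, new_size):
    # Scale an (xmin, ymin, xmax, ymax) box from an image of
    # orig_size = (width, height) to one of new_size = (width, height).
    x_scale = new_size[0] / orig_size[0]
    y_scale = new_size[1] / orig_size[1]
    xmin, ymin, xmax, ymax = bbox
    return (round(xmin * x_scale), round(ymin * y_scale),
            round(xmax * x_scale), round(ymax * y_scale))

# e.g. the box above, from the original image to 416 x 416:
# h, w = imageToPredict.shape[:2]
# scale_bbox((160, 35, 555, 470), (w, h), (416, 416))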
Another way of doing this is to use the Chitra library (pip install chitra; docs at https://chitra.readthedocs.io/en/latest/):

image = Chitra(img_path, box, label)
# Chitra can rescale your bounding box automatically based on the new image size.
image.resize_image_with_bbox((224, 224))
print('rescaled bbox:', image.bounding_boxes)
plt.imshow(image.draw_boxes())
I encountered an issue with bounding-box coordinates in Angular when using TensorFlow.js and MobileNet-v2 for prediction: the coordinates were based on the resolution of the video frame, but I was displaying the video on a canvas with a fixed height and width. I resolved the issue by dividing the coordinates by the ratio of the original video resolution to the resolution of the canvas.
const x = prediction.bbox[0] / (this.Owidth / 300);
const y = prediction.bbox[1] / (this.Oheight / 300);
const width = prediction.bbox[2] / (this.Owidth / 300);
const height = prediction.bbox[3] / (this.Oheight / 300);
// Draw the bounding box.
ctx.strokeStyle = '#99ff00';
ctx.lineWidth = 2;
ctx.strokeRect(x, y, width, height);
this.Owidth and this.Oheight are the original resolution of the video. They are obtained by:
this.video.addEventListener(
  'loadedmetadata',
  (e: any) => {
    this.Owidth = this.video.videoWidth;
    this.Oheight = this.video.videoHeight;
    console.log(this.Owidth, this.Oheight, ' pixels ');
  },
  false
);
300 x 300 is my static canvas width and height.
You can use resize_dataset_pascalvoc. It's easy to use:
python3 main.py -p <IMAGES_&_XML_PATH> --output <IMAGES_&_XML> --new_x <NEW_X_SIZE> --new_y <NEW_Y_SIZE> --save_box_images <FLAG>
It resizes your whole dataset and rewrites new annotation files for the resized images.

Pillow create thumbnail by cropping instead of preserving aspect ratio

By default, the thumbnail method preserves the aspect ratio, which may result in inconsistently sized thumbnails.
Image.thumbnail(size, resample=3)
Make this image into a thumbnail. This method modifies the image to contain a thumbnail version of itself, no larger than the given size. This method calculates an appropriate thumbnail size to preserve the aspect of the image, calls the draft() method to configure the file reader (where applicable), and finally resizes the image.
I want it to crop the image so that the thumbnail fills the entire canvas provided, such that the image isn't warped. For example, Image.thumbnail((200, 200), Image.ANTIALIAS) on an image that's 640 by 480 would crop to the center 480 by 480, then scale to exactly 200 by 200 (not 199 or 201). How can this be done?
This is easiest to do in two stages: crop first, then generate a thumbnail. Cropping to a given aspect ratio is common enough that there really should be a function for it in Pillow, but as far as I know there isn't. Here's a simple implementation, monkey-patched onto the Image class:
from PIL import Image

class _Image(Image.Image):

    def crop_to_aspect(self, aspect, divisor=1, alignx=0.5, aligny=0.5):
        """Crops an image to a given aspect ratio.

        Args:
            aspect (float): The desired aspect ratio.
            divisor (float): Optional divisor. Allows passing in (w, h) pair as the first two arguments.
            alignx (float): Horizontal crop alignment from 0 (left) to 1 (right).
            aligny (float): Vertical crop alignment from 0 (top) to 1 (bottom).

        Returns:
            Image: The cropped Image object.
        """
        if self.width / self.height > aspect / divisor:
            newwidth = int(self.height * (aspect / divisor))
            newheight = self.height
        else:
            newwidth = self.width
            newheight = int(self.width / (aspect / divisor))
        img = self.crop((alignx * (self.width - newwidth),
                         aligny * (self.height - newheight),
                         alignx * (self.width - newwidth) + newwidth,
                         aligny * (self.height - newheight) + newheight))
        return img

Image.Image.crop_to_aspect = _Image.crop_to_aspect
Given this, you can just write:
cropped = img.crop_to_aspect(200, 200)
cropped.thumbnail((200, 200), Image.ANTIALIAS)
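A quick check of the 640-by-480 example from the question, as a sketch that assumes the monkey-patch above has been applied (note that Pillow 10 removed Image.ANTIALIAS; Image.LANCZOS is the equivalent filter):

from PIL import Image

img = Image.new("RGB", (640, 480))
cropped = img.crop_to_aspect(1)  # center 480 x 480 crop
cropped.thumbnail((200, 200), Image.LANCZOS)
print(cropped.size)  # -> (200, 200)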
