Understanding transforms: Resize and CenterCrop with the same size - PyTorch

I am trying to understand this particular set of compose transforms:
transform = transforms.Compose([
    transforms.Resize((224, 224), interpolation=torchvision.transforms.InterpolationMode.BICUBIC),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])
Does it make sense (and is it legit) to do a center crop after a resize with the same size parameter? I would have thought the resize itself gives the center crop, but I see in the repos that CenterCrop is composed after Resize, both with exactly the same sizes. I wonder what the use of doing such a thing is. For the sake of completeness, I would like to add that my input image sizes vary (i.e. they are not all of the same dimensions).
thanks!

I would have thought resize itself is giving the center crop.
T.Resize won't center crop your image; it only rescales it, so the original center remains at the center. Note that with a tuple size, T.Resize((224, 224)) forces both dimensions to 224, so the aspect ratio is not preserved. Applying a T.CenterCrop of the same shape as the resized image - since it comes just after the resize - makes no difference, because you are cropping nothing out of the image. The composition only becomes meaningful with a single-int resize, T.Resize(224), which scales the shorter side to 224 and keeps the aspect ratio; T.CenterCrop(224) then crops the longer side down to a square.
If you change the size of your T.CenterCrop, then both it and the order in which you apply the two transforms will matter greatly.
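To make the difference concrete, here is a minimal pure-Python sketch of the output shapes. No torchvision is needed; the functions below are my own names and simply mirror torchvision's documented size semantics for Resize and CenterCrop:

```python
# Shape arithmetic mirroring torchvision's documented Resize/CenterCrop
# semantics; sizes are (height, width). Function names are hypothetical.

def resize_exact(h, w, size):
    # T.Resize((size, size)): both sides forced, aspect ratio is distorted.
    return (size, size)

def resize_shorter(h, w, size):
    # T.Resize(size) with a single int: the SHORTER side becomes `size`,
    # the other side scales proportionally.
    if h <= w:
        return (size, round(w * size / h))
    return (round(h * size / w), size)

def center_crop(h, w, size):
    # T.CenterCrop(size): a size x size window around the center
    # (torchvision pads first if the image is smaller).
    return (min(h, size), min(w, size))

# A 480x640 input:
print(center_crop(*resize_exact(480, 640, 224), 224))    # (224, 224), crop is a no-op
print(resize_shorter(480, 640, 224))                     # (224, 299), aspect kept
print(center_crop(*resize_shorter(480, 640, 224), 224))  # (224, 224), crop now matters
```

With the tuple resize the crop removes nothing; with the single-int resize the crop is what finally squares the image.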

Related

Godot pixelated image

Godot 2D project: I created a 640×640 PNG using GIMP.
I imported those PNGs to Godot as Texture2D.
After setting the scale to 0.1, I resized those images to 64×64 in the Godot scene.
When I instantiate this image in my main scene, I get this disgusting pixelated result.
Edit: Don't be confused by the rotated red wings; I did that at runtime. It's not part of the question.
My window size is 1270×780.
Stretch Mode is viewport.
I tried changing import settings, etc.
I wonder, is it not possible to have a sleek image at these sizes?
Disclaimer: I haven’t bothered to fire up Godot to reproduce your problem.
I suspect you are shooting yourself in the foot with that scale-0.1 bit. Remember, every time you resample (scale) an image there is loss.
There are three things to do:
Prepare your image(s) to have the same size (resolution) as your intended target display. In other words, if your image is going to display at 64×64 pixels, then your source image should be 64×64 pixels.
When importing images, make sure that Filter is set to ☑ On. If your image contains alpha, you may also wish to check the Fix Alpha Border flag.
Perform as few resampling operations as possible to display the image. In your particular case, you are resampling it to a tenth of its size before again resampling it up to the displayed size. Don’t do that. (This is probably the main cause of your problem.) Just make your sprite have the natural size of the image, and scale the sprite only if necessary.
It is also possible that you are applying some other filter or that your renderer has a bad resampler attached to it, but your problem description does not lead me to think either of these are likely.
A warning ahead: I'm not into godot at all, but I have some experience with image resizing.
Your problem looks entirely related to plain image resizing. If you scale an image down in one go by any factor smaller than 0.5 (that is, make it smaller than half its original size), you will get this kind of ugly, aliased result.
To get a nice and smooth result, you have to resize it multiple times:
Halve the image size repeatedly until the remaining step to the target is a factor larger than 0.5.
For your case (640x640 to 64x64) it would need these steps:
640 * 0.5 => 320
320 * 0.5 => 160
160 * 0.5 => 80
80 * 0.8 => 64
You can either start with a much smaller image (if you never need it at that big size in your code), add multiple pre-scaled resolutions to your resources, or precalculate the halved steps before you use them and then just pick the right image to scale down by the final factor.
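The halving schedule described above can be sketched in a few lines of Python (the function name is my own; this just computes the intermediate sizes, not the actual resampling):

```python
def downscale_steps(src, dst):
    # Halve while the remaining factor is still below 0.5,
    # then finish with one final step whose factor is >= 0.5.
    steps = []
    size = src
    while dst / size < 0.5:
        size //= 2
        steps.append(size)
    steps.append(dst)
    return steps

print(downscale_steps(640, 64))  # [320, 160, 80, 64]
```

Each intermediate size is then a separate resample, so no single step loses more than half the pixels.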
I'm not very experienced with Godot and don't have it open right now, but I could imagine that you shouldn't scale the image by 0.1, because then there will be a loss in image quality.

crop images with different black margins

I have an image dataset, and before feeding it to a deep learning algorithm I need to crop all images to the same size. The images have black margins of different sizes, as the image below demonstrates.
Any suggestions for a way to crop images with different margin sizes?
Since your border color is black (nearly perfect black) and will be the same in all the images, I would suggest applying a binary threshold, making everything white (255) except the black region. Some other image regions may get affected too, but that's not a problem.
Now find contours in the image; the second-largest contour will be your region. Calculate the rectangular bounding box for this contour and crop the same region from the original image.
First, do a thresholding with a low-intensity threshold value (if your background is definitely completely black, you could even threshold at an intensity of 1) to determine all non-border components.
Next, use Connected-component labeling to determine all isolated foreground components. The central scan-image you are interested in should then always result in the biggest component. Crop out this biggest component to remove the border together with all possible non-black artifacts (labels, letters etc.). You should be left with only the borderless scan.
You can find all the algorithms needed in any basic image-processing library. I'd personally recommend looking into OpenCV; it also includes Python bindings.
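As a toy illustration of the threshold → label-components → crop-largest pipeline described above, here is a pure-Python sketch on a tiny grid (a real implementation would use OpenCV or scikit-image; the function name is my own):

```python
from collections import deque

def largest_component_bbox(img, thresh=0):
    # Label 4-connected components of pixels brighter than `thresh`
    # and return the bounding box (top, left, bottom, right) of the largest.
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    best, best_size = None, 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] > thresh and not seen[sy][sx]:
                # BFS over this component, tracking its extent and size.
                q = deque([(sy, sx)])
                seen[sy][sx] = True
                size = 0
                top, left, bot, right = sy, sx, sy, sx
                while q:
                    y, x = q.popleft()
                    size += 1
                    top, left = min(top, y), min(left, x)
                    bot, right = max(bot, y), max(right, x)
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] > thresh and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if size > best_size:
                    best_size, best = size, (top, left, bot, right)
    return best

# 0 = black border, 9 = scan content, 5 = a small bright artifact (label text etc.)
img = [
    [0, 0, 0, 0, 0, 5],
    [0, 9, 9, 9, 0, 0],
    [0, 9, 9, 9, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
print(largest_component_bbox(img))  # (1, 1, 2, 3)
```

The biggest component wins, so small bright artifacts in the border are ignored automatically, which is exactly why the answer suggests keeping only the largest labeled region.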
One way to do this could be as follows:
Flood-fill the image with red starting at the top-left corner, and allowing around 5% divergence from the black pixel there.
Now make everything that is not red into white - because the next step after this looks for white pixels.
Now use findContours() (which looks for white objects) and choose the largest white contour as your image and crop to that.
You could consider making things more robust by considering some of the following ideas:
You could normalise a copy of the image to the full range of black to white first in case you get any with near-black borders.
You could check that more than one, or all corner pixels are actually black in case you get images without a border.
You could also flag up issues if your cropped image appears to be less than, say 70%, of the total image area.
You could consider a morphological opening with a 9×9 square structuring element as the penultimate step to tidy things up before findContours().
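A rough pure-Python sketch of the flood-fill idea on a toy grayscale grid (a real version would use OpenCV's floodFill and findContours; the ~5% divergence from the corner pixel becomes a simple tolerance here, and the function name is my own):

```python
from collections import deque

def crop_box_by_floodfill(img, tol=12):
    # Flood-fill the background starting at the top-left corner, accepting
    # pixels within `tol` of the corner value, then bound everything NOT filled.
    h, w = len(img), len(img[0])
    bg = img[0][0]
    filled = [[False] * w for _ in range(h)]
    q = deque([(0, 0)])
    filled[0][0] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < h and 0 <= nx < w and not filled[ny][nx] and abs(img[ny][nx] - bg) <= tol:
                filled[ny][nx] = True
                q.append((ny, nx))
    ys = [y for y in range(h) for x in range(w) if not filled[y][x]]
    xs = [x for y in range(h) for x in range(w) if not filled[y][x]]
    return min(ys), min(xs), max(ys), max(xs)

# Near-black border (values 0-3) around a bright region (values ~200):
img = [
    [0, 2, 0, 1, 0],
    [1, 200, 210, 190, 0],
    [0, 205, 220, 200, 2],
    [1, 0, 3, 0, 0],
]
print(crop_box_by_floodfill(img))  # (1, 1, 2, 3)
```

Filling from the corner rather than thresholding globally means dark pixels inside the scan itself are never mistaken for border.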
Here is the solution code for this question:
import os

import cv2
import numpy as np

path = "data/benign/"
img_resized_dir = "data/pre-processed/benign/"
dirs = os.listdir(path)

def thyroid_scale():
    for item in dirs:
        if os.path.isfile(path + item):
            img = cv2.imread(path + item)
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            # Threshold at 0: every non-black pixel becomes white (255).
            ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)
            # findContours returns 3 values in OpenCV 3.x and 2 in 2.x/4.x;
            # taking the last two works for both.
            contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]
            # Keep the largest contour by area.
            areas = [cv2.contourArea(c) for c in contours]
            max_index = np.argmax(areas)
            cnt = contours[max_index]
            x, y, w, h = cv2.boundingRect(cnt)
            # The extra offsets trim dataset-specific labels inside the bounding box.
            crop_img = img[y + 35:y + h - 5, x + 25:x + w - 10]
            resize_img = cv2.resize(crop_img, (300, 250), interpolation=cv2.INTER_CUBIC)
            cv2.imwrite(img_resized_dir + item, resize_img)

thyroid_scale()

Scikit-learn, image classification

This example allows the classification of images with scikit-learn:
http://scikit-learn.org/stable/auto_examples/classification/plot_digits_classification.html
However, it is important that all the images have the same size (width and height, as written in the comments).
How can I modify this code to allow classification of images with different sizes?
You will need to define your own feature extraction.
In the example above, every pixel represents a feature. If your images are of different sizes, the most trivial (but certainly not the best) thing you can do is pad all images to the size of the largest image with, for example, white pixels.
Here is an example of how to add borders to an image.
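A minimal sketch of the pad-to-largest idea, in pure Python on 2D grayscale lists (the helper name is my own; real code would use PIL's ImageOps or NumPy padding):

```python
def pad_to(img, target_h, target_w, fill=255):
    # Pad a 2D grayscale image with `fill` (white) pixels on the right
    # and bottom until it reaches target_h x target_w.
    h, w = len(img), len(img[0])
    rows = [row + [fill] * (target_w - w) for row in img]
    rows += [[fill] * target_w for _ in range(target_h - h)]
    return rows

imgs = [[[0, 1], [2, 3]], [[4]]]  # a 2x2 and a 1x1 "image"
H = max(len(i) for i in imgs)
W = max(len(i[0]) for i in imgs)
padded = [pad_to(i, H, W) for i in imgs]
print(padded[1])  # [[4, 255], [255, 255]]
```

After padding, every image flattens to the same feature length (H × W), which is what the scikit-learn digits example requires.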

How to best train SVM on images with different aspect ratio

Can the SVM work on data with different dimensions? (using libsvm)
If the images have different size, I can resize to a standard value.
But if they have different aspect ratios, it seems not to make sense to resize without keeping the original aspect ratio.
Or shall I pad the images with zeros to make them have the same aspect ratio?
Can the SVM work on data with different dimensions?
No, it can't, but you already gave an answer on how to overcome that (resizing the images).
But if they have different aspect ratios, it seems not to make sense to resize without keeping the original aspect ratio. Or shall I pad the images with zeros to make them have the same aspect ratio?
Agreed, not maintaining the aspect ratio just would make the problem even harder. Usually people pre-process all the images to one aspect ratio and use letterboxing or pillarboxing when necessary.
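The letterbox/pillarbox preprocessing mentioned above boils down to a bit of shape arithmetic, sketched here in pure Python (the function name is my own; sizes are height × width):

```python
def letterbox_geometry(h, w, target_h, target_w):
    # Scale to fit inside the target while keeping the aspect ratio,
    # then report the padding that letterboxes/pillarboxes the result.
    scale = min(target_h / h, target_w / w)
    new_h, new_w = round(h * scale), round(w * scale)
    pad_y, pad_x = target_h - new_h, target_w - new_w
    return (new_h, new_w), (pad_y, pad_x)

# A wide 360x640 image into a square 224x224 input:
print(letterbox_geometry(360, 640, 224, 224))  # ((126, 224), (98, 0)) -> letterboxed
# A tall 640x360 image:
print(letterbox_geometry(640, 360, 224, 224))  # ((224, 126), (0, 98)) -> pillarboxed
```

The pad rows/columns are then filled with a constant (e.g. zeros), so every sample ends up with the same dimensions without distorting its content.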

Alpha Blending to remove seam in an image

I have stitched two images, but in the final image there is a visible seam. I am trying to use alpha blending to remove that seam. I know alpha blending is applied using the cvAddWeighted() function, but its parameters are two images, alpha, beta, gamma, and a destination. I am taking gamma = 0, alpha = 0.6, beta = 0.4. What will be my two input source images and the destination image? The last part of my code is this:
IplImage* WarpImg = cvCreateImage(cvSize(T1Img->width*2, T1Img->height*2), T1Img->depth, T1Img->nChannels);
cvWarpPerspective(T1Img, WarpImg, &mxH);
cvSetImageROI(WarpImg, cvRect(0, 0, T2Img->width, T2Img->height));
cvCopy(T2Img, WarpImg);
cvResetImageROI(WarpImg);
cvNamedWindow("WarpImg Img",1);
cvShowImage("WarpImg Img", WarpImg);
cvSaveImage("save.jpg",WarpImg);
My final Image is
I have to admit, I don't think alpha blending is your answer. The seam is there due to the difference in lighting/exposure. Alpha blending is a way of essentially having one image visible through another by means of a weighted average of the two images' colors. Your right and left images are backed by black. If you simply alpha blend, then you are essentially going to be weighting your images against a black background. The resulting effect will simply be a darkening of both images.
One of two other potential methods would be to look at the average color of both images at the seam and adjust one up and the other down, each by 50% of the difference in brightness (one goes up and the other down, so the overall brightness jump on either side is only 50% of the difference).
The other would be a more complex image-histogram technique, where you widen or shrink the histogram of one side's image to match the other, align them, and re-assign your colors (in this case grayscale values) via the new histograms.
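The 50/50 brightness adjustment described above can be sketched as follows (pure Python on grayscale seam samples; a hypothetical helper, not the poster's code):

```python
def meet_halfway(left_seam, right_seam):
    # Compare mean brightness along the seam and return the offsets that
    # move each side 50% of the difference toward the other.
    mean_l = sum(left_seam) / len(left_seam)
    mean_r = sum(right_seam) / len(right_seam)
    half = (mean_r - mean_l) / 2
    return half, -half  # add to left image, add to right image

# Left side of the seam averages 100, right side averages 140:
dl, dr = meet_halfway([100, 100, 100], [140, 140, 140])
print(dl, dr)  # 20.0 -20.0 -> both sides meet at 120
```

Each offset would then be added to the whole corresponding image (with clamping to the valid pixel range), so neither side jumps by the full brightness difference at the seam.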
Pyramid/multiband blending should do a good enough job for your scenario. Try Enblend: http://enblend.sourceforge.net