How does data augmentation increase the number of images? [duplicate] - pytorch

I am a little bit confused about the data augmentation performed in PyTorch. As far as I know, when we perform data augmentation, we KEEP our original dataset and then add other versions of it (flipping, cropping, etc.). But that doesn't seem to be what happens in PyTorch. As far as I understood from the references, when we use data.transforms in PyTorch, they are applied one by one. So for example:
data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}
Here, for training, we first randomly crop each image and resize it to shape (224, 224). Then we take these (224, 224) images and horizontally flip them. Therefore, our dataset now contains ONLY the horizontally flipped images, so our original images are lost in this case.
Am I right? Is this understanding correct? If not, then where in the code above (taken from the official documentation) do we tell PyTorch to keep the original images and resize them to the expected shape (224, 224)?
Thanks

I assume you are asking whether these data augmentation transforms (e.g. RandomHorizontalFlip) actually increase the size of the dataset as well, or whether they are applied to each item in the dataset one by one without adding to its size.
Running the following simple code snippet, we can observe that the latter is true: if you have a dataset of 8 images and create a PyTorch dataset object for it, then when you iterate through the dataset the transformations are called on each data point, and the transformed data point is returned. So, for example, if you have random flipping, some of the data points are returned flipped and some are returned as the original (e.g. 4 flipped and 4 original). In other words, one iteration through the dataset yields 8 data points (some flipped and some not), which is at odds with the conventional understanding of augmenting the dataset (e.g. having 16 data points in the augmented dataset).
import torch
from torch.utils.data import Dataset
from torchvision import transforms

class experimental_dataset(Dataset):
    def __init__(self, data, transform):
        self.data = data
        self.transform = transform

    def __len__(self):
        # number of items in the dataset
        return self.data.shape[0]

    def __getitem__(self, idx):
        item = self.data[idx]
        item = self.transform(item)
        return item

transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor()
])

x = torch.rand(8, 1, 2, 2)
print(x)

dataset = experimental_dataset(x, transform)

for item in dataset:
    print(item)
Results (the small floating-point differences are caused by converting to a PIL image and back):
Original dummy dataset:
tensor([[[[0.1872, 0.5518],
[0.5733, 0.6593]]],
[[[0.6570, 0.6487],
[0.4415, 0.5883]]],
[[[0.5682, 0.3294],
[0.9346, 0.1243]]],
[[[0.1829, 0.5607],
[0.3661, 0.6277]]],
[[[0.1201, 0.1574],
[0.4224, 0.6146]]],
[[[0.9301, 0.3369],
[0.9210, 0.9616]]],
[[[0.8567, 0.2297],
[0.1789, 0.8954]]],
[[[0.0068, 0.8932],
[0.9971, 0.3548]]]])
transformed dataset:
tensor([[[0.1843, 0.5490],
[0.5725, 0.6588]]])
tensor([[[0.6549, 0.6471],
[0.4392, 0.5882]]])
tensor([[[0.5647, 0.3255],
[0.9333, 0.1216]]])
tensor([[[0.5569, 0.1804],
[0.6275, 0.3647]]])
tensor([[[0.1569, 0.1176],
[0.6118, 0.4196]]])
tensor([[[0.9294, 0.3333],
[0.9176, 0.9608]]])
tensor([[[0.8549, 0.2275],
[0.1765, 0.8941]]])
tensor([[[0.8902, 0.0039],
[0.3529, 0.9961]]])

The transform operations are applied to your original images at every batch generation. So your dataset is left unchanged; only the batch images are copied and transformed at every iteration.
The confusion may come from the fact that, often, as in your example, transforms are used both for data preparation (resizing/cropping to expected dimensions, normalizing values, etc.) and for data augmentation (randomizing the resizing/cropping, randomly flipping the images, etc.).
What your data_transforms['train'] does is:
Randomly resize the provided image and randomly crop it to obtain a (224, 224) patch
Apply or not a random horizontal flip to this patch, with a 50/50 chance
Convert it to a Tensor
Normalize the resulting Tensor, given the mean and deviation values you provided
What your data_transforms['val'] does is:
Resize your image to (256, 256)
Center crop the resized image to obtain a (224, 224) patch
Convert it to a Tensor
Normalize the resulting Tensor, given the mean and deviation values you provided
(i.e. the random resizing/cropping for the training data is replaced by a fixed operation for the validation one, to have reliable validation results)
If you don't want your training images to be horizontally flipped with a 50/50 chance, just remove the transforms.RandomHorizontalFlip() line.
Similarly, if you want your images to always be center-cropped, replace transforms.RandomResizedCrop by transforms.Resize and transforms.CenterCrop, as done for data_transforms['val'].
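For instance, if you wanted the training data to go through the same deterministic preparation as the validation data (no augmentation at all), a minimal sketch could look like this; the normalization statistics are the ImageNet values already used above:

from torchvision import transforms

# A minimal sketch: the 'train' pipeline with the random augmentation removed, so every
# image is deterministically resized and center-cropped exactly like the 'val' pipeline.
deterministic_train_transforms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])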

Yes, the dataset size does not change after the transformations. Every image is passed to the transform and returned, so the size remains the same.
If you wish to use the original dataset together with the transformed one, concatenate them, e.g.:
increased_dataset = torch.utils.data.ConcatDataset([transformed_dataset, original])
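If you do want the literal "original + augmented" dataset, a minimal sketch could look like the following; it reuses the experimental_dataset class defined in the earlier answer, and the identity transform and p=1.0 flip are illustrative choices, not from the original answer:

import torch
from torch.utils.data import ConcatDataset
from torchvision import transforms

# A minimal sketch, assuming the experimental_dataset class defined earlier in this thread.
x = torch.rand(8, 1, 2, 2)

augment = transforms.Compose([
    transforms.ToPILImage(),
    transforms.RandomHorizontalFlip(p=1.0),  # always flip, so this copy differs from the original
    transforms.ToTensor(),
])
identity = transforms.Compose([
    transforms.ToPILImage(),
    transforms.ToTensor(),
])

original = experimental_dataset(x, identity)
transformed_dataset = experimental_dataset(x, augment)

increased_dataset = ConcatDataset([transformed_dataset, original])
print(len(increased_dataset))  # 16: 8 flipped + 8 original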

The purpose of data augmentation is to increase the diversity of the training dataset.
Even though data.transforms doesn't change the size of the dataset, every epoch, when we iterate over the dataset again, the transform operations are executed anew and we get different data.
I changed @Ashkan372's code slightly to output data for multiple epochs:
import torch
from torchvision import transforms
from torch.utils.data import TensorDataset as Dataset
from torch.utils.data import DataLoader

class experimental_dataset(Dataset):
    def __init__(self, data, transform):
        self.data = data
        self.transform = transform

    def __len__(self):
        return self.data.shape[0]

    def __getitem__(self, idx):
        item = self.data[idx]
        item = self.transform(item)
        return item

transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor()
])

x = torch.rand(8, 1, 2, 2)
print('the original data: \n', x)

epoch_size = 3
batch_size = 4
dataset = experimental_dataset(x, transform)

for i in range(epoch_size):
    print('----------------------------------------------')
    print('the epoch', i, 'data: \n')
    for item in DataLoader(dataset, batch_size, shuffle=False):
        print(item)
The output is:
the original data:
tensor([[[[0.5993, 0.5898],
[0.7365, 0.5472]]],
[[[0.1878, 0.3546],
[0.2124, 0.8324]]],
[[[0.9321, 0.0795],
[0.4090, 0.9513]]],
[[[0.2825, 0.6954],
[0.3737, 0.0869]]],
[[[0.2123, 0.7024],
[0.6270, 0.5923]]],
[[[0.9997, 0.9825],
[0.0267, 0.2910]]],
[[[0.2323, 0.1768],
[0.4646, 0.4487]]],
[[[0.2368, 0.0262],
[0.2423, 0.9593]]]])
----------------------------------------------
the epoch 0 data:
tensor([[[[0.5882, 0.5961],
[0.5451, 0.7333]]],
[[[0.3529, 0.1843],
[0.8314, 0.2118]]],
[[[0.9294, 0.0784],
[0.4078, 0.9490]]],
[[[0.6941, 0.2824],
[0.0863, 0.3725]]]])
tensor([[[[0.7020, 0.2118],
[0.5922, 0.6235]]],
[[[0.9804, 0.9961],
[0.2902, 0.0235]]],
[[[0.2314, 0.1765],
[0.4627, 0.4471]]],
[[[0.0235, 0.2353],
[0.9569, 0.2392]]]])
----------------------------------------------
the epoch 1 data:
tensor([[[[0.5882, 0.5961],
[0.5451, 0.7333]]],
[[[0.1843, 0.3529],
[0.2118, 0.8314]]],
[[[0.0784, 0.9294],
[0.9490, 0.4078]]],
[[[0.2824, 0.6941],
[0.3725, 0.0863]]]])
tensor([[[[0.2118, 0.7020],
[0.6235, 0.5922]]],
[[[0.9804, 0.9961],
[0.2902, 0.0235]]],
[[[0.2314, 0.1765],
[0.4627, 0.4471]]],
[[[0.0235, 0.2353],
[0.9569, 0.2392]]]])
----------------------------------------------
the epoch 2 data:
tensor([[[[0.5882, 0.5961],
[0.5451, 0.7333]]],
[[[0.3529, 0.1843],
[0.8314, 0.2118]]],
[[[0.0784, 0.9294],
[0.9490, 0.4078]]],
[[[0.6941, 0.2824],
[0.0863, 0.3725]]]])
tensor([[[[0.2118, 0.7020],
[0.6235, 0.5922]]],
[[[0.9961, 0.9804],
[0.0235, 0.2902]]],
[[[0.2314, 0.1765],
[0.4627, 0.4471]]],
[[[0.0235, 0.2353],
[0.9569, 0.2392]]]])
In different epochs we get different outputs!

TL;DR:
The transform operation applies a set of transforms, each with a certain probability, to the input batch that comes in the loop. So the model is exposed to more varied examples over the course of multiple epochs.
Personally, when I was training an audio classification model on my own dataset, before augmentation my model always seemed to converge at 72% accuracy. Using augmentation along with an increased number of training epochs boosted the accuracy on the test set to 89%.

In PyTorch, there are types of cropping that DO change the size of the dataset. These are FiveCrop and TenCrop:
CLASS torchvision.transforms.FiveCrop(size)
Crop the given image into four corners and the central crop.
This transform returns a tuple of images, and there may be a mismatch in the number of inputs and targets your Dataset returns. See below for an example of how to deal with this.
Example:
>>> transform = Compose([
>>>     TenCrop(size), # this is a list of PIL Images
>>>     Lambda(lambda crops: torch.stack([ToTensor()(crop) for crop in crops])) # returns a 4D tensor
>>> ])
>>> # In your test loop you can do the following:
>>> input, target = batch # input is a 5d tensor, target is 2d
>>> bs, ncrops, c, h, w = input.size()
>>> result = model(input.view(-1, c, h, w)) # fuse batch size and ncrops
>>> result_avg = result.view(bs, ncrops, -1).mean(1) # avg over crops
TenCrop is the same plus the flipped version of the five patches (horizontal flipping is used by default).
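As a quick illustration (a sketch with a hypothetical image file; the filename and crop size are assumptions, not from the question), FiveCrop turns one input image into five crops:

from PIL import Image
from torchvision import transforms

# A minimal sketch: one input image (assumed to be at least 224x224) becomes five (224, 224) crops.
img = Image.open('example.jpg')  # hypothetical file
crops = transforms.FiveCrop(224)(img)
print(len(crops))  # 5 PIL Images: four corners + the center crop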

Related

PyTorch Data Augmentation is taking too long

For a task that involves regression, I need to train my model to generate density maps from RGB images. To augment my dataset I have decided to flip all the images horizontally. For that matter, I also have to flip my ground-truth images, and I did so.
dataset_for_augmentation.listDataset(train_list,
    shuffle=True,
    transform=transforms.Compose([
        transforms.RandomHorizontalFlip(p=1),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ]),
    target_transform=transforms.Compose([
        transforms.RandomHorizontalFlip(p=1),
        transforms.ToTensor()
    ]),
    train=True,
    resize=4,
    batch_size=args.batch_size,
    num_workers=args.workers)
But here is the problem: for some reason, PyTorch's transforms.RandomHorizontalFlip only takes PIL images as input (NumPy arrays are not allowed). So I decided to convert the type to PIL Image.
img_path = self.lines[index]

img, target = load_data(img_path, self.train, resize=self.resize)

if type(target[0][0]) is np.float64:
    target = np.float32(target)

img = Image.fromarray(img)
target = Image.fromarray(target)

if self.transform is not None:
    img = self.transform(img)
    target = self.target_transform(target)

return img, target
And yes, this operation takes an enormous amount of time. Considering that I need it to be carried out for thousands of images, 23 seconds per batch (it should have been under half a second at most) is not tolerable.
2019-11-01 16:29:02,497 - INFO - Epoch: [0][0/152] Time 27.095 (27.095) Data 23.150 (23.150) Loss 93.7401 (93.7401)
I would appreciate any suggestions to speed up my augmentation process.
You don't need to change the DataLoader to do that. You can use ToPILImage():
transform=transforms.Compose([
    transforms.ToPILImage(),  # check mode assumption in the documentation
    transforms.RandomHorizontalFlip(p=1),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
Anyway, I would avoid converting to PIL; it seems completely unnecessary. If you want to flip all images, then why not do that using NumPy only?
img_path = self.lines[index]

img, target = load_data(img_path, self.train, resize=self.resize)

if type(target[0][0]) is np.float64:
    target = np.float32(target)

# assuming width axis=1 -- see my comment below
# .copy() makes the flipped arrays contiguous; torch.from_numpy (used by ToTensor)
# does not accept the negative strides of the view that np.flip returns
img = np.flip(img, axis=1).copy()
target = np.flip(target, axis=1).copy()

if self.transform is not None:
    img = self.transform(img)
    target = self.target_transform(target)

return img, target
And remove the transforms.RandomHorizontalFlip(p=1) from the Compose. As ToTensor(...) also handles ndarray, you are good to go.
Note: I am assuming the width axis is equal to 1, since ToTensor expects it to be there.
From the docs:
Converts a PIL Image or numpy.ndarray (H x W x C) ...
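As a quick sanity check of that axis assumption, here is a small sketch (the array shape is hypothetical) that flips the width axis of an (H x W x C) array and feeds it straight to ToTensor:

import numpy as np
from torchvision import transforms

# A minimal sketch with a hypothetical (H, W, C) image: height=4, width=6, 3 channels.
img = np.random.rand(4, 6, 3).astype(np.float32)

flipped = np.flip(img, axis=1).copy()  # axis=1 is the width axis for an (H, W, C) array

tensor = transforms.ToTensor()(flipped)  # ToTensor accepts ndarrays and returns a (C, H, W) tensor
print(tensor.shape)  # torch.Size([3, 4, 6])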
More of an addition to @Berriel's answer.
Horizontal Flip
You are using transforms.RandomHorizontalFlip(p=1) for both the X and y images. With p=1, both will indeed be transformed exactly the same way, but you are missing the point of data augmentation, because the network will only ever see flipped images (instead of only the original images). You should use a probability lower than 1 and higher than 0 (usually 0.5) to get high variability in the versions of the images.
If you did that (p=0.5) with two separate torchvision transforms, you could be more than certain that at some point X would get flipped while y wouldn't, because the two pipelines sample the flip independently.
I would advise using the albumentations library and its albumentations.augmentations.transforms.HorizontalFlip to apply the flip to both images in the same way.
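A minimal albumentations sketch (the array shapes are hypothetical) showing the same random flip decision being applied to both the image and its target:

import numpy as np
import albumentations as A

# A minimal sketch with hypothetical (H, W, C) image and (H, W) mask arrays: the flip
# is sampled once per call and applied identically to both targets.
image = np.random.rand(224, 224, 3).astype(np.float32)
mask = np.random.rand(224, 224).astype(np.float32)

transform = A.Compose([A.HorizontalFlip(p=0.5)])
augmented = transform(image=image, mask=mask)
flipped_image, flipped_mask = augmented["image"], augmented["mask"]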
Normalization
You can find normalization with ImageNet means and stds already set up there as well.
Caching
Furthermore, to speed things up you could use the torchdata third-party library (disclaimer: I'm the author). In your case you could convert the images from PIL to Tensor and Normalize them with albumentations, cache the result on disk or, even better, in RAM with torchdata, and finally apply your transformations. This way you would only have to apply HorizontalFlip to your image and target after the initial epoch; the previous steps would be pre-calculated.

Keras data augmentation changes pixel values for masks (segmentation)

I am using runtime data augmentation with generators in Keras for a segmentation problem.
Here is my data generator
data_gen_args = dict(
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.2,
    horizontal_flip=True,
    validation_split=0.2
)

image_datagen = ImageDataGenerator(**data_gen_args)

def generate_data_generator(generator, Xi, Yi):
    genXi = generator.flow(Xi, seed=7, batch_size=32)
    genYi = generator.flow(Yi, seed=7, batch_size=32)
    while True:
        Xi = genXi.next()
        Yi = genYi.next()
        print(Yi.dtype)
        print(np.unique(Yi))
        yield (Xi, Yi)

train_generator = generate_data_generator(image_datagen,
                                           x_train,
                                           y_train)
My labels are in a NumPy array with data type float32 and values 0.0 and 1.0.
# Output of np.unique(y_train)
array([0., 1.], dtype=float32)
However, the data generator seems to modify the pixel values, as shown below:
#Output of print(np.unique(Yi))
[0.00000000e+00 1.01742386e-04 1.74021334e-04 ... 9.99918878e-01
9.99988437e-01 1.00000000e+00]
It is supposed to have the same values (0.0 and 1.0) after data generation.
Also, the official documentation shows an example using the same augmentation arguments for generating masks and images together.
However, when I remove the shift and zoom augmentations, I do get (0.0 and 1.0) as output.
Keras version 2.2.4, Python 3.6.8
UPDATE:
I saved those images as NumPy arrays and plotted them using matplotlib. It looks like the edges are smoothly interpolated (values between 0.0 and 1.0) somehow upon including the shift and zoom augmentations. I can round these values in my custom generator as a hack, but I still don't understand the root cause. (In the case of normal images this is quite unnoticeable and has no adverse effects, but in masks we don't want to change the label values!)
Still wondering: is this a bug (nobody has mentioned it so far) or a problem with my custom code?
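For what it's worth, the rounding hack mentioned in the update can live inside the custom generator itself; a minimal sketch (reusing the generate_data_generator from the question, with the rounding step added) might look like this:

import numpy as np

def generate_data_generator(generator, Xi, Yi):
    genXi = generator.flow(Xi, seed=7, batch_size=32)
    genYi = generator.flow(Yi, seed=7, batch_size=32)
    while True:
        Xi_batch = genXi.next()
        Yi_batch = genYi.next()
        # Threshold the interpolated mask values back to {0.0, 1.0} before yielding
        Yi_batch = np.round(Yi_batch).astype(np.float32)
        yield (Xi_batch, Yi_batch)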

Invalid dimension for image data in plt.imshow()

I am using the MNIST dataset for training a capsule network in Keras.
After training, I want to display an image from the MNIST dataset. For loading images, mnist.load_data() is used. The data is stored as (x_train, y_train), (x_test, y_test).
Now, for visualizing an image, my code is as follows:
img_path = x_test[1]
print(img_path.shape)
plt.imshow(img_path)
plt.show()
The code gives output as follows:
(28, 28, 1)
and the error on plt.imshow(img_path) is as follows:
TypeError: Invalid dimensions for image data
How can I show the image in .png format? Help!
As per the comment of @sdcbr, using np.squeeze removes the unnecessary dimension. If the image has 2 dimensions, the imshow function works fine. If the image has 3 dimensions, you have to remove the extra dimension of size 1. For higher-dimensional data you will have to reduce it to 2 dimensions, so np.squeeze may need to be applied multiple times (or you may use some other dimensionality-reduction function for higher-dimensional data).
import numpy as np
import matplotlib.pyplot as plt

img_path = x_test[1]
print(img_path.shape)

if len(img_path.shape) == 3:
    plt.imshow(np.squeeze(img_path))
elif len(img_path.shape) == 2:
    plt.imshow(img_path)
else:
    print("Higher dimensional data")
Example:
plt.imshow(test_images[0])
TypeError: Invalid shape (28, 28, 1) for image data
Correction:
plt.imshow((tf.squeeze(test_images[0])))
(The plot shows the digit 7.)
You can use tf.squeeze to remove dimensions of size 1 from the shape of a tensor, and check the resulting shape:
print(tf.shape(tf.squeeze(x_train)))
Check out the TF2.0 example.
matplotlib.pyplot.imshow() does not support images of shape (h, w, 1). Just remove the last dimension by reshaping the image to (h, w): newimage = np.reshape(img, (h, w)).

How to oversample image dataset using Python?

I am working on a multiclass classification problem with an unbalanced dataset of images (different classes). I tried the imblearn library, but it does not work on an image dataset.
I have a dataset of images belonging to 3 classes, namely A, B and C. A has 1000 samples, B has 300 and C has 100. I want to oversample classes B and C so that I can avoid data imbalance. Please let me know how to oversample the image dataset using Python.
Actually, it seems imblearn.over_sampling only resamples 2D inputs. So one way to oversample your image dataset with this library is to combine it with reshaping; you can:
1. reshape your images,
2. oversample them,
3. reshape the new dataset back to the original dims.
Say you have an image dataset of shape (5000, 28, 28, 3) stored as an ndarray; following the above instructions, you can use the solution below:
# X : current_dataset
# y : labels
from imblearn.over_sampling import RandomOverSampler

reshaped_X = X.reshape(X.shape[0], -1)

# oversampling
oversample = RandomOverSampler()
oversampled_X, oversampled_y = oversample.fit_resample(reshaped_X, y)

# reshaping X back to the first dims
new_X = oversampled_X.reshape(-1, 28, 28, 3)
hope that was helpful!

Keras: Get True labels (y_test) from ImageDataGenerator or predict_generator

I am using ImageDataGenerator().flow_from_directory(...) to generate batches of data from directories.
After the model builds successfully, I'd like to get a two-column array of true and predicted class labels. With model.predict_generator(validation_generator, steps=NUM_STEPS) I can get a NumPy array of predicted classes. Is it possible to have predict_generator also output the corresponding true class labels?
To add: validation_generator.classes does print the true labels, but in the order in which they are retrieved from the directory; it doesn't take into account the batching or sample expansion by augmentation.
You can get the predicted labels by:
y_pred = numpy.rint(predictions)
and you can get the true labels by:
y_true = validation_generator.classes
You should set shuffle=False in the validation generator before this.
Finally, you can print the confusion matrix by:
print(confusion_matrix(y_true, y_pred))
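Putting those pieces together, an end-to-end sketch might look like the following (assuming a binary classifier with a single sigmoid output and a validation_generator created with shuffle=False; the step calculation is an assumption, not from the original answer):

import numpy as np
from sklearn.metrics import confusion_matrix

# A minimal sketch: with shuffle=False, the order of validation_generator.classes
# matches the order of the predictions.
steps = int(np.ceil(validation_generator.samples / validation_generator.batch_size))
probabilities = model.predict_generator(validation_generator, steps=steps)

y_pred = np.rint(probabilities).astype(int).ravel()
y_true = validation_generator.classes

print(confusion_matrix(y_true, y_pred))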
There's another, slightly "hackier" way of retrieving the true labels as well. Note that this approach can handle shuffle=True in your generator (it's generally a good idea to shuffle your data, either manually where you've stored it or through the generator, which is probably easier). You will need your model for this approach, though.
# Create lists for storing the predictions and labels
predictions = []
labels = []

# Get the total number of labels in the generator
# (i.e. the length of the dataset the generator generates batches from)
n = len(generator.labels)

# Loop over the generator
for data, label in generator:
    # Make predictions on the data using the model. Store the results.
    predictions.extend(model.predict(data).flatten())

    # Store the corresponding labels
    labels.extend(label)

    # We have to break out of the generator when we've processed
    # the entire dataset once (otherwise we would end up with duplicates).
    if (len(label) < generator.batch_size) and (len(predictions) == n):
        break
Your predictions and corresponding labels should now be stored in predictions and labels, respectively.
Lastly, remember that we should NOT add data augmentation on the validation and test sets/generators.
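In practice, that usually means keeping two separate generators; a minimal sketch (the directory names, target size, and augmentation arguments are hypothetical) could look like this:

from keras.preprocessing.image import ImageDataGenerator

# A minimal sketch: augmentation only on the training generator, plain rescaling on the
# validation generator (shuffle=False so labels line up with predictions later).
train_datagen = ImageDataGenerator(rescale=1. / 255,
                                   horizontal_flip=True,
                                   zoom_range=0.2)
val_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory('data/train', target_size=(224, 224),
                                                    batch_size=32, class_mode='binary')
validation_generator = val_datagen.flow_from_directory('data/val', target_size=(224, 224),
                                                       batch_size=32, class_mode='binary',
                                                       shuffle=False)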
Using the np.rint() method gives a one-hot-encoded result like [1., 0., 0.], which caused an error when I tried creating a confusion matrix with confusion_matrix(y_true, y_pred), because validation_generator.classes returns class labels as a single number.
In order to get a single class number (for example 0, 1, 2) as the class label, I found the selected answer in this topic useful: here
You should try this to resolve the class probabilities and convert them to a single class based on the score:
if Y_preds.ndim != 1:
    Y_preds = np.argmax(Y_preds, axis=1)
