Keras Data Augmentation

I know that ImageDataGenerator generates one randomly augmented image for each input image. Now, I would like to generate two augmented images for each input image:
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_ds = datagen.flow_from_directory('/home/train/')
To explain more, I would like to apply 2 distinct augmentation functions to the same image, i.e. if we sample 5 images, we end up with 2 × 5 = 10 augmented observations in the batch.
So how can I proceed, please?

I would recommend creating a custom data generator that inherits from tf.keras.utils.Sequence. There are a number of ways to go about this, but this should be along the lines of what you are looking for:
import math
import numpy as np
import tensorflow as tf

class double_aug_generator(tf.keras.utils.Sequence):
    def __init__(self, x, y, batch_size, aug_params1, aug_params2):
        self.x, self.y = x, y
        self.batch_size = batch_size
        self.datagen = tf.keras.preprocessing.image.ImageDataGenerator(**aug_params1)
        # dictionary of transform parameters for the second augmentation
        self.aug_params2 = aug_params2

    def __len__(self):
        return math.ceil(len(self.x) / self.batch_size)

    def load(self, file_names):
        # load and return raw images however you like
        raise NotImplementedError

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        # load images
        batch_x = self.load(batch_x)
        # apply the first (random) augmentation; flow() returns an iterator,
        # so draw one batch from it, unshuffled to keep labels aligned
        batch_x = next(self.datagen.flow(batch_x, batch_size=len(batch_x), shuffle=False))
        # apply the second transform image by image, since apply_transform
        # operates on a single image
        batch_x = np.stack([self.datagen.apply_transform(img, self.aug_params2)
                            for img in batch_x])
        return batch_x, np.array(batch_y)
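For reference, here is a hypothetical way to wire this up. The parameter dictionaries below are illustrative, not prescribed; x_train is assumed to hold the file names consumed by load(), and apply_transform expects deterministic keys such as 'theta', 'tx', 'ty', 'shear', 'zx' and 'zy':
# illustrative parameter dictionaries, not taken from the question
aug_params1 = {'rotation_range': 40, 'horizontal_flip': True}  # random augmentation
aug_params2 = {'theta': 15, 'zx': 0.9, 'zy': 0.9}              # fixed second transform

gen = double_aug_generator(x_train, y_train, batch_size=32,
                           aug_params1=aug_params1, aug_params2=aug_params2)
model.fit(gen, epochs=10)  # recent tf.keras accepts a Sequence directly in fit()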

Related

How does albumentations work with a Keras Sequence?

I have read this tutorial on using albumentations with a Keras Sequence. The code is as follows:
from tensorflow.python.keras.utils.data_utils import Sequence

class CIFAR10Sequence(Sequence):
    def __init__(self, x_set, y_set, batch_size, augmentations):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.augment = augmentations

    def __len__(self):
        return int(np.ceil(len(self.x) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        return np.stack([
            self.augment(image=x)["image"] for x in batch_x
        ], axis=0), np.array(batch_y)
The thing is, I don't understand how it is augmenting the data (i.e. providing more samples). The way I see it, it is just transforming the samples in the dataset, not generating new ones.
Following the tutorial you provided, you can see that the author defines the AUGMENTATIONS_TRAIN and AUGMENTATIONS_TEST objects which perform the actual augmentation.
Then these objects are passed to the sequence generator above:
train_gen = CIFAR10Sequence(x_train, y_train, hparams.train_batch_size, augmentations=AUGMENTATIONS_TRAIN)
so that calling self.augment actually augments every image in the batch:
self.augment(image=x)["image"] for x in batch_x
And yes, augmentation doesn't mean creating new objects, but applying random transformations to existing ones to create 'artificial' objects that are somewhat different from the originals.
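For context, the augmentation object in that tutorial is an albumentations Compose pipeline. A minimal sketch of what AUGMENTATIONS_TRAIN might look like (the exact transforms in the tutorial may differ):
from albumentations import Compose, HorizontalFlip, RandomBrightnessContrast

# each call applies a fresh random draw of these transforms to one image
AUGMENTATIONS_TRAIN = Compose([
    HorizontalFlip(p=0.5),
    RandomBrightnessContrast(p=0.5),
])
Because the transforms are re-sampled on every call, each epoch sees a differently transformed version of every image, which is where the extra variety comes from even though the number of stored samples never changes.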

How to fix a capsule training problem for a single class of the MNIST dataset?

I am training a Capsule Network with both encoder and decoder part. It works perfectly fine with all the classes (10 classes) of the MNIST data set. But when I am extracting a single class say (class 0 or class 5) and then training the capsule network, the reconstruction of the image is very poor.
Where do I need to change the network setting, or do I have an error in my data preparation?
I tried:
I changed the total number of classes from 10 (for ten digits) to 1 (for one digit), and even tried 2 (for two digits).
When I use the default MNIST dataset I get no errors, but when I extract a particular class and pass it into the network, I face issues like a) dimensional issues and b) float tensor warnings.
I fixed these by manually adding a dimension and converting the data to a data.float().cuda() tensor. I did this for both cases, i.e. when using the 10 digit capsules and when using 1 digit capsule for training a single-class digit.
After this the network runs fine, but I get really blurred and poor reconstructions, whereas when I train on the whole MNIST dataset without extracting any class, it throws no errors and the reconstruction works really well.
I would love to share more details and other parts of the code:
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from torch.optim import Adam
from torchvision import datasets, transforms

USE_CUDA = True

### Here we prepare the data for the complete 10 class digit training ###
class Mnist:
    def __init__(self, batch_size):
        dataset_transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))
        ])
        train_dataset = datasets.MNIST('../data', train=True, download=True, transform=dataset_transform)
        test_dataset = datasets.MNIST('../data', train=False, download=True, transform=dataset_transform)
        self.train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
        self.test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)

### Here is my code for extracting a single class digit ###
class Mnist:
    def __init__(self, batch_size):
        dataset_transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))
        ])
        train_mnist = datasets.MNIST("../data", train=True)
        test_mnist = datasets.MNIST("../data", train=False)
        train_image, train_label = train_mnist.train_data, train_mnist.train_labels
        test_image, test_label = test_mnist.test_data, test_mnist.test_labels
        train_0 = [train_image[key] for (key, label) in enumerate(train_label) if int(label) == 5]
        test_0 = [test_image[key] for (key, label) in enumerate(test_label) if int(label) == 5]
        train_label_0 = [train_label[key] for (key, label) in enumerate(train_label) if int(label) == 5]
        test_label_0 = [test_label[key] for (key, label) in enumerate(test_label) if int(label) == 5]
        train_dataset = tuple(zip(train_0, train_label_0))
        test_dataset = tuple(zip(test_0, test_label_0))
        self.train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
        self.test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
# Here is the main code for the capsule training.
# The code below is used for training the 1 class but using the 10 digit capsules.
class ConvLayer(nn.Module):
    def __init__(self, in_channels=1, out_channels=256, kernel_size=9):
        super(ConvLayer, self).__init__()
        self.conv = nn.Conv2d(in_channels=in_channels,
                              out_channels=out_channels,
                              kernel_size=kernel_size,
                              stride=1)

    def forward(self, x):
        return F.relu(self.conv(x))

class PrimaryCaps(nn.Module):
    def __init__(self, num_capsules=8, in_channels=256, out_channels=32, kernel_size=9):
        super(PrimaryCaps, self).__init__()
        self.capsules = nn.ModuleList([
            nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                      kernel_size=kernel_size, stride=2, padding=0)
            for _ in range(num_capsules)])

    def forward(self, x):
        u = [capsule(x) for capsule in self.capsules]
        u = torch.stack(u, dim=1)
        u = u.view(x.size(0), 32 * 6 * 6, -1)
        return self.squash(u)

    def squash(self, input_tensor):
        squared_norm = (input_tensor ** 2).sum(-1, keepdim=True)
        output_tensor = squared_norm * input_tensor / ((1. + squared_norm) * torch.sqrt(squared_norm))
        return output_tensor

class DigitCaps(nn.Module):
    def __init__(self, num_capsules=10, num_routes=32 * 6 * 6, in_channels=8, out_channels=16):
        super(DigitCaps, self).__init__()
        self.in_channels = in_channels
        self.num_routes = num_routes
        self.num_capsules = num_capsules
        self.W = nn.Parameter(torch.randn(1, num_routes, num_capsules, out_channels, in_channels))

    def forward(self, x):
        batch_size = x.size(0)
        x = torch.stack([x] * self.num_capsules, dim=2).unsqueeze(4)
        W = torch.cat([self.W] * batch_size, dim=0)
        u_hat = torch.matmul(W, x)
        b_ij = Variable(torch.zeros(1, self.num_routes, self.num_capsules, 1))
        if USE_CUDA:
            b_ij = b_ij.cuda()
        num_iterations = 3
        for iteration in range(num_iterations):
            c_ij = F.softmax(b_ij, dim=1)
            c_ij = torch.cat([c_ij] * batch_size, dim=0).unsqueeze(4)
            s_j = (c_ij * u_hat).sum(dim=1, keepdim=True)
            v_j = self.squash(s_j)
            if iteration < num_iterations - 1:
                a_ij = torch.matmul(u_hat.transpose(3, 4), torch.cat([v_j] * self.num_routes, dim=1))
                b_ij = b_ij + a_ij.squeeze(4).mean(dim=0, keepdim=True)
        return v_j.squeeze(1)

    def squash(self, input_tensor):
        squared_norm = (input_tensor ** 2).sum(-1, keepdim=True)
        output_tensor = squared_norm * input_tensor / ((1. + squared_norm) * torch.sqrt(squared_norm))
        return output_tensor

class Decoder(nn.Module):
    def __init__(self):
        super(Decoder, self).__init__()
        self.reconstraction_layers = nn.Sequential(
            nn.Linear(16 * 10, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, 784),
            nn.Sigmoid()
        )

    def forward(self, x, data):
        classes = torch.sqrt((x ** 2).sum(2))
        classes = F.softmax(classes, dim=1)
        _, max_length_indices = classes.max(dim=1)
        masked = Variable(torch.sparse.torch.eye(10))
        if USE_CUDA:
            masked = masked.cuda()
        masked = masked.index_select(dim=0, index=max_length_indices.squeeze(1).data)
        reconstructions = self.reconstraction_layers((x * masked[:, :, None, None]).view(x.size(0), -1))
        reconstructions = reconstructions.view(-1, 1, 28, 28)
        return reconstructions, masked

class CapsNet(nn.Module):
    def __init__(self):
        super(CapsNet, self).__init__()
        self.conv_layer = ConvLayer()
        self.primary_capsules = PrimaryCaps()
        self.digit_capsules = DigitCaps()
        self.decoder = Decoder()
        self.mse_loss = nn.MSELoss()

    def forward(self, data):
        output = self.digit_capsules(self.primary_capsules(self.conv_layer(data)))
        reconstructions, masked = self.decoder(output, data)
        return output, reconstructions, masked

    def loss(self, data, x, target, reconstructions):
        return self.margin_loss(x, target) + self.reconstruction_loss(data, reconstructions)

    def margin_loss(self, x, labels, size_average=True):
        batch_size = x.size(0)
        v_c = torch.sqrt((x ** 2).sum(dim=2, keepdim=True))
        left = F.relu(0.9 - v_c).view(batch_size, -1)
        right = F.relu(v_c - 0.1).view(batch_size, -1)
        loss = labels * left + 0.5 * (1.0 - labels) * right
        loss = loss.sum(dim=1).mean()
        return loss

    def reconstruction_loss(self, data, reconstructions):
        loss = self.mse_loss(reconstructions.view(reconstructions.size(0), -1),
                             data.view(reconstructions.size(0), -1))
        return loss * 0.0005

capsule_net = CapsNet()
if USE_CUDA:
    capsule_net = capsule_net.cuda()
optimizer = Adam(capsule_net.parameters())
##### Here is the problem while training #####
batch_size = 100
mnist = Mnist(batch_size)
n_epochs = 5

for epoch in range(n_epochs):
    capsule_net.train()
    train_loss = 0
    for batch_id, (data, target) in enumerate(mnist.train_loader):
        target = torch.eye(10).index_select(dim=0, index=target)
        data, target = Variable(data), Variable(target)
        if USE_CUDA:
            data, target = data.cuda(), target.cuda()
        # converting the data to float is only required when using my extracted single-class dataset
        data, target = data.float().cuda(), target.float().cuda()
        data = data[:, :, :]          # use this when the original MNIST data is used
        # data = data[:, None, :, :]  # use this when using my extracted single-class digits
        optimizer.zero_grad()
        output, reconstructions, masked = capsule_net(data)
        loss = capsule_net.loss(data, output, target, reconstructions)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
        # if batch_id % 100 == 0:
        #     print("train accuracy:", sum(np.argmax(masked.data.cpu().numpy(), 1) ==
        #           np.argmax(target.data.cpu().numpy(), 1)) / float(batch_size))
    print(train_loss / len(mnist.train_loader))
I used this to view the original data and the reconstructed images:
import matplotlib
import matplotlib.pyplot as plt

def plot_images_separately(images):
    """Plot the MNIST images separately."""
    fig = plt.figure()
    for j in range(1, 10):
        ax = fig.add_subplot(1, 10, j)
        ax.matshow(images[j - 1], cmap=matplotlib.cm.binary)
        plt.xticks(np.array([]))
        plt.yticks(np.array([]))
    plt.show()

plot_images_separately(data[:10, 0].data.cpu().numpy())
plot_images_separately(reconstructions[:10, 0].data.cpu().numpy())
Comparing the normally performing code with the problematic one, I found that the datasets passed into the network were not of the same nature. The problem was:
The MNIST data extracted for a single class was never converted to a tensor and no normalization was applied, although I had tried passing it through the transformation.
This is what I did to fix it:
I created the transform objects (ToTensor and Normalize) and applied them to each element inside the list comprehensions. Below is the code and the final output of my network.
Preparing the single-class dataset (labelled 'class 0' in my code, but actually containing the digit 5):
class Mnist:
    def __init__(self, batch_size):
        dataset_transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))
        ])
        trans = transforms.ToTensor()
        normalize = transforms.Normalize((0.1307,), (0.3081,))
        train_mnist = datasets.MNIST("../data", train=True, transform=dataset_transform)
        test_mnist = datasets.MNIST("../data", train=False, transform=dataset_transform)
        train_image, train_label = train_mnist.train_data, train_mnist.train_labels
        test_image, test_label = test_mnist.test_data, test_mnist.test_labels
        train_0 = [normalize(trans(train_image[key].unsqueeze(2).numpy())) for (key, label) in enumerate(train_label) if int(label) == 5]
        test_0 = [test_image[key] for (key, label) in enumerate(test_label) if int(label) == 5]
        train_label_0 = [train_label[key] for (key, label) in enumerate(train_label) if int(label) == 5]
        test_label_0 = [test_label[key] for (key, label) in enumerate(test_label) if int(label) == 5]
        train_dataset = tuple(zip(train_0, train_label_0))
        test_dataset = tuple(zip(test_0, test_label_0))
        self.train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
        self.test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
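As a side note, a shorter route to the same single-class loader is to let torchvision apply the transforms and pick out one class with torch.utils.data.Subset. This is only a sketch, assuming the dataset_transform defined above:
from torch.utils.data import DataLoader, Subset

# transforms are applied lazily by the dataset itself
train_mnist = datasets.MNIST("../data", train=True, download=True,
                             transform=dataset_transform)
# indices of every digit-5 sample (iterating the dataset applies the transform here)
idx = [i for i, (_, label) in enumerate(train_mnist) if label == 5]
train_loader = DataLoader(Subset(train_mnist, idx), batch_size=100, shuffle=True)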

Order of rotated images by using a custom generator

I use a custom image data generator for my project. It receives batches of images and returns [0, 90, 180 and 270] degree rotated versions of the images with the corresponding class indices {0: 0, 1: 90, 2: 180, 3: 270}. Let's assume we have images A, B and C in a batch, and images A to Z in the whole data set. All the images are naturally in 0-degree orientation. Initially I returned all the rotated images at the same time. Here is a sample of a returned batch: [A0, B0, C0, A1, B1, C1, ..., A3, B3, C3]. But this gave me useless results. To compare my approach, I trained the same model using my generator and the built-in Keras ImageDataGenerator with flow_from_directory. For the built-in function I manually rotated the original images and stored them in separate folders. Here are the accuracy plots for comparison:
I used only a few images just to see if there is any difference. From the plots it is obvious that the custom generator is not correct. Hence I think it must return the images as [[A0, B0, C0], [D0, E0, F0], ..., [..., Z0]], then [[A1, B1, C1], [D1, E1, F1], ..., [..., Z1]] and so on. To do this I must use the following function multiple times (in my case 4):
def next(self):
    with self.lock:
        # get input data index and size of the current batch
        index_array = next(self.index_generator)
    # create array to hold the images
    return self._get_batches_of_transformed_samples(index_array)
This function iterates through the directory and returns batches of images. When it reaches the last image, it finishes and the next epoch starts. In my case, I want to run this 4 times per epoch, passing the rotation angle as an argument like this: self._get_batches_of_transformed_samples(index_array, rotation_angle). I was wondering if this is possible or not? If not, what could be the solution? Here is the current data generator code:
def _get_batches_of_transformed_samples(self, index_array):
    # create lists to hold the images and labels
    batch_x = []
    batch_y = []
    # create angle categories corresponding to number of rotation angles
    angle_categories = list(range(0, len(self.target_angles)))
    # generate rotated images and corresponding labels
    for rotation_angle, angle_indice in zip(self.target_angles, angle_categories):
        for i, j in enumerate(index_array):
            if self.filenames is None:
                image = self.images[j]
                if len(image.shape) == 2:
                    image = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)
            else:
                is_color = int(self.color_mode == 'rgb')
                image = cv2.imread(self.filenames[j], is_color)
                if is_color:
                    if image is not None:
                        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
            # do nothing if the image is None
            if image is not None:
                rotated_im = rotate(image, rotation_angle, self.target_size[:2])
                if self.preprocess_func:
                    rotated_im = self.preprocess_func(rotated_im)
                # add dimension to account for the channels if the image is greyscale
                if rotated_im.ndim == 2:
                    rotated_im = np.expand_dims(rotated_im, axis=2)
                batch_x.append(rotated_im)
                batch_y.append(angle_indice)
    # convert lists to numpy arrays
    batch_x = np.asarray(batch_x)
    batch_y = np.asarray(batch_y)
    batch_y = to_categorical(batch_y, len(self.target_angles))
    return batch_x, batch_y

def next(self):
    with self.lock:
        # get input data index and size of the current batch
        index_array = next(self.index_generator)
    # create array to hold the images
    return self._get_batches_of_transformed_samples(index_array)
Hmm, I would probably do this through keras.utils.Sequence:
import cv2
import numpy as np
from keras.utils import Sequence

class RotationSequence(Sequence):
    def __init__(self, x_set, y_set, batch_size, rotations=(0, 90, 180, 270)):
        self.rotations = rotations
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.x) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        x, y = [], []
        for rot in self.rotations:
            x += [rotate(cv2.imread(file_name), rot) for file_name in batch_x]
            y += list(batch_y)
        return np.array(x), np.array(y)

    def on_epoch_end(self):
        # reshuffle the data between epochs
        shuffle_idx = np.random.permutation(len(self.x))
        self.x, self.y = self.x[shuffle_idx], self.y[shuffle_idx]
And then just pass the batcher to model.fit_generator():
rotation_batcher = RotationSequence(...)
model.fit_generator(rotation_batcher,
                    steps_per_epoch=len(rotation_batcher),
                    validation_data=validation_batcher,
                    epochs=epochs)
This allows you to have more control over the batches being fed into your model. This implementation will almost run: you just need to implement the rotate() function used in __getitem__. Also, the actual batch size will be 4 times batch_size, because I just duplicated and rotated each batch. Hope this is helpful to you.
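One possible rotate() to drop in (a sketch, not part of the answer above; it rotates about the image center with OpenCV and keeps the original size):
import cv2

def rotate(image, angle):
    # build a 2x3 affine matrix for a rotation about the image center
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    return cv2.warpAffine(image, M, (w, h))
For the fixed angles 90/180/270, np.rot90 would also work and avoids any interpolation.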

Keras : Dealing with large image datasets

I am trying to fit a model using a large image dataset. I have 14 GB of RAM, and the dataset is 40 GB. I tried to use fit_generator, but I ended up with a method that does not delete the loaded batches after using them.
If there is any way to solve the problem, or any resources on it, thanks for pointing me to them.
Thanks.
The generator code is:
class Data_Generator(Sequence):
    def __init__(self, image_filenames, labels, batch_size):
        self.image_filenames, self.labels = image_filenames, labels
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.image_filenames) / float(self.batch_size)))

    def __format_labels__(self, gd_truth):
        cols = gd_truth.columns
        y = []
        for col in cols:
            y.append(gd_truth[col].values)
        return y

    def __getitem__(self, idx):
        batch_x = self.image_filenames[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]
        gd_truth = pd.DataFrame(data=batch_y, columns=self.labels.columns)
        return np.array([read_image(file_name) for file_name in batch_x]), self.__format_labels__(gd_truth)
Then I have created two generators for train and validation images:
my_training_batch_generator = Data_Generator(training_filenames, trainTargets, batch_size)
my_validation_batch_generator = Data_Generator(validation_filenames, valTargets, batch_size)
The fit_generator call is as follows:
num_epochs = 10
model.fit_generator(generator=my_training_batch_generator,
                    steps_per_epoch=(num_training_samples // batch_size),
                    epochs=num_epochs,
                    verbose=1,
                    validation_data=my_validation_batch_generator,
                    validation_steps=(num_validation_samples // batch_size),
                    max_queue_size=16)

Correct way of doing data augmentation in TensorFlow with the dataset api?

So, I've been playing around with the TensorFlow dataset API for loading images and segmentation masks (for a semantic segmentation project). I would like to generate batches of images and masks where each image has randomly gone through any combination of pre-processing functions: brightness changes, contrast changes, cropping, saturation changes, etc. So the first image in my batch may have no pre-processing, the second may have saturation changes, the third may have brightness and saturation changes, and so on.
I tried the following:
import tensorflow as tf
from tensorflow.contrib.data import Dataset, Iterator
import random

def _resize_image(image, mask):
    image = tf.image.resize_bicubic(image, [480, 640], True)
    mask = tf.image.resize_bicubic(mask, [480, 640], True)
    return image, mask

def _corrupt_contrast(image, mask):
    image = tf.image.random_contrast(image, 0, 5)
    return image, mask

def _corrupt_saturation(image, mask):
    image = tf.image.random_saturation(image, 0, 5)
    return image, mask

def _corrupt_brightness(image, mask):
    image = tf.image.random_brightness(image, 5)
    return image, mask

def _random_crop(image, mask):
    seed = random.random()
    image = tf.random_crop(image, [240, 320, 3], seed=seed)
    mask = tf.random_crop(mask, [240, 320, 1], seed=seed)
    return image, mask

def _flip_image_horizontally(image, mask):
    seed = random.random()
    image = tf.image.random_flip_left_right(image, seed=seed)
    mask = tf.image.random_flip_left_right(mask, seed=seed)
    return image, mask

def _flip_image_vertically(image, mask):
    seed = random.random()
    image = tf.image.random_flip_up_down(image, seed=seed)
    mask = tf.image.random_flip_up_down(mask, seed=seed)
    return image, mask

def _normalize_data(image, mask):
    image = tf.cast(image, tf.float32)
    image = image / 255.0
    mask = tf.cast(mask, tf.float32)
    mask = mask / 255.0
    return image, mask

def _parse_data(image_paths, mask_paths):
    image_content = tf.read_file(image_paths)
    mask_content = tf.read_file(mask_paths)
    images = tf.image.decode_png(image_content, channels=3)
    masks = tf.image.decode_png(mask_content, channels=1)
    return images, masks

def data_batch(image_paths, mask_paths, params, batch_size=4, num_threads=2):
    # Convert lists of paths to tensors for tensorflow
    images_name_tensor = tf.constant(image_paths)
    mask_name_tensor = tf.constant(mask_paths)
    # Create dataset out of the 2 files:
    data = Dataset.from_tensor_slices((images_name_tensor, mask_name_tensor))
    # Parse images and labels
    data = data.map(_parse_data, num_threads=num_threads,
                    output_buffer_size=6 * batch_size)
    # Normalize images and masks for vals. between 0 and 1
    data = data.map(_normalize_data, num_threads=num_threads,
                    output_buffer_size=6 * batch_size)
    if params['crop'] and not random.randint(0, 1):
        data = data.map(_random_crop, num_threads=num_threads,
                        output_buffer_size=6 * batch_size)
    if params['brightness'] and not random.randint(0, 1):
        data = data.map(_corrupt_brightness, num_threads=num_threads,
                        output_buffer_size=6 * batch_size)
    if params['contrast'] and not random.randint(0, 1):
        data = data.map(_corrupt_contrast, num_threads=num_threads,
                        output_buffer_size=6 * batch_size)
    if params['saturation'] and not random.randint(0, 1):
        data = data.map(_corrupt_saturation, num_threads=num_threads,
                        output_buffer_size=6 * batch_size)
    if params['flip_horizontally'] and not random.randint(0, 1):
        data = data.map(_flip_image_horizontally, num_threads=num_threads,
                        output_buffer_size=6 * batch_size)
    if params['flip_vertically'] and not random.randint(0, 1):
        data = data.map(_flip_image_vertically, num_threads=num_threads,
                        output_buffer_size=6 * batch_size)
    # Shuffle the data queue
    data = data.shuffle(len(image_paths))
    # Create a batch of data
    data = data.batch(batch_size)
    data = data.map(_resize_image, num_threads=num_threads,
                    output_buffer_size=6 * batch_size)
    # Create iterator
    iterator = Iterator.from_structure(data.output_types, data.output_shapes)
    # Next element op
    next_element = iterator.get_next()
    # Dataset init op
    init_op = iterator.make_initializer(data)
    return next_element, init_op
But all the batches returned by this have the same transformations applied to them, not different combinations. My guess is that the random.randint calls persist and are not actually re-run for each batch. If so, how do I fix this to get the desired result?
An example of how I plan to use it (I feel that's irrelevant to the problem, but people might still want to know) can be found here.
So the problem was indeed that the control flow with the if statements happens in Python, and is only executed once when the graph is created. To do what I wanted, I had to define a placeholder containing the boolean values of whether to apply each function (feeding in a new boolean tensor per iteration to change the augmentation), with the control flow handled by tf.cond. I pushed the new code to the GitHub link posted in the question above, if anyone is interested.
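To make the idea concrete, here is a minimal sketch of the per-element variant. It uses an in-graph random draw instead of the placeholder from my actual fix, so no feed_dict is needed; the function name is illustrative:
def _maybe_corrupt_brightness(image, mask):
    # the coin flip is a graph op, re-evaluated for every element,
    # unlike a Python-level random.randint that is frozen at graph build time
    do_aug = tf.random_uniform([], 0.0, 1.0) < 0.5
    image = tf.cond(do_aug,
                    lambda: tf.image.random_brightness(image, 0.2),
                    lambda: image)
    return image, mask

data = data.map(_maybe_corrupt_brightness, num_threads=num_threads,
                output_buffer_size=6 * batch_size)
With one such wrapper per augmentation, every image gets its own combination of transformations.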
