from tensorflow.keras.preprocessing.image import ImageDataGenerator
Can we also reshape images with ImageDataGenerator's flow_from_directory method?
For example, we have color images in 10 classes stored in 10 folders, and we provide the path of that directory, say train:
gen = ImageDataGenerator(rescale=1./255, width_shift_range=0.05, height_shift_range=0.05)
train_imgs = gen.flow_from_directory(
'/content/data/train',
target_size=(10,10),
batch_size=1,
class_mode='categorical')
Now my model expects an input shape of 300, and I want to build the training data from this train_imgs, whose images are 10x10x3.
Is there any library, method, or option available to convert this data generator into a matrix in which each column is a flattened image vector?
Generally the best option in these cases is to add a Reshape layer at the start of your model: layers.Reshape((300,), input_shape=(10,10,3)). You can also use layers.Reshape((-1,), input_shape=(10,10,3)), and it will figure out the correct output length automatically.
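A minimal sketch of how that might look in practice (the Dense layer sizes here are placeholders, not part of the original question):
import tensorflow as tf
from tensorflow.keras import layers
# Flatten each 10x10x3 image to a 300-element vector inside the model,
# so the generator can keep yielding (batch, 10, 10, 3) tensors unchanged.
model = tf.keras.Sequential([
    layers.Reshape((300,), input_shape=(10, 10, 3)),
    layers.Dense(64, activation='relu'),    # hypothetical hidden layer
    layers.Dense(10, activation='softmax')  # 10 classes, matching class_mode='categorical'
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
# model.fit(train_imgs, epochs=5)  # train_imgs is the generator defined above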
I've been trying to reshape to NCHW format using the tf.keras.layers.Reshape function, but the final XML file shows this:
There's a Transpose layer after reshaping to (1,6,26,26), and the final shape is (1,26,6,26).
I'm not sure why there's a Transpose layer; I want the shape to be (1,6,26,26).
What's the reason?
First and foremost, you need to understand your model's network topology. Then you can reshape and batch.
You can use the model optimizer to manipulate your input size.
This is how you can do it: Model Optimizer Advanced Reshape, Batching
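As a side note on the Keras side of things: a Reshape only reinterprets the existing memory layout, while moving the channel axis to the front requires actually transposing the data. The short sketch below (with tensor sizes borrowed from the question) illustrates the difference between the two layers:
import tensorflow as tf
x = tf.random.uniform((1, 26, 26, 6))               # channels-last tensor, as Keras normally produces
# Reshape keeps the element order and only relabels the dimensions,
# so the values end up in the wrong positions for a true NCHW layout.
reshaped = tf.keras.layers.Reshape((6, 26, 26))(x)  # shape (1, 6, 26, 26)
# Permute actually moves the channel axis to the front (a real transpose).
permuted = tf.keras.layers.Permute((3, 1, 2))(x)    # shape (1, 6, 26, 26), channels-first data
print(reshaped.shape, permuted.shape)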
I am using Keras's ImageDataGenerator class for data augmentation. Since each image has a bounding box of the relevant object, I want to crop the image to that part before augmenting it. The class has an argument named preprocessing_function that lets us apply a custom function after augmentation and resizing. I want the opposite: first the function should run, and then the augmentation should take place. How can I implement that in the code?
tf.keras.preprocessing.image.ImageDataGenerator(
featurewise_center=False,
samplewise_center=False,
featurewise_std_normalization=False,
samplewise_std_normalization=False,
zca_whitening=False,
zca_epsilon=1e-06,
rotation_range=0,
width_shift_range=0.0,
height_shift_range=0.0,
brightness_range=None,
shear_range=0.0,
zoom_range=0.0,
channel_shift_range=0.0,
fill_mode="nearest",
cval=0.0,
horizontal_flip=False,
vertical_flip=False,
rescale=None,
preprocessing_function=None,
data_format=None,
validation_split=0.0,
dtype=None,
)
preprocessing_function: a function that will be applied to each input. The function will run after the image is resized and augmented. The function should take one argument: one image (Numpy tensor with rank 3) and should output a Numpy tensor with the same shape.
Keras team members have said that the ImageDataGenerator class is legacy. They suggested using transformation (preprocessing) layers instead, which can be applied at any point during training.
Example usage of transformation layers: Keras Transformation layers example page
Github Issue (Closed): GitHub Issues
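Following that suggestion, one way to crop first and only then augment is to do the cropping in your own loading step (for example a tf.data pipeline) and apply Keras preprocessing layers afterwards. A rough sketch, where the bounding-box format and the image size are hypothetical placeholders:
import tensorflow as tf
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])
def load_example(path, bbox, label):
    # bbox is assumed to be (y, x, height, width) in pixels for this sketch.
    img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    img = tf.image.crop_to_bounding_box(img, bbox[0], bbox[1], bbox[2], bbox[3])  # crop happens first
    img = tf.image.resize(img, (224, 224)) / 255.0
    return img, label
# dataset = tf.data.Dataset.from_tensor_slices((paths, bboxes, labels))
# dataset = dataset.map(load_example).batch(32)
# dataset = dataset.map(lambda x, y: (augment(x, training=True), y))  # augmentation runs after the crop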
In order to do multiclass segmentation, the masks need to be one-hot encoded. For example, if I have 100 images of shape 224x224x3 with 5 different classes, I would have a set of masks with shape (100, 224, 224, 5), i.e. the last dimension (the channel) refers to the class of the pixel. Take a grayscale mask that contains 6 classes, where each pixel has a label 1-6: I can easily convert this to the categorical mask I need using tf.keras.utils.to_categorical.
If I use the ImageDataGenerator provided with Keras, I know I can create a generator for both images and masks and then zip them together (as the code below shows), but what confuses me is how to convert the masks into this categorical one-hot-encoded structure while using the ImageDataGenerator. The ImageDataGenerator only finds files in directories that are saved as images, so I can't convert the masks and then save them as numpy arrays (the one-hot-encoded masks) for the generator to pick up, since images can't have more than 4 channels, right? Is there some way of telling the generator to do this conversion? Or does this limit the number of classes I can have in my problem?
One solution is to write my own custom generator with the Sequence class, which I have done, but I'm keen to understand whether this is possible with Keras's built-in ImageDataGenerator. Could writing a Lambda layer on the network be the solution?
mask_categorical = tf.keras.utils.to_categorical(mask)  # converts a 224x224 grayscale mask to its one-hot-encoded version
imageDataGen = ImageDataGenerator(rescale=1/255.)
maskDataGen = ImageDataGenerator()
imageGenerator = imageDataGen.flow_from_directory("dataset/image/",
class_mode=None, seed=40)
maskGenerator = maskDataGen.flow_from_directory("dataset/mask/",
class_mode=None, seed=40)
trainGenerator = zip(imageGenerator, maskGenerator)
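One possibility (a sketch, not a confirmed built-in option of ImageDataGenerator) is to keep the zipped generators and one-hot encode the masks on the fly in a thin wrapper, assuming the mask generator is created with color_mode='grayscale' so each mask batch has shape (batch, H, W, 1):
import tensorflow as tf
def segmentation_generator(image_gen, mask_gen, num_classes=6):
    # Wraps the zipped generators and converts each integer mask batch
    # to a one-hot mask of shape (batch, H, W, num_classes) on the fly.
    for images, masks in zip(image_gen, mask_gen):
        labels = masks[..., 0] - 1   # shift labels 1-6 down to 0-5 before one-hot encoding
        yield images, tf.keras.utils.to_categorical(labels, num_classes=num_classes)
# trainGenerator = segmentation_generator(imageGenerator, maskGenerator)
# model.fit(trainGenerator, steps_per_epoch=..., epochs=...)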
To train a model using Keras, should I load all the images I have to an array to create something like
x_train, y_train
Or is there a better way to read the images on the fly while training? I am not looking to use the ImageDataGenerator class, since my output is an array of points, not classes based on directory names.
I managed to get my data csv file to contain the array of points and image file name in 9 columns as follows:
x1 x2 ..... x8 Image_file_name
You can use this data with ImageDataGenerator. You incorrectly assume that it needs folders for classes, but that only applies to flow_from_directory. The flow_from_dataframe method lets you load data from a Pandas dataframe instead, for example:
idg = ImageDataGenerator(...)
df = pd.read_csv('your_data.csv')
generator = idg.flow_from_dataframe(df, directory='image folder', x_col='filename_column',
                                    y_col=['col1', 'col2', ..., 'coln'],
                                    class_mode='other')  # 'raw' on newer Keras versions
This generator will read data from the dataframe, load each image from directory using the filename given in x_col, and use the corresponding row to build the targets, which in this case will be a numpy array of the values of the columns in y_col. More information about this method can be found in the Keras documentation.
Loading the entire data set into memory as an array is not a great idea, because the memory consumption could get out of control, so you should use a generator. ImageDataGenerator and flow_from_dataframe are a great way of loading images in Keras. Since you don't want to use ImageDataGenerator (can you mention why?), you can create your own generator function that loads chunks of images into memory. If you load your data with a generator, make sure you use the fit_generator and predict_generator functions.
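A minimal sketch of such a custom generator, assuming the CSV layout described in the question (eight point columns plus an Image_file_name column); the batch size, image size and directory layout are placeholders:
import numpy as np
import pandas as pd
from tensorflow.keras.preprocessing import image
def csv_image_generator(csv_path, image_dir, batch_size=32, target_size=(224, 224)):
    df = pd.read_csv(csv_path)
    point_cols = [c for c in df.columns if c != 'Image_file_name']
    while True:                    # Keras expects training generators to loop forever
        df = df.sample(frac=1)     # reshuffle every epoch
        for start in range(0, len(df), batch_size):
            batch = df.iloc[start:start + batch_size]
            imgs = np.stack([
                image.img_to_array(image.load_img(f"{image_dir}/{name}", target_size=target_size))
                for name in batch['Image_file_name']
            ]) / 255.0
            targets = batch[point_cols].to_numpy(dtype='float32')
            yield imgs, targets
# model.fit_generator(csv_image_generator('your_data.csv', 'image folder'), steps_per_epoch=..., epochs=10)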
To load unlabeled data you can do the following hack:
datagen = ImageDataGenerator()
test_data = datagen.flow_from_directory('.', classes=['directory_where_images_are_stored'])
For more information check out link [1].
[1] https://kylewbanks.com/blog/loading-unlabeled-images-with-imagedatagenerator-flowfromdirectory-keras
I trained a CNN in Keras with images in a folder (two types of bees). I have a second folder with unlabeled bee images for prediction.
I'm able to predict a single image (as per below code).
from keras.preprocessing import image
test_image = image.load_img('data/test/20300.jpg')
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
prob = classifier.predict_proba(test_image)
Result:
prob
Out[214]: array([[1., 0.]], dtype=float32)
I would like to be able to predict all of the images (around 300).
Is there a way to load and predict all the images in a batch? And will predict() be able to handle it, since it expects an array to predict on?
Model.predict_proba() (which is really just a synonym of predict()) accepts batch input. From the documentation:
Generates class probability predictions for the input samples.
The input samples are processed batch by batch.
You just need to load several images and glue them together into a single numpy array. By expanding dimension 0, your code already uses a batch of size 1 for test_image. To complete the picture, there's also a Model.predict_on_batch() method.
To load a batch of test images you can use image.list_pictures or ImageDataGenerator.flow_from_directory() (which is compatible with Model.predict_generator() method, see the examples in the documentation).
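As a short sketch of the first option, you can stack every image from the test folder into a single numpy array and predict in one call (the folder path is taken from the snippet above; the images are assumed to all have the same size, otherwise pass a target_size to load_img):
import os
import numpy as np
from keras.preprocessing import image
test_dir = 'data/test'
filenames = sorted(f for f in os.listdir(test_dir) if f.lower().endswith('.jpg'))
# Load each image, convert it to an array, and stack along a new batch dimension.
batch = np.stack([
    image.img_to_array(image.load_img(os.path.join(test_dir, fname)))
    for fname in filenames
])
probs = classifier.predict_proba(batch)   # shape (num_images, num_classes)
for fname, p in zip(filenames, probs):
    print(fname, p)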