Load several images without labels in Keras CNN - python-3.x

I have several .jpeg images with different names, that I want to load into a cnn in a jupyter notebook to have them classified. The only way I found was:
from tensorflow.keras.preprocessing import image
import numpy as np

test_image = image.load_img("name_of_picture.jpeg", target_size=(64,64))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis=0)
result = cnn.predict(test_image)
All the other things I found in the Keras API, like tf.keras.preprocessing.image_dataset_from_directory(), seem to only work on labeled data. Sadly I can't "simply" iterate over the names of the pictures, as they are all named differently. Is there a way to predict all of them at once without naming every single picture?
Thanks for your help,
Nick

The solution: tf.keras.preprocessing.image_dataset_from_directory can be updated to return both the dataset and the image paths, as explained here -> https://stackoverflow.com/a/63725072/4994352
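In recent TensorFlow versions you may not even need to modify anything: passing labels=None skips labels entirely, and the returned dataset exposes a file_paths attribute. A minimal sketch under that assumption, reusing the cnn model from the question (the directory path is a placeholder):

import tensorflow as tf

# labels=None yields images only; shuffle=False keeps the batch order
# aligned with ds.file_paths. The path and `cnn` are assumptions.
ds = tf.keras.preprocessing.image_dataset_from_directory(
    "path/to/your/images",  # hypothetical folder containing the .jpeg files
    labels=None,
    image_size=(64, 64),
    shuffle=False)

paths = ds.file_paths          # file names, in the same order as the predictions
predictions = cnn.predict(ds)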

There are multiple ways. For larger data it is useful to use a tf.data.Dataset, as it can be tweaked for performance quite easily. I will give you the non-performance-optimized code. Replace <YOUR PATH INCL. REGEX> with a path like ../input/pokemon-images-and-types/images/*/*.
import tensorflow as tf
from tensorflow.data.experimental import AUTOTUNE

def load(file_path):
    img = tf.io.read_file(file_path)
    img = tf.image.decode_jpeg(img, channels=3)
    ... # do some preprocessing like resizing, if necessary
    return img

list_ds = tf.data.Dataset.list_files(str('<YOUR PATH INCL. REGEX>'), shuffle=True) # get all images from subfolders
train_dataset = list_ds.take(-1)
# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
train_dataset = train_dataset.map(load, num_parallel_calls=AUTOTUNE)
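To actually run predictions on such a dataset you still need to batch it. A minimal sketch, assuming the model from the question is called cnn and that load() resizes images to the model's input shape:

# Hedged sketch: batch the mapped dataset, then predict with the `cnn`
# model from the question (assumed to accept the images load() produces).
train_dataset = train_dataset.batch(32).prefetch(AUTOTUNE)
predictions = cnn.predict(train_dataset)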

Related

K-fold cross-validation on images

Let's say I have some pictures divided into 3 categories ("cat", "dog", "mouse") and my DL net is written in Keras.
The design I used is the same as in this picture (1):
I split the data into three different folders: training, validation and test.
The net should be able to recognize a cat, dog or a mouse given a picture. The accuracy I get is around 98%.
It works.
But for some reasons I need to change that design. I would like to use the K-fold cross-validation process, and the schema should now look like this (2):
Now my problem is that I don't know how to split and distribute the original data according to the schema in Fig. 2.
I can only imagine 2 different ways. Let's forget the test directory for the moment:
I create 2 folders: "Training" and "Validation". In both is the structure the same as in Fig. 1: Three subdirectory for every categories. Now the problem is: should I move the data around when progressing from Fold 1 to Fold 3? Or I can allocate once the images into the subdirectories?
I create 2 folders: "Training" and "Validation", BUT I mix all images togheter. No subdirectory. In this case I have the problem that I lose the connection between the picture name and the pet on it. How can I tell Keras, which animal should be identified?
Personally I would mix all images togheter, no matter what they show. But I would save the information of the content into a file. In this case I pass to Keras the directory (Validation or Training) and a file containing the name of all files and their content.
What would you suggest?
Ok, I can answer my own question.
The easiest way is just to use KFold from sklearn in the Python script:
from sklearn.model_selection import KFold
After that you need to instantiate KFold:
kfold = KFold(n_splits=4, shuffle=True)
and you iterate over the split dataset like this:
datagen = ImageDataGenerator(rescale=1. / 255.)
for train, test in kfold.split(df):
    # df is the whole dataset (all together!)
    df_train = df.iloc[train, :]  # `train` comes from the for ... in loop
    df_test = df.iloc[test, :]    # the same for `test`
    train_generator = datagen.flow_from_dataframe(dataframe=df_train,
                                                  directory=dataset_dir,
                                                  ...)
    test_generator = datagen.flow_from_dataframe(dataframe=df_test,
                                                 directory=dataset_dir,
                                                 ...)
    model = models.Sequential()
    .....
    model.compile(...)
    model.fit(...)
and it is done! The dataset is now split into partitions!
Note that the ImageDataGenerator instance is not created inside the for loop!
And please note that the model creation, compile() and fit() calls must be inside the for loop.
The code above works very well for me.

Multiprocessing ResNet Feature Extraction with Image URLs

I have a simple function that takes an image URL and extracts features from it using ResNet in Keras, then hands them off to an XGBoost model loaded from a pkl file.
# Imports assumed from context: the question's code references these names.
from skimage import io
from PIL import Image
from numpy import array, expand_dims
from pandas import DataFrame
import xgboost as xgb
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input

def classify(img, resnet_model, loaded_model):
    try:
        images = io.imread(img.strip())
        images = Image.fromarray(images)
        test_image = images.resize((224, 224))  # resize the PIL image, not the URL string
        test_image = image.img_to_array(test_image)
        test_image = expand_dims(test_image, axis=0)
        img_data = preprocess_input(test_image)
        image_features = resnet_model.predict(img_data)
        image_features_array = array(image_features)
        predicted_image = loaded_model.predict(xgb.DMatrix(DataFrame(image_features_array)))
    except Exception:
        predicted_image = 'Broken URL'
    return predicted_image
Currently I am just looping through a list of image URLs and it works fine, but I will need it to perform much faster. The code itself may not be the most efficient yet, but I am mostly concerned with multiprocessing. My attempts either hang or immediately result in an empty list.
There is a similar question posted years ago here: question, but the answers were not very satisfying and involved holding a batch of image files locally. I would prefer to just have one worker making the request for the image and then predicting that image.
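No answer is attached here, but one common pattern is to load the models once per worker through a Pool initializer, so they are never pickled or shared between processes. A minimal sketch under that assumption; classify is the function above, and load_resnet_model/load_xgb_model are hypothetical loaders standing in for however the models are actually built:

from multiprocessing import Pool

_resnet_model = None
_loaded_model = None

def init_worker():
    # Each worker process loads its own copy of the models exactly once;
    # Keras/XGBoost models generally don't survive being pickled to workers.
    global _resnet_model, _loaded_model
    _resnet_model = load_resnet_model()  # hypothetical loader
    _loaded_model = load_xgb_model()     # hypothetical loader for the pkl file

def classify_url(url):
    return classify(url, _resnet_model, _loaded_model)

if __name__ == '__main__':
    urls = [...]  # the list of image URLs
    # On some platforms TensorFlow hangs under fork; the 'spawn' start method
    # (multiprocessing.get_context('spawn')) may be needed.
    with Pool(processes=4, initializer=init_worker) as pool:
        results = pool.map(classify_url, urls)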

Capture output of ImageDataGenerator without saving images to drive?

I'm using image augmentation to train my model, applying transforms like brightness and color shifts. I like to preview the effects of the augmentations before putting them into use. Normally, I do it like this:
from keras_preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(horizontal_flip=True,
                             fill_mode="nearest",
                             zoom_range=0.3,
                             rotation_range=360)
i = 0
for batch in datagen.flow_from_directory(directory='./my_images/',
                                          batch_size=1,
                                          save_to_dir='.',
                                          save_prefix='aug',
                                          save_format='jpeg'):
    i += 1
    if i > 2:
        break # Yields two images
Then I open the images and look at them, or read them in and print them to my notebook. But I don't like this; it's clunky. Is there a way to directly capture the altered images produced by my generator? I'd like to add them to an array.
Thanks!
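No answer is attached here, but a minimal sketch of one way to do it: drop the save_to_dir arguments and collect the batches the generator yields. This reuses the datagen and directory from the question; note that flow_from_directory returns (images, labels) tuples by default:

import numpy as np

gen = datagen.flow_from_directory(directory='./my_images/', batch_size=1)

augmented = []
for _ in range(2):
    batch_images, batch_labels = next(gen)  # default class_mode yields (x, y)
    augmented.append(batch_images[0])       # batch_size=1 -> the only image in the batch

augmented = np.array(augmented)  # shape: (2, height, width, channels)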

how to save resized images using ImageDataGenerator and flow_from_directory in keras

I am resizing my RGB images stored in a folder (two classes) using the following code:
from keras.preprocessing.image import ImageDataGenerator
dataset = ImageDataGenerator()
dataset.flow_from_directory('/home/1', target_size=(50,50), save_to_dir='/home/resized',
                            class_mode='binary', save_prefix='N', save_format='jpeg', batch_size=10)
My data tree is like the following:
1/
    1_1/
        img1.jpg
        img2.jpg
        ........
    1_2/
        IMG1.jpg
        IMG2.jpg
        ........
resized/
    1_1/ (here I want to save resized images of 1_1)
    1_2/ (here I want to save resized images of 1_2)
After running the code I get the following output, but no images:
Found 271 images belonging to 2 classes.
Out[12]: <keras.preprocessing.image.DirectoryIterator at 0x7f22a3569400>
How to save images?
Here's a very simple version of saving augmented images of one image wherever you want:
Step 1: Initialize the image data generator
Here we figure out what changes we want to make to the original image and generate the augmented images.
You can read up about the different effects here: https://keras.io/preprocessing/image/
datagen = ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
                             height_shift_range=0.1, shear_range=0.15,
                             zoom_range=0.1, channel_shift_range=10,
                             horizontal_flip=True)
Step 2: Pick the original image to perform the augmentation on and read it in:
image_path = 'C:/Users/Darshil/gitly/Deep-Learning/My Projects/CNN_Keras/test_augment/caty.jpg'
image = np.expand_dims(ndimage.imread(image_path), 0)
Step 3: Pick where you want to save the augmented images:
save_here = 'C:/Users/Darshil/gitly/Deep-Learning/My Projects/CNN_Keras/test_augment'
Step 4: Fit the original image:
datagen.fit(image)
Step 5: Iterate over the image and save using the save_to_dir parameter:
for x, val in zip(datagen.flow(image,                  # image we chose
                               save_to_dir=save_here,  # where to save the output
                               save_prefix='aug',      # saves files like 'aug_0912.png', a new number for each augmented image
                               save_format='png'),
                  range(10)):  # zipping with range(10) limits this to 10 augmented images; otherwise it would loop forever
    pass
The flow_from_directory method gives you an "iterator", as described in your output. An iterator doesn't really do anything on its own; it's waiting to be iterated over, and only then will the actual data be read and generated.
An iterator in Keras for fitting is to be used like this:
generator = dataset.flow_from_directory('/home/1', target_size=(50,50), save_to_dir='/home/resized',
                                        class_mode='binary', save_prefix='N', save_format='jpeg', batch_size=10)
for inputs, outputs in generator:
    # do things with each batch of inputs and outputs
Normally, instead of doing the loop above, you just pass the generator to a fit_generator method. There is no real need to do a for loop:
model.fit_generator(generator, ......)
Keras will only save images after they're loaded and augmented by iterating over the generator.
It's only a declaration; you must consume that generator, for example with .next():
from keras.preprocessing.image import ImageDataGenerator

dataset = ImageDataGenerator()
image = dataset.flow_from_directory('/home/1', target_size=(50,50), save_to_dir='/home/resized',
                                    class_mode='binary', save_prefix='N', save_format='jpeg', batch_size=10)
image.next()
then you will see images in /home/resized
You may try this simple code example and modify it according to your needs
(it generates augmented images from your data and then saves them into a different folder):
from keras.preprocessing.image import ImageDataGenerator

# Due to the structure of ImageDataGenerator, you need another folder level under
# train that contains your data, for example: data/train/faces
data_dir = 'data/train'
save_dir = 'data/resized'

datagen = ImageDataGenerator(rescale=1./255)

resized = datagen.flow_from_directory(data_dir, target_size=(224, 224),
                                      save_to_dir=save_dir,
                                      color_mode="rgb",       # choose color mode
                                      class_mode='categorical',
                                      shuffle=True,
                                      save_prefix='N',
                                      save_format='jpg',      # format
                                      batch_size=1)

for i in range(len(resized)):
    resized.next()
In case you want to save the images under folders having the same name as the label, you can loop over a list of labels and call the augmentation code within the loop.
import tensorflow as tf

# Augmentation + save augmented images under the augmented folder
IMAGE_SIZE = 224
BATCH_SIZE = 500
LABELS = ['lbl_a', 'lbl_b', 'lbl_c']

for label in LABELS:
    datagen_kwargs = dict(rescale=1./255)
    dataflow_kwargs = dict(target_size=(IMAGE_SIZE, IMAGE_SIZE),
                           batch_size=BATCH_SIZE, interpolation="bilinear")

    train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
        rotation_range=40,
        horizontal_flip=True,
        width_shift_range=0.1, height_shift_range=0.1,
        shear_range=0.1, zoom_range=0.1,
        **datagen_kwargs)

    train_generator = train_datagen.flow_from_directory(
        'original_images', subset="training", shuffle=True,
        save_to_dir='aug_images/' + label, save_prefix='aug',
        classes=[label], **dataflow_kwargs)

    # The following line triggers execution of train_generator
    batch = next(train_generator)
So why do this when the generator can be passed directly to the model? In case you want to use the tflite-model-maker, which does not accept a generator and expects labelled data under a folder for each label:
from tflite_model_maker import ImageClassifierDataLoader
data = ImageClassifierDataLoader.from_folder('aug_images')
Result
aug_images
|
|__ lbl_a
|     |_____ aug_img_a.png
|
|__ lbl_b
|     |_____ aug_img_b.png
|
|__ lbl_c
|     |_____ aug_img_c.png
Note: You need to ensure the folders already exist.
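A small helper for that, as a sketch reusing the LABELS list from the snippet above:

import os

# Create the per-label output folders before running the augmentation loop.
for label in LABELS:
    os.makedirs(os.path.join('aug_images', label), exist_ok=True)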
import os
import cv2
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
# `preprocess_input` comes from whichever keras.applications model you use

datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
                             rotation_range=15,
                             width_shift_range=0.2,
                             height_shift_range=0.2,
                             shear_range=0.2,
                             zoom_range=0.2,
                             horizontal_flip=True,
                             fill_mode='nearest',
                             brightness_range=[0.5, 1.5])

DATA_DIR = 'splited/train/'
save_here = 'aug dataset/train/normal2/'

cancer = os.listdir(DATA_DIR + 'cancer/')
for i, image_name in enumerate(cancer):
    try:
        if image_name.split('.')[1] == 'png':
            image = np.expand_dims(cv2.imread(DATA_DIR + 'cancer/' + image_name), 0)
            for x, val in zip(datagen.flow(image,                  # image we chose
                                           save_to_dir=save_here,  # where to save the output
                                           save_prefix='aug',      # saves files like 'aug_0912.png'
                                           save_format='png'),
                              range(10)):  # range(10): 10 augmented images per input, otherwise it loops forever
                pass
    except Exception:
        print("Could not read image {} with name {}".format(i, image_name))

Train your own image with tensorflow?

I have one image (I don't have a dataset). I want to train a model in TensorFlow
such that I can use that model to recognize the image fast.
I have implemented one such thing, but it doesn't work:
import tensorflow as tf
filenames = ['pic.jpg']
# step 2
filename_queue = tf.train.string_input_producer(filenames)
# step 3: read, decode and resize images
reader = tf.WholeFileReader()
filename, content = reader.read(filename_queue)
image = tf.image.decode_jpeg(content, channels=3)
image = tf.cast(image, tf.float32)
resized_image = tf.image.resize_images(image, [224, 224])
# step 4: Batching
image_batch = tf.train.batch([resized_image], batch_size=8)
Also, how is Vuforia able to recognize things with only one image so fast? I want a similar implementation in TensorFlow.
This is not how machine learning and deep learning work. You can't just grab one element and build a model which explains this one element. If you check a few NN tutorials, you will see that in order to train a reasonable model, people use thousands or even millions of data points.
