Plotting the validation accuracy of multiple different CNN classifiers - python-3.x
I am evaluating different CNN classifiers that have different parameters (e.g. learning rate, number of filters, and dropout).
I have successfully plotted the accuracy of each model individually, for both the training and validation datasets over 100 epochs, using the code below:
history = classifier.fit_generator(training_set,
                                   steps_per_epoch=nb_training_samples // batchsize,
                                   epochs=100,
                                   validation_data=test_set,
                                   validation_steps=nb_testing_samples // batchsize)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
However, I would like a single plot that compares the validation accuracy of the different classifiers (each with different parameters) over the 100 epochs. Is that feasible?
Yes, it is feasible. Plots like these are also useful when you are tuning hyperparameters such as the learning rate, filter size, dropout rate, and number of epochs.
In fact, you can pick different combinations of learning rate, kernel size, dropout, epochs, and other hyperparameters, and plot the validation accuracy of each combination to choose the best one.
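For example, here is a minimal sketch of how such a comparison plot could be built directly on top of the code in the question. It assumes a hypothetical dict called classifiers that maps a label to an already-compiled model, and reuses training_set, test_set, nb_training_samples, nb_testing_samples and batchsize from the question.
import matplotlib.pyplot as plt
histories = {}
for name, clf in classifiers.items():
    # Train each candidate classifier and keep its History object.
    histories[name] = clf.fit_generator(training_set,
                                        steps_per_epoch=nb_training_samples // batchsize,
                                        epochs=100,
                                        validation_data=test_set,
                                        validation_steps=nb_testing_samples // batchsize)
# Overlay the validation-accuracy curves of all classifiers on one figure.
plt.figure()
for name, history in histories.items():
    plt.plot(history.history['val_accuracy'], label=name)
plt.title('Validation Accuracy Comparison')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(loc='upper left')
plt.show()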
You can also run a grid search over the model to select the best set of parameters. You can find more about Grid Search Hyperparameters for Deep Learning Models in Python With Keras here.
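Below is a rough, self-contained sketch of that grid-search idea. It uses the older tensorflow.keras.wrappers.scikit_learn wrapper described in that article (recent TF releases need the separate scikeras package instead), a small hypothetical build_model helper, and random placeholder arrays standing in for real image data.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dropout, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def build_model(learning_rate=0.001, dropout_rate=0.0):
    # Small CNN; the hyperparameters are exposed as arguments so GridSearchCV can vary them.
    model = Sequential([
        Conv2D(16, 3, padding='same', activation='relu', input_shape=(150, 150, 3)),
        MaxPooling2D(),
        Flatten(),
        Dropout(dropout_rate),
        Dense(64, activation='relu'),
        Dense(1, activation='sigmoid')
    ])
    model.compile(optimizer=Adam(learning_rate),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

# Placeholder data for illustration only; substitute your own images and labels.
x = np.random.rand(60, 150, 150, 3).astype('float32')
y = np.random.randint(0, 2, size=(60,))

clf = KerasClassifier(build_fn=build_model, epochs=3, batch_size=16, verbose=0)
param_grid = {'learning_rate': [0.001, 0.01], 'dropout_rate': [0.0, 0.2]}
grid = GridSearchCV(estimator=clf, param_grid=param_grid, cv=3)
grid_result = grid.fit(x, y)
print('Best score:', grid_result.best_score_, 'Best params:', grid_result.best_params_)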
Here I have added two simple programs along with their validation accuracy plots:
1. A model trained with learning rates increasing by 0.01 per run.
2. A model trained with different optimizers.
Program 1: A model trained with learning rates increasing by 0.01 per run.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
import os
import numpy as np
import matplotlib.pyplot as plt
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
total_train = len(os.listdir(train_cats_dir)) + len(os.listdir(train_dogs_dir))  # number of training images
total_val = len(os.listdir(validation_cats_dir)) + len(os.listdir(validation_dogs_dir))  # number of validation images
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                            directory=train_dir,
                                                            shuffle=True,
                                                            target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                            class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                               directory=validation_dir,
                                                               target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                               class_mode='binary')
model = Sequential([
    Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1)
])
lr = 0.01
for i in range(5):
    adam = Adam(lr)
    print("Model using learning rate of", lr)
    lr = lr + 0.01
    model.compile(optimizer=adam,
                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    history = model.fit_generator(
        train_data_gen,
        steps_per_epoch=total_train // batch_size,
        epochs=epochs,
        validation_data=val_data_gen,
        validation_steps=total_val // batch_size)
    plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['LR=0.01', 'LR=0.02', 'LR=0.03', 'LR=0.04', 'LR=0.05'], loc='upper left')
plt.show()
Output -
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Model using learning rate of 0.01
Epoch 1/15
15/15 [==============================] - 8s 546ms/step - loss: 7.3135 - accuracy: 0.5073 - val_loss: 0.6920 - val_accuracy: 0.4989
Epoch 2/15
15/15 [==============================] - 8s 545ms/step - loss: 0.6929 - accuracy: 0.4968 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 3/15
15/15 [==============================] - 8s 533ms/step - loss: 0.6932 - accuracy: 0.5016 - val_loss: 0.6932 - val_accuracy: 0.5134
Epoch 4/15
15/15 [==============================] - 8s 539ms/step - loss: 0.6932 - accuracy: 0.5080 - val_loss: 0.6930 - val_accuracy: 0.5100
Epoch 5/15
15/15 [==============================] - 8s 534ms/step - loss: 0.6934 - accuracy: 0.4893 - val_loss: 0.6932 - val_accuracy: 0.4978
Epoch 6/15
15/15 [==============================] - 8s 532ms/step - loss: 0.6932 - accuracy: 0.5000 - val_loss: 0.6931 - val_accuracy: 0.4944
Epoch 7/15
15/15 [==============================] - 8s 535ms/step - loss: 0.6932 - accuracy: 0.4995 - val_loss: 0.6931 - val_accuracy: 0.4955
Epoch 8/15
15/15 [==============================] - 8s 540ms/step - loss: 0.6934 - accuracy: 0.5101 - val_loss: 0.6932 - val_accuracy: 0.5022
Epoch 9/15
15/15 [==============================] - 8s 535ms/step - loss: 0.6935 - accuracy: 0.4850 - val_loss: 0.6931 - val_accuracy: 0.5033
Epoch 10/15
15/15 [==============================] - 8s 533ms/step - loss: 0.6932 - accuracy: 0.5021 - val_loss: 0.6931 - val_accuracy: 0.5022
Epoch 11/15
15/15 [==============================] - 8s 536ms/step - loss: 0.6932 - accuracy: 0.5053 - val_loss: 0.6932 - val_accuracy: 0.4877
Epoch 12/15
15/15 [==============================] - 8s 538ms/step - loss: 0.6932 - accuracy: 0.5032 - val_loss: 0.6932 - val_accuracy: 0.4967
Epoch 13/15
15/15 [==============================] - 8s 532ms/step - loss: 0.6932 - accuracy: 0.4973 - val_loss: 0.6932 - val_accuracy: 0.4911
Epoch 14/15
15/15 [==============================] - 8s 532ms/step - loss: 0.6932 - accuracy: 0.4995 - val_loss: 0.6933 - val_accuracy: 0.5089
Epoch 15/15
15/15 [==============================] - 8s 532ms/step - loss: 0.6932 - accuracy: 0.4979 - val_loss: 0.6931 - val_accuracy: 0.4877
Model using learning rate of 0.02
Epoch 1/15
15/15 [==============================] - 8s 538ms/step - loss: 0.6935 - accuracy: 0.5011 - val_loss: 0.6932 - val_accuracy: 0.4933
Epoch 2/15
15/15 [==============================] - 8s 524ms/step - loss: 0.6938 - accuracy: 0.5000 - val_loss: 0.6934 - val_accuracy: 0.5000
Epoch 3/15
15/15 [==============================] - 8s 533ms/step - loss: 0.6931 - accuracy: 0.5005 - val_loss: 0.6932 - val_accuracy: 0.5022
Epoch 4/15
15/15 [==============================] - 8s 538ms/step - loss: 0.6936 - accuracy: 0.4984 - val_loss: 0.6932 - val_accuracy: 0.5000
Epoch 5/15
15/15 [==============================] - 8s 532ms/step - loss: 0.6931 - accuracy: 0.4984 - val_loss: 0.6930 - val_accuracy: 0.4900
Epoch 6/15
15/15 [==============================] - 8s 523ms/step - loss: 0.6934 - accuracy: 0.5096 - val_loss: 0.6934 - val_accuracy: 0.4933
Epoch 7/15
15/15 [==============================] - 8s 534ms/step - loss: 0.6933 - accuracy: 0.5043 - val_loss: 0.6931 - val_accuracy: 0.5033
Epoch 8/15
15/15 [==============================] - 8s 536ms/step - loss: 0.6936 - accuracy: 0.4850 - val_loss: 0.6937 - val_accuracy: 0.5022
Epoch 9/15
15/15 [==============================] - 8s 528ms/step - loss: 0.6935 - accuracy: 0.5048 - val_loss: 0.6932 - val_accuracy: 0.5011
Epoch 10/15
15/15 [==============================] - 8s 529ms/step - loss: 0.6933 - accuracy: 0.4952 - val_loss: 0.6931 - val_accuracy: 0.4967
Epoch 11/15
15/15 [==============================] - 8s 532ms/step - loss: 0.6934 - accuracy: 0.5048 - val_loss: 0.6931 - val_accuracy: 0.4989
Epoch 12/15
15/15 [==============================] - 8s 537ms/step - loss: 0.6933 - accuracy: 0.4989 - val_loss: 0.6932 - val_accuracy: 0.5056
Epoch 13/15
15/15 [==============================] - 8s 529ms/step - loss: 0.6933 - accuracy: 0.5016 - val_loss: 0.6931 - val_accuracy: 0.5089
Epoch 14/15
15/15 [==============================] - 8s 533ms/step - loss: 0.6935 - accuracy: 0.4995 - val_loss: 0.6932 - val_accuracy: 0.4989
Epoch 15/15
15/15 [==============================] - 8s 529ms/step - loss: 0.6931 - accuracy: 0.4920 - val_loss: 0.6932 - val_accuracy: 0.5000
Model using learning rate of 0.03
Epoch 1/15
15/15 [==============================] - 8s 534ms/step - loss: 0.6935 - accuracy: 0.5150 - val_loss: 0.6939 - val_accuracy: 0.4978
Epoch 2/15
15/15 [==============================] - 8s 538ms/step - loss: 0.6948 - accuracy: 0.4904 - val_loss: 0.6932 - val_accuracy: 0.5011
Epoch 3/15
15/15 [==============================] - 8s 531ms/step - loss: 0.6935 - accuracy: 0.5043 - val_loss: 0.6934 - val_accuracy: 0.5067
Epoch 4/15
15/15 [==============================] - 8s 521ms/step - loss: 0.6934 - accuracy: 0.4963 - val_loss: 0.6932 - val_accuracy: 0.5011
Epoch 5/15
15/15 [==============================] - 8s 528ms/step - loss: 0.6938 - accuracy: 0.5010 - val_loss: 0.6932 - val_accuracy: 0.5011
Epoch 6/15
15/15 [==============================] - 8s 536ms/step - loss: 0.6933 - accuracy: 0.5021 - val_loss: 0.6932 - val_accuracy: 0.5011
Epoch 7/15
15/15 [==============================] - 8s 532ms/step - loss: 0.6933 - accuracy: 0.5005 - val_loss: 0.6932 - val_accuracy: 0.5100
Epoch 8/15
15/15 [==============================] - 8s 536ms/step - loss: 0.6933 - accuracy: 0.4963 - val_loss: 0.6933 - val_accuracy: 0.5022
Epoch 9/15
15/15 [==============================] - 8s 529ms/step - loss: 0.6932 - accuracy: 0.5016 - val_loss: 0.6931 - val_accuracy: 0.5067
Epoch 10/15
15/15 [==============================] - 8s 553ms/step - loss: 0.6935 - accuracy: 0.4947 - val_loss: 0.6937 - val_accuracy: 0.5089
Epoch 11/15
15/15 [==============================] - 8s 540ms/step - loss: 0.6936 - accuracy: 0.5021 - val_loss: 0.6930 - val_accuracy: 0.5123
Epoch 12/15
15/15 [==============================] - 8s 535ms/step - loss: 0.6934 - accuracy: 0.4979 - val_loss: 0.6932 - val_accuracy: 0.5011
Epoch 13/15
15/15 [==============================] - 8s 530ms/step - loss: 0.6933 - accuracy: 0.5011 - val_loss: 0.6932 - val_accuracy: 0.4989
Epoch 14/15
15/15 [==============================] - 8s 540ms/step - loss: 0.6932 - accuracy: 0.5027 - val_loss: 0.6933 - val_accuracy: 0.4944
Epoch 15/15
15/15 [==============================] - 8s 537ms/step - loss: 0.6934 - accuracy: 0.4989 - val_loss: 0.6931 - val_accuracy: 0.4922
Model using learning rate of 0.04
Epoch 1/15
15/15 [==============================] - 8s 549ms/step - loss: 0.6935 - accuracy: 0.5134 - val_loss: 0.6942 - val_accuracy: 0.5000
Epoch 2/15
15/15 [==============================] - 8s 547ms/step - loss: 0.6948 - accuracy: 0.4840 - val_loss: 0.6931 - val_accuracy: 0.4933
Epoch 3/15
15/15 [==============================] - 8s 543ms/step - loss: 0.6934 - accuracy: 0.4979 - val_loss: 0.6933 - val_accuracy: 0.4989
Epoch 4/15
15/15 [==============================] - 8s 534ms/step - loss: 0.6934 - accuracy: 0.5027 - val_loss: 0.6932 - val_accuracy: 0.5000
Epoch 5/15
15/15 [==============================] - 8s 537ms/step - loss: 0.6935 - accuracy: 0.5027 - val_loss: 0.6932 - val_accuracy: 0.4978
Epoch 6/15
15/15 [==============================] - 8s 540ms/step - loss: 0.6937 - accuracy: 0.4984 - val_loss: 0.6934 - val_accuracy: 0.5045
Epoch 7/15
15/15 [==============================] - 8s 535ms/step - loss: 0.6932 - accuracy: 0.4979 - val_loss: 0.6931 - val_accuracy: 0.4877
Epoch 8/15
15/15 [==============================] - 8s 545ms/step - loss: 0.6936 - accuracy: 0.4963 - val_loss: 0.6931 - val_accuracy: 0.5033
Epoch 9/15
15/15 [==============================] - 8s 532ms/step - loss: 0.6931 - accuracy: 0.4984 - val_loss: 0.6932 - val_accuracy: 0.4978
Epoch 10/15
15/15 [==============================] - 8s 527ms/step - loss: 0.6936 - accuracy: 0.5069 - val_loss: 0.6932 - val_accuracy: 0.4933
Epoch 11/15
15/15 [==============================] - 8s 531ms/step - loss: 0.6934 - accuracy: 0.5069 - val_loss: 0.6931 - val_accuracy: 0.5022
Epoch 12/15
15/15 [==============================] - 8s 528ms/step - loss: 0.6936 - accuracy: 0.4866 - val_loss: 0.6939 - val_accuracy: 0.5022
Epoch 13/15
15/15 [==============================] - 8s 533ms/step - loss: 0.6938 - accuracy: 0.5150 - val_loss: 0.6939 - val_accuracy: 0.5000
Epoch 14/15
15/15 [==============================] - 8s 536ms/step - loss: 0.6939 - accuracy: 0.4915 - val_loss: 0.6933 - val_accuracy: 0.5011
Epoch 15/15
15/15 [==============================] - 8s 541ms/step - loss: 0.6933 - accuracy: 0.4989 - val_loss: 0.6932 - val_accuracy: 0.5011
Model using learning rate of 0.05
Epoch 1/15
15/15 [==============================] - 8s 551ms/step - loss: 0.6935 - accuracy: 0.5134 - val_loss: 0.6958 - val_accuracy: 0.4955
Epoch 2/15
15/15 [==============================] - 8s 548ms/step - loss: 0.6955 - accuracy: 0.4973 - val_loss: 0.6933 - val_accuracy: 0.5078
Epoch 3/15
15/15 [==============================] - 8s 545ms/step - loss: 0.6931 - accuracy: 0.4909 - val_loss: 0.6931 - val_accuracy: 0.4944
Epoch 4/15
15/15 [==============================] - 8s 538ms/step - loss: 0.6935 - accuracy: 0.4989 - val_loss: 0.6931 - val_accuracy: 0.5045
Epoch 5/15
15/15 [==============================] - 8s 527ms/step - loss: 0.6934 - accuracy: 0.4936 - val_loss: 0.6932 - val_accuracy: 0.5011
Epoch 6/15
15/15 [==============================] - 8s 531ms/step - loss: 0.6933 - accuracy: 0.5176 - val_loss: 0.6935 - val_accuracy: 0.5045
Epoch 7/15
15/15 [==============================] - 8s 556ms/step - loss: 0.6938 - accuracy: 0.4920 - val_loss: 0.6934 - val_accuracy: 0.5000
Epoch 8/15
15/15 [==============================] - 8s 533ms/step - loss: 0.6940 - accuracy: 0.4995 - val_loss: 0.6933 - val_accuracy: 0.5033
Epoch 9/15
15/15 [==============================] - 8s 537ms/step - loss: 0.6933 - accuracy: 0.5036 - val_loss: 0.6934 - val_accuracy: 0.4933
Epoch 10/15
15/15 [==============================] - 9s 573ms/step - loss: 0.6942 - accuracy: 0.4952 - val_loss: 0.6933 - val_accuracy: 0.4944
Epoch 11/15
15/15 [==============================] - 8s 540ms/step - loss: 0.6942 - accuracy: 0.4957 - val_loss: 0.6934 - val_accuracy: 0.5000
Epoch 12/15
15/15 [==============================] - 8s 536ms/step - loss: 0.6930 - accuracy: 0.5166 - val_loss: 0.6935 - val_accuracy: 0.4978
Epoch 13/15
15/15 [==============================] - 8s 543ms/step - loss: 0.6940 - accuracy: 0.4952 - val_loss: 0.6932 - val_accuracy: 0.5022
Epoch 14/15
15/15 [==============================] - 8s 558ms/step - loss: 0.6932 - accuracy: 0.4845 - val_loss: 0.6933 - val_accuracy: 0.4978
Epoch 15/15
15/15 [==============================] - 8s 546ms/step - loss: 0.6940 - accuracy: 0.5139 - val_loss: 0.6937 - val_accuracy: 0.5033
Validation Accuracy Plot -
Program 2: A model trained with different optimizers.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
import os
import numpy as np
import matplotlib.pyplot as plt
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
total_train = len(os.listdir(train_cats_dir)) + len(os.listdir(train_dogs_dir))  # number of training images
total_val = len(os.listdir(validation_cats_dir)) + len(os.listdir(validation_dogs_dir))  # number of validation images
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                            directory=train_dir,
                                                            shuffle=True,
                                                            target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                            class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
                                                               directory=validation_dir,
                                                               target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                               class_mode='binary')
model = Sequential([
    Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(1)
])
optimizer = ['SGD', 'RMSprop', 'Adagrad', 'Adadelta', 'Adam', 'Adamax', 'Nadam']
for i in range(7):
    print("Model using", optimizer[i], "optimizer")
    model.compile(optimizer=optimizer[i],
                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    history = model.fit_generator(
        train_data_gen,
        steps_per_epoch=total_train // batch_size,
        epochs=epochs,
        validation_data=val_data_gen,
        validation_steps=total_val // batch_size)
    plt.plot(history.history['val_accuracy'])
plt.title('Validation Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['SGD', 'RMSprop', 'Adagrad', 'Adadelta', 'Adam', 'Adamax', 'Nadam'], loc='upper left')
plt.show()
Output -
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Model using SGD optimizer
Epoch 1/15
15/15 [==============================] - 8s 550ms/step - loss: 0.6923 - accuracy: 0.5026 - val_loss: 0.6909 - val_accuracy: 0.5033
Epoch 2/15
15/15 [==============================] - 8s 551ms/step - loss: 0.6898 - accuracy: 0.4989 - val_loss: 0.6900 - val_accuracy: 0.5045
Epoch 3/15
15/15 [==============================] - 8s 539ms/step - loss: 0.6893 - accuracy: 0.5005 - val_loss: 0.6889 - val_accuracy: 0.5022
Epoch 4/15
15/15 [==============================] - 8s 546ms/step - loss: 0.6875 - accuracy: 0.4989 - val_loss: 0.6880 - val_accuracy: 0.5056
Epoch 5/15
15/15 [==============================] - 8s 536ms/step - loss: 0.6860 - accuracy: 0.4882 - val_loss: 0.6864 - val_accuracy: 0.4944
Epoch 6/15
15/15 [==============================] - 8s 538ms/step - loss: 0.6868 - accuracy: 0.5048 - val_loss: 0.6846 - val_accuracy: 0.4877
Epoch 7/15
15/15 [==============================] - 8s 536ms/step - loss: 0.6857 - accuracy: 0.5032 - val_loss: 0.6837 - val_accuracy: 0.4866
Epoch 8/15
15/15 [==============================] - 8s 538ms/step - loss: 0.6830 - accuracy: 0.5016 - val_loss: 0.6832 - val_accuracy: 0.4955
Epoch 9/15
15/15 [==============================] - 8s 547ms/step - loss: 0.6819 - accuracy: 0.5107 - val_loss: 0.6816 - val_accuracy: 0.5022
Epoch 10/15
15/15 [==============================] - 8s 543ms/step - loss: 0.6804 - accuracy: 0.4882 - val_loss: 0.6805 - val_accuracy: 0.5033
Epoch 11/15
15/15 [==============================] - 8s 541ms/step - loss: 0.6798 - accuracy: 0.5037 - val_loss: 0.6800 - val_accuracy: 0.4955
Epoch 12/15
15/15 [==============================] - 8s 541ms/step - loss: 0.6796 - accuracy: 0.4941 - val_loss: 0.6791 - val_accuracy: 0.5022
Epoch 13/15
15/15 [==============================] - 8s 536ms/step - loss: 0.6763 - accuracy: 0.5118 - val_loss: 0.6782 - val_accuracy: 0.5056
Epoch 14/15
15/15 [==============================] - 8s 546ms/step - loss: 0.6758 - accuracy: 0.5048 - val_loss: 0.6743 - val_accuracy: 0.4944
Epoch 15/15
15/15 [==============================] - 8s 539ms/step - loss: 0.6715 - accuracy: 0.5064 - val_loss: 0.6767 - val_accuracy: 0.5000
Model using RMSprop optimizer
Epoch 1/15
15/15 [==============================] - 8s 544ms/step - loss: 2.3455 - accuracy: 0.4963 - val_loss: 0.6690 - val_accuracy: 0.4944
Epoch 2/15
15/15 [==============================] - 8s 545ms/step - loss: 0.6912 - accuracy: 0.5358 - val_loss: 0.6596 - val_accuracy: 0.5123
Epoch 3/15
15/15 [==============================] - 8s 545ms/step - loss: 0.6488 - accuracy: 0.5953 - val_loss: 0.6589 - val_accuracy: 0.5234
Epoch 4/15
15/15 [==============================] - 8s 555ms/step - loss: 0.6675 - accuracy: 0.5962 - val_loss: 0.6412 - val_accuracy: 0.5714
Epoch 5/15
15/15 [==============================] - 8s 540ms/step - loss: 0.6165 - accuracy: 0.6330 - val_loss: 0.6365 - val_accuracy: 0.6920
Epoch 6/15
15/15 [==============================] - 8s 542ms/step - loss: 0.6762 - accuracy: 0.6512 - val_loss: 0.6145 - val_accuracy: 0.6440
Epoch 7/15
15/15 [==============================] - 8s 541ms/step - loss: 0.5711 - accuracy: 0.6854 - val_loss: 0.5771 - val_accuracy: 0.6641
Epoch 8/15
15/15 [==============================] - 8s 549ms/step - loss: 0.7130 - accuracy: 0.6571 - val_loss: 0.6068 - val_accuracy: 0.6975
Epoch 9/15
15/15 [==============================] - 8s 550ms/step - loss: 0.4837 - accuracy: 0.7719 - val_loss: 0.5689 - val_accuracy: 0.7042
Epoch 10/15
15/15 [==============================] - 8s 548ms/step - loss: 0.5215 - accuracy: 0.7345 - val_loss: 0.8108 - val_accuracy: 0.6685
Epoch 11/15
15/15 [==============================] - 8s 539ms/step - loss: 0.4842 - accuracy: 0.7548 - val_loss: 0.5851 - val_accuracy: 0.6629
Epoch 12/15
15/15 [==============================] - 8s 540ms/step - loss: 0.4333 - accuracy: 0.7821 - val_loss: 0.5866 - val_accuracy: 0.7065
Epoch 13/15
15/15 [==============================] - 8s 541ms/step - loss: 0.4136 - accuracy: 0.8061 - val_loss: 0.6037 - val_accuracy: 0.7232
Epoch 14/15
15/15 [==============================] - 8s 544ms/step - loss: 0.3493 - accuracy: 0.8456 - val_loss: 0.8027 - val_accuracy: 0.5737
Epoch 15/15
15/15 [==============================] - 8s 541ms/step - loss: 0.3735 - accuracy: 0.8210 - val_loss: 0.6215 - val_accuracy: 0.6317
Model using Adagrad optimizer
Epoch 1/15
15/15 [==============================] - 8s 543ms/step - loss: 0.2212 - accuracy: 0.9017 - val_loss: 0.6342 - val_accuracy: 0.7199
Epoch 2/15
15/15 [==============================] - 8s 534ms/step - loss: 0.1544 - accuracy: 0.9482 - val_loss: 0.6781 - val_accuracy: 0.7087
Epoch 3/15
15/15 [==============================] - 8s 545ms/step - loss: 0.1409 - accuracy: 0.9498 - val_loss: 0.6718 - val_accuracy: 0.7188
Epoch 4/15
15/15 [==============================] - 8s 543ms/step - loss: 0.1133 - accuracy: 0.9610 - val_loss: 0.7026 - val_accuracy: 0.7288
Epoch 5/15
15/15 [==============================] - 8s 551ms/step - loss: 0.1023 - accuracy: 0.9698 - val_loss: 0.6959 - val_accuracy: 0.7243
Epoch 6/15
15/15 [==============================] - 8s 551ms/step - loss: 0.0906 - accuracy: 0.9744 - val_loss: 0.7243 - val_accuracy: 0.7299
Epoch 7/15
15/15 [==============================] - 8s 542ms/step - loss: 0.0821 - accuracy: 0.9813 - val_loss: 0.6867 - val_accuracy: 0.7400
Epoch 8/15
15/15 [==============================] - 8s 534ms/step - loss: 0.0751 - accuracy: 0.9808 - val_loss: 0.7433 - val_accuracy: 0.7388
Epoch 9/15
15/15 [==============================] - 8s 533ms/step - loss: 0.0683 - accuracy: 0.9813 - val_loss: 0.7392 - val_accuracy: 0.7467
Epoch 10/15
15/15 [==============================] - 8s 537ms/step - loss: 0.0584 - accuracy: 0.9877 - val_loss: 0.7992 - val_accuracy: 0.7366
Epoch 11/15
15/15 [==============================] - 8s 535ms/step - loss: 0.0581 - accuracy: 0.9888 - val_loss: 0.8050 - val_accuracy: 0.7355
Epoch 12/15
15/15 [==============================] - 8s 536ms/step - loss: 0.0534 - accuracy: 0.9899 - val_loss: 0.8280 - val_accuracy: 0.7299
Epoch 13/15
15/15 [==============================] - 8s 536ms/step - loss: 0.0455 - accuracy: 0.9920 - val_loss: 0.8068 - val_accuracy: 0.7254
Epoch 14/15
15/15 [==============================] - 8s 540ms/step - loss: 0.0483 - accuracy: 0.9893 - val_loss: 0.8482 - val_accuracy: 0.7411
Epoch 15/15
15/15 [==============================] - 8s 535ms/step - loss: 0.0394 - accuracy: 0.9952 - val_loss: 0.8483 - val_accuracy: 0.7444
Model using Adadelta optimizer
Epoch 1/15
15/15 [==============================] - 8s 541ms/step - loss: 0.0360 - accuracy: 0.9968 - val_loss: 0.8339 - val_accuracy: 0.7500
Epoch 2/15
15/15 [==============================] - 8s 536ms/step - loss: 0.0376 - accuracy: 0.9941 - val_loss: 0.8663 - val_accuracy: 0.7411
Epoch 3/15
15/15 [==============================] - 8s 537ms/step - loss: 0.0380 - accuracy: 0.9947 - val_loss: 0.8333 - val_accuracy: 0.7433
Epoch 4/15
15/15 [==============================] - 8s 536ms/step - loss: 0.0332 - accuracy: 0.9968 - val_loss: 0.8508 - val_accuracy: 0.7455
Epoch 5/15
15/15 [==============================] - 8s 535ms/step - loss: 0.0357 - accuracy: 0.9952 - val_loss: 0.8521 - val_accuracy: 0.7444
Epoch 6/15
15/15 [==============================] - 8s 535ms/step - loss: 0.0364 - accuracy: 0.9952 - val_loss: 0.8440 - val_accuracy: 0.7433
Epoch 7/15
15/15 [==============================] - 8s 539ms/step - loss: 0.0362 - accuracy: 0.9953 - val_loss: 0.8540 - val_accuracy: 0.7388
Epoch 8/15
15/15 [==============================] - 8s 549ms/step - loss: 0.0344 - accuracy: 0.9957 - val_loss: 0.8276 - val_accuracy: 0.7500
Epoch 9/15
15/15 [==============================] - 8s 534ms/step - loss: 0.0364 - accuracy: 0.9952 - val_loss: 0.8934 - val_accuracy: 0.7355
Epoch 10/15
15/15 [==============================] - 8s 542ms/step - loss: 0.0372 - accuracy: 0.9947 - val_loss: 0.8400 - val_accuracy: 0.7422
Epoch 11/15
15/15 [==============================] - 8s 538ms/step - loss: 0.0336 - accuracy: 0.9963 - val_loss: 0.8363 - val_accuracy: 0.7500
Epoch 12/15
15/15 [==============================] - 8s 538ms/step - loss: 0.0361 - accuracy: 0.9952 - val_loss: 0.8305 - val_accuracy: 0.7533
Epoch 13/15
15/15 [==============================] - 8s 534ms/step - loss: 0.0341 - accuracy: 0.9963 - val_loss: 0.8525 - val_accuracy: 0.7433
...
Model using Adam optimizer
Epoch 1/15
15/15 [==============================] - 8s 552ms/step - loss: 0.3095 - accuracy: 0.8985 - val_loss: 0.6640 - val_accuracy: 0.7065
...
Model using Adamax optimizer
Epoch 1/15
15/15 [==============================] - 8s 540ms/step - loss: 0.0850 - accuracy: 0.9760 - val_loss: 1.1438 - val_accuracy: 0.7254
....
Model using Nadam optimizer
Epoch 1/15
15/15 [==============================] - 8s 544ms/step - loss: 0.2377 - accuracy: 0.9546 - val_loss: 1.0978 - val_accuracy: 0.6987
....
Validation Accuracy Plot -
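One caveat worth noting (my addition, not part of the original answer): both programs reuse the same model object across loop iterations, so every new learning rate or optimizer continues training from the weights left by the previous run; that is why, for example, the Adadelta run above already starts near 99% training accuracy. If you want every curve to start from freshly initialized weights, one option is to wrap the model definition in a small helper and rebuild it inside the loop, roughly as sketched below (build_model is a hypothetical helper reusing the layer stack, generators, and size variables defined in the programs above).
def build_model():
    # Rebuilds the same Sequential network so each run starts from fresh weights.
    return Sequential([
        Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)),
        MaxPooling2D(),
        Conv2D(32, 3, padding='same', activation='relu'),
        MaxPooling2D(),
        Conv2D(64, 3, padding='same', activation='relu'),
        MaxPooling2D(),
        Flatten(),
        Dense(512, activation='relu'),
        Dense(1)
    ])

for opt in ['SGD', 'RMSprop', 'Adam']:
    model = build_model()  # fresh, untrained weights for every optimizer
    model.compile(optimizer=opt,
                  loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    history = model.fit_generator(train_data_gen,
                                  steps_per_epoch=total_train // batch_size,
                                  epochs=epochs,
                                  validation_data=val_data_gen,
                                  validation_steps=total_val // batch_size)
    plt.plot(history.history['val_accuracy'], label=opt)
plt.legend(loc='upper left')
plt.show()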
Related
Validation accuracy doesn't improve at all from the beginning
Why is my training loss and validation loss decreasing but training accuracy and validation accuracy not increasing at all?
Weird model summary
BinaryCrossentropy: the loss function does not converge, but rather stagnates at 0.693
Accuracy remains constant after every epoch