I am trying to classify COVID severity from chest X-rays using 426 256x256 X-ray images across 4 classes. However, the validation accuracy doesn't improve at all, and the validation loss barely decreases from the start.
This is the model I am using:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Dropout, Flatten
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras import regularizers

model = Sequential()
model.add(Conv2D(filters=64, kernel_size=(4, 4), input_shape=image_shape, activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
# input_shape is only needed on the first layer, so it is omitted here
model.add(Conv2D(filters=128, kernel_size=(6, 6), activation="relu"))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(64, activation="relu"))
model.add(Dense(16, activation="relu"))
model.add(Dense(4, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
This is how I fit the model, and the output I get:
epochs = 20
batch_size = 8

model.fit(X_train, y_train, validation_data=(X_test, y_test),
          epochs=epochs,
          batch_size=batch_size)
Epoch 1/20
27/27 [==============================] - 4s 143ms/step - loss: 0.1776 - accuracy: 0.9528 - val_loss: 3.7355 - val_accuracy: 0.2717
Epoch 2/20
27/27 [==============================] - 4s 142ms/step - loss: 0.1152 - accuracy: 0.9481 - val_loss: 4.0038 - val_accuracy: 0.2283
Epoch 3/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0875 - accuracy: 0.9858 - val_loss: 4.1756 - val_accuracy: 0.2391
Epoch 4/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0521 - accuracy: 0.9906 - val_loss: 4.1034 - val_accuracy: 0.2717
Epoch 5/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0496 - accuracy: 0.9858 - val_loss: 4.8433 - val_accuracy: 0.3152
Epoch 6/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0170 - accuracy: 0.9953 - val_loss: 5.6027 - val_accuracy: 0.3043
Epoch 7/20
27/27 [==============================] - 4s 142ms/step - loss: 0.2307 - accuracy: 0.9245 - val_loss: 4.2759 - val_accuracy: 0.3152
Epoch 8/20
27/27 [==============================] - 4s 142ms/step - loss: 0.6493 - accuracy: 0.7830 - val_loss: 3.8390 - val_accuracy: 0.3478
Epoch 9/20
27/27 [==============================] - 4s 142ms/step - loss: 0.2563 - accuracy: 0.9009 - val_loss: 5.0250 - val_accuracy: 0.2500
Epoch 10/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0286 - accuracy: 1.0000 - val_loss: 4.6475 - val_accuracy: 0.2391
Epoch 11/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0097 - accuracy: 1.0000 - val_loss: 5.2198 - val_accuracy: 0.2391
Epoch 12/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0037 - accuracy: 1.0000 - val_loss: 5.7914 - val_accuracy: 0.2500
Epoch 13/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0048 - accuracy: 1.0000 - val_loss: 5.4341 - val_accuracy: 0.2391
Epoch 14/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0044 - accuracy: 1.0000 - val_loss: 5.6364 - val_accuracy: 0.2391
Epoch 15/20
27/27 [==============================] - 4s 143ms/step - loss: 0.0019 - accuracy: 1.0000 - val_loss: 5.8504 - val_accuracy: 0.2391
Epoch 16/20
27/27 [==============================] - 4s 143ms/step - loss: 0.0013 - accuracy: 1.0000 - val_loss: 5.9604 - val_accuracy: 0.2500
Epoch 17/20
27/27 [==============================] - 4s 149ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 6.0851 - val_accuracy: 0.2717
Epoch 18/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0134 - accuracy: 0.9953 - val_loss: 4.9783 - val_accuracy: 0.2717
Epoch 19/20
27/27 [==============================] - 4s 141ms/step - loss: 0.0068 - accuracy: 1.0000 - val_loss: 5.7421 - val_accuracy: 0.2500
Epoch 20/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0024 - accuracy: 1.0000 - val_loss: 5.8480 - val_accuracy: 0.2283
Any tips on how I can solve this, or am I doing something wrong?
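With only 426 images and this train/validation gap, the model is memorizing the training set. For reference, one commonly suggested direction (not part of the question's code) is on-the-fly data augmentation; a minimal sketch follows, assuming the same X_train, y_train, X_test, y_test arrays used above, with illustrative (untuned) transform ranges:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical augmentation setup for the 256x256 X-ray arrays above;
# the transform ranges are illustrative, not tuned.
augmenter = ImageDataGenerator(
    rotation_range=10,       # small rotations only: X-rays have a canonical orientation
    width_shift_range=0.05,
    height_shift_range=0.05,
    zoom_range=0.1,
)

model.fit(
    augmenter.flow(X_train, y_train, batch_size=8),
    validation_data=(X_test, y_test),
    epochs=20,
)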
I'm working on a problem of classifying two activities: getting into a car and getting out of it.
I also need to classify loading and unloading activity near the car.
I need advice on how to fix the model overfitting on the test dataset.
I am using a CNN + LSTM architecture. In the attachment I've provided samples of the dataset.
I have around 15,000 images for each class.
Dataset example: [sample frames of the "go in" and "go out" classes, omitted here]
Now to the code.
First I load my dataset using Keras:
batch_size = 128
batch_size_train = 148

def bring_data_from_directory():
    datagen = ImageDataGenerator(rescale=1./255)
    train_generator = datagen.flow_from_directory(
        'train',
        target_size=(224, 224),
        batch_size=batch_size,
        class_mode='categorical',  # yields one-hot encoded labels alongside the images
        shuffle=True,
        classes=['get_on', 'get_off', 'load', 'unload'])
    validation_generator = datagen.flow_from_directory(
        'validate',
        target_size=(224, 224),
        batch_size=batch_size,
        class_mode='categorical',  # yields one-hot encoded labels alongside the images
        shuffle=True,
        classes=['get_on', 'get_off', 'load', 'unload'])
    return train_generator, validation_generator
I use the VGG16 network to extract features and store them in .npy format:
def load_VGG16_model():
    base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    print("Model loaded..!")
    print(base_model.summary())
    return base_model
def extract_features_and_store(train_generator, validation_generator, base_model):
    x_generator = None
    y_label = None
    batch = 0
    # 56021 is the total number of training images
    for x, y in train_generator:
        if batch == int(56021/batch_size):
            break
        print("Total needed:", int(56021/batch_size))
        print("predict on batch:", batch)
        batch += 1
        if x_generator is None:
            x_generator = base_model.predict_on_batch(x)
            y_label = y
            print(y)
        else:
            x_generator = np.append(x_generator, base_model.predict_on_batch(x), axis=0)
            y_label = np.append(y_label, y, axis=0)
            print(y)
    x_generator, y_label = shuffle(x_generator, y_label)
    np.save(open('video_x_VGG16.npy', 'wb'), x_generator)
    np.save(open('video_y_VGG16.npy', 'wb'), y_label)

    batch = 0
    x_generator = None
    y_label = None
    # 3971 is the total number of validation images
    for x, y in validation_generator:
        if batch == int(3971/batch_size):
            break
        print("Total needed:", int(3971/batch_size))
        print("predict on batch validate:", batch)
        batch += 1
        if x_generator is None:
            x_generator = base_model.predict_on_batch(x)
            y_label = y
            print(y)
        else:
            x_generator = np.append(x_generator, base_model.predict_on_batch(x), axis=0)
            y_label = np.append(y_label, y, axis=0)
            print(y)
    x_generator, y_label = shuffle(x_generator, y_label)
    np.save(open('video_x_validate_VGG16.npy', 'wb'), x_generator)
    np.save(open('video_y_validate_VGG16.npy', 'wb'), y_label)

    train_data = np.load(open('video_x_VGG16.npy', 'rb'))
    train_labels = np.load(open('video_y_VGG16.npy', 'rb'))
    train_data, train_labels = shuffle(train_data, train_labels)
    validation_data = np.load(open('video_x_validate_VGG16.npy', 'rb'))
    validation_labels = np.load(open('video_y_validate_VGG16.npy', 'rb'))
    validation_data, validation_labels = shuffle(validation_data, validation_labels)

    # Flatten the 7x7 spatial grid of VGG16 features into a 49-step sequence
    train_data = train_data.reshape(train_data.shape[0],
                                    train_data.shape[1] * train_data.shape[2],
                                    train_data.shape[3])
    validation_data = validation_data.reshape(validation_data.shape[0],
                                              validation_data.shape[1] * validation_data.shape[2],
                                              validation_data.shape[3])
    return train_data, train_labels, validation_data, validation_labels
Model
def train_model(train_data, train_labels, validation_data, validation_labels):
    print("SHAPE OF DATA : {}".format(train_data.shape))
    model = Sequential()
    model.add(LSTM(2048, stateful=True, activation='relu', dropout=0.2, return_sequences=True,
                   kernel_regularizer=l2(1e-7), activity_regularizer=l2(1e-7),
                   kernel_initializer='glorot_uniform', bias_initializer='zeros',
                   batch_input_shape=(batch_size_train, train_data.shape[1], train_data.shape[2])))
    model.add(LSTM(1024, stateful=True, activation='relu', dropout=0.2, return_sequences=True,
                   kernel_regularizer=l2(1e-7), activity_regularizer=l2(1e-7),
                   kernel_initializer='glorot_uniform', bias_initializer='zeros'))
    model.add(LSTM(512, stateful=True, activation='relu', dropout=0.2, return_sequences=True,
                   kernel_regularizer=l2(1e-7), activity_regularizer=l2(1e-7),
                   kernel_initializer='glorot_uniform', bias_initializer='zeros'))
    model.add(LSTM(128, stateful=True, activation='relu', dropout=0.2,
                   kernel_regularizer=l2(1e-7), activity_regularizer=l2(1e-7),
                   kernel_initializer='glorot_uniform', bias_initializer='zeros'))
    model.add(Dense(1024, activation='relu', kernel_regularizer=l2(0.01), activity_regularizer=l2(0.01),
                    kernel_initializer='random_uniform', bias_initializer='zeros'))
    model.add(Dropout(0.2))
    model.add(Dense(4, activation='softmax', kernel_initializer='random_uniform', bias_initializer='zeros'))
    adam = Adam(lr=0.00005, decay=1e-6, clipnorm=1.0, clipvalue=0.5)
    model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy'])
    callbacks = [EarlyStopping(monitor='val_loss', patience=10, verbose=0),
                 ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0),
                 ModelCheckpoint('video_1_LSTM_1_1024.h5', monitor='val_loss', save_best_only=True, verbose=0)]
    model.fit(train_data, train_labels, validation_data=(validation_data, validation_labels),
              batch_size=batch_size_train, epochs=500,  # `nb_epoch` is deprecated; use `epochs`
              callbacks=callbacks, shuffle=True, verbose=1)
    return model
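For completeness, a sketch of how these pieces might be wired together end to end (this wiring is inferred, not shown in the original post):

# Hypothetical end-to-end wiring of the functions above
train_gen, val_gen = bring_data_from_directory()
base_model = load_VGG16_model()
train_data, train_labels, val_data, val_labels = extract_features_and_store(train_gen, val_gen, base_model)
model = train_model(train_data, train_labels, val_data, val_labels)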
LOGS
Train on 55796 samples, validate on 3552 samples
Epoch 1/500
55796/55796 [==============================] - 209s 4ms/step - loss: 2.0079 - acc: 0.4518 - val_loss: 1.6785 - val_acc: 0.6166
Epoch 2/500
55796/55796 [==============================] - 205s 4ms/step - loss: 1.3974 - acc: 0.8347 - val_loss: 1.3561 - val_acc: 0.6740
Epoch 3/500
55796/55796 [==============================] - 205s 4ms/step - loss: 1.1181 - acc: 0.8628 - val_loss: 1.1961 - val_acc: 0.7311
Epoch 4/500
55796/55796 [==============================] - 205s 4ms/step - loss: 0.9644 - acc: 0.8689 - val_loss: 1.1276 - val_acc: 0.7218
Epoch 5/500
55796/55796 [==============================] - 204s 4ms/step - loss: 0.8681 - acc: 0.8703 - val_loss: 1.0483 - val_acc: 0.7435
Epoch 6/500
55796/55796 [==============================] - 204s 4ms/step - loss: 0.7944 - acc: 0.8717 - val_loss: 0.9755 - val_acc: 0.7641
Epoch 7/500
55796/55796 [==============================] - 204s 4ms/step - loss: 0.7296 - acc: 0.9245 - val_loss: 0.9444 - val_acc: 0.8260
Epoch 8/500
55796/55796 [==============================] - 204s 4ms/step - loss: 0.6670 - acc: 0.9866 - val_loss: 0.8486 - val_acc: 0.8426
Epoch 9/500
55796/55796 [==============================] - 204s 4ms/step - loss: 0.6121 - acc: 0.9943 - val_loss: 0.8455 - val_acc: 0.8708
Epoch 10/500
55796/55796 [==============================] - 205s 4ms/step - loss: 0.5634 - acc: 0.9964 - val_loss: 0.8335 - val_acc: 0.8553
Epoch 11/500
55796/55796 [==============================] - 205s 4ms/step - loss: 0.5216 - acc: 0.9973 - val_loss: 0.9688 - val_acc: 0.7838
Epoch 12/500
55796/55796 [==============================] - 204s 4ms/step - loss: 0.4841 - acc: 0.9986 - val_loss: 0.8166 - val_acc: 0.8133
Epoch 13/500
55796/55796 [==============================] - 205s 4ms/step - loss: 0.4522 - acc: 0.9984 - val_loss: 0.8399 - val_acc: 0.8184
Epoch 14/500
55796/55796 [==============================] - 205s 4ms/step - loss: 0.4234 - acc: 0.9987 - val_loss: 0.7864 - val_acc: 0.8072
Epoch 15/500
55796/55796 [==============================] - 205s 4ms/step - loss: 0.3977 - acc: 0.9990 - val_loss: 0.7306 - val_acc: 0.8446
Epoch 16/500
55796/55796 [==============================] - 205s 4ms/step - loss: 0.3750 - acc: 0.9990 - val_loss: 0.7644 - val_acc: 0.8514
Epoch 17/500
55796/55796 [==============================] - 205s 4ms/step - loss: 0.3546 - acc: 0.9989 - val_loss: 0.7542 - val_acc: 0.7908
Epoch 18/500
55796/55796 [==============================] - 204s 4ms/step - loss: 0.3345 - acc: 0.9994 - val_loss: 0.7150 - val_acc: 0.8314
Epoch 19/500
55796/55796 [==============================] - 205s 4ms/step - loss: 0.3170 - acc: 0.9993 - val_loss: 0.8910 - val_acc: 0.7798
Epoch 20/500
55796/55796 [==============================] - 204s 4ms/step - loss: 0.3017 - acc: 0.9992 - val_loss: 0.6143 - val_acc: 0.8809
Epoch 21/500
55796/55796 [==============================] - 204s 4ms/step - loss: 0.2861 - acc: 0.9995 - val_loss: 0.7907 - val_acc: 0.8156
Epoch 22/500
55796/55796 [==============================] - 205s 4ms/step - loss: 0.2719 - acc: 0.9996 - val_loss: 0.7077 - val_acc: 0.8401
Epoch 23/500
55796/55796 [==============================] - 206s 4ms/step - loss: 0.2593 - acc: 0.9995 - val_loss: 0.6482 - val_acc: 0.8133
Epoch 24/500
55796/55796 [==============================] - 204s 4ms/step - loss: 0.2474 - acc: 0.9995 - val_loss: 0.7671 - val_acc: 0.7942
The problem is that the model appears to start overfitting, and on the testing dataset it makes significant detection errors. As far as I can see, the model can't tell the difference between these two actions, or maybe it is a sequence problem.
As you see, I've already tried regularization, gradient clipping, and so on, with no result.
Please advise how to fix this problem.
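One note on the "sequence problem" hypothesis: the reshape in extract_features_and_store turns the 7x7 spatial grid of a single frame's VGG16 features into the LSTM's time axis (49 steps), so the model never sees temporal order across frames. Below is a hedged sketch of an alternative, assuming each frame's 7x7x512 output has first been pooled to a single 512-dim vector (e.g. by global average pooling) and frames are stored in temporal order; SEQ_LEN and frames_to_sequences are hypothetical names, not from the original code:

import numpy as np

SEQ_LEN = 8  # hypothetical number of consecutive frames per training sample

def frames_to_sequences(frame_features, frame_labels, seq_len=SEQ_LEN):
    # Group per-frame feature vectors into (seq_len, feature_dim) windows.
    # Assumes frame_features has shape (num_frames, feature_dim), frames are
    # in temporal order, and each window takes the label of its last frame.
    n = (frame_features.shape[0] // seq_len) * seq_len
    x = frame_features[:n].reshape(-1, seq_len, frame_features.shape[1])
    y = frame_labels[:n].reshape(-1, seq_len, frame_labels.shape[1])[:, -1, :]
    return x, y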
I implemented a neural network with Keras to predict the rating of an item. I consider each rating as a class, so this is my code (outputY is categorical):
from keras.models import Model
from keras.layers import Input, Dense, concatenate
from keras.optimizers import Adam

inputLayerU = Input(shape=(features,))
inputLayerM = Input(shape=(features,))

# the same Dense layer is shared between both inputs
dense1 = Dense(features, activation='relu')
denseU = dense1(inputLayerU)
denseM = dense1(inputLayerM)

concatLayer = concatenate([denseU, denseM], axis=1)
denseLayer = Dense(features*2, activation='relu')(concatLayer)
outputLayer = Dense(5, activation='softmax')(denseLayer)

model = Model(inputs=[inputLayerU, inputLayerM], outputs=outputLayer)
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.01), metrics=['accuracy'])
model.fit([inputU, inputM], outputY, epochs=10, steps_per_epoch=10)
When I train this network, I get the following result, which is fine:
10/10 [==============================] - 2s 187ms/step - loss: 1.4778 - acc: 0.3209
Epoch 2/10
10/10 [==============================] - 0s 49ms/step - loss: 1.4058 - acc: 0.3625
Epoch 3/10
10/10 [==============================] - 1s 54ms/step - loss: 1.3825 - acc: 0.3824
Epoch 4/10
10/10 [==============================] - 0s 47ms/step - loss: 1.3614 - acc: 0.3923
Epoch 5/10
10/10 [==============================] - 0s 48ms/step - loss: 1.3372 - acc: 0.4060
Epoch 6/10
10/10 [==============================] - 0s 45ms/step - loss: 1.3138 - acc: 0.4202
Epoch 7/10
10/10 [==============================] - 0s 46ms/step - loss: 1.2976 - acc: 0.4266
Epoch 8/10
10/10 [==============================] - 0s 48ms/step - loss: 1.2842 - acc: 0.4325
Epoch 9/10
10/10 [==============================] - 1s 62ms/step - loss: 1.2729 - acc: 0.4402
Epoch 10/10
10/10 [==============================] - 1s 54ms/step - loss: 1.2631 - acc: 0.4464
Then I treat the problem as regression and try to predict the value of the user ratings directly (I need to calculate the error both ways). This is my code:
inputLayerU = Input(shape=(features,))
inputLayerM = Input(shape=(features,))
dense1 = Dense(features, activation='relu')
denseU = dense1(inputLayerU)
denseM = dense1(inputLayerM)
concatLayer = concatenate([denseU, denseM], axis = 1)
denseLayer = Dense(features*2, activation='relu')(concatLayer)
outputLayer = Dense(1, activation='softmax')(denseLayer)
model = Model(inputs=[inputLayerU, inputLayerM], outputs=outputLayer)
model.compile(loss='mean_squared_error', optimizer=Adam(lr=0.01), metrics=['accuracy'])
model.fit([inputU, inputM],outputY , epochs=10, steps_per_epoch=10)
and I get this results:
Epoch 1/10
10/10 [==============================] - 9s 894ms/step - loss: 7.9451 - acc: 0.0563
Epoch 2/10
10/10 [==============================] - 7s 711ms/step - loss: 7.9447 - acc: 0.0563
Epoch 3/10
10/10 [==============================] - 7s 709ms/step - loss: 7.9446 - acc: 0.0563
Epoch 4/10
10/10 [==============================] - 7s 710ms/step - loss: 7.9446 - acc: 0.0563
Epoch 5/10
10/10 [==============================] - 7s 702ms/step - loss: 7.9446 - acc: 0.0563
Epoch 6/10
10/10 [==============================] - 7s 706ms/step - loss: 7.9446 - acc: 0.0563
Epoch 7/10
10/10 [==============================] - 7s 701ms/step - loss: 7.9446 - acc: 0.0563
Epoch 8/10
10/10 [==============================] - 7s 702ms/step - loss: 7.9446 - acc: 0.0563
Epoch 9/10
10/10 [==============================] - 7s 717ms/step - loss: 7.9446 - acc: 0.0563
Epoch 10/10
10/10 [==============================] - 7s 700ms/step - loss: 7.9446 - acc: 0.0563
As you can see, the loss decreases only a little, and sometimes it doesn't change at all.
So what's wrong with my regression?
First, I'm not sure we should be applying a softmax activation to a regression problem: softmax over a single output unit always returns 1.0, so the model cannot learn anything. Second, try using the Adam optimizer with its default parameters.
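A minimal sketch of that suggestion, assuming the same features, inputU, and inputM as in the question, and assuming outputY holds the raw numeric ratings rather than one-hot classes:

# Regression head: linear activation instead of softmax, Adam with default parameters.
# (Softmax over a single unit always outputs 1.0, which is why the loss was stuck.)
inputLayerU = Input(shape=(features,))
inputLayerM = Input(shape=(features,))
dense1 = Dense(features, activation='relu')
denseU = dense1(inputLayerU)
denseM = dense1(inputLayerM)
concatLayer = concatenate([denseU, denseM], axis=1)
denseLayer = Dense(features*2, activation='relu')(concatLayer)
outputLayer = Dense(1, activation='linear')(denseLayer)  # linear output for a raw rating value

model = Model(inputs=[inputLayerU, inputLayerM], outputs=outputLayer)
model.compile(loss='mean_squared_error', optimizer='adam')  # default Adam; accuracy is not meaningful here
model.fit([inputU, inputM], outputY, epochs=10, steps_per_epoch=10)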
I have a 3D CNN U-Net architecture for a segmentation problem. I am using the Adam optimizer with binary cross-entropy, and the metric is "accuracy". I am trying to understand why it does not improve.
Train on 2774 samples, validate on 694 samples
Epoch 1/20
2774/2774 [==============================] - 166s 60ms/step - loss: 0.5189 - acc: 0.7928 - val_loss: 0.5456 - val_acc: 0.7674
Epoch 00001: val_loss improved from inf to 0.54555, saving model to model-tgs-salt-1.h5
Epoch 2/20
2774/2774 [==============================] - 170s 61ms/step - loss: 0.5170 - acc: 0.7928 - val_loss: 0.5485 - val_acc: 0.7674
Epoch 00002: val_loss did not improve from 0.54555
Epoch 3/20
2774/2774 [==============================] - 169s 61ms/step - loss: 0.5119 - acc: 0.7928 - val_loss: 0.5455 - val_acc: 0.7674
Epoch 00003: val_loss improved from 0.54555 to 0.54549, saving model to model-tgs-salt-1.h5
Epoch 4/20
2774/2774 [==============================] - 170s 61ms/step - loss: 0.5117 - acc: 0.7928 - val_loss: 0.5715 - val_acc: 0.7674
Epoch 00004: val_loss did not improve from 0.54549
Epoch 5/20
2774/2774 [==============================] - 169s 61ms/step - loss: 0.5126 - acc: 0.7928 - val_loss: 0.5566 - val_acc: 0.7674
Epoch 00005: val_loss did not improve from 0.54549
Epoch 6/20
2774/2774 [==============================] - 169s 61ms/step - loss: 0.5138 - acc: 0.7928 - val_loss: 0.5503 - val_acc: 0.7674
Epoch 00006: val_loss did not improve from 0.54549
Epoch 7/20
2774/2774 [==============================] - 170s 61ms/step - loss: 0.5103 - acc: 0.7928 - val_loss: 0.5444 - val_acc: 0.7674
Epoch 00007: val_loss improved from 0.54549 to 0.54436, saving model to model-tgs-salt-1.h5
Epoch 8/20
2774/2774 [==============================] - 169s 61ms/step - loss: 0.5137 - acc: 0.7928 - val_loss: 0.5454 - val_acc: 0.7674
If you are using a small batch size in your network, try increasing it. I think it could help with training speed.
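A sketch of that suggestion; the fit call and variable names here are hypothetical, since the question's training code isn't shown:

# Hypothetical fit call with a larger batch size (e.g. 32 instead of 8).
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          batch_size=32,  # fewer weight updates per epoch, faster wall-clock training
          epochs=20)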
I am currently working on a digit recognition challenge by Analytics Vidhya, the link to which is https://datahack.analyticsvidhya.com/contest/practice-problem-identify-the-digits/ .
The images in this challenge's dataset have dimensions 28*28*4 (28 = height = width, 4 = number of channels). The code I have implemented is:
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Activation
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
K.set_image_dim_ordering('th')  # channels-first, to match input_shape=(4, 28, 28)
from os import listdir
from skimage import io
import numpy as np

# fix random seed for reproducibility
seed = 7
np.random.seed(seed)

# define the larger model
def larger_model():
    # create model
    model = Sequential()
    model.add(Conv2D(32, (3, 3), input_shape=(4, 28, 28), activation='relu', padding='same'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Conv2D(15, (3, 3), activation='relu', padding='same'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(200, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

def loadImages(path):
    # return array of images
    imagesList = listdir(path)
    loadedImages = []
    for image in imagesList:
        img = io.imread(path + "/" + image, as_grey=False)
        loadedImages.append(np.array(img))
    return loadedImages

path = "C:/Users/Farz Jamal/Downloads/mnist/Train/Images/train"  # path_to_train_dataset
imgs = loadImages(path)

import pandas as pd
df = pd.read_csv("C:/Users/Farz Jamal/Downloads/mnist/Train/train.csv")  # path_to_class_labels
y = np.array(df['label'])

# sklearn.cross_validation is deprecated; use sklearn.model_selection instead
from sklearn.model_selection import train_test_split as ttt
x_train, x_val, y_train, y_val = ttt(imgs, y, test_size=0.2)
Continued code:
x_vall, x_test, y_vall, y_test = ttt(x_val, y_val, test_size=0.4)
x_train, x_vall, x_test = (np.array(x_train).astype('float32'),
                           np.array(x_vall).astype('float32'),
                           np.array(x_test).astype('float32'))
# normalize inputs from 0-255 to 0-1
x_train = x_train / 255.0
x_vall = x_vall / 255.0
x_test = x_test / 255.0
# one-hot encode the labels
y_train = np_utils.to_categorical(y_train)
y_vall = np_utils.to_categorical(y_vall)
y_test = np_utils.to_categorical(y_test)
num_classes = y_vall.shape[1]  # 10

# fitting and evaluating
model = larger_model()
# Fit the model
model.fit(x_train, y_train, validation_data=(x_vall, y_vall), epochs=50, batch_size=200)
# Final evaluation of the model
scores = model.evaluate(x_test, y_test, verbose=0)
The output is as follows (from the 16th epoch to the 37th epoch):
Epoch 16/50
39200/39200 [==============================] - 271s 7ms/step - loss: 2.3013 - acc: 0.1135 - val_loss: 2.3015 - val_acc: 0.1095
Epoch 17/50
39200/39200 [==============================] - 275s 7ms/step - loss: 2.3011 - acc: 0.1128 - val_loss: 2.3014 - val_acc: 0.1095
Epoch 18/50
39200/39200 [==============================] - 270s 7ms/step - loss: 2.3011 - acc: 0.1124 - val_loss: 2.3015 - val_acc: 0.1095
Epoch 19/50
39200/39200 [==============================] - 273s 7ms/step - loss: 2.3012 - acc: 0.1131 - val_loss: 2.3017 - val_acc: 0.1095
Epoch 20/50
39200/39200 [==============================] - 273s 7ms/step - loss: 2.3011 - acc: 0.1130 - val_loss: 2.3018 - val_acc: 0.1111
Epoch 21/50
39200/39200 [==============================] - 272s 7ms/step - loss: 2.3010 - acc: 0.1127 - val_loss: 2.3013 - val_acc: 0.1095
Epoch 22/50
39200/39200 [==============================] - 281s 7ms/step - loss: 2.3006 - acc: 0.1133 - val_loss: 2.3015 - val_acc: 0.1097
Epoch 23/50
39200/39200 [==============================] - 273s 7ms/step - loss: 2.3005 - acc: 0.1136 - val_loss: 2.3018 - val_acc: 0.1099
Epoch 24/50
39200/39200 [==============================] - 276s 7ms/step - loss: 2.3005 - acc: 0.1135 - val_loss: 2.3022 - val_acc: 0.1116
Epoch 25/50
39200/39200 [==============================] - 271s 7ms/step - loss: 2.2998 - acc: 0.1155 - val_loss: 2.3025 - val_acc: 0.1071
Epoch 26/50
39200/39200 [==============================] - 271s 7ms/step - loss: 2.2996 - acc: 0.1156 - val_loss: 2.3021 - val_acc: 0.1100
Epoch 27/50
39200/39200 [==============================] - 272s 7ms/step - loss: 2.2981 - acc: 0.1168 - val_loss: 2.3024 - val_acc: 0.1078
Epoch 28/50
39200/39200 [==============================] - 270s 7ms/step - loss: 2.2970 - acc: 0.1187 - val_loss: 2.3035 - val_acc: 0.1065
Epoch 29/50
39200/39200 [==============================] - 271s 7ms/step - loss: 2.2945 - acc: 0.1218 - val_loss: 2.3061 - val_acc: 0.1041
Epoch 30/50
39200/39200 [==============================] - 270s 7ms/step - loss: 2.2935 - acc: 0.1223 - val_loss: 2.3059 - val_acc: 0.1003
Epoch 31/50
39200/39200 [==============================] - 274s 7ms/step - loss: 2.2906 - acc: 0.1268 - val_loss: 2.3067 - val_acc: 0.1014
Epoch 32/50
39200/39200 [==============================] - 276s 7ms/step - loss: 2.2873 - acc: 0.1278 - val_loss: 2.3078 - val_acc: 0.1073
Epoch 33/50
39200/39200 [==============================] - 292s 7ms/step - loss: 2.2806 - acc: 0.1368 - val_loss: 2.3118 - val_acc: 0.1034
Epoch 34/50
39200/39200 [==============================] - 301s 8ms/step - loss: 2.2744 - acc: 0.1404 - val_loss: 2.3160 - val_acc: 0.1022
Epoch 35/50
39200/39200 [==============================] - 289s 7ms/step - loss: 2.2662 - acc: 0.1486 - val_loss: 2.3172 - val_acc: 0.1029
Epoch 36/50
39200/39200 [==============================] - 295s 8ms/step - loss: 2.2557 - acc: 0.1543 - val_loss: 2.3162 - val_acc: 0.1087
Epoch 37/50
39200/39200 [==============================] - 308s 8ms/step - loss: 2.2459 - acc: 0.1632 - val_loss: 2.3275 - val_acc: 0.1083
As can be seen, both training and validation accuracy are very low.
I have tried reducing the dropout (previously it was 0.5 for one of the layers), but with no effect. I doubled the neurons in the last hidden layer (previously there were 100), still with no effect. It seems like it has something to do with the preprocessing of the images or the input parameters for the images.
What can be done?
Copied in from comments as the answer:
In fact, your model isn't learning anything, which usually points to a bug; I don't see anything overtly wrong in the code itself. A common error is accidentally feeding garbage to the network. Before your fit step, take the first few images you're feeding in, display them, and print out their labels to make sure they match. Do a sanity check on your inputs.
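A minimal sketch of such a sanity check, assuming the x_train and y_train arrays from the question (the matplotlib import is an addition):

import matplotlib.pyplot as plt
import numpy as np

# Display the first few inputs with their labels before fitting.
# Assumes x_train has shape (N, 28, 28, 4), as loaded by loadImages above.
for i in range(5):
    plt.imshow(x_train[i][..., :3])  # show the first 3 of the 4 channels as RGB
    plt.title("label: {}".format(np.argmax(y_train[i])))
    plt.show()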