I have created a model to classify images of planes and cars, but after every epoch the acc and val_acc remain the same.
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
import os
model=Sequential()
model.add(Convolution2D(32,(3,3),input_shape=(64,64,3),activation="relu"))
model.add(MaxPooling2D(2,2))
model.add(Convolution2D(64,(3,3),activation="relu"))
model.add(MaxPooling2D(2,2))
model.add(Convolution2D(64,(3,3),activation="sigmoid"))
model.add(MaxPooling2D(2,2))
model.add(Flatten())
model.add(Dense(32,activation="sigmoid"))
model.add(Dense(32,activation="sigmoid"))
model.add(Dense(32,activation="sigmoid"))
model.add(Dense(1,activation="softmax"))
model.compile(optimizer='adam', loss='binary_crossentropy',
metrics=['accuracy'])
# Augment the training images; only rescale the validation images
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
train_set = train_datagen.flow_from_directory(
    'train_images',
    target_size=(64,64),
    batch_size=32,
    class_mode='binary')
test_set = test_datagen.flow_from_directory(
    'val_set',
    target_size=(64,64),
    batch_size=32,
    class_mode='binary')
model.fit_generator(
    train_set,
    steps_per_epoch=160,
    epochs=25,
    validation_data=test_set,
    validation_steps=40)
Epoch 1/25
30/30 [==============================] - 18s 593ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 2/25
30/30 [==============================] - 15s 491ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 3/25
30/30 [==============================] - 19s 640ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 4/25
30/30 [==============================] - 14s 474ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 5/25
30/30 [==============================] - 16s 532ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 6/25
30/30 [==============================] - 14s 473ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 7/25
30/30 [==============================] - 14s 469ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 8/25
30/30 [==============================] - 14s 469ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 9/25
30/30 [==============================] - 14s 472ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 10/25
30/30 [==============================] - 16s 537ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 11/25
30/30 [==============================] - 18s 590ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 12/25
30/30 [==============================] - 13s 441ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 13/25
30/30 [==============================] - 11s 374ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 14/25
30/30 [==============================] - 11s 370ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 15/25
30/30 [==============================] - 13s 441ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 16/25
30/30 [==============================] - 13s 419ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 17/25
30/30 [==============================] - 12s 401ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 18/25
30/30 [==============================] - 16s 536ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 19/25
30/30 [==============================] - 16s 523ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 20/25
30/30 [==============================] - 16s 530ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 21/25
30/30 [==============================] - 16s 546ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 22/25
30/30 [==============================] - 15s 500ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 23/25
30/30 [==============================] - 16s 546ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 24/25
30/30 [==============================] - 16s 545ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 25/25
30/30 [==============================] - 15s 515ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
You have several issues in your model structure.
First of all, look at the output layer of your model:
model.add(Dense(1,activation="softmax"))
You are using a softmax, which means you are treating this as a multi-class classification problem, not a binary one. If that is really your intent, you need an output layer with one neuron per class and the categorical_crossentropy loss, so the compile line becomes:
model.compile(optimizer='adam', loss='categorical_crossentropy',
metrics=['accuracy'])
If that is not the case and you only want to solve a binary classification problem, keep binary_crossentropy, but I do suggest changing the last layer's activation to sigmoid.
Second: it is a bad idea to use sigmoid as the activation in the hidden layers, since it can easily cause gradients to vanish. Try replacing all of the sigmoid activations in the hidden layers with relu, or even better LeakyReLU.
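Putting both points together, here is a rough sketch of the corrected stack for the binary case, keeping the rest of your architecture as it is:
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense

model = Sequential()
model.add(Convolution2D(32, (3, 3), input_shape=(64, 64, 3), activation="relu"))
model.add(MaxPooling2D(2, 2))
model.add(Convolution2D(64, (3, 3), activation="relu"))
model.add(MaxPooling2D(2, 2))
model.add(Convolution2D(64, (3, 3), activation="relu"))   # relu instead of sigmoid
model.add(MaxPooling2D(2, 2))
model.add(Flatten())
model.add(Dense(32, activation="relu"))                   # relu in the hidden layers
model.add(Dense(32, activation="relu"))
model.add(Dense(32, activation="relu"))
model.add(Dense(1, activation="sigmoid"))                 # sigmoid output for binary classification
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])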
The problem is exactly here:
model.add(Dense(1,activation="softmax"))
You cannot use softmax with a single neuron: softmax normalizes across neurons, so with one neuron it always produces a constant 1.0. For binary classification you have to use a sigmoid activation at the output:
model.add(Dense(1,activation="sigmoid"))
It is also unwise to use sigmoid activations in the hidden layers, as they can cause vanishing-gradient problems. Prefer ReLU or similar activations.
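With a sigmoid output, predictions are probabilities of the positive class; a small sketch of turning them into class labels (the input batch name here is just a placeholder):
probs = model.predict(batch_of_images)   # shape (n, 1), values in [0, 1]
labels = (probs > 0.5).astype("int32")   # threshold at 0.5 for the binary decision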
Related
I am trying to classify the severity of COVID chest X-rays using 426 256x256 X-ray images across 4 classes. However, the validation accuracy doesn't improve at all, and the validation loss barely decreases from the start.
This is the model I am using
from keras.models import Sequential
from keras.layers import Dense,Conv2D,MaxPooling2D,Dropout,Flatten
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras import regularizers
model=Sequential()
model.add(Conv2D(filters=64,kernel_size=(4,4),input_shape=image_shape,activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(filters=128,kernel_size=(6,6),input_shape=image_shape,activation="relu"))
model.add(MaxPooling2D(pool_size=(3,3)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(64,activation="relu"))
model.add(Dense(16,activation="relu"))
model.add(Dense(4,activation="softmax"))
model.compile(loss="categorical_crossentropy",optimizer="adam",metrics=["accuracy"])
This is how I train it and these are the outputs I get:
epochs = 20
batch_size = 8
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          epochs=epochs,
          batch_size=batch_size)
Epoch 1/20
27/27 [==============================] - 4s 143ms/step - loss: 0.1776 - accuracy: 0.9528 - val_loss: 3.7355 - val_accuracy: 0.2717
Epoch 2/20
27/27 [==============================] - 4s 142ms/step - loss: 0.1152 - accuracy: 0.9481 - val_loss: 4.0038 - val_accuracy: 0.2283
Epoch 3/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0875 - accuracy: 0.9858 - val_loss: 4.1756 - val_accuracy: 0.2391
Epoch 4/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0521 - accuracy: 0.9906 - val_loss: 4.1034 - val_accuracy: 0.2717
Epoch 5/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0496 - accuracy: 0.9858 - val_loss: 4.8433 - val_accuracy: 0.3152
Epoch 6/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0170 - accuracy: 0.9953 - val_loss: 5.6027 - val_accuracy: 0.3043
Epoch 7/20
27/27 [==============================] - 4s 142ms/step - loss: 0.2307 - accuracy: 0.9245 - val_loss: 4.2759 - val_accuracy: 0.3152
Epoch 8/20
27/27 [==============================] - 4s 142ms/step - loss: 0.6493 - accuracy: 0.7830 - val_loss: 3.8390 - val_accuracy: 0.3478
Epoch 9/20
27/27 [==============================] - 4s 142ms/step - loss: 0.2563 - accuracy: 0.9009 - val_loss: 5.0250 - val_accuracy: 0.2500
Epoch 10/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0286 - accuracy: 1.0000 - val_loss: 4.6475 - val_accuracy: 0.2391
Epoch 11/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0097 - accuracy: 1.0000 - val_loss: 5.2198 - val_accuracy: 0.2391
Epoch 12/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0037 - accuracy: 1.0000 - val_loss: 5.7914 - val_accuracy: 0.2500
Epoch 13/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0048 - accuracy: 1.0000 - val_loss: 5.4341 - val_accuracy: 0.2391
Epoch 14/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0044 - accuracy: 1.0000 - val_loss: 5.6364 - val_accuracy: 0.2391
Epoch 15/20
27/27 [==============================] - 4s 143ms/step - loss: 0.0019 - accuracy: 1.0000 - val_loss: 5.8504 - val_accuracy: 0.2391
Epoch 16/20
27/27 [==============================] - 4s 143ms/step - loss: 0.0013 - accuracy: 1.0000 - val_loss: 5.9604 - val_accuracy: 0.2500
Epoch 17/20
27/27 [==============================] - 4s 149ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 6.0851 - val_accuracy: 0.2717
Epoch 18/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0134 - accuracy: 0.9953 - val_loss: 4.9783 - val_accuracy: 0.2717
Epoch 19/20
27/27 [==============================] - 4s 141ms/step - loss: 0.0068 - accuracy: 1.0000 - val_loss: 5.7421 - val_accuracy: 0.2500
Epoch 20/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0024 - accuracy: 1.0000 - val_loss: 5.8480 - val_accuracy: 0.2283
Any tips on how I can solve this, or am I doing something wrong?
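Side note: EarlyStopping and regularizers are imported in the code above but never used. A minimal, hedged sketch of wiring the imported EarlyStopping callback into the existing fit call (the patience value is illustrative, not taken from the question):
from tensorflow.keras.callbacks import EarlyStopping

# Stop training when the validation loss stops improving; 'patience=3' is illustrative.
early_stop = EarlyStopping(monitor="val_loss", patience=3, restore_best_weights=True)

model.fit(X_train, y_train,
          validation_data=(X_test, y_test),
          epochs=epochs,
          batch_size=batch_size,
          callbacks=[early_stop])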
I am training a DNN model to classify an image into two classes: perfect image or imperfect image. I have 60 images for training, 30 from each class. Because the data is so limited, I decided to sanity-check the model by deliberately overfitting it, i.e. by providing the same data for validation as for training. I hoped to reach 100% accuracy on both training and validation data (since the training and validation sets are identical). The training loss and validation loss do seem to decrease, however both the training and validation accuracy stay constant.
import tensorflow as tf
import tensorflow.keras as keras
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import classification_report
from keras.models import Sequential
from keras.layers import Dropout
from keras.layers.core import Dense
from keras.optimizers import SGD
from keras.datasets import cifar10
import matplotlib.pyplot as plt
import numpy as np
import argparse
import cv2
import glob
initial_lr=0.001
#getting labels from Directories
right_labels=[]
wrong_labels=[]
rightimage_path=glob.glob("images/right_location/*")
wrongimage_path=glob.glob("images/wrong_location/*")
for _ in rightimage_path:
    right_labels.append(1)
#print(labels)
for _ in wrongimage_path:
    wrong_labels.append(0)
labelNames=["right_location","wrong_location"]
right_images=[]
wrong_images=[]
#getting images data from Directories
for img in rightimage_path:
    im=cv2.imread(img)
    im2=cv2.resize(im,(64,64))
    im2=np.expand_dims(im2,axis=0)
    max_pool=keras.layers.MaxPooling2D(pool_size=(2, 2),strides=(1, 1))
    output=max_pool(im2)
    output=np.squeeze(output)
    output=output.flatten()
    output=output/255
    right_images.append(output)
#wrong images
for img in wrongimage_path:
    im=cv2.imread(img)
    im2=cv2.resize(im,(64,64))
    im2=np.expand_dims(im2,axis=0)
    max_pool=keras.layers.MaxPooling2D(pool_size=(2, 2),strides=(1, 1))
    output=max_pool(im2)
    output=np.squeeze(output)
    output=output.flatten()
    output=output/255
    wrong_images.append(output)
#print(len(wrong_images))
trainX=right_images[:30]+wrong_images[:30]
trainX=np.array(trainX)
trainY=right_labels[:30]+wrong_labels[:30]
trainY=np.array(trainY)
#print(trainX[0].shape)
testX=trainX
testY=trainY
#testX=right_images[31:]+wrong_images[31:]
#testX=np.array(testX)
#print(len(testX))
#print(len(right_labels[31:]))
#testY=right_labels[31:]+wrong_labels[31:]
#testY=np.array(testY)
#print(testY)
print(trainY)
print(testY)
#Contruction of Neural Network model
model = Sequential()
model.add(Dense(1024, input_shape=(11907,), activation="relu"))
model.add(Dense(512, activation="relu"))
model.add(Dense(256, activation="relu"))
model.add(Dense(1, activation="softmax"))
#Training model
print("[INFO] training network...")
decay_steps = 1000
sgd = SGD(initial_lr,momentum=0.8)
lr_decayed_fn = tf.keras.experimental.CosineDecay(initial_lr, decay_steps)
model.compile(loss="binary_crossentropy", optimizer=sgd,metrics=["accuracy"])
H = model.fit(trainX, trainY, validation_data=(testX, testY),epochs=100, batch_size=1)
#evaluating the model
print("[INFO] evaluating network...")
predictions = model.predict(testX, batch_size=32)
print(predictions)
print(classification_report(testY,predictions, target_names=labelNames))
Training results:
[INFO] training network...
Epoch 1/100
60/60 [==============================] - 3s 43ms/step - loss: 0.8908 - accuracy: 0.4867 - val_loss: 0.6719 - val_accuracy: 0.5000
Epoch 2/100
60/60 [==============================] - 2s 41ms/step - loss: 0.6893 - accuracy: 0.4791 - val_loss: 0.8592 - val_accuracy: 0.5000
Epoch 3/100
60/60 [==============================] - 2s 41ms/step - loss: 0.7008 - accuracy: 0.5290 - val_loss: 0.6129 - val_accuracy: 0.5000
Epoch 4/100
60/60 [==============================] - 2s 41ms/step - loss: 0.6971 - accuracy: 0.5279 - val_loss: 0.5619 - val_accuracy: 0.5000
Epoch 5/100
60/60 [==============================] - 2s 41ms/step - loss: 0.6770 - accuracy: 0.4745 - val_loss: 0.5669 - val_accuracy: 0.5000
Epoch 6/100
60/60 [==============================] - 2s 41ms/step - loss: 0.5685 - accuracy: 0.5139 - val_loss: 0.4953 - val_accuracy: 0.5000
Epoch 7/100
60/60 [==============================] - 2s 41ms/step - loss: 0.5679 - accuracy: 0.5312 - val_loss: 0.8273 - val_accuracy: 0.5000
Epoch 8/100
60/60 [==============================] - 2s 41ms/step - loss: 0.4373 - accuracy: 0.6591 - val_loss: 0.8112 - val_accuracy: 0.5000
Epoch 9/100
60/60 [==============================] - 2s 41ms/step - loss: 0.7427 - accuracy: 0.5848 - val_loss: 0.5419 - val_accuracy: 0.5000
Epoch 10/100
60/60 [==============================] - 2s 40ms/step - loss: 0.4719 - accuracy: 0.5377 - val_loss: 0.3118 - val_accuracy: 0.5000
Epoch 11/100
60/60 [==============================] - 2s 40ms/step - loss: 0.3253 - accuracy: 0.4684 - val_loss: 0.4851 - val_accuracy: 0.5000
Epoch 12/100
60/60 [==============================] - 3s 42ms/step - loss: 0.5194 - accuracy: 0.4514 - val_loss: 0.1976 - val_accuracy: 0.5000
Epoch 13/100
60/60 [==============================] - 2s 41ms/step - loss: 0.3114 - accuracy: 0.6019 - val_loss: 0.3483 - val_accuracy: 0.5000
Epoch 14/100
60/60 [==============================] - 2s 41ms/step - loss: 0.3794 - accuracy: 0.6003 - val_loss: 0.4723 - val_accuracy: 0.5000
Epoch 15/100
60/60 [==============================] - 2s 41ms/step - loss: 0.4172 - accuracy: 0.5873 - val_loss: 0.4992 - val_accuracy: 0.5000
Epoch 16/100
60/60 [==============================] - 2s 41ms/step - loss: 0.3110 - accuracy: 0.4338 - val_loss: 0.6209 - val_accuracy: 0.5000
Epoch 17/100
60/60 [==============================] - 2s 41ms/step - loss: 0.6362 - accuracy: 0.6615 - val_loss: 0.2337 - val_accuracy: 0.5000
Epoch 18/100
60/60 [==============================] - 3s 42ms/step - loss: 0.1652 - accuracy: 0.5617 - val_loss: 0.0841 - val_accuracy: 0.5000
Epoch 19/100
60/60 [==============================] - 3s 42ms/step - loss: 0.1050 - accuracy: 0.4714 - val_loss: 0.2853 - val_accuracy: 0.5000
Epoch 20/100
60/60 [==============================] - 2s 41ms/step - loss: 0.1031 - accuracy: 0.5254 - val_loss: 0.2085 - val_accuracy: 0.5000
Epoch 21/100
60/60 [==============================] - 2s 42ms/step - loss: 0.0375 - accuracy: 0.5124 - val_loss: 0.0564 - val_accuracy: 0.5000
Epoch 22/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0298 - accuracy: 0.5482 - val_loss: 0.5937 - val_accuracy: 0.5000
Epoch 23/100
60/60 [==============================] - 2s 41ms/step - loss: 0.3126 - accuracy: 0.3884 - val_loss: 0.0527 - val_accuracy: 0.5000
Epoch 24/100
60/60 [==============================] - 2s 41ms/step - loss: 0.1054 - accuracy: 0.5572 - val_loss: 0.0356 - val_accuracy: 0.5000
Epoch 25/100
60/60 [==============================] - 3s 42ms/step - loss: 0.1067 - accuracy: 0.4170 - val_loss: 0.1262 - val_accuracy: 0.5000
Epoch 26/100
60/60 [==============================] - 2s 40ms/step - loss: 0.0551 - accuracy: 0.5608 - val_loss: 0.0255 - val_accuracy: 0.5000
Epoch 27/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0188 - accuracy: 0.5816 - val_loss: 0.3153 - val_accuracy: 0.5000
Epoch 28/100
60/60 [==============================] - 2s 40ms/step - loss: 0.1106 - accuracy: 0.4583 - val_loss: 0.3419 - val_accuracy: 0.5000
Epoch 29/100
60/60 [==============================] - 2s 40ms/step - loss: 0.1493 - accuracy: 0.5334 - val_loss: 0.0351 - val_accuracy: 0.5000
Epoch 30/100
60/60 [==============================] - 2s 41ms/step - loss: 0.1099 - accuracy: 0.4537 - val_loss: 0.1217 - val_accuracy: 0.5000
Epoch 31/100
60/60 [==============================] - 3s 43ms/step - loss: 0.0893 - accuracy: 0.4828 - val_loss: 0.1276 - val_accuracy: 0.5000
Epoch 32/100
60/60 [==============================] - 3s 43ms/step - loss: 0.1806 - accuracy: 0.4265 - val_loss: 0.0157 - val_accuracy: 0.5000
Epoch 33/100
60/60 [==============================] - 3s 44ms/step - loss: 0.0154 - accuracy: 0.3411 - val_loss: 0.0152 - val_accuracy: 0.5000
Epoch 34/100
60/60 [==============================] - 3s 42ms/step - loss: 0.0088 - accuracy: 0.4385 - val_loss: 0.0075 - val_accuracy: 0.5000
Epoch 35/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0068 - accuracy: 0.5450 - val_loss: 0.0045 - val_accuracy: 0.5000
Epoch 36/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0051 - accuracy: 0.4283 - val_loss: 0.0039 - val_accuracy: 0.5000
Epoch 37/100
60/60 [==============================] - 2s 40ms/step - loss: 0.0026 - accuracy: 0.3970 - val_loss: 0.0035 - val_accuracy: 0.5000
Epoch 38/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0037 - accuracy: 0.4758 - val_loss: 0.0030 - val_accuracy: 0.5000
Epoch 39/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0021 - accuracy: 0.5036 - val_loss: 0.0025 - val_accuracy: 0.5000
Epoch 40/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0028 - accuracy: 0.6088 - val_loss: 0.0022 - val_accuracy: 0.5000
Epoch 41/100
60/60 [==============================] - 2s 40ms/step - loss: 0.0023 - accuracy: 0.3521 - val_loss: 0.0020 - val_accuracy: 0.5000
Epoch 42/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0023 - accuracy: 0.4832 - val_loss: 0.0020 - val_accuracy: 0.5000
Epoch 43/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0019 - accuracy: 0.6031 - val_loss: 0.0019 - val_accuracy: 0.5000
Epoch 44/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0014 - accuracy: 0.4757 - val_loss: 0.0017 - val_accuracy: 0.5000
Epoch 45/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0012 - accuracy: 0.5074 - val_loss: 0.0016 - val_accuracy: 0.5000
Epoch 46/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0019 - accuracy: 0.4907 - val_loss: 0.0014 - val_accuracy: 0.5000
Epoch 47/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0013 - accuracy: 0.5113 - val_loss: 0.0013 - val_accuracy: 0.5000
Epoch 48/100
60/60 [==============================] - 2s 42ms/step - loss: 0.0013 - accuracy: 0.4616 - val_loss: 0.0012 - val_accuracy: 0.5000
Epoch 49/100
60/60 [==============================] - 3s 43ms/step - loss: 9.2667e-04 - accuracy: 0.4932 - val_loss: 0.0012 - val_accuracy: 0.5000
Epoch 50/100
60/60 [==============================] - 2s 40ms/step - loss: 0.0012 - accuracy: 0.5685 - val_loss: 0.0011 - val_accuracy: 0.5000
Epoch 51/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0014 - accuracy: 0.4952 - val_loss: 0.0011 - val_accuracy: 0.5000
Epoch 52/100
60/60 [==============================] - 3s 44ms/step - loss: 9.6710e-04 - accuracy: 0.4953 - val_loss: 0.0010 - val_accuracy: 0.5000
Epoch 53/100
60/60 [==============================] - 2s 40ms/step - loss: 0.0013 - accuracy: 0.5196 - val_loss: 9.4684e-04 - val_accuracy: 0.5000
Epoch 54/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0012 - accuracy: 0.6033 - val_loss: 9.0767e-04 - val_accuracy: 0.5000
Epoch 55/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0011 - accuracy: 0.5339 - val_loss: 8.7093e-04 - val_accuracy: 0.5000
Epoch 56/100
60/60 [==============================] - 2s 40ms/step - loss: 7.3141e-04 - accuracy: 0.4408 - val_loss: 8.4973e-04 - val_accuracy: 0.5000
Epoch 57/100
60/60 [==============================] - 2s 40ms/step - loss: 5.9006e-04 - accuracy: 0.5258 - val_loss: 8.1935e-04 - val_accuracy: 0.5000
Epoch 58/100
60/60 [==============================] - 3s 43ms/step - loss: 7.8818e-04 - accuracy: 0.5216 - val_loss: 7.8448e-04 - val_accuracy: 0.5000
Epoch 59/100
60/60 [==============================] - 3s 42ms/step - loss: 9.2272e-04 - accuracy: 0.4472 - val_loss: 7.5098e-04 - val_accuracy: 0.5000
Epoch 60/100
60/60 [==============================] - 3s 42ms/step - loss: 0.0011 - accuracy: 0.5485 - val_loss: 7.2444e-04 - val_accuracy: 0.5000
Epoch 61/100
60/60 [==============================] - 2s 41ms/step - loss: 5.5459e-04 - accuracy: 0.4393 - val_loss: 7.1711e-04 - val_accuracy: 0.5000
Epoch 62/100
60/60 [==============================] - 3s 43ms/step - loss: 7.3943e-04 - accuracy: 0.6748 - val_loss: 7.0446e-04 - val_accuracy: 0.5000
Epoch 63/100
60/60 [==============================] - 2s 41ms/step - loss: 6.0513e-04 - accuracy: 0.4365 - val_loss: 6.5710e-04 - val_accuracy: 0.5000
Epoch 64/100
60/60 [==============================] - 3s 43ms/step - loss: 7.1400e-04 - accuracy: 0.5855 - val_loss: 6.3535e-04 - val_accuracy: 0.5000
Epoch 65/100
60/60 [==============================] - 2s 40ms/step - loss: 4.1557e-04 - accuracy: 0.4226 - val_loss: 6.1638e-04 - val_accuracy: 0.5000
Epoch 66/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0010 - accuracy: 0.5130 - val_loss: 5.9961e-04 - val_accuracy: 0.5000
Epoch 67/100
60/60 [==============================] - 2s 40ms/step - loss: 4.2256e-04 - accuracy: 0.5745 - val_loss: 5.8452e-04 - val_accuracy: 0.5000
Epoch 68/100
60/60 [==============================] - 3s 44ms/step - loss: 4.6930e-04 - accuracy: 0.4256 - val_loss: 5.6929e-04 - val_accuracy: 0.5000
Epoch 69/100
60/60 [==============================] - 3s 43ms/step - loss: 5.0537e-04 - accuracy: 0.5201 - val_loss: 5.5308e-04 - val_accuracy: 0.5000
Epoch 70/100
60/60 [==============================] - 2s 40ms/step - loss: 4.2207e-04 - accuracy: 0.5162 - val_loss: 5.3811e-04 - val_accuracy: 0.5000
Epoch 71/100
60/60 [==============================] - 3s 42ms/step - loss: 4.2835e-04 - accuracy: 0.5187 - val_loss: 5.2421e-04 - val_accuracy: 0.5000
Epoch 72/100
60/60 [==============================] - 2s 41ms/step - loss: 6.9296e-04 - accuracy: 0.5396 - val_loss: 5.1115e-04 - val_accuracy: 0.5000
Epoch 73/100
60/60 [==============================] - 2s 42ms/step - loss: 6.4352e-04 - accuracy: 0.4772 - val_loss: 4.9949e-04 - val_accuracy: 0.5000
Epoch 74/100
60/60 [==============================] - 2s 41ms/step - loss: 4.0728e-04 - accuracy: 0.4406 - val_loss: 4.8785e-04 - val_accuracy: 0.5000
Epoch 75/100
60/60 [==============================] - 2s 41ms/step - loss: 6.5099e-04 - accuracy: 0.4769 - val_loss: 4.7489e-04 - val_accuracy: 0.5000
Epoch 76/100
60/60 [==============================] - 2s 40ms/step - loss: 5.3847e-04 - accuracy: 0.5610 - val_loss: 4.6401e-04 - val_accuracy: 0.5000
Epoch 77/100
60/60 [==============================] - 3s 43ms/step - loss: 3.2081e-04 - accuracy: 0.5025 - val_loss: 4.5471e-04 - val_accuracy: 0.5000
Epoch 78/100
60/60 [==============================] - 2s 41ms/step - loss: 4.1042e-04 - accuracy: 0.4055 - val_loss: 4.4509e-04 - val_accuracy: 0.5000
Epoch 79/100
60/60 [==============================] - 3s 46ms/step - loss: 4.0072e-04 - accuracy: 0.5982 - val_loss: 4.3807e-04 - val_accuracy: 0.5000
Epoch 80/100
60/60 [==============================] - 2s 40ms/step - loss: 3.6314e-04 - accuracy: 0.4305 - val_loss: 4.2492e-04 - val_accuracy: 0.5000
Epoch 81/100
60/60 [==============================] - 3s 42ms/step - loss: 4.9497e-04 - accuracy: 0.4644 - val_loss: 4.2099e-04 - val_accuracy: 0.5000
Epoch 82/100
60/60 [==============================] - 3s 42ms/step - loss: 4.3963e-04 - accuracy: 0.4163 - val_loss: 4.0970e-04 - val_accuracy: 0.5000
Epoch 83/100
60/60 [==============================] - 3s 42ms/step - loss: 2.3065e-04 - accuracy: 0.5292 - val_loss: 4.0007e-04 - val_accuracy: 0.5000
Epoch 84/100
60/60 [==============================] - 2s 40ms/step - loss: 3.6344e-04 - accuracy: 0.4781 - val_loss: 3.9164e-04 - val_accuracy: 0.5000
Epoch 85/100
60/60 [==============================] - 2s 41ms/step - loss: 3.2347e-04 - accuracy: 0.4355 - val_loss: 3.8515e-04 - val_accuracy: 0.5000
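Note that this model has the same single-neuron-softmax problem discussed in the answers above, which matches the constant 0.5000 validation accuracy. A hedged sketch of the two places that would change, using only names already defined in the question:
# Output layer: a single sigmoid unit is what binary_crossentropy expects.
model.add(Dense(1, activation="sigmoid"))

# classification_report expects class labels, not probabilities,
# so threshold the sigmoid outputs before calling it.
predictions = (model.predict(testX, batch_size=32) > 0.5).astype("int32")
print(classification_report(testY, predictions, target_names=labelNames))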
I am getting a weird model summary when using Keras and ImageDataGenerator for cats and dogs classification.
I am using Google Colab + GPU.
The problem is that the model summary seems to show weird values, and it looks like the loss function is not working.
Kindly suggest what the problem is.
My code is as below
train_datagen=ImageDataGenerator(rescale=1./255)
test_datagen=ImageDataGenerator(rescale=1./255)
train_generator=train_datagen.flow_from_directory(
    train_dir,
    target_size=(150,150),
    batch_size=32,
    class_mode='binary')
validation_generator=test_datagen.flow_from_directory(
    validation_dir,
    target_size=(150,150),
    batch_size=50,
    class_mode='binary')
history=model.fit(
    train_generator,
    steps_per_epoch=31,
    epochs=20,
    validation_data=validation_generator,
    validation_steps=20)
Model Summary is as below
Epoch 1/20
31/31 [==============================] - 10s 241ms/step - loss: 0.1302 - acc: 1.0000 - val_loss: 5.0506 - val_acc: 0.5000
Epoch 2/20
31/31 [==============================] - 6s 215ms/step - loss: 4.4286e-05 - acc: 1.0000 - val_loss: 6.8281 - val_acc: 0.5000
Epoch 3/20
31/31 [==============================] - 7s 212ms/step - loss: 4.6900e-06 - acc: 1.0000 - val_loss: 8.1907 - val_acc: 0.5000
Epoch 4/20
31/31 [==============================] - 6s 211ms/step - loss: 5.8646e-07 - acc: 1.0000 - val_loss: 9.3841 - val_acc: 0.5000
Epoch 5/20
31/31 [==============================] - 6s 212ms/step - loss: 2.0634e-07 - acc: 1.0000 - val_loss: 10.3554 - val_acc: 0.5000
Epoch 6/20
31/31 [==============================] - 6s 211ms/step - loss: 2.8432e-08 - acc: 1.0000 - val_loss: 11.3546 - val_acc: 0.5000
Epoch 7/20
31/31 [==============================] - 6s 211ms/step - loss: 1.3657e-08 - acc: 1.0000 - val_loss: 12.1012 - val_acc: 0.5000
Epoch 8/20
31/31 [==============================] - 7s 215ms/step - loss: 4.8156e-09 - acc: 1.0000 - val_loss: 12.6892 - val_acc: 0.5000
Epoch 9/20
31/31 [==============================] - 7s 219ms/step - loss: 2.9152e-09 - acc: 1.0000 - val_loss: 13.1079 - val_acc: 0.5000
Epoch 10/20
31/31 [==============================] - 7s 216ms/step - loss: 1.6705e-09 - acc: 1.0000 - val_loss: 13.4230 - val_acc: 0.5000
Epoch 11/20
31/31 [==============================] - 7s 218ms/step - loss: 1.2603e-09 - acc: 1.0000 - val_loss: 13.6259 - val_acc: 0.5000
Epoch 12/20
31/31 [==============================] - 7s 218ms/step - loss: 1.7701e-09 - acc: 1.0000 - val_loss: 13.7718 - val_acc: 0.5000
Epoch 13/20
31/31 [==============================] - 7s 218ms/step - loss: 1.6043e-09 - acc: 1.0000 - val_loss: 13.9099 - val_acc: 0.5000
Epoch 14/20
31/31 [==============================] - 7s 219ms/step - loss: 3.8831e-10 - acc: 1.0000 - val_loss: 14.0405 - val_acc: 0.5000
Epoch 15/20
31/31 [==============================] - 7s 216ms/step - loss: 8.9113e-10 - acc: 1.0000 - val_loss: 14.1567 - val_acc: 0.5000
Epoch 16/20
31/31 [==============================] - 7s 218ms/step - loss: 8.5343e-10 - acc: 1.0000 - val_loss: 14.2485 - val_acc: 0.5000
Epoch 17/20
31/31 [==============================] - 7s 217ms/step - loss: 2.8638e-10 - acc: 1.0000 - val_loss: 14.3410 - val_acc: 0.5000
Epoch 18/20
31/31 [==============================] - 7s 218ms/step - loss: 5.3467e-10 - acc: 1.0000 - val_loss: 14.4225 - val_acc: 0.5000
Epoch 19/20
31/31 [==============================] - 7s 217ms/step - loss: 4.5269e-10 - acc: 1.0000 - val_loss: 14.4895 - val_acc: 0.5000
Epoch 20/20
31/31 [==============================] - 7s 216ms/step - loss: 3.4228e-10 - acc: 1.0000 - val_loss: 14.5428 - val_acc: 0.5000
What you have posted is not the model summary; it is the training log printed by model.fit(). To see the actual model summary (layer output shapes and parameter counts), call model.summary(). The history = model.fit(...) call trains the model and returns the per-epoch metrics shown above.
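For example, assuming model is the compiled Keras model from the question, a quick sketch of the distinction:
# Prints the architecture: layer names, output shapes, and parameter counts.
model.summary()

# Trains the model; the returned History object holds the per-epoch metrics
# (the numbers shown in the log above).
history = model.fit(train_generator,
                    steps_per_epoch=31,
                    epochs=20,
                    validation_data=validation_generator,
                    validation_steps=20)
# Key names may be 'acc'/'val_acc' or 'accuracy'/'val_accuracy' depending on the Keras version.
print(history.history['acc'], history.history['val_acc'])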
I have a 3D CNN U-Net architecture for a segmentation problem. I am using the Adam optimizer together with binary cross-entropy, and the metric is "accuracy". I am trying to understand why it does not improve.
Train on 2774 samples, validate on 694 samples
Epoch 1/20
2774/2774 [==============================] - 166s 60ms/step - loss: 0.5189 - acc: 0.7928 - val_loss: 0.5456 - val_acc: 0.7674
Epoch 00001: val_loss improved from inf to 0.54555, saving model to model-tgs-salt-1.h5
Epoch 2/20
2774/2774 [==============================] - 170s 61ms/step - loss: 0.5170 - acc: 0.7928 - val_loss: 0.5485 - val_acc: 0.7674
Epoch 00002: val_loss did not improve from 0.54555
Epoch 3/20
2774/2774 [==============================] - 169s 61ms/step - loss: 0.5119 - acc: 0.7928 - val_loss: 0.5455 - val_acc: 0.7674
Epoch 00003: val_loss improved from 0.54555 to 0.54549, saving model to model-tgs-salt-1.h5
Epoch 4/20
2774/2774 [==============================] - 170s 61ms/step - loss: 0.5117 - acc: 0.7928 - val_loss: 0.5715 - val_acc: 0.7674
Epoch 00004: val_loss did not improve from 0.54549
Epoch 5/20
2774/2774 [==============================] - 169s 61ms/step - loss: 0.5126 - acc: 0.7928 - val_loss: 0.5566 - val_acc: 0.7674
Epoch 00005: val_loss did not improve from 0.54549
Epoch 6/20
2774/2774 [==============================] - 169s 61ms/step - loss: 0.5138 - acc: 0.7928 - val_loss: 0.5503 - val_acc: 0.7674
Epoch 00006: val_loss did not improve from 0.54549
Epoch 7/20
2774/2774 [==============================] - 170s 61ms/step - loss: 0.5103 - acc: 0.7928 - val_loss: 0.5444 - val_acc: 0.7674
Epoch 00007: val_loss improved from 0.54549 to 0.54436, saving model to model-tgs-salt-1.h5
Epoch 8/20
2774/2774 [==============================] - 169s 61ms/step - loss: 0.5137 - acc: 0.7928 - val_loss: 0.5454 - val_acc: 0.7674
If you are training with a small batch size, try increasing it. I think it could help with training speed.
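As a hedged sketch only (the U-Net code is not shown in the question, so the model and data names here are placeholders), increasing the batch size would look like:
# Placeholder names; substitute the actual U-Net model and training arrays or generators.
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=20,
          batch_size=64)  # e.g. try 64 instead of a smaller default such as 32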
I am trying to train a model on the churn_modelling.csv file in Keras, but I don't see the model learning. Here is my code:
# -*- coding: utf-8 -*-
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
import pandas as pd
import numpy as np
#read from CSV file ,convert categorial value to one-hot-encoding and convert the result to numpy array
df=pd.read_csv("churn_modelling.csv")
X=pd.get_dummies(df, columns=['Geography','Gender'])
X=X[['CreditScore','Age','Tenure','Balance','NumOfProducts','HasCrCard','IsActiveMember','EstimatedSalary','Geography_France','Geography_Germany','Geography_Spain','Gender_Female','Gender_Male','Exited']]
dataset=X.as_matrix()
X_train=dataset[:,0:13]
Y_train=dataset[:,13]
model=Sequential()
model.add(Dense(26, input_dim=13, activation='relu'))
#model.add(Dense(15, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
sgd = SGD(lr=0.02)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
model.fit(X_train,Y_train, validation_split=0.05, epochs=10, batch_size=200)
This is the output I got:
Train on 9500 samples, validate on 500 samples
Epoch 1/10
9500/9500 [==============================] - 0s - loss: 3.7996 - acc: 0.7637 - val_loss: 2.8045 - val_acc: 0.8260
Epoch 2/10
9500/9500 [==============================] - 0s - loss: 3.3085 - acc: 0.7947 - val_loss: 2.8045 - val_acc: 0.8260
Epoch 3/10
9500/9500 [==============================] - 0s - loss: 3.3085 - acc: 0.7947 - val_loss: 2.8045 - val_acc: 0.8260
Epoch 4/10
9500/9500 [==============================] - 0s - loss: 3.3085 - acc: 0.7947 - val_loss: 2.8045 - val_acc: 0.8260
Epoch 5/10
9500/9500 [==============================] - 0s - loss: 3.3085 - acc: 0.7947 - val_loss: 2.8045 - val_acc: 0.8260
Epoch 6/10
9500/9500 [==============================] - 0s - loss: 3.3085 - acc: 0.7947 - val_loss: 2.8045 - val_acc: 0.8260
Epoch 7/10
9500/9500 [==============================] - 0s - loss: 3.3085 - acc: 0.7947 - val_loss: 2.8045 - val_acc: 0.8260
Epoch 8/10
9500/9500 [==============================] - 0s - loss: 3.3085 - acc: 0.7947 - val_loss: 2.8045 - val_acc: 0.8260
Epoch 9/10
9500/9500 [==============================] - 0s - loss: 3.3085 - acc: 0.7947 - val_loss: 2.8045 - val_acc: 0.8260
Epoch 10/10
9500/9500 [==============================] - 0s - loss: 3.3085 - acc: 0.7947 - val_loss: 2.8045 - val_acc: 0.8260
Even if I run the program with 100 epochs, I still get the same result val_acc: 0.8260. Thank you
It seems that if the training set is rescaled, the accuracy improves slightly, to about 86%. I used the following code:
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
rescaledX_train = scaler.fit_transform(X_train)
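Presumably the rescaled array then replaces X_train in the original fit call, e.g.:
# Same training call as before, but on the min-max-scaled features.
model.fit(rescaledX_train, Y_train, validation_split=0.05, epochs=10, batch_size=200)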