Comparing results of model training in Keras (TensorFlow backend)

I trained two CNNs on 6,000 images, and they appear to give similar results. The models are trained for binary classification; the only difference between the two is in the dense layers. Model A has the following dense layer config:
fc1 (Dense) (None, 512) 12845568
_________________________________________________________________
fc2 (Dense) (None, 256) 131328
_________________________________________________________________
output (Dense) (None, 2) 514
and gives the following result:
Train on 4800 samples, validate on 1200 samples
Epoch 1/30
4800/4800 [===] - 98s - loss: 0.7923 - acc: 0.6865 - val_loss: 0.4599 - val_acc: 0.7858
Epoch 2/30
4800/4800 [===] - 80s - loss: 0.4263 - acc: 0.7996 - val_loss: 0.5913 - val_acc: 0.6350
Epoch 3/30
4800/4800 [===] - 80s - loss: 0.3912 - acc: 0.8133 - val_loss: 0.3199 - val_acc: 0.8625
Epoch 4/30
4800/4800 [===] - 80s - loss: 0.3562 - acc: 0.8402 - val_loss: 0.3086 - val_acc: 0.8708
Epoch 5/30
4800/4800 [===] - 80s - loss: 0.3251 - acc: 0.8558 - val_loss: 0.2784 - val_acc: 0.8817
Epoch 6/30
4800/4800 [===] - 80s - loss: 0.3150 - acc: 0.8631 - val_loss: 0.2792 - val_acc: 0.8817
Epoch 7/30
4800/4800 [===] - 80s - loss: 0.2997 - acc: 0.8692 - val_loss: 0.3615 - val_acc: 0.8342
Epoch 8/30
4800/4800 [===] - 80s - loss: 0.2990 - acc: 0.8662 - val_loss: 0.2630 - val_acc: 0.8908
Epoch 9/30
4800/4800 [===] - 80s - loss: 0.2594 - acc: 0.8867 - val_loss: 0.3102 - val_acc: 0.8700
Epoch 10/30
4800/4800 [===] - 80s - loss: 0.2846 - acc: 0.8785 - val_loss: 0.4234 - val_acc: 0.7842
Epoch 11/30
4800/4800 [===] - 80s - loss: 0.2510 - acc: 0.8969 - val_loss: 0.2952 - val_acc: 0.8742
Epoch 12/30
4800/4800 [===] - 80s - loss: 0.2288 - acc: 0.9090 - val_loss: 0.2680 - val_acc: 0.8858
Epoch 13/30
4800/4800 [===] - 80s - loss: 0.2277 - acc: 0.9044 - val_loss: 0.3745 - val_acc: 0.8600
Epoch 14/30
4800/4800 [===] - 80s - loss: 0.2659 - acc: 0.8873 - val_loss: 0.2438 - val_acc: 0.9025
Epoch 15/30
4800/4800 [===] - 80s - loss: 0.2101 - acc: 0.9133 - val_loss: 0.3176 - val_acc: 0.8667
Epoch 16/30
4800/4800 [===] - 80s - loss: 0.2094 - acc: 0.9146 - val_loss: 0.2763 - val_acc: 0.8875
Epoch 17/30
4800/4800 [===] - 80s - loss: 0.2058 - acc: 0.9125 - val_loss: 0.2677 - val_acc: 0.8925
Epoch 18/30
4800/4800 [===] - 80s - loss: 0.1839 - acc: 0.9296 - val_loss: 0.2449 - val_acc: 0.9117
Epoch 19/30
4800/4800 [===] - 80s - loss: 0.1918 - acc: 0.9221 - val_loss: 0.2471 - val_acc: 0.8992
Epoch 20/30
4800/4800 [===] - 80s - loss: 0.2014 - acc: 0.9225 - val_loss: 0.2709 - val_acc: 0.8808
Epoch 21/30
4800/4800 [===] - 80s - loss: 0.1540 - acc: 0.9425 - val_loss: 0.2541 - val_acc: 0.8933
Epoch 22/30
4800/4800 [===] - 80s - loss: 0.1803 - acc: 0.9294 - val_loss: 0.2289 - val_acc: 0.9058
Epoch 23/30
4800/4800 [===] - 80s - loss: 0.1548 - acc: 0.9425 - val_loss: 0.2417 - val_acc: 0.9175
Epoch 24/30
4800/4800 [===] - 80s - loss: 0.1754 - acc: 0.9294 - val_loss: 0.4914 - val_acc: 0.8183
Epoch 25/30
4800/4800 [===] - 80s - loss: 0.1449 - acc: 0.9419 - val_loss: 0.2281 - val_acc: 0.9125
Epoch 26/30
4800/4800 [===] - 80s - loss: 0.1529 - acc: 0.9385 - val_loss: 0.2328 - val_acc: 0.9217
Epoch 27/30
4800/4800 [===] - 80s - loss: 0.1237 - acc: 0.9533 - val_loss: 0.2646 - val_acc: 0.9167
Epoch 28/30
4800/4800 [===] - 80s - loss: 0.1236 - acc: 0.9531 - val_loss: 0.2485 - val_acc: 0.9100
Epoch 29/30
4800/4800 [===] - 80s - loss: 0.1301 - acc: 0.9500 - val_loss: 0.2726 - val_acc: 0.9042
Epoch 30/30
4800/4800 [===] - 80s - loss: 0.1335 - acc: 0.9500 - val_loss: 0.2803 - val_acc: 0.9183
Training time: 2440.315860271454
1200/1200 [===] - 27s
[INFO] loss=0.2803, accuracy: 91.8333%
=================================================================
Model B has the following final dense layer config:
fc1 (Dense) (None, 1024) 25691136
_________________________________________________________________
fc2 (Dense) (None, 512) 524800
_________________________________________________________________
output (Dense) (None, 2) 1026
Result:
Train on 4800 samples, validate on 1200 samples
Epoch 1/30
4800/4800 [===] - 87s - loss: 0.4743 - acc: 0.7708 - val_loss: 0.4073 - val_acc: 0.8233
Epoch 2/30
4800/4800 [===] - 87s - loss: 0.3732 - acc: 0.8263 - val_loss: 0.3359 - val_acc: 0.8525
Epoch 3/30
4800/4800 [===] - 87s - loss: 0.3383 - acc: 0.8500 - val_loss: 0.3017 - val_acc: 0.8658
Epoch 4/30
4800/4800 [===] - 87s - loss: 0.3094 - acc: 0.8637 - val_loss: 0.3024 - val_acc: 0.8683
Epoch 5/30
4800/4800 [===] - 87s - loss: 0.3036 - acc: 0.8669 - val_loss: 0.3848 - val_acc: 0.8058
Epoch 6/30
4800/4800 [===] - 87s - loss: 0.2848 - acc: 0.8802 - val_loss: 0.2730 - val_acc: 0.8883
Epoch 7/30
4800/4800 [===] - 87s - loss: 0.2630 - acc: 0.8877 - val_loss: 0.3234 - val_acc: 0.8667
Epoch 8/30
4800/4800 [===] - 87s - loss: 0.2491 - acc: 0.8952 - val_loss: 0.2758 - val_acc: 0.8933
Epoch 9/30
4800/4800 [===] - 87s - loss: 0.2484 - acc: 0.8992 - val_loss: 0.3271 - val_acc: 0.8467
Epoch 10/30
4800/4800 [===] - 87s - loss: 0.2427 - acc: 0.8992 - val_loss: 0.2743 - val_acc: 0.8808
Epoch 11/30
4800/4800 [===] - 87s - loss: 0.2346 - acc: 0.9017 - val_loss: 0.2379 - val_acc: 0.9008
Epoch 12/30
4800/4800 [===] - 87s - loss: 0.2250 - acc: 0.9108 - val_loss: 0.2432 - val_acc: 0.9017
Epoch 13/30
4800/4800 [===] - 87s - loss: 0.1993 - acc: 0.9221 - val_loss: 0.2892 - val_acc: 0.8858
Epoch 14/30
4800/4800 [===] - 87s - loss: 0.2148 - acc: 0.9125 - val_loss: 0.3201 - val_acc: 0.8842
Epoch 15/30
4800/4800 [===] - 87s - loss: 0.1823 - acc: 0.9287 - val_loss: 0.5481 - val_acc: 0.8133
Epoch 16/30
4800/4800 [===] - 87s - loss: 0.1873 - acc: 0.9281 - val_loss: 0.2449 - val_acc: 0.9092
Epoch 17/30
4800/4800 [===] - 87s - loss: 0.1622 - acc: 0.9392 - val_loss: 0.2373 - val_acc: 0.9092
Epoch 18/30
4800/4800 [===] - 87s - loss: 0.1782 - acc: 0.9304 - val_loss: 0.2856 - val_acc: 0.8725
Epoch 19/30
4800/4800 [===] - 87s - loss: 0.1632 - acc: 0.9369 - val_loss: 0.2518 - val_acc: 0.9067
Epoch 20/30
4800/4800 [===] - 87s - loss: 0.1577 - acc: 0.9381 - val_loss: 0.2629 - val_acc: 0.9050
Epoch 21/30
4800/4800 [===] - 87s - loss: 0.1395 - acc: 0.9481 - val_loss: 0.2278 - val_acc: 0.9133
Epoch 22/30
4800/4800 [===] - 87s - loss: 0.1422 - acc: 0.9444 - val_loss: 0.2232 - val_acc: 0.9158
Epoch 23/30
4800/4800 [===] - 87s - loss: 0.1436 - acc: 0.9448 - val_loss: 0.2862 - val_acc: 0.9042
Epoch 24/30
4800/4800 [===] - 87s - loss: 0.1402 - acc: 0.9448 - val_loss: 0.3186 - val_acc: 0.8842
Epoch 25/30
4800/4800 [===] - 86s - loss: 0.1261 - acc: 0.9542 - val_loss: 0.2762 - val_acc: 0.9092
Epoch 26/30
4800/4800 [===] - 86s - loss: 0.1143 - acc: 0.9529 - val_loss: 0.2442 - val_acc: 0.9125
Epoch 27/30
4800/4800 [===] - 86s - loss: 0.1141 - acc: 0.9565 - val_loss: 0.3128 - val_acc: 0.8658
Epoch 28/30
4800/4800 [===] - 86s - loss: 0.1092 - acc: 0.9606 - val_loss: 0.2669 - val_acc: 0.9092
Epoch 29/30
4800/4800 [===] - 86s - loss: 0.0939 - acc: 0.9642 - val_loss: 0.2535 - val_acc: 0.8975
Epoch 30/30
4800/4800 [===] - 86s - loss: 0.1098 - acc: 0.9615 - val_loss: 0.2594 - val_acc: 0.9008
Training time: 2615.465226173401
1200/1200 [==============================] - 30s
[INFO] loss=0.2594, accuracy: 90.0833%
Both models seem to give similar results. Is this a good result, or are there anomalies that I can't detect? Or is the model a good one?
Additional info: batch size 128, loss = categorical cross-entropy, optimizer = AdaDelta.
Any suggestions for improvement are also appreciated.

The next steps I would consider are:
Watch what is happening to your training loss; it can be a useful metric. If you see your training accuracy head toward 100% (training loss near zero) while validation lags, try adding more regularization, such as L2 regularization.
Is your convolutional network using residual layers and batch normalization? Residual networks with batch norm seem to be state of the art in most cases right now.
Test different numbers of filters in each convolutional layer. If you have too many filters you overfit; if you have too few you underfit. There's a sweet spot in there, and it makes a difference.
Your batch size can affect your results as well, so play with it. I've done extensive hyperparameter searches on small vs. large datasets and come up with widely different ideal batch sizes.
You might test the Adam optimizer, though AdaDelta is solid as well.
Train a number of models; random initialization will produce slightly different end results, and using an ensemble of models will do better still.
Randomly initializing with the Xavier initializer may give you a small bump in accuracy (there's a different variant for conv layers and FC layers).
Lower your learning rate after the validation error plateaus; this typically improves performance a bit each time you lower it.
There are always many things you can try. Start running experiments: train a few models with various values from these suggestions and see where you get improvements. A sketch combining a few of them in Keras follows below.
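As a concrete illustration, here is a minimal sketch of a few of these suggestions (L2 regularization, Xavier initialization, the Adam optimizer, and lowering the learning rate on a plateau) in Keras. The dense layer sizes mirror Model A, and the input size of 25088 is inferred from fc1's parameter count (512 * (25088 + 1) = 12,845,568); the regularization strength and callback settings are illustrative assumptions, not tuned values:
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2
from keras.optimizers import Adam
from keras.callbacks import ReduceLROnPlateau

model = Sequential()
# fc1/fc2 with L2 weight decay and Xavier (glorot) initialization
model.add(Dense(512, activation='relu', input_shape=(25088,),
                kernel_initializer='glorot_uniform',
                kernel_regularizer=l2(1e-4), name='fc1'))
model.add(Dense(256, activation='relu',
                kernel_initializer='glorot_uniform',
                kernel_regularizer=l2(1e-4), name='fc2'))
model.add(Dense(2, activation='softmax', name='output'))

model.compile(loss='categorical_crossentropy', optimizer=Adam(),
              metrics=['accuracy'])

# Halve the learning rate whenever validation loss stops improving.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                              patience=3, verbose=1)
# model.fit(x_train, y_train, validation_split=0.2, epochs=30,
#           batch_size=128, callbacks=[reduce_lr])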

Related

Validation accuracy doesn't improve at all from the beginning

I am trying to classify the severity of COVID X-rays using 426 256x256 X-ray images across 4 classes. However, the validation accuracy doesn't improve at all, and the validation loss barely decreases from the start.
This is the model I am using:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Dropout, Flatten
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras import regularizers
model = Sequential()
model.add(Conv2D(filters=64, kernel_size=(4,4), input_shape=image_shape, activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(filters=128, kernel_size=(6,6), activation="relu"))
model.add(MaxPooling2D(pool_size=(3,3)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(64, activation="relu"))
model.add(Dense(16, activation="relu"))
model.add(Dense(4, activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
These are the outputs I get:
epochs = 20
batch_size = 8
model.fit(X_train, y_train,
          validation_data=(X_test, y_test),
          epochs=epochs,
          batch_size=batch_size)
Epoch 1/20
27/27 [==============================] - 4s 143ms/step - loss: 0.1776 - accuracy: 0.9528 - val_loss: 3.7355 - val_accuracy: 0.2717
Epoch 2/20
27/27 [==============================] - 4s 142ms/step - loss: 0.1152 - accuracy: 0.9481 - val_loss: 4.0038 - val_accuracy: 0.2283
Epoch 3/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0875 - accuracy: 0.9858 - val_loss: 4.1756 - val_accuracy: 0.2391
Epoch 4/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0521 - accuracy: 0.9906 - val_loss: 4.1034 - val_accuracy: 0.2717
Epoch 5/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0496 - accuracy: 0.9858 - val_loss: 4.8433 - val_accuracy: 0.3152
Epoch 6/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0170 - accuracy: 0.9953 - val_loss: 5.6027 - val_accuracy: 0.3043
Epoch 7/20
27/27 [==============================] - 4s 142ms/step - loss: 0.2307 - accuracy: 0.9245 - val_loss: 4.2759 - val_accuracy: 0.3152
Epoch 8/20
27/27 [==============================] - 4s 142ms/step - loss: 0.6493 - accuracy: 0.7830 - val_loss: 3.8390 - val_accuracy: 0.3478
Epoch 9/20
27/27 [==============================] - 4s 142ms/step - loss: 0.2563 - accuracy: 0.9009 - val_loss: 5.0250 - val_accuracy: 0.2500
Epoch 10/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0286 - accuracy: 1.0000 - val_loss: 4.6475 - val_accuracy: 0.2391
Epoch 11/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0097 - accuracy: 1.0000 - val_loss: 5.2198 - val_accuracy: 0.2391
Epoch 12/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0037 - accuracy: 1.0000 - val_loss: 5.7914 - val_accuracy: 0.2500
Epoch 13/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0048 - accuracy: 1.0000 - val_loss: 5.4341 - val_accuracy: 0.2391
Epoch 14/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0044 - accuracy: 1.0000 - val_loss: 5.6364 - val_accuracy: 0.2391
Epoch 15/20
27/27 [==============================] - 4s 143ms/step - loss: 0.0019 - accuracy: 1.0000 - val_loss: 5.8504 - val_accuracy: 0.2391
Epoch 16/20
27/27 [==============================] - 4s 143ms/step - loss: 0.0013 - accuracy: 1.0000 - val_loss: 5.9604 - val_accuracy: 0.2500
Epoch 17/20
27/27 [==============================] - 4s 149ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 6.0851 - val_accuracy: 0.2717
Epoch 18/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0134 - accuracy: 0.9953 - val_loss: 4.9783 - val_accuracy: 0.2717
Epoch 19/20
27/27 [==============================] - 4s 141ms/step - loss: 0.0068 - accuracy: 1.0000 - val_loss: 5.7421 - val_accuracy: 0.2500
Epoch 20/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0024 - accuracy: 1.0000 - val_loss: 5.8480 - val_accuracy: 0.2283
Any tips on how I can solve this, or am I doing something wrong?

Weird Model Summary

I am getting a weird model summary using Keras and ImageDataGenerator with cats-and-dogs classification.
I am using Google Colab + GPU.
The problem is that the model summary seems to show weird values, and it looks like the loss function is not working.
Kindly suggest what the problem is.
My code is as below:
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150,150),
    batch_size=32,
    class_mode='binary')
validation_generator = train_datagen.flow_from_directory(
    validation_dir,
    target_size=(150,150),
    batch_size=50,
    class_mode='binary')
history = model.fit(train_generator,
                    steps_per_epoch=31,
                    epochs=20,
                    validation_data=validation_generator,
                    validation_steps=20)
The model summary is as below:
Epoch 1/20
31/31 [==============================] - 10s 241ms/step - loss: 0.1302 - acc: 1.0000 - val_loss: 5.0506 - val_acc: 0.5000
Epoch 2/20
31/31 [==============================] - 6s 215ms/step - loss: 4.4286e-05 - acc: 1.0000 - val_loss: 6.8281 - val_acc: 0.5000
Epoch 3/20
31/31 [==============================] - 7s 212ms/step - loss: 4.6900e-06 - acc: 1.0000 - val_loss: 8.1907 - val_acc: 0.5000
Epoch 4/20
31/31 [==============================] - 6s 211ms/step - loss: 5.8646e-07 - acc: 1.0000 - val_loss: 9.3841 - val_acc: 0.5000
Epoch 5/20
31/31 [==============================] - 6s 212ms/step - loss: 2.0634e-07 - acc: 1.0000 - val_loss: 10.3554 - val_acc: 0.5000
Epoch 6/20
31/31 [==============================] - 6s 211ms/step - loss: 2.8432e-08 - acc: 1.0000 - val_loss: 11.3546 - val_acc: 0.5000
Epoch 7/20
31/31 [==============================] - 6s 211ms/step - loss: 1.3657e-08 - acc: 1.0000 - val_loss: 12.1012 - val_acc: 0.5000
Epoch 8/20
31/31 [==============================] - 7s 215ms/step - loss: 4.8156e-09 - acc: 1.0000 - val_loss: 12.6892 - val_acc: 0.5000
Epoch 9/20
31/31 [==============================] - 7s 219ms/step - loss: 2.9152e-09 - acc: 1.0000 - val_loss: 13.1079 - val_acc: 0.5000
Epoch 10/20
31/31 [==============================] - 7s 216ms/step - loss: 1.6705e-09 - acc: 1.0000 - val_loss: 13.4230 - val_acc: 0.5000
Epoch 11/20
31/31 [==============================] - 7s 218ms/step - loss: 1.2603e-09 - acc: 1.0000 - val_loss: 13.6259 - val_acc: 0.5000
Epoch 12/20
31/31 [==============================] - 7s 218ms/step - loss: 1.7701e-09 - acc: 1.0000 - val_loss: 13.7718 - val_acc: 0.5000
Epoch 13/20
31/31 [==============================] - 7s 218ms/step - loss: 1.6043e-09 - acc: 1.0000 - val_loss: 13.9099 - val_acc: 0.5000
Epoch 14/20
31/31 [==============================] - 7s 219ms/step - loss: 3.8831e-10 - acc: 1.0000 - val_loss: 14.0405 - val_acc: 0.5000
Epoch 15/20
31/31 [==============================] - 7s 216ms/step - loss: 8.9113e-10 - acc: 1.0000 - val_loss: 14.1567 - val_acc: 0.5000
Epoch 16/20
31/31 [==============================] - 7s 218ms/step - loss: 8.5343e-10 - acc: 1.0000 - val_loss: 14.2485 - val_acc: 0.5000
Epoch 17/20
31/31 [==============================] - 7s 217ms/step - loss: 2.8638e-10 - acc: 1.0000 - val_loss: 14.3410 - val_acc: 0.5000
Epoch 18/20
31/31 [==============================] - 7s 218ms/step - loss: 5.3467e-10 - acc: 1.0000 - val_loss: 14.4225 - val_acc: 0.5000
Epoch 19/20
31/31 [==============================] - 7s 217ms/step - loss: 4.5269e-10 - acc: 1.0000 - val_loss: 14.4895 - val_acc: 0.5000
Epoch 20/20
31/31 [==============================] - 7s 216ms/step - loss: 3.4228e-10 - acc: 1.0000 - val_loss: 14.5428 - val_acc: 0.5000
That output is not the model summary; it is the training log produced by history = model.fit(...). To print the actual model summary, you should use model.summary().
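For example, a minimal sketch using the model and generators from the question:
model.summary()   # prints layers, output shapes, and parameter counts
history = model.fit(train_generator, steps_per_epoch=31, epochs=20,
                    validation_data=validation_generator, validation_steps=20)
# history.history then holds the per-epoch loss/accuracy values logged above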

Accuracy remains constant after every epoch

I have created a model to classify plane and car images, but after every epoch the acc and val_acc remain the same.
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
import os
model = Sequential()
model.add(Convolution2D(32,(3,3),input_shape=(64,64,3),activation="relu"))
model.add(MaxPooling2D(2,2))
model.add(Convolution2D(64,(3,3),activation="relu"))
model.add(MaxPooling2D(2,2))
model.add(Convolution2D(64,(3,3),activation="sigmoid"))
model.add(MaxPooling2D(2,2))
model.add(Flatten())
model.add(Dense(32,activation="sigmoid"))
model.add(Dense(32,activation="sigmoid"))
model.add(Dense(32,activation="sigmoid"))
model.add(Dense(1,activation="softmax"))
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
train_set = train_datagen.flow_from_directory(
    'train_images',
    target_size=(64,64),
    batch_size=32,
    class_mode='binary')
test_set = train_datagen.flow_from_directory(
    'val_set',
    target_size=(64,64),
    batch_size=32,
    class_mode='binary')
model.fit_generator(
    train_set,
    steps_per_epoch=160,
    epochs=25,
    validation_data=test_set,
    validation_steps=40)
Epoch 1/25
30/30 [==============================] - 18s 593ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 2/25
30/30 [==============================] - 15s 491ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 3/25
30/30 [==============================] - 19s 640ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 4/25
30/30 [==============================] - 14s 474ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 5/25
30/30 [==============================] - 16s 532ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 6/25
30/30 [==============================] - 14s 473ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 7/25
30/30 [==============================] - 14s 469ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 8/25
30/30 [==============================] - 14s 469ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 9/25
30/30 [==============================] - 14s 472ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 10/25
30/30 [==============================] - 16s 537ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 11/25
30/30 [==============================] - 18s 590ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 12/25
30/30 [==============================] - 13s 441ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 13/25
30/30 [==============================] - 11s 374ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 14/25
30/30 [==============================] - 11s 370ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 15/25
30/30 [==============================] - 13s 441ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 16/25
30/30 [==============================] - 13s 419ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 17/25
30/30 [==============================] - 12s 401ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 18/25
30/30 [==============================] - 16s 536ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 19/25
30/30 [==============================] - 16s 523ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 20/25
30/30 [==============================] - 16s 530ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 21/25
30/30 [==============================] - 16s 546ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 22/25
30/30 [==============================] - 15s 500ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 23/25
30/30 [==============================] - 16s 546ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 24/25
30/30 [==============================] - 16s 545ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
Epoch 25/25
30/30 [==============================] - 15s 515ms/step - loss: 7.9712 - acc: 0.5000 - val_loss: 7.9712 - val_acc: 0.5000
You have several issues in your model structure.
First of all, look at the output of your model:
model.add(Dense(1,activation="softmax"))
You are using a softmax, which means you are trying to solve a multi-class classification, not a binary classification. If that is really the case, you need to change your loss to categorical_crossentropy. The compile line then turns into:
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
If that is not the case and you only want to solve a binary classification, you might be fine, but I do suggest changing the last-layer activation to sigmoid.
Second: it is a bad idea to use sigmoid as the activation in the middle layers, since it can easily cause the gradient to vanish (read more here). Try replacing all the sigmoid activations in the middle layers with either relu or, even better, leakyrelu; a sketch of that swap follows below.
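For reference, here is a minimal sketch of the LeakyReLU swap in Keras. Note that LeakyReLU ships as a layer rather than a string activation; the layer sizes and input shape here are illustrative, not taken from the question:
from keras.models import Sequential
from keras.layers import Dense, LeakyReLU

model = Sequential()
model.add(Dense(32, input_shape=(100,)))   # no activation argument here
model.add(LeakyReLU(alpha=0.1))            # LeakyReLU applied as its own layer
model.add(Dense(1, activation='sigmoid'))  # sigmoid output for binary classification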
The problem is exactly here:
model.add(Dense(1,activation="softmax"))
You cannot use softmax with one neuron: it normalizes over neurons, so with a single neuron it will always produce a constant 1.0. For binary classification you have to use a sigmoid activation at the output:
model.add(Dense(1,activation="sigmoid"))
Also, it is not wise to use sigmoid activations in hidden layers, as they produce vanishing gradient problems. Prefer ReLU or similar activations; a corrected sketch of the model's tail follows below.
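Putting both answers together, the tail of the model in the question would become something like this (a sketch replacing the corresponding lines above; layer sizes kept from the original):
model.add(Flatten())
model.add(Dense(32, activation="relu"))    # relu instead of sigmoid in hidden layers
model.add(Dense(32, activation="relu"))
model.add(Dense(32, activation="relu"))
model.add(Dense(1, activation="sigmoid"))  # sigmoid output pairs with binary_crossentropy
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])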

Why does my loss function or accuracy not improve?

I have a 3D CNN U-Net architecture to solve a segmentation problem. I am using the Adam optimizer together with binary cross-entropy, and the metric is "accuracy". I am trying to understand why it does not improve.
Train on 2774 samples, validate on 694 samples
Epoch 1/20
2774/2774 [==============================] - 166s 60ms/step - loss: 0.5189 - acc: 0.7928 - val_loss: 0.5456 - val_acc: 0.7674
Epoch 00001: val_loss improved from inf to 0.54555, saving model to model-tgs-salt-1.h5
Epoch 2/20
2774/2774 [==============================] - 170s 61ms/step - loss: 0.5170 - acc: 0.7928 - val_loss: 0.5485 - val_acc: 0.7674
Epoch 00002: val_loss did not improve from 0.54555
Epoch 3/20
2774/2774 [==============================] - 169s 61ms/step - loss: 0.5119 - acc: 0.7928 - val_loss: 0.5455 - val_acc: 0.7674
Epoch 00003: val_loss improved from 0.54555 to 0.54549, saving model to model-tgs-salt-1.h5
Epoch 4/20
2774/2774 [==============================] - 170s 61ms/step - loss: 0.5117 - acc: 0.7928 - val_loss: 0.5715 - val_acc: 0.7674
Epoch 00004: val_loss did not improve from 0.54549
Epoch 5/20
2774/2774 [==============================] - 169s 61ms/step - loss: 0.5126 - acc: 0.7928 - val_loss: 0.5566 - val_acc: 0.7674
Epoch 00005: val_loss did not improve from 0.54549
Epoch 6/20
2774/2774 [==============================] - 169s 61ms/step - loss: 0.5138 - acc: 0.7928 - val_loss: 0.5503 - val_acc: 0.7674
Epoch 00006: val_loss did not improve from 0.54549
Epoch 7/20
2774/2774 [==============================] - 170s 61ms/step - loss: 0.5103 - acc: 0.7928 - val_loss: 0.5444 - val_acc: 0.7674
Epoch 00007: val_loss improved from 0.54549 to 0.54436, saving model to model-tgs-salt-1.h5
Epoch 8/20
2774/2774 [==============================] - 169s 61ms/step - loss: 0.5137 - acc: 0.7928 - val_loss: 0.5454 - val_acc: 0.7674
If you are specifying a batch size for your network, try increasing it. I think it could be useful for the speed of training; see the sketch below.
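For instance, a minimal sketch of where the batch size is set (the value shown and the variable names are illustrative, not from the question):
model.fit(x_train, y_train,
          batch_size=32,          # try larger values, e.g. 32 or 64
          epochs=20,
          validation_data=(x_val, y_val))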

Same code run on two different machines gives a disparity in accuracy; from Deep Learning with Python, Chapter 5.3, pretrained convnet

I'm following along with Chollet's book Deep Learning with Python, and in chapter 5.3 I've come across a weird accuracy disparity between myself and the author.
After running the exact code pulled from the GitHub repo, obtainable here, I'm getting
test acc: 0.9409999930858612
while the author is getting
test acc: 0.967999992371
Also, when initially starting to train the models, I am usually about 10% behind where the author starts. Here are all of my outputs, in the order in which they appear at that GitHub link.
I'm looking for any pointers as to why running the same code leaves such a huge gap. Thanks for taking a look!
First
Train on 2000 samples, validate on 1000 samples
Epoch 1/30
2000/2000 [==============================] - 1s 392us/step - loss: 0.6145 - acc: 0.6570 - val_loss: 0.4502 - val_acc: 0.8250
Epoch 2/30
2000/2000 [==============================] - 1s 260us/step - loss: 0.4402 - acc: 0.7980 - val_loss: 0.3596 - val_acc: 0.8600
Epoch 3/30
2000/2000 [==============================] - 1s 258us/step - loss: 0.3559 - acc: 0.8420 - val_loss: 0.3238 - val_acc: 0.8710
Epoch 4/30
2000/2000 [==============================] - 1s 257us/step - loss: 0.3149 - acc: 0.8655 - val_loss: 0.2945 - val_acc: 0.8800
Epoch 5/30
2000/2000 [==============================] - 1s 259us/step - loss: 0.2895 - acc: 0.8850 - val_loss: 0.2905 - val_acc: 0.8710
Epoch 6/30
2000/2000 [==============================] - 1s 257us/step - loss: 0.2627 - acc: 0.8970 - val_loss: 0.2695 - val_acc: 0.8950
Epoch 7/30
2000/2000 [==============================] - 1s 265us/step - loss: 0.2450 - acc: 0.9040 - val_loss: 0.2608 - val_acc: 0.8930
Epoch 8/30
2000/2000 [==============================] - 1s 259us/step - loss: 0.2328 - acc: 0.9150 - val_loss: 0.2937 - val_acc: 0.8670
Epoch 9/30
2000/2000 [==============================] - 1s 260us/step - loss: 0.2208 - acc: 0.9170 - val_loss: 0.2933 - val_acc: 0.8660
Epoch 10/30
2000/2000 [==============================] - 1s 254us/step - loss: 0.2026 - acc: 0.9225 - val_loss: 0.2471 - val_acc: 0.9040
Epoch 11/30
2000/2000 [==============================] - 1s 259us/step - loss: 0.1954 - acc: 0.9260 - val_loss: 0.2461 - val_acc: 0.9000
Epoch 12/30
2000/2000 [==============================] - 1s 260us/step - loss: 0.1786 - acc: 0.9360 - val_loss: 0.2414 - val_acc: 0.9070
Epoch 13/30
2000/2000 [==============================] - 0s 248us/step - loss: 0.1781 - acc: 0.9305 - val_loss: 0.2410 - val_acc: 0.9080
Epoch 14/30
2000/2000 [==============================] - 0s 249us/step - loss: 0.1701 - acc: 0.9380 - val_loss: 0.2372 - val_acc: 0.9080
Epoch 15/30
2000/2000 [==============================] - 1s 257us/step - loss: 0.1624 - acc: 0.9450 - val_loss: 0.2403 - val_acc: 0.9050
Epoch 16/30
2000/2000 [==============================] - 1s 258us/step - loss: 0.1580 - acc: 0.9465 - val_loss: 0.2448 - val_acc: 0.9060
Epoch 17/30
2000/2000 [==============================] - 1s 256us/step - loss: 0.1467 - acc: 0.9520 - val_loss: 0.2347 - val_acc: 0.9050
Epoch 18/30
2000/2000 [==============================] - 1s 255us/step - loss: 0.1421 - acc: 0.9505 - val_loss: 0.2366 - val_acc: 0.9020
Epoch 19/30
2000/2000 [==============================] - 1s 258us/step - loss: 0.1375 - acc: 0.9540 - val_loss: 0.2327 - val_acc: 0.9080
Epoch 20/30
2000/2000 [==============================] - 0s 248us/step - loss: 0.1268 - acc: 0.9545 - val_loss: 0.2395 - val_acc: 0.9030
Epoch 21/30
2000/2000 [==============================] - 1s 255us/step - loss: 0.1216 - acc: 0.9565 - val_loss: 0.2436 - val_acc: 0.9040
Epoch 22/30
2000/2000 [==============================] - 1s 255us/step - loss: 0.1220 - acc: 0.9565 - val_loss: 0.2340 - val_acc: 0.9040
Epoch 23/30
2000/2000 [==============================] - 1s 261us/step - loss: 0.1152 - acc: 0.9630 - val_loss: 0.2328 - val_acc: 0.9030
Epoch 24/30
2000/2000 [==============================] - 1s 251us/step - loss: 0.1111 - acc: 0.9605 - val_loss: 0.2506 - val_acc: 0.8990
Epoch 25/30
2000/2000 [==============================] - 1s 257us/step - loss: 0.1024 - acc: 0.9665 - val_loss: 0.2391 - val_acc: 0.9040
Epoch 26/30
2000/2000 [==============================] - 0s 250us/step - loss: 0.0999 - acc: 0.9680 - val_loss: 0.2573 - val_acc: 0.8980
Epoch 27/30
2000/2000 [==============================] - 1s 261us/step - loss: 0.0996 - acc: 0.9680 - val_loss: 0.2365 - val_acc: 0.9060
Epoch 28/30
2000/2000 [==============================] - 0s 250us/step - loss: 0.0873 - acc: 0.9765 - val_loss: 0.2444 - val_acc: 0.9020
Epoch 29/30
2000/2000 [==============================] - 0s 244us/step - loss: 0.0904 - acc: 0.9730 - val_loss: 0.2494 - val_acc: 0.9020
Epoch 30/30
2000/2000 [==============================] - 0s 245us/step - loss: 0.0876 - acc: 0.9745 - val_loss: 0.2426 - val_acc: 0.9020
Second
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Epoch 1/30
- 13s - loss: 0.6106 - acc: 0.6725 - val_loss: 0.4488 - val_acc: 0.8300
Epoch 2/30
- 12s - loss: 0.4856 - acc: 0.7820 - val_loss: 0.3938 - val_acc: 0.8290
Epoch 3/30
- 12s - loss: 0.4271 - acc: 0.8125 - val_loss: 0.3307 - val_acc: 0.8690
Epoch 4/30
- 12s - loss: 0.4046 - acc: 0.8215 - val_loss: 0.3040 - val_acc: 0.8780
Epoch 5/30
- 12s - loss: 0.3809 - acc: 0.8275 - val_loss: 0.2999 - val_acc: 0.8670
Epoch 6/30
- 12s - loss: 0.3592 - acc: 0.8510 - val_loss: 0.2794 - val_acc: 0.8890
Epoch 7/30
- 12s - loss: 0.3709 - acc: 0.8350 - val_loss: 0.2703 - val_acc: 0.8950
Epoch 8/30
- 12s - loss: 0.3460 - acc: 0.8525 - val_loss: 0.2683 - val_acc: 0.8940
Epoch 9/30
- 12s - loss: 0.3532 - acc: 0.8430 - val_loss: 0.2660 - val_acc: 0.8820
Epoch 10/30
- 12s - loss: 0.3277 - acc: 0.8545 - val_loss: 0.2641 - val_acc: 0.8950
Epoch 11/30
- 12s - loss: 0.3236 - acc: 0.8685 - val_loss: 0.2705 - val_acc: 0.8770
Epoch 12/30
- 12s - loss: 0.3123 - acc: 0.8740 - val_loss: 0.2533 - val_acc: 0.8960
Epoch 13/30
- 12s - loss: 0.3279 - acc: 0.8605 - val_loss: 0.2718 - val_acc: 0.8740
Epoch 14/30
- 12s - loss: 0.3088 - acc: 0.8595 - val_loss: 0.2510 - val_acc: 0.9000
Epoch 15/30
- 12s - loss: 0.2999 - acc: 0.8700 - val_loss: 0.2468 - val_acc: 0.9010
Epoch 16/30
- 12s - loss: 0.3128 - acc: 0.8600 - val_loss: 0.2496 - val_acc: 0.9020
Epoch 17/30
- 12s - loss: 0.3064 - acc: 0.8605 - val_loss: 0.2496 - val_acc: 0.9010
Epoch 18/30
- 12s - loss: 0.3090 - acc: 0.8660 - val_loss: 0.2467 - val_acc: 0.8980
Epoch 19/30
- 12s - loss: 0.2903 - acc: 0.8710 - val_loss: 0.2709 - val_acc: 0.8790
Epoch 20/30
- 12s - loss: 0.3012 - acc: 0.8700 - val_loss: 0.2499 - val_acc: 0.8940
Epoch 21/30
- 12s - loss: 0.2944 - acc: 0.8820 - val_loss: 0.2593 - val_acc: 0.8960
Epoch 22/30
- 12s - loss: 0.2978 - acc: 0.8670 - val_loss: 0.2421 - val_acc: 0.9040
Epoch 23/30
- 12s - loss: 0.2942 - acc: 0.8695 - val_loss: 0.2378 - val_acc: 0.9050
Epoch 24/30
- 12s - loss: 0.2809 - acc: 0.8830 - val_loss: 0.2447 - val_acc: 0.8920
Epoch 25/30
- 12s - loss: 0.2963 - acc: 0.8765 - val_loss: 0.2420 - val_acc: 0.8950
Epoch 26/30
- 12s - loss: 0.2869 - acc: 0.8725 - val_loss: 0.2620 - val_acc: 0.8910
Epoch 27/30
- 12s - loss: 0.2789 - acc: 0.8820 - val_loss: 0.2447 - val_acc: 0.8950
Epoch 28/30
- 12s - loss: 0.2852 - acc: 0.8745 - val_loss: 0.2488 - val_acc: 0.8990
Epoch 29/30
- 12s - loss: 0.2821 - acc: 0.8810 - val_loss: 0.2402 - val_acc: 0.9010
Epoch 30/30
- 12s - loss: 0.2810 - acc: 0.8815 - val_loss: 0.2392 - val_acc: 0.9040
Third
Epoch 1/100
100/100 [==============================] - 13s 130ms/step - loss: 0.2866 - acc: 0.8735 - val_loss: 0.2175 - val_acc: 0.9080
Epoch 2/100
100/100 [==============================] - 12s 119ms/step - loss: 0.2588 - acc: 0.8925 - val_loss: 0.2073 - val_acc: 0.9200
Epoch 3/100
100/100 [==============================] - 12s 121ms/step - loss: 0.2464 - acc: 0.8985 - val_loss: 0.2072 - val_acc: 0.9200
Epoch 4/100
100/100 [==============================] - 12s 121ms/step - loss: 0.2127 - acc: 0.9085 - val_loss: 0.2032 - val_acc: 0.9230
Epoch 5/100
100/100 [==============================] - 12s 120ms/step - loss: 0.2147 - acc: 0.9100 - val_loss: 0.1972 - val_acc: 0.9200
Epoch 6/100
100/100 [==============================] - 12s 118ms/step - loss: 0.1998 - acc: 0.9130 - val_loss: 0.1975 - val_acc: 0.9240
Epoch 7/100
100/100 [==============================] - 12s 120ms/step - loss: 0.1977 - acc: 0.9235 - val_loss: 0.2052 - val_acc: 0.9170
Epoch 8/100
100/100 [==============================] - 12s 120ms/step - loss: 0.1748 - acc: 0.9270 - val_loss: 0.1890 - val_acc: 0.9270
Epoch 9/100
100/100 [==============================] - 12s 119ms/step - loss: 0.1724 - acc: 0.9325 - val_loss: 0.2060 - val_acc: 0.9230
Epoch 10/100
100/100 [==============================] - 12s 120ms/step - loss: 0.1412 - acc: 0.9435 - val_loss: 0.1968 - val_acc: 0.9190
Epoch 11/100
100/100 [==============================] - 12s 119ms/step - loss: 0.1455 - acc: 0.9450 - val_loss: 0.1805 - val_acc: 0.9350
Epoch 12/100
100/100 [==============================] - 12s 119ms/step - loss: 0.1462 - acc: 0.9450 - val_loss: 0.1814 - val_acc: 0.9340
Epoch 13/100
100/100 [==============================] - 12s 120ms/step - loss: 0.1243 - acc: 0.9535 - val_loss: 0.2028 - val_acc: 0.9250
Epoch 14/100
100/100 [==============================] - 12s 119ms/step - loss: 0.1306 - acc: 0.9500 - val_loss: 0.1753 - val_acc: 0.9310
Epoch 15/100
100/100 [==============================] - 12s 120ms/step - loss: 0.1222 - acc: 0.9525 - val_loss: 0.1981 - val_acc: 0.9310
Epoch 16/100
100/100 [==============================] - 12s 119ms/step - loss: 0.1221 - acc: 0.9500 - val_loss: 0.2299 - val_acc: 0.9160
Epoch 17/100
100/100 [==============================] - 12s 120ms/step - loss: 0.1019 - acc: 0.9625 - val_loss: 0.2630 - val_acc: 0.9160
Epoch 18/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0970 - acc: 0.9630 - val_loss: 0.1876 - val_acc: 0.9250
Epoch 19/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0961 - acc: 0.9620 - val_loss: 0.2018 - val_acc: 0.9300
Epoch 20/100
100/100 [==============================] - 12s 121ms/step - loss: 0.1085 - acc: 0.9570 - val_loss: 0.1957 - val_acc: 0.9320
Epoch 21/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0937 - acc: 0.9630 - val_loss: 0.1920 - val_acc: 0.9290
Epoch 22/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0953 - acc: 0.9605 - val_loss: 0.2289 - val_acc: 0.9260
Epoch 23/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0808 - acc: 0.9700 - val_loss: 0.2148 - val_acc: 0.9260
Epoch 24/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0927 - acc: 0.9645 - val_loss: 0.2542 - val_acc: 0.9230
Epoch 25/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0924 - acc: 0.9580 - val_loss: 0.2366 - val_acc: 0.9250
Epoch 26/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0686 - acc: 0.9760 - val_loss: 0.2021 - val_acc: 0.9370
Epoch 27/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0761 - acc: 0.9735 - val_loss: 0.2552 - val_acc: 0.9190
Epoch 28/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0713 - acc: 0.9740 - val_loss: 0.1946 - val_acc: 0.9330
Epoch 29/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0670 - acc: 0.9735 - val_loss: 0.2767 - val_acc: 0.9140
Epoch 30/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0562 - acc: 0.9780 - val_loss: 0.2539 - val_acc: 0.9300
Epoch 31/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0723 - acc: 0.9750 - val_loss: 0.2265 - val_acc: 0.9270
Epoch 32/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0661 - acc: 0.9755 - val_loss: 0.1973 - val_acc: 0.9340
Epoch 33/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0683 - acc: 0.9740 - val_loss: 0.1937 - val_acc: 0.9330
Epoch 34/100
100/100 [==============================] - 12s 121ms/step - loss: 0.0575 - acc: 0.9800 - val_loss: 0.2816 - val_acc: 0.9250
Epoch 35/100
100/100 [==============================] - 12s 123ms/step - loss: 0.0602 - acc: 0.9795 - val_loss: 0.2012 - val_acc: 0.9300
Epoch 36/100
100/100 [==============================] - 12s 122ms/step - loss: 0.0550 - acc: 0.9790 - val_loss: 0.2138 - val_acc: 0.9360
Epoch 37/100
100/100 [==============================] - 12s 124ms/step - loss: 0.0546 - acc: 0.9750 - val_loss: 0.2061 - val_acc: 0.9400
Epoch 38/100
100/100 [==============================] - 12s 121ms/step - loss: 0.0638 - acc: 0.9780 - val_loss: 0.2375 - val_acc: 0.9290
Epoch 39/100
100/100 [==============================] - 12s 122ms/step - loss: 0.0520 - acc: 0.9785 - val_loss: 0.2437 - val_acc: 0.9260
Epoch 40/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0522 - acc: 0.9775 - val_loss: 0.1932 - val_acc: 0.9430
Epoch 41/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0512 - acc: 0.9800 - val_loss: 0.2903 - val_acc: 0.9200
Epoch 42/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0546 - acc: 0.9790 - val_loss: 0.2127 - val_acc: 0.9410
Epoch 43/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0558 - acc: 0.9805 - val_loss: 0.2027 - val_acc: 0.9410
Epoch 44/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0408 - acc: 0.9875 - val_loss: 0.2138 - val_acc: 0.9380
Epoch 45/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0451 - acc: 0.9810 - val_loss: 0.2076 - val_acc: 0.9390
Epoch 46/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0529 - acc: 0.9820 - val_loss: 0.2035 - val_acc: 0.9420
Epoch 47/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0375 - acc: 0.9850 - val_loss: 0.1965 - val_acc: 0.9430
Epoch 48/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0407 - acc: 0.9870 - val_loss: 0.2131 - val_acc: 0.9410
Epoch 49/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0387 - acc: 0.9840 - val_loss: 0.2467 - val_acc: 0.9350
Epoch 50/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0412 - acc: 0.9860 - val_loss: 0.1852 - val_acc: 0.9430
Epoch 51/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0350 - acc: 0.9855 - val_loss: 0.3657 - val_acc: 0.9200
Epoch 52/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0337 - acc: 0.9850 - val_loss: 0.2103 - val_acc: 0.9450
Epoch 53/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0478 - acc: 0.9815 - val_loss: 0.2192 - val_acc: 0.9440
Epoch 54/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0446 - acc: 0.9820 - val_loss: 0.2293 - val_acc: 0.9360
Epoch 55/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0318 - acc: 0.9885 - val_loss: 0.2361 - val_acc: 0.9390
Epoch 56/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0317 - acc: 0.9865 - val_loss: 0.2123 - val_acc: 0.9450
Epoch 57/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0337 - acc: 0.9905 - val_loss: 0.2219 - val_acc: 0.9420
Epoch 58/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0390 - acc: 0.9895 - val_loss: 0.2046 - val_acc: 0.9380
Epoch 59/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0295 - acc: 0.9905 - val_loss: 0.2522 - val_acc: 0.9410
Epoch 60/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0315 - acc: 0.9890 - val_loss: 0.2451 - val_acc: 0.9330
Epoch 61/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0251 - acc: 0.9935 - val_loss: 0.2584 - val_acc: 0.9300
Epoch 62/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0338 - acc: 0.9860 - val_loss: 0.1990 - val_acc: 0.9440
Epoch 63/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0301 - acc: 0.9885 - val_loss: 0.2289 - val_acc: 0.9330
Epoch 64/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0255 - acc: 0.9900 - val_loss: 0.2251 - val_acc: 0.9440
Epoch 65/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0302 - acc: 0.9880 - val_loss: 0.2312 - val_acc: 0.9440
Epoch 66/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0198 - acc: 0.9925 - val_loss: 0.2832 - val_acc: 0.9360
Epoch 67/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0257 - acc: 0.9890 - val_loss: 0.3406 - val_acc: 0.9230
Epoch 68/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0261 - acc: 0.9885 - val_loss: 0.2148 - val_acc: 0.9410
Epoch 69/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0414 - acc: 0.9850 - val_loss: 0.2319 - val_acc: 0.9370
Epoch 70/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0286 - acc: 0.9910 - val_loss: 0.2229 - val_acc: 0.9400
Epoch 71/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0275 - acc: 0.9905 - val_loss: 0.2303 - val_acc: 0.9360
Epoch 72/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0293 - acc: 0.9895 - val_loss: 0.2329 - val_acc: 0.9400
Epoch 73/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0262 - acc: 0.9925 - val_loss: 0.2768 - val_acc: 0.9350
Epoch 74/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0258 - acc: 0.9895 - val_loss: 0.2277 - val_acc: 0.9410
Epoch 75/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0293 - acc: 0.9900 - val_loss: 0.3432 - val_acc: 0.9270
Epoch 76/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0245 - acc: 0.9895 - val_loss: 0.2557 - val_acc: 0.9460
Epoch 77/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0242 - acc: 0.9920 - val_loss: 0.3263 - val_acc: 0.9310
Epoch 78/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0269 - acc: 0.9925 - val_loss: 0.2669 - val_acc: 0.9390
Epoch 79/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0277 - acc: 0.9895 - val_loss: 0.3285 - val_acc: 0.9330
Epoch 80/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0211 - acc: 0.9930 - val_loss: 0.2640 - val_acc: 0.9300
Epoch 81/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0229 - acc: 0.9905 - val_loss: 0.2543 - val_acc: 0.9390
Epoch 82/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0205 - acc: 0.9940 - val_loss: 0.2587 - val_acc: 0.9400
Epoch 83/100
100/100 [==============================] - 12s 117ms/step - loss: 0.0260 - acc: 0.9920 - val_loss: 0.3032 - val_acc: 0.9290
Epoch 84/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0253 - acc: 0.9930 - val_loss: 0.2701 - val_acc: 0.9400
Epoch 85/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0244 - acc: 0.9940 - val_loss: 0.2766 - val_acc: 0.9390
Epoch 86/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0148 - acc: 0.9940 - val_loss: 0.2749 - val_acc: 0.9390
Epoch 87/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0230 - acc: 0.9920 - val_loss: 0.2702 - val_acc: 0.9310
Epoch 88/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0249 - acc: 0.9895 - val_loss: 0.2651 - val_acc: 0.9400
Epoch 89/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0174 - acc: 0.9935 - val_loss: 0.4466 - val_acc: 0.9220
Epoch 90/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0180 - acc: 0.9945 - val_loss: 0.3415 - val_acc: 0.9350
Epoch 91/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0216 - acc: 0.9950 - val_loss: 0.2878 - val_acc: 0.9390
Epoch 92/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0231 - acc: 0.9890 - val_loss: 0.5113 - val_acc: 0.9130
Epoch 93/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0327 - acc: 0.9880 - val_loss: 0.3749 - val_acc: 0.9280
Epoch 94/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0181 - acc: 0.9935 - val_loss: 0.3770 - val_acc: 0.9280
Epoch 95/100
100/100 [==============================] - 12s 117ms/step - loss: 0.0142 - acc: 0.9955 - val_loss: 0.4558 - val_acc: 0.9250
Epoch 96/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0174 - acc: 0.9920 - val_loss: 0.3398 - val_acc: 0.9360
Epoch 97/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0208 - acc: 0.9935 - val_loss: 0.2885 - val_acc: 0.9450
Epoch 98/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0188 - acc: 0.9945 - val_loss: 0.3521 - val_acc: 0.9260
Epoch 99/100
100/100 [==============================] - 12s 117ms/step - loss: 0.0154 - acc: 0.9940 - val_loss: 0.3361 - val_acc: 0.9340
Epoch 100/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0202 - acc: 0.9935 - val_loss: 0.2974 - val_acc: 0.9390
The issue you are pointing out is perfectly normal. In your case, the difference between the starting/final accuracies is negligible, so don't worry. If there were a huge difference, i.e. more than 5-8%, then you should be worried. Overall, there are at least 3 possible explanations:
The hardware is different: this results in minor accuracy differences.
Software differences: running code on a GPU vs. a CPU will oftentimes produce similar but not identical results.
Weight initialization (WI) might be different. This does not apply to your situation, as you loaded the pretrained VGG with preset weights, but in general HOW you do WI is a very important thing to consider when training deep nets. A seed-fixing sketch follows below.
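If you want to reduce run-to-run variation from initialization, a common first step is to fix the random seeds before building the model. A minimal sketch (note that GPU kernels can still introduce small nondeterminism):
import random
import numpy as np
import tensorflow as tf

random.seed(42)          # Python's own RNG
np.random.seed(42)       # NumPy, used by Keras weight initializers
tf.set_random_seed(42)   # TF 1.x graph-level seed; use tf.random.set_seed(42) in TF 2.x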
