Poor results in audio denoising

My first try with Keras is not really a success.
The aim is to attempt audio denoising with a DNN.
Data used, features/labels & pre-processing
Input files come from the CHiME-3 challenge (clean file = *.CH0.wav, noisy files = *.CH[1-6].wav; CH2 is very noisy) => sampling frequency = 16 kHz
The training set uses the STFT (Nfft=1024, overlap=512, symmetric Hamming window of size 1024).
Each sample:
The input feature is composed of 5 FFTs (FFT[n-4] FFT[n-2] FFT[n] FFT[n+2] FFT[n+4]). The step is 2 because of the overlap, so each FFT represents different temporal data ==> size = 5*513
The label is the clean FFT[n] ==> size = 513
Over the whole training set, I normalize by the max of all STFT points. I don't normalize each frequency bin separately!
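For reference, the framing described above can be sketched like this (a minimal sketch; `make_features` is a hypothetical helper of my own, not the original pre-processing code):

```python
import numpy as np

def make_features(stft_mag, step=2, context=2):
    """Stack 2*context+1 magnitude frames [n-4, n-2, n, n+2, n+4]
    into one input vector per sample (hypothetical helper)."""
    n_bins, n_frames = stft_mag.shape            # e.g. 513 bins per frame
    feats = []
    for n in range(context * step, n_frames - context * step):
        idx = [n + k * step for k in range(-context, context + 1)]
        feats.append(stft_mag[:, idx].T.reshape(-1))   # size 5*513 = 2565
    return np.asarray(feats)

# Global normalization as described: one scalar max over the whole
# training STFT, not a per-frequency-bin max.
# X = make_features(noisy_mag) / noisy_mag.max()
```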
Model
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

OUTPUT_SIZE = 513
N = 5
OUTPUT_ACTIVATION = 'relu'  # assumed: magnitudes are non-negative

def myDNN():
    INPUT_SIZE = N * OUTPUT_SIZE
    N_HIDDEN = 3
    HIDDEN_SIZE = N * OUTPUT_SIZE
    OPTIMIZER = Adam()
    INPUT_KERNEL_INITIALIZER = 'glorot_uniform'
    HIDDEN_KERNEL_INITIALIZER = 'glorot_uniform'

    model = Sequential()
    model.add(Dense(HIDDEN_SIZE, input_shape=(INPUT_SIZE,), activation='relu',
                    kernel_initializer=INPUT_KERNEL_INITIALIZER))
    for _ in range(1, N_HIDDEN):
        model.add(Dense(HIDDEN_SIZE, activation='relu',
                        kernel_initializer=HIDDEN_KERNEL_INITIALIZER))
    model.add(Dense(OUTPUT_SIZE, activation=OUTPUT_ACTIVATION,
                    kernel_initializer=HIDDEN_KERNEL_INITIALIZER))
    model.summary()
    model.compile(loss='mse', optimizer=OPTIMIZER, metrics=['mae', 'mape'])
    return model
# end of "myDNN"
Training results
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 2565) 6581790
_________________________________________________________________
dense_2 (Dense) (None, 2565) 6581790
_________________________________________________________________
dense_3 (Dense) (None, 2565) 6581790
_________________________________________________________________
dense_4 (Dense) (None, 513) 1316358
=================================================================
Total params: 21,061,728
Trainable params: 21,061,728
Non-trainable params: 0
_________________________________________________________________
Window (Type=hamming / SubType=symmetric / M=1024 / R=512) is COLA: unity_factor(1.080000) distanceMax(0.000000)
Train on 677 samples, validate on 291 samples
Epoch 1/100
677/677 [==============================] - 2s 3ms/step - loss: 7.2885e-04 - mean_absolute_error: 0.0127 - mean_absolute_percentage_error: 161.7868 - val_loss: 6.6317e-04 - val_mean_absolute_error: 0.0121 - val_mean_absolute_percentage_error: 123.4798
Epoch 2/100
677/677 [==============================] - 2s 3ms/step - loss: 5.6384e-04 - mean_absolute_error: 0.0112 - mean_absolute_percentage_error: 121.1596 - val_loss: 5.9110e-04 - val_mean_absolute_error: 0.0114 - val_mean_absolute_percentage_error: 120.3071
Epoch 3/100
677/677 [==============================] - 2s 3ms/step - loss: 4.7981e-04 - mean_absolute_error: 0.0107 - mean_absolute_percentage_error: 122.1414 - val_loss: 5.3739e-04 - val_mean_absolute_error: 0.0111 - val_mean_absolute_percentage_error: 121.2984
Epoch 4/100
677/677 [==============================] - 2s 3ms/step - loss: 4.1580e-04 - mean_absolute_error: 0.0103 - mean_absolute_percentage_error: 119.0018 - val_loss: 5.0105e-04 - val_mean_absolute_error: 0.0109 - val_mean_absolute_percentage_error: 118.8770
Epoch 5/100
677/677 [==============================] - 3s 4ms/step - loss: 3.5760e-04 - mean_absolute_error: 0.0099 - mean_absolute_percentage_error: 115.1305 - val_loss: 4.6629e-04 - val_mean_absolute_error: 0.0108 - val_mean_absolute_percentage_error: 118.4515
Epoch 6/100
677/677 [==============================] - 2s 3ms/step - loss: 3.0869e-04 - mean_absolute_error: 0.0095 - mean_absolute_percentage_error: 110.9568 - val_loss: 4.4396e-04 - val_mean_absolute_error: 0.0107 - val_mean_absolute_percentage_error: 117.2114
Epoch 7/100
677/677 [==============================] - 2s 3ms/step - loss: 2.6181e-04 - mean_absolute_error: 0.0090 - mean_absolute_percentage_error: 109.0506 - val_loss: 4.2265e-04 - val_mean_absolute_error: 0.0106 - val_mean_absolute_percentage_error: 116.6763
Epoch 8/100
677/677 [==============================] - 2s 3ms/step - loss: 2.2784e-04 - mean_absolute_error: 0.0087 - mean_absolute_percentage_error: 105.7562 - val_loss: 4.1528e-04 - val_mean_absolute_error: 0.0106 - val_mean_absolute_percentage_error: 116.5105
Epoch 9/100
677/677 [==============================] - 2s 3ms/step - loss: 2.0152e-04 - mean_absolute_error: 0.0083 - mean_absolute_percentage_error: 103.1178 - val_loss: 4.0624e-04 - val_mean_absolute_error: 0.0104 - val_mean_absolute_percentage_error: 116.0003
Epoch 10/100
677/677 [==============================] - 2s 3ms/step - loss: 1.7865e-04 - mean_absolute_error: 0.0079 - mean_absolute_percentage_error: 101.4013 - val_loss: 3.9868e-04 - val_mean_absolute_error: 0.0105 - val_mean_absolute_percentage_error: 118.7820
Epoch 11/100
677/677 [==============================] - 2s 4ms/step - loss: 1.5592e-04 - mean_absolute_error: 0.0076 - mean_absolute_percentage_error: 98.5259 - val_loss: 3.8894e-04 - val_mean_absolute_error: 0.0103 - val_mean_absolute_percentage_error: 114.0532
Epoch 12/100
677/677 [==============================] - 2s 3ms/step - loss: 1.3583e-04 - mean_absolute_error: 0.0072 - mean_absolute_percentage_error: 96.4760 - val_loss: 3.8807e-04 - val_mean_absolute_error: 0.0103 - val_mean_absolute_percentage_error: 116.4706
Epoch 13/100
677/677 [==============================] - 2s 3ms/step - loss: 1.2042e-04 - mean_absolute_error: 0.0069 - mean_absolute_percentage_error: 94.3490 - val_loss: 3.8566e-04 - val_mean_absolute_error: 0.0102 - val_mean_absolute_percentage_error: 115.7845
Epoch 14/100
677/677 [==============================] - 2s 3ms/step - loss: 1.0864e-04 - mean_absolute_error: 0.0067 - mean_absolute_percentage_error: 92.4231 - val_loss: 3.8057e-04 - val_mean_absolute_error: 0.0102 - val_mean_absolute_percentage_error: 116.7057
Epoch 15/100
677/677 [==============================] - 2s 3ms/step - loss: 9.7562e-05 - mean_absolute_error: 0.0064 - mean_absolute_percentage_error: 90.5527 - val_loss: 3.7299e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 113.3712
Epoch 16/100
677/677 [==============================] - 2s 3ms/step - loss: 8.6729e-05 - mean_absolute_error: 0.0061 - mean_absolute_percentage_error: 88.2068 - val_loss: 3.7412e-04 - val_mean_absolute_error: 0.0102 - val_mean_absolute_percentage_error: 117.1941
Epoch 17/100
677/677 [==============================] - 2s 3ms/step - loss: 7.7495e-05 - mean_absolute_error: 0.0059 - mean_absolute_percentage_error: 86.4357 - val_loss: 3.7269e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 115.7683
Epoch 18/100
677/677 [==============================] - 2s 3ms/step - loss: 7.0167e-05 - mean_absolute_error: 0.0056 - mean_absolute_percentage_error: 84.3882 - val_loss: 3.7583e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 115.8617
Epoch 19/100
677/677 [==============================] - 2s 3ms/step - loss: 6.4919e-05 - mean_absolute_error: 0.0054 - mean_absolute_percentage_error: 82.7943 - val_loss: 3.7276e-04 - val_mean_absolute_error: 0.0102 - val_mean_absolute_percentage_error: 116.1259
Epoch 20/100
677/677 [==============================] - 2s 3ms/step - loss: 5.8948e-05 - mean_absolute_error: 0.0052 - mean_absolute_percentage_error: 81.2488 - val_loss: 3.7231e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 118.1728
Epoch 21/100
677/677 [==============================] - 2s 4ms/step - loss: 5.3392e-05 - mean_absolute_error: 0.0050 - mean_absolute_percentage_error: 79.3921 - val_loss: 3.7716e-04 - val_mean_absolute_error: 0.0102 - val_mean_absolute_percentage_error: 117.6861
Epoch 22/100
677/677 [==============================] - 2s 3ms/step - loss: 5.0231e-05 - mean_absolute_error: 0.0049 - mean_absolute_percentage_error: 78.1898 - val_loss: 3.7441e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 115.7151
Epoch 23/100
677/677 [==============================] - 2s 4ms/step - loss: 4.6693e-05 - mean_absolute_error: 0.0047 - mean_absolute_percentage_error: 76.6721 - val_loss: 3.7050e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 116.1317
Epoch 24/100
677/677 [==============================] - 2s 3ms/step - loss: 4.3750e-05 - mean_absolute_error: 0.0045 - mean_absolute_percentage_error: 75.3499 - val_loss: 3.7227e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 116.3584
Epoch 25/100
677/677 [==============================] - 2s 3ms/step - loss: 4.1647e-05 - mean_absolute_error: 0.0044 - mean_absolute_percentage_error: 74.2149 - val_loss: 3.7047e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 115.7465
Epoch 26/100
677/677 [==============================] - 2s 3ms/step - loss: 3.9346e-05 - mean_absolute_error: 0.0043 - mean_absolute_percentage_error: 72.9056 - val_loss: 3.7119e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 115.7475
Epoch 27/100
677/677 [==============================] - 2s 3ms/step - loss: 3.7841e-05 - mean_absolute_error: 0.0042 - mean_absolute_percentage_error: 71.9992 - val_loss: 3.7553e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 116.5080
Epoch 28/100
677/677 [==============================] - 2s 3ms/step - loss: 3.6194e-05 - mean_absolute_error: 0.0041 - mean_absolute_percentage_error: 70.8470 - val_loss: 3.6943e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 116.5066
Epoch 29/100
677/677 [==============================] - 2s 3ms/step - loss: 3.4750e-05 - mean_absolute_error: 0.0040 - mean_absolute_percentage_error: 70.1190 - val_loss: 3.7034e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 116.0104
Epoch 30/100
677/677 [==============================] - 2s 3ms/step - loss: 3.2534e-05 - mean_absolute_error: 0.0039 - mean_absolute_percentage_error: 68.7867 - val_loss: 3.7153e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 116.9624
Epoch 31/100
677/677 [==============================] - 2s 3ms/step - loss: 3.1987e-05 - mean_absolute_error: 0.0038 - mean_absolute_percentage_error: 67.8477 - val_loss: 3.7302e-04 - val_mean_absolute_error: 0.0102 - val_mean_absolute_percentage_error: 120.9623
Epoch 32/100
677/677 [==============================] - 2s 3ms/step - loss: 3.3293e-05 - mean_absolute_error: 0.0039 - mean_absolute_percentage_error: 67.7346 - val_loss: 3.7536e-04 - val_mean_absolute_error: 0.0102 - val_mean_absolute_percentage_error: 120.4248
Epoch 33/100
677/677 [==============================] - 2s 3ms/step - loss: 3.4490e-05 - mean_absolute_error: 0.0039 - mean_absolute_percentage_error: 68.0149 - val_loss: 3.6965e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 117.3170
Epoch 34/100
677/677 [==============================] - 2s 4ms/step - loss: 3.1875e-05 - mean_absolute_error: 0.0038 - mean_absolute_percentage_error: 67.0567 - val_loss: 3.6949e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 115.7549
Epoch 35/100
677/677 [==============================] - 3s 4ms/step - loss: 3.0422e-05 - mean_absolute_error: 0.0037 - mean_absolute_percentage_error: 66.2323 - val_loss: 3.7725e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 117.7353
Epoch 36/100
677/677 [==============================] - 2s 3ms/step - loss: 2.8864e-05 - mean_absolute_error: 0.0036 - mean_absolute_percentage_error: 65.7103 - val_loss: 3.7256e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 118.1804
Epoch 37/100
677/677 [==============================] - 2s 4ms/step - loss: 2.7293e-05 - mean_absolute_error: 0.0035 - mean_absolute_percentage_error: 64.6088 - val_loss: 3.7224e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 118.0151
Epoch 38/100
677/677 [==============================] - 2s 3ms/step - loss: 2.6166e-05 - mean_absolute_error: 0.0034 - mean_absolute_percentage_error: 63.8083 - val_loss: 3.7105e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 117.8928
Epoch 39/100
677/677 [==============================] - 2s 3ms/step - loss: 2.4192e-05 - mean_absolute_error: 0.0033 - mean_absolute_percentage_error: 62.7800 - val_loss: 3.6841e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 116.3326
Epoch 40/100
677/677 [==============================] - 2s 3ms/step - loss: 2.3085e-05 - mean_absolute_error: 0.0032 - mean_absolute_percentage_error: 61.6421 - val_loss: 3.6918e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 117.4563
Epoch 41/100
677/677 [==============================] - 2s 3ms/step - loss: 2.1701e-05 - mean_absolute_error: 0.0031 - mean_absolute_percentage_error: 60.6266 - val_loss: 3.6974e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 116.9184
Epoch 42/100
677/677 [==============================] - 2s 4ms/step - loss: 2.1066e-05 - mean_absolute_error: 0.0030 - mean_absolute_percentage_error: 59.7108 - val_loss: 3.6682e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 118.4704
Epoch 43/100
677/677 [==============================] - 2s 3ms/step - loss: 2.0509e-05 - mean_absolute_error: 0.0029 - mean_absolute_percentage_error: 58.9087 - val_loss: 3.6989e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 118.4606
Epoch 44/100
677/677 [==============================] - 2s 3ms/step - loss: 1.9852e-05 - mean_absolute_error: 0.0029 - mean_absolute_percentage_error: 58.2607 - val_loss: 3.6806e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 119.1714
Epoch 45/100
677/677 [==============================] - 2s 3ms/step - loss: 1.9317e-05 - mean_absolute_error: 0.0028 - mean_absolute_percentage_error: 57.7079 - val_loss: 3.6860e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 118.9338
Epoch 46/100
677/677 [==============================] - 2s 3ms/step - loss: 1.8855e-05 - mean_absolute_error: 0.0027 - mean_absolute_percentage_error: 56.8995 - val_loss: 3.6974e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 118.0525
Epoch 47/100
677/677 [==============================] - 2s 3ms/step - loss: 1.8299e-05 - mean_absolute_error: 0.0027 - mean_absolute_percentage_error: 56.0018 - val_loss: 3.6855e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 119.6616
Epoch 48/100
677/677 [==============================] - 2s 3ms/step - loss: 1.7817e-05 - mean_absolute_error: 0.0026 - mean_absolute_percentage_error: 55.3156 - val_loss: 3.6800e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 118.0859
Epoch 49/100
677/677 [==============================] - 2s 3ms/step - loss: 1.7699e-05 - mean_absolute_error: 0.0026 - mean_absolute_percentage_error: 54.8139 - val_loss: 3.6748e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 118.6320
Epoch 50/100
677/677 [==============================] - 2s 3ms/step - loss: 1.7672e-05 - mean_absolute_error: 0.0026 - mean_absolute_percentage_error: 54.6050 - val_loss: 3.6893e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 118.2951
Epoch 51/100
677/677 [==============================] - 2s 3ms/step - loss: 1.7555e-05 - mean_absolute_error: 0.0026 - mean_absolute_percentage_error: 54.3160 - val_loss: 3.6677e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 118.7873
Epoch 52/100
677/677 [==============================] - 2s 3ms/step - loss: 1.8390e-05 - mean_absolute_error: 0.0026 - mean_absolute_percentage_error: 54.5906 - val_loss: 3.7421e-04 - val_mean_absolute_error: 0.0102 - val_mean_absolute_percentage_error: 122.7614
Epoch 53/100
677/677 [==============================] - 2s 3ms/step - loss: 2.2907e-05 - mean_absolute_error: 0.0030 - mean_absolute_percentage_error: 57.1845 - val_loss: 3.6999e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 120.0913
Epoch 54/100
677/677 [==============================] - 2s 3ms/step - loss: 2.4980e-05 - mean_absolute_error: 0.0032 - mean_absolute_percentage_error: 59.5983 - val_loss: 3.6937e-04 - val_mean_absolute_error: 0.0101 - val_mean_absolute_percentage_error: 120.7630
Epoch 55/100
677/677 [==============================] - 2s 3ms/step - loss: 2.4200e-05 - mean_absolute_error: 0.0032 - mean_absolute_percentage_error: 59.6735 - val_loss: 3.7034e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 118.8989
Epoch 56/100
677/677 [==============================] - 2s 4ms/step - loss: 2.3686e-05 - mean_absolute_error: 0.0032 - mean_absolute_percentage_error: 59.4175 - val_loss: 3.7177e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 117.7602
Epoch 57/100
677/677 [==============================] - 2s 3ms/step - loss: 2.1863e-05 - mean_absolute_error: 0.0031 - mean_absolute_percentage_error: 58.5819 - val_loss: 3.7451e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 116.7871
Epoch 58/100
677/677 [==============================] - 2s 3ms/step - loss: 2.1782e-05 - mean_absolute_error: 0.0031 - mean_absolute_percentage_error: 58.2930 - val_loss: 3.7052e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 117.1842
Epoch 59/100
677/677 [==============================] - 2s 3ms/step - loss: 2.0084e-05 - mean_absolute_error: 0.0029 - mean_absolute_percentage_error: 57.0126 - val_loss: 3.6880e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 117.6743
Epoch 60/100
677/677 [==============================] - 2s 3ms/step - loss: 1.9072e-05 - mean_absolute_error: 0.0028 - mean_absolute_percentage_error: 56.1310 - val_loss: 3.7112e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 117.9328
Epoch 61/100
677/677 [==============================] - 2s 3ms/step - loss: 1.8008e-05 - mean_absolute_error: 0.0027 - mean_absolute_percentage_error: 55.2785 - val_loss: 3.6934e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 120.0999
Epoch 62/100
677/677 [==============================] - 2s 3ms/step - loss: 1.7170e-05 - mean_absolute_error: 0.0026 - mean_absolute_percentage_error: 54.4759 - val_loss: 3.6750e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 119.6225
Epoch 63/100
677/677 [==============================] - 2s 3ms/step - loss: 1.6601e-05 - mean_absolute_error: 0.0026 - mean_absolute_percentage_error: 53.8980 - val_loss: 3.6672e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 119.4439
Epoch 64/100
677/677 [==============================] - 2s 3ms/step - loss: 1.5888e-05 - mean_absolute_error: 0.0025 - mean_absolute_percentage_error: 53.1601 - val_loss: 3.6735e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 120.6926
Epoch 65/100
677/677 [==============================] - 2s 3ms/step - loss: 1.5325e-05 - mean_absolute_error: 0.0024 - mean_absolute_percentage_error: 52.4894 - val_loss: 3.6870e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 117.3724
Epoch 66/100
677/677 [==============================] - 2s 4ms/step - loss: 1.5193e-05 - mean_absolute_error: 0.0024 - mean_absolute_percentage_error: 52.2543 - val_loss: 3.6746e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 117.4701
Epoch 67/100
677/677 [==============================] - 2s 3ms/step - loss: 1.4864e-05 - mean_absolute_error: 0.0024 - mean_absolute_percentage_error: 51.5119 - val_loss: 3.6769e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 119.4155
Epoch 68/100
677/677 [==============================] - 2s 3ms/step - loss: 1.4551e-05 - mean_absolute_error: 0.0023 - mean_absolute_percentage_error: 51.0115 - val_loss: 3.7009e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 120.2584
Epoch 69/100
677/677 [==============================] - 2s 3ms/step - loss: 1.4419e-05 - mean_absolute_error: 0.0023 - mean_absolute_percentage_error: 50.8094 - val_loss: 3.6882e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 119.0246
Epoch 70/100
677/677 [==============================] - 2s 3ms/step - loss: 1.4328e-05 - mean_absolute_error: 0.0023 - mean_absolute_percentage_error: 50.5945 - val_loss: 3.6890e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 118.0751
Epoch 71/100
677/677 [==============================] - 2s 3ms/step - loss: 1.4189e-05 - mean_absolute_error: 0.0023 - mean_absolute_percentage_error: 50.5335 - val_loss: 3.6800e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 119.6180
Epoch 72/100
677/677 [==============================] - 2s 3ms/step - loss: 1.4037e-05 - mean_absolute_error: 0.0023 - mean_absolute_percentage_error: 50.2688 - val_loss: 3.6724e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 117.7447
Epoch 73/100
677/677 [==============================] - 2s 3ms/step - loss: 1.3913e-05 - mean_absolute_error: 0.0023 - mean_absolute_percentage_error: 50.1438 - val_loss: 3.6685e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 118.6204
Epoch 74/100
677/677 [==============================] - 2s 3ms/step - loss: 1.3639e-05 - mean_absolute_error: 0.0022 - mean_absolute_percentage_error: 49.7017 - val_loss: 3.6834e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 119.2991
Epoch 75/100
677/677 [==============================] - 2s 3ms/step - loss: 1.3579e-05 - mean_absolute_error: 0.0022 - mean_absolute_percentage_error: 49.4682 - val_loss: 3.6681e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 119.6100
Epoch 76/100
677/677 [==============================] - 2s 3ms/step - loss: 1.4235e-05 - mean_absolute_error: 0.0023 - mean_absolute_percentage_error: 49.6413 - val_loss: 3.6695e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 122.1114
Epoch 77/100
677/677 [==============================] - 2s 3ms/step - loss: 1.5278e-05 - mean_absolute_error: 0.0024 - mean_absolute_percentage_error: 50.7402 - val_loss: 3.6557e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 120.1452
Epoch 78/100
677/677 [==============================] - 2s 3ms/step - loss: 1.4826e-05 - mean_absolute_error: 0.0024 - mean_absolute_percentage_error: 50.7138 - val_loss: 3.6674e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 120.0096
Epoch 79/100
677/677 [==============================] - 2s 3ms/step - loss: 1.4832e-05 - mean_absolute_error: 0.0024 - mean_absolute_percentage_error: 50.8613 - val_loss: 3.6930e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 120.3776
Epoch 80/100
677/677 [==============================] - 2s 3ms/step - loss: 1.4744e-05 - mean_absolute_error: 0.0024 - mean_absolute_percentage_error: 50.7794 - val_loss: 3.6842e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 118.6103
Epoch 81/100
677/677 [==============================] - 2s 3ms/step - loss: 1.4683e-05 - mean_absolute_error: 0.0024 - mean_absolute_percentage_error: 50.8070 - val_loss: 3.6689e-04 - val_mean_absolute_error: 0.0100 - val_mean_absolute_percentage_error: 122.0331
Epoch 82/100
677/677 [==============================] - 2s 3ms/step - loss: 1.4900e-05 - mean_absolute_error: 0.0024 - mean_absolute_percentage_error: 50.9410 - val_loss: 3.6562e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 120.1546
Epoch 83/100
677/677 [==============================] - 2s 4ms/step - loss: 1.5309e-05 - mean_absolute_error: 0.0025 - mean_absolute_percentage_error: 51.2366 - val_loss: 3.6825e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 118.3575
Epoch 84/100
677/677 [==============================] - 2s 3ms/step - loss: 1.5005e-05 - mean_absolute_error: 0.0024 - mean_absolute_percentage_error: 51.3449 - val_loss: 3.6631e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 119.6261
Epoch 85/100
677/677 [==============================] - 2s 3ms/step - loss: 1.4770e-05 - mean_absolute_error: 0.0024 - mean_absolute_percentage_error: 51.0710 - val_loss: 3.6769e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 118.6728
Epoch 86/100
677/677 [==============================] - 2s 3ms/step - loss: 1.4455e-05 - mean_absolute_error: 0.0024 - mean_absolute_percentage_error: 50.8061 - val_loss: 3.6757e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 119.7171
Epoch 87/100
677/677 [==============================] - 2s 4ms/step - loss: 1.4352e-05 - mean_absolute_error: 0.0024 - mean_absolute_percentage_error: 50.4469 - val_loss: 3.6544e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 120.0981
Epoch 88/100
677/677 [==============================] - 2s 3ms/step - loss: 1.3743e-05 - mean_absolute_error: 0.0023 - mean_absolute_percentage_error: 49.9120 - val_loss: 3.6824e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 119.5353
Epoch 89/100
677/677 [==============================] - 2s 4ms/step - loss: 1.3597e-05 - mean_absolute_error: 0.0023 - mean_absolute_percentage_error: 49.3628 - val_loss: 3.6652e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 119.6446
Epoch 90/100
677/677 [==============================] - 2s 4ms/step - loss: 1.3599e-05 - mean_absolute_error: 0.0023 - mean_absolute_percentage_error: 49.3746 - val_loss: 3.6628e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 120.9546
Epoch 91/100
677/677 [==============================] - 2s 3ms/step - loss: 1.3597e-05 - mean_absolute_error: 0.0022 - mean_absolute_percentage_error: 49.1547 - val_loss: 3.6580e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 119.0343
Epoch 92/100
677/677 [==============================] - 2s 3ms/step - loss: 1.3619e-05 - mean_absolute_error: 0.0023 - mean_absolute_percentage_error: 49.0541 - val_loss: 3.6591e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 120.1974
Epoch 93/100
677/677 [==============================] - 2s 3ms/step - loss: 1.3998e-05 - mean_absolute_error: 0.0023 - mean_absolute_percentage_error: 49.1881 - val_loss: 3.6623e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 120.8473
Epoch 94/100
677/677 [==============================] - 2s 3ms/step - loss: 1.3483e-05 - mean_absolute_error: 0.0022 - mean_absolute_percentage_error: 49.0619 - val_loss: 3.6726e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 121.5829
Epoch 95/100
677/677 [==============================] - 2s 3ms/step - loss: 1.3479e-05 - mean_absolute_error: 0.0022 - mean_absolute_percentage_error: 48.9206 - val_loss: 3.6646e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 121.1559
Epoch 96/100
677/677 [==============================] - 2s 3ms/step - loss: 1.3106e-05 - mean_absolute_error: 0.0022 - mean_absolute_percentage_error: 48.5896 - val_loss: 3.6615e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 119.1754
Epoch 97/100
677/677 [==============================] - 2s 3ms/step - loss: 1.2860e-05 - mean_absolute_error: 0.0021 - mean_absolute_percentage_error: 48.1067 - val_loss: 3.6683e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 119.8193
Epoch 98/100
677/677 [==============================] - 2s 3ms/step - loss: 1.2812e-05 - mean_absolute_error: 0.0021 - mean_absolute_percentage_error: 47.8122 - val_loss: 3.6745e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 118.7239
Epoch 99/100
677/677 [==============================] - 2s 3ms/step - loss: 1.3072e-05 - mean_absolute_error: 0.0022 - mean_absolute_percentage_error: 48.0826 - val_loss: 3.6792e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 120.3317
Epoch 100/100
677/677 [==============================] - 2s 3ms/step - loss: 1.2905e-05 - mean_absolute_error: 0.0021 - mean_absolute_percentage_error: 47.6981 - val_loss: 3.6494e-04 - val_mean_absolute_error: 0.0099 - val_mean_absolute_percentage_error: 118.2943
One result after iSTFT [From top to bottom: noisy input / clean reference / DNN output]
If I only look at the temporal representation, it doesn't seem so bad, BUT if I look at the spectrogram ...
Temporal Representation
Spectrogram
Can anyone help me, please?
Ideas? Improvements?
Or is there a huge beginner's mistake somewhere?

First of all, I assume you are trying to predict the magnitudes of the complex coefficients you obtain from the FFT, right?
The picture of the predictions you posted doesn't look that bad. I observe that the main problem lies in the prediction of the high frequencies. This is a very common issue when processing audio signals with neural networks. Possible solutions/ideas:
Take the logarithm of the magnitudes in both your inputs and your targets. The natural high energy difference between the lower and the higher frequencies is then somewhat equalized.
Furthermore, try applying an emphasis weighting in your loss function. More emphasis on the higher frequency bins leads to faster learning in that region. In Keras it is quite easy to define a custom loss function (How to create custom objective function in Keras?)
Check out residual neural networks. The skip connections they use can be extremely useful in your task: the layers can learn to estimate only the noisy part of your signal, which can then be subtracted. https://pdfs.semanticscholar.org/8361/8badf5f7b0a5819d08ab97b5b3573fd75fae.pdf
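As a concrete illustration of the loss-weighting idea, here is a minimal sketch of a frequency-weighted MSE for Keras (the linear 1-to-4 weight ramp is an arbitrary choice for illustration, not a tuned recommendation):

```python
import tensorflow as tf

N_BINS = 513
# Illustrative ramp: the highest frequency bin gets 4x the weight
# of the lowest one, so high-frequency errors cost more.
freq_weights = tf.linspace(1.0, 4.0, N_BINS)

def weighted_mse(y_true, y_pred):
    # Per-bin squared error, scaled by the frequency weight,
    # then averaged over the bins of each sample.
    return tf.reduce_mean(freq_weights * tf.square(y_true - y_pred), axis=-1)

# Usage: model.compile(loss=weighted_mse, optimizer='adam')
```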

Related

Is it normal that val_mse is a hundred times the training mse in the timeseries_weather_forecasting example?

I've been following the Keras tutorial on forecasting timeseries data:
https://keras.io/examples/timeseries/timeseries_weather_forecasting/
I wanted to compare the LSTM approach with a basic machine-learning approach,
so I created a Dense-layer model as follows:
model2 = models.Sequential()
model2.add(layers.Input(shape=(inputs.shape[1], inputs.shape[2])))
model2.add(layers.Flatten())
model2.add(layers.Dense(64, activation='relu'))
model2.add(layers.Dense(8, activation='relu'))
model2.add(layers.Dense(1))
model2.summary()
model2.compile(optimizer=keras.optimizers.RMSprop(), loss="mae", metrics=['mse', 'mae'])

history = model2.fit(
    dataset_train,
    epochs=10,
    validation_data=dataset_val,
)
I ran all of the Keras tutorial sample code in Google Colab and added model2 at the end.
However, in the results the mae metric looks fine, but the mse metric looks strange: as shown, the training mse is about 0.1 while val_mse is over 100.
Is this normal, or did I do something wrong?
Epoch 1/10
1172/1172 [==============================] - 68s 57ms/step - loss: 0.5176 - mse: 0.5927 - mae: 0.5176 - val_loss: 1.1439 - val_mse: 120.2718 - val_mae: 1.1439
Epoch 2/10
1172/1172 [==============================] - 64s 55ms/step - loss: 0.2998 - mse: 0.1554 - mae: 0.2998 - val_loss: 1.0518 - val_mse: 140.6306 - val_mae: 1.0518
Epoch 3/10
1172/1172 [==============================] - 65s 55ms/step - loss: 0.2767 - mse: 0.1299 - mae: 0.2767 - val_loss: 0.9180 - val_mse: 103.4829 - val_mae: 0.9180
Epoch 4/10
1172/1172 [==============================] - 65s 55ms/step - loss: 0.2667 - mse: 0.1215 - mae: 0.2667 - val_loss: 0.8420 - val_mse: 83.6165 - val_mae: 0.8420
Epoch 5/10
1172/1172 [==============================] - 65s 55ms/step - loss: 0.2628 - mse: 0.1185 - mae: 0.2628 - val_loss: 0.8389 - val_mse: 89.2020 - val_mae: 0.8389
Epoch 6/10
1172/1172 [==============================] - 64s 55ms/step - loss: 0.2573 - mse: 0.1140 - mae: 0.2573 - val_loss: 0.8562 - val_mse: 105.4153 - val_mae: 0.8562
Epoch 7/10
1172/1172 [==============================] - 65s 55ms/step - loss: 0.2539 - mse: 0.1108 - mae: 0.2539 - val_loss: 0.8436 - val_mse: 96.0179 - val_mae: 0.8436
Epoch 8/10
1172/1172 [==============================] - 69s 59ms/step - loss: 0.2514 - mse: 0.1096 - mae: 0.2514 - val_loss: 0.8834 - val_mse: 121.4520 - val_mae: 0.8834
Epoch 9/10
1172/1172 [==============================] - 65s 55ms/step - loss: 0.2491 - mse: 0.1081 - mae: 0.2491 - val_loss: 0.9360 - val_mse: 145.4284 - val_mae: 0.9360
Epoch 10/10
1172/1172 [==============================] - 65s 55ms/step - loss: 0.2487 - mse: 0.1112 - mae: 0.2487 - val_loss: 0.8668 - val_mse: 110.2743 - val_mae: 0.8668
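One common cause of a val_mse that dwarfs val_mae (worth checking, though not a certainty here): MSE squares each error, so a handful of large validation errors, for instance from outlier periods or targets not normalized with the training statistics, dominates it while MAE stays modest. Hypothetical residuals illustrate the effect:

```python
import numpy as np

# Hypothetical residuals: mostly small errors plus two large outliers.
errors = np.array([0.3] * 98 + [50.0, 60.0])

mae = np.mean(np.abs(errors))
mse = np.mean(errors ** 2)
print(round(mae, 3))  # 1.394
print(round(mse, 2))  # 61.09
```

Since the loss is MAE, training still looks healthy; inspecting the worst validation residuals usually locates the culprit.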

Validation accuracy doesn't improve at all from the beginning

I am trying to classify the severity of COVID from X-rays, using 426 256x256 X-ray images across 4 classes. However, the validation accuracy doesn't improve at all, and the validation loss barely decreases from the start.
This is the model I am using
from keras.models import Sequential
from keras.layers import Dense,Conv2D,MaxPooling2D,Dropout,Flatten
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras import regularizers
model=Sequential()
model.add(Conv2D(filters=64,kernel_size=(4,4),input_shape=image_shape,activation="relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(filters=128,kernel_size=(6,6),input_shape=image_shape,activation="relu"))
model.add(MaxPooling2D(pool_size=(3,3)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(64,activation="relu"))
model.add(Dense(16,activation="relu"))
model.add(Dense(4,activation="softmax"))
model.compile(loss="categorical_crossentropy",optimizer="adam",metrics=["accuracy"])
These are the outputs I get
epochs = 20
batch_size = 8
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          epochs=epochs,
          batch_size=batch_size
          )
Epoch 1/20
27/27 [==============================] - 4s 143ms/step - loss: 0.1776 - accuracy: 0.9528 - val_loss: 3.7355 - val_accuracy: 0.2717
Epoch 2/20
27/27 [==============================] - 4s 142ms/step - loss: 0.1152 - accuracy: 0.9481 - val_loss: 4.0038 - val_accuracy: 0.2283
Epoch 3/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0875 - accuracy: 0.9858 - val_loss: 4.1756 - val_accuracy: 0.2391
Epoch 4/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0521 - accuracy: 0.9906 - val_loss: 4.1034 - val_accuracy: 0.2717
Epoch 5/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0496 - accuracy: 0.9858 - val_loss: 4.8433 - val_accuracy: 0.3152
Epoch 6/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0170 - accuracy: 0.9953 - val_loss: 5.6027 - val_accuracy: 0.3043
Epoch 7/20
27/27 [==============================] - 4s 142ms/step - loss: 0.2307 - accuracy: 0.9245 - val_loss: 4.2759 - val_accuracy: 0.3152
Epoch 8/20
27/27 [==============================] - 4s 142ms/step - loss: 0.6493 - accuracy: 0.7830 - val_loss: 3.8390 - val_accuracy: 0.3478
Epoch 9/20
27/27 [==============================] - 4s 142ms/step - loss: 0.2563 - accuracy: 0.9009 - val_loss: 5.0250 - val_accuracy: 0.2500
Epoch 10/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0286 - accuracy: 1.0000 - val_loss: 4.6475 - val_accuracy: 0.2391
Epoch 11/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0097 - accuracy: 1.0000 - val_loss: 5.2198 - val_accuracy: 0.2391
Epoch 12/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0037 - accuracy: 1.0000 - val_loss: 5.7914 - val_accuracy: 0.2500
Epoch 13/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0048 - accuracy: 1.0000 - val_loss: 5.4341 - val_accuracy: 0.2391
Epoch 14/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0044 - accuracy: 1.0000 - val_loss: 5.6364 - val_accuracy: 0.2391
Epoch 15/20
27/27 [==============================] - 4s 143ms/step - loss: 0.0019 - accuracy: 1.0000 - val_loss: 5.8504 - val_accuracy: 0.2391
Epoch 16/20
27/27 [==============================] - 4s 143ms/step - loss: 0.0013 - accuracy: 1.0000 - val_loss: 5.9604 - val_accuracy: 0.2500
Epoch 17/20
27/27 [==============================] - 4s 149ms/step - loss: 0.0023 - accuracy: 1.0000 - val_loss: 6.0851 - val_accuracy: 0.2717
Epoch 18/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0134 - accuracy: 0.9953 - val_loss: 4.9783 - val_accuracy: 0.2717
Epoch 19/20
27/27 [==============================] - 4s 141ms/step - loss: 0.0068 - accuracy: 1.0000 - val_loss: 5.7421 - val_accuracy: 0.2500
Epoch 20/20
27/27 [==============================] - 4s 142ms/step - loss: 0.0024 - accuracy: 1.0000 - val_loss: 5.8480 - val_accuracy: 0.2283
Any tips on how I can solve this, or am I doing something wrong?
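For calibration (a hypothetical simulation, not your data): with 4 roughly balanced classes, random guessing scores about 25%, which is where val_accuracy hovers in the log above while training accuracy reaches 100%, the classic signature of overfitting a small dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated labels for a balanced 4-class problem (hypothetical data):
y_true = rng.integers(0, 4, size=10_000)
y_pred = rng.integers(0, 4, size=10_000)  # a model that learned nothing

chance_acc = np.mean(y_true == y_pred)
assert abs(chance_acc - 0.25) < 0.02  # random guessing scores ~25%
```

Common remedies to try: aggressive data augmentation, a smaller model or stronger regularization, transfer learning from a pretrained backbone, and early stopping on val_loss.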

Why is my training loss and validation loss decreasing but training accuracy and validation accuracy not increasing at all?

I am training a DNN model to classify an image into two classes: perfect image or imperfect image. I have 60 images for training, 30 of each class. Given the limited data, I decided to sanity-check the model by overfitting it, i.e. by providing validation data identical to the training data. Here I hoped to achieve 100% accuracy on both training and validation data (since the two datasets are the same). The training and validation loss do decrease, however both training and validation accuracy stay constant.
import tensorflow as tf
import tensorflow.keras as keras
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import classification_report
from keras.models import Sequential
from keras.layers import Dropout
from keras.layers.core import Dense
from keras.optimizers import SGD
from keras.datasets import cifar10
import matplotlib.pyplot as plt
import numpy as np
import argparse
import cv2
import glob
initial_lr=0.001
#getting labels from Directories
right_labels=[]
wrong_labels=[]
rightimage_path=glob.glob("images/right_location/*")
wrongimage_path=glob.glob("images/wrong_location/*")
for _ in rightimage_path:
    right_labels.append(1)
#print(labels)
for _ in wrongimage_path:
    wrong_labels.append(0)
labelNames=["right_location","wrong_location"]
right_images=[]
wrong_images=[]
#getting images data from Directories
for img in rightimage_path:
    im = cv2.imread(img)
    im2 = cv2.resize(im, (64, 64))
    im2 = np.expand_dims(im2, axis=0)
    max_pool = keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1))
    output = max_pool(im2)
    output = np.squeeze(output)
    output = output.flatten()
    output = output / 255
    right_images.append(output)
#wrong images
for img in wrongimage_path:
    im = cv2.imread(img)
    im2 = cv2.resize(im, (64, 64))
    im2 = np.expand_dims(im2, axis=0)
    max_pool = keras.layers.MaxPooling2D(pool_size=(2, 2), strides=(1, 1))
    output = max_pool(im2)
    output = np.squeeze(output)
    output = output.flatten()
    output = output / 255
    wrong_images.append(output)
#print(len(wrong_images))
trainX=right_images[:30]+wrong_images[:30]
trainX=np.array(trainX)
trainY=right_labels[:30]+wrong_labels[:30]
trainY=np.array(trainY)
#print(trainX[0].shape)
testX=trainX
testY=trainY
#testX=right_images[31:]+wrong_images[31:]
#testX=np.array(testX)
#print(len(testX))
#print(len(right_labels[31:]))
#testY=right_labels[31:]+wrong_labels[31:]
#testY=np.array(testY)
#print(testY)
print(trainY)
print(testY)
#Construction of Neural Network model
model = Sequential()
model.add(Dense(1024, input_shape=(11907,), activation="relu"))
model.add(Dense(512, activation="relu"))
model.add(Dense(256, activation="relu"))
model.add(Dense(1, activation="softmax"))
#Training model
print("[INFO] training network...")
decay_steps = 1000
sgd = SGD(initial_lr,momentum=0.8)
lr_decayed_fn = tf.keras.experimental.CosineDecay(initial_lr, decay_steps)
model.compile(loss="binary_crossentropy", optimizer=sgd,metrics=["accuracy"])
H = model.fit(trainX, trainY, validation_data=(testX, testY),epochs=100, batch_size=1)
#evaluating the model
print("[INFO] evaluating network...")
predictions = model.predict(testX, batch_size=32)
print(predictions)
print(classification_report(testY,predictions, target_names=labelNames))
Training results:
[INFO] training network...
Epoch 1/100
60/60 [==============================] - 3s 43ms/step - loss: 0.8908 - accuracy: 0.4867 - val_loss: 0.6719 - val_accuracy: 0.5000
Epoch 2/100
60/60 [==============================] - 2s 41ms/step - loss: 0.6893 - accuracy: 0.4791 - val_loss: 0.8592 - val_accuracy: 0.5000
Epoch 3/100
60/60 [==============================] - 2s 41ms/step - loss: 0.7008 - accuracy: 0.5290 - val_loss: 0.6129 - val_accuracy: 0.5000
Epoch 4/100
60/60 [==============================] - 2s 41ms/step - loss: 0.6971 - accuracy: 0.5279 - val_loss: 0.5619 - val_accuracy: 0.5000
Epoch 5/100
60/60 [==============================] - 2s 41ms/step - loss: 0.6770 - accuracy: 0.4745 - val_loss: 0.5669 - val_accuracy: 0.5000
Epoch 6/100
60/60 [==============================] - 2s 41ms/step - loss: 0.5685 - accuracy: 0.5139 - val_loss: 0.4953 - val_accuracy: 0.5000
Epoch 7/100
60/60 [==============================] - 2s 41ms/step - loss: 0.5679 - accuracy: 0.5312 - val_loss: 0.8273 - val_accuracy: 0.5000
Epoch 8/100
60/60 [==============================] - 2s 41ms/step - loss: 0.4373 - accuracy: 0.6591 - val_loss: 0.8112 - val_accuracy: 0.5000
Epoch 9/100
60/60 [==============================] - 2s 41ms/step - loss: 0.7427 - accuracy: 0.5848 - val_loss: 0.5419 - val_accuracy: 0.5000
Epoch 10/100
60/60 [==============================] - 2s 40ms/step - loss: 0.4719 - accuracy: 0.5377 - val_loss: 0.3118 - val_accuracy: 0.5000
Epoch 11/100
60/60 [==============================] - 2s 40ms/step - loss: 0.3253 - accuracy: 0.4684 - val_loss: 0.4851 - val_accuracy: 0.5000
Epoch 12/100
60/60 [==============================] - 3s 42ms/step - loss: 0.5194 - accuracy: 0.4514 - val_loss: 0.1976 - val_accuracy: 0.5000
Epoch 13/100
60/60 [==============================] - 2s 41ms/step - loss: 0.3114 - accuracy: 0.6019 - val_loss: 0.3483 - val_accuracy: 0.5000
Epoch 14/100
60/60 [==============================] - 2s 41ms/step - loss: 0.3794 - accuracy: 0.6003 - val_loss: 0.4723 - val_accuracy: 0.5000
Epoch 15/100
60/60 [==============================] - 2s 41ms/step - loss: 0.4172 - accuracy: 0.5873 - val_loss: 0.4992 - val_accuracy: 0.5000
Epoch 16/100
60/60 [==============================] - 2s 41ms/step - loss: 0.3110 - accuracy: 0.4338 - val_loss: 0.6209 - val_accuracy: 0.5000
Epoch 17/100
60/60 [==============================] - 2s 41ms/step - loss: 0.6362 - accuracy: 0.6615 - val_loss: 0.2337 - val_accuracy: 0.5000
Epoch 18/100
60/60 [==============================] - 3s 42ms/step - loss: 0.1652 - accuracy: 0.5617 - val_loss: 0.0841 - val_accuracy: 0.5000
Epoch 19/100
60/60 [==============================] - 3s 42ms/step - loss: 0.1050 - accuracy: 0.4714 - val_loss: 0.2853 - val_accuracy: 0.5000
Epoch 20/100
60/60 [==============================] - 2s 41ms/step - loss: 0.1031 - accuracy: 0.5254 - val_loss: 0.2085 - val_accuracy: 0.5000
Epoch 21/100
60/60 [==============================] - 2s 42ms/step - loss: 0.0375 - accuracy: 0.5124 - val_loss: 0.0564 - val_accuracy: 0.5000
Epoch 22/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0298 - accuracy: 0.5482 - val_loss: 0.5937 - val_accuracy: 0.5000
Epoch 23/100
60/60 [==============================] - 2s 41ms/step - loss: 0.3126 - accuracy: 0.3884 - val_loss: 0.0527 - val_accuracy: 0.5000
Epoch 24/100
60/60 [==============================] - 2s 41ms/step - loss: 0.1054 - accuracy: 0.5572 - val_loss: 0.0356 - val_accuracy: 0.5000
Epoch 25/100
60/60 [==============================] - 3s 42ms/step - loss: 0.1067 - accuracy: 0.4170 - val_loss: 0.1262 - val_accuracy: 0.5000
Epoch 26/100
60/60 [==============================] - 2s 40ms/step - loss: 0.0551 - accuracy: 0.5608 - val_loss: 0.0255 - val_accuracy: 0.5000
Epoch 27/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0188 - accuracy: 0.5816 - val_loss: 0.3153 - val_accuracy: 0.5000
Epoch 28/100
60/60 [==============================] - 2s 40ms/step - loss: 0.1106 - accuracy: 0.4583 - val_loss: 0.3419 - val_accuracy: 0.5000
Epoch 29/100
60/60 [==============================] - 2s 40ms/step - loss: 0.1493 - accuracy: 0.5334 - val_loss: 0.0351 - val_accuracy: 0.5000
Epoch 30/100
60/60 [==============================] - 2s 41ms/step - loss: 0.1099 - accuracy: 0.4537 - val_loss: 0.1217 - val_accuracy: 0.5000
Epoch 31/100
60/60 [==============================] - 3s 43ms/step - loss: 0.0893 - accuracy: 0.4828 - val_loss: 0.1276 - val_accuracy: 0.5000
Epoch 32/100
60/60 [==============================] - 3s 43ms/step - loss: 0.1806 - accuracy: 0.4265 - val_loss: 0.0157 - val_accuracy: 0.5000
Epoch 33/100
60/60 [==============================] - 3s 44ms/step - loss: 0.0154 - accuracy: 0.3411 - val_loss: 0.0152 - val_accuracy: 0.5000
Epoch 34/100
60/60 [==============================] - 3s 42ms/step - loss: 0.0088 - accuracy: 0.4385 - val_loss: 0.0075 - val_accuracy: 0.5000
Epoch 35/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0068 - accuracy: 0.5450 - val_loss: 0.0045 - val_accuracy: 0.5000
Epoch 36/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0051 - accuracy: 0.4283 - val_loss: 0.0039 - val_accuracy: 0.5000
Epoch 37/100
60/60 [==============================] - 2s 40ms/step - loss: 0.0026 - accuracy: 0.3970 - val_loss: 0.0035 - val_accuracy: 0.5000
Epoch 38/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0037 - accuracy: 0.4758 - val_loss: 0.0030 - val_accuracy: 0.5000
Epoch 39/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0021 - accuracy: 0.5036 - val_loss: 0.0025 - val_accuracy: 0.5000
Epoch 40/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0028 - accuracy: 0.6088 - val_loss: 0.0022 - val_accuracy: 0.5000
Epoch 41/100
60/60 [==============================] - 2s 40ms/step - loss: 0.0023 - accuracy: 0.3521 - val_loss: 0.0020 - val_accuracy: 0.5000
Epoch 42/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0023 - accuracy: 0.4832 - val_loss: 0.0020 - val_accuracy: 0.5000
Epoch 43/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0019 - accuracy: 0.6031 - val_loss: 0.0019 - val_accuracy: 0.5000
Epoch 44/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0014 - accuracy: 0.4757 - val_loss: 0.0017 - val_accuracy: 0.5000
Epoch 45/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0012 - accuracy: 0.5074 - val_loss: 0.0016 - val_accuracy: 0.5000
Epoch 46/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0019 - accuracy: 0.4907 - val_loss: 0.0014 - val_accuracy: 0.5000
Epoch 47/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0013 - accuracy: 0.5113 - val_loss: 0.0013 - val_accuracy: 0.5000
Epoch 48/100
60/60 [==============================] - 2s 42ms/step - loss: 0.0013 - accuracy: 0.4616 - val_loss: 0.0012 - val_accuracy: 0.5000
Epoch 49/100
60/60 [==============================] - 3s 43ms/step - loss: 9.2667e-04 - accuracy: 0.4932 - val_loss: 0.0012 - val_accuracy: 0.5000
Epoch 50/100
60/60 [==============================] - 2s 40ms/step - loss: 0.0012 - accuracy: 0.5685 - val_loss: 0.0011 - val_accuracy: 0.5000
Epoch 51/100
60/60 [==============================] - 2s 41ms/step - loss: 0.0014 - accuracy: 0.4952 - val_loss: 0.0011 - val_accuracy: 0.5000
Epoch 52/100
60/60 [==============================] - 3s 44ms/step - loss: 9.6710e-04 - accuracy: 0.4953 - val_loss: 0.0010 - val_accuracy: 0.5000
Epoch 53/100
60/60 [==============================] - 2s 40ms/step - loss: 0.0013 - accuracy: 0.5196 - val_loss: 9.4684e-04 - val_accuracy: 0.5000
Epoch 54/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0012 - accuracy: 0.6033 - val_loss: 9.0767e-04 - val_accuracy: 0.5000
Epoch 55/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0011 - accuracy: 0.5339 - val_loss: 8.7093e-04 - val_accuracy: 0.5000
Epoch 56/100
60/60 [==============================] - 2s 40ms/step - loss: 7.3141e-04 - accuracy: 0.4408 - val_loss: 8.4973e-04 - val_accuracy: 0.5000
Epoch 57/100
60/60 [==============================] - 2s 40ms/step - loss: 5.9006e-04 - accuracy: 0.5258 - val_loss: 8.1935e-04 - val_accuracy: 0.5000
Epoch 58/100
60/60 [==============================] - 3s 43ms/step - loss: 7.8818e-04 - accuracy: 0.5216 - val_loss: 7.8448e-04 - val_accuracy: 0.5000
Epoch 59/100
60/60 [==============================] - 3s 42ms/step - loss: 9.2272e-04 - accuracy: 0.4472 - val_loss: 7.5098e-04 - val_accuracy: 0.5000
Epoch 60/100
60/60 [==============================] - 3s 42ms/step - loss: 0.0011 - accuracy: 0.5485 - val_loss: 7.2444e-04 - val_accuracy: 0.5000
Epoch 61/100
60/60 [==============================] - 2s 41ms/step - loss: 5.5459e-04 - accuracy: 0.4393 - val_loss: 7.1711e-04 - val_accuracy: 0.5000
Epoch 62/100
60/60 [==============================] - 3s 43ms/step - loss: 7.3943e-04 - accuracy: 0.6748 - val_loss: 7.0446e-04 - val_accuracy: 0.5000
Epoch 63/100
60/60 [==============================] - 2s 41ms/step - loss: 6.0513e-04 - accuracy: 0.4365 - val_loss: 6.5710e-04 - val_accuracy: 0.5000
Epoch 64/100
60/60 [==============================] - 3s 43ms/step - loss: 7.1400e-04 - accuracy: 0.5855 - val_loss: 6.3535e-04 - val_accuracy: 0.5000
Epoch 65/100
60/60 [==============================] - 2s 40ms/step - loss: 4.1557e-04 - accuracy: 0.4226 - val_loss: 6.1638e-04 - val_accuracy: 0.5000
Epoch 66/100
60/60 [==============================] - 2s 39ms/step - loss: 0.0010 - accuracy: 0.5130 - val_loss: 5.9961e-04 - val_accuracy: 0.5000
Epoch 67/100
60/60 [==============================] - 2s 40ms/step - loss: 4.2256e-04 - accuracy: 0.5745 - val_loss: 5.8452e-04 - val_accuracy: 0.5000
Epoch 68/100
60/60 [==============================] - 3s 44ms/step - loss: 4.6930e-04 - accuracy: 0.4256 - val_loss: 5.6929e-04 - val_accuracy: 0.5000
Epoch 69/100
60/60 [==============================] - 3s 43ms/step - loss: 5.0537e-04 - accuracy: 0.5201 - val_loss: 5.5308e-04 - val_accuracy: 0.5000
Epoch 70/100
60/60 [==============================] - 2s 40ms/step - loss: 4.2207e-04 - accuracy: 0.5162 - val_loss: 5.3811e-04 - val_accuracy: 0.5000
Epoch 71/100
60/60 [==============================] - 3s 42ms/step - loss: 4.2835e-04 - accuracy: 0.5187 - val_loss: 5.2421e-04 - val_accuracy: 0.5000
Epoch 72/100
60/60 [==============================] - 2s 41ms/step - loss: 6.9296e-04 - accuracy: 0.5396 - val_loss: 5.1115e-04 - val_accuracy: 0.5000
Epoch 73/100
60/60 [==============================] - 2s 42ms/step - loss: 6.4352e-04 - accuracy: 0.4772 - val_loss: 4.9949e-04 - val_accuracy: 0.5000
Epoch 74/100
60/60 [==============================] - 2s 41ms/step - loss: 4.0728e-04 - accuracy: 0.4406 - val_loss: 4.8785e-04 - val_accuracy: 0.5000
Epoch 75/100
60/60 [==============================] - 2s 41ms/step - loss: 6.5099e-04 - accuracy: 0.4769 - val_loss: 4.7489e-04 - val_accuracy: 0.5000
Epoch 76/100
60/60 [==============================] - 2s 40ms/step - loss: 5.3847e-04 - accuracy: 0.5610 - val_loss: 4.6401e-04 - val_accuracy: 0.5000
Epoch 77/100
60/60 [==============================] - 3s 43ms/step - loss: 3.2081e-04 - accuracy: 0.5025 - val_loss: 4.5471e-04 - val_accuracy: 0.5000
Epoch 78/100
60/60 [==============================] - 2s 41ms/step - loss: 4.1042e-04 - accuracy: 0.4055 - val_loss: 4.4509e-04 - val_accuracy: 0.5000
Epoch 79/100
60/60 [==============================] - 3s 46ms/step - loss: 4.0072e-04 - accuracy: 0.5982 - val_loss: 4.3807e-04 - val_accuracy: 0.5000
Epoch 80/100
60/60 [==============================] - 2s 40ms/step - loss: 3.6314e-04 - accuracy: 0.4305 - val_loss: 4.2492e-04 - val_accuracy: 0.5000
Epoch 81/100
60/60 [==============================] - 3s 42ms/step - loss: 4.9497e-04 - accuracy: 0.4644 - val_loss: 4.2099e-04 - val_accuracy: 0.5000
Epoch 82/100
60/60 [==============================] - 3s 42ms/step - loss: 4.3963e-04 - accuracy: 0.4163 - val_loss: 4.0970e-04 - val_accuracy: 0.5000
Epoch 83/100
60/60 [==============================] - 3s 42ms/step - loss: 2.3065e-04 - accuracy: 0.5292 - val_loss: 4.0007e-04 - val_accuracy: 0.5000
Epoch 84/100
60/60 [==============================] - 2s 40ms/step - loss: 3.6344e-04 - accuracy: 0.4781 - val_loss: 3.9164e-04 - val_accuracy: 0.5000
Epoch 85/100
60/60 [==============================] - 2s 41ms/step - loss: 3.2347e-04 - accuracy: 0.4355 - val_loss: 3.8515e-04 - val_accuracy: 0.5000
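One thing worth checking in the model above (an assumption about the cause, not a certainty): softmax over a single unit always outputs exactly 1, so a Dense(1, activation="softmax") head predicts the positive class for every input, which would pin accuracy near 50% on balanced data regardless of the loss value.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # numerically stable softmax
    return e / e.sum()

# With a single logit, softmax returns exactly 1.0 whatever the input:
for logit in (-3.0, 0.0, 7.5):
    assert softmax(np.array([logit]))[0] == 1.0
print("softmax over one unit is constant")
```

For a one-unit output with binary_crossentropy, sigmoid is the usual activation; alternatively, use softmax with two output units and categorical labels.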

Weird Model Summary

I am getting a weird model summary using Keras and ImageDataGenerator with cats-and-dogs classification.
I am using Google Colab + GPU.
The problem is that the model summary seems to show weird values, and it looks like the loss function is not working.
Kindly suggest what the problem is.
My code is below:
train_datagen=ImageDataGenerator(rescale=1./255)
test_datagen=ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')
validation_generator = train_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=50,
    class_mode='binary')
history = model.fit(
    train_generator,
    steps_per_epoch=31,
    epochs=20,
    validation_data=validation_generator,
    validation_steps=20)
Model Summary is as below
Epoch 1/20
31/31 [==============================] - 10s 241ms/step - loss: 0.1302 - acc: 1.0000 -
val_loss: 5.0506 - val_acc: 0.5000
Epoch 2/20
31/31 [==============================] - 6s 215ms/step - loss: 4.4286e-05 - acc: 1.0000 -
val_loss: 6.8281 - val_acc: 0.5000
Epoch 3/20
31/31 [==============================] - 7s 212ms/step - loss: 4.6900e-06 - acc: 1.0000 -
val_loss: 8.1907 - val_acc: 0.5000
Epoch 4/20
31/31 [==============================] - 6s 211ms/step - loss: 5.8646e-07 - acc: 1.0000 -
val_loss: 9.3841 - val_acc: 0.5000
Epoch 5/20
31/31 [==============================] - 6s 212ms/step - loss: 2.0634e-07 - acc: 1.0000 -
val_loss: 10.3554 - val_acc: 0.5000
Epoch 6/20
31/31 [==============================] - 6s 211ms/step - loss: 2.8432e-08 - acc: 1.0000 -
val_loss: 11.3546 - val_acc: 0.5000
Epoch 7/20
31/31 [==============================] - 6s 211ms/step - loss: 1.3657e-08 - acc: 1.0000 -
val_loss: 12.1012 - val_acc: 0.5000
Epoch 8/20
31/31 [==============================] - 7s 215ms/step - loss: 4.8156e-09 - acc: 1.0000 -
val_loss: 12.6892 - val_acc: 0.5000
Epoch 9/20
31/31 [==============================] - 7s 219ms/step - loss: 2.9152e-09 - acc: 1.0000 -
val_loss: 13.1079 - val_acc: 0.5000
Epoch 10/20
31/31 [==============================] - 7s 216ms/step - loss: 1.6705e-09 - acc: 1.0000 -
val_loss: 13.4230 - val_acc: 0.5000
Epoch 11/20
31/31 [==============================] - 7s 218ms/step - loss: 1.2603e-09 - acc: 1.0000 -
val_loss: 13.6259 - val_acc: 0.5000
Epoch 12/20
31/31 [==============================] - 7s 218ms/step - loss: 1.7701e-09 - acc: 1.0000 - val_loss:
13.7718 - val_acc: 0.5000
Epoch 13/20
31/31 [==============================] - 7s 218ms/step - loss: 1.6043e-09 - acc: 1.0000 - val_loss:
13.9099 - val_acc: 0.5000
Epoch 14/20
31/31 [==============================] - 7s 219ms/step - loss: 3.8831e-10 - acc: 1.0000 -
val_loss: 14.0405 - val_acc: 0.5000
Epoch 15/20
31/31 [==============================] - 7s 216ms/step - loss: 8.9113e-10 - acc: 1.0000 - val_loss:
14.1567 - val_acc: 0.5000
Epoch 16/20
31/31 [==============================] - 7s 218ms/step - loss: 8.5343e-10 - acc: 1.0000 -
val_loss: 14.2485 - val_acc: 0.5000
Epoch 17/20
31/31 [==============================] - 7s 217ms/step - loss: 2.8638e-10 - acc: 1.0000 -
val_loss: 14.3410 - val_acc: 0.5000
Epoch 18/20
31/31 [==============================] - 7s 218ms/step - loss: 5.3467e-10 - acc: 1.0000
- val_loss: 14.4225 - val_acc: 0.5000
Epoch 19/20
31/31 [==============================] - 7s 217ms/step - loss: 4.5269e-10 - acc: 1.0000
- val_loss: 14.4895 - val_acc: 0.5000
Epoch 20/20
31/31 [==============================] - 7s 216ms/step - loss: 3.4228e-10 - acc:
1.0000 - val_loss: 14.5428 - val_acc: 0.5000
What you have posted is the training log printed by model.fit, not a model summary; call model.summary() to print the actual summary.

Same code run on two different machines, disparity in accuracy; From Deep Learning with Python Chapter 5.3 pretrained convnet

I'm following along with Chollet's book Deep Learning with Python, and in chapter 5.3 I've come across a weird accuracy disparity between my results and the author's.
After running the exact code pulled from GitHub, obtainable here, I'm getting
test acc: 0.9409999930858612
while the author is getting
test acc: 0.967999992371
Also, when the models first start training, my accuracy is usually about 10% behind where the author's starts. Here are all of my outputs, in the order in which they appear at that GitHub link.
I'm looking for any pointers as to why running the same code leaves such a huge gap. Thanks for taking a look!
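For context (a sketch of common causes, not a guaranteed fix): run-to-run gaps like this usually come from unseeded randomness (weight initialization, data shuffling, the chapter's random image augmentation, GPU nondeterminism) plus library-version differences between machines. Seeding the generators in play narrows, though rarely eliminates, the gap; the TensorFlow seed call is shown commented out since TensorFlow may not be importable everywhere this runs.

```python
import os
import random
import numpy as np

# Seed every generator in play, before building the model.
os.environ["PYTHONHASHSEED"] = "0"
random.seed(0)
np.random.seed(0)
# import tensorflow as tf; tf.random.set_seed(0)  # when TensorFlow is installed

# Identical seeds give identical draws, the property seeding relies on:
a = np.random.default_rng(0).normal(size=3)
b = np.random.default_rng(0).normal(size=3)
assert np.allclose(a, b)
```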
First
Train on 2000 samples, validate on 1000 samples
Epoch 1/30
2000/2000 [==============================] - 1s 392us/step - loss: 0.6145 - acc: 0.6570 - val_loss: 0.4502 - val_acc: 0.8250
Epoch 2/30
2000/2000 [==============================] - 1s 260us/step - loss: 0.4402 - acc: 0.7980 - val_loss: 0.3596 - val_acc: 0.8600
Epoch 3/30
2000/2000 [==============================] - 1s 258us/step - loss: 0.3559 - acc: 0.8420 - val_loss: 0.3238 - val_acc: 0.8710
Epoch 4/30
2000/2000 [==============================] - 1s 257us/step - loss: 0.3149 - acc: 0.8655 - val_loss: 0.2945 - val_acc: 0.8800
Epoch 5/30
2000/2000 [==============================] - 1s 259us/step - loss: 0.2895 - acc: 0.8850 - val_loss: 0.2905 - val_acc: 0.8710
Epoch 6/30
2000/2000 [==============================] - 1s 257us/step - loss: 0.2627 - acc: 0.8970 - val_loss: 0.2695 - val_acc: 0.8950
Epoch 7/30
2000/2000 [==============================] - 1s 265us/step - loss: 0.2450 - acc: 0.9040 - val_loss: 0.2608 - val_acc: 0.8930
Epoch 8/30
2000/2000 [==============================] - 1s 259us/step - loss: 0.2328 - acc: 0.9150 - val_loss: 0.2937 - val_acc: 0.8670
Epoch 9/30
2000/2000 [==============================] - 1s 260us/step - loss: 0.2208 - acc: 0.9170 - val_loss: 0.2933 - val_acc: 0.8660
Epoch 10/30
2000/2000 [==============================] - 1s 254us/step - loss: 0.2026 - acc: 0.9225 - val_loss: 0.2471 - val_acc: 0.9040
Epoch 11/30
2000/2000 [==============================] - 1s 259us/step - loss: 0.1954 - acc: 0.9260 - val_loss: 0.2461 - val_acc: 0.9000
Epoch 12/30
2000/2000 [==============================] - 1s 260us/step - loss: 0.1786 - acc: 0.9360 - val_loss: 0.2414 - val_acc: 0.9070
Epoch 13/30
2000/2000 [==============================] - 0s 248us/step - loss: 0.1781 - acc: 0.9305 - val_loss: 0.2410 - val_acc: 0.9080
Epoch 14/30
2000/2000 [==============================] - 0s 249us/step - loss: 0.1701 - acc: 0.9380 - val_loss: 0.2372 - val_acc: 0.9080
Epoch 15/30
2000/2000 [==============================] - 1s 257us/step - loss: 0.1624 - acc: 0.9450 - val_loss: 0.2403 - val_acc: 0.9050
Epoch 16/30
2000/2000 [==============================] - 1s 258us/step - loss: 0.1580 - acc: 0.9465 - val_loss: 0.2448 - val_acc: 0.9060
Epoch 17/30
2000/2000 [==============================] - 1s 256us/step - loss: 0.1467 - acc: 0.9520 - val_loss: 0.2347 - val_acc: 0.9050
Epoch 18/30
2000/2000 [==============================] - 1s 255us/step - loss: 0.1421 - acc: 0.9505 - val_loss: 0.2366 - val_acc: 0.9020
Epoch 19/30
2000/2000 [==============================] - 1s 258us/step - loss: 0.1375 - acc: 0.9540 - val_loss: 0.2327 - val_acc: 0.9080
Epoch 20/30
2000/2000 [==============================] - 0s 248us/step - loss: 0.1268 - acc: 0.9545 - val_loss: 0.2395 - val_acc: 0.9030
Epoch 21/30
2000/2000 [==============================] - 1s 255us/step - loss: 0.1216 - acc: 0.9565 - val_loss: 0.2436 - val_acc: 0.9040
Epoch 22/30
2000/2000 [==============================] - 1s 255us/step - loss: 0.1220 - acc: 0.9565 - val_loss: 0.2340 - val_acc: 0.9040
Epoch 23/30
2000/2000 [==============================] - 1s 261us/step - loss: 0.1152 - acc: 0.9630 - val_loss: 0.2328 - val_acc: 0.9030
Epoch 24/30
2000/2000 [==============================] - 1s 251us/step - loss: 0.1111 - acc: 0.9605 - val_loss: 0.2506 - val_acc: 0.8990
Epoch 25/30
2000/2000 [==============================] - 1s 257us/step - loss: 0.1024 - acc: 0.9665 - val_loss: 0.2391 - val_acc: 0.9040
Epoch 26/30
2000/2000 [==============================] - 0s 250us/step - loss: 0.0999 - acc: 0.9680 - val_loss: 0.2573 - val_acc: 0.8980
Epoch 27/30
2000/2000 [==============================] - 1s 261us/step - loss: 0.0996 - acc: 0.9680 - val_loss: 0.2365 - val_acc: 0.9060
Epoch 28/30
2000/2000 [==============================] - 0s 250us/step - loss: 0.0873 - acc: 0.9765 - val_loss: 0.2444 - val_acc: 0.9020
Epoch 29/30
2000/2000 [==============================] - 0s 244us/step - loss: 0.0904 - acc: 0.9730 - val_loss: 0.2494 - val_acc: 0.9020
Epoch 30/30
2000/2000 [==============================] - 0s 245us/step - loss: 0.0876 - acc: 0.9745 - val_loss: 0.2426 - val_acc: 0.9020
Second
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Epoch 1/30
- 13s - loss: 0.6106 - acc: 0.6725 - val_loss: 0.4488 - val_acc: 0.8300
Epoch 2/30
- 12s - loss: 0.4856 - acc: 0.7820 - val_loss: 0.3938 - val_acc: 0.8290
Epoch 3/30
- 12s - loss: 0.4271 - acc: 0.8125 - val_loss: 0.3307 - val_acc: 0.8690
Epoch 4/30
- 12s - loss: 0.4046 - acc: 0.8215 - val_loss: 0.3040 - val_acc: 0.8780
Epoch 5/30
- 12s - loss: 0.3809 - acc: 0.8275 - val_loss: 0.2999 - val_acc: 0.8670
Epoch 6/30
- 12s - loss: 0.3592 - acc: 0.8510 - val_loss: 0.2794 - val_acc: 0.8890
Epoch 7/30
- 12s - loss: 0.3709 - acc: 0.8350 - val_loss: 0.2703 - val_acc: 0.8950
Epoch 8/30
- 12s - loss: 0.3460 - acc: 0.8525 - val_loss: 0.2683 - val_acc: 0.8940
Epoch 9/30
- 12s - loss: 0.3532 - acc: 0.8430 - val_loss: 0.2660 - val_acc: 0.8820
Epoch 10/30
- 12s - loss: 0.3277 - acc: 0.8545 - val_loss: 0.2641 - val_acc: 0.8950
Epoch 11/30
- 12s - loss: 0.3236 - acc: 0.8685 - val_loss: 0.2705 - val_acc: 0.8770
Epoch 12/30
- 12s - loss: 0.3123 - acc: 0.8740 - val_loss: 0.2533 - val_acc: 0.8960
Epoch 13/30
- 12s - loss: 0.3279 - acc: 0.8605 - val_loss: 0.2718 - val_acc: 0.8740
Epoch 14/30
- 12s - loss: 0.3088 - acc: 0.8595 - val_loss: 0.2510 - val_acc: 0.9000
Epoch 15/30
- 12s - loss: 0.2999 - acc: 0.8700 - val_loss: 0.2468 - val_acc: 0.9010
Epoch 16/30
- 12s - loss: 0.3128 - acc: 0.8600 - val_loss: 0.2496 - val_acc: 0.9020
Epoch 17/30
- 12s - loss: 0.3064 - acc: 0.8605 - val_loss: 0.2496 - val_acc: 0.9010
Epoch 18/30
- 12s - loss: 0.3090 - acc: 0.8660 - val_loss: 0.2467 - val_acc: 0.8980
Epoch 19/30
- 12s - loss: 0.2903 - acc: 0.8710 - val_loss: 0.2709 - val_acc: 0.8790
Epoch 20/30
- 12s - loss: 0.3012 - acc: 0.8700 - val_loss: 0.2499 - val_acc: 0.8940
Epoch 21/30
- 12s - loss: 0.2944 - acc: 0.8820 - val_loss: 0.2593 - val_acc: 0.8960
Epoch 22/30
- 12s - loss: 0.2978 - acc: 0.8670 - val_loss: 0.2421 - val_acc: 0.9040
Epoch 23/30
- 12s - loss: 0.2942 - acc: 0.8695 - val_loss: 0.2378 - val_acc: 0.9050
Epoch 24/30
- 12s - loss: 0.2809 - acc: 0.8830 - val_loss: 0.2447 - val_acc: 0.8920
Epoch 25/30
- 12s - loss: 0.2963 - acc: 0.8765 - val_loss: 0.2420 - val_acc: 0.8950
Epoch 26/30
- 12s - loss: 0.2869 - acc: 0.8725 - val_loss: 0.2620 - val_acc: 0.8910
Epoch 27/30
- 12s - loss: 0.2789 - acc: 0.8820 - val_loss: 0.2447 - val_acc: 0.8950
Epoch 28/30
- 12s - loss: 0.2852 - acc: 0.8745 - val_loss: 0.2488 - val_acc: 0.8990
Epoch 29/30
- 12s - loss: 0.2821 - acc: 0.8810 - val_loss: 0.2402 - val_acc: 0.9010
Epoch 30/30
- 12s - loss: 0.2810 - acc: 0.8815 - val_loss: 0.2392 - val_acc: 0.9040
Third
Epoch 1/100
100/100 [==============================] - 13s 130ms/step - loss: 0.2866 - acc: 0.8735 - val_loss: 0.2175 - val_acc: 0.9080
Epoch 2/100
100/100 [==============================] - 12s 119ms/step - loss: 0.2588 - acc: 0.8925 - val_loss: 0.2073 - val_acc: 0.9200
Epoch 3/100
100/100 [==============================] - 12s 121ms/step - loss: 0.2464 - acc: 0.8985 - val_loss: 0.2072 - val_acc: 0.9200
Epoch 4/100
100/100 [==============================] - 12s 121ms/step - loss: 0.2127 - acc: 0.9085 - val_loss: 0.2032 - val_acc: 0.9230
Epoch 5/100
100/100 [==============================] - 12s 120ms/step - loss: 0.2147 - acc: 0.9100 - val_loss: 0.1972 - val_acc: 0.9200
Epoch 6/100
100/100 [==============================] - 12s 118ms/step - loss: 0.1998 - acc: 0.9130 - val_loss: 0.1975 - val_acc: 0.9240
Epoch 7/100
100/100 [==============================] - 12s 120ms/step - loss: 0.1977 - acc: 0.9235 - val_loss: 0.2052 - val_acc: 0.9170
Epoch 8/100
100/100 [==============================] - 12s 120ms/step - loss: 0.1748 - acc: 0.9270 - val_loss: 0.1890 - val_acc: 0.9270
Epoch 9/100
100/100 [==============================] - 12s 119ms/step - loss: 0.1724 - acc: 0.9325 - val_loss: 0.2060 - val_acc: 0.9230
Epoch 10/100
100/100 [==============================] - 12s 120ms/step - loss: 0.1412 - acc: 0.9435 - val_loss: 0.1968 - val_acc: 0.9190
Epoch 11/100
100/100 [==============================] - 12s 119ms/step - loss: 0.1455 - acc: 0.9450 - val_loss: 0.1805 - val_acc: 0.9350
Epoch 12/100
100/100 [==============================] - 12s 119ms/step - loss: 0.1462 - acc: 0.9450 - val_loss: 0.1814 - val_acc: 0.9340
Epoch 13/100
100/100 [==============================] - 12s 120ms/step - loss: 0.1243 - acc: 0.9535 - val_loss: 0.2028 - val_acc: 0.9250
Epoch 14/100
100/100 [==============================] - 12s 119ms/step - loss: 0.1306 - acc: 0.9500 - val_loss: 0.1753 - val_acc: 0.9310
Epoch 15/100
100/100 [==============================] - 12s 120ms/step - loss: 0.1222 - acc: 0.9525 - val_loss: 0.1981 - val_acc: 0.9310
Epoch 16/100
100/100 [==============================] - 12s 119ms/step - loss: 0.1221 - acc: 0.9500 - val_loss: 0.2299 - val_acc: 0.9160
Epoch 17/100
100/100 [==============================] - 12s 120ms/step - loss: 0.1019 - acc: 0.9625 - val_loss: 0.2630 - val_acc: 0.9160
Epoch 18/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0970 - acc: 0.9630 - val_loss: 0.1876 - val_acc: 0.9250
Epoch 19/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0961 - acc: 0.9620 - val_loss: 0.2018 - val_acc: 0.9300
Epoch 20/100
100/100 [==============================] - 12s 121ms/step - loss: 0.1085 - acc: 0.9570 - val_loss: 0.1957 - val_acc: 0.9320
Epoch 21/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0937 - acc: 0.9630 - val_loss: 0.1920 - val_acc: 0.9290
Epoch 22/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0953 - acc: 0.9605 - val_loss: 0.2289 - val_acc: 0.9260
Epoch 23/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0808 - acc: 0.9700 - val_loss: 0.2148 - val_acc: 0.9260
Epoch 24/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0927 - acc: 0.9645 - val_loss: 0.2542 - val_acc: 0.9230
Epoch 25/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0924 - acc: 0.9580 - val_loss: 0.2366 - val_acc: 0.9250
Epoch 26/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0686 - acc: 0.9760 - val_loss: 0.2021 - val_acc: 0.9370
Epoch 27/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0761 - acc: 0.9735 - val_loss: 0.2552 - val_acc: 0.9190
Epoch 28/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0713 - acc: 0.9740 - val_loss: 0.1946 - val_acc: 0.9330
Epoch 29/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0670 - acc: 0.9735 - val_loss: 0.2767 - val_acc: 0.9140
Epoch 30/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0562 - acc: 0.9780 - val_loss: 0.2539 - val_acc: 0.9300
Epoch 31/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0723 - acc: 0.9750 - val_loss: 0.2265 - val_acc: 0.9270
Epoch 32/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0661 - acc: 0.9755 - val_loss: 0.1973 - val_acc: 0.9340
Epoch 33/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0683 - acc: 0.9740 - val_loss: 0.1937 - val_acc: 0.9330
Epoch 34/100
100/100 [==============================] - 12s 121ms/step - loss: 0.0575 - acc: 0.9800 - val_loss: 0.2816 - val_acc: 0.9250
Epoch 35/100
100/100 [==============================] - 12s 123ms/step - loss: 0.0602 - acc: 0.9795 - val_loss: 0.2012 - val_acc: 0.9300
Epoch 36/100
100/100 [==============================] - 12s 122ms/step - loss: 0.0550 - acc: 0.9790 - val_loss: 0.2138 - val_acc: 0.9360
Epoch 37/100
100/100 [==============================] - 12s 124ms/step - loss: 0.0546 - acc: 0.9750 - val_loss: 0.2061 - val_acc: 0.9400
Epoch 38/100
100/100 [==============================] - 12s 121ms/step - loss: 0.0638 - acc: 0.9780 - val_loss: 0.2375 - val_acc: 0.9290
Epoch 39/100
100/100 [==============================] - 12s 122ms/step - loss: 0.0520 - acc: 0.9785 - val_loss: 0.2437 - val_acc: 0.9260
Epoch 40/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0522 - acc: 0.9775 - val_loss: 0.1932 - val_acc: 0.9430
Epoch 41/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0512 - acc: 0.9800 - val_loss: 0.2903 - val_acc: 0.9200
Epoch 42/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0546 - acc: 0.9790 - val_loss: 0.2127 - val_acc: 0.9410
Epoch 43/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0558 - acc: 0.9805 - val_loss: 0.2027 - val_acc: 0.9410
Epoch 44/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0408 - acc: 0.9875 - val_loss: 0.2138 - val_acc: 0.9380
Epoch 45/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0451 - acc: 0.9810 - val_loss: 0.2076 - val_acc: 0.9390
Epoch 46/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0529 - acc: 0.9820 - val_loss: 0.2035 - val_acc: 0.9420
Epoch 47/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0375 - acc: 0.9850 - val_loss: 0.1965 - val_acc: 0.9430
Epoch 48/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0407 - acc: 0.9870 - val_loss: 0.2131 - val_acc: 0.9410
Epoch 49/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0387 - acc: 0.9840 - val_loss: 0.2467 - val_acc: 0.9350
Epoch 50/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0412 - acc: 0.9860 - val_loss: 0.1852 - val_acc: 0.9430
Epoch 51/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0350 - acc: 0.9855 - val_loss: 0.3657 - val_acc: 0.9200
Epoch 52/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0337 - acc: 0.9850 - val_loss: 0.2103 - val_acc: 0.9450
Epoch 53/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0478 - acc: 0.9815 - val_loss: 0.2192 - val_acc: 0.9440
Epoch 54/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0446 - acc: 0.9820 - val_loss: 0.2293 - val_acc: 0.9360
Epoch 55/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0318 - acc: 0.9885 - val_loss: 0.2361 - val_acc: 0.9390
Epoch 56/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0317 - acc: 0.9865 - val_loss: 0.2123 - val_acc: 0.9450
Epoch 57/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0337 - acc: 0.9905 - val_loss: 0.2219 - val_acc: 0.9420
Epoch 58/100
100/100 [==============================] - 12s 120ms/step - loss: 0.0390 - acc: 0.9895 - val_loss: 0.2046 - val_acc: 0.9380
Epoch 59/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0295 - acc: 0.9905 - val_loss: 0.2522 - val_acc: 0.9410
Epoch 60/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0315 - acc: 0.9890 - val_loss: 0.2451 - val_acc: 0.9330
Epoch 61/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0251 - acc: 0.9935 - val_loss: 0.2584 - val_acc: 0.9300
Epoch 62/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0338 - acc: 0.9860 - val_loss: 0.1990 - val_acc: 0.9440
Epoch 63/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0301 - acc: 0.9885 - val_loss: 0.2289 - val_acc: 0.9330
Epoch 64/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0255 - acc: 0.9900 - val_loss: 0.2251 - val_acc: 0.9440
Epoch 65/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0302 - acc: 0.9880 - val_loss: 0.2312 - val_acc: 0.9440
Epoch 66/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0198 - acc: 0.9925 - val_loss: 0.2832 - val_acc: 0.9360
Epoch 67/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0257 - acc: 0.9890 - val_loss: 0.3406 - val_acc: 0.9230
Epoch 68/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0261 - acc: 0.9885 - val_loss: 0.2148 - val_acc: 0.9410
Epoch 69/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0414 - acc: 0.9850 - val_loss: 0.2319 - val_acc: 0.9370
Epoch 70/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0286 - acc: 0.9910 - val_loss: 0.2229 - val_acc: 0.9400
Epoch 71/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0275 - acc: 0.9905 - val_loss: 0.2303 - val_acc: 0.9360
Epoch 72/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0293 - acc: 0.9895 - val_loss: 0.2329 - val_acc: 0.9400
Epoch 73/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0262 - acc: 0.9925 - val_loss: 0.2768 - val_acc: 0.9350
Epoch 74/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0258 - acc: 0.9895 - val_loss: 0.2277 - val_acc: 0.9410
Epoch 75/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0293 - acc: 0.9900 - val_loss: 0.3432 - val_acc: 0.9270
Epoch 76/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0245 - acc: 0.9895 - val_loss: 0.2557 - val_acc: 0.9460
Epoch 77/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0242 - acc: 0.9920 - val_loss: 0.3263 - val_acc: 0.9310
Epoch 78/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0269 - acc: 0.9925 - val_loss: 0.2669 - val_acc: 0.9390
Epoch 79/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0277 - acc: 0.9895 - val_loss: 0.3285 - val_acc: 0.9330
Epoch 80/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0211 - acc: 0.9930 - val_loss: 0.2640 - val_acc: 0.9300
Epoch 81/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0229 - acc: 0.9905 - val_loss: 0.2543 - val_acc: 0.9390
Epoch 82/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0205 - acc: 0.9940 - val_loss: 0.2587 - val_acc: 0.9400
Epoch 83/100
100/100 [==============================] - 12s 117ms/step - loss: 0.0260 - acc: 0.9920 - val_loss: 0.3032 - val_acc: 0.9290
Epoch 84/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0253 - acc: 0.9930 - val_loss: 0.2701 - val_acc: 0.9400
Epoch 85/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0244 - acc: 0.9940 - val_loss: 0.2766 - val_acc: 0.9390
Epoch 86/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0148 - acc: 0.9940 - val_loss: 0.2749 - val_acc: 0.9390
Epoch 87/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0230 - acc: 0.9920 - val_loss: 0.2702 - val_acc: 0.9310
Epoch 88/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0249 - acc: 0.9895 - val_loss: 0.2651 - val_acc: 0.9400
Epoch 89/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0174 - acc: 0.9935 - val_loss: 0.4466 - val_acc: 0.9220
Epoch 90/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0180 - acc: 0.9945 - val_loss: 0.3415 - val_acc: 0.9350
Epoch 91/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0216 - acc: 0.9950 - val_loss: 0.2878 - val_acc: 0.9390
Epoch 92/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0231 - acc: 0.9890 - val_loss: 0.5113 - val_acc: 0.9130
Epoch 93/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0327 - acc: 0.9880 - val_loss: 0.3749 - val_acc: 0.9280
Epoch 94/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0181 - acc: 0.9935 - val_loss: 0.3770 - val_acc: 0.9280
Epoch 95/100
100/100 [==============================] - 12s 117ms/step - loss: 0.0142 - acc: 0.9955 - val_loss: 0.4558 - val_acc: 0.9250
Epoch 96/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0174 - acc: 0.9920 - val_loss: 0.3398 - val_acc: 0.9360
Epoch 97/100
100/100 [==============================] - 12s 119ms/step - loss: 0.0208 - acc: 0.9935 - val_loss: 0.2885 - val_acc: 0.9450
Epoch 98/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0188 - acc: 0.9945 - val_loss: 0.3521 - val_acc: 0.9260
Epoch 99/100
100/100 [==============================] - 12s 117ms/step - loss: 0.0154 - acc: 0.9940 - val_loss: 0.3361 - val_acc: 0.9340
Epoch 100/100
100/100 [==============================] - 12s 118ms/step - loss: 0.0202 - acc: 0.9935 - val_loss: 0.2974 - val_acc: 0.9390
The issue you are pointing out is perfectly normal. In your case, the difference between the starting and final accuracies is negligible, so don't worry. If there were a large difference, i.e. more than 5-8%, then you should be concerned. Overall, there are at least three possible explanations:
Hardware differences: running the same code on different hardware commonly results in minor accuracy differences.
Software differences: running code on GPU vs. CPU will often produce similar but not identical results.
Weight initialization (WI) might differ. Obviously this does not apply to your situation, since you loaded the pretrained VGG with its preset weights. In general, HOW you do WI is a very important consideration when training deep nets.
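To make the WI point concrete, here is a minimal NumPy sketch of the Glorot/Xavier uniform scheme (the same family your model's `glorot_uniform` initializer uses). The function name and layer sizes here are illustrative, not Keras internals; the point is that two runs with the same seed start from identical weights, while different seeds give different starting points, which alone can shift final metrics slightly from run to run:

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng):
    # Glorot/Xavier uniform: U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out))
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# Same seed -> identical initial weights, hence reproducible runs.
w1 = glorot_uniform(2565, 513, np.random.default_rng(42))
w2 = glorot_uniform(2565, 513, np.random.default_rng(42))
print(np.allclose(w1, w2))   # True

# Different seed -> different starting point for the optimizer.
w3 = glorot_uniform(2565, 513, np.random.default_rng(7))
print(np.allclose(w1, w3))   # False
```

In Keras you can get the same effect by fixing the global seeds (e.g. `np.random.seed(...)` and the backend's seed) before building the model, though GPU nondeterminism can still introduce small run-to-run differences.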

Resources