Keras binary classification probabilities to labels

For binary classification, Keras's predicted output is probabilities, not classes (1 or 0).
For example, the following code generates probabilities:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout
# Generate dummy data
x_train = np.random.random((100, 20))
y_train = np.random.randint(2, size=(100, 1))
x_test = np.random.random((10, 20))
y_test = np.random.randint(2, size=(10, 1))
model = Sequential()
model.add(Dense(64, input_dim=20, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=20, batch_size=128)
y_predicted = model.predict(x_test)
print(y_predicted)
and the output is:
Epoch 1/20
100/100 [==============================] - 1s 5ms/step - loss: 0.8134 - acc: 0.4300
Epoch 2/20
100/100 [==============================] - 0s 17us/step - loss: 0.7429 - acc: 0.4600
Epoch 3/20
100/100 [==============================] - 0s 20us/step - loss: 0.7511 - acc: 0.4300
Epoch 4/20
100/100 [==============================] - 0s 18us/step - loss: 0.7408 - acc: 0.5000
Epoch 5/20
100/100 [==============================] - 0s 21us/step - loss: 0.6922 - acc: 0.5700
Epoch 6/20
100/100 [==============================] - 0s 31us/step - loss: 0.6874 - acc: 0.5600
Epoch 7/20
100/100 [==============================] - 0s 29us/step - loss: 0.7005 - acc: 0.5600
Epoch 8/20
100/100 [==============================] - 0s 23us/step - loss: 0.6960 - acc: 0.5200
Epoch 9/20
100/100 [==============================] - 0s 24us/step - loss: 0.6988 - acc: 0.5200
Epoch 10/20
100/100 [==============================] - 0s 26us/step - loss: 0.7276 - acc: 0.4000
Epoch 11/20
100/100 [==============================] - 0s 20us/step - loss: 0.6967 - acc: 0.5000
Epoch 12/20
100/100 [==============================] - 0s 30us/step - loss: 0.7085 - acc: 0.5000
Epoch 13/20
100/100 [==============================] - 0s 24us/step - loss: 0.6993 - acc: 0.5500
Epoch 14/20
100/100 [==============================] - 0s 26us/step - loss: 0.7278 - acc: 0.4600
Epoch 15/20
100/100 [==============================] - 0s 27us/step - loss: 0.6665 - acc: 0.5500
Epoch 16/20
100/100 [==============================] - 0s 24us/step - loss: 0.6784 - acc: 0.5500
Epoch 17/20
100/100 [==============================] - 0s 24us/step - loss: 0.7259 - acc: 0.4800
Epoch 18/20
100/100 [==============================] - 0s 26us/step - loss: 0.7093 - acc: 0.5500
Epoch 19/20
100/100 [==============================] - 0s 28us/step - loss: 0.6911 - acc: 0.5700
Epoch 20/20
100/100 [==============================] - 0s 34us/step - loss: 0.6771 - acc: 0.5500
[[0.4875336 ]
[0.47847825]
[0.4808622 ]
[0.5032022 ]
[0.4556646 ]
[0.48644704]
[0.4600153 ]
[0.47782585]
[0.49664593]
[0.5001673 ]]
Now, how can I get the classes from those probabilities? I tried manually setting a threshold like this:
print([1 if x >0.4 else 0 for x in y_predicted])
Is there a method in the Keras API that does this? I could not find one.

Yes, use model.predict_classes:
model.predict_classes(x_test)
https://github.com/keras-team/keras/blob/f0eb8d538c82798944346b4b2df917a06bf5e9d4/keras/engine/sequential.py#L254 (predict_classes)
which uses a threshold of 0.5 for binary classification (a single sigmoid output) and argmax for multi-class.
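Note that predict_classes is only defined on Sequential models and has been removed from recent TensorFlow/Keras releases, so on newer versions you have to apply the threshold yourself. A minimal sketch of the equivalent, assuming a single sigmoid output as in the code above:
import numpy as np
probs = model.predict(x_test)           # shape (n_samples, 1), values in [0, 1]
labels = (probs > 0.5).astype('int32')  # same 0.5 threshold that predict_classes used
print(labels.ravel())
# For a multi-class softmax output you would take the argmax instead:
# labels = np.argmax(probs, axis=-1)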

Related

LSTM model giving only one label and accuracy in binary

My model gives this output while training. The accuracy comes out only as binary, 0 or 1. Also, my model predicts only the D label ([0 0 0 1]).
WARNING:absl:Found untraced functions such as lstm_cell_layer_call_fn, lstm_cell_layer_call_and_return_conditional_losses, lstm_cell_1_layer_call_fn, lstm_cell_1_layer_call_and_return_conditional_losses, lstm_cell_2_layer_call_fn while saving (showing 5 of 15). These functions will not be directly callable after loading.
INFO:tensorflow:Assets written to: /content/drive/MyDrive/SavedModels/OpenfaceRNN_final_7/assets
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
[[0. 0. 1. 0.]]
(1, 2362, 715)
Epoch 1/50
1/1 [==============================] - 1s 729ms/step - loss: 1.4395 - accuracy: 0.0000e+00
Epoch 2/50
1/1 [==============================] - 1s 696ms/step - loss: 1.4322 - accuracy: 0.0000e+00
Epoch 3/50
1/1 [==============================] - 1s 705ms/step - loss: 1.4250 - accuracy: 0.0000e+00
Epoch 4/50
1/1 [==============================] - 1s 707ms/step - loss: 1.4178 - accuracy: 0.0000e+00
Epoch 5/50
1/1 [==============================] - 1s 699ms/step - loss: 1.4107 - accuracy: 0.0000e+00
Epoch 6/50
1/1 [==============================] - 1s 703ms/step - loss: 1.4036 - accuracy: 0.0000e+00
Epoch 7/50
1/1 [==============================] - 1s 708ms/step - loss: 1.3965 - accuracy: 0.0000e+00
Epoch 8/50
1/1 [==============================] - 1s 712ms/step - loss: 1.3895 - accuracy: 0.0000e+00
Epoch 9/50
1/1 [==============================] - 1s 700ms/step - loss: 1.3826 - accuracy: 0.0000e+00
Epoch 10/50
1/1 [==============================] - 1s 709ms/step - loss: 1.3756 - accuracy: 0.0000e+00
Epoch 11/50
1/1 [==============================] - 1s 697ms/step - loss: 1.3688 - accuracy: 0.0000e+00
Epoch 12/50
1/1 [==============================] - 1s 698ms/step - loss: 1.3619 - accuracy: 0.0000e+00
Epoch 13/50
1/1 [==============================] - 1s 796ms/step - loss: 1.3551 - accuracy: 0.0000e+00
Epoch 14/50
1/1 [==============================] - 1s 783ms/step - loss: 1.3483 - accuracy: 0.0000e+00
Epoch 15/50
1/1 [==============================] - 1s 727ms/step - loss: 1.3416 - accuracy: 0.0000e+00
Epoch 16/50
1/1 [==============================] - 1s 705ms/step - loss: 1.3349 - accuracy: 0.0000e+00
Epoch 17/50
1/1 [==============================] - 1s 702ms/step - loss: 1.3282 - accuracy: 0.0000e+00
Epoch 18/50
1/1 [==============================] - 1s 709ms/step - loss: 1.3215 - accuracy: 0.0000e+00
Epoch 19/50
1/1 [==============================] - 1s 695ms/step - loss: 1.3149 - accuracy: 0.0000e+00
Epoch 20/50
1/1 [==============================] - 1s 700ms/step - loss: 1.3083 - accuracy: 0.0000e+00
Epoch 21/50
1/1 [==============================] - 1s 706ms/step - loss: 1.3017 - accuracy: 0.0000e+00
Epoch 22/50
1/1 [==============================] - 1s 703ms/step - loss: 1.2952 - accuracy: 0.0000e+00
Epoch 23/50
1/1 [==============================] - 1s 701ms/step - loss: 1.2887 - accuracy: 0.0000e+00
Epoch 24/50
1/1 [==============================] - 1s 717ms/step - loss: 1.2822 - accuracy: 0.0000e+00
Epoch 25/50
1/1 [==============================] - 1s 709ms/step - loss: 1.2757 - accuracy: 0.0000e+00
Epoch 26/50
1/1 [==============================] - 1s 702ms/step - loss: 1.2692 - accuracy: 0.0000e+00
Epoch 27/50
1/1 [==============================] - 1s 707ms/step - loss: 1.2628 - accuracy: 0.0000e+00
Epoch 28/50
1/1 [==============================] - 1s 732ms/step - loss: 1.2564 - accuracy: 0.0000e+00
Epoch 29/50
1/1 [==============================] - 1s 705ms/step - loss: 1.2500 - accuracy: 0.0000e+00
Epoch 30/50
1/1 [==============================] - 1s 702ms/step - loss: 1.2436 - accuracy: 0.0000e+00
Epoch 31/50
1/1 [==============================] - 1s 707ms/step - loss: 1.2373 - accuracy: 0.0000e+00
Epoch 32/50
1/1 [==============================] - 1s 708ms/step - loss: 1.2309 - accuracy: 0.0000e+00
Epoch 33/50
1/1 [==============================] - 1s 696ms/step - loss: 1.2246 - accuracy: 0.0000e+00
Epoch 34/50
1/1 [==============================] - 1s 710ms/step - loss: 1.2183 - accuracy: 0.0000e+00
Epoch 35/50
1/1 [==============================] - 1s 703ms/step - loss: 1.2120 - accuracy: 0.0000e+00
Epoch 36/50
1/1 [==============================] - 1s 715ms/step - loss: 1.2058 - accuracy: 1.0000
Epoch 37/50
1/1 [==============================] - 1s 711ms/step - loss: 1.1995 - accuracy: 1.0000
Epoch 38/50
1/1 [==============================] - 1s 709ms/step - loss: 1.1933 - accuracy: 1.0000
Epoch 39/50
1/1 [==============================] - 1s 701ms/step - loss: 1.1870 - accuracy: 1.0000
Epoch 40/50
1/1 [==============================] - 1s 706ms/step - loss: 1.1808 - accuracy: 1.0000
Epoch 41/50
1/1 [==============================] - 1s 725ms/step - loss: 1.1746 - accuracy: 1.0000
Epoch 42/50
1/1 [==============================] - 1s 710ms/step - loss: 1.1685 - accuracy: 1.0000
Epoch 43/50
1/1 [==============================] - 1s 697ms/step - loss: 1.1623 - accuracy: 1.0000
Epoch 44/50
1/1 [==============================] - 1s 697ms/step - loss: 1.1561 - accuracy: 1.0000
Epoch 45/50
1/1 [==============================] - 1s 712ms/step - loss: 1.1500 - accuracy: 1.0000
Epoch 46/50
1/1 [==============================] - 1s 701ms/step - loss: 1.1439 - accuracy: 1.0000
Epoch 47/50
1/1 [==============================] - 1s 710ms/step - loss: 1.1378 - accuracy: 1.0000
Epoch 48/50
1/1 [==============================] - 1s 711ms/step - loss: 1.1317 - accuracy: 1.0000
Epoch 49/50
1/1 [==============================] - 1s 699ms/step - loss: 1.1256 - accuracy: 1.0000
Epoch 50/50
1/1 [==============================] - 1s 704ms/step - loss: 1.1195 - accuracy: 1.0000
This is my model:
from keras.models import Sequential
from keras.layers import LSTM, Dense, BatchNormalization, Dropout

classifier = Sequential()
# input_tensor = Input(shape=(236115, 714))
classifier.add(LSTM(units=1024, return_sequences=True, dropout=0.1, input_shape=(2362, 715)))
classifier.add(Dense(1024, activation='relu'))
classifier.add(BatchNormalization())
classifier.add(Dropout(0.1))
classifier.add(LSTM(units=512, return_sequences=True, dropout=0.1))
classifier.add(Dense(512, activation='relu'))
classifier.add(BatchNormalization())
classifier.add(Dropout(0.1))
classifier.add(LSTM(units=256, dropout=0.1))
classifier.add(Dense(256, activation='relu'))
classifier.add(BatchNormalization())
classifier.add(Dropout(0.1))
classifier.add(Dense(128, activation='relu'))
classifier.add(BatchNormalization())
classifier.add(Dropout(0.1))
classifier.add(Dense(64, activation='relu'))
classifier.add(BatchNormalization())
classifier.add(Dropout(0.1))
classifier.add(Dense(32, activation='relu'))
classifier.add(BatchNormalization())
classifier.add(Dense(16, activation='relu'))
classifier.add(BatchNormalization())
classifier.add(Dense(8, activation='relu'))
classifier.add(BatchNormalization())
classifier.add(Dense(4, activation='softmax'))

InceptionV3+LSTM activity recognition, accuracy grows for 10 epochs and then drops down

I'm trying to build a model for activity recognition, using InceptionV3 as the backbone and an LSTM for the detection, with pre-trained weights.
train_generator = datagen.flow_from_directory(
    'dataset/train',
    target_size=(1, 224, 224),
    batch_size=batch_size,
    class_mode='categorical',  # yield one-hot label batches along with the images
    shuffle=True,
    classes=['PlayingPiano', 'HorseRiding', 'Skiing', 'Basketball', 'BaseballPitch'])
validation_generator = datagen.flow_from_directory(
    'dataset/validate',
    target_size=(1, 224, 224),
    batch_size=batch_size,
    class_mode='categorical',  # yield one-hot label batches along with the images
    shuffle=True,
    classes=['PlayingPiano', 'HorseRiding', 'Skiing', 'Basketball', 'BaseballPitch'])
return train_generator, validation_generator
I train on 5 classes, so I split my data into folders for train and validate.
This is my CNN+LSTM architecture:
image = Input(shape=(None, 224, 224, 3), name='image_input')
cnn = applications.inception_v3.InceptionV3(
    weights='imagenet',
    include_top=False,
    pooling='avg')
cnn.trainable = False
encoded_frame = TimeDistributed(Lambda(lambda x: cnn(x)))(image)
encoded_vid = LSTM(256)(encoded_frame)
layer1 = Dense(512, activation='relu')(encoded_vid)
dropout1 = Dropout(0.5)(layer1)
layer2 = Dense(256, activation='relu')(dropout1)
dropout2 = Dropout(0.5)(layer2)
layer3 = Dense(64, activation='relu')(dropout2)
dropout3 = Dropout(0.5)(layer3)
outputs = Dense(5, activation='softmax')(dropout3)
model = Model(inputs=[image], outputs=outputs)
sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_generator, validation_data=validation_generator, steps_per_epoch=300,
                    epochs=nb_epoch, callbacks=callbacks, shuffle=True, verbose=1)
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
image_input (InputLayer) (None, None, 224, 224, 3) 0
_________________________________________________________________
time_distributed_1 (TimeDist (None, None, 2048) 0
_________________________________________________________________
lstm_1 (LSTM) (None, 256) 2360320
_________________________________________________________________
dense_1 (Dense) (None, 512) 131584
_________________________________________________________________
dropout_1 (Dropout) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 256) 131328
_________________________________________________________________
dropout_2 (Dropout) (None, 256) 0
_________________________________________________________________
dense_3 (Dense) (None, 64) 16448
_________________________________________________________________
dropout_3 (Dropout) (None, 64) 0
_________________________________________________________________
dense_4 (Dense) (None, 5) 325
_________________________________________________________________
The model compiles without problems.
The problem starts during training: it reaches val_acc=0.50, then drops back to val_acc=0.30, and the loss freezes around 0.80 and mostly doesn't move.
Here are the training logs. As you can see, the model improves for some time, then slowly degrades, and finally freezes.
Any idea what the reason could be?
Epoch 00002: val_loss improved from 1.56471 to 1.55652, saving model to ./weights_inception/Inception_V3.02-0.28.h5
Epoch 3/500
300/300 [==============================] - 66s 219ms/step - loss: 1.5436 - acc: 0.3281 - val_loss: 1.5476 - val_acc: 0.2981
Epoch 00003: val_loss improved from 1.55652 to 1.54757, saving model to ./weights_inception/Inception_V3.03-0.30.h5
Epoch 4/500
300/300 [==============================] - 66s 220ms/step - loss: 1.5109 - acc: 0.3593 - val_loss: 1.5284 - val_acc: 0.3588
Epoch 00004: val_loss improved from 1.54757 to 1.52841, saving model to ./weights_inception/Inception_V3.04-0.36.h5
Epoch 5/500
300/300 [==============================] - 66s 221ms/step - loss: 1.4167 - acc: 0.4167 - val_loss: 1.4945 - val_acc: 0.3553
Epoch 00005: val_loss improved from 1.52841 to 1.49446, saving model to ./weights_inception/Inception_V3.05-0.36.h5
Epoch 6/500
300/300 [==============================] - 66s 221ms/step - loss: 1.2941 - acc: 0.4683 - val_loss: 1.4735 - val_acc: 0.4443
Epoch 00006: val_loss improved from 1.49446 to 1.47345, saving model to ./weights_inception/Inception_V3.06-0.44.h5
Epoch 7/500
300/300 [==============================] - 66s 221ms/step - loss: 1.2096 - acc: 0.5116 - val_loss: 1.3738 - val_acc: 0.5186
Epoch 00007: val_loss improved from 1.47345 to 1.37381, saving model to ./weights_inception/Inception_V3.07-0.52.h5
Epoch 8/500
300/300 [==============================] - 66s 221ms/step - loss: 1.1477 - acc: 0.5487 - val_loss: 1.2337 - val_acc: 0.5788
Epoch 00008: val_loss improved from 1.37381 to 1.23367, saving model to ./weights_inception/Inception_V3.08-0.58.h5
Epoch 9/500
300/300 [==============================] - 66s 221ms/step - loss: 1.0809 - acc: 0.5831 - val_loss: 1.2247 - val_acc: 0.5658
Epoch 00009: val_loss improved from 1.23367 to 1.22473, saving model to ./weights_inception/Inception_V3.09-0.57.h5
Epoch 10/500
300/300 [==============================] - 66s 221ms/step - loss: 1.0362 - acc: 0.6089 - val_loss: 1.1704 - val_acc: 0.5774
Epoch 00010: val_loss improved from 1.22473 to 1.17035, saving model to ./weights_inception/Inception_V3.10-0.58.h5
Epoch 11/500
300/300 [==============================] - 66s 221ms/step - loss: 0.9811 - acc: 0.6317 - val_loss: 1.1612 - val_acc: 0.5616
Epoch 00011: val_loss improved from 1.17035 to 1.16121, saving model to ./weights_inception/Inception_V3.11-0.56.h5
Epoch 12/500
300/300 [==============================] - 66s 221ms/step - loss: 0.9444 - acc: 0.6471 - val_loss: 1.1533 - val_acc: 0.5613
Epoch 00012: val_loss improved from 1.16121 to 1.15330, saving model to ./weights_inception/Inception_V3.12-0.56.h5
Epoch 13/500
300/300 [==============================] - 66s 221ms/step - loss: 0.9072 - acc: 0.6650 - val_loss: 1.1843 - val_acc: 0.5361
Epoch 00013: val_loss did not improve from 1.15330
Epoch 14/500
300/300 [==============================] - 66s 221ms/step - loss: 0.8747 - acc: 0.6744 - val_loss: 1.2135 - val_acc: 0.5258
Epoch 00014: val_loss did not improve from 1.15330
Epoch 15/500
300/300 [==============================] - 67s 222ms/step - loss: 0.8666 - acc: 0.6829 - val_loss: 1.1585 - val_acc: 0.5443
Epoch 00015: val_loss did not improve from 1.15330
Epoch 16/500
300/300 [==============================] - 66s 222ms/step - loss: 0.8386 - acc: 0.6926 - val_loss: 1.1503 - val_acc: 0.5482
Epoch 00016: val_loss improved from 1.15330 to 1.15026, saving model to ./weights_inception/Inception_V3.16-0.55.h5
Epoch 17/500
300/300 [==============================] - 66s 221ms/step - loss: 0.8199 - acc: 0.7023 - val_loss: 1.2162 - val_acc: 0.5288
Epoch 00017: val_loss did not improve from 1.15026
Epoch 18/500
300/300 [==============================] - 66s 222ms/step - loss: 0.8018 - acc: 0.7150 - val_loss: 1.1995 - val_acc: 0.5179
Epoch 00018: val_loss did not improve from 1.15026
Epoch 19/500
300/300 [==============================] - 66s 221ms/step - loss: 0.7923 - acc: 0.7186 - val_loss: 1.2218 - val_acc: 0.5137
Epoch 00019: val_loss did not improve from 1.15026
Epoch 20/500
300/300 [==============================] - 67s 222ms/step - loss: 0.7748 - acc: 0.7268 - val_loss: 1.2880 - val_acc: 0.4574
Epoch 00020: val_loss did not improve from 1.15026
Epoch 21/500
300/300 [==============================] - 66s 221ms/step - loss: 0.7604 - acc: 0.7330 - val_loss: 1.2658 - val_acc: 0.4861
The model is starting to overfit. Ideally, as you increase the number of epochs, the training loss will decrease (depending on the learning rate); if it is not able to decrease, your model may have high bias for the data, and you can use a bigger model (more parameters or a deeper model).
You can also reduce the learning rate; if the loss still freezes, the model may have low bias.
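If you want to try the learning-rate reduction, a minimal sketch using a standard Keras callback on the model defined above (the factor and patience values are illustrative assumptions, not tuned):
from keras.callbacks import ReduceLROnPlateau
# Halve the learning rate whenever val_loss stops improving for 3 epochs
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, min_lr=1e-6, verbose=1)
model.fit_generator(train_generator, validation_data=validation_generator,
                    steps_per_epoch=300, epochs=nb_epoch,
                    callbacks=[reduce_lr], shuffle=True, verbose=1)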
Thank you for the help. Yes, the problem was overfitting, so I applied more aggressive dropout to the LSTM, and it helped. But val_loss and val_acc are still very low:
video = Input(shape=(None, 224, 224, 3))
cnn_base = VGG16(input_shape=(224, 224, 3),
                 weights="imagenet",
                 include_top=False)
cnn_out = GlobalAveragePooling2D()(cnn_base.output)
cnn = Model(inputs=cnn_base.input, outputs=cnn_out)
cnn.trainable = False
encoded_frames = TimeDistributed(cnn)(video)
encoded_sequence = LSTM(32, dropout=0.5, W_regularizer=l2(0.01), recurrent_dropout=0.5)(encoded_frames)
hidden_layer = Dense(units=64, activation="relu")(encoded_sequence)
dropout = Dropout(0.2)(hidden_layer)
outputs = Dense(5, activation="softmax")(dropout)
model = Model([video], outputs)
Here are the logs:
Epoch 00033: val_loss improved from 1.62041 to 1.57951, saving model to ./weights_inception/Inception_V3.33-0.76.h5
Epoch 34/500
100/100 [==============================] - 54s 537ms/step - loss: 0.6301 - acc: 0.9764 - val_loss: 1.6190 - val_acc: 0.7627
Epoch 00034: val_loss did not improve from 1.57951
Epoch 35/500
100/100 [==============================] - 54s 537ms/step - loss: 0.5907 - acc: 0.9840 - val_loss: 1.5927 - val_acc: 0.7608
Epoch 00035: val_loss did not improve from 1.57951
Epoch 36/500
100/100 [==============================] - 54s 537ms/step - loss: 0.5783 - acc: 0.9812 - val_loss: 1.3477 - val_acc: 0.7769
Epoch 00036: val_loss improved from 1.57951 to 1.34772, saving model to ./weights_inception/Inception_V3.36-0.78.h5
Epoch 37/500
100/100 [==============================] - 54s 537ms/step - loss: 0.5618 - acc: 0.9802 - val_loss: 1.6545 - val_acc: 0.7384
Epoch 00037: val_loss did not improve from 1.34772
Epoch 38/500
100/100 [==============================] - 54s 537ms/step - loss: 0.5382 - acc: 0.9818 - val_loss: 1.8298 - val_acc: 0.7421
Epoch 00038: val_loss did not improve from 1.34772
Epoch 39/500
100/100 [==============================] - 54s 536ms/step - loss: 0.5080 - acc: 0.9844 - val_loss: 1.7948 - val_acc: 0.7290
Epoch 00039: val_loss did not improve from 1.34772
Epoch 40/500
100/100 [==============================] - 54s 537ms/step - loss: 0.4800 - acc: 0.9892 - val_loss: 1.8036 - val_acc: 0.7522

Neural network does not learn: after 50 epochs it gives the same training error and the same validation error

I am working on a computer vision problem that differentiates between forged and original signatures. The inputs to the network are two images, an anchor and the data, each of shape (224, 224, 3).
For that I extract features from the 'block3_pool' layer of VGG16 (shape (None, 28, 28, 256)), and I have created a layer that calculates the square root of the absolute difference of each of the 256 filters of the two images.
However, the network fails to learn and gives the same training and validation error for every epoch. Tweaking the learning rate does not work, and changing the architecture does not work.
I have trained it on a very small dataset, expecting it to overfit, but the network does not learn even after tinkering with the dataset size.
The data is in the following format: for each user there are 24 original and 24 forged signatures. Choosing one randomly from the original set gives the anchor data, and choosing randomly between original and forged gives the data array. Thus for each user I have an anchor array (24 images of the same sample) and a data array (24 images of different original and forged samples), and the shape of each array for one user is (24, 224, 224, 3).
# model hyperparameters, using the SGD optimizer
epochs = 50
learning_rate = 0.1
decay = learning_rate / epochs
batch_size = 8
keep_prob = 0.8

# This is the function for the lambda layer 'Layer_distance'
def root_diff(x):
    diff = K.sqrt(K.sum(K.abs(x[:, :, :, :, 0] - x[:, :, :, :, 1]), axis=(1, 2)))
    return diff

# This creates an instance of a pre-trained VGG-16 model
def base_model(input_dims=(224, 224, 3), output_dims=128):
    base_model = VGG16(include_top=False, weights='imagenet', input_shape=input_dims)
    for layers in base_model.layers:
        layers.trainable = False
    x = base_model.get_layer('block3_pool').output
    model = Model(inputs=base_model.input, outputs=x)
    return model

def siamese_model(anchor, data, label, anchor_valid, data_valid, label_valid, input_shape=(224, 224, 3)):
    anchor_input = Input(input_shape)
    data_input = Input(input_shape)
    # ----------------------------Model begins from here-------------------------------------------------------#
    model_resnet = base_model(input_dims=input_shape)
    encodings_anchor = model_resnet(anchor_input)
    encodings_data = model_resnet(data_input)
    layer_expand_dims = Lambda(lambda x: K.expand_dims(x, axis=4))
    anchor_expanded = layer_expand_dims(encodings_anchor)
    data_expanded = layer_expand_dims(encodings_data)
    encodings = concatenate([anchor_expanded, data_expanded], axis=4)  # gives the shape as (None, 28, 28, 256, 2)
    Layer_distance = Lambda(root_diff)(encodings)  # should give a vector of (256)
    dense_1 = Dense(256, activation=None, kernel_initializer='glorot_uniform', bias_initializer='zeros')(Layer_distance)
    prediction = Dense(1, activation='sigmoid', kernel_initializer='glorot_uniform')(dense_1)
    # Connect the inputs with the outputs
    siamese_net = Model(inputs=[anchor_input, data_input], outputs=prediction)
    print(siamese_net.summary())
    for layer in siamese_net.layers:
        print("Input shape: " + str(layer.input_shape) + ". Output shape: " + str(layer.output_shape))
    sgd = optimizers.SGD(lr=learning_rate, decay=1e-9, momentum=0.9, nesterov=True)
    siamese_net.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
    history = siamese_net.fit(x=[anchor, data], y=label, batch_size=batch_size, epochs=epochs,
                              validation_data=([anchor_valid, data_valid], label_valid))
The summary of the model (siamese_net):
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 224, 224, 3) 0
__________________________________________________________________________________________________
model_1 (Model) (None, 28, 28, 256) 1735488 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
lambda_1 (Lambda) (None, 28, 28, 256, 0 model_1[1][0]
model_1[2][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 28, 28, 256, 0 lambda_1[0][0]
lambda_1[1][0]
__________________________________________________________________________________________________
lambda_2 (Lambda) (None, 256) 0 concatenate_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 256) 65792 lambda_2[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 1) 257 dense_1[0][0]
==================================================================================================
Total params: 1,801,537
Trainable params: 66,049
Non-trainable params: 1,735,488
__________________________________________________________________________________________________
The training result:
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Train on 48 samples, validate on 48 samples
Epoch 1/50
2019-04-21 06:10:00.354542: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
48/48 [==============================] - 4s 90ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 2/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 3/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 4/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 5/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 6/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 7/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 8/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 9/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 10/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 11/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 12/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 13/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 14/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 15/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 16/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 17/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 18/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 19/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 20/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 21/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 22/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 23/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 24/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 25/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 26/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 27/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 28/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 29/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 30/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 31/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 32/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 33/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 34/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 35/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 36/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 37/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 38/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 39/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 40/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 41/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 42/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 43/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 44/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 45/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 46/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 47/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 48/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 49/50
48/48 [==============================] - 1s 18ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Epoch 50/50
48/48 [==============================] - 1s 19ms/step - loss: 8.9676 - acc: 0.4375 - val_loss: 8.6355 - val_acc: 0.4583
Saved model to disk
@Paras Doshi: I think you're freezing the entire base network, so the training process never updates any parameters. You're using the feature vector of the third VGG block, so you need to unfreeze at least one of those 3 blocks.
def base_model(input_dims=(224, 224, 3), output_dims=128):
    base_model = VGG16(include_top=False, weights='imagenet', input_shape=input_dims)
    for layers in base_model.layers:
        layers.trainable = False  # <---- this freezes every layer, including block 3
Hope that helps.
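As an illustration, a minimal sketch of unfreezing only the third block (this relies on the standard Keras VGG16 layer names such as 'block3_conv1'; verify them with base_model.summary() first):
def base_model(input_dims=(224, 224, 3), output_dims=128):
    base_model = VGG16(include_top=False, weights='imagenet', input_shape=input_dims)
    for layer in base_model.layers:
        # Keep the other blocks frozen, but let block 3 adapt to signatures
        layer.trainable = layer.name.startswith('block3')
    x = base_model.get_layer('block3_pool').output
    return Model(inputs=base_model.input, outputs=x)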

Increase the efficiency of an object detection model in keras

Model I am using:
num_classes = 20
INIT_LR = 1e-3
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(3, 56, 56), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(1024, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu', kernel_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
epochs = 40
lrate = 0.01
decay = lrate/epochs
opt = Adam(lr=INIT_LR, decay=INIT_LR / epochs)
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
print(model.summary())
Accuracy that I got:
Train on 36124 samples, validate on 4014 samples
Epoch 1/40
36124/36124 [==============================] - 2161s 60ms/step - loss: 2.1642 - acc: 0.4387 - val_loss: 1.8971 - val_acc: 0.4584
Epoch 2/40
36124/36124 [==============================] - 2185s 60ms/step - loss: 1.8403 - acc: 0.4813 - val_loss: 1.6874 - val_acc: 0.4983
Epoch 3/40
36124/36124 [==============================] - 3774s 104ms/step - loss: 1.6476 - acc: 0.5231 - val_loss: 1.5375 - val_acc: 0.5451
Epoch 4/40
36124/36124 [==============================] - 2194s 61ms/step - loss: 1.5143 - acc: 0.5572 - val_loss: 1.4662 - val_acc: 0.5688
Epoch 5/40
36124/36124 [==============================] - 2079s 58ms/step - loss: 1.4169 - acc: 0.5792 - val_loss: 1.3685 - val_acc: 0.5952
Epoch 6/40
36124/36124 [==============================] - 2203s 61ms/step - loss: 1.3441 - acc: 0.6011 - val_loss: 1.4403 - val_acc: 0.5850
Epoch 7/40
36124/36124 [==============================] - 2212s 61ms/step - loss: 1.2922 - acc: 0.6140 - val_loss: 1.2964 - val_acc: 0.6168
Epoch 8/40
36124/36124 [==============================] - 2179s 60ms/step - loss: 1.2490 - acc: 0.6254 - val_loss: 1.2622 - val_acc: 0.6243
Epoch 9/40
36124/36124 [==============================] - 2169s 60ms/step - loss: 1.2033 - acc: 0.6377 - val_loss: 1.2622 - val_acc: 0.6206
Epoch 10/40
36124/36124 [==============================] - 2171s 60ms/step - loss: 1.1762 - acc: 0.6460 - val_loss: 1.3887 - val_acc: 0.6001
Epoch 11/40
36124/36124 [==============================] - 2168s 60ms/step - loss: 1.1313 - acc: 0.6577 - val_loss: 1.1599 - val_acc: 0.6452
Epoch 12/40
36124/36124 [==============================] - 2168s 60ms/step - loss: 1.1002 - acc: 0.6658 - val_loss: 1.2067 - val_acc: 0.6390
Epoch 13/40
36124/36124 [==============================] - 2170s 60ms/step - loss: 1.0932 - acc: 0.6676 - val_loss: 1.2386 - val_acc: 0.6335
Epoch 14/40
36124/36124 [==============================] - 2169s 60ms/step - loss: 1.0518 - acc: 0.6768 - val_loss: 1.1448 - val_acc: 0.6490
Epoch 15/40
36124/36124 [==============================] - 2168s 60ms/step - loss: 1.0342 - acc: 0.6832 - val_loss: 1.1420 - val_acc: 0.6522
Epoch 16/40
36124/36124 [==============================] - 2170s 60ms/step - loss: 1.0104 - acc: 0.6894 - val_loss: 1.2271 - val_acc: 0.6385
Epoch 17/40
36124/36124 [==============================] - 2168s 60ms/step - loss: 0.9855 - acc: 0.6964 - val_loss: 1.1793 - val_acc: 0.6517
Epoch 18/40
36124/36124 [==============================] - 2184s 60ms/step - loss: 0.9635 - acc: 0.7029 - val_loss: 1.1647 - val_acc: 0.6574
Epoch 19/40
36124/36124 [==============================] - 2074s 57ms/step - loss: 0.9517 - acc: 0.7071 - val_loss: 1.1118 - val_acc: 0.6639
Epoch 20/40
36124/36124 [==============================] - 2063s 57ms/step - loss: 0.9276 - acc: 0.7144 - val_loss: 1.1187 - val_acc: 0.6662
Epoch 21/40
36124/36124 [==============================] - 2104s 58ms/step - loss: 0.9111 - acc: 0.7202 - val_loss: 1.1444 - val_acc: 0.6637
Epoch 22/40
36124/36124 [==============================] - 2156s 60ms/step - loss: 0.8872 - acc: 0.7231 - val_loss: 1.1062 - val_acc: 0.6684
Epoch 23/40
36124/36124 [==============================] - 2181s 60ms/step - loss: 0.8716 - acc: 0.7279 - val_loss: 1.1912 - val_acc: 0.6540
Epoch 24/40
36124/36124 [==============================] - 2100s 58ms/step - loss: 0.8596 - acc: 0.7336 - val_loss: 1.1339 - val_acc: 0.6664
Epoch 25/40
36124/36124 [==============================] - 3357s 93ms/step - loss: 0.8412 - acc: 0.7380 - val_loss: 1.1295 - val_acc: 0.6627
Epoch 26/40
36124/36124 [==============================] - 2170s 60ms/step - loss: 0.8104 - acc: 0.7475 - val_loss: 1.1511 - val_acc: 0.6572
Epoch 27/40
36124/36124 [==============================] - 2131s 59ms/step - loss: 0.8091 - acc: 0.7468 - val_loss: 1.1501 - val_acc: 0.6679
Epoch 28/40
36124/36124 [==============================] - 2107s 58ms/step - loss: 0.7791 - acc: 0.7569 - val_loss: 1.1579 - val_acc: 0.6637
Epoch 29/40
36124/36124 [==============================] - 2247s 62ms/step - loss: 0.7665 - acc: 0.7598 - val_loss: 1.1310 - val_acc: 0.6724
Epoch 30/40
36124/36124 [==============================] - 2019s 56ms/step - loss: 0.7575 - acc: 0.7615 - val_loss: 1.1065 - val_acc: 0.6766
Epoch 31/40
36124/36124 [==============================] - 2098s 58ms/step - loss: 0.7344 - acc: 0.7705 - val_loss: 1.1025 - val_acc: 0.6751
Epoch 32/40
36124/36124 [==============================] - 2170s 60ms/step - loss: 0.7246 - acc: 0.7726 - val_loss: 1.1563 - val_acc: 0.6694
Epoch 33/40
36124/36124 [==============================] - 4057s 112ms/step - loss: 0.7133 - acc: 0.7777 - val_loss: 1.1328 - val_acc: 0.6714
Epoch 34/40
36124/36124 [==============================] - 2177s 60ms/step - loss: 0.6873 - acc: 0.7832 - val_loss: 1.1047 - val_acc: 0.6886
Epoch 35/40
36124/36124 [==============================] - 2175s 60ms/step - loss: 0.6816 - acc: 0.7860 - val_loss: 1.1477 - val_acc: 0.6662
Epoch 36/40
36124/36124 [==============================] - 2177s 60ms/step - loss: 0.6684 - acc: 0.7885 - val_loss: 1.1006 - val_acc: 0.6886
Epoch 37/40
36124/36124 [==============================] - 2179s 60ms/step - loss: 0.6622 - acc: 0.7951 - val_loss: 1.1352 - val_acc: 0.6814
Epoch 38/40
36124/36124 [==============================] - 2177s 60ms/step - loss: 0.6393 - acc: 0.7976 - val_loss: 1.1688 - val_acc: 0.6707
Epoch 39/40
36124/36124 [==============================] - 2137s 59ms/step - loss: 0.6263 - acc: 0.8018 - val_loss: 1.1279 - val_acc: 0.6896
Epoch 40/40
8160/36124 [=====>........................] - ETA: 26:35 - loss: 0.5668 - acc: 0.8205
Can anyone suggest a way to improve the model's accuracy? I tried increasing the number of layers and the number of epochs, but the accuracy I get is around 65 to 68 percent.
In Ng's course on Coursera he says that:
If you have HIGH BIAS, you should:
build a bigger NN, or
train longer, or
change your architecture.
If you have HIGH VARIANCE, then:
collect more data,
regularize your NN (see the sketch below), or
change your architecture.
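For the high-variance case here, a minimal sketch of regularizing the dense layers above with an L2 weight penalty (the 1e-3 strength is an illustrative assumption, not a tuned value):
from keras.regularizers import l2
# Same dense layers as above, with an added L2 penalty on the weights
model.add(Dense(1024, activation='relu', kernel_regularizer=l2(1e-3)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu', kernel_regularizer=l2(1e-3)))
You could also get more data cheaply by augmenting the existing training set with keras.preprocessing.image.ImageDataGenerator (flips, shifts, small rotations).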

TimeDistributed Model Not learning

I am trying to train a model to play Chrome Dino (the offline game).
The idea is to take the last 6 screenshots of the game, run a CNN on each separately (to extract features), and then feed those features as timesteps into an LSTM.
My training data is X = [6 timestep game screenshots] -> y = [1, 0] (keep running, jump).
Timestep example
I have even split the dataset so it has 50% jump examples and 50% keep-running examples.
Sadly, I am stuck at 50% accuracy, and the loss is stuck too.
198/198 [==============================] - 0s - loss: 0.6944 - acc: 0.4596
Epoch 91/100
198/198 [==============================] - 0s - loss: 0.6932 - acc: 0.5000
Epoch 92/100
198/198 [==============================] - 0s - loss: 0.6932 - acc: 0.5000
Epoch 93/100
198/198 [==============================] - 0s - loss: 0.6932 - acc: 0.5000
Epoch 94/100
198/198 [==============================] - 0s - loss: 0.6933 - acc: 0.5000
Epoch 95/100
198/198 [==============================] - 0s - loss: 0.6942 - acc: 0.5000
Epoch 96/100
198/198 [==============================] - 0s - loss: 0.6939 - acc: 0.5000
Epoch 97/100
198/198 [==============================] - 0s - loss: 0.6935 - acc: 0.5000
I have tried many model hyperparameters with different layers, but I always get the same result.
Current model:
model = Sequential()
model.add(TimeDistributed(Convolution2D(64, 3, 3, activation='relu'), input_shape=(FRAMES_TO_PROCESS, FRAME_HEIGHT,FRAME_WIDTH, FRAME_FILTERS )))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
model.add(TimeDistributed(ZeroPadding2D((1,1))))
model.add(TimeDistributed(Convolution2D(64, 3, 3, activation='relu')))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2), strides=(2,2))))
model.add(TimeDistributed(ZeroPadding2D((1,1))))
model.add(TimeDistributed(Convolution2D(128, 3, 3, activation='relu')))
model.add(TimeDistributed(ZeroPadding2D((1,1))))
model.add(TimeDistributed(Convolution2D(128, 3, 3, activation='relu')))
model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2), strides=(2,2))))
model.add(Activation('relu'))
model.add(TimeDistributed(Flatten()))
model.add(Dropout(0.1))
model.add(LSTM(120, return_sequences=False))
model.add(Dense(2, activation='softmax'))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=['accuracy'])
Any idea what went wrong?
