I have a problem applying a masking layer to the CNN part of a CNN + RNN/LSTM model.
My data is not original images; I converted it into a shape of (16, 34, 4) (channels_first). The data is sequential, and the longest sequence is 22 steps, so for a fixed-size input I set the number of timesteps to 22. Since sequences may be shorter than 22 steps, I fill the remaining steps with np.zeros. However, the zero padding makes up about half of the whole dataset, so with that much useless data the training cannot reach a very good result. I therefore want to add a mask to cancel out the zero-padded steps.
Here is my code.
import numpy as np
from keras.models import Sequential
from keras.layers import (TimeDistributed, Masking, Conv2D,
                          BatchNormalization, Dropout, Flatten, GRU, Dense)

mask = np.zeros((16, 34, 4), dtype=np.int8)
input_shape = (22, 16, 34, 4)

model = Sequential()
# Note: Masking's mask_value is documented as a scalar; the all-zero array is
# one of the variants I tried (see below).
model.add(TimeDistributed(Masking(mask_value=mask), input_shape=input_shape, name='mask'))
model.add(TimeDistributed(Conv2D(100, (5, 2), data_format='channels_first', activation='relu'), name='conv1'))
model.add(TimeDistributed(BatchNormalization(), name='bn1'))
model.add(Dropout(0.5, name='drop1'))
model.add(TimeDistributed(Conv2D(100, (5, 2), data_format='channels_first', activation='relu'), name='conv2'))
model.add(TimeDistributed(BatchNormalization(), name='bn2'))
model.add(Dropout(0.5, name='drop2'))
model.add(TimeDistributed(Conv2D(100, (5, 2), data_format='channels_first', activation='relu'), name='conv3'))
model.add(TimeDistributed(BatchNormalization(), name='bn3'))
model.add(Dropout(0.5, name='drop3'))
model.add(TimeDistributed(Flatten(), name='flatten'))
model.add(GRU(256, activation='tanh', return_sequences=True, name='gru'))
model.add(Dropout(0.4, name='drop_gru'))
model.add(Dense(35, activation='softmax', name='softmax'))
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['acc'])
Here's the model structure.
model.summary():
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
mask (TimeDist (None, 22, 16, 34, 4) 0
_________________________________________________________________
conv1 (TimeDistributed) (None, 22, 100, 30, 3) 16100
_________________________________________________________________
bn1 (TimeDistributed) (None, 22, 100, 30, 3) 12
_________________________________________________________________
drop1 (Dropout) (None, 22, 100, 30, 3) 0
_________________________________________________________________
conv2 (TimeDistributed) (None, 22, 100, 26, 2) 100100
_________________________________________________________________
bn2 (TimeDistributed) (None, 22, 100, 26, 2) 8
_________________________________________________________________
drop2 (Dropout) (None, 22, 100, 26, 2) 0
_________________________________________________________________
conv3 (TimeDistributed) (None, 22, 100, 22, 1) 100100
_________________________________________________________________
bn3 (TimeDistributed) (None, 22, 100, 22, 1) 4
_________________________________________________________________
drop3 (Dropout) (None, 22, 100, 22, 1) 0
_________________________________________________________________
flatten (TimeDistributed) (None, 22, 2200) 0
_________________________________________________________________
gru (GRU) (None, 22, 256) 1886976
_________________________________________________________________
drop_gru (Dropout) (None, 22, 256) 0
_________________________________________________________________
softmax (Dense) (None, 22, 35) 8995
=================================================================
Total params: 2,112,295
Trainable params: 2,112,283
Non-trainable params: 12
_________________________________________________________________
For mask_value, I tried both 0 and the mask array above, but neither works: the model still trains on all the data, half of which is zero padding.
Can anyone help me?
By the way, I used TimeDistributed here to connect the CNN to the RNN, and I know there is also a layer called ConvLSTM2D. Does anyone know the difference? ConvLSTM2D takes many more parameters and trains much more slowly than the TimeDistributed approach...
Unfortunately masking is not yet supported by the Keras Conv layers. There have been several issues posted about this on the Keras GitHub page; here is the one with the most substantial conversation on the topic. It appears that the effort got hung up on implementation details, and the issue was never resolved.
The workaround proposed in the discussion is to have an explicit embedding for the padding character in sequences and do global pooling. Here is another workaround I found (not helpful for my use case but maybe helpful to you): keeping a mask array and merging it in through multiplication.
You can also check out the conversation around this question which is similar to yours.
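To make the multiplication workaround concrete, here is a minimal sketch under stated assumptions: the frames keep your (22, 16, 34, 4) shape, a second input carries a per-timestep binary mask (1 for real steps, 0 for padding), and the layer sizes are illustrative, not taken from your model. Zeroing the features alone does not remove padded steps from the loss, so the loss is additionally weighted per timestep via Keras's sample_weight_mode='temporal'.

from keras.layers import (Input, TimeDistributed, Conv2D, Flatten,
                          Multiply, GRU, Dense)
from keras.models import Model

frames = Input(shape=(22, 16, 34, 4), name='frames')
step_mask = Input(shape=(22, 1), name='step_mask')  # 1 = real step, 0 = padding

x = TimeDistributed(Conv2D(100, (5, 2), data_format='channels_first',
                           activation='relu'))(frames)
x = TimeDistributed(Flatten())(x)
x = Multiply()([x, step_mask])  # broadcasts over features, zeroing padded steps
x = GRU(256, return_sequences=True)(x)
out = Dense(35, activation='softmax')(x)
model = Model([frames, step_mask], out)

# Per-timestep loss weights zero out the padded steps' contribution:
model.compile(optimizer='Adam', loss='categorical_crossentropy',
              sample_weight_mode='temporal')
# model.fit([X, M], Y, sample_weight=M[..., 0])  # M has shape (n_samples, 22, 1)

The GRU still sees the padded steps (unlike true masking, its state is updated on them), but their features are zeroed and they contribute nothing to the loss.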
Related
As the title describes, I want model.summary() to display the layers of a pretrained model instead of a single entry (see the vgg19 (Functional) entry below).
Here is a sample model implemented using the Keras Sequential API (note the base model is VGG19, matching the summary output below):
from tensorflow.keras.applications import VGG19
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

base_model = VGG19(include_top=False, weights=None, input_shape=(32, 32, 3), pooling='max', classes=10)
model = Sequential()
model.add(base_model)
model.add(Flatten())
model.add(Dense(1_000, activation='relu'))
model.add(Dense(10, activation='softmax'))
And here is the output of the model.summary() function call:
Model: "sequential_15"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
vgg19 (Functional) (None, 512) 20024384
_________________________________________________________________
flatten_15 (Flatten) (None, 512) 0
_________________________________________________________________
dense_21 (Dense) (None, 1000) 513000
_________________________________________________________________
dense_22 (Dense) (None, 10) 10010
=================================================================
Total params: 20,547,394
Trainable params: 523,010
Non-trainable params: 20,024,384
Edit: Here is the Functional API equivalent of the implemented Sequential API model - the result is the same:
from tensorflow.keras.applications import VGG19
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Flatten, Dense

base_model = VGG19(include_top=False, weights='imagenet', input_shape=(32, 32, 3), pooling='max', classes=10)
m_inputs = Input(shape=(32, 32, 3))
base_out = base_model(m_inputs)
x = Flatten()(base_out)
x = Dense(1_000, activation='relu')(x)
m_outputs = Dense(10, activation='softmax')(x)
model = Model(inputs=m_inputs, outputs=m_outputs)
Instead of using Sequential, I tried using the Functional API, i.e. the tf.keras.models.Model class, like so:
import tensorflow as tf
base_model = tf.keras.applications.VGG16(include_top=False, weights=None, input_shape=(32, 32, 3), pooling='max', classes=10)
x = tf.keras.layers.Flatten()( base_model.output )
x = tf.keras.layers.Dense(1_000, activation='relu')( x )
outputs = tf.keras.layers.Dense(10, activation='softmax')( x )
model = tf.keras.models.Model( base_model.input , outputs )
model.summary()
The output of the above snippet:
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_3 (InputLayer) [(None, 32, 32, 3)] 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 32, 32, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 32, 32, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 16, 16, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 16, 16, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 16, 16, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 8, 8, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 8, 8, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 8, 8, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 8, 8, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 4, 4, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 4, 4, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 4, 4, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 4, 4, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 2, 2, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 2, 2, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 2, 2, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 2, 2, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 1, 1, 512) 0
_________________________________________________________________
global_max_pooling2d_2 (Glob (None, 512) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 1000) 513000
_________________________________________________________________
dense_3 (Dense) (None, 10) 10010
=================================================================
Total params: 15,237,698
Trainable params: 15,237,698
Non-trainable params: 0
_________________________________________________________________
My understanding, after going through the docs and running a few tests (on TF 2.5.0), is that when such a model is included in another model, Keras treats it as a "black box": it is not a simple layer, and definitely not a tensor, but an object of the complex type tensorflow.python.keras.engine.functional.Functional.
I reckon this is the underlying reason you cannot print it out in detail as part of the model summary.
Now, if you'd like to just review the pre-trained model, have a sneak peek, etc., you can simply run:
base_model.summary()
or after constructing your model (sequential or functional, doesn't matter at this point):
model.layers[i].summary() # i: the index of your pre-trained model
If you need to access the pre-trained model's layers, e.g. to use its weights separately, you can access them this way as well.
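If you want every nested sub-model expanded automatically, a small recursive helper does the job. This is a sketch assuming TF 2.x; expand_summary is an illustrative name, not a Keras API:

import tensorflow as tf

def expand_summary(model):
    """Print the model's summary, then recurse into any nested sub-models."""
    model.summary()
    for layer in model.layers:
        if isinstance(layer, tf.keras.Model):  # Functional and Sequential both subclass Model
            expand_summary(layer)

expand_summary(model)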
If you'd like to print the layers of your model as a whole, then you need to trick Keras into believing the "black box" is no stranger but just another KerasTensor. To do that, you can wrap the pre-trained model in another layer (in other words, connect them directly via the Functional API), which was suggested above and has worked fine for me.
x = tf.keras.layers.Flatten()( base_model.output )
I don't know if there is any specific reason that you'd like to pursue the new input route as in...
m_inputs = Input(shape=(32, 32, 3))
base_out = base_model(m_inputs)
Whenever you place the pre-trained model in the middle of your new model, whether after a new Input layer or by adding it to a Sequential model, the layers within it disappear from the summary output.
Generating a new Input layer or just feeding the pre-trained model's output as input to the current model didn't make any difference for me in this case.
Hope this clarifies the topic a wee bit more, and helps.
This should do what you want: adding each layer of the base model to the Sequential model directly makes every layer a top-level entry, so model.summary() lists them all individually.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

base_model = VGG16(include_top=False, weights=None, input_shape=(32, 32, 3), pooling='max', classes=10)
model = Sequential()
for layer in base_model.layers:
    layer.trainable = False
    model.add(layer)
model.add(Flatten())
model.add(Dense(1_000, activation='relu'))
model.add(Dense(10, activation='softmax'))
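A quick way to verify both effects: model.summary() should now show one row per VGG16 layer, and "Non-trainable params" should equal the VGG16 no-top weight count (14,714,688, per the summary earlier in this thread).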
I trained and loaded a CNN + Dense model:
from keras.models import load_model

# load model
cnn_model = load_model('my_cnn_model.h5')
cnn_model.summary()
The output is this (my images have dimensions 2 x 3600):
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 2, 3600, 32) 128
_________________________________________________________________
conv2d_2 (Conv2D) (None, 2, 1800, 32) 3104
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 2, 600, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 2, 600, 64) 6208
_________________________________________________________________
conv2d_4 (Conv2D) (None, 2, 300, 64) 12352
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 2, 100, 64) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 2, 100, 128) 24704
_________________________________________________________________
conv2d_6 (Conv2D) (None, 2, 50, 128) 49280
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 2, 16, 128) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4096) 0
_________________________________________________________________
dense_1 (Dense) (None, 1024) 4195328
_________________________________________________________________
dense_2 (Dense) (None, 1024) 1049600
_________________________________________________________________
dense_3 (Dense) (None, 3) 3075
=================================================================
Total params: 5,343,779
Trainable params: 5,343,779
Non-trainable params: 0
Now I want to keep the weights up to the Flatten layer, and replace the Dense layers with LSTM layers so that only the added LSTM part is trained.
I just wrote:
from keras.layers import LSTM, Dense
from keras.models import Model

# freeze model
base_model = cnn_model(input_shape=(2, 3600, 1))
#base_model = cnn_model
base_model.trainable = False
# Adding the first LSTM layer
x = LSTM(1024, activation='relu', return_sequences=True)(base_model.output)
# Adding the second LSTM layer
x = LSTM(1024, activation='relu', return_sequences=False)(x)
# Adding the output
output = Dense(3, activation='linear')(x)
# Final model creation
model = Model(inputs=[base_model.input], outputs=[output])
But I obtained:
base_model = cnn_model(input_shape=(2, 3600, 1))
TypeError: __call__() missing 1 required positional argument: 'inputs'
I know I should ideally add TimeDistributed around the Flatten layer, but I do not know how to do it.
Moreover, I'm not sure whether base_model.trainable = False does exactly what I want.
Can you please help me to do the job?
Thank you very much!
You can't feed the output of Flatten() directly to an LSTM: an LSTM needs 2-D features of shape (time, features). You have to reshape your tensors.
You can take the output from the layer before the Flatten (the last max-pooling); say this layer has index i in the model. We take the output from that layer, reshape it as needed, and pass it to the LSTM.
from keras.layers import Reshape, LSTM, Dense
from keras.models import Model

before_flatten = base_model.layers[i].output  # i: index of the layer whose output you want
conv2lstm_reshape = Reshape((-1, 2))(before_flatten)  # choose the temporal dim and features yourself

# Adding the first LSTM layer
x = LSTM(1024, activation='relu', return_sequences=True)(conv2lstm_reshape)
# Adding the second LSTM layer
x = LSTM(1024, activation='relu', return_sequences=False)(x)
# Adding the output (applied to x, the LSTM output, not to before_flatten)
output = Dense(3, activation='linear')(x)
# Final model creation
model = Model(inputs=[base_model.input], outputs=[output])
model.summary()
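As for base_model.trainable = False, here is a minimal sketch of freezing the convolutional part, assuming base_model is the loaded CNN; the freeze must be set before compiling for it to take effect:

for layer in base_model.layers:
    layer.trainable = False  # keep the pre-trained CNN weights fixed

model.compile(optimizer='adam', loss='mse')
model.summary()  # "Non-trainable params" should now cover the CNN weights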
When I train my model it has a two-dimensional output of shape (None, 1), corresponding to the time series I'm trying to predict. But whenever I load the saved model to make predictions, it has a three-dimensional output, (None, 40, 1), where 40 corresponds to the n_steps required to fit the Conv1D network. What is wrong?
Here is the code:
import numpy as np
import matplotlib.pyplot as plt

df = np.load('Principal.npy')
# Conv1D
#model = load_model('ModeloConv1D.h5')
model = autoencoder_conv1D((2, 20, 17), n_steps=40)
model.load_weights('weights_35067.hdf5')
# summarize model
model.summary()
# split the dataset into input (X) and output (Y) variables
X = f.separar_interface(df, n_steps=40)
# THE X INPUT SHAPE (59891, 17): length and attributes, respectively
# reshape to the Conv1D input format
X = X.reshape(X.shape[0], 2, 20, X.shape[2])
# make predictions
test_predictions = model.predict(X)
## test_predictions.shape = (59891, 40, 1)
test_predictions = model.predict(X).flatten()
## test_predictions.shape = (2395640,)
plt.figure(3)
plt.plot(test_predictions)
plt.legend(['Prediction'])
plt.show()
In the plot below you can see that it is plotting the input format.
Here is the network architecture:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
time_distributed_70 (TimeDis (None, 1, 31, 24) 4104
_________________________________________________________________
time_distributed_71 (TimeDis (None, 1, 4, 24) 0
_________________________________________________________________
time_distributed_72 (TimeDis (None, 1, 4, 48) 9264
_________________________________________________________________
time_distributed_73 (TimeDis (None, 1, 1, 48) 0
_________________________________________________________________
time_distributed_74 (TimeDis (None, 1, 1, 64) 12352
_________________________________________________________________
time_distributed_75 (TimeDis (None, 1, 1, 64) 0
_________________________________________________________________
time_distributed_76 (TimeDis (None, 1, 64) 0
_________________________________________________________________
lstm_17 (LSTM) (None, 100) 66000
_________________________________________________________________
repeat_vector_9 (RepeatVecto (None, 40, 100) 0
_________________________________________________________________
lstm_18 (LSTM) (None, 40, 100) 80400
_________________________________________________________________
time_distributed_77 (TimeDis (None, 40, 1024) 103424
_________________________________________________________________
dropout_9 (Dropout) (None, 40, 1024) 0
_________________________________________________________________
dense_18 (Dense) (None, 40, 1) 1025
=================================================================
Since I've found my mistake, and since it may be useful for someone else, I'll reply to my own question:
In fact, the network output has the same format as the training labels: the saved model generates an output of shape (None, 40, 1) because that is exactly the shape I gave the training labels.
The difference between the output seen during training and during prediction most probably comes from using a method such as train_test_split during training, which shuffles the data; what you see at the end of training is the output for that shuffled batch.
To correct the problem, change the shape of the dataset labels from (None, 40, 1) to (None, 1), since this is a regression problem on a time series. To achieve that in the network above, add a Flatten layer before the final Dense output layer; that gives the result you are looking for.
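A minimal sketch of that fix, with layer sizes shaped like the summary above (the encoder/decoder sizes are illustrative): flattening the (40, 1024) sequence before the last Dense collapses the output to (None, 1), matching the (None, 1) labels.

from keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense, Dropout, Flatten
from keras.models import Model

inp = Input(shape=(40, 17))                        # 40 timesteps, 17 features
x = LSTM(100)(inp)                                 # encoder
x = RepeatVector(40)(x)
x = LSTM(100, return_sequences=True)(x)            # decoder
x = TimeDistributed(Dense(1024, activation='relu'))(x)
x = Dropout(0.5)(x)
x = Flatten()(x)                                   # (None, 40 * 1024)
out = Dense(1)(x)                                  # (None, 1), one value per window
model = Model(inp, out)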
Hi everyone,
I have a question about how to modify the pre-trained VGG16 network in Keras. I am trying to remove the max-pooling layers at the end of the last three convolutional blocks and add a batch normalization layer after each new convolutional layer. At the same time, I want to keep the pre-trained parameters. This means the modification involves not only removing some middle layers and adding new ones, but also reconnecting the modified layers with the remaining ones.
I'm still very new in Keras. The only way I can find is as shown in
Removing then Inserting a New Middle Layer in a Keras Model
So the code I edited is as below:
from keras import applications
from keras.models import Model
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers.normalization import BatchNormalization
vgg_model = applications.VGG16(weights='imagenet',
                               include_top=False,
                               input_shape=(160, 80, 3))
# Disassemble layers
layers = [l for l in vgg_model.layers]
# Defining new convolutional layer.
# Important: the number of filters should be the same!
# Note: the receptive field of two 3x3 convolutions is 5x5.
layer_dict = dict([(layer.name, layer) for layer in vgg_model.layers])
x = layer_dict['block3_conv3'].output
for i in range(11, len(layers)-5):
    # layers[i].trainable = False
    x = layers[i](x)
for j in range(15, len(layers)-1):
    # layers[j].trainable = False
    x = layers[j](x)
x = Conv2D(filters=128, kernel_size=(1, 1))(x)
x = BatchNormalization()(x)
x = Conv2D(filters=128, kernel_size=(1, 1))(x)
x = BatchNormalization()(x)
x = Conv2D(filters=128, kernel_size=(1, 1))(x)
x = BatchNormalization()(x)
x = Flatten()(x)
x = Dense(50, activation='softmax')(x)
custom_model = Model(inputs=vgg_model.input, outputs=x)
for layer in custom_model.layers[:16]:
layer.trainable = False
custom_model.summary()
However, the output shapes of the convolutional layers in block 4 and block 5 show as multiple. I tried to correct this by adding a MaxPool2D(pool_size=(1, 1), strides=None) layer, but the output shape is still multiple. Just like this:
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 160, 80, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 160, 80, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 160, 80, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 80, 40, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 80, 40, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 80, 40, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 40, 20, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 40, 20, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 40, 20, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 40, 20, 256) 590080
_________________________________________________________________
block4_conv1 (Conv2D) multiple 1180160
_________________________________________________________________
block4_conv2 (Conv2D) multiple 2359808
_________________________________________________________________
block4_conv3 (Conv2D) multiple 2359808
_________________________________________________________________
block5_conv1 (Conv2D) multiple 2359808
_________________________________________________________________
block5_conv2 (Conv2D) multiple 2359808
_________________________________________________________________
block5_conv3 (Conv2D) multiple 2359808
_________________________________________________________________
conv2d_1 (Conv2D) (None, 40, 20, 128) 65664
_________________________________________________________________
batch_normalization_1 (Batch (None, 40, 20, 128) 512
_________________________________________________________________
conv2d_2 (Conv2D) (None, 40, 20, 128) 16512
_________________________________________________________________
batch_normalization_2 (Batch (None, 40, 20, 128) 512
_________________________________________________________________
conv2d_3 (Conv2D) (None, 40, 20, 128) 16512
_________________________________________________________________
batch_normalization_3 (Batch (None, 40, 20, 128) 512
_________________________________________________________________
flatten_1 (Flatten) (None, 102400) 0
_________________________________________________________________
dense_1 (Dense) (None, 50) 5120050
=================================================================
Total params: 19,934,962
Trainable params: 5,219,506
Non-trainable params: 14,715,456
_________________________________________________________________
Can anyone provide some suggestions about how to reach my goal?
Thanks very much.
The multiple output shape is there because these layers were called twice, so they have two output shapes.
You can see here that when calling layer.output_shape raises an AttributeError, the printed output shape will be 'multiple'.
If you call custom_model.layers[10].output_shape, you will get this error :
AttributeError: The layer "block4_conv1" has multiple inbound nodes, with different output shapes. Hence the notion of "output shape" is ill-defined for the layer. Use `get_output_shape_at(node_index)` instead.
And if you then call custom_model.layers[10].get_output_shape_at(0), you will get the output shape corresponding to the initial network, and for custom_model.layers[10].get_output_shape_at(1), you will get the output shape that you are expecting.
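For instance, a quick illustrative check (the layer index follows the error above):

layer = custom_model.layers[10]       # block4_conv1
print(layer.get_output_shape_at(0))   # shape in the original VGG16 graph
print(layer.get_output_shape_at(1))   # shape in the modified graph: (None, 40, 20, 512)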
Let me also express a doubt about the intention of this modification: if you remove the MaxPooling layer and apply the next layer (number 11) to the output that came before the MaxPooling layer, the learnt filters are "expecting" an input with half the resolution of what they now receive, so they probably won't work.
Let's imagine that one filter is "looking" for eyes and that eyes are usually 10 pixels wide; you will now need a 20-pixel-wide eye to trigger the same activation in the layer.
My example is obviously over-simplistic and not accurate, but it shows that the original idea is flawed: you should either retrain the top of the model, keep the MaxPooling layers, or define a brand-new model on top of layer block3_conv3.
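A minimal sketch of the last option: build a brand-new head on top of block3_conv3, so every pre-trained layer is called exactly once and all output shapes stay well-defined. The head below mirrors the 1x1 Conv + BatchNormalization stack from the question; the sizes are illustrative.

from keras import applications
from keras.models import Model
from keras.layers import Conv2D, BatchNormalization, Flatten, Dense

vgg = applications.VGG16(weights='imagenet', include_top=False,
                         input_shape=(160, 80, 3))
x = vgg.get_layer('block3_conv3').output   # (None, 40, 20, 256), pre-trained
x = Conv2D(128, (1, 1))(x)
x = BatchNormalization()(x)
x = Flatten()(x)
out = Dense(50, activation='softmax')(x)

custom_model = Model(inputs=vgg.input, outputs=out)
for layer in custom_model.layers[:10]:     # freeze the pre-trained VGG part
    layer.trainable = False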
I am following a Keras tutorial and want to shadow it in PyTorch, so I am translating. I'm not strongly familiar with either framework, and I'm coming unstuck on the input-size parameters especially, but also on the final layer: do I need another Linear layer? Can anyone translate the following to a PyTorch Sequential definition?
from keras.layers import Input, Conv2D, MaxPooling2D, Dense
from keras.models import Model

visible = Input(shape=(64, 64, 1))
conv1 = Conv2D(32, kernel_size=4, activation='relu')(visible)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(16, kernel_size=4, activation='relu')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
hidden1 = Dense(10, activation='relu')(pool2)
output = Dense(1, activation='sigmoid')(hidden1)
model = Model(inputs=visible, outputs=output)
This is the output of the model:
Layer (type) Output Shape Param #
_________________________________________________________________
input_1 (InputLayer) (None, 64, 64, 1) 0
conv2d_1 (Conv2D) (None, 61, 61, 32) 544
max_pooling2d_1 (MaxPooling2 (None, 30, 30, 32) 0
conv2d_2 (Conv2D) (None, 27, 27, 16) 8208
max_pooling2d_2 (MaxPooling2 (None, 13, 13, 16) 0
dense_1 (Dense) (None, 13, 13, 10) 170
dense_2 (Dense) (None, 13, 13, 1) 11
Total params: 8,933
Trainable params: 8,933
Non-trainable params: 0
What I have worked out lacks a specification of the input shape, and I am also a bit perplexed by the translation of stride: the Keras model uses stride 2 in MaxPooling2D but doesn't specify strides elsewhere; it is perhaps a toy example.
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 32, 4),    # in_channels=1, out_channels=32, kernel_size=4
    nn.ReLU(),
    nn.MaxPool2d(2, 2),
    nn.Conv2d(32, 16, 4),   # in_channels must match the previous out_channels
    nn.ReLU(),
    nn.MaxPool2d(2, 2),
    nn.Linear(10, 1),       # unsure: does this match the two Keras Dense layers?
    nn.Sigmoid(),
)
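For reference, here is a hedged sketch of a faithful translation. The Keras model applies Dense to a 4-D tensor, which acts pointwise on the channel axis; the PyTorch equivalent of that is a 1x1 convolution (an unusual design, but it reproduces the tutorial's shapes). PyTorch's nn.Sequential never takes an input size: only channel counts are specified, and spatial shapes are inferred at the first forward pass.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=4),   # (N, 1, 64, 64) -> (N, 32, 61, 61)
    nn.ReLU(),
    nn.MaxPool2d(2, 2),                # -> (N, 32, 30, 30)
    nn.Conv2d(32, 16, kernel_size=4),  # -> (N, 16, 27, 27)
    nn.ReLU(),
    nn.MaxPool2d(2, 2),                # -> (N, 16, 13, 13)
    nn.Conv2d(16, 10, kernel_size=1),  # Dense(10, relu) over channels
    nn.ReLU(),
    nn.Conv2d(10, 1, kernel_size=1),   # Dense(1, sigmoid) over channels
    nn.Sigmoid(),
)

x = torch.randn(1, 1, 64, 64)
print(model(x).shape)  # torch.Size([1, 1, 13, 13]), matching Keras (None, 13, 13, 1)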