How to combine embedded columns with other input data in Keras - python-3.x

I have one column of categorical data with 1003 different categories, and many columns of regular integer data. I want to embed the categorical column and feed the embedded output, together with all the other columns, as input to my model. I am unsure how to do this but have tried the following code using Merge. Unfortunately, it gives a ValueError: '"concat" mode can only merge layers with matching output shapes except for the concat axis. Layer shapes: [(None, 1, 11), (None, 53)]'.
Any help would be greatly appreciated.
hidden_layers = [1000, 500, 500]
embedding = Sequential()
embedding.add(Embedding(1003, 11, input_length=1))
model1 = Sequential()
model1.add(Dense(53, input_dim=53, activation='relu'))
model = Sequential()
model.add(Merge([embedding, model1], mode='concat'))
for i, layer_size in enumerate(hidden_layers):
    model.add(Dense(layer_size, activation='relu'))
model.add(Dense(self.output_layers, activation='linear'))
model.compile(optimizer='adam', loss='mse')

The Embedding layer produces a 3D tensor, as you can see in the error message: (None, 1, 11), where 1 is the sequence length you are embedding. In order to merge it with a 2D tensor you have to Flatten it first:
embedding = Sequential()
embedding.add(Embedding(1003, 11, input_length = 1))
embedding.add(Flatten())
which will give (None, 11) and can be merged with (None, 53).
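If you are on a newer Keras version where the Sequential Merge layer no longer exists, the same idea is usually expressed with the functional API and a Concatenate layer. A minimal sketch, assuming the 53 numeric columns are fed in directly and n_outputs is a placeholder for your output size:

from keras.layers import Input, Embedding, Flatten, Dense, Concatenate
from keras.models import Model

n_outputs = 1  # placeholder for the number of regression targets

cat_in = Input(shape=(1,), name='categorical_column')   # integer-encoded category
num_in = Input(shape=(53,), name='numeric_columns')     # the 53 integer columns

x = Embedding(1003, 11, input_length=1)(cat_in)          # (None, 1, 11)
x = Flatten()(x)                                          # (None, 11)
x = Concatenate()([x, num_in])                            # (None, 64)

for layer_size in [1000, 500, 500]:
    x = Dense(layer_size, activation='relu')(x)
out = Dense(n_outputs, activation='linear')(x)

model = Model([cat_in, num_in], out)
model.compile(optimizer='adam', loss='mse')
# model.fit([cat_array, numeric_array], targets, ...)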

Related

Training a pre-trained sequential model with different input shape

I have a pre-trained sequential CNN model which I trained on images of 224x224x3. The following is the architecture:
model = Sequential()
model.add(Conv2D(filters = 64, kernel_size = (5, 5), strides = 1, activation = 'relu', input_shape = (224, 224, 3)))
model.add(MaxPool2D(pool_size = (3, 3)))
model.add(Dropout(0.2))
model.add(Conv2D(filters = 128, kernel_size = (3, 3), strides = 1, activation = 'relu'))
model.add(MaxPool2D(pool_size = (2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(filters = 256, kernel_size = (2, 2), strides = 1, activation = 'relu'))
model.add(MaxPool2D(pool_size = (2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation = 'relu', use_bias=False))
model.add(Dense(num_classes, activation = 'softmax'))
model.summary()
I want to retrain this model on images of size 40x40x3. However, I am facing the following error: "ValueError: Input 0 of layer dense_12 is incompatible with the layer: expected axis -1 of input shape to have value 200704 but received input with shape (None, 256)".
What should I do to resolve this error?
Note: I am using TensorFlow version 2.4.1.
The problem is that in your pre-trained model the Flatten layer produces a vector of size 200704, which is the input size the following Dense(128) layer was built for. If you now feed 40x40 images into the same pre-trained model, it will not work, for two reasons:
1- Your model is input-image-size dependent. It is not a fully convolutional model: the Dense layers in the middle fix the expected input resolution.
2- The flattened size of a 40x40 image after all the conv layers is 256, not 200704.
Solution
1- Replace the Flatten part with an adaptive/global average pooling layer (e.g. GlobalAveragePooling2D in Keras); the last Dense layer with softmax can then stay as it is, because the pooled feature size no longer depends on the input resolution. You would have to retrain your model on the 224x224 images first, and after that you can train on your 40x40 images.
2- Or, the easiest way: reuse a subset of your pre-trained model up to (but excluding) the Flatten part, and then add a new Flatten, Dense and classification layer (the layer with softmax) on top. The first part is the subset of the pre-trained model; the flatten and classification parts are new. You then train the whole model on the new dataset. You can also get the benefit of transfer learning with this method by letting the backward gradient flow only through the newly created layers and not through the pre-trained layers (see the sketch below).
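A minimal sketch of option 2, assuming model is the loaded pre-trained network from the question and num_classes is already defined; the slice index and the loss are assumptions tied to the architecture shown:

from tensorflow.keras import layers, models

# Reuse the conv part of the pre-trained model and add a new head for 40x40 inputs.
# layers[:-3] drops Flatten + Dense(128) + Dense(num_classes) in the architecture above.
conv_base = models.Sequential(model.layers[:-3])
conv_base.trainable = False   # optional: freeze the pre-trained part for transfer learning

new_model = models.Sequential([
    layers.InputLayer(input_shape=(40, 40, 3)),
    conv_base,
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes, activation='softmax'),
])
# loss/optimizer assumed; use the same setup as your original training
new_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])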

ValueError: Shapes (None, 1) and (None, 90) are incompatible

I want to build a deep RNN on my X_train and y_train data. When I execute the code below:
print(X_train_fea.shape, y_train_fea.shape)
X_train_res = np.reshape(X_train_fea,(10510,10,1))
y_train_res = np.reshape(y_train_fea.to_numpy(),(-1,1))
print(X_train_res.shape, y_train_res.shape)
result:
(10510, 10) (10510,)
(10510, 10, 1) (10510, 1)
and
model = Sequential([
    LSTM(90, input_shape=(10, 1)),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
When I fit the model
history = model.fit(X_train_res, y_train_res,epochs=5)
I got
ValueError: Shapes (None, 1) and (None, 90) are incompatible
It looks like y_train_res consists of integer class indices, not one-hot vectors. If so, you have to use sparse_categorical_crossentropy:
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
and change its shape to 1D:
y_train_res = np.reshape(y_train_fea.to_numpy(),(-1,))
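Putting the two changes together, a minimal sketch (assuming y_train_fea holds class indices in the range 0..89; the added softmax head is not part of the original answer but is the usual way to turn the LSTM output into class probabilities):

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Integer labels + sparse_categorical_crossentropy.
X_train_res = np.reshape(X_train_fea, (10510, 10, 1))
y_train_res = np.reshape(y_train_fea.to_numpy(), (-1,))   # 1D integer labels

model = Sequential([
    LSTM(90, input_shape=(10, 1)),
    Dense(90, activation='softmax'),   # softmax head (an assumed addition)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(X_train_res, y_train_res, epochs=5)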

Keras 2D Dense Layer for Output

I am playing with a model which should take an 8x8 chess board as input, encoded as a 224x224 greyscale image, and output a 64x13 one-hot-encoded result, i.e. per-square probabilities over the 13 possible pieces.
Now, after the convolutional layers I don't quite know how to proceed to get a 2D Dense layer as the result/target.
I tried adding a Dense(64,13) layer to my Sequential model, but I get the error "Dense can accept only 1 positional arguments ('units',)".
Is it even possible to train for 2D-targets?
EDIT1:
Here is the relevant part of my code, simplified:
# X.shape = (10000, 224, 224, 1)
# Y.shape = (10000, 64, 13)
model = Sequential([
    Conv2D(8, (3, 3), activation='relu', input_shape=(224, 224, 1)),
    Conv2D(8, (3, 3), activation='relu'),
    # some more repetitive Conv + Pooling layers here
    Flatten(),
    Dense(64, 13)
])
TypeError: Dense can accept only 1 positional arguments ('units',), but you passed the following positional arguments: [64, 13]
EDIT2: As Anand V. Singh suggested, I changed Dense(64, 13) to Dense(832), which works fine. Loss = mse.
Wouldn't it be better to use "sparse_categorical_crossentropy" as the loss and 64x1 encoding (instead of 64x13)?
In Dense you only pass the number of units you expect as output. If you want a (64x13) output, set the layer size to Dense(832) (64*13 = 832) and reshape later. You will also need to reshape Y so that the loss used for backpropagation is computed against the matching shape.
# X.shape = (10000, 224, 224, 1)
# Y.shape = (10000, 64, 13)
Y = Y.reshape(10000, 64*13)
model = Sequential([
    Conv2D(8, (3, 3), activation='relu', input_shape=(224, 224, 1)),
    Conv2D(8, (3, 3), activation='relu'),
    # some more repetitive Conv + Pooling layers here
    Flatten(),
    Dense(64*13)
])
That should get the job done, if it doesn't post where it fails and we can proceed further.
A Reshape layer allows you to control the output shape.
    Flatten(),
    Dense(64*13),
    Reshape((64, 13))  # 2D output
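Putting that together, a minimal sketch of the model tail that keeps Y.shape = (10000, 64, 13); the final softmax over the piece axis and the categorical_crossentropy loss are assumptions, not part of the original answers:

from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense, Reshape, Activation

model = Sequential([
    Conv2D(8, (3, 3), activation='relu', input_shape=(224, 224, 1)),
    Conv2D(8, (3, 3), activation='relu'),
    # ... more Conv + Pooling layers ...
    Flatten(),
    Dense(64 * 13),
    Reshape((64, 13)),
    Activation('softmax'),   # softmax over the last axis: 13 piece classes per square (assumed)
])
model.compile(optimizer='adam', loss='categorical_crossentropy')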

Error when checking input: expected time_distributed_136_input to have 5 dimensions, but got array with shape (16, 128, 128, 3)

I'm training a CNN combined with an LSTM, where I use TimeDistributed, but apparently it wants an extra dimension in the data, and I don't know how to add it.
My guess is that the problem is in ImageDataGenerator, but I don't know how to reshape the images it generates.
cnn_model = Sequential()
cnn_model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(128,128,3)))
cnn_model.add(MaxPooling2D(pool_size=(2, 2)))
cnn_model.add(Conv2D(32, (3, 3), activation='relu'))
cnn_model.add(MaxPooling2D(pool_size=(2, 2)))
cnn_model.add(Conv2D(64, (3, 3), activation='relu'))
cnn_model.add(MaxPooling2D(pool_size=(2, 2)))
cnn_model.add(Conv2D(128, (3, 3), activation='relu'))
cnn_model.add(MaxPooling2D(pool_size=(2, 2)))
cnn_model.add(Flatten())
model = Sequential()
model.add(TimeDistributed(cnn_model, input_shape=(16, 128, 128,3)))
model.add(LSTM(128, return_sequences=True, dropout=0.5))
# model.add(Dropout(0.2)) #added
model.add(Dense(4, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
batch_size = 16
train_datagen = ImageDataGenerator(rescale=1. / 255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    'train/',  # this is the target directory
    target_size=(128, 128),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=True,
    classes=['class_0', 'class_1', 'class_2', 'class_3'])
validation_generator = test_datagen.flow_from_directory(
    'test/',
    target_size=(128, 128),
    batch_size=batch_size,
    class_mode='categorical',
    shuffle=True,
    classes=['class_0', 'class_1', 'class_2', 'class_3'])
model.fit_generator(
    train_generator,
    steps_per_epoch=47549 // batch_size,
    epochs=5,
    validation_data=validation_generator,
    validation_steps=5444 // batch_size)
But I'm getting the following error message
ValueError: Error when checking input: expected time_distributed_136_input to have 5 dimensions, but got array with shape (16, 128, 128, 3)
Data folder is as follows:
-- train
   -- class 0
      -- vid 1
         -- frame1.jpg
         -- frame2.jpg
         -- frame3.jpg
   -- class 1
      -- frame1.jpg
      -- frame2.jpg
      -- frame3.jpg
   -- class 2
   -- class 3
-- test
   (same as train)
Thanks for any help.
You're forgetting the first dimension of every tensor, which is the batch size. You don't define the batch size unless it's absolutely necessary, so input shapes don't include it.
When you define input_shape=(16, 128, 128, 3), it means your data must have five dimensions: (examples, 16, 128, 128, 3).
And the examples dimension is missing from your data.
If these are movies, your data should probably look like (movies, frames, height, width, channels). That would then be accepted by input_shape=(frames, height, width, channels).
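If you load the frames yourself rather than through flow_from_directory, a minimal sketch of stacking consecutive frames into the 5D shape the model expects (the helper and array names are hypothetical):

import numpy as np

# Hypothetical helper: group consecutive frames of one video into fixed-length clips,
# so the data becomes (clips, frames, height, width, channels) = (N, 16, 128, 128, 3).
def frames_to_clips(frames, frames_per_clip=16):
    """frames: array of shape (num_frames, 128, 128, 3), already rescaled to [0, 1]."""
    usable = (len(frames) // frames_per_clip) * frames_per_clip
    return frames[:usable].reshape(-1, frames_per_clip, 128, 128, 3)

# Usage sketch:
# video_frames = np.stack([...])        # all frames of one video, in order
# X = frames_to_clips(video_frames)     # (num_clips, 16, 128, 128, 3)
# y = np.repeat(one_hot_label[None, :], len(X), axis=0)
# model.fit(X, y, batch_size=4)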
After several trials, I ended up using the same code, but with a tweaked version of the Keras ImageDataGenerator class that adds an extra dimension to the data, so that it becomes 5D.
(This is also valid for the use of Conv3D.)
For anyone who faces the same problem, you can find my tweaked version of the ImageDataGenerator class here.
It's the same as the standard Keras ImageDataGenerator, but with an added option to take more than one image/frame on each iteration: set the parameter frames_per_step to the number of frames/images you want to include in each iteration.
So here's how to use it:
from tweaked_ImageGenerator_v2 import ImageDataGenerator
datagen = ImageDataGenerator()
train_data=datagen.flow_from_directory('path/to/data', target_size=(x, y), batch_size=32, frames_per_step=4)
I think your problem is with your model. You define the input shape of your TimeDistributed layer as input_shape=(16, 128, 128, 3), which I guess should be input_shape=(128, 128, 3).
change this line :
model.add(TimeDistributed(cnn_model, input_shape=(16, 128, 128,3)))
to:
model.add(TimeDistributed(cnn_model, input_shape=(128, 128,3)))
And I hope it will work.

Concatenation of Keras parallel layers changes wanted target shape

I'm a bit new to Keras and deep learning. I'm currently trying to replicate this paper but when I'm compiling the first model (without the LSTMs) I get the following error:
"ValueError: Error when checking target: expected dense_3 to have shape (None, 120, 40) but got array with shape (8, 40, 1)"
The description of the model is this:
Input (length T is appliance specific window size)
Parallel 1D convolution with filter size 3, 5, and 7
respectively, stride=1, number of filters=32,
activation type=linear, border mode=same
Merge layer which concatenates the output of
parallel 1D convolutions
Dense layer, output_dim=128, activation type=ReLU
Dense layer, output_dim=128, activation type=ReLU
Dense layer, output_dim=T , activation type=linear
My code is this:
from keras import layers, Input
from keras.models import Model
# the window sizes (seq_length?) are 40, 1075, 465, 72 and 1246 for the kettle, dish washer,
# fridge, microwave, oven and washing machine, respectively.
def ae_net(T):
    input_layer = Input(shape=(T, 1))
    branch_a = layers.Conv1D(32, 3, activation='linear', padding='same', strides=1)(input_layer)
    branch_b = layers.Conv1D(32, 5, activation='linear', padding='same', strides=1)(input_layer)
    branch_c = layers.Conv1D(32, 7, activation='linear', padding='same', strides=1)(input_layer)
    merge_layer = layers.concatenate([branch_a, branch_b, branch_c], axis=1)
    dense_1 = layers.Dense(128, activation='relu')(merge_layer)
    dense_2 = layers.Dense(128, activation='relu')(dense_1)
    output_dense = layers.Dense(T, activation='linear')(dense_2)
    model = Model(input_layer, output_dense)
    return model

model = ae_net(40)
model.compile(loss='mean_absolute_error', optimizer='rmsprop')
model.fit(X, y, batch_size=8)
where X and y are numpy arrays of 8 sequences of 40 values each, so X.shape and y.shape are (8, 40, 1); it's actually one batch of data. What I cannot understand is how the expected output can have shape (None, 120, 40) and what those sizes mean.
As you noted, your shapes contain batch_size, length and channels: (8,40,1)
Your three convolutions are, each one, creating a tensor like (8,40,32).
Your concatenation in the axis=1 creates a tensor like (8,120,32), where 120 = 3*40.
Now, the dense layers only work on the last dimension (the channels in this case), leaving the length (now 120) untouched.
Solution
Now, it seems you do want to keep the length in the output, so you won't need any Flatten or Reshape layers; you just need to keep the length at 40.
You're probably doing the concatenation in the wrong axis. Instead of the length axis (1), you should concatenate in the channels axis (2 or -1).
So, this should be your concatenate layer:
merge_layer = layers.Concatenate()([branch_a, branch_b, branch_c])
#or layers.Concatenate(axis=-1)([branch_a, branch_b, branch_c])
This will output (8, 40, 96), and the dense layers will transform the 96 in something else.
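For completeness, a minimal sketch of the corrected ae_net with the concatenation moved to the channels axis. Note that the final layer here has 1 unit so the output matches targets of shape (8, 40, 1); that deviates from the paper's "output_dim=T" description and is an assumption about the intended per-timestep output:

from keras import layers, Input
from keras.models import Model

def ae_net(T):
    input_layer = Input(shape=(T, 1))
    branch_a = layers.Conv1D(32, 3, activation='linear', padding='same', strides=1)(input_layer)
    branch_b = layers.Conv1D(32, 5, activation='linear', padding='same', strides=1)(input_layer)
    branch_c = layers.Conv1D(32, 7, activation='linear', padding='same', strides=1)(input_layer)
    merge_layer = layers.Concatenate(axis=-1)([branch_a, branch_b, branch_c])  # (None, T, 96)
    dense_1 = layers.Dense(128, activation='relu')(merge_layer)
    dense_2 = layers.Dense(128, activation='relu')(dense_1)
    output_dense = layers.Dense(1, activation='linear')(dense_2)               # (None, T, 1)
    return Model(input_layer, output_dense)

model = ae_net(40)
model.compile(loss='mean_absolute_error', optimizer='rmsprop')
# model.fit(X, y, batch_size=8)   # X, y with shape (8, 40, 1)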
