I'm trying to build an NLP model that uses two different types of embedding, but I can't get the concatenation layer to function properly.
From what I can tell, the layer output shapes are correct, and they should be able to concatenate along the last axis. Am I calling something improperly?
Code sample:
from tensorflow.keras import layers
x_embed = layers.Embedding(90000, 100, input_length=9000)
feat_embed = layers.Embedding(90000, 8, input_length=9000)
layerlist = [x_embed, feat_embed]
concat = layers.Concatenate()(layerlist)
Error:
Exception has occurred: ValueError
A Concatenate layer should be called on a list of at least 2 inputs
You are calling Concatenate on the Embedding layer objects themselves rather than on their output tensors. Call each Embedding layer on an Input tensor first, and specify the concatenation axis:
from tensorflow.keras import layers
from tensorflow.keras import models
ip1 = layers.Input((9000,))
ip2 = layers.Input((9000,))
x_embed = layers.Embedding(90000, 100, input_length=9000)(ip1)
feat_embed = layers.Embedding(90000, 8, input_length=9000)(ip2)
layerlist = [x_embed, feat_embed]
concat = layers.Concatenate(axis=-1)(layerlist)
model = models.Model([ip1, ip2], concat)
model.summary()
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_5 (InputLayer) [(None, 9000)] 0
__________________________________________________________________________________________________
input_6 (InputLayer) [(None, 9000)] 0
__________________________________________________________________________________________________
embedding_6 (Embedding) (None, 9000, 100) 9000000 input_5[0][0]
__________________________________________________________________________________________________
embedding_7 (Embedding) (None, 9000, 8) 720000 input_6[0][0]
__________________________________________________________________________________________________
concatenate_3 (Concatenate) (None, 9000, 108) 0 embedding_6[0][0]
embedding_7[0][0]
==================================================================================================
Total params: 9,720,000
Trainable params: 9,720,000
Non-trainable params: 0
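As an aside, tf.keras also provides the lowercase functional helper layers.concatenate, which builds the same Concatenate layer under the hood; with the tensors from the fixed example above:

from tensorflow.keras import layers
concat = layers.concatenate([x_embed, feat_embed], axis=-1)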
I am testing something that involves building an FCNN dynamically. The idea is to build the number of layers and their neurons from a given list; the dummy code is:
neurons = [10, 20, 30]  # the loop adds Dense layers with 10, 20 and 30 neurons
inputs = keras.Input(shape=(1024,))
x = Dense(10, activation='relu')(inputs)
for n in neurons:
    x = Dense(n, activation='relu')(x)
out = Dense(1, activation='sigmoid')(x)
model = Model(inputs, out)
model.summary()
keras.utils.plot_model(model, 'model.png')
for layer in model.layers:
    print(layer.name)
To my surprise, it shows nothing. I even compiled and ran the functions again, and nothing came out.
model.summary() always shows the number of trainable and non-trainable params, but not the model structure and layer names. Why is this happening? Or is this normal?
About model.summary(): don't mix TF 2.x and standalone Keras at the same time. If I run your model in TF 2.x, I get the expected results.
from tensorflow.keras.layers import *
from tensorflow.keras import Model
from tensorflow import keras
# your code ...
model.summary()
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 1024)] 0
_________________________________________________________________
dense (Dense) (None, 10) 10250
_________________________________________________________________
dense_1 (Dense) (None, 10) 110
_________________________________________________________________
dense_2 (Dense) (None, 20) 220
_________________________________________________________________
dense_3 (Dense) (None, 30) 630
_________________________________________________________________
dense_4 (Dense) (None, 1) 31
=================================================================
Total params: 11,241
Trainable params: 11,241
Non-trainable params: 0
_________________________________
About plotting the model, there are a couple of options you can use when plotting your Keras model. Here is one example:
keras.utils.plot_model(model, show_dtype=True,
                       show_layer_names=True, show_shapes=True,
                       to_file='model.png')
Here is my model:
from keras.layers import Input, Embedding, Flatten, Concatenate, Dense
from keras.models import Model
n_teams = 10888
team_lookup = Embedding(input_dim=n_teams,
                        output_dim=1,
                        input_length=1,
                        name='Team-Strength')
teamid_in = Input(shape=(1,))
strength_lookup = team_lookup(teamid_in)
strength_lookup_flat = Flatten()(strength_lookup)
team_strength_model = Model(teamid_in, strength_lookup_flat, name='Team-Strength-Model')
team_in_1 = Input(shape=(1,), name='Team-1-In')
team_in_2 = Input(shape=(1,), name='Team-2-In')
home_in = Input(shape=(1,), name='Home-In')
team_1_strength = team_strength_model(team_in_1)
team_2_strength = team_strength_model(team_in_2)
out = Concatenate()([team_1_strength, team_2_strength, home_in])
out = Dense(1)(out)
When I fit the model with 10,888 inputs and run the summary, I get a total of 10,892 parameters. Please explain:
1) Where do the 4 extra parameters come from? and
2) If the embedding is used for both team inputs, why is it counted only once?
Here is the summary of the model:
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
Team-1-In (InputLayer)          (None, 1)            0
__________________________________________________________________________________________________
Team-2-In (InputLayer)          (None, 1)            0
__________________________________________________________________________________________________
Team-Strength (Model)           (None, 1)            10888       Team-1-In[0][0]
                                                                 Team-2-In[0][0]
__________________________________________________________________________________________________
Home-In (InputLayer)            (None, 1)            0
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 3)            0           Team-Strength[1][0]
                                                                 Team-Strength[2][0]
                                                                 Home-In[0][0]
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 1)            4           concatenate_1[0][0]
==================================================================================================
Total params: 10,892
Trainable params: 10,892
Non-trainable params: 0
__________________________________________________________________________________________________
To answer your questions:
The 4 comes from number_of_parameters = output_size * (input_size + 1). The final Dense layer sees 3 inputs from concatenate_1[0][0] plus 1 bias, hence 1 * (3 + 1) = 4.
10,888 is the size of your embedding layer, to which both Team-1 and Team-2 are connected: it is the total "vocabulary" (input_dim) times the embedding width (output_dim=1, the second parameter to Embedding), and has nothing to do with how often the layer is used. Because both team inputs go through the same shared Team-Strength model, its weights are counted only once.
I hope it makes sense.
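As a quick sanity check, the arithmetic can be written out directly (plain Python, using the numbers from the summary above):

# Dense(1) on the 3-way concatenation: 3 kernel weights + 1 bias
dense_params = 1 * (3 + 1)
# Embedding(input_dim=10888, output_dim=1): one weight per vocabulary entry,
# shared by both team inputs, so counted only once
embedding_params = 10888 * 1
assert dense_params + embedding_params == 10892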
I am building an LSTM network for multivariate time series classification, using 2 categorical features for which I have created Embedding layers in Keras. The model compiles, and the architecture is displayed below with the code. However, I am getting ValueError: all the input array dimensions except for the concatenation axis must match exactly. This is strange to me, because the model compiles and the output shapes seem to match (3-D tensors concatenated along axis=-1). The x argument to model.fit is a list of 3 inputs (the first categorical variable array, the second categorical variable array, and the 3-D multivariate time series input for the LSTM).
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_4 (InputLayer)            (None, 1)            0
__________________________________________________________________________________________________
input_5 (InputLayer)            (None, 1)            0
__________________________________________________________________________________________________
VAR_1 (Embedding)               (None, 46, 5)        50          input_4[0][0]
__________________________________________________________________________________________________
VAR_2 (Embedding)               (None, 46, 13)       338         input_5[0][0]
__________________________________________________________________________________________________
time_series (InputLayer)        (None, 46, 11)       0
__________________________________________________________________________________________________
concatenate_3 (Concatenate)     (None, 46, 18)       0           VAR_1[0][0]
                                                                 VAR_2[0][0]
__________________________________________________________________________________________________
concatenate_4 (Concatenate)     (None, 46, 29)       0           time_series[0][0]
                                                                 concatenate_3[0][0]
__________________________________________________________________________________________________
lstm_2 (LSTM)                   (None, 46, 100)      52000       concatenate_4[0][0]
__________________________________________________________________________________________________
attention_2 (Attention)         (None, 100)          146         lstm_2[0][0]
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 1)            101         attention_2[0][0]
==================================================================================================
Total params: 52,635
Trainable params: 52,635
Non-trainable params: 0
from keras.layers import Input, Embedding, Concatenate, LSTM, Dense
from keras.models import Model
from keras.optimizers import Adam
# Attention is a custom layer from the original code, defined elsewhere

n_timesteps = 46
n_features = 11

def EmbeddingNet(cat_vars, n_timesteps, n_features, embedding_sizes):
    inputs = []
    embed_layers = []
    for (c, (in_size, out_size)) in zip(cat_vars, embedding_sizes):
        i = Input(shape=(1,))
        o = Embedding(in_size, out_size, input_length=n_timesteps, name=c)(i)
        inputs.append(i)
        embed_layers.append(o)
    embed = Concatenate()(embed_layers)
    time_series_input = Input(batch_shape=(None, n_timesteps, n_features), name='time_series')
    inputs.append(time_series_input)
    concatenated_inputs = Concatenate(axis=-1)([time_series_input, embed])
    lstm_layer1 = LSTM(units=100, return_sequences=True)(concatenated_inputs)
    attention = Attention()(lstm_layer1)
    output_layer = Dense(1, activation="sigmoid")(attention)
    opt = Adam(lr=0.001)
    model = Model(inputs=inputs, outputs=output_layer)
    model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
    model.summary()
    return model
model = EmbeddingNet(cat_vars,n_timesteps,n_features,embedding_sizes)
history = model.fit(x=[x_train_cat_array[0], x_train_cat_array[1], x_train_input],
                    y=y_train_input, batch_size=8, epochs=1, verbose=1,
                    validation_data=([x_val_cat_array[0], x_val_cat_array[1], x_val_input], y_val_input),
                    shuffle=False)
I'm trying to do the very same thing. You should concatenate over axis 2. Please check HERE.
Let me know if this works on your dataset, because categorical features are not giving any benefit to me.
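For reference, one shape-safe pattern is to repeat a per-sample embedding across the timesteps before concatenating on the feature axis. A minimal sketch (the embedding sizes here are placeholders, not the original poster's values):

from tensorflow.keras.layers import Input, Embedding, Flatten, RepeatVector, Concatenate, LSTM, Dense
from tensorflow.keras.models import Model

n_timesteps, n_features = 46, 11
cat_in = Input(shape=(1,))                         # one category ID per sample
e = Embedding(input_dim=10, output_dim=5)(cat_in)  # -> (None, 1, 5)
e = Flatten()(e)                                   # -> (None, 5)
e = RepeatVector(n_timesteps)(e)                   # -> (None, 46, 5)
ts_in = Input(shape=(n_timesteps, n_features))     # -> (None, 46, 11)
x = Concatenate(axis=-1)([ts_in, e])               # -> (None, 46, 16)
x = LSTM(100)(x)
out = Dense(1, activation='sigmoid')(x)
model = Model([cat_in, ts_in], out)

Here the embedding's timestep dimension matches the time series by construction, so the feature axis is the only one that differs at concatenation time.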
I need to identify whether two fingerprints (one from an ID card and one from a sensor) match or not. Below are some examples from my database (3000 pairs of images):
Example of matching images
Example of non-matching images
I am trying to train a siamese network which receives a pair of images, and whose output is [1, 0] if they don't match and [0, 1] if they match. I created my model with Keras:
from keras.layers import Input, Dense, Dropout, concatenate
from keras.models import Model

image_left = Input(shape=(200, 200, 1))
image_right = Input(shape=(200, 200, 1))
vector_left = conv_base(image_left)
vector_right = conv_base(image_right)
merged_features = concatenate([vector_left, vector_right], axis=-1)
fc1 = Dense(64, activation='relu')(merged_features)
fc1 = Dropout(0.2)(fc1)
# fc2 = Dense(128, activation='relu')(fc1)
pred = Dense(2, activation='softmax')(fc1)
model = Model(inputs=[image_left, image_right], outputs=pred)
Where conv_base is a convolutional architecture. I have tried ResNet, LeNet, MobileNetV2 and NASNet from keras.applications, but they don't work.
conv_base = NASNetMobile(weights=None,
                         include_top=True,
                         classes=256)
My model summary is similar as shown below (depending on corresponding network used):
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_2 (InputLayer)            (None, 200, 200, 1)  0
__________________________________________________________________________________________________
input_3 (InputLayer)            (None, 200, 200, 1)  0
__________________________________________________________________________________________________
NASNet (Model)                  (None, 256)          4539732     input_2[0][0]
                                                                 input_3[0][0]
__________________________________________________________________________________________________
concatenate_5 (Concatenate)     (None, 512)          0           NASNet[1][0]
                                                                 NASNet[2][0]
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 64)           32832       concatenate_5[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 64)           0           dense_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 2)            130         dropout_1[0][0]
==================================================================================================
Total params: 4,572,694
Trainable params: 4,535,956
Non-trainable params: 36,738
In addition to the convolutional architecture changes, I've tried using pre-trained weights, setting all layers as trainable, setting only the last convolutional layers as trainable, data augmentation, the categorical_crossentropy and contrastive_loss functions, and changing the learning rate, but they all show the same behavior: training and validation accuracy are always 0.5.
Does anybody have an idea about what I am missing/doing wrong?
Thank you.
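A variant often suggested for siamese heads (not from this thread, just a sketch to compare against) replaces the plain concatenation with an absolute-difference merge, so the classifier sees where the two embeddings differ rather than both vectors side by side; conv_base is assumed to be defined as in the question:

from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras import backend as K

image_left = Input(shape=(200, 200, 1))
image_right = Input(shape=(200, 200, 1))
vector_left = conv_base(image_left)
vector_right = conv_base(image_right)
# element-wise |a - b|: close to zero wherever the two embeddings agree
distance = Lambda(lambda t: K.abs(t[0] - t[1]))([vector_left, vector_right])
fc1 = Dense(64, activation='relu')(distance)
pred = Dense(2, activation='softmax')(fc1)
model = Model(inputs=[image_left, image_right], outputs=pred)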
I am using Keras 1.1.1 on Windows 7 with the TensorFlow backend.
I am trying to prepend the stock ResNet50 pretrained model with an image downsampler. Below is my code.
from keras.applications.resnet50 import ResNet50
import keras.layers
# this could also be the output a different Keras model or layer
input = keras.layers.Input(shape=(400, 400, 1)) # this assumes K.image_dim_ordering() == 'tf'
x1 = keras.layers.AveragePooling2D(pool_size=(2,2))(input)
x2 = keras.layers.Flatten()(x1)
x3 = keras.layers.RepeatVector(3)(x2)
x4 = keras.layers.Reshape((200, 200, 3))(x3)
x5 = keras.layers.ZeroPadding2D(padding=(12,12))(x4)
m = keras.models.Model(input, x5)
model = ResNet50(input_tensor=m.output, weights='imagenet', include_top=False)
but I get an error which I am unsure how to fix.
builtins.Exception: Graph disconnected: cannot obtain value for tensor
Output("input_2:0", shape=(?, 400, 400, 1), dtype=float32) at layer
"input_2". The following previous layers were accessed without issue:
[]
You can use both the Functional API and the Sequential approach to solve this. See working examples for both approaches below:
from keras.applications.resnet50 import ResNet50
from keras.models import Sequential, Model
from keras.layers import AveragePooling2D, Flatten, RepeatVector, Reshape, ZeroPadding2D, Input, Dense
pretrained = ResNet50(input_shape=(224, 224, 3), weights='imagenet', include_top=False)
# Sequential method
model_1 = Sequential()
model_1.add(AveragePooling2D(pool_size=(2,2),input_shape=(400, 400, 1)))
model_1.add(Flatten())
model_1.add(RepeatVector(3))
model_1.add(Reshape((200, 200, 3)))
model_1.add(ZeroPadding2D(padding=(12,12)))
model_1.add(pretrained)
model_1.add(Dense(1))
# functional API method
input = Input(shape=(400, 400, 1))
x = AveragePooling2D(pool_size=(2,2),input_shape=(400, 400, 1))(input)
x = Flatten()(x)
x = RepeatVector(3)(x)
x = Reshape((200, 200, 3))(x)
x = ZeroPadding2D(padding=(12,12))(x)
x = pretrained(x)
preds = Dense(1)(x)
model_2 = Model(input,preds)
model_1.summary()
model_2.summary()
The summaries (these were generated with Xception standing in for ResNet50, so read xception as resnet):
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
average_pooling2d_1 (Average (None, 200, 200, 1)       0
_________________________________________________________________
flatten_1 (Flatten)          (None, 40000)             0
_________________________________________________________________
repeat_vector_1 (RepeatVecto (None, 3, 40000)          0
_________________________________________________________________
reshape_1 (Reshape)          (None, 200, 200, 3)       0
_________________________________________________________________
zero_padding2d_1 (ZeroPaddin (None, 224, 224, 3)       0
_________________________________________________________________
xception (Model)             (None, 7, 7, 2048)        20861480
_________________________________________________________________
dense_1 (Dense)              (None, 7, 7, 1)           2049
=================================================================
Total params: 20,863,529
Trainable params: 20,809,001
Non-trainable params: 54,528
_________________________________________________________________
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         (None, 400, 400, 1)       0
_________________________________________________________________
average_pooling2d_2 (Average (None, 200, 200, 1)       0
_________________________________________________________________
flatten_2 (Flatten)          (None, 40000)             0
_________________________________________________________________
repeat_vector_2 (RepeatVecto (None, 3, 40000)          0
_________________________________________________________________
reshape_2 (Reshape)          (None, 200, 200, 3)       0
_________________________________________________________________
zero_padding2d_2 (ZeroPaddin (None, 224, 224, 3)       0
_________________________________________________________________
xception (Model)             (None, 7, 7, 2048)        20861480
_________________________________________________________________
dense_2 (Dense)              (None, 7, 7, 1)           2049
=================================================================
Total params: 20,863,529
Trainable params: 20,809,001
Non-trainable params: 54,528
_________________________________________________________________
Both approaches work fine. If you plan on freezing the pretrained model, letting the pre/post layers learn, and afterwards finetuning the whole model, the approach I found to work goes like this:
from keras.models import load_model

# given the same resnet model as before...
model = load_model('modelname.h5')
# pull out the nested model
nested_model = model.layers[5]  # assuming the nested model is the 5th layer
# loop over the nested model's layers to allow training
for l in nested_model.layers:
    l.trainable = True
# the loop mutates the nested model in place, and it is the same object
# referenced inside the original model, so no reassignment is needed
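One caveat worth keeping in mind: changes to the trainable flag only take effect once the model is compiled (or recompiled). A minimal sketch, with the optimizer and loss as placeholders:

# recompile so the updated trainable flags are picked up by training
model.compile(optimizer='adam', loss='binary_crossentropy')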