The added layer must be an instance of class Layer - keras

I am merging the embedding layers of two LSTM models as follows. When I was building the sequential model, it gave me an error:
model = Sequential()
merged = Concatenate(axis=1)([s1rnn.output,s2rnn.output])
model.add(merged)
model.add(Dense(1))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit([X1,X2], Y,batch_size=128, nb_epoch=20, validation_split=0.05)
TypeError: The added layer must be an instance of class Layer. Received: layer=KerasTensor(type_spec=TensorSpec(shape=(None, 110, 1), dtype=tf.float32, name=None), name='concatenate/concat:0', description="created by layer 'concatenate'") of type <class 'keras.engine.keras_tensor.KerasTensor'>.

The model to which you want to add the Concatenate() layer needs to be built with the Functional API, not as a Sequential() model (that is the first change to make to your code).
The structure should look something like this (note the second pair of parentheses at the end of each line: that is how you chain layers in the Functional API):
input_s1rnn = Input(shape=(...))
input_s2rnn = Input(shape=(...))
merged = Concatenate(axis=1)([s1_rnnmodel(input_s1rnn), s2_rnnmodel(input_s2rnn)])
layer_1_model = some_layer()(merged)
layer_2_model = some_layer()(layer_1_model)
...
output_layer = Dense(1, activation='sigmoid')(layer_2_model)
model = Model([input_s1rnn, input_s2rnn], output_layer)
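For reference, here is a minimal, self-contained sketch of this pattern (the vocabulary size, sequence length, and layer sizes are made-up placeholders, not values from the question):
from keras.layers import Input, Embedding, LSTM, Concatenate, Dense
from keras.models import Model

# Hypothetical branches standing in for s1rnn and s2rnn.
input_s1 = Input(shape=(110,), dtype='int32')
input_s2 = Input(shape=(110,), dtype='int32')
branch_1 = LSTM(64)(Embedding(input_dim=5000, output_dim=100)(input_s1))
branch_2 = LSTM(64)(Embedding(input_dim=5000, output_dim=100)(input_s2))

# Concatenate the branch outputs, then add the head layers.
merged = Concatenate(axis=1)([branch_1, branch_2])
output = Dense(1, activation='sigmoid')(merged)

model = Model([input_s1, input_s2], output)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# model.fit([X1, X2], Y, batch_size=128, epochs=20, validation_split=0.05)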

Related

Can model.summarize() in Keras (Sequential) - Multi Input (Numerical + n Embeddings) work?

I'm having difficulty printing the model.summary() after using the Sequential class in keras to build a structure like so:
embedding_inputs*    numerical_input
        \                 /
         \               /
          -- CONCATENATE --
                 |
            DENSE (50)  #1
            DENSE (50)  #2
            DENSE (50)  #3
            DENSE (50)  #4
            DENSE (1)   #output

* embedding_inputs are a bunch of concatenated sequential models from
  categorical variables. For the sake of simplicity, let's pretend
  there is only one.
I know without the embedding layer(s), my model works and looks fine. But following my addition of an embedding layer and a concatenate layer, I'm told I need to build the model or that my Output tensors "must be the output of a Keras Layer."
I'm just utterly confused at this point. (I'm used to using the functional api but embarrassingly am having so much trouble with the Sequential one and would like to learn).
categorical = Sequential()
categorical.add(Embedding(
    input_dim=len(df_train['mon'].astype('category').cat.categories),
    output_dim=2,
    input_length=1))
categorical.add(Flatten())

numeric = Sequential()
numeric.add(InputLayer(input_shape=(1, len(numeric_column_names)),
                       dtype='float32', name='numerical_in'))

model = Sequential()
model.add(Concatenate([numeric, categorical]))
model.add(Dense(50, input_dim=50, kernel_initializer='normal', activation='relu'))
model.add(Dense(50, input_dim=50, kernel_initializer='normal', activation='relu'))
model.add(Dense(50, input_dim=50, kernel_initializer='normal', activation='relu'))
model.add(Dense(50, input_dim=50, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))  # output layer (1 number)
If I attempt to use model.summary() without a build:
ValueError: This model has not yet been built. Build the model first by calling build() or calling fit() with some data. Or specify input_shape or batch_input_shape in the first layer for automatic build.
If I attempt to use model.build() first, I get a message like:
ValueError: Output tensors to a Sequential must be the output of a Keras `Layer` (thus holding past layer metadata). Found: None
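As the first answer above suggests, mixing Concatenate into a Sequential model is the root problem; the Functional API handles this multi-input topology directly. A hedged sketch of what that could look like here (n_categories and n_numeric are placeholder names standing in for the lengths computed from df_train and numeric_column_names):
from keras.layers import Input, Embedding, Flatten, Concatenate, Dense
from keras.models import Model

n_categories = 12  # placeholder for len(df_train['mon'].astype('category').cat.categories)
n_numeric = 5      # placeholder for len(numeric_column_names)

cat_in = Input(shape=(1,), name='categorical_in')
num_in = Input(shape=(n_numeric,), name='numerical_in')

# Embed the categorical input and flatten it to a plain feature vector.
cat_emb = Flatten()(Embedding(input_dim=n_categories, output_dim=2, input_length=1)(cat_in))
merged = Concatenate()([cat_emb, num_in])

x = merged
for _ in range(4):
    x = Dense(50, kernel_initializer='normal', activation='relu')(x)
out = Dense(1, kernel_initializer='normal')(x)

model = Model([cat_in, num_in], out)
model.summary()  # works without an explicit build() call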

Adding an activation layer to Keras Add() layer and using this layer as output to model

I am trying to apply a softmax activation to the output of the Add() layer and make that layer the output of my model, but I am running into a few problems.
It seems the Add() layer doesn't allow the use of activations, and if I do something like this:
predictions = Add()([x,y])
predictions = softmax(predictions)
model = Model(inputs = model.input, outputs = predictions)
I get:
ValueError: Output tensors to a Model must be the output of a Keras `Layer` (thus holding past layer metadata). Found: Tensor("Softmax:0", shape=(?, 6), dtype=float32)
It has nothing to do with the Add layer; you are using K.softmax directly on Keras tensors, and this won't work. You need an actual layer, and you can use the Activation layer for this:
from keras.layers import Activation
predictions = Add()([x,y])
predictions = Activation("softmax")(predictions)
model = Model(inputs = model.input, outputs = predictions)
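As a side note (not part of the original answer): newer Keras versions also ship a dedicated Softmax layer, which would be equivalent here, assuming your version provides it:
from keras.layers import Softmax
predictions = Softmax()(Add()([x, y]))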

Error in last keras layer of neural network

Model:
model = Sequential()
act = 'relu'
model.add(Dense(430, input_shape=(3,)))
model.add(Activation(act))
model.add(Dense(256))
model.add(Activation(act))
model.add(Dropout(0.4))
model.add(Dense(148))
model.add(Activation(act))
model.add(Dropout(0.3))
model.add(Dense(1))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#model.summary()
Error:
Error when checking target: expected activation_4 to have shape (None, 1) but got array with shape (1715, 2)
The error occurs in the last layer of the neural network, which is trying to classify whether two drugs are synergistic or not.
COMPLETE SOURCE CODE:
https://github.com/tanmay-edgelord/Drug-Synergy-Models/blob/master/Drug%20Synergy%20NN%20Classifier.ipynb
Data:
https://github.com/tanmay-edgelord/Drug-Synergy-Models/blob/master/train.csv
In your code you have the following lines:
y_binary = to_categorical(Y[:])
y_train = y_binary[:split]
y_test = y_binary[split:]
to_categorical transforms the vector into a one-hot representation. Since you have two classes, it turns every label into a vector of length 2 (0 is transformed to [1,0] and 1 is transformed to [0,1]), so your last layer needs to be defined as follows:
model.add(Dense(2))
model.add(Activation('softmax'))
(note that I replaced the 1 with 2).
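Alternatively (a sketch, not taken from the original answer): since this is a binary problem, you could skip to_categorical entirely, keep a single output unit with a sigmoid activation, and train against the raw 0/1 labels:
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit with the raw Y labels instead of y_binary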

How to change input shape in Sequential model in Keras

I have a sequential model that I built in Keras.
I am trying to figure out how to change the shape of the input. In the following example
model = Sequential()
model.add(Dense(32, input_shape=(500,)))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
let's say that I want to build a new model with a different input shape; conceptually it should look like this:
model1 = model
model1.layers[0] = Dense(32, input_shape=(250,))
is there a way to modify the model input shape?
Somewhat related, so hopefully someone will find this useful: if you have an existing model whose input is a placeholder that looks like (None, None, None, 3), for example, you can load the model and replace the first layer with a concretely shaped input layer. This kind of transformation is very useful when, for example, you want to use your model in iOS CoreML (in my case the model's input was an MLMultiArray instead of a CVPixelBuffer, and the model compilation failed).
from keras.models import load_model
from keras import backend as K
from keras.engine import InputLayer
import coremltools
model = load_model('your_model.h5')
# Create a new input layer to replace the (None,None,None,3) input layer :
input_layer = InputLayer(input_shape=(272, 480, 3), name="input_1")
# Replace the input layer, then save and convert:
model.layers[0] = input_layer
model.save("reshaped_model.h5")
coreml_model = coremltools.converters.keras.convert('reshaped_model.h5')
coreml_model.save('MyPredictor.mlmodel')
Think about what changing the input shape in that situation would mean.
Your first model
model.add(Dense(32, input_shape=(500,)))
has a dense layer that really is a 500x32 weight matrix.
If you changed your input to 250 elements, your layer's weight matrix and input dimension would no longer match.
If, however, what you were trying to achieve was to reuse your last layer's trained parameters from your first 500 element input model, you could get those weights by get_weights. Then you could rebuild a new model and set values at the new model with set_weights.
model1 = Sequential()
model1.add(Dense(32, input_shape=(250,)))
model1.add(Dense(10, activation='softmax'))
model1.layers[1].set_weights(model.layers[1].get_weights())
Keep in mind that model1's first layer (that is, model1.layers[0]) would still be untrained.
Here is another solution without defining each layer of the model from scratch. The key for me was to use "_layers" instead of "layers". The latter only seems to return a copy.
import keras
import numpy as np

def get_model():
    old_input_shape = (20, 20, 3)
    model = keras.models.Sequential()
    model.add(keras.layers.Conv2D(9, (3, 3), padding="same", input_shape=old_input_shape))
    model.add(keras.layers.MaxPooling2D((2, 2)))
    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(1, activation="sigmoid"))
    model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(lr=0.0001), metrics=['acc'])
    model.summary()
    return model

def change_model(model, new_input_shape=(None, 40, 40, 3)):
    # replace input shape of first layer
    model._layers[1].batch_input_shape = new_input_shape
    # feel free to modify additional parameters of other layers, for example...
    model._layers[2].pool_size = (8, 8)
    model._layers[2].strides = (8, 8)
    # rebuild model architecture by exporting and importing via json
    new_model = keras.models.model_from_json(model.to_json())
    new_model.summary()
    # copy weights from old model to new one
    for layer in new_model.layers:
        try:
            layer.set_weights(model.get_layer(name=layer.name).get_weights())
        except:
            print("Could not transfer weights for layer {}".format(layer.name))
    # test new model on a random input image
    X = np.random.rand(10, 40, 40, 3)
    y_pred = new_model.predict(X)
    print(y_pred)
    return new_model

if __name__ == '__main__':
    model = get_model()
    new_model = change_model(model)

Keras translate example to use functional API

I have a working example in Keras which I want to translate to make use of the functional API. I am missing some detail: when I run the code I get ValueError: total size of new array must be unchanged when using embeddings.
Maybe someone sees what I am doing wrong.
The working code looks like this:
EMBEDDING_DIM=100
embeddingMatrix = mu.loadPretrainedEmbedding('englishEmb100.txt', EMBEDDING_DIM, wordMap)
###############
# Build Model #
###############
model = Sequential()
model.add(Embedding(vocabSize+1, EMBEDDING_DIM, weights=[embeddingMatrix], trainable=False))
#model.add(Embedding(vocabSize+1, EMBEDDING_DIM))
model.add(Bidirectional(LSTM(EMBEDDING_DIM, return_sequences=True)))
model.add(TimeDistributed(Dense(maximal_value)))
model.add(Activation('relu'))
# try using different optimizers and different optimizer configs
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
My attempt to re-write it with the functional API:
EMBEDDING_DIM=100
embeddingMatrix = mu.loadPretrainedEmbedding('englishEmb100.txt', EMBEDDING_DIM, wordMap)
input_layer = Input(shape=(longest_sequence,), dtype='int32')
emb = Embedding(vocabSize+1, EMBEDDING_DIM, weights=[embeddingMatrix]) (input_layer)
forwards = LSTM(EMBEDDING_DIM, return_sequences=True) (emb)
backwards = LSTM(EMBEDDING_DIM, return_sequences=True, go_backwards=True) (emb)
common = merge([forwards, backwards], mode='concat', concat_axis=-1)
dense = TimeDistributed(Dense(EMBEDDING_DIM, activation='tanh')) (common)
out = TimeDistributed(Dense(len(labelMap), activation='softmax')) (dense)
model = Model(input=input_layer, output=out)
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
There is an operation somewhere that changes the size, but I am not sure where or why this is happening. I would be happy if someone could help me with this.
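No answer is recorded here, but for comparison, a hedged sketch of a functional version that mirrors the Sequential model one-to-one by reusing the Bidirectional wrapper, which also avoids the deprecated merge(..., mode='concat') call (the names are taken from the question; this mirrors the working model rather than diagnosing the ValueError):
from keras.layers import Input, Embedding, Bidirectional, LSTM, TimeDistributed, Dense, Activation
from keras.models import Model

input_layer = Input(shape=(longest_sequence,), dtype='int32')
emb = Embedding(vocabSize + 1, EMBEDDING_DIM,
                weights=[embeddingMatrix], trainable=False)(input_layer)
# Bidirectional handles the forward/backward passes and their concatenation internally.
rnn = Bidirectional(LSTM(EMBEDDING_DIM, return_sequences=True))(emb)
dense = TimeDistributed(Dense(maximal_value))(rnn)
out = Activation('relu')(dense)

model = Model(inputs=input_layer, outputs=out)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])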
