NameError: name 'Model' is not defined - how to resolve this? - keras

I am trying to classify 2 categories with transfer learning. After preprocessing my data I want to apply InceptionResNetV2, remove the last layer of this Keras application, and add a new layer in its place.
I wrote the following script to do this:
irv2 = tf.keras.applications.inception_resnet_v2.InceptionResNetV2()
irv2.summary()
x = irv2.layers[-1].output
x = Dropout(0.25)(x)
predictions = Dense(2, activation='softmax')(x)
model = Model(inputs=mobile.input, outputs=predictions)
Then an error occurred:
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-40-911de74d9eaf> in <module>()
5 predictions = Dense(2, activation='softmax')(x)
6
----> 7 model = Model(inputs=mobile.input, outputs=predictions)
NameError: name 'Model' is not defined
If there is another way to remove the last layer and add a new layer (predictions = Dense(2, activation='softmax')), please let me know.
This is my full code.

You can use this code snippet to define your transfer learning model.
Here, we use weights trained on the ImageNet dataset, ignore the final layer (the 1000-neuron layer that was used to train the 1000 ImageNet classes), and add our own custom layers. In this example we add a GlobalAveragePooling2D (GAP) layer followed by a dense layer for binary classification (the example uses Xception, but the same pattern works for InceptionResNetV2).
from tensorflow import keras

# Base network with ImageNet weights, minus the final 1000-class layer
input_layer = keras.layers.Input(shape=(224, 224, 3))
base_model = keras.applications.Xception(weights='imagenet', include_top=False, input_tensor=input_layer)

# Custom head: global average pooling followed by a single sigmoid unit
global_avg = keras.layers.GlobalAveragePooling2D()(base_model.output)
dense_1 = keras.layers.Dense(1, activation='sigmoid')(global_avg)

model = keras.Model(inputs=base_model.inputs, outputs=dense_1)
model.summary()
The error you faced is most likely due to the import changes between TF 1.x and TF 2.x.
Try either of the imports below, depending on your TensorFlow version; it should fix the error.
from tensorflow.keras.models import Model
or
from tensorflow.keras import Model
Also make sure you import everything from either tensorflow.keras or standalone keras, but not both: mixing functions imported from the two libraries in the same script can cause incompatibility errors.
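For reference, here is a minimal sketch of the original script with consistent tensorflow.keras imports; it assumes the intended base model input was irv2.input (mobile.input in the question looks like a leftover variable name from another script):
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Dropout

irv2 = tf.keras.applications.inception_resnet_v2.InceptionResNetV2()

# Note: layers[-1] is the original 1000-class softmax, so this stacks the new head on top of it
x = irv2.layers[-1].output
x = Dropout(0.25)(x)
predictions = Dense(2, activation='softmax')(x)

model = Model(inputs=irv2.input, outputs=predictions)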

layers[-1] will give you the last Dense layer, but what you really want is the layer above it, which is layers[-2].
The input should be the Inception model's own input layer:
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

irv2 = tf.keras.applications.inception_resnet_v2.InceptionResNetV2()

# Attach the new 2-class head to the second-to-last layer's output
predictions = Dense(2, activation='softmax')(irv2.layers[-2].output)
model = Model(inputs=irv2.input, outputs=predictions)
model.summary()

Related

Speed up the Keras sequential model in for loop

I am trying to decrease the execution time of the Keras sequential model that runs in a loop several times.
My training dataset shape is (1, 9526, 32736, 1), i.e. (1, ntimes, ngrid, 1),
and the test data shape is (1, 1059, 32736, 1).
The test data time dimension is variable, but ngrid is fixed.
I created a dummy dimension at the end so that when I slice the training data in the for loop, the shape is (1, ntimes, 1).
This is the description of what model does:
First, the model does a convolution along the time axis for a single grid point.
It subtracts the output of the convolution from the input data.
It then does a convolution (along the time axis) of the output from the second layer.
The above steps are repeated for each of the 32736 (ngrid) grid points.
Here is the code:
import numpy as np
from tqdm import tqdm
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv1D, subtract

print(tf.__version__)     # 2.4.1
print(keras.__version__)  # 2.4.0

no_epochs = 1000
validation_split = 0
verbosity = 0

pred = np.ones(xtest.shape[1:3])
for i in tqdm(range(ngrid)):
    keras.backend.clear_session()
    inputs = Input(shape=(None, 1), batch_size=1, name='input_layer')
    smoth1 = Conv1D(1, kernel_size=90, padding='same', activation='linear')(inputs)
    diff = subtract([inputs, smoth1])
    smoth2 = Conv1D(1, kernel_size=30, padding='same', activation='linear')(diff)
    model = Model(inputs=inputs, outputs=smoth2)
    model.compile(optimizer='adam', loss='mse')
    model.fit(xtrain[:, :, i, :], ytrain[:, :, i, :], epochs=no_epochs,
              validation_split=validation_split, verbose=verbosity)
    pred[:, i] = model.predict(xtest[:, :, i, :]).squeeze()
    del model
I am looking for other alternatives that can speed up my code. Any suggestions are greatly appreciated.
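One direction worth sketching (untested, and assuming the same xtrain/ytrain/xtest arrays as above): since the architecture is identical for every grid point, the model can be built and compiled once outside the loop and its weights reset each iteration with set_weights, avoiding rebuilding and recompiling the graph ngrid times. Note that set_weights does not reset the Adam optimizer state; recompiling inside the loop would, at some extra cost.
import numpy as np
from tqdm import tqdm
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Conv1D, subtract

# Build and compile a single model up front
inputs = Input(shape=(None, 1), batch_size=1, name='input_layer')
smoth1 = Conv1D(1, kernel_size=90, padding='same', activation='linear')(inputs)
diff = subtract([inputs, smoth1])
smoth2 = Conv1D(1, kernel_size=30, padding='same', activation='linear')(diff)
model = Model(inputs=inputs, outputs=smoth2)
model.compile(optimizer='adam', loss='mse')
initial_weights = model.get_weights()  # snapshot of the fresh initialization

pred = np.ones(xtest.shape[1:3])
for i in tqdm(range(ngrid)):
    model.set_weights(initial_weights)  # restart from the same initialization
    model.fit(xtrain[:, :, i, :], ytrain[:, :, i, :], epochs=no_epochs, verbose=0)
    pred[:, i] = model.predict(xtest[:, :, i, :]).squeeze()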

Question about enabling/disabling dropout with keras functional API

I am using the Keras functional API to build a classifier, and I am using the training flag in the dropout layer to enable dropout when predicting new instances (in order to get an estimate of the uncertainty). To get the expected response one needs to repeat this prediction several times, with Keras randomly activating links in the dense layer, which is of course computationally expensive. Therefore, I would also like the option of not using dropout at the prediction phase, i.e., using all the network links. Does anyone know how I can do this? Below is a sample of what I am doing. I looked for a relevant parameter in predict, but it does not seem to have one (?). I can technically train the same model without the training flag at the dropout layer, but I do not want to do this (or rather, I want a cleaner solution than maintaining 2 different models).
from sklearn.datasets import make_circles
from keras.models import Sequential
from keras.utils import to_categorical
from keras.layers import Dense
from keras.layers import Dropout
import numpy as np
import keras
# generate a 2d classification sample dataset
X, y = make_circles(n_samples=100, noise=0.1, random_state=1)
n_train = 30
trainX, testX = X[:n_train, :], X[n_train:, :]
trainy, testy = y[:n_train], y[n_train:]
trainy = to_categorical(trainy)
testy = to_categorical(testy)
inputlayer = keras.layers.Input((2,))
d = keras.layers.Dense(500, activation='relu')(inputlayer)
d1 = keras.layers.Dropout(rate=0.3)(d, training=True)
out = keras.layers.Dense(2, activation='softmax')(d1)
model = keras.Model(inputs=inputlayer, outputs=out)
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
model.fit(x=trainX, y=trainy, validation_data=(testX, testy), epochs=1000, verbose=1)
# a prediction on a specific sample
print(model.predict(testX[0:1,:]))
# another prediction on the same sample
print(model.predict(testX[0:1,:]))
Running the above example I get the following output:
[[0.9230819 0.07691813]]
[[0.8222245 0.17777553]]
which is as expected: different class probabilities for the same input, since the dropout layer randomly (de)activates links.
Any suggestions on how I can enable/disable dropout at the prediction phase with the functional API?
Sure. You do not need to set the training flag when building the Dropout layer. After training your model, define this backend function:
from keras import backend as K

mc_func = K.function([model.input, K.learning_phase()],
                     [model.output])
Then call mc_func with your input and flag 1 to enable dropout, or 0 to disable it:
stochastic_pred = mc_func([some_input, 1])
deterministic_pred = mc_func([some_input, 0])
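On tensorflow.keras 2.x, where K.learning_phase() is deprecated, a minimal alternative sketch is to build the Dropout layer without training=True and pass the flag when calling the model directly:
# Dropout active: a stochastic forward pass (Monte Carlo dropout)
stochastic_pred = model(testX[0:1, :], training=True).numpy()

# Dropout disabled: a deterministic forward pass, like model.predict
deterministic_pred = model(testX[0:1, :], training=False).numpy()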

is it possible to do continuous training in keras for multi-class classification problem?

I am trying to do continuous training in Keras.
I built a Keras multi-class classification model, and afterwards I received new labels and values. I want to extend the model to the new class without retraining from scratch, which is why I am trying continuous training in Keras.
model.add(Dense(10, activation='sigmoid'))
model.compile(optimizer='rmsprop',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(training_data, labels, epochs=20, batch_size=1)
model.save("keras_model.h5")
After saving the model, I wanted to continue training, so I tried:
model1 = load_model("keras_model.h5")
model1.fit(new_input, new_label, epochs=20, batch_size=1)
model1.save("keras_model.h5")
This throws an error: the model previously had 10 classes, and now adding a new class causes the error below.
So my question is: is it possible to do continuous training in Keras for multi-class classification when a new class is added?
tensorflow.python.framework.errors_impl.InvalidArgumentError: Received a label value of 10 which is outside the valid range of [0, 10). Label values: 10
[[{{node loss/dense_7_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits}}]]
The typical approach in this type of situation is to define a common model that contains most of the inner layers and is reusable, and then a second model that defines the output layer and thus the number of classes. The inner model can be reused in subsequent outer models.
Untested example:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model

def make_inner_model():
    """ An example model that takes 42 features and outputs a
    transformation vector.
    """
    inp = Input(shape=(42,), name='in')
    h1 = Dense(80, activation='relu')(inp)
    h2 = Dense(40)(h1)
    h3 = Dense(60, activation='relu')(h2)
    out = Dense(32)(h3)
    return Model(inp, out)

def make_outer_model(inner_model, n_classes):
    inp = Input(shape=(42,), name='inp')
    hidden = inner_model(inp)
    out = Dense(n_classes, activation='softmax')(hidden)
    model = Model(inp, out)
    model.compile('adam', 'categorical_crossentropy')
    return model

inner_model = make_inner_model()
inner_model.save('inner_model_untrained.h5')

model1 = make_outer_model(inner_model, 10)
model1.summary()
# model1.fit()
# inner_model.save_weights('inner_model_weights_1.h5')

model2 = make_outer_model(inner_model, 12)
# model2.fit()
# inner_model.save_weights('inner_model_weights_2.h5')
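Because both outer models wrap the same inner_model instance, whatever the shared layers learn while fitting model1 is already in place when model2 is built; the commented save_weights calls just checkpoint that state. A hypothetical continuation for a later session (file names and class count below are assumptions for illustration):
inner_model.load_weights('inner_model_weights_1.h5')  # restore the shared layers
model3 = make_outer_model(inner_model, 15)            # hypothetical new class count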

Fine-tune InceptionV3 model not working as expected

I am trying to use transfer learning (fine-tuning) with InceptionV3: removing the last layer, turning training off for all the layers, and adding a single dense layer. When I look at the summary again, I do not see my added layer, and I get the exception below.
RuntimeError: You tried to call count_params on dense_7, but the layer isn't built. You can build it manually via: dense_7.build(batch_input_shape).
from keras import applications
from keras.layers import Dense

pretrained_model = applications.inception_v3.InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))

for layer in pretrained_model.layers:
    layer.trainable = False

pretrained_model.layers.pop()
layer = Dense(2, activation='sigmoid')
pretrained_model.layers.append(layer)
Calling summary again raises the above exception:
pretrained_model.summary()
I wanted to compile and fit the model:
pretrained_model.compile(optimizer=RMSprop(lr=0.0001),
                         loss='sparse_categorical_crossentropy', metrics=['acc'])
but the above line gives this error:
Could not interpret optimizer identifier:
You are using pop to remove the fully connected Dense layer at the end of the network, but this is already accomplished by the argument include_top=False. So you just need to initialize Inception with include_top=False and add the final Dense layer. In addition, since it's InceptionV3, I suggest you add GlobalAveragePooling2D() after the output of InceptionV3 to reduce overfitting. Here is the code:
from keras import applications
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D

pretrained_model = applications.inception_v3.InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))

# Freeze the pretrained base
for layer in pretrained_model.layers:
    layer.trainable = False

# Custom head on top of the frozen base
x = pretrained_model.output
x = GlobalAveragePooling2D()(x)  # highly recommended
predictions = Dense(2, activation='sigmoid')(x)

model = Model(inputs=pretrained_model.input, outputs=predictions)
model.summary()
This should give you the desired model to fine-tune.
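As for the "Could not interpret optimizer identifier" error: that message typically appears when the optimizer comes from a different Keras package than the model (for example tensorflow.keras.optimizers paired with a standalone keras model). A minimal sketch, assuming the standalone keras imports used above:
from keras.optimizers import RMSprop

model.compile(optimizer=RMSprop(lr=0.0001),
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])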

how to convert the Keras sequential API to functional API

I am new to NLP and trying to learn the skip-gram model from this site:
https://towardsdatascience.com/understanding-feature-engineering-part-4-deep-learning-methods-for-text-data-96c44370bbfa
The problem I run into is that the code below uses the sequential API of Keras, which does not support the Merge layer (used later in the code, shown below):
word_model.add(Embedding(vocab_size, embed_size,
                         embeddings_initializer="glorot_uniform",
                         input_length=1))
word_model.add(Reshape((embed_size, )))
So I am trying to convert it to the functional API:
word_model = Embedding(input_dim=vocab_size, output_dim=embed_size,
                       embeddings_initializer="glorot_uniform",
                       input_length=1)
word_model = Reshape(target_shape=(embed_size,))(word_model)
However, I am getting the following error:
Unexpectedly found an instance of type <class 'keras.layers.embeddings.Embedding'>. Expected a symbolic tensor instance.
I have tried reshaping the layer and also the backend reshape, but it is still not working.
Please suggest how to convert this or make it work.
Thanks in advance.
from keras.layers import Merge
from keras.layers.core import Dense, Reshape
from keras.layers.embeddings import Embedding
from keras.models import Sequential

# build skip-gram architecture
word_model = Sequential()
word_model.add(Embedding(vocab_size, embed_size,
                         embeddings_initializer="glorot_uniform",
                         input_length=1))
word_model.add(Reshape((embed_size, )))

context_model = Sequential()
context_model.add(Embedding(vocab_size, embed_size,
                            embeddings_initializer="glorot_uniform",
                            input_length=1))
context_model.add(Reshape((embed_size,)))

model = Sequential()
model.add(Merge([word_model, context_model], mode="dot"))
model.add(Dense(1, kernel_initializer="glorot_uniform", activation="sigmoid"))
model.compile(loss="mean_squared_error", optimizer="rmsprop")

# view model summary
print(model.summary())

# visualize model structure
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot

SVG(model_to_dot(model, show_shapes=True, show_layer_names=False,
                 rankdir='TB').create(prog='dot', format='svg'))
You need an input layer first and then pass that on to the embedding layer. The following is an example using two inputs (one for the target word and one for the context word):
import keras
from keras.layers import Embedding, Reshape

# One word index per sample, matching input_length=1 in the Embedding layers
target_input = keras.layers.Input(shape=(1,))
context_input = keras.layers.Input(shape=(1,))

target_emb = Embedding(input_dim=vocab_size, output_dim=embed_size,
                       embeddings_initializer="glorot_uniform",
                       input_length=1)(target_input)
target_emb = Reshape((embed_size,))(target_emb)

context_emb = Embedding(input_dim=vocab_size, output_dim=embed_size,
                        embeddings_initializer="glorot_uniform",
                        input_length=1)(context_input)
context_emb = Reshape((embed_size,))(context_emb)  # note: reshape the context embedding, not target_emb

# Add the remaining layers here...
model = keras.models.Model(inputs=[target_input, context_input], outputs=output)
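To fill in the "Add the remaining layers here" step, a minimal sketch could replace the old Merge(mode="dot") with keras.layers.Dot (the names below match the snippet above):
from keras.layers import Dot, Dense

# Dot product of the two (batch, embed_size) embedding vectors -> (batch, 1)
dots = Dot(axes=1)([target_emb, context_emb])
output = Dense(1, kernel_initializer="glorot_uniform", activation="sigmoid")(dots)

model = keras.models.Model(inputs=[target_input, context_input], outputs=output)
model.compile(loss="mean_squared_error", optimizer="rmsprop")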
