I am trying to use transfer learning (fine-tuning) with InceptionV3: removing the last layer, freezing training for all the existing layers, and adding a single dense layer. When I look at the summary again, I do not see my added layer, and I get an exception.
RuntimeError: You tried to call count_params on dense_7, but the
layer isn't built. You can build it manually via:
dense_7.build(batch_input_shape).
from keras import applications
pretrained_model = applications.inception_v3.InceptionV3(weights = "imagenet", include_top=False, input_shape = (299, 299, 3))
from keras.layers import Dense
for layer in pretrained_model.layers:
    layer.trainable = False
pretrained_model.layers.pop()
layer = (Dense(2, activation='sigmoid'))
pretrained_model.layers.append(layer)
Looking at the summary again gives the above exception.
pretrained_model.summary()
I wanted to compile and fit the model, but:
pretrained_model.compile(optimizer=RMSprop(lr=0.0001),
                         loss='sparse_categorical_crossentropy', metrics=['acc'])
The above line gives this error:
Could not interpret optimizer identifier:
You are using pop to remove the fully connected (Dense) layer at the end of the network, but this is already accomplished by the argument include_top=False. So you just need to initialize Inception with include_top=False and add the final Dense layer yourself. In addition, since it is InceptionV3, I suggest you add GlobalAveragePooling2D() after the output of InceptionV3 to reduce overfitting. Here is the code:
from keras import applications
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
pretrained_model = applications.inception_v3.InceptionV3(weights = "imagenet", include_top=False, input_shape = (299, 299, 3))
x = pretrained_model.output
x = GlobalAveragePooling2D()(x)  # Highly recommended
predictions = Dense(2, activation='sigmoid')(x)
model = Model(inputs=pretrained_model.input, outputs=predictions)
for layer in pretrained_model.layers:
    layer.trainable = False
model.summary()
This should give you the desired model to fine tune.
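As for the "Could not interpret optimizer identifier" error from the question, that usually means RMSprop was never imported, or was imported from a different Keras namespace than the model. A minimal sketch of compiling and fitting the model above, assuming the standalone keras package throughout and integer class labels (train_images and train_labels are placeholders, not from the original post):
from keras.optimizers import RMSprop

model.compile(optimizer=RMSprop(lr=0.0001),
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])
# train_images: preprocessed 299x299x3 images; train_labels: integer labels (0 or 1)
model.fit(train_images, train_labels, epochs=5, batch_size=32)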
The end goal is to determine the important features in a neural network model built with tf.keras OR KerasRegressor. The logic has been explained in this question, which uses eli5 to introduce noise for each variable and measure the change in outcome.
I have been attempting for hours to implement this with no luck.
Question:
Why won't eli5 for feature importance work on either of my models?
My Current Error:
ValueError: Classification metrics can't handle a mix of multilabel-indicator and continuous-multioutput targets
I have built the model in both tf.keras and KerasRegressor, having read somewhere that eli5 doesn't work with tensorflow.keras. I admit I do not truly understand the difference between KerasRegressor and tf.keras.
Code for Keras Regressor Model:
def base_model():
    # 1- Instantiate Model
    modelNEW = keras.Sequential()
    # 2- Specify Shape of First Layer
    modelNEW.add(layers.Dense(512, activation='relu', input_shape=ourInputShape))
    # 3- Add the layers
    modelNEW.add(layers.Dense(3, activation='softmax'))  # softmax returns an array of probability scores; here we predict one of CSCANCEL, MEMBERCANCEL, ACTIVE
    modelNEW.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
    return modelNEW
# *** THIS IS SUPPOSED TO PREVENT OVERFITTING ***
from tensorflow.keras.callbacks import EarlyStopping
callbacks = [
    EarlyStopping(patience=2)
]
yTrain = keras.utils.to_categorical(yTrain, 3)
yValidation = keras.utils.to_categorical(yValidation, 3)
currentModel = KerasRegressor(build_fn=base_model, epochs=100, batch_size=50, shuffle='True')
history = currentModel.fit(xTrain, yTrain)
Code for tf.Keras model:
The only change is the model name:
modelNEW = keras.Sequential()
modelNEW.add(layers.Dense(512, activation = 'relu', input_shape = ourInputShape))
modelNEW.add(layers.Dense(3, activation='softmax'))  # softmax returns an array of probability scores; here we predict one of CSCANCEL, MEMBERCANCEL, ACTIVE
modelNEW.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
# *** THIS IS SUPPOSED TO PREVENT OVERFITTING ***
from tensorflow.keras.callbacks import EarlyStopping
callbacks = [
    EarlyStopping(patience=2)
]
yTrain = keras.utils.to_categorical(yTrain, 3)
yValidation = keras.utils.to_categorical(yValidation, 3)
history = modelNEW.fit(xTrain, yTrain, epochs=100, batch_size=50, shuffle="True")
Attempting to implement eli5:
from keras.wrappers.scikit_learn import KerasClassifier, KerasRegressor
import eli5
from eli5.sklearn import PermutationImportance
from eli5 import show_weights
perm = PermutationImportance(currentModel, scoring="accuracy", random_state=1).fit(xTrain,yTrain)
eli5.show_weights(perm, feature_names = xTrain.columns.tolist())
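One hedged way past the "multilabel-indicator vs continuous-multioutput" error is to score a classifier on integer labels instead of one-hot targets, so the accuracy scorer compares arrays of the same kind. The sketch below assumes a TensorFlow version that still ships the scikit-learn wrapper (tensorflow.keras.wrappers.scikit_learn) and that yTrainInt holds the original integer class labels; both are assumptions, not part of the original code.
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from eli5.sklearn import PermutationImportance
import eli5

def base_model_sparse():
    # Same architecture as base_model, but with a sparse loss so it can be
    # trained and scored directly on integer labels (0, 1, 2).
    modelNEW = keras.Sequential()
    modelNEW.add(layers.Dense(512, activation='relu', input_shape=ourInputShape))
    modelNEW.add(layers.Dense(3, activation='softmax'))
    modelNEW.compile(optimizer='rmsprop',
                     loss='sparse_categorical_crossentropy',
                     metrics=['accuracy'])
    return modelNEW

clf = KerasClassifier(build_fn=base_model_sparse, epochs=100, batch_size=50, verbose=0)
clf.fit(xTrain, yTrainInt)  # yTrainInt: integer labels, NOT the to_categorical output

perm = PermutationImportance(clf, scoring='accuracy', random_state=1).fit(xTrain, yTrainInt)
eli5.show_weights(perm, feature_names=xTrain.columns.tolist())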
I am trying to classify 2 categories with transfer learning. After preprocessing my data I want to apply 'InceptionResNetV2'. I want to remove the last layer of this Keras application and add a new layer.
I wrote the following script to do this:
irv2 = tf.keras.applications.inception_resnet_v2.InceptionResNetV2()
irv2.summary()
x = irv2.layers[-1].output
x = Dropout(0.25)(x)
predictions = Dense(2, activation='softmax')(x)
model = Model(inputs=mobile.input, outputs=predictions)
Then an error occurred:
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-40-911de74d9eaf> in <module>()
5 predictions = Dense(2, activation='softmax')(x)
6
----> 7 model = Model(inputs=mobile.input, outputs=predictions)
NameError: name 'Model' is not defined
If there is another way to remove the last layer and add a new layer (predictions = Dense(2, activation='softmax')), please let me know.
This is my full code.
You can use this code snippet to define your transfer learning model.
Here, we are using weights trained on the ImageNet dataset, ignoring the final layer (the 1000-neuron layer that was used to classify the 1000 ImageNet classes), and adding our custom layers. In this example we add a GlobalAveragePooling (GAP) layer followed by a dense layer for binary classification.
from tensorflow import keras
input_layer = keras.layers.Input(shape=(224, 224, 3))
irv2 = keras.applications.Xception(weights='imagenet',include_top=False,input_tensor = input_layer)
global_avg = keras.layers.GlobalAveragePooling2D()(irv2.output)
dense_1 = keras.layers.Dense(1,activation = 'sigmoid')(global_avg)
model = keras.Model(inputs=irv2.inputs,outputs=dense_1)
model.summary()
The error you faced could possibly be due to the import changes between tf 1.x and tf 2.x
Try out any one of the below import methods depending on your tensorflow version. It should fix the error.
from tensorflow.keras.models import Model
or
from tensorflow.keras import Model
Also make sure you import everything either from tensorflow.keras or from keras. Mixing functions imported from both libraries in the same script can cause incompatibility errors.
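For example, a consistent set of tf.keras imports for the code in the question might look like this (a sketch, assuming TF 2.x):
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.applications import InceptionResNetV2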
-1 will give you the last Dense layer, but what you really want is the layer above that, which is -2.
The input should be the Inception model's input layer.
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model
irv2 = tf.keras.applications.inception_resnet_v2.InceptionResNetV2()
predictions = Dense(2, activation='softmax')(irv2.layers[-2].output)
model = Model(inputs=irv2.input, outputs=predictions)
model.summary()
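If the goal is pure transfer learning, you will usually also want to freeze the pretrained layers before compiling. A minimal sketch; the optimizer and loss here are assumptions, not from the original post:
# Freeze the pretrained backbone so only the new Dense head is trained
for layer in irv2.layers:
    layer.trainable = False

model.compile(optimizer='adam',
              loss='categorical_crossentropy',  # assumes one-hot labels for the 2 classes
              metrics=['accuracy'])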
I have a pretrained model (VGG16) from Keras. I'm trying to add a BatchNormalization layer after every Conv2D layer by looping over the layers. However, it seems that I cannot connect all of them together. Here is my code.
from keras.applications import VGG16
from keras.layers import BatchNormalization, Input
from keras.models import Model
input_tensor = Input(shape=(256, 256, 3))
pretrain = VGG16(weights="imagenet", include_top=False, input_tensor=input_tensor)
model = pretrain.layers[0].input
for i in range(len(pretrain.layers)):
    x = pretrain.layers[i].output
    if "conv" in pretrain.layers[i].name:
        x = pretrain.layers[i].output
        x = BatchNormalization(axis=-1)(x)
model = Model(input=model.input, output=x)
May I have your suggestions? Thank you in advance.
You should build a new model by reusing the layers from VGG and inserting your own where you want.
The problem with your current code is that each BatchNormalization layer is connected to a conv layer's output, but nothing is connected to the BatchNormalization output, so the new layers never become part of the final graph.
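A minimal sketch of that idea (assuming the same VGG16 setup as in the question): walk over the pretrained layers, chain each one onto a running tensor (which reuses its ImageNet weights), and insert a BatchNormalization right after every conv layer so that the new layers actually feed the rest of the network:
from keras.applications import VGG16
from keras.layers import BatchNormalization, Input
from keras.models import Model

input_tensor = Input(shape=(256, 256, 3))
pretrain = VGG16(weights="imagenet", include_top=False, input_tensor=input_tensor)

x = input_tensor
for layer in pretrain.layers[1:]:           # skip the InputLayer
    x = layer(x)                            # reuse the pretrained layer and its weights
    if "conv" in layer.name:
        x = BatchNormalization(axis=-1)(x)  # the BN output now feeds the next layer
model = Model(inputs=input_tensor, outputs=x)
model.summary()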
I am using the Keras functional API to build a classifier, and I am using the training flag in the dropout layer to enable dropout when predicting new instances (in order to get an estimate of the uncertainty). To get the expected response one needs to repeat this prediction several times, with Keras randomly (de)activating links in the dense layer, which is of course computationally expensive. Therefore, I would also like the option of not using dropout at the prediction phase, i.e., using all the network links. Does anyone know how I can do this? Following is a sample of what I am doing. I checked whether predict has any relevant parameter, but it does not seem to (?). I could technically train the same model without the training flag at the dropout layer, but I do not want to do this (or rather, I want a cleaner solution than keeping 2 different models).
from sklearn.datasets import make_circles
from keras.models import Sequential
from keras.utils import to_categorical
from keras.layers import Dense
from keras.layers import Dropout
import numpy as np
import keras
# generate a 2d classification sample dataset
X, y = make_circles(n_samples=100, noise=0.1, random_state=1)
n_train = 30
trainX, testX = X[:n_train, :], X[n_train:, :]
trainy, testy = y[:n_train], y[n_train:]
trainy = to_categorical(trainy)
testy = to_categorical(testy)
inputlayer = keras.layers.Input((2,))
d = keras.layers.Dense(500, activation = 'relu')(inputlayer)
d1 = keras.layers.Dropout(rate = .3)(d,training = True)
out = keras.layers.Dense(2, activation = 'softmax')(d1)
model = keras.Model(inputs = inputlayer, outputs = out)
model.compile(loss = 'categorical_crossentropy',metrics = ['accuracy'],optimizer='adam')
model.fit(x = trainX, y = trainy, validation_data=(testX, testy),epochs=1000, verbose=1)
# a prediction on a specific sample
print(model.predict(testX[0:1,:]))
# another prediction on the same sample
print(model.predict(testX[0:1,:]))
Running the above example I get the following output:
[[0.9230819 0.07691813]]
[[0.8222245 0.17777553]]
which is as expected, different class probabilities for the same input, since there is a random (de)activation of the links from the dropout layer.
Any suggestions on how I can enable/disable dropout at the prediction phase with the functional API?
Sure, you do not need to set the training flag when building the Dropout layer. After training your model you define this function:
import keras.backend as K

mc_func = K.function([model.input, K.learning_phase()],
                     [model.output])
Then you call mc_func with your input and flag 1 to enable dropout, or 0 to disable it:
stochastic_pred = mc_func([some_input, 1])
deterministic_pred = mc_func([some_input, 0])
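A small usage sketch on top of that: for an uncertainty estimate you would typically run the stochastic version many times and look at the spread of the predictions (n_samples here is just an arbitrary choice, and some_input is whatever batch you want to predict on):
import numpy as np

n_samples = 100
stochastic_preds = np.array([mc_func([some_input, 1])[0] for _ in range(n_samples)])
mean_pred = stochastic_preds.mean(axis=0)  # Monte Carlo estimate of the class probabilities
std_pred = stochastic_preds.std(axis=0)    # spread as a rough uncertainty measure

deterministic_pred = mc_func([some_input, 0])[0]  # dropout disabled, all links used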
I'm new to Keras.
My neural network structure is here:
neural network structure
My idea is:
import keras.backend as KBack
import tensorflow as tf
#...some code here
model = Sequential()
hidden_units = 4
layer1 = Dense(
    hidden_units,
    input_dim=len(InputIndex),
    activation='sigmoid'
)
model.add(layer1)
# layer1_bias = layer1.get_weights()[1][0]
layer2 = Dense(
    1, activation='sigmoid',
    use_bias=False
)
model.add(layer2)
# KBack.bias_add(model.output, layer1_bias[0])
I know this is not working because layer1_bias[0] is not a tensor, but I have no idea how to fix it. Or maybe somebody has another solution.
Thanks.
You get the error because bias_add expects a Tensor and you are passing it a float (the actual value of the bias). Also, be aware that your hidden layer actually has 3 biases (one for each node). If you want to add the bias of the first node to your output layer, this should work:
import keras.backend as K
from keras.layers import Dense, Activation
from keras.models import Sequential
model = Sequential()
layer1 = Dense(3, input_dim=2, activation='sigmoid')
layer2 = Dense(1, activation=None, use_bias=False)
activation = Activation('sigmoid')
model.add(layer1)
model.add(layer2)
K.bias_add(model.output, layer1.bias[0:1]) # slice like this to not lose a dimension
model.add(activation)
print(model.summary())
Note that, to be 'correct' (according to the definition of what a dense layer does), you should add the bias first, then the activation.
Also, your code is not really in line with the picture of your network. In the picture, one single shared bias is added to each of the nodes in the network. You can do this with the functional API. The idea is to disable the use of biases in the hidden layer and the output layers, and to manually add a bias variable that you define yourself and that will be shared by the layers. I'm using tensorflow for tf.add() since that supports broadcasting:
from keras.layers import Dense, Lambda, Input
from keras.models import Model
import keras.backend as K
import tensorflow as tf
# Define the shared bias as a custom keras variable
shared_bias = K.variable(value=[0], name='shared_bias')
input_layer = Input(shape=(2,))
# Disable biases in the hidden layer
dense_1 = Dense(units=3, use_bias=False, activation=None)(input_layer)
# Manually add the shared bias
dense_1 = Lambda(lambda x: tf.add(x, shared_bias))(dense_1)
# Disable bias in output layer
output_layer = Dense(units=1, use_bias=False)(dense_1)
# Manually add the bias variable
output_layer = Lambda(lambda x: tf.add(x, shared_bias))(output_layer)
model = Model(inputs=input_layer, outputs=output_layer)
print(model.summary())
This assumes that your shared bias is not trainable though.
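If you do want the shared bias to be trainable, one option (a sketch, not part of the original answer) is to wrap the addition in a small custom layer and call the same layer instance in both places, so the weight is shared and updated by backprop. SharedBias below is a hypothetical helper, not something shipped with Keras:
from keras.layers import Layer, Dense, Input
from keras.models import Model

class SharedBias(Layer):
    # Adds a single trainable scalar bias; reusing the same instance shares the weight.
    def build(self, input_shape):
        self.bias = self.add_weight(name='shared_bias', shape=(1,),
                                    initializer='zeros', trainable=True)
        super(SharedBias, self).build(input_shape)

    def call(self, inputs):
        return inputs + self.bias

shared_bias_layer = SharedBias()
inp = Input(shape=(2,))
hidden = Dense(units=3, use_bias=False, activation=None)(inp)
hidden = shared_bias_layer(hidden)   # shared, trainable bias on the hidden layer
out = Dense(units=1, use_bias=False)(hidden)
out = shared_bias_layer(out)         # same instance => same bias variable
trainable_model = Model(inputs=inp, outputs=out)
print(trainable_model.summary())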