My concern stems from the fact that I currently have a preprocessing layer that alters the input for the transfer-learning base model, and I want to add several more preprocessing layers for data augmentation.
I want the base model's preprocessing to apply to test data during predict() calls, but I don't want the data-augmentation layers to do so. According to the Keras documentation, "Data augmentation is inactive at test time so input images will only be augmented during calls to Model.fit (not Model.evaluate or Model.predict)." Does the same hold for a base-model preprocessing layer?
Model Construction:
# layer construction
from tensorflow import keras
from tensorflow.keras import layers, applications

input_layer = layers.Input(shape=(targetWidth, targetHeight, 3))
preprocc_input = applications.inception_v3.preprocess_input
base_model = applications.InceptionV3(
    input_shape=(targetWidth, targetHeight, 3),
    include_top=False,
)
base_model.trainable = False
global_avg_pool_layer = layers.GlobalAveragePooling2D()
fc_layer = layers.Dense(1024, activation='relu')
#dropout_layer = layers.Dropout(0.2)
output_layer = layers.Dense(
    units=1,
    activation=keras.activations.sigmoid  # 'softmax'
)

# layer connecting
x = preprocc_input(input_layer)
x = base_model(x, training=False)
x = global_avg_pool_layer(x)
x = fc_layer(x)
#x = dropout_layer(x)
predictions = output_layer(x)
model = keras.Model(input_layer, predictions)
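For reference, a minimal sketch of how augmentation layers could slot into this graph, assuming the built-in tf.keras preprocessing layers (RandomFlip, RandomRotation; TF >= 2.6): Keras deactivates augmentation layers outside of fit(), whereas preprocess_input is a plain function baked into the graph, so it runs during predict() as well.
# Hypothetical sketch reusing input_layer/base_model from the code above.
x = layers.RandomFlip('horizontal')(input_layer)   # inactive at test time
x = layers.RandomRotation(0.1)(x)                  # inactive at test time
x = applications.inception_v3.preprocess_input(x)  # always applied
x = base_model(x, training=False)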
I have many models already trained, each of which answers a simple yes/no question. Pseudocode:
model_dog = keras.load('is_dog')
model_cat = keras.load('is_cat')
model_rat = keras.load('is_rat')
image = load_photo_as_numpy_array('photo.jpg')
multi_class = [ m.predict(image) for m in (model_dog,model_cat,model_rat) ]
This works fine, but it is (a) slow, because inference runs sequentially instead of in parallel (I have several hundred such models, not just 3), and (b) much more complex to use than if I had ONE model that does multi-class classification.
What I want is:
model = keras.concat_horizontal([ model_dog, model_cat, model_rat ])
model.save('combined_model')
Then whenever I want to use the combined model, it is as simple as:
model = keras.load('combined_model')
multi_class = model.predict(image)
This way, I can add a new class to the combined model by training one simple model, for example one that recognizes a fish.
As I suggested in the comments, you can merge multiple models into one new model and predict using this new model.
First, I write a function that merges models and returns a new combined model, which is what you want:
import numpy as np
import tensorflow as tf

def concat_horizontal(models, input_shape):
    # Feed one shared input to every sub-model, then concatenate their outputs.
    inputs = tf.keras.layers.Input(shape=input_shape)
    hidden = [model(inputs) for model in models]
    output = tf.keras.layers.concatenate(hidden)
    return tf.keras.Model(inputs=inputs, outputs=output)
Let's explore an example. Say we want to merge two sequential models like these:
def model_1():
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(150, activation='relu'),
        tf.keras.layers.Dense(200, activation='relu'),
        tf.keras.layers.Dense(150, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')], name="model1")
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss=tf.keras.losses.sparse_categorical_crossentropy,
                  metrics=['accuracy'])
    return model

def model_2():
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
        tf.keras.layers.Dense(150, activation='relu'),
        tf.keras.layers.Dense(150, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')], name="model2")
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss=tf.keras.losses.sparse_categorical_crossentropy,
                  metrics=['accuracy'])
    return model
model1 = model_1()
model2 = model_2()
Let's use MNIST as the training dataset for both of our models:
import tensorflow_datasets as tfds

ds_1 = tfds.load('mnist', split='train', as_supervised=True)
ds_2 = tfds.load('mnist', split='test', as_supervised=True)

def map_fn(image, label):
    image = image / 255
    return image, label

ds_1 = ds_1.map(map_fn).shuffle(1024).batch(32)
ds_2 = ds_2.map(map_fn).shuffle(1024).batch(32)
Now we can train the models, save them, and then load them like this:
model1.fit(ds_1, epochs=2, validation_data=ds_1)
model2.fit(ds_2, epochs=2, validation_data=ds_2)
model1.save('model1.h5')
model2.save('model2.h5')
model3 = tf.keras.models.load_model('model1.h5')
model4 = tf.keras.models.load_model('model2.h5')
So we have two separate models (model3, model4) and want to merge them into a new one. Pass them, along with the input shape (in this case the MNIST data shape), to the function we wrote above:
new_model = concat_horizontal([model3,model4],(28,28,1))
Now, if we plot this new model:
tf.keras.utils.plot_model(new_model)
It's time to get predictions from the new model:
import matplotlib.pyplot as plt

# Grab a single example image from the dataset.
sample = ds_1.unbatch().take(1)
for img, lbl in sample:
    img = tf.expand_dims(img, axis=0)

pred = new_model.predict(img)
# The output concatenates both models' 10-way softmax vectors; split them apart.
pred = np.reshape(pred, (2, 10))
results = np.argmax(pred, axis=1)
print(results)

plt.imshow(np.array(img).squeeze())
plt.show()
In my case, both predictions are classified as 4.
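To complete the save/load workflow from the question, the combined model can be persisted like any other Keras model (a sketch; 'combined_model.h5' is a placeholder path):
new_model.save('combined_model.h5')
combined = tf.keras.models.load_model('combined_model.h5')
multi_class = combined.predict(img)  # one call instead of one per sub-model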
I am trying to create image embeddings for deep ranking using a triplet loss function. The idea is to take a pretrained CNN (e.g. ResNet50 or VGG16), remove the FC layers, and add an L2 normalization step to retrieve unit vectors, which can then be compared via a distance metric (e.g. cosine similarity). As far as I understand, the vectors that come out of a pretrained CNN are not optimal, but they are a good start. By adding the triplet loss function we can re-train the network to keep similar pictures 'close' to each other and different pictures 'far' apart in the feature space.
Inspired by this notebook, I tried to set up the following code, but I get an error: ValueError: The name "conv1_pad" is used 3 times in the model. All layer names should be unique.
# Anchor, Positive and Negative are numpy arrays of shape (200, 256, 256, 3); same for the test images
from keras import backend as K
from keras.applications import ResNet50
from keras.layers import Flatten, Input, Lambda, concatenate
from keras.models import Model

pic_size = 256

def shared_dnn(inp):
    base_model = ResNet50(weights='imagenet', include_top=False,
                          input_shape=(3, pic_size, pic_size), input_tensor=inp)
    x = base_model.output
    x = Flatten()(x)
    x = Lambda(lambda x: K.l2_normalize(x, axis=1))(x)
    for layer in base_model.layers[15:]:
        layer.trainable = False
    return x

anchor_input = Input((3, pic_size, pic_size), name='anchor_input')
positive_input = Input((3, pic_size, pic_size), name='positive_input')
negative_input = Input((3, pic_size, pic_size), name='negative_input')

encoded_anchor = shared_dnn(anchor_input)
encoded_positive = shared_dnn(positive_input)
encoded_negative = shared_dnn(negative_input)

merged_vector = concatenate([encoded_anchor, encoded_positive, encoded_negative],
                            axis=-1, name='merged_layer')
model = Model(inputs=[anchor_input, positive_input, negative_input], outputs=merged_vector)
# ValueError: The name "conv1_pad" is used 3 times in the model. All layer names should be unique.

model.compile(loss=triplet_loss, optimizer=adam_optim)
model.fit([Anchor, Positive, Negative],
          y=Y_dummy,
          validation_data=([Anchor_test, Positive_test, Negative_test], Y_dummy2),
          batch_size=512, epochs=500)
I am new to Keras and I am not quite sure how to solve this. The author in the link above creates his own CNN from scratch, but I would like to build on ResNet (or VGG16). How can I configure ResNet50 to use a triplet loss function? (The link above also includes the source code for the triplet loss function.)
In your ResNet50 definition, you've written
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(3, pic_size, pic_size), input_tensor=inp)
Remove the input_tensor argument and pass the shape through input_shape instead; then connect each branch's input to the backbone by calling the base model on it.
Since you're using the TF backend, as you mentioned, the input is channels-last: it should be (pic_size, pic_size, 3), i.e. (256, 256, 3), not (3, pic_size, pic_size).
def shared_dnn(inp):
    base_model = ResNet50(weights='imagenet', include_top=False,
                          input_shape=img_shape)
    for layer in base_model.layers[15:]:
        layer.trainable = False
    # Connect this branch's input to the backbone, then L2-normalize the embedding.
    x = base_model(inp)
    x = Flatten()(x)
    x = Lambda(lambda t: K.l2_normalize(t, axis=1))(x)
    return x
img_shape = (256, 256, 3)
anchor_input = Input(img_shape, name='anchor_input')
positive_input = Input(img_shape, name='positive_input')
negative_input = Input(img_shape, name='negative_input')

encoded_anchor = shared_dnn(anchor_input)
encoded_positive = shared_dnn(positive_input)
encoded_negative = shared_dnn(negative_input)

merged_vector = concatenate([encoded_anchor, encoded_positive, encoded_negative],
                            axis=-1, name='merged_layer')
model = Model(inputs=[anchor_input, positive_input, negative_input],
              outputs=merged_vector)

model.compile(loss=triplet_loss, optimizer=adam_optim)
model.fit([Anchor, Positive, Negative],
          y=Y_dummy,
          validation_data=([Anchor_test, Positive_test, Negative_test], Y_dummy2),
          batch_size=512, epochs=500)
The model plot is as follows:
[model plot image]
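One caveat: each call to shared_dnn above builds a fresh ResNet50, so the three branches do not actually share weights. If the triplet setup should use a single shared encoder (as the name shared_dnn suggests), a common pattern is to build the embedding model once and call it on each input; a minimal sketch under that assumption:
# Build the encoder once so all three branches reuse the same weights.
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=img_shape)
x = Flatten()(base_model.output)
x = Lambda(lambda t: K.l2_normalize(t, axis=1))(x)
encoder = Model(base_model.input, x, name='shared_encoder')
# Calling the same Model instance on each input shares its layers.
encoded_anchor = encoder(anchor_input)
encoded_positive = encoder(positive_input)
encoded_negative = encoder(negative_input)
# merged_vector and model are then built exactly as above.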
Hello, I have a question about Keras.
Currently I want to implement a network that uses the same CNN model on two input images and feeds the two CNN outputs to a Dense model.
For example:
def cnn_model():
    input = Input(shape=(None, None, 3))
    x = Conv2D(8, (3, 3), strides=(1, 1))(input)
    x = GlobalAvgPool2D()(x)
    model = Model(input, x)
    return model

def fc_model(cnn1, cnn2):
    input_1 = cnn1.output
    input_2 = cnn2.output
    input = concatenate([input_1, input_2])
    x = Dense(1, input_shape=(None, 16))(input)
    x = Activation('sigmoid')(x)
    model = Model([cnn1.input, cnn2.input], x)
    return model

def main():
    cnn1 = cnn_model()
    cnn2 = cnn_model()
    model = fc_model(cnn1, cnn2)
    model.compile(optimizer='adam', loss='mean_squared_error')
    model.fit(x=[image1, image2], y=[1.0, 1.0], batch_size=1, epochs=1)
I want to implement a model something like this and train it, but I get the error message below:
'All layer names should be unique'
Actually, I want to use only one CNN model as a feature extractor and finally use the two features to predict one float value in the range 0.0 ~ 1.0.
So the whole system: take two images, extract features from the same CNN model, and feed those features to a Dense model to get one floating-point value.
Please help me implement this system and train it.
Thank you.
See the section of the Keras documentation on shared layers:
https://keras.io/getting-started/functional-api-guide/
A code snippet from the documentation above demonstrating this:
# This layer can take as input a matrix
# and will return a vector of size 64
shared_lstm = LSTM(64)
# When we reuse the same layer instance
# multiple times, the weights of the layer
# are also being reused
# (it is effectively *the same* layer)
encoded_a = shared_lstm(tweet_a)
encoded_b = shared_lstm(tweet_b)
# We can then concatenate the two vectors:
merged_vector = keras.layers.concatenate([encoded_a, encoded_b], axis=-1)
# And add a logistic regression on top
predictions = Dense(1, activation='sigmoid')(merged_vector)
# We define a trainable model linking the
# tweet inputs to the predictions
model = Model(inputs=[tweet_a, tweet_b], outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit([data_a, data_b], labels, epochs=10)
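Applied to the CNN in the question, the same shared-layer idea looks roughly like this (a sketch, assuming the layer imports already used in the question's code):
# Build cnn_model() once; calling the instance twice shares all of its weights.
cnn = cnn_model()
input_a = Input(shape=(None, None, 3))
input_b = Input(shape=(None, None, 3))
merged = concatenate([cnn(input_a), cnn(input_b)])
out = Dense(1, activation='sigmoid')(merged)
model = Model([input_a, input_b], out)
model.compile(optimizer='adam', loss='mean_squared_error')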
I have a sequential model that I built in Keras.
I am trying to figure out how to change the shape of the input. In the following example
model = Sequential()
model.add(Dense(32, input_shape=(500,)))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
let's say that I want to build a new model with a different input shape; conceptually it would look like this:
model1 = model
model1.layers[0] = Dense(32, input_shape=(250,))
Is there a way to modify the model's input shape?
Somewhat related, so hopefully someone will find this useful: if you have an existing model whose input is a placeholder that looks like (None, None, None, 3), for example, you can load the model and replace the first layer with a concretely shaped input. This kind of transformation is very useful when, for example, you want to use your model in iOS CoreML (in my case the input of the model was an MLMultiArray instead of a CVPixelBuffer, and the model compilation failed).
from keras.models import load_model
from keras import backend as K
from keras.engine import InputLayer
import coremltools
model = load_model('your_model.h5')
# Create a new input layer to replace the (None,None,None,3) input layer :
input_layer = InputLayer(input_shape=(272, 480, 3), name="input_1")
# Save and convert :
model.layers[0] = input_layer
model.save("reshaped_model.h5")
coreml_model = coremltools.converters.keras.convert('reshaped_model.h5')
coreml_model.save('MyPredictor.mlmodel')
Think about what changing the input shape in that situation would mean.
Your first model
model.add(Dense(32, input_shape=(500,)))
has a dense layer that really is a 500x32 weight matrix.
If you changed your input to 250 elements, the layer's weight matrix and the input dimension would mismatch.
If, however, what you were trying to achieve was to reuse the trained parameters of the last layer from your first 500-element-input model, you could get those weights with get_weights, then rebuild a new model and load the values into it with set_weights:
model1 = Sequential()
model1.add(Dense(32, input_shape=(250,)))
model1.add(Dense(10, activation='softmax'))
# Copy the trained weights of the last layer from the old model.
model1.layers[1].set_weights(model.layers[1].get_weights())
Keep in mind that model1's first layer (i.e. model1.layers[0]) would still be untrained.
Here is another solution that doesn't define each layer of the model from scratch. The key for me was to use "_layers" instead of "layers"; the latter only seems to return a copy.
import keras
import numpy as np

def get_model():
    old_input_shape = (20, 20, 3)
    model = keras.models.Sequential()
    model.add(keras.layers.Conv2D(9, (3, 3), padding="same", input_shape=old_input_shape))
    model.add(keras.layers.MaxPooling2D((2, 2)))
    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(1, activation="sigmoid"))
    model.compile(loss='binary_crossentropy',
                  optimizer=keras.optimizers.Adam(lr=0.0001),
                  metrics=['acc'])
    model.summary()
    return model

def change_model(model, new_input_shape=(None, 40, 40, 3)):
    # replace input shape of first layer
    model._layers[1].batch_input_shape = new_input_shape

    # feel free to modify additional parameters of other layers, for example...
    model._layers[2].pool_size = (8, 8)
    model._layers[2].strides = (8, 8)

    # rebuild model architecture by exporting and importing via json
    new_model = keras.models.model_from_json(model.to_json())
    new_model.summary()

    # copy weights from old model to new one
    for layer in new_model.layers:
        try:
            layer.set_weights(model.get_layer(name=layer.name).get_weights())
        except Exception:
            print("Could not transfer weights for layer {}".format(layer.name))

    # test new model on a random input image
    X = np.random.rand(10, 40, 40, 3)
    y_pred = new_model.predict(X)
    print(y_pred)

    return new_model

if __name__ == '__main__':
    model = get_model()
    new_model = change_model(model)