How to use Keras Applications for a multi-output model?
I am trying to use VGG16 for a multi-output model but get a ValueError: Input 0 of layer block1_conv1 is incompatible with the layer: expected axis -1 of input shape to have value 3 but received input with shape (None, 64, 64, 1). I am a bit lost and can't seem to find anything on multi-output models for VGG16 or any other pre-trained Keras application.
I am using this as a guide: Transfer Learning in Keras with Computer Vision
# load model without classifier layers
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(IMG_HEIGHT, IMG_WIDTH, 3))

# add new classifier layers
sex_x = base_model.output
sex_x = GlobalAveragePooling2D()(sex_x)
sex_x = Dropout(0.5)(sex_x)
output_sex = Dense(2, activation='sigmoid')(sex_x)

weight_x = base_model.output
weight_x = GlobalAveragePooling2D()(weight_x)
weight_x = Dropout(0.5)(weight_x)
output_weight = Dense(1, activation='linear')(weight_x)

# define new model
model = Model(inputs=base_model.inputs, outputs=[output_sex, output_weight])
model.compile(optimizer='adam', loss=['binary_crossentropy', 'mae'], metrics=['accuracy'])

history = model.fit(x_train, [y_train[:, 0], y_train[:, 1]],
                    validation_data=(x_test, [y_test[:, 0], y_test[:, 1]]),
                    epochs=10, batch_size=128, shuffle=True)
Any ideas as to what I am doing wrong?
According to this error, your model expects 3-channel RGB input, but your data appears to have 1 channel (grayscale). Make sure your data consists of RGB images.
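If your arrays are grayscale with a trailing channel dimension, one simple fix (a sketch, assuming x_train and x_test are NumPy arrays of shape (N, 64, 64, 1)) is to repeat the single channel three times before feeding them to the model:
import numpy as np

# Repeat the grayscale channel along the last axis so each image
# becomes (64, 64, 3), matching VGG16's expected 3-channel input.
x_train = np.repeat(x_train, 3, axis=-1)
x_test = np.repeat(x_test, 3, axis=-1)
If you load the data through a tf.data pipeline instead, tf.image.grayscale_to_rgb performs the same conversion on tensors.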
I am trying to build a basic NN in Keras using multiple input layers, some of which are connected to Embedding layers. I am using the data from Kaggle's Housing Prices competition.
I use a scikit-learn Pipeline to transform the data; for the embedding layers, I transform the categorical features with label encoding. Here is the code I am using to build the model:
##################### CREATE INPUT AND EMBEDDING LAYERS FOR CATEGORICAL FEATURES #####################
# Input and embedding layer for MSSubClass feature
no_of_unique_categories = X_train['MSSubClass'].nunique() + 1
embedding_size = int(max(min(np.ceil(no_of_unique_categories / 2), 50), 2))
input_cat_layer_1 = Input(shape=(1,))
embedding_layer_1 = Embedding(
    input_dim=no_of_unique_categories,
    output_dim=embedding_size,
    input_length=1)(input_cat_layer_1)
flattened_layer_1 = Flatten()(embedding_layer_1)

# Input and embedding layer for MSZoning feature
no_of_unique_categories = X_train['MSZoning'].nunique() + 1
embedding_size = int(max(min(np.ceil(no_of_unique_categories / 2), 50), 2))
input_cat_layer_2 = Input(shape=(1,))
embedding_layer_2 = Embedding(
    input_dim=no_of_unique_categories,
    output_dim=embedding_size,
    input_length=1)(input_cat_layer_2)
flattened_layer_2 = Flatten()(embedding_layer_2)

# Input layer for all the numerical features
no_of_numeric_features = len(numeric_columns)
numeric_input = Input(shape=(no_of_numeric_features,))

##################### BUILD THE MODEL #####################
concatenate = Concatenate()([numeric_input, flattened_layer_1, flattened_layer_2])
hidden = Dense(110, activation='relu')(concatenate)
output = Dense(1)(hidden)
model = Model(inputs=[numeric_input, input_cat_layer_1, input_cat_layer_2], outputs=output, name='final_model')
print(model.summary())
plot_model(model, show_shapes=True, show_layer_names=True, to_file='model_structure.png')

optimizer = Adam(learning_rate=0.001)
model.compile(loss='mse', optimizer=optimizer, metrics=['RootMeanSquaredError'])

##################### TRANSFORM DATA TO USE IN MODEL #####################
X_train_transformed = preprocessor.fit_transform(X_train)
X_test_transformed = preprocessor.transform(X_test)

model.fit(X_train_transformed, y_train, epochs=200, validation_data=(X_test_transformed, y_test))
mse_test = model.evaluate(X_test_transformed, y_test)
predictions = model.predict(X_test_transformed)
mse = mean_squared_error(y_test, predictions)
rmse = np.sqrt(mse)
The problem arises when I try to fit the model to the training data; I receive the following error:
ValueError: Layer "final_model" expects 3 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 37) dtype=float32>]
I cannot figure out why this is happening and what I can do to fix it.
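The error says the model declares three Input layers but fit received a single array. A minimal sketch of one way to match the expected signature, assuming the preprocessor emits the numeric columns first followed by the two label-encoded categorical columns (that column order is an assumption):
# Split the single transformed matrix into the three inputs the model declares.
# Column layout is an assumption: numeric features first, then the two
# label-encoded categorical columns.
n = no_of_numeric_features
train_inputs = [X_train_transformed[:, :n],     # numeric_input
                X_train_transformed[:, n],      # input_cat_layer_1 (MSSubClass)
                X_train_transformed[:, n + 1]]  # input_cat_layer_2 (MSZoning)
test_inputs = [X_test_transformed[:, :n],
               X_test_transformed[:, n],
               X_test_transformed[:, n + 1]]

model.fit(train_inputs, y_train, epochs=200, validation_data=(test_inputs, y_test))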
I am trying to create image embeddings for deep ranking using a triplet loss function. The idea is to take a pretrained CNN (e.g. ResNet50 or VGG16), remove the FC layers, and add an L2 normalization step to obtain unit vectors that can then be compared via a distance metric (e.g. cosine similarity). As far as I understand, the vectors that come out of a pretrained CNN are not optimal, but they are a good start. By adding the triplet loss function we can re-train the network to keep similar pictures 'close' to each other and different pictures 'far' apart in the feature space. Inspired by this notebook, I tried to set up the following code, but I get an error: ValueError: The name "conv1_pad" is used 3 times in the model. All layer names should be unique.
# Anchor, Positive and Negative are numpy arrays of size (200, 256, 256, 3); same for the test images
pic_size = 256

def shared_dnn(inp):
    base_model = ResNet50(weights='imagenet', include_top=False,
                          input_shape=(3, pic_size, pic_size),
                          input_tensor=inp)
    x = base_model.output
    x = Flatten()(x)
    x = Lambda(lambda x: K.l2_normalize(x, axis=1))(x)
    for layer in base_model.layers[15:]:
        layer.trainable = False
    return x

anchor_input = Input((3, pic_size, pic_size), name='anchor_input')
positive_input = Input((3, pic_size, pic_size), name='positive_input')
negative_input = Input((3, pic_size, pic_size), name='negative_input')

encoded_anchor = shared_dnn(anchor_input)
encoded_positive = shared_dnn(positive_input)
encoded_negative = shared_dnn(negative_input)

merged_vector = concatenate([encoded_anchor, encoded_positive, encoded_negative], axis=-1, name='merged_layer')
model = Model(inputs=[anchor_input, positive_input, negative_input], outputs=merged_vector)
# ValueError: The name "conv1_pad" is used 3 times in the model. All layer names should be unique.
model.compile(loss=triplet_loss, optimizer=adam_optim)
model.fit([Anchor, Positive, Negative],
          y=Y_dummy,
          validation_data=([Anchor_test, Positive_test, Negative_test], Y_dummy2),
          batch_size=512, epochs=500)
I am new to Keras and not quite sure how to solve this. The author of the notebook linked above builds his own CNN from scratch, but I would like to build on ResNet (or VGG16). How can I configure ResNet50 to use a triplet loss function? (The link above also includes the source code for the triplet loss function.)
In your ResNet50 definition, you've written
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=(3, pic_size, pic_size), input_tensor=inp)
Remove the input_tensor argument and pass a plain shape tuple to input_shape. Since you're using the TensorFlow backend and your arrays have shape (256, 256, 3), the shape must be channels-last, i.e. (pic_size, pic_size, 3) rather than (3, pic_size, pic_size). Also build the ResNet50 base exactly once and reuse it for all three inputs: instantiating it once per input creates three copies of every layer, which is where the three layers named conv1_pad come from.
img_shape = (256, 256, 3)

# Build the base network once so its layers (and layer names) are shared
base_model = ResNet50(weights='imagenet', include_top=False, input_shape=img_shape)
for layer in base_model.layers[15:]:
    layer.trainable = False

def shared_dnn(inp):
    x = base_model(inp)
    x = Flatten()(x)
    x = Lambda(lambda x: K.l2_normalize(x, axis=1))(x)
    return x

anchor_input = Input(img_shape, name='anchor_input')
positive_input = Input(img_shape, name='positive_input')
negative_input = Input(img_shape, name='negative_input')

encoded_anchor = shared_dnn(anchor_input)
encoded_positive = shared_dnn(positive_input)
encoded_negative = shared_dnn(negative_input)

merged_vector = concatenate([encoded_anchor, encoded_positive, encoded_negative],
                            axis=-1, name='merged_layer')
model = Model(inputs=[anchor_input, positive_input, negative_input], outputs=merged_vector)

model.compile(loss=triplet_loss, optimizer=adam_optim)
model.fit([Anchor, Positive, Negative],
          y=Y_dummy,
          validation_data=([Anchor_test, Positive_test, Negative_test], Y_dummy2),
          batch_size=512, epochs=500)
I have a pre-trained InceptionV3 model, and I want to convert this functional model to a sequential model, but an error occurred. I don't know why. How can I do this?
model_inceptionv3_conv = InceptionV3(weights='imagenet', include_top=False)
for layer in model_inceptionv3_conv.layers:
    layer.trainable = False

x = model_inceptionv3_conv.output
x = GlobalAveragePooling2D()(x)
predictions = Dense(NB_CLASSES, activation='sigmoid', name='predictions')(x)
my_model = Model(inputs=model_inceptionv3_conv.input, outputs=predictions)
seq = Sequential(my_model.layers)
The error:
ValueError: Input 0 is incompatible with layer conv2d_7: expected axis -1 of input shape to have value 192 but got shape (None, None, None, 64)
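The cause is that InceptionV3 is not a linear stack of layers: its Inception blocks branch and merge, so Sequential(my_model.layers) replays the layers one after another in list order and feeds a branch layer an input it was never built for, which is exactly what the shape mismatch reports. If a Sequential wrapper is really needed, one sketch of a workaround is to treat the whole functional model as a single layer:
# The functional graph stays intact; Sequential wraps the entire model
# as one layer instead of replaying its layers in list order.
seq = Sequential([my_model])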
I am going to fine-tune an InceptionV3 model on my self-defined dataset. Unfortunately, when I call model.fit to train, I get the error below:
ValueError: Error when checking target: expected dense_6 to have shape (4,) but got array with shape (1,)
First, I load my own dataset as training_data, which contains pairs of images and their corresponding labels. Then I use the code below to convert them into arrays (img_new and label_new) so that they are compatible with Keras's expected inputs for both data and labels.
i = 0
for img, label in training_data:
    img_new[i, :, :, :] = img
    label_new[i, :] = label
    i = i + 1
Second, I fine-tune the InceptionV3 model as shown below.
InceptionV3_model = keras.applications.inception_v3.InceptionV3(include_top=False,
                                                                weights='imagenet',
                                                                input_tensor=None,
                                                                input_shape=None,
                                                                pooling=None,
                                                                classes=1000)
# InceptionV3_model.summary()

# add a global spatial average pooling layer
x = InceptionV3_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer -- let's say we have 4 classes
predictions = Dense(4, activation='softmax')(x)
# this is the model we will train
model = Model(inputs=InceptionV3_model.input, outputs=predictions)

# Transfer Learning: freeze the first 311 layers, train the rest
for layer in model.layers[:311]:
    layer.trainable = False
for layer in model.layers[311:]:
    layer.trainable = True

from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.001, momentum=0.9), loss='categorical_crossentropy')
model.fit(x=X_train, y=y_train, batch_size=3, epochs=3, validation_split=0.2)
model.save_weights('first_try.h5')
Does anyone have an idea of what is going wrong when training with model.fit?
Sincere thanks for your kind help.
The error was caused by my labels being integers: I had to compile with sparse_categorical_crossentropy, which is meant for integer labels, instead of categorical_crossentropy, which is used for one-hot encoded labels.
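Concretely, that is a one-line change to the compile call shown above:
# sparse_categorical_crossentropy expects integer labels of shape (N,)
# with values 0..3, rather than one-hot vectors of shape (N, 4).
model.compile(optimizer=SGD(lr=0.001, momentum=0.9),
              loss='sparse_categorical_crossentropy')
Alternatively, the labels could be one-hot encoded with keras.utils.to_categorical and categorical_crossentropy kept.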
Sincere thanks to @Amir for the help. :-)
Hello, I have a question about Keras.
I want to implement a network that runs the same CNN model on two input images and feeds the two resulting feature vectors into a Dense model, for example:
def cnn_model():
    input = Input(shape=(None, None, 3))
    x = Conv2D(8, (3, 3), strides=(1, 1))(input)
    x = GlobalAvgPool2D()(x)
    model = Model(input, x)
    return model

def fc_model(cnn1, cnn2):
    input_1 = cnn1.output
    input_2 = cnn2.output
    input = concatenate([input_1, input_2])
    x = Dense(1, input_shape=(None, 16))(input)
    x = Activation('sigmoid')(x)
    model = Model([cnn1.input, cnn2.input], x)
    return model

def main():
    cnn1 = cnn_model()
    cnn2 = cnn_model()
    model = fc_model(cnn1, cnn2)
    model.compile(optimizer='adam', loss='mean_squared_error')
    model.fit(x=[image1, image2], y=[1.0, 1.0], batch_size=1, epochs=1)
I want to implement a model like this and train it, but I got the error below:
'All layer names should be unique'
What I actually want is to use only one CNN model as a feature extractor and then use the two extracted features to predict one float value in the range 0.0 to 1.0.
So the whole system is: extract features from two images with the same CNN model, then provide those features to a Dense model to get one floating-point value.
Please help me implement this system and show how to train it.
Thank you
See the section of the Keras documentation on shared layers:
https://keras.io/getting-started/functional-api-guide/
A code snippet from the documentation above demonstrating this:
# This layer can take as input a matrix
# and will return a vector of size 64
shared_lstm = LSTM(64)
# When we reuse the same layer instance
# multiple times, the weights of the layer
# are also being reused
# (it is effectively *the same* layer)
encoded_a = shared_lstm(tweet_a)
encoded_b = shared_lstm(tweet_b)
# We can then concatenate the two vectors:
merged_vector = keras.layers.concatenate([encoded_a, encoded_b], axis=-1)
# And add a logistic regression on top
predictions = Dense(1, activation='sigmoid')(merged_vector)
# We define a trainable model linking the
# tweet inputs to the predictions
model = Model(inputs=[tweet_a, tweet_b], outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit([data_a, data_b], labels, epochs=10)
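Applied to your two-image setup, the same pattern would look roughly like this (a sketch that reuses the cnn_model architecture from your question as a single shared instance):
# Build the CNN once; calling the same Model instance on both inputs
# shares its layers and weights, so no duplicate layer names arise.
shared_cnn = cnn_model()

input_1 = Input(shape=(None, None, 3))
input_2 = Input(shape=(None, None, 3))
feat_1 = shared_cnn(input_1)
feat_2 = shared_cnn(input_2)

merged = concatenate([feat_1, feat_2])
output = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=[input_1, input_2], outputs=output)
model.compile(optimizer='adam', loss='mean_squared_error')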