Merging the same VGG16 model with different inputs - Keras

I am working on a classification problem in a project. The specific difficulty is that I have to use two different types of data. My classes are Car, Pedestrian, Truck and Cyclist. My dataset is composed of:
- Images coming from the camera: these are RGB images.
- Images obtained by projecting the Lidar point cloud (just 3D points) onto the 2D camera plane and encoding the pixels with depth & reflectance.
I have already managed to use both modalities to perform the classification task, using the Concatenate layer of the Keras API.
But I would like to use a more powerful CNN such as VGG. I used a pre-trained model and froze all layers except the last 4. I read the grayscale images as RGB because the pre-trained VGG16 model needs 3-channel input. Here is my code:
from keras.applications import VGG16
from keras.layers import Concatenate, Dense, BatchNormalization, Activation, Dropout
from keras.models import Model

# Load the VGG model
# Camera model
vgg_conv_C = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
# Depth model
vgg_conv_D = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))

# Freeze all layers except the last 4 in both networks
for layer in vgg_conv_D.layers[:-4]:
    layer.trainable = False
for layer in vgg_conv_C.layers[:-4]:
    layer.trainable = False

mergedModel = Concatenate()([vgg_conv_C.output, vgg_conv_D.output])
mergedModel = Dense(units=1024)(mergedModel)
mergedModel = BatchNormalization()(mergedModel)
mergedModel = Activation('relu')(mergedModel)
mergedModel = Dropout(0.5)(mergedModel)
mergedModel = Dense(units=4, activation='softmax')(mergedModel)

fused_model = Model([vgg_conv_C.input, vgg_conv_D.input], mergedModel)
The last line gives the following error:
ValueError: The name "block1_conv1" is used 2 times in the model. All
layer names should be unique.
Does anyone know how to handle this? Simply put, I want to run VGG16 on both types of images, get a feature vector for each modality, concatenate them, and add fully connected layers on top to predict the image's class. It works with non-pre-trained models. I can provide that code if needed.

Try this
# Camera model
vgg_conv_C = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
for layer in vgg_conv_C.layers:
    layer.name = layer.name + '_C'

# Depth model
vgg_conv_D = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
for layer in vgg_conv_D.layers:
    layer.name = layer.name + '_D'
In this way, you'd still be able to use two identical pre-trained networks.

As mentioned in the error,
ValueError: The name "block1_conv1" is used 2 times in the model. All
layer names should be unique.
Layer names must be unique within a model. So either use a Siamese network, i.e. a single VGG16 shared by both inputs, or, if you use two separate CNNs, make sure the layer names are unique: copy the network for the second configuration and rename its layers.
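A minimal sketch of the Siamese variant (assuming both modalities can share one set of weights, which is a modelling choice rather than a given):

from keras.applications import VGG16
from keras.layers import Input, Flatten, Concatenate, Dense
from keras.models import Model

# One shared VGG16 instance: applying the same model to both inputs ties the
# weights together and avoids duplicate layer names entirely.
shared_vgg = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))

input_C = Input((227, 227, 3))  # camera image
input_D = Input((227, 227, 3))  # depth/reflectance image
feat_C = Flatten()(shared_vgg(input_C))
feat_D = Flatten()(shared_vgg(input_D))

merged = Concatenate()([feat_C, feat_D])
output = Dense(4, activation='softmax')(merged)  # 4 classes as in the question
siamese_model = Model([input_C, input_D], output)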

The solution from IStackoverflowAndIKnowThings gives me the error:
AttributeError: Can't set the attribute "name", likely because it conflicts with an existing read-only @property of the object. Please choose a different name.
The following worked for me (see this post):
..
for layer in vgg_conv_C.layers:
    layer._name = layer._name + '_C'
..

Related

How to make DenseNet followed by Inception

I want to create a model with DenseNet201 followed by InceptionV3, based on the paper https://www.researchgate.net/publication/339653204_Detection_of_rice_plant_diseases_based_on_deep_transfer_learning on page 3251.
But I couldn't find a way to do this. I have an input of shape [448, 448, 3] and the base models:
model1 = DenseNet201(include_top=False, weights="imagenet", input_shape=[448,448,3])
model2 = InceptionResNetV2(weights='imagenet', include_top=False)
I tried combining them this way:
ip = model1(inputs)
ip = model2(ip)
according to the architecture in the paper, but this resulted in dimension incompatibility.
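One hedged way to bridge them (a sketch, not necessarily the paper's exact architecture): DenseNet201 without its top outputs 14×14 feature maps with 1920 channels, while InceptionResNetV2's stem expects 3-channel input of at least 75×75, hence the incompatibility. A 1×1 convolution plus upsampling can adapt the shapes; num_classes below is a placeholder for your dataset:

from keras.applications import DenseNet201, InceptionResNetV2
from keras.layers import Input, Conv2D, UpSampling2D, GlobalAveragePooling2D, Dense
from keras.models import Model

num_classes = 10  # placeholder: set to your dataset's class count

inputs = Input([448, 448, 3])
model1 = DenseNet201(include_top=False, weights="imagenet", input_shape=[448, 448, 3])
x = model1(inputs)      # (None, 14, 14, 1920)
x = Conv2D(3, 1)(x)     # 1x1 conv: collapse 1920 channels down to 3
x = UpSampling2D(8)(x)  # 14 * 8 = 112, above Inception's 75-pixel minimum
model2 = InceptionResNetV2(weights='imagenet', include_top=False, input_shape=(112, 112, 3))
x = model2(x)
x = GlobalAveragePooling2D()(x)
outputs = Dense(num_classes, activation='softmax')(x)
model = Model(inputs, outputs)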

Modify ResNet50 output layer for regression

I am trying to create a ResNet50 model for a regression problem, with an output value ranging from -1 to 1.
I omitted the classes argument, and in my preprocessing step I resize my images to (224, 224, 3).
I try to create the model with
def create_resnet(load_pretrained=False):
    if load_pretrained:
        weights = 'imagenet'
    else:
        weights = None

    # Get base model
    base_model = ResNet50(weights=weights)

    optimizer = Adam(lr=1e-3)
    base_model.compile(loss='mse', optimizer=optimizer)
    return base_model
and then create the model, print the summary, and use fit_generator to train:
history = model.fit_generator(batch_generator(X_train, y_train, 100, 1),
                              steps_per_epoch=300,
                              epochs=10,
                              validation_data=batch_generator(X_valid, y_valid, 100, 0),
                              validation_steps=200,
                              verbose=1,
                              shuffle=1)
I get an error though that says
ValueError: Error when checking target: expected fc1000 to have shape (1000,) but got array with shape (1,)
Looking at the model summary, this makes sense, since the final Dense layer has an output shape of (None, 1000)
fc1000 (Dense) (None, 1000) 2049000 avg_pool[0][0]
But I can't figure out how to modify the model. I've read through the Keras documentation and looked at several examples, but pretty much everything I see is for a classification model.
How can I modify the model so it is formatted properly for regression?
Your code throws the error because you're still using the original fully-connected top layer, which was trained to classify images into one of 1000 classes. To make the network work, you need to replace this top layer with your own, whose shape is compatible with your dataset and task.
Here is a small snippet I was using to create an ImageNet pre-trained model for the regression task (face landmarks prediction) with Keras:
from keras.applications import InceptionResNetV2
from keras.layers import Flatten, GlobalAveragePooling2D, GlobalMaxPooling2D, Dense
from keras.models import Model

NUM_OF_LANDMARKS = 136

def create_model(input_shape, top='flatten'):
    if top not in ('flatten', 'avg', 'max'):
        raise ValueError('unexpected top layer type: %s' % top)

    # connects base model with new "head"
    BottleneckLayer = {
        'flatten': Flatten(),
        'avg': GlobalAveragePooling2D(),
        'max': GlobalMaxPooling2D()
    }[top]

    base = InceptionResNetV2(input_shape=input_shape,
                             include_top=False,
                             weights='imagenet')

    x = BottleneckLayer(base.output)
    x = Dense(NUM_OF_LANDMARKS, activation='linear')(x)
    model = Model(inputs=base.inputs, outputs=x)
    return model
In your case, I guess you only need to replace InceptionResNetV2 with ResNet50. Essentially, you are creating a pre-trained model without top layers:
base = ResNet50(input_shape=input_shape, include_top=False)
And then attaching your custom layer on top of it:
x = Flatten()(base.output)
x = Dense(1, activation='tanh')(x)  # a single output squashed to [-1, 1], matching your target range
model = Model(inputs=base.inputs, outputs=x)
That's it.
You can also check this link from the Keras repository that shows how ResNet50 is constructed internally. I believe it will give you some insight into the functional API and layer replacement.
Also, I would say that regression and classification tasks are not that different when fine-tuning pre-trained ImageNet models. The type of task mostly comes down to your loss function and the top layer's activation function. Otherwise, you still have a fully-connected layer with N outputs, but they are interpreted in a different way.
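For instance, a minimal sketch reusing create_model from above (with ResNet50 swapped in, as noted) shows that only the head and the loss distinguish the two settings:

from keras.optimizers import Adam

# Same backbone either way: for regression, just pick a regression loss and a
# suitable output activation in the head.
model = create_model(input_shape=(224, 224, 3), top='avg')
model.compile(loss='mse', optimizer=Adam(lr=1e-3))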

How to do transfer-learning on our own models?

I am trying to apply transfer learning to my CNN model, and I am getting the error below.
model = model1(weights = "model1_weights", include_top=False)
—-
TypeError: __call__() takes exactly 2 arguments (1 given)
Thanks
If you are trying to do transfer learning with a custom model, the answer depends on the way you saved your model architecture (description) and weights.
1. If you saved the description and weights of the model in a single .h5 file:
You can easily load the model using Keras's load_model method.
from keras.models import load_model
model = load_model("model_path.h5")
2. If you saved the description and weights of the model in separate files (e.g. in .json and .h5 files respectively):
You can first load the model description from the json file and then load the model weights.
from keras.models import model_from_json

with open("path_to_json_file.json") as json_file:
    model = model_from_json(json_file.read())

model.load_weights("path_to_weights_file.h5")
After the old model is loaded, you can decide which layers to discard (usually the top fully connected layers) and which layers to freeze.
Let's assume you want to use the first five layers of the model without training them again, train the next three again, discard the remaining layers (assuming the network has more than eight layers), and add three fully connected layers after the last kept layer. This can be done as follows.
Freeze the first five layers:
for i in range(5):
    model.layers[i].trainable = False
Make the next three layers trainable (this can be skipped if all layers should stay trainable):
for i in range(5, 8):
    model.layers[i].trainable = True
Add three more layers:
ll = model.layers[7].output  # output of the last kept layer; everything after it is discarded
ll = Dense(32)(ll)
ll = Dense(64)(ll)
ll = Dense(num_classes, activation="softmax")(ll)
new_model = Model(inputs=model.input, outputs=ll)
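As a final step (a hedged addition, not part of the original answer), compile the new model and check the summary to confirm the freeze took effect:

new_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
new_model.summary()  # frozen weights appear under "Non-trainable params"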

Modify layers in resnet model

I am trying to train a ResNet50 model for an image classification problem. I have loaded the pretrained 'imagenet' weights before training the model on my dataset. I want to insert a layer (a mean subtraction layer) between the input layer and the first convolution layer.
import theano.tensor as T

model = ResNet50(weights='imagenet')

def mean_subtract(img):
    img = T.set_subtensor(img[:, 0, :, :], img[:, 0, :, :] - 123.68)
    img = T.set_subtensor(img[:, 1, :, :], img[:, 1, :, :] - 116.779)
    img = T.set_subtensor(img[:, 2, :, :], img[:, 2, :, :] - 103.939)
    return img / 255.0
I want to insert inputs = Lambda(mean_subtract, name='mean_subtraction')(inputs) right after the input layer and connect it to the first convolution layer of the ResNet model, without losing the saved weights.
How do I do that?
Thanks!
Quick answer (Seems better than adding the function to the model)
Use the preprocessing function as described here: preprocessing images generated using keras function ImageDataGenerator() to train resnet50 model
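A minimal sketch of that approach. Note that preprocessing_function receives a single image as a numpy array, not a symbolic tensor, so the subtraction is rewritten with numpy here (assuming the channels-first layout used in the question):

import numpy as np
from keras.preprocessing.image import ImageDataGenerator

# Applied to every image before it reaches the model, so the model stays untouched.
def mean_subtract_np(img):
    img = img - np.array([123.68, 116.779, 103.939]).reshape((3, 1, 1))
    return img / 255.0

datagen = ImageDataGenerator(preprocessing_function=mean_subtract_np)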
Long answer
Since your function doesn't change shapes, you can put it in an outer model without changing the ResNet model (modifying a model's internal layers is not so simple; I always try to assemble new models from parts of other models when needed).
resnet_model = ResNet50(weights='imagenet')

inputs = Input((224, 224, 3))
# It seems you're using channels_first, i.e. (3, 224, 224), instead;
# choose based on your "data_format", which by default is channels_last.

outputs = Lambda(mean_subtract)(inputs)  # the output_shape argument is not necessary with TensorFlow
outputs = resnet_model(outputs)

model = Model(inputs, outputs)

Generative Adversarial Networks (GANs) in Keras - creating the combined model

I'm trying to create a pretty simple GAN model, and I'm not sure how to combine the generator and the discriminator for training the generator.
from keras import optimizers
from keras.layers import Input, Dense
from keras.models import Sequential, Model
import numpy as np

def build_generator(input_dim=10, output_dim=40, hidden_dim=28):
    model = Sequential()
    model.add(Dense(hidden_dim, input_dim=input_dim, activation='sigmoid', kernel_initializer="random_uniform"))
    model.add(Dense(output_dim, activation='sigmoid', kernel_initializer="random_uniform"))
    return model

def build_discriminator(input_dim=40, hidden_dim=28, output_dim=50):
    input_d = Input(shape=(input_dim,))
    encoded = Dense(hidden_dim, activation='sigmoid', kernel_initializer="random_uniform")(input_d)
    decoded = Dense(output_dim, activation='sigmoid', kernel_initializer="random_uniform")(encoded)
    x = Dense(1, activation='relu')(encoded)
    y = Dense(1, activation='sigmoid')(encoded)
    model = Model(inputs=input_d, outputs=[decoded, x, y])
    return model

sgd = optimizers.SGD(lr=0.1)
generator = build_generator(10, 100, 70)
discriminator = build_discriminator(100, 60, 80)
generator.compile(loss='mean_squared_error', optimizer=sgd)

discriminator.trainable = True
discriminator.compile(loss='mean_squared_error', optimizer=sgd)
discriminator.trainable = False
Now I'm not sure how to combine the two, so that the discriminator receives the generator's output and backpropagation flows from the discriminator back into the generator.
For this, the best thing to do is to use the functional Model API. It is suited for more complex models, and accepts branches, concatenations, etc.
(It's still possible, in this specific case, to use sequential models, but using the functional API has always sounded better to me, for the freedom it gives and for further experiments on the models.)
So, you may keep your two sequential models. All you have to do is build a third model that contains them both.
generator = build_generator(....) #don't create a new generator, use the one you have.
discriminator = build_discriminator(....)
Now, a functional API model has its input shape defined like this:
inputTensor = Input(inputShape) #inputShape must be the same as in generator
And we work by passing inputs to layers and getting outputs:
#Getting the output of the generator given our input tensor:
genOut = generator(inputTensor) #you call a model just like you call a layer
#and we pass the generator's output to the discriminator, getting its output:
discOut = discriminator(genOut)
Finally, we create the actual model by defining its start and end points:
GAN = Model(inputTensor, discOut)
Use the model.layers[i].trainable attribute before compiling to define which layers will be trainable in each of the models.
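A minimal sketch of that last step (assuming the models defined in the question):

# Freeze the discriminator inside the combined model so that training GAN
# updates only the generator; the standalone discriminator was compiled while
# trainable, so it can still be trained on its own.
discriminator.trainable = False
GAN = Model(inputTensor, discOut)
GAN.compile(loss='mean_squared_error', optimizer=sgd)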
Combining the generator & discriminator models can, indeed, be quite confusing. I found the repository below, which demonstrates quite well, with detailed code, how to construct multiple GAN architectures in Keras:
https://github.com/kochlisGit/Keras-GAN
