How to make DenseNet followed by Inception - keras

I want to create a model with DenseNet201 followed by InceptionV3, based on the paper https://www.researchgate.net/publication/339653204_Detection_of_rice_plant_diseases_based_on_deep_transfer_learning (page 3251), but I couldn't find a way to do this. I have an input of shape [448, 448, 3] and the base models:
model1 = DenseNet201(include_top=False, weights="imagenet", input_shape=[448,448,3])
model2 = InceptionResNetV2(weights='imagenet', include_top=False)
I tried combining them this way:
ip = model1(inputs)
ip = model2(ip)
according to the architecture in the paper, but this resulted in a dimension incompatibility error.
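For reference, here is a minimal runnable sketch of that chaining, assuming the incompatibility comes from InceptionResNetV2 expecting a 3-channel input while DenseNet201 (with include_top=False on a 448x448 image) emits 14x14x1920 feature maps. The 1x1 Conv2D projection, the resize target, and the 10-class head are placeholders of mine, not necessarily what the paper does:
import tensorflow as tf
from tensorflow.keras.applications import DenseNet201, InceptionResNetV2
from tensorflow.keras.layers import Input, Conv2D, Lambda, GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(448, 448, 3))
model1 = DenseNet201(include_top=False, weights="imagenet", input_shape=(448, 448, 3))
model2 = InceptionResNetV2(include_top=False, weights="imagenet")

x = model1(inputs)                                        # (None, 14, 14, 1920)
x = Conv2D(3, 1, activation="relu")(x)                    # project 1920 channels down to 3
x = Lambda(lambda t: tf.image.resize(t, (139, 139)))(x)   # InceptionResNetV2 needs at least 75x75
x = model2(x)                                             # (None, 3, 3, 1536)
x = GlobalAveragePooling2D()(x)
outputs = Dense(10, activation="softmax")(x)              # hypothetical number of classes
model = Model(inputs, outputs)
Whether a 1x1 projection plus resize is the right bridge depends on what the paper actually stacks between the two backbones.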

Related

Create unsupervised embedding model in keras?

I want to create an autoencoder with the following architecture:
path_source_token_input = Input(shape=(MAX_CONTEXTS,), dtype=tf.int32, name='source_token_input')
path_input = Input(shape=(MAX_CONTEXTS,), dtype=tf.int32, name='path_input')
path_target_token_input = Input(shape=(MAX_CONTEXTS,), dtype=tf.int32, name='target_token_input')
paths_embedded = Embedding(PATH_SIZE, DEFAULT_EMBEDDINGS_SIZE, name='path_embedding')(path_input)
token_embedding_shared_layer = Embedding(TOKEN_SIZE, DEFAULT_EMBEDDINGS_SIZE, name='token_embedding')
path_source_token_embedded = token_embedding_shared_layer(path_source_token_input)
path_target_token_embedded = token_embedding_shared_layer(path_target_token_input)
context_embedded = Concatenate()([path_source_token_embedded, paths_embedded, path_target_token_embedded]) # --> this up to now, is the output of the STANDALONE embedding model
-------- SPLIT HERE? ------
context_after_dense = TimeDistributed(Dense(CODE_VECTOR_SIZE, use_bias=False, activation='tanh'))(context_embedded) # in short, this layer probably has to stay
encoded = LSTM(100, activation='relu', input_shape=context_after_dense.shape)(context_after_dense)
decoded = RepeatVector(MAX_CONTEXTS)(encoded)
decoded = LSTM(100, activation='relu', return_sequences=True)(decoded)
result = TimeDistributed(Dense(1), name='PROBLEM_is_here')(decoded) # this seems to be some trick according to https://github.com/keras-team/keras/issues/10753, so probably don't remove
inputs = (path_source_token_input, path_input, path_target_token_input)
model = tf.keras.Model(inputs=inputs, outputs=result)
So far I have learned that it is impossible to implement an inverse embedding layer in the decoder, so naturally my conclusion is to split my network in two: one part to generate the concatenated embeddings of the input, and a second part that would be the autoencoder itself, taking the concatenated embeddings (the output of the first part) as input. Now my question is: is it possible to create an unsupervised embedding model in Keras, or anywhere else for that matter? My data is unlabeled, and the point of my final neural network would be to create clusters of said unlabeled data.
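A minimal sketch of that two-part split, with placeholder values standing in for the question's constants and the TimeDistributed tanh layer omitted for brevity: the first model produces the concatenated context embeddings, and the second is the autoencoder trained purely on reconstruction loss, so no labels are needed.
import tensorflow as tf
from tensorflow.keras.layers import (Input, Embedding, Concatenate, LSTM,
                                     RepeatVector, TimeDistributed, Dense)
from tensorflow.keras.models import Model

# placeholder values, not the question's real constants
MAX_CONTEXTS, TOKEN_SIZE, PATH_SIZE, DEFAULT_EMBEDDINGS_SIZE = 200, 10000, 5000, 128

# Part 1: standalone embedding model (everything above the split marker)
src_in = Input(shape=(MAX_CONTEXTS,), dtype=tf.int32, name='source_token_input')
path_in = Input(shape=(MAX_CONTEXTS,), dtype=tf.int32, name='path_input')
tgt_in = Input(shape=(MAX_CONTEXTS,), dtype=tf.int32, name='target_token_input')

token_emb = Embedding(TOKEN_SIZE, DEFAULT_EMBEDDINGS_SIZE, name='token_embedding')
path_emb = Embedding(PATH_SIZE, DEFAULT_EMBEDDINGS_SIZE, name='path_embedding')
context = Concatenate()([token_emb(src_in), path_emb(path_in), token_emb(tgt_in)])
embedding_model = Model([src_in, path_in, tgt_in], context, name='embedding_model')

# Part 2: autoencoder whose input is the embedding model's output
ae_in = Input(shape=(MAX_CONTEXTS, 3 * DEFAULT_EMBEDDINGS_SIZE))
encoded = LSTM(100, activation='relu')(ae_in)
decoded = RepeatVector(MAX_CONTEXTS)(encoded)
decoded = LSTM(100, activation='relu', return_sequences=True)(decoded)
reconstructed = TimeDistributed(Dense(3 * DEFAULT_EMBEDDINGS_SIZE))(decoded)
autoencoder = Model(ae_in, reconstructed, name='autoencoder')
autoencoder.compile(optimizer='adam', loss='mse')  # unsupervised: the target is the input itself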

Merging same vgg16 model but with different inputs

I am working on a classification problem in a project. The specificity of my problem is that I have to use two different types of data to manage it. My classes are Car, Pedestrian, Truck and Cyclist. My dataset is composed of:
- Images coming from the camera: RGB images.
- Images obtained by projecting the Lidar point cloud (just 3D points) onto the 2D camera plane and encoding the pixels using depth & reflectance.
I already managed to use both modalities to perform the classification task by using the Concatenate function of the Keras API.
But what I would like to do is to use a more powerful CNN like VGG. I used a pre-trained model and froze all layers except the last 4. I read the grayscale image as RGB because the VGG16 pre-trained model needs a 3-channel input. Here is my code:
from keras.applications import VGG16
from keras.layers import Concatenate, Dense, BatchNormalization, Activation, Dropout
from keras.models import Model

#Load the VGG model
#Camera Model
vgg_conv_C = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
#Depth Model
vgg_conv_D = VGG16(weights='imagenet', include_top=False, input_shape= (227, 227, 3))
for layer in vgg_conv_D.layers[:-4]:
    layer.trainable = False
for layer in vgg_conv_C.layers[:-4]:
    layer.trainable = False
mergedModel = Concatenate()([vgg_conv_C.output,vgg_conv_D.output])
mergedModel = Dense(units = 1024)(mergedModel)
mergedModel = BatchNormalization()(mergedModel)
mergedModel = Activation('relu')(mergedModel)
mergedModel = Dropout(0.5)(mergedModel)
mergedModel = Dense(units = 4,activation = 'softmax')(mergedModel)
fused_model = Model([vgg_conv_C.input, vgg_conv_D.input], mergedModel)
The last line gives the following error:
ValueError: The name "block1_conv1" is used 2 times in the model. All layer names should be unique.
Does anyone know how to handle this? To put it simply, I just want to use VGG16 on both types of images, get the feature vectors for each modality, concatenate them, and add fully connected layers on top to predict the image's class. It works with non-pre-trained models. I can provide the code if needed.
Try this:
#Camera Model
vgg_conv_C = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
for layer in vgg_conv_C.layers:
    layer.name = layer.name + str('_C')

#Depth Model
vgg_conv_D = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
for layer in vgg_conv_D.layers:
    layer.name = layer.name + str('_D')
In this way, you'd still be able to use two identical pre-trained networks.
As mentioned in the error,
ValueError: The name "block1_conv1" is used 2 times in the model. All layer names should be unique.
Therefore, either use a Siamese network, or, if you use a dual CNN, remember that the layer names in the network must be unique. It is best to copy the network for the second configuration and rename its layers.
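For illustration, here is a minimal sketch of the Siamese variant, where a single shared VGG16 instance is applied to both inputs, so no layer names clash. The Flatten layers and the head sizes mirror the question's code but are still assumptions, and weight sharing between the camera and depth branches may or may not suit the task:
from keras.applications import VGG16
from keras.layers import Input, Flatten, Concatenate, Dense, BatchNormalization, Activation, Dropout
from keras.models import Model

shared_vgg = VGG16(weights='imagenet', include_top=False, input_shape=(227, 227, 3))
for layer in shared_vgg.layers[:-4]:
    layer.trainable = False

input_C = Input((227, 227, 3))  # camera image
input_D = Input((227, 227, 3))  # depth/reflectance image

feat_C = Flatten()(shared_vgg(input_C))  # same weights applied to both inputs
feat_D = Flatten()(shared_vgg(input_D))

x = Concatenate()([feat_C, feat_D])
x = Dense(1024)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Dropout(0.5)(x)
out = Dense(4, activation='softmax')(x)

model = Model([input_C, input_D], out)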
The solution from IStackoverflowAndIKnowThings gives me the error:
AttributeError: Can't set the attribute "name", likely because it conflicts with an existing read-only @property of the object. Please choose a different name.
The following worked for me (see this post):
..
for layer in vgg_conv_C.layers:
    layer._name = layer._name + str('_C')
..

Modify ResNet50 output layer for regression

I am trying to create a ResNet50 model for a regression problem, with an output value ranging from -1 to 1.
I omitted the classes argument, and in my preprocessing step I resize my images to 224,224,3.
I try to create the model with
from keras.applications import ResNet50
from keras.optimizers import Adam

def create_resnet(load_pretrained=False):
    if load_pretrained:
        weights = 'imagenet'
    else:
        weights = None
    # Get base model
    base_model = ResNet50(weights=weights)
    optimizer = Adam(lr=1e-3)
    base_model.compile(loss='mse', optimizer=optimizer)
    return base_model
and then create the model, print the summary and use the fit_generator to train
history = model.fit_generator(batch_generator(X_train, y_train, 100, 1),
                              steps_per_epoch=300,
                              epochs=10,
                              validation_data=batch_generator(X_valid, y_valid, 100, 0),
                              validation_steps=200,
                              verbose=1,
                              shuffle=1)
I get an error though that says
ValueError: Error when checking target: expected fc1000 to have shape (1000,) but got array with shape (1,)
Looking at the model summary, this makes sense, since the final Dense layer has an output shape of (None, 1000)
fc1000 (Dense) (None, 1000) 2049000 avg_pool[0][0]
But I can't figure out how to modify the model. I've read through the Keras documentation and looked at several examples, but pretty much everything I see is for a classification model.
How can I modify the model so it is formatted properly for regression?
Your code is throwing the error because you're using the original fully-connected top layer, which was trained to classify images into one of 1000 classes. To make the network work, you need to replace this top layer with your own, with a shape compatible with your dataset and task.
Here is a small snippet I was using to create an ImageNet pre-trained model for the regression task (face landmarks prediction) with Keras:
from keras.applications import InceptionResNetV2
from keras.layers import Flatten, GlobalAveragePooling2D, GlobalMaxPooling2D, Dense
from keras.models import Model

NUM_OF_LANDMARKS = 136

def create_model(input_shape, top='flatten'):
    if top not in ('flatten', 'avg', 'max'):
        raise ValueError('unexpected top layer type: %s' % top)

    # connects base model with new "head"
    BottleneckLayer = {
        'flatten': Flatten(),
        'avg': GlobalAveragePooling2D(),
        'max': GlobalMaxPooling2D()
    }[top]

    base = InceptionResNetV2(input_shape=input_shape,
                             include_top=False,
                             weights='imagenet')

    x = BottleneckLayer(base.output)
    x = Dense(NUM_OF_LANDMARKS, activation='linear')(x)

    model = Model(inputs=base.inputs, outputs=x)
    return model
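For example, it can be built and compiled like this (the input shape, loss, and optimizer here are assumptions, not part of the original snippet):
model = create_model((224, 224, 3), top='avg')
model.compile(optimizer='adam', loss='mse')
model.summary()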
In your case, I guess you only need to replace InceptionResNetV2 with ResNet50. Essentially, you are creating a pre-trained model without top layers:
base = ResNet50(input_shape=input_shape, include_top=False)
And then attaching your custom layer on top of it:
x = Flatten()(base.output)
x = Dense(NUM_OF_LANDMARKS, activation='sigmoid')(x)
model = Model(inputs=base.inputs, outputs=x)
That's it.
You can also check this link from the Keras repository that shows how ResNet50 is constructed internally. I believe it will give you some insight into the functional API and layer replacement.
Also, I would say that regression and classification tasks are not that different when it comes to fine-tuning pre-trained ImageNet models. The type of task mostly depends on your loss function and the top layer's activation function. Otherwise, you still have a fully-connected layer with N outputs, but they are interpreted in a different way.
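Put together for the question's case (a single output ranging from -1 to 1), a hedged sketch might look like the following; the tanh activation and mse loss are my assumptions based on the stated output range, not something the question specifies:
from keras.applications import ResNet50
from keras.layers import Flatten, Dense
from keras.models import Model
from keras.optimizers import Adam

base = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
x = Flatten()(base.output)
out = Dense(1, activation='tanh')(x)   # single regression value in [-1, 1]
model = Model(inputs=base.inputs, outputs=out)
model.compile(loss='mse', optimizer=Adam(lr=1e-3))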

How to connect custom Keras layer with multiple outputs

I defined a custom Keras layer custom_layer with two outputs: output_1 and output_2. Next, I want two independent layers A and B to connect to output_1 and output_2 respectively. How to implement this kind of network?
Using the Keras functional API you can create any network architecture.
In your case a possible solution is:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_layer = Input(shape=(100,1))
custom_layer = Dense(10)(input_layer)

# layer A model
layer_a = Dense(10, activation='relu')(custom_layer)
output1 = Dense(1, activation='sigmoid')(layer_a)

# layer B model
layer_b = Dense(10, activation='relu')(custom_layer)
output2 = Dense(1, activation='sigmoid')(layer_b)

# define model input and output
model = Model(inputs=input_layer, outputs=[output1, output2])
If the custom layer has two output tensors (i.e. it returns a list of output tensors) when applied on one input, then:
custom_layer_output = CustomLayer(...)(input_tensor)
layer_a_output = LayerA(...)(custom_layer_output[0])
layer_b_output = LayerB(...)(custom_layer_output[1])
But if it is applied on two different input tensors, then:
custom_layer = CustomLayer(...)
out1 = custom_layer(input1)
out2 = custom_layer(input2)
layer_a_output = LayerA(...)(out1)
layer_b_output = LayerB(...)(out2)
# alternative way
layer_a_output = LayerA(...)(custom_layer.get_output_at(0))
layer_b_output = LayerB(...)(custom_layer.get_output_at(1))
Keras supports having multiple output tensors in a custom layer. There is a merge request for this, and the documentation will be updated soon.
The basic idea is to work with lists. Everything you have to return from your custom layer (such as output tensors and shapes) you have to return as lists.
If you implement your custom layer in the right way, the rest is simple:
output_1, output_2 = custom_layer()(input_layer)
layer_a_output = layer_a()(output_1)
layer_b_output = layer_b()(output_2)
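For reference, here is a minimal sketch of a custom layer that returns two output tensors; the class name, the relu/tanh split, and the Dense heads are placeholders of mine, not the question's actual layer:
import tensorflow as tf
from tensorflow.keras.layers import Layer, Input, Dense
from tensorflow.keras.models import Model

class TwoOutputLayer(Layer):
    """Hypothetical layer producing two tensors from a single input."""
    def call(self, inputs):
        # returning a list makes Keras treat these as two separate output tensors
        return [tf.nn.relu(inputs), tf.nn.tanh(inputs)]

    def compute_output_shape(self, input_shape):
        return [input_shape, input_shape]

inp = Input(shape=(100, 1))
output_1, output_2 = TwoOutputLayer()(inp)
layer_a_output = Dense(10)(output_1)
layer_b_output = Dense(10)(output_2)
model = Model(inp, [layer_a_output, layer_b_output])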

Modify layers in resnet model

I am trying to train a resnet50 model for an image classification problem. I have loaded the pretrained 'imagenet' weights before training the model on my dataset. I want to insert a layer (a mean subtraction layer) between the input layer and the first convolution layer.
import theano.tensor as T

model = ResNet50(weights='imagenet')

def mean_subtract(img):
    img = T.set_subtensor(img[:,0,:,:], img[:,0,:,:] - 123.68)
    img = T.set_subtensor(img[:,1,:,:], img[:,1,:,:] - 116.779)
    img = T.set_subtensor(img[:,2,:,:], img[:,2,:,:] - 103.939)
    return img / 255.0
I want to insert inputs = Lambda(mean_subtract, name='mean_subtraction')(inputs) right after the input layer and connect it to the first convolution layer of the resnet model without losing the saved weights.
How do I do that?
Thanks!
Quick answer (Seems better than adding the function to the model)
Use the preprocessing function as described here: preprocessing images generated using keras function ImageDataGenerator() to train resnet50 model
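A hedged sketch of that approach, with the mean subtraction rewritten as a plain NumPy function (channels_last assumed; the directory path, target size, and training settings are placeholders):
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from keras.applications import ResNet50

def mean_subtract_np(img):
    # called by the generator on a single (height, width, channels) array
    img = img.astype(np.float32)
    img[..., 0] -= 123.68
    img[..., 1] -= 116.779
    img[..., 2] -= 103.939
    return img / 255.0

datagen = ImageDataGenerator(preprocessing_function=mean_subtract_np)
model = ResNet50(weights='imagenet')
# train_gen = datagen.flow_from_directory('data/train', target_size=(224, 224), batch_size=32)
# model.fit_generator(train_gen, steps_per_epoch=100, epochs=5)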
Long answer
Since your function doesn't change shapes, you can put it in an outer model without changing the ResNet model (changing models may not be so simple; I always try to assemble new models from parts of other models when needed).
from keras.applications import ResNet50
from keras.layers import Input, Lambda
from keras.models import Model

resnet_model = ResNet50(weights='imagenet')
inputs = Input((None, None, 3))
#it seems you're using (3, None, None) instead.
#choose based on your "data_format", which by default is channels_last
outputs = Lambda(mean_subtract)(inputs)  # the output_shape argument is not necessary with the TensorFlow backend
outputs = resnet_model(outputs)
model = Model(inputs, outputs)
