Keras: extract one layer as a model

There are several good answers for getting the output of an intermediate layer in a Keras model. But I want to extract one layer from a Keras model and use that layer's input as the new model's input and its output as the new model's output. I have tried:
extractor = Model(model.get_layer('dw_conv5').input, model.get_layer('dw_conv5').output)
But there is an error:
Input layers to a Model must be InputLayer objects. Received inputs: Tensor("leaky_re_lu_4/LeakyRelu/Maximum:0", shape=(?, 3, 3, 256), dtype=float32). Input 0 (0-based) originates from layer type LeakyReLU.

A Model's inputs must come from Input layers, not from intermediate tensors, so create a new Input with a compatible shape and call the layer on it:
inputs = Input(a_compatible_shape)
outputs = model.get_layer('dw_conv5')(inputs)
extractor = Model(inputs, outputs)
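Concretely, the error message shows the layer's input tensor has shape (?, 3, 3, 256), so a compatible Input would be (3, 3, 256). A minimal sketch (some_batch is a hypothetical array of feature maps to run through the extracted layer):

from keras.layers import Input
from keras.models import Model

inputs = Input(shape=(3, 3, 256))              # spatial/channel shape taken from the error message
outputs = model.get_layer('dw_conv5')(inputs)  # reuses the trained layer together with its weights
extractor = Model(inputs, outputs)
features = extractor.predict(some_batch)       # some_batch: hypothetical array of shape (n, 3, 3, 256)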

Related

compilation step in keras sequential model throwing the error "ValueError: Input 0 of layer sequential_9 is incompatible with the layer:"

I'm trying to develop a classifier for two classes. I've implemented the model as follows:
model = keras.models.Sequential()  # using tensorflow's version of keras
model.add(keras.layers.InputLayer(input_shape=X_train_scaled[:,1].shape))
model.add(keras.layers.Dense(250, activation="relu"))
model.add(keras.layers.Dense(50, activation="relu"))
model.add(keras.layers.Dense(2, activation="softmax"))
model.summary()

# Compile the model
model.compile(loss='sparse_categorical_crossentropy',
              optimizer="sgd",
              metrics=["accuracy"])
The shapes of the inputs are:
X_train_scaled[:,1].shape, y_train.shape
((552,), (552,))
The entire error message is:
ValueError: Input 0 of layer sequential_9 is incompatible with the layer:
expected axis -1 of input shape to have value 552 but received input with shape (None, 1)
What am I doing wrong here?
The error message says that you defined a model which expects an input of shape (batch_size, 552), but you are feeding it an array of shape (batch_size, 1).
The issue is most likely with
input_shape = X_train_scaled[:,1].shape
This should most likely be:
input_shape = X_train_scaled.shape[1:]
i.e. you want to set the model's input shape to the shape of the features (without the number of examples). The model is then fed in mini-batches: if you call model.fit(X_train_scaled, ...), Keras will create mini-batches (of 32 examples by default) and update the model weights for each mini-batch.
Also, note that the model outputs a shape of (batch_size, 2). Since you compiled with sparse_categorical_crossentropy, y_train should contain integer class labels with shape (X_train.shape[0],), which matches the (552,) you have.
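For illustration, a minimal runnable sketch of the corrected setup with dummy data (the 10 features are an assumption, since the question only gives the number of examples):

import numpy as np
from tensorflow import keras

X_train_scaled = np.random.rand(552, 10).astype("float32")  # 552 examples, 10 hypothetical features
y_train = np.random.randint(0, 2, size=(552,))              # integer labels for the 2 classes

model = keras.models.Sequential()
model.add(keras.layers.InputLayer(input_shape=X_train_scaled.shape[1:]))  # (10,): features only
model.add(keras.layers.Dense(250, activation="relu"))
model.add(keras.layers.Dense(50, activation="relu"))
model.add(keras.layers.Dense(2, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
model.fit(X_train_scaled, y_train, epochs=1)  # mini-batches of 32 examples by default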

ValueError: Layer conv2d_41 was called with an input that isn't a symbolic tensor. All inputs to the layer should be tensors

I'm trying transfer learning with a custom input to the backbone.
(I can't do transfer learning the usual way because my input shape is N*N*8, so I need to add a small network_1 that brings it down to N*N*3.)
The intended structure (I connect the two with model_1.add(model_2)):

model_1 -- my small network
   |
model_2 -- MobileNet, VGG16, DenseNet, ...
   |
some extra layers

My code:
model_1 = Sequential()
model_1.add(InputLayer(input_shape=(size, size, F), name="InputLayer"))
model_1.add(Convolution2D(3, 128, padding = 'same'))
from keras.applications.densenet import DenseNet169
model_2=DenseNet169(weights='imagenet',include_top=False)
model_2.layers.pop(0) # remove input_layer of model_2
model_1.add(model_2) # output model_1 is input model_2?
model_1 = GlobalAveragePooling2D()(model_1)
model_1 = Dropout(0.2)(model_1)
model_1 = Dense(256*256, activation='softmax')(model_1)
model_1 = Reshape(256, 256)(model_1)
I got this error:
ValueError: Layer global_average_pooling2d_3 was called with an input that isn't a symbolic tensor. Received type: <class 'keras.engine.sequential.Sequential'>. Full input: [<keras.engine.sequential.Sequential object at 0x7f74f6621d68>]. All inputs to the layer should be tensors.
What is wrong in my code?
GlobalAveragePooling2D is a layer that performs an operation on a tensor (a multi-dimensional array). You are passing it a Sequential model object instead, which is why it throws the error: a model object is not a tensor.
To achieve what I think you're trying to achieve, call the layers on tensors: pass the output tensor of DenseNet into the layers you're adding, instead of passing the model itself. Good luck!
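A minimal sketch of that wiring with the functional API (assumptions: a 1x1 kernel stands in for the question's Convolution2D(3, 128, ...), which looks unintended, and size and F are the question's placeholders):

from keras.layers import Input, Convolution2D, GlobalAveragePooling2D, Dropout, Dense, Reshape
from keras.models import Model
from keras.applications.densenet import DenseNet169

inputs = Input(shape=(size, size, F))                 # size and F as in the question
x = Convolution2D(3, (1, 1), padding='same')(inputs)  # squeeze F channels down to the 3 the backbone expects
base = DenseNet169(weights='imagenet', include_top=False)
x = base(x)                                           # calling a model on a tensor returns a tensor
x = GlobalAveragePooling2D()(x)
x = Dropout(0.2)(x)
x = Dense(256 * 256, activation='softmax')(x)
outputs = Reshape((256, 256))(x)                      # Reshape takes a single tuple argument
model = Model(inputs, outputs)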

Adding an activation layer to Keras Add() layer and using this layer as output to model

I am trying to apply a softmax activation to the output of an Add() layer and make it the output of my model, but I am running into a few problems.
It seems the Add() layer doesn't accept an activation argument, and if I do something like this:
predictions = Add()([x,y])
predictions = softmax(predictions)
model = Model(inputs = model.input, outputs = predictions)
I get:
ValueError: Output tensors to a Model must be the output of a Keras `Layer` (thus holding past layer metadata). Found: Tensor("Softmax:0", shape=(?, 6), dtype=float32)
It has nothing to do with the Add layer: you are applying the softmax function directly to a Keras tensor, and this won't work; you need an actual layer. You can use the Activation layer for this:
from keras.layers import Activation
predictions = Add()([x,y])
predictions = Activation("softmax")(predictions)
model = Model(inputs = model.input, outputs = predictions)
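For completeness, a self-contained sketch of the pattern (the input and the two Dense branches x and y are hypothetical; the error message's shape (?, 6) motivates the 6 units):

from keras.layers import Input, Dense, Add, Activation
from keras.models import Model

inp = Input(shape=(10,))                          # hypothetical input
x = Dense(6)(inp)                                 # hypothetical branch 1
y = Dense(6)(inp)                                 # hypothetical branch 2
predictions = Add()([x, y])                       # elementwise sum of the two branches
predictions = Activation("softmax")(predictions)  # softmax applied via a proper Layer
model = Model(inputs=inp, outputs=predictions)
model.summary()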

Modify ResNet50 output layer for regression

I am trying to create a ResNet50 model for a regression problem, with an output value ranging from -1 to 1.
I omitted the classes argument, and in my preprocessing step I resize my images to 224x224x3.
I try to create the model with
def create_resnet(load_pretrained=False):
    if load_pretrained:
        weights = 'imagenet'
    else:
        weights = None
    # Get base model
    base_model = ResNet50(weights=weights)
    optimizer = Adam(lr=1e-3)
    base_model.compile(loss='mse', optimizer=optimizer)
    return base_model
and then create the model, print the summary, and train with fit_generator:
history = model.fit_generator(batch_generator(X_train, y_train, 100, 1),
                              steps_per_epoch=300,
                              epochs=10,
                              validation_data=batch_generator(X_valid, y_valid, 100, 0),
                              validation_steps=200,
                              verbose=1,
                              shuffle=1)
I get an error though that says
ValueError: Error when checking target: expected fc1000 to have shape (1000,) but got array with shape (1,)
Looking at the model summary, this makes sense, since the final Dense layer has an output shape of (None, 1000)
fc1000 (Dense) (None, 1000) 2049000 avg_pool[0][0]
But I can't figure out how to modify the model. I've read through the Keras documentation and looked at several examples, but pretty much everything I see is for a classification model.
How can I modify the model so it is formatted properly for regression?
Your code is throwing the error because you're using the original fully-connected top layer that was trained to classify images into one of 1000 classes. To make the network work, you need to replace this top layer with your own, with a shape compatible with your dataset and task.
Here is a small snippet I was using to create an ImageNet pre-trained model for the regression task (face landmarks prediction) with Keras:
from keras.applications.inception_resnet_v2 import InceptionResNetV2
from keras.layers import Flatten, GlobalAveragePooling2D, GlobalMaxPooling2D, Dense
from keras.models import Model

NUM_OF_LANDMARKS = 136

def create_model(input_shape, top='flatten'):
    if top not in ('flatten', 'avg', 'max'):
        raise ValueError('unexpected top layer type: %s' % top)

    # connects base model with new "head"
    BottleneckLayer = {
        'flatten': Flatten(),
        'avg': GlobalAveragePooling2D(),
        'max': GlobalMaxPooling2D()
    }[top]

    base = InceptionResNetV2(input_shape=input_shape,
                             include_top=False,
                             weights='imagenet')

    x = BottleneckLayer(base.output)
    x = Dense(NUM_OF_LANDMARKS, activation='linear')(x)
    model = Model(inputs=base.inputs, outputs=x)
    return model
In your case, I guess you only need to replace InceptionResNetV2 with ResNet50. Essentially, you are creating a pre-trained model without top layers:
base = ResNet50(input_shape=input_shape, include_top=False)
And then attaching your custom layer on top of it:
x = Flatten()(base.output)
x = Dense(NUM_OF_LANDMARKS, activation='sigmoid')(x)
model = Model(inputs=base.inputs, outputs=x)
That's it.
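Adapted to the question, a hedged sketch (assumptions: a single regression target in [-1, 1] calls for Dense(1) with a tanh activation, since sigmoid outputs lie in (0, 1); the 224x224x3 input size and mse loss are taken from the question):

from keras.applications.resnet50 import ResNet50
from keras.layers import Flatten, Dense
from keras.models import Model
from keras.optimizers import Adam

base = ResNet50(input_shape=(224, 224, 3), include_top=False, weights='imagenet')
x = Flatten()(base.output)
x = Dense(1, activation='tanh')(x)  # tanh keeps predictions in (-1, 1)
model = Model(inputs=base.inputs, outputs=x)
model.compile(loss='mse', optimizer=Adam(lr=1e-3))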
You can also check this link from the Keras repository that shows how ResNet50 is constructed internally. I believe it will give you some insights about the functional API and layer replacement.
Also, I would say that regression and classification tasks are not that different when it comes to fine-tuning pre-trained ImageNet models. The type of task mostly depends on your loss function and the top layer's activation function. Otherwise, you still have a fully-connected layer with N outputs, but they are interpreted in a different way.

Changing keras model input name

When creating a model ensemble consisting of 6 models in Keras, I do the following:
out = keras.layers.Average(name="output")(outputs) #6 models
model = Model(input, out, name='ensemble')
I would like to set the input to be
input = Input(SHAPE, name='my_input')
but then I get an error because, from what I understand, when making the average model its input was automatically set to 'input_6', so I get: Graph disconnected: cannot obtain value for tensor Tensor("input_6:0", shape=(?, 256, 256, 1), dtype=float32). Is there a way to change the model input name?
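The pattern behind the question might look like the sketch below (submodels, a hypothetical list of the 6 models, stands in for what the question builds). The graph-disconnected error suggests out was built from the submodels' original input tensors; calling each submodel on the new named Input makes the averaged output trace back to it:

from keras.layers import Input, Average
from keras.models import Model

SHAPE = (256, 256, 1)                    # spatial shape from the error message
input = Input(SHAPE, name='my_input')    # the renamed input
outputs = [m(input) for m in submodels]  # submodels: hypothetical list of the 6 models
out = Average(name="output")(outputs)
model = Model(input, out, name='ensemble')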
