Extracting activation maps from a trained neural network - Keras

I have a trained CNN model. I am trying to extract the output of each convolutional layer and plot the results to explore which regions of the image have high activations. Any ideas on how to do this?
Below is the network I have trained.
input_shape = (3, 227, 227)
x_input = Input(input_shape)
# Conv Layer 1
x = Convolution2D(96, 7, 7, subsample=(4, 4), activation='relu',
                  name='conv_1', init='he_normal')(x_input)
x = MaxPooling2D((3, 3), strides=(2, 2), name='maxpool')(x)
x = BatchNormalization()(x)
x = ZeroPadding2D((2, 2))(x)
# Conv Layer 2
x = Convolution2D(256, 5, 5, activation='relu', name='conv_2', init='he_normal')(x)
x = MaxPooling2D((3, 3), strides=(2, 2), name='maxpool2')(x)
x = BatchNormalization()(x)
x = ZeroPadding2D((2, 2))(x)
# Conv Layer 3
x = Convolution2D(384, 3, 3, activation='relu',
                  name='conv_3', init='he_normal')(x)
x = MaxPooling2D((3, 3), strides=(2, 2), name='maxpool3')(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(512, activation='relu')(x)
x = Dropout(0.5)(x)
predictions = Dense(2, activation='softmax')(x)
model = Model(inputs=x_input, outputs=predictions)
Thanks!

Have a look at this GitHub issue and the Keras FAQ entry "How can I obtain the output of an intermediate layer?". It seems the easiest way to do that is to define new models with the outputs that you want. For example:
input_shape = (3, 227, 227)
x_input = Input(input_shape)
# Conv Layer 1
# Save the layer output in a variable
conv1 = Convolution2D(96, 7, 7, subsample=(4, 4), activation='relu',
                      name='conv_1', init='he_normal')(x_input)
x = conv1
x = MaxPooling2D(...)(x)
# ...
conv2 = Convolution2D(...)(x)
x = conv2
# ...
conv3 = Convolution2D(...)(x)
x = conv3
# ...
predictions = Dense(2, activation='softmax')(x)
# Main model
model = Model(inputs=x_input, outputs=predictions)
# Intermediate evaluation model
conv_layers_model = Model(inputs=x_input, outputs=[conv1, conv2, conv3])
# After training is done, retrieve the intermediate activations for some data
conv1_val, conv2_val, conv3_val = conv_layers_model.predict(data)
Note that since you are using the same layer objects in both models, the weights are automatically shared between them.
A more complete example of activation visualization can be found here; in that case they use the K.function approach.
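For reference, a minimal sketch of that K.function approach, assuming the main model from the question has already been trained and using the layer names from its definition:
from keras import backend as K

# Build a backend function mapping the model input to the three conv outputs.
# The layer names ('conv_1', 'conv_2', 'conv_3') come from the question's code.
# K.learning_phase() is included because the model contains BatchNormalization
# and Dropout layers; passing 0 at call time selects test mode.
get_activations = K.function(
    [model.input, K.learning_phase()],
    [model.get_layer(name).output for name in ('conv_1', 'conv_2', 'conv_3')])

conv1_val, conv2_val, conv3_val = get_activations([data, 0])
Each returned array holds one activation map per filter, ready to plot channel by channel (e.g. with plt.imshow).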

Related

Converting TensorFlow Keras model API to model subclassing

For a simple TF2 object-detection CNN architecture defined using Keras's functional API as follows:
input_ = Input(shape=(144, 144, 3), name='image')
# 'name' is an optional string for the Input layer; it should be unique within
# a model and is auto-generated if not provided. Here 'image' is the dict key
# the data generator uses to map data to this layer.
x = input_
# Define a conv block-
x = Conv2D(filters=64, kernel_size=3, activation='relu')(x)
x = BatchNormalization()(x)
x = MaxPool2D(pool_size=2)(x)
x = Flatten()(x)  # flatten the last pooling layer's output volume
x = Dense(256, activation='relu')(x)
# The data generator yields dictionaries, so the 'name' argument lets Keras map
# each generator output to the appropriate output layer.
class_out = Dense(units=9, activation='softmax', name='class_out')(x)  # classification output
box_out = Dense(units=2, activation='linear', name='box_out')(x)  # regression output
# Define the CNN model - since we have 2 outputs, we pass them as a list
model = tf.keras.models.Model(input_, [class_out, box_out])
I am attempting to define it using Model sub-classing as:
class OD(Model):
    def __init__(self):
        super(OD, self).__init__()
        self.conv1 = Conv2D(filters=64, kernel_size=3, activation=None)
        self.bn = BatchNormalization()
        self.pool = MaxPool2D(pool_size=2)
        self.flatten = Flatten()
        self.dense = Dense(256, activation=None)
        self.class_out = Dense(units=9, activation=None, name='class_out')
        self.box_out = Dense(units=2, activation='linear', name='box_out')

    def call(self, x):
        x = tf.nn.relu(self.bn(self.conv1(x)))
        x = self.pool(x)
        x = self.flatten(x)
        x = tf.nn.relu(self.dense(x))
        x = [tf.nn.softmax(self.class_out(x)), self.box_out(x)]
        return x
A batch of training data is obtained as:
example, label = next(data_generator(batch_size = 32))
example.keys()
# dict_keys(['image'])
image = example['image']
image.shape
# (32, 144, 144, 3)
label.keys()
# dict_keys(['class_out', 'box_out'])
label['class_out'].shape, label['box_out'].shape
# ((32, 9), (32, 2))
Is my Model sub-classing architecture equivalent to the Keras functional API definition above?
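One subtle difference worth noting: the functional version applies ReLU inside Conv2D before BatchNormalization (conv → ReLU → BN), while the call method above applies BN before ReLU (conv → BN → ReLU), so the two models are close but not strictly identical. A quick shape sanity check (a sketch, assuming the OD class and the data shapes above):
import tensorflow as tf

# Build the subclassed model by calling it once on a dummy batch, then compare
# its output shapes against the functional model's (batch, 9) and (batch, 2).
od_model = OD()
class_pred, box_pred = od_model(tf.zeros((1, 144, 144, 3)))
print(class_pred.shape)  # expected: (1, 9)
print(box_pred.shape)    # expected: (1, 2)
od_model.summary()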

model.trainable_variables returns an empty list

I want to write my own custom training function, but I can't access the trainable_weights because it returns []. I can get the weights using layer.get_weights(), but trainable_variables is empty. This is my training method:
def train_on_batch(X, y_real, model):
    with tf.GradientTape() as tape:
        tape.watch(X)
        y_pred = model(X, training=True)
        print(model.trainable_variables)
        loss_value = loss(y_real, y_pred)
    grads = tape.gradient(loss_value, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss_value
and this is part of my CNN model:
model_list = list()
# base model input
in_image = Input(shape=input_shape)
# conv 1x1
d = Conv2D(128, (1, 1), padding='same', kernel_initializer=init, kernel_constraint=const)(in_image)
d = LeakyReLU(alpha=0.2)(d)
# conv 3x3 (output block)
d = MinibatchStdev()(d)
d = Conv2D(128, (3, 3), padding='same', kernel_initializer=init, kernel_constraint=const)(d)
d = LeakyReLU(alpha=0.2)(d)
# conv 4x4
d = Conv2D(128, (4, 4), padding='same', kernel_initializer=init, kernel_constraint=const)(d)
d = LeakyReLU(alpha=0.2)(d)
# dense output layer
d = Flatten()(d)
out_class = Dense(1, name='dense')(d)
print(type(out_class))
# define model
model = Model(in_image, out_class)
Check the last code block of the question linked below; it shows the best way to get the variables:
TensorFlow 2.0 How to get trainable variables from tf.keras.layers layers, like Conv2D or Dense
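The gist of that linked answer, sketched below (an assumption about your case, since the full training setup isn't shown): a Keras layer or model owns no variables until it has been built, i.e. until it has been called on an input or build() has been invoked, so trainable_variables is empty before that point.
import tensorflow as tf
from tensorflow.keras.layers import Conv2D

layer = Conv2D(128, (3, 3))
print(layer.trainable_variables)  # [] -- the layer has not been built yet

# Building the layer (explicitly here, or by calling it on data) creates its weights:
layer.build(input_shape=(None, 64, 64, 3))
print([v.name for v in layer.trainable_variables])  # kernel and bias now exist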

Keras - Proper way to extract weights from a nested model

I have a nested model with an input layer and some final dense layers before the output. Here is the code for it:
image_input = Input(shape, name='image_input')
x = DenseNet121(input_shape=shape, include_top=False, weights=None,
                backend=keras.backend, layers=keras.layers,
                models=keras.models, utils=keras.utils)(image_input)
x = GlobalAveragePooling2D(name='avg_pool')(x)
x = Dense(1024, activation='relu', name='dense_layer1_image')(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(512, activation='relu', name='dense_layer2_image')(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
output = Dense(num_class, activation='softmax', name='image_output')(x)
classificationModel = Model(inputs=[image_input], outputs=[output])
Now suppose I wanted to extract the DenseNet's weights from this model and perform transfer learning to another, larger model, which nests the same DenseNet but also has some other layers after it, such as:
image_input = Input(shape, name='image_input')
x = DenseNet121(input_shape=shape, include_top=False, weights=None,
                backend=keras.backend, layers=keras.layers,
                models=keras.models, utils=keras.utils)(image_input)
x = GlobalAveragePooling2D(name='avg_pool')(x)
x = Dense(1024, activation='relu', name='dense_layer1_image')(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(512, activation='relu', name='dense_layer2_image')(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
x = Dense(256, activation='relu', name='dense_layer3_image')(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
output = Dense(num_class, activation='sigmoid', name='image_output')(x)
classificationModel = Model(inputs=[image_input], outputs=[output])
Would I just need to do modelB.load_weights(<weights.hdf5>, by_name=True)? Also, should I name the internal DenseNet, and if so, how?
Before using the nested model, store it in a variable. That makes everything a lot easier:
densenet = DenseNet121(input_shape=shape, include_top=False, weights=None,
                       backend=keras.backend, layers=keras.layers,
                       models=keras.models, utils=keras.utils)
image_input = Input(shape, name='image_input')
x = densenet(image_input)
x = GlobalAveragePooling2D(name='avg_pool')(x)
# ...
Now it's super simple to:
weights = densenet.get_weights()
another_densenet.set_weights(weights)
The loaded file
You can also print model.summary() for your loaded model; the DenseNet will be the first or second layer (you must check this). You can then get it with densenet = loaded_model.layers[i], and transfer its weights to the new DenseNet either with the method above or with new_model.layers[i].set_weights(densenet.get_weights()).
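Putting that recipe together, a sketch (the file name and the layer index are assumptions; verify the index against the summary):
from keras.models import load_model

loaded_model = load_model('small_model.h5')  # hypothetical file name
loaded_model.summary()                       # locate the nested DenseNet layer

densenet = loaded_model.layers[1]  # assumed index; check the summary first
# new_model is the larger model defined above, with its DenseNet at the same index
new_model.layers[1].set_weights(densenet.get_weights())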
Perhaps the easiest way to go about this is to use the trained model itself, without trying to load the model weights. Say you have trained the initial model (copied and pasted from the provided source code, with minimal edits to a variable name):
image_input = Input(shape, name='image_input')
# ... intermediary layers elided
x = BatchNormalization()(x)
output = Dropout(0.5)(x)
model_output = Dense(num_class, activation='softmax', name='image_output')(output)
smaller_model = Model(inputs=[image_input], outputs=[model_output])
To use the trained weights of this model for a larger model, we can simply declare another model that uses the trained weights, then use that newly defined model as a component of the larger model.
new_model = Model(image_input, output) # Model that uses trained weights
main_input = Input(shape, name='main_input')
x = new_model(main_input)
x = Dense(256, activation='relu', name='dense_layer3_image')(x)
x = BatchNormalization()(x)
x = Dropout(0.5)(x)
output = Dense(num_class, activation='sigmoid', name='image_output')(x)
final_model = Model(inputs=[main_input], outputs=[output])
If anything is unclear, I'd be more than happy to elaborate.

Keras layer asks for different shape than in the summary

I'm writing a U-Net CNN in Keras and trying to use fit_generator for training. For this to work, I used a generator script that feeds the images and labels to my network (the simple fit function works, but I want to train on a big dataset which cannot fit into memory).
My problem is that the model summary correctly says the output layer has shape (None, 288, 512, 4):
https://i.imgur.com/69xG8pO.jpg
but when I try actual training I get this error:
https://i.imgur.com/j7H6sHX.jpg
I don't get why Keras wants (288, 512, 1) when the summary expects (288, 512, 4).
I tried it with my own U-Net code and also copied a working version from GitHub, but both have the exact same problem, which leads me to believe that my generator script is the weak link. Below is the code I used (the image and label array functions used here already worked when I used them with fit in a previous CNN):
def generator(img_path, label_path, batch_size, height, width, num_classes):
    input_pairs = get_pairs(img_path, label_path)  # rewrite if param name changes
    random.shuffle(input_pairs)
    iterate_pairs = itertools.cycle(input_pairs)
    while True:
        X = []
        Y = []
        for _ in range(batch_size):
            im, lab = next(iterate_pairs)
            appended_im = next(iter(im))
            appended_lab = next(iter(lab))
            X.append(input_image_array(appended_im, width, height))
            Y.append(input_label_array(appended_lab, width, height, num_classes, palette))
        yield (np.array(X), np.array(Y))
I tried the generator out, and the provided batches have these shapes (for a batch size of 15):
(15, 288, 512, 3)
(15, 288, 512, 4)
So I really do not know what could be the problem here.
EDIT: Here is the model code I used:
def conv_block(input_tensor, n_filter, kernel=(3, 3), padding='same', initializer="he_normal"):
    x = Conv2D(n_filter, kernel, padding=padding, kernel_initializer=initializer)(input_tensor)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = Conv2D(n_filter, kernel, padding=padding, kernel_initializer=initializer)(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    return x

def deconv_block(input_tensor, residual, n_filter, kernel=(3, 3), strides=(2, 2), padding='same'):
    y = Conv2DTranspose(n_filter, kernel, strides, padding)(input_tensor)
    y = concatenate([y, residual], axis=3)
    y = conv_block(y, n_filter)
    return y

# NETWORK - n_classes is the desired number of classes, filters are fixed
def Unet(input_height, input_width, n_classes=4, filters=64):
    # Downsampling
    input_layer = Input(shape=(input_height, input_width, 3), name='input')
    conv_1 = conv_block(input_layer, filters)
    conv_1_out = MaxPooling2D(pool_size=(2, 2))(conv_1)
    conv_2 = conv_block(conv_1_out, filters*2)
    conv_2_out = MaxPooling2D(pool_size=(2, 2))(conv_2)
    conv_3 = conv_block(conv_2_out, filters*4)
    conv_3_out = MaxPooling2D(pool_size=(2, 2))(conv_3)
    conv_4 = conv_block(conv_3_out, filters*8)
    conv_4_out = MaxPooling2D(pool_size=(2, 2))(conv_4)
    conv_4_drop = Dropout(0.5)(conv_4_out)
    conv_5 = conv_block(conv_4_drop, filters*16)
    conv_5_drop = Dropout(0.5)(conv_5)
    # Upsampling
    deconv_1 = deconv_block(conv_5_drop, conv_4, filters*8)
    deconv_1_drop = Dropout(0.5)(deconv_1)
    deconv_2 = deconv_block(deconv_1_drop, conv_3, filters*4)
    deconv_2_drop = Dropout(0.5)(deconv_2)
    deconv_3 = deconv_block(deconv_2_drop, conv_2, filters*2)
    deconv_3 = deconv_block(deconv_3, conv_1, filters)
    # Output - map each 64-component feature vector to the number of classes
    output = Conv2D(n_classes, (1, 1))(deconv_3)
    output = BatchNormalization()(output)
    output = Activation("softmax")(output)
    # embed into the functional API
    model = Model(inputs=input_layer, outputs=output, name="Unet")
    return model
Change your loss to categorical_crossentropy. The sparse_categorical_crossentropy loss expects integer targets, which is why Keras asks for labels of shape (288, 512, 1) instead of your one-hot labels of shape (288, 512, 4).
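A sketch of the fix (the optimizer and metrics here are placeholders):
model = Unet(input_height=288, input_width=512, n_classes=4)

# One-hot targets of shape (batch, 288, 512, 4) -> categorical_crossentropy.
# Integer targets of shape (batch, 288, 512, 1) -> sparse_categorical_crossentropy.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])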

How to have 2 inputs in a Dense network with Keras?

Most tutorials I've followed show how to give a single input to the first layer of a Dense network using Keras, with something like this:
Inp = Input(shape=(1,))
x = Dense(100, activation='relu', name = "Dense_1")(Inp)
x = Dense(100, activation='relu', name = "Dense_2")(x)
output = Dense(50, activation='softmax', name = "outputL")(x)
However, if I want to provide 2 or more inputs to the first layer of a Dense network, how can I do so with Keras? The idea is simply to have two inputs, x1 and x2.
I've tried something like this which I've modified from snippets found on one of the pages in the Keras documentation:
Inp1 = Input(shape=(1,))
Inp2 = Input(shape=(1,))
Inp = keras.layers.concatenate([Inp1, Inp2])
x = Dense(100, activation='relu', name="Dense_1")(Inp)
x = Dense(100, activation='relu', name="Dense_2")(x)
output = Dense(50, activation='softmax', name="outputL")(x)
# Build the Model from both inputs and compile it before fitting
# (the loss and optimizer here are placeholders):
model = Model(inputs=[Inp1, Inp2], outputs=output)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
res = model.fit([x1_train, x2_train], y_train,
                validation_data=([x1_test, x2_test], y_test))
But so far, the results I'm getting from model training show ridiculously low accuracy. Does what I've done actually do what I intended?
