I am trying to implement a VGG-19 CNN on the CIFAR-10 dataset, where the images have dimensions (32, 32, 3). The training set has 50,000 images and the test set has 10,000 images.
I am using Python 3.7 and TensorFlow 2.0. I have preprocessed the dataset by normalizing it:
# Normalize the training and testing datasets-
# (cast to float first; the CIFAR-10 arrays load as uint8)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255.0
X_test /= 255.0
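Since the model is later compiled with categorical_crossentropy, the integer CIFAR-10 labels also need to be one-hot encoded; a minimal sketch (assuming y_train and y_test hold the raw integer labels):
from tensorflow.keras.utils import to_categorical

# One-hot encode the integer class labels (10 CIFAR-10 classes)-
y_train = to_categorical(y_train, num_classes = 10)
y_test = to_categorical(y_test, num_classes = 10)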
I then designed the CNN:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, Flatten, Dense

def vgg_19():
    """
    Function to define the architecture of a convolutional neural network
    model following the VGG-19 architecture, for the CIFAR-10 dataset.

    VGG-19 architecture:
    64, 64, max-pool              -- convolutional layers
    128, 128, max-pool            -- convolutional layers
    256, 256, 256, 256, max-pool  -- convolutional layers
    512, 512, 512, 512, max-pool  -- convolutional layers
    512, 512, 512, 512, avg-pool  -- convolutional layers
    256, 256, 10                  -- fully connected layers

    Output: Returns the designed and compiled convolutional neural network model.
    """
    model = Sequential()

    # First conv layer also fixes the input shape-
    model.add(
        Conv2D(
            filters = 64, kernel_size = (3, 3),
            activation = 'relu', kernel_initializer = tf.initializers.GlorotUniform(),
            strides = (1, 1), padding = 'same',
            input_shape = (32, 32, 3)
        )
    )

    # Remaining conv blocks: (number of conv layers, filters) per block.
    # Each block ends in 2x2 pooling; only the last block uses average pooling-
    blocks = [(1, 64), (2, 128), (4, 256), (4, 512), (4, 512)]
    for block_idx, (n_convs, filters) in enumerate(blocks):
        for _ in range(n_convs):
            model.add(
                Conv2D(
                    filters = filters, kernel_size = (3, 3),
                    activation = 'relu', kernel_initializer = tf.initializers.GlorotUniform(),
                    strides = (1, 1), padding = 'same'
                )
            )
        if block_idx == len(blocks) - 1:
            model.add(AveragePooling2D(pool_size = (2, 2), strides = (2, 2)))
        else:
            model.add(MaxPooling2D(pool_size = (2, 2), strides = (2, 2)))

    # Classifier head, scaled down from VGG's 4096/4096/1000 for CIFAR-10-
    model.add(Flatten())
    model.add(Dense(units = 256, activation = 'relu'))
    model.add(Dense(units = 256, activation = 'relu'))
    model.add(Dense(units = 10, activation = 'softmax'))

    # Compile the CNN-
    model.compile(
        loss = tf.keras.losses.categorical_crossentropy,
        optimizer = tf.keras.optimizers.SGD(learning_rate = 0.01, momentum = 0.9),
        metrics = ['accuracy']
    )
    return model
However, when I try to train it (with orig_model = vgg_19(), and batch_size and num_epochs defined elsewhere):
history = orig_model.fit(
    x = X_train, y = y_train,
    batch_size = batch_size,
    epochs = num_epochs,
    verbose = 1,
    validation_data = (X_test, y_test),
    shuffle = True
)
The designed CNN reaches a validation accuracy of only about 9%. What's going wrong?
The abysmally low validation accuracy is caused by the Glorot initializer. After changing it to 'he_normal', the VGG-19 CNN starts learning and reaches about 77-79% validation accuracy.
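For reference, only the initializer argument in each Conv2D (and optionally Dense) layer needs to change, e.g.:
Conv2D(
    filters = 64, kernel_size = (3, 3),
    activation = 'relu',
    kernel_initializer = 'he_normal',  # was tf.initializers.GlorotUniform()
    strides = (1, 1), padding = 'same'
)
He initialization scales the weight variance by 2/fan_in, which compensates for ReLU zeroing half of its inputs; with Glorot's smaller variance, the activations in a 16-conv-layer ReLU stack shrink layer by layer until the gradients effectively vanish.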
I am trying to write a VGG19 neural network for single-channel images, where everything is essentially the same as in a three-channel network except for the input layer.
def model(self, inputShape=(64, 64, 1)):
    inputLayer = Input(shape=inputShape)
After applying the Flatten layer to the convolution tensor, I use the same dense-layer parameters as in the classic VGG19, but I get an error when compiling the model:
ValueError: Shapes (None, 64, 64, 1) and (None, 1000) are incompatible
As far as I understand, the number of neurons in a dense layer should correspond to the dimensionality of the incoming data; that is, for a 64x64 image, after the Flatten layer the dense layer should receive a 4096-element vector, as described in the classic model:
layerSet = Flatten()(layerSet)
layerSet = Dense(4096, activation='relu')(layerSet)
layerSet = Dropout(0.5)(layerSet)
layerSet = Dense(4096, activation='relu')(layerSet)
layerSet = Dropout(0.5)(layerSet)
outputLayer = Dense(1000, activation='relu')(layerSet)
The last dense layer has 1000 neurons, each corresponding to a recognizable class.
In my case I need a set of features for SRGAN, so I doubt my problem calls for a classification vector: the features derived from the VGG19 model, together with features derived from the discriminative model, should be passed to the output layer of the generative-adversarial model.
Below is the full code example with the model itself and the training method. I expect to eventually get the required features from the model:
class VGG19DeepConvolutionNetwork:
    __model = None

    def __init__(self):
        self.model()

    def model(self, inputShape=(64, 64, 1)):
        inputLayer = Input(shape=inputShape)

        layerSet = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(inputLayer)
        layerSet = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(layerSet)
        layerSet = MaxPooling2D(strides=(2, 2), padding='same')(layerSet)
        layerSet = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(layerSet)
        layerSet = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(layerSet)
        layerSet = MaxPooling2D(strides=(2, 2), padding='same')(layerSet)
        layerSet = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(layerSet)
        layerSet = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(layerSet)
        layerSet = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(layerSet)
        layerSet = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv4')(layerSet)
        layerSet = MaxPooling2D(strides=(2, 2), padding='same')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv4')(layerSet)
        layerSet = MaxPooling2D(strides=(2, 2), padding='same')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv4')(layerSet)
        layerSet = MaxPooling2D(strides=(2, 2), padding='same')(layerSet)

        layerSet = Flatten()(layerSet)
        layerSet = Dense(4096, activation='relu')(layerSet)
        layerSet = Dropout(0.5)(layerSet)
        layerSet = Dense(4096, activation='relu')(layerSet)
        layerSet = Dropout(0.5)(layerSet)
        outputLayer = Dense(1000, activation='relu')(layerSet)

        self.__model = Model(inputs=[inputLayer], outputs=[outputLayer])
        self.__model.compile(optimizer='adam', loss='categorical_crossentropy')
        print(self.__model.summary())

    def train(self, imageDataPath: str = 'srgangImageData.h5', weightsPath: str = 'vgg19Weights.h5', sliceSize=32, epochsNumber=100):
        if self.__model is None:
            self.model((sliceSize, sliceSize, 1))

        imageData = ImageDataProcessing()
        sourceTrain, targetTrain, sourceTest, targetTest = imageData.readImageData(imageDataPath)
        del imageData

        print('train source', sourceTrain.shape)
        print('train target', targetTrain.shape)
        print('test source', sourceTest.shape)
        print('test target', targetTest.shape)

        checkpoint = ModelCheckpoint(weightsPath, verbose=1, save_best_only=True, save_weights_only=False, mode='min')
        callbacks_list = [checkpoint]
        history = self.__model.fit(sourceTrain, targetTrain, batch_size=128, steps_per_epoch=len(sourceTrain)//128,
                                   validation_data=(sourceTest, targetTest),
                                   callbacks=callbacks_list, shuffle=True, epochs=epochsNumber, verbose=1)
Some corrections:
The Flatten layer should produce 2 x 2 x 512 = 2048 values, as that is the output shape of the last convolutional layer; TensorFlow/Keras infers this for you.
The reason the last layer has 1000 neurons is that the model was originally trained on a dataset with 1000 classes (one neuron per class).
What version of TensorFlow are you using? Are you sure it is failing at the compile step? I compiled your model with TensorFlow 2.10.0 (Python 3.10.4) and everything worked fine, and a forward pass with an input of shape (10, 64, 64, 1) worked fine too.
Here is the code I tried both locally and in Google Colab:
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras import Model
import tensorflow as tf
class VGG19DeepConvolutionNetwork:
    __model = None

    def __init__(self):
        self.model()

    def model(self, inputShape=(64, 64, 1)):
        inputLayer = Input(shape=inputShape)

        layerSet = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(inputLayer)
        layerSet = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(layerSet)
        layerSet = MaxPooling2D(strides=(2, 2), padding='same')(layerSet)
        layerSet = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(layerSet)
        layerSet = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(layerSet)
        layerSet = MaxPooling2D(strides=(2, 2), padding='same')(layerSet)
        layerSet = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(layerSet)
        layerSet = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(layerSet)
        layerSet = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(layerSet)
        layerSet = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv4')(layerSet)
        layerSet = MaxPooling2D(strides=(2, 2), padding='same')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv4')(layerSet)
        layerSet = MaxPooling2D(strides=(2, 2), padding='same')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(layerSet)
        layerSet = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv4')(layerSet)
        layerSet = MaxPooling2D(strides=(2, 2), padding='same')(layerSet)

        layerSet = Flatten()(layerSet)
        layerSet = Dense(4096, activation='relu')(layerSet)
        layerSet = Dropout(0.5)(layerSet)
        layerSet = Dense(4096, activation='relu')(layerSet)
        layerSet = Dropout(0.5)(layerSet)
        outputLayer = Dense(1000, activation='relu')(layerSet)

        self.__model = Model(inputs=[inputLayer], outputs=[outputLayer])
        self.__model.compile(optimizer='adam', loss='categorical_crossentropy')
        print(self.__model.summary())

    def getModel(self):
        return self.__model

    def train(self, imageDataPath: str = 'srgangImageData.h5', weightsPath: str = 'vgg19Weights.h5', sliceSize=32, epochsNumber=100):
        if self.__model is None:
            self.model((sliceSize, sliceSize, 1))

        imageData = ImageDataProcessing()
        sourceTrain, targetTrain, sourceTest, targetTest = imageData.readImageData(imageDataPath)
        del imageData

        print('train source', sourceTrain.shape)
        print('train target', targetTrain.shape)
        print('test source', sourceTest.shape)
        print('test target', targetTest.shape)

        checkpoint = ModelCheckpoint(weightsPath, verbose=1, save_best_only=True, save_weights_only=False, mode='min')
        callbacks_list = [checkpoint]
        history = self.__model.fit(sourceTrain, targetTrain, batch_size=128, steps_per_epoch=len(sourceTrain)//128,
                                   validation_data=(sourceTest, targetTest),
                                   callbacks=callbacks_list, shuffle=True, epochs=epochsNumber, verbose=1)

modelWrapper = VGG19DeepConvolutionNetwork()
model = modelWrapper.getModel()

X = tf.random.uniform((10, 64, 64, 1))
output = model(X)
print(output)
# modelWrapper.train()
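As a side note on the SRGAN use case (an assumption about the goal, not something the answer above covers): perceptual losses usually take the activations of a convolutional block rather than the dense head, which a truncated model can expose:
# Sketch: expose the block5_conv4 activations as features for a perceptual
# loss, reusing the named layers of the model defined above.
feature_extractor = Model(
    inputs=model.input,
    outputs=model.get_layer('block5_conv4').output
)

features = feature_extractor(tf.random.uniform((10, 64, 64, 1)))
print(features.shape)  # (10, 4, 4, 512) for 64x64 single-channel inputs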
I am using the Cats vs. Dogs dataset, which contains 2000 images in 2 categories, divided into train and validation directories, and can be downloaded here.
I am trying to use real-time image augmentation fed into a CNN model through train and validation generators. I am using Python 3.8 and TF 2.5. The code is as follows:
path_to_imgs = "cats_and_dogs_filtered\\"
# Define the train and validation directory-
train_dir = os.path.join(path_to_imgs, 'train')
val_dir = os.path.join(path_to_imgs, 'validation')
batch_size = 64
IMG_HEIGHT, IMG_WIDTH = 150, 150
def plotImages(images_arr):
    # Function to plot 5 images together-
    fig, axes = plt.subplots(1, 5, figsize=(20, 20))
    axes = axes.flatten()
    for img, ax in zip(images_arr, axes):
        ax.imshow(img)
        ax.axis('off')
    plt.tight_layout()
    plt.show()
    return None
# Use image augmentation for the training dataset-
image_generator = ImageDataGenerator(
    rescale = 1./255, rotation_range = 135)

train_data_gen = image_generator.flow_from_directory(
    directory = train_dir, batch_size = batch_size,
    shuffle = True, target_size = (IMG_HEIGHT, IMG_WIDTH),
    class_mode = 'sparse'
)
# Found 2000 images belonging to 2 classes.

# Validation images need no augmentation-
val_data_gen = tf.keras.preprocessing.image_dataset_from_directory(
    val_dir, image_size = (IMG_HEIGHT, IMG_WIDTH),
    batch_size = batch_size)
# Found 1000 files belonging to 2 classes.
# Configure the dataset for performance-
# AUTOTUNE = tf.data.AUTOTUNE
# val_data_gen = val_data_gen.cache().prefetch(buffer_size = AUTOTUNE)
val_data_gen = val_data_gen.take(batch_size).cache().repeat()
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
# Get a batch of training images and labels-
x, y = next(iter(train_data_gen))
# Get a batch of validation images and labels-
x_t, y_t = next(iter(val_data_gen))
x.shape, y.shape
# ((64, 150, 150, 3), (64,))
x_t.shape, y_t.shape
# (TensorShape([64, 150, 150, 3]), TensorShape([64]))
weight_decay = 0.0005

def vgg_conv(filters, **kwargs):
    # 3x3 'same' conv with He initialization and L2 weight decay-
    return Conv2D(
        filters = filters, kernel_size = (3, 3),
        activation = 'relu', kernel_initializer = tf.initializers.he_normal(),
        strides = (1, 1), padding = 'same',
        kernel_regularizer = regularizers.l2(weight_decay),
        **kwargs
    )

model = Sequential()
model.add(vgg_conv(64, input_shape = (IMG_HEIGHT, IMG_WIDTH, 3)))
model.add(vgg_conv(64))
model.add(MaxPooling2D(pool_size = (2, 2), strides = (2, 2)))
model.add(vgg_conv(128))
model.add(vgg_conv(128))
model.add(MaxPooling2D(pool_size = (2, 2), strides = (2, 2)))
model.add(vgg_conv(256))
model.add(vgg_conv(256))
model.add(AveragePooling2D(pool_size = (2, 2), strides = (2, 2)))
model.add(Flatten())
model.add(Dense(units = 2, activation = 'sigmoid'))

# Compile the defined model-
model.compile(
    optimizer = tf.keras.optimizers.Adam(learning_rate = 0.001),
    loss = 'sparse_categorical_crossentropy',
    metrics = ['accuracy']
)
model(x).shape
# TensorShape([64, 2])
model.predict(x).shape
# (64, 2)
'''
# This is deprecated in favor of model.fit()-
model.fit_generator(
    generator = train_data_gen, steps_per_epoch = len(train_data_gen),
    epochs = 5
)
'''
model.fit(train_data_gen, val_data_gen, batch_size = batch_size, epochs = 5)
Using model.fit() gives the error:
ValueError: `y` argument is not supported when using keras.utils.Sequence as input.
What am I doing wrong?
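For what it's worth (a sketch, not an authoritative fix): train_data_gen is a keras.utils.Sequence that already yields (images, labels) batches, so fit() rejects a separate y argument, and here val_data_gen is being passed positionally as y. The validation set belongs in the validation_data keyword instead:
model.fit(
    train_data_gen,                  # a Sequence: yields (x, y) pairs itself
    validation_data = val_data_gen,
    validation_steps = 16,           # required because val_data_gen repeats
                                     # indefinitely (1000 images / 64 per batch)
    epochs = 5
)
batch_size is also dropped, since the generator already batches the data.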
I am using a Conv-6 CNN in TensorFlow 2.5 and Python 3. The objective is to selectively set certain weights within any trainable layer. The Conv-6 CNN model definition is as follows:
def conv6_cnn():
    """
    Function to define the architecture of a neural network model
    following the Conv-6 architecture for the CIFAR-10 dataset, whose
    parameters are used to prune the model.

    Conv-6 architecture:
    64, 64, pool    -- convolutional layers
    128, 128, pool  -- convolutional layers
    256, 256, pool  -- convolutional layers
    256, 256, 10    -- fully connected layers

    Output: Returns the designed neural network model.
    """
    model = Sequential()

    # First conv layer also fixes the input shape-
    model.add(
        Conv2D(
            filters = 64, kernel_size = (3, 3),
            activation = 'relu', kernel_initializer = tf.initializers.GlorotNormal(),
            strides = (1, 1), padding = 'same',
            input_shape = (32, 32, 3)
        )
    )

    # Remaining conv layers: one more 64-filter conv, then blocks of
    # 128 and 256 filters; each block ends in 2x2 max-pooling-
    blocks = [(1, 64), (2, 128), (2, 256)]
    for n_convs, filters in blocks:
        for _ in range(n_convs):
            model.add(
                Conv2D(
                    filters = filters, kernel_size = (3, 3),
                    activation = 'relu', kernel_initializer = tf.initializers.GlorotNormal(),
                    strides = (1, 1), padding = 'same'
                )
            )
        model.add(MaxPooling2D(pool_size = (2, 2), strides = (2, 2)))

    model.add(Flatten())
    model.add(
        Dense(
            units = 256, activation = 'relu',
            kernel_initializer = tf.initializers.GlorotNormal()
        )
    )
    model.add(
        Dense(
            units = 256, activation = 'relu',
            kernel_initializer = tf.initializers.GlorotNormal()
        )
    )
    model.add(Dense(units = 10, activation = 'softmax'))

    # (The model is compiled separately, before training.)
    return model
# Load trained model from before-
best_model = conv6_cnn()
best_model.load_weights("best_weights.h5")
I came across this GitHub answer about freezing certain weights during training. On its basis, I coded the following to freeze weights in the first and sixth conv layers:
conv1 = pruned_model.trainable_weights[0]

# Find all weights less than a threshold (0.1) and set them to zero-
conv1 = tf.where(conv1 < 0.1, 0, conv1)

# For all weights set to zero, stop training them-
conv1 = tf.where(conv1 == 0, tf.stop_gradient(conv1), conv1)

# Sanity check: number of parameters remaining non-zero-
tf.math.count_nonzero(conv1, axis = None).numpy()
# 133

# Original number of non-zero parameters-
tf.math.count_nonzero(best_model.trainable_weights[0], axis = None).numpy()
# 1728

# Assign conv layer 1 back to the pruned model-
pruned_model.trainable_weights[0].assign(conv1)

# Sanity check-
tf.math.count_nonzero(pruned_model.trainable_weights[0], axis = None).numpy()
# 133

# conv layer 6-
conv6 = pruned_model.trainable_weights[10]

# Find all weights less than a threshold (0.1) and set them to zero-
conv6 = tf.where(conv6 < 0.1, 0, conv6)

# For all weights set to zero, stop training them-
conv6 = tf.where(conv6 == 0, tf.stop_gradient(conv6), conv6)

# Sanity check: number of parameters remaining non-zero-
tf.math.count_nonzero(conv6, axis = None).numpy()
# 5369

# Original number of non-zero parameters-
tf.math.count_nonzero(best_model.trainable_weights[10], axis = None).numpy()
# 589824

# Assign conv layer 6 back to the pruned model-
pruned_model.trainable_weights[10].assign(conv6)

# Sanity check-
tf.math.count_nonzero(pruned_model.trainable_weights[10], axis = None).numpy()
# 5369
# Train the model for 10 epochs for testing:
# Compile the CNN-
pruned_model.compile(
    loss = tf.keras.losses.CategoricalCrossentropy(from_logits=False),
    optimizer = tf.keras.optimizers.Adam(learning_rate = 0.01),
    metrics = ['accuracy']
)

history = pruned_model.fit(
    x = X_train, y = y_train,
    epochs = 10, validation_data = (X_test, y_test)
)
However, after training, when I check the number of non-zero weights:

# First conv layer-
tf.math.count_nonzero(pruned_model.trainable_weights[0], axis = None).numpy()

# Sixth conv layer-
tf.math.count_nonzero(pruned_model.trainable_weights[10], axis = None).numpy()

the counts have grown back. They should have stayed at 133 and 5369, but they have not.
Help?
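For context (an assumption about the cause, since no answer is included here): tf.where() and tf.stop_gradient() build a new tensor from a copy of the weights, so assigning the result back only zeroes the values once; the optimizer still updates every entry of the underlying variable on the next step. A common workaround is to keep binary masks and re-apply them after every batch, e.g. with a hypothetical callback:
class MaskWeightsCallback(tf.keras.callbacks.Callback):
    """Re-applies pruning masks after each batch so pruned weights stay zero."""
    def __init__(self, masks):
        super().__init__()
        self.masks = masks  # dict: trainable-weight index -> binary mask

    def on_train_batch_end(self, batch, logs=None):
        for idx, mask in self.masks.items():
            w = self.model.trainable_weights[idx]
            w.assign(w * mask)

# Build the masks from the already-pruned weights (1.0 where weight != 0)-
masks = {
    idx: tf.cast(pruned_model.trainable_weights[idx] != 0, tf.float32)
    for idx in (0, 10)
}

history = pruned_model.fit(
    x = X_train, y = y_train,
    epochs = 10, validation_data = (X_test, y_test),
    callbacks = [MaskWeightsCallback(masks)]
)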
I am new to Keras and have been learning it for about 3 weeks now. I apologize if my question sounds a bit stupid.
I am currently doing semantic medical image segmentation on 512x512 images, using the UNet from this link: https://github.com/zhixuhao/unet . Basically, I want to segment a brain from an image (two-class segmentation: background vs. foreground).
I have made a few modifications to the network and am getting results I am happy with, but I think I can improve the segmentation by imposing more weight on the foreground, because the brain occupies far fewer pixels than the background. In some cases the brain does not appear in the image at all, especially in the bottom slices.
I don't know which part of the code in https://github.com/zhixuhao/unet I need to modify.
I would really appreciate it if anyone could help me with this. Thanks a lot in advance!
import numpy as np
import os
import skimage.io as io
import skimage.transform as trans
from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras import backend as keras
def unet(pretrained_weights=None, input_size=(256, 256, 1)):
    inputs = Input(input_size)

    # Contracting path-
    conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
    conv1 = BatchNormalization()(conv1)
    conv1 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv1)
    conv1 = BatchNormalization()(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)

    conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
    conv2 = BatchNormalization()(conv2)
    conv2 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
    conv2 = BatchNormalization()(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)

    conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
    conv3 = BatchNormalization()(conv3)
    conv3 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
    conv3 = BatchNormalization()(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)

    conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
    conv4 = BatchNormalization()(conv4)
    conv4 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
    conv4 = BatchNormalization()(conv4)
    drop4 = Dropout(0.5)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)

    # Bottleneck-
    conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
    conv5 = BatchNormalization()(conv5)
    conv5 = Conv2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
    conv5 = BatchNormalization()(conv5)
    drop5 = Dropout(0.5)(conv5)

    # Expanding path with skip connections-
    up6 = Conv2D(512, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
        UpSampling2D(size=(2, 2))(drop5))
    merge6 = concatenate([drop4, up6], axis=3)
    conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge6)
    conv6 = BatchNormalization()(conv6)
    conv6 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)
    conv6 = BatchNormalization()(conv6)

    up7 = Conv2D(256, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
        UpSampling2D(size=(2, 2))(conv6))
    merge7 = concatenate([conv3, up7], axis=3)
    conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
    conv7 = BatchNormalization()(conv7)
    conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv7)
    conv7 = BatchNormalization()(conv7)

    up8 = Conv2D(128, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
        UpSampling2D(size=(2, 2))(conv7))
    merge8 = concatenate([conv2, up8], axis=3)
    conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
    conv8 = BatchNormalization()(conv8)
    conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv8)
    conv8 = BatchNormalization()(conv8)

    up9 = Conv2D(64, 2, activation='relu', padding='same', kernel_initializer='he_normal')(
        UpSampling2D(size=(2, 2))(conv8))
    merge9 = concatenate([conv1, up9], axis=3)
    conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
    conv9 = BatchNormalization()(conv9)
    conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv9 = BatchNormalization()(conv9)
    conv9 = Conv2D(2, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv9 = BatchNormalization()(conv9)

    # 1x1 conv produces the per-pixel foreground probability-
    conv10 = Conv2D(1, 1, activation='sigmoid')(conv9)

    model = Model(inputs=inputs, outputs=conv10)
    model.compile(optimizer=Adam(lr=1e-4), loss='binary_crossentropy', metrics=['accuracy'])
    # model.summary()

    if pretrained_weights:
        model.load_weights(pretrained_weights)

    return model
Here's the main.py:
from model2 import *
from data2 import *
from keras.models import load_model

class_weight = {0: 0.10, 1: 0.90}  # note: currently defined but never passed to training

myGene = trainGenerator(2, 'data/brainTIF/trainNew', 'image', 'label', save_to_dir = None)

model = unet()
model_checkpoint = ModelCheckpoint('unet_brainTest_e10_s5.hdf5', monitor='loss')
model.fit_generator(myGene, steps_per_epoch = 5, epochs = 10, callbacks = [model_checkpoint])

testGene = testGenerator("data/brainTIF/test3")
results = model.predict_generator(testGene, 18, verbose = 1)
saveResult("data/brainTIF/test_results3", results)
As an option besides class_weight for binary classes, you can also handle the imbalance with the Synthetic Minority Over-sampling Technique (SMOTE), which increases the size of the minority group:
from imblearn.over_sampling import SMOTE

sm = SMOTE()
x, y = sm.fit_resample(X_train, Y_train)  # fit_resample replaces the older fit_sample API
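Alternatively (an assumption on my part, not part of the SMOTE suggestion): for pixel-wise segmentation, Keras's class_weight dict is not applied per pixel, so a common approach is a weighted binary cross-entropy loss passed to model.compile() inside unet(), for example:
from keras import backend as K

def weighted_binary_crossentropy(w_background=0.10, w_foreground=0.90):
    # Binary cross-entropy that up-weights the (rare) foreground pixels-
    def loss(y_true, y_pred):
        bce = K.binary_crossentropy(y_true, y_pred)
        weights = y_true * w_foreground + (1.0 - y_true) * w_background
        return K.mean(weights * bce)
    return loss

# In unet(), the compile line would then become (a sketch):
model.compile(optimizer=Adam(lr=1e-4),
              loss=weighted_binary_crossentropy(),
              metrics=['accuracy'])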
I would like to train a deep learning model where the input image shape is (224, 224, 3), feeding the images into a U-Net model.
When training, I get the error: Error when checking target: expected conv2d_29 to have 4 dimensions, but got array with shape (1255, 12)
I'm confused, since I'm sure the image array and labels have no issue. Is the issue within the model? How should I resolve this?
The model is as below:
#def unet(pretrained_weights = None, input_size = (224,224,3)):
concat_axis = 3
input_size= Input((224,224,3))
conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(input_size)
conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
#flat1 = Flatten()(pool1)
conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4)
drop4 = Dropout(0.5)(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)
drop5 = Dropout(0.5)(conv5)
up_conv5 = UpSampling2D(size=(2, 2), data_format="channels_last")(conv5)
ch, cw = get_crop_shape(conv4, up_conv5)
crop_conv4 = Cropping2D(cropping=(ch,cw), data_format="channels_last")(conv4)
up6 = concatenate([up_conv5, crop_conv4], axis=concat_axis)
conv6 = Conv2D(256, (3, 3), padding="same", activation="relu", kernel_initializer = 'he_normal')(up6)
conv6 = Conv2D(256, (3, 3), padding="same", activation="relu", kernel_initializer = 'he_normal')(conv6)
up_conv6 = UpSampling2D(size=(2, 2), data_format="channels_last")(conv6)
ch, cw = get_crop_shape(conv3, up_conv6)
crop_conv3 = Cropping2D(cropping=(ch,cw), data_format="channels_last")(conv3)
up7 = concatenate([up_conv6, crop_conv3], axis=concat_axis)
conv7 = Conv2D(128, (3, 3), padding="same", activation="relu", kernel_initializer = 'he_normal')(up7)
conv7 = Conv2D(128, (3, 3), padding="same", activation="relu", kernel_initializer = 'he_normal')(conv7)
up_conv7 = UpSampling2D(size=(2, 2), data_format="channels_last")(conv7)
ch, cw = get_crop_shape(conv2, up_conv7)
crop_conv2 = Cropping2D(cropping=(ch,cw), data_format="channels_last")(conv2)
up8 = concatenate([up_conv7, crop_conv2], axis=concat_axis)
conv8 = Conv2D(64, (3, 3), padding="same", activation="relu", kernel_initializer = 'he_normal')(up8)
conv8 = Conv2D(64, (3, 3), padding="same", activation="relu", kernel_initializer = 'he_normal')(conv8)
up_conv8 = UpSampling2D(size=(2, 2), data_format="channels_last")(conv8)
ch, cw = get_crop_shape(conv1, up_conv8)
crop_conv1 = Cropping2D(cropping=(ch,cw), data_format="channels_last")(conv1)
up9 = concatenate([up_conv8, crop_conv1], axis=concat_axis)
conv9 = Conv2D(32, (3, 3), padding="same", activation="relu", kernel_initializer = 'he_normal')(up9)
conv9 = Conv2D(32, (3, 3), padding="same", activation="relu", kernel_initializer = 'he_normal')(conv9)
model = Model(inputs = input_size, outputs = conv9)
Since the model's output layer is a conv layer, its output has 4 dimensions: (batch_size, height, width, channels). But you are feeding it a target array of shape (1255, 12). If the target labels have shape (batch_size, num_features), then the last layer's output should have shape (None, 12), i.e. (batch_size, 12).
You have two options to deal with this situation:
Using a dense layer after flattening the output of the conv layer (option 1)
Reshaping the output of the conv layer to the desired shape (option 2)
The choice depends on the problem you are dealing with. If the problem is classification, option 1 can be used to add a softmax activation. With option 1, the modification to the code would be:
conv9 = Conv2D(32, (3, 3), padding="same", activation="relu", kernel_initializer = 'he_normal')(conv9)
flatten1 = Flatten()(conv9)
dense1 = Dense(12, activation="softmax")(flatten1) # The choice of the activation depends on the problem you are dealing with.
model = Model(inputs = input_size, outputs = dense1)
With option 2, the modification would be:
conv9 = Conv2D(32, (3, 3), padding="same", activation="relu", kernel_initializer = 'he_normal')(conv9)
reshape1 = Reshape((12,))(conv9)
model = Model(inputs = input_size, outputs = reshape1)
N.B.: when the Reshape layer is used to reshape a tensor to shape (None, 12), make sure the product of the previous layer's non-batch output dimensions is exactly 12.
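As a quick sanity check (a hypothetical snippet, assuming the functional-API tensors above), that constraint can be verified before adding the Reshape layer:
import numpy as np

# conv9 has shape (None, H, W, C); Reshape((12,)) only works if H*W*C == 12
n_features = np.prod(conv9.shape[1:])
assert n_features == 12, f"cannot reshape {n_features} values to (None, 12)"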