Unable to replicate Keras model in PyTorch

I am currently working on a DL project where I would like to reproduce the work from an existing research paper. It is about detecting sleep episodes from EEG data using a CNN model. The source code uses Keras, and I am having a hard time reproducing the model in PyTorch.
The existing Keras model runs very fast, and after 2 epochs it has the following stats (this is a 4-class classification problem):
Training loss = 0.431
Training accuracy = 0.842
On validation set:
Kappa score per class = [0.373 0.5571 0.033 0.129]
Overall kappa = 0.319
accuracy = 0.687
precision = [0.993 0.514 0.029 0.115 ]
recall = [0.696 0.719 0.180 0.573]
f1 = [0.818 0.600 0.05 0.192]
I created the PyTorch version of the model and ran it on the same data. It runs much slower, and after 6 epochs this is what I got:
Training loss = 0.379899
Training accuracy = 0.869
On validation set:
Kappa score per class = [ 0.111 0.079 -0.016 0.040]
Overall kappa = 0.078
accuracy = 0.559
precision = [0.682 0.432 0.012 0.105]
recall = [0.797 0.106 0.035 0.155 ]
f1 = [0.735 0.170 0.018 0.125]
The only differences between these two models are in the initialization, optimizers, and loss function:
The Keras model uses the Nadam optimizer; the PyTorch model uses SGD (I am not aware that Nadam is available in PyTorch).
The Keras model uses glorot_normal for kernel initialization; the PyTorch model uses xavier_uniform.
The Keras model uses categorical cross-entropy loss with softmax as the last output layer; the PyTorch model uses CrossEntropyLoss. I do not use softmax in the last layer since CrossEntropyLoss internally applies LogSoftmax. (Closer-matching PyTorch equivalents for all three are sketched below.)
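For what it's worth, closer PyTorch counterparts do exist for all three differences. A minimal sketch, assuming PyTorch >= 1.10 (where torch.optim.NAdam was added) and with model standing in for the PyTorch module defined further down:

import torch
import torch.nn as nn

# Nadam: torch.optim.NAdam; lr=2e-3 matches the Keras Nadam default.
optimizer = torch.optim.NAdam(model.parameters(), lr=2e-3)

# glorot_normal is the *normal* Glorot/Xavier variant, i.e. xavier_normal_,
# not xavier_uniform_.
for m in model.modules():
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.xavier_normal_(m.weight)
        nn.init.zeros_(m.bias)

# Categorical cross-entropy over softmax outputs corresponds to
# CrossEntropyLoss on raw logits (it applies LogSoftmax internally).
criterion = nn.CrossEntropyLoss()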
I have been spending a lot of time trying to understand why my PyTorch model performs much worse, to no avail. I would appreciate it if gurus and experts could help advise here.
Here is the Keras model:
def build_model(data_dim, n_channels, n_cl):
    eeg_channels = 1
    act_conv = 'relu'
    init_conv = 'glorot_normal'
    dp_conv = 0.3

    def cnn_block(input_shape):
        input = Input(shape=input_shape)
        x = GaussianNoise(0.0005)(input)
        x = Conv2D(32, (3, 1), strides=(1, 1), padding='same', kernel_initializer=init_conv)(x)
        x = BatchNormalization()(x)
        x = Activation(act_conv)(x)
        x = MaxPooling2D(pool_size=(2, 1), padding='same')(x)
        x = Conv2D(64, (3, 1), strides=(1, 1), padding='same', kernel_initializer=init_conv)(x)
        x = BatchNormalization()(x)
        x = Activation(act_conv)(x)
        x = MaxPooling2D(pool_size=(2, 1), padding='same')(x)
        for i in range(4):
            x = Conv2D(128, (3, 1), strides=(1, 1), padding='same', kernel_initializer=init_conv)(x)
            x = BatchNormalization()(x)
            x = Activation(act_conv)(x)
            x = MaxPooling2D(pool_size=(2, 1), padding='same')(x)
        for i in range(6):
            x = Conv2D(256, (3, 1), strides=(1, 1), padding='same', kernel_initializer=init_conv)(x)
            x = BatchNormalization()(x)
            x = Activation(act_conv)(x)
            x = MaxPooling2D(pool_size=(2, 1), padding='same')(x)
        flatten1 = Flatten()(x)
        cnn_eeg = Model(inputs=input, outputs=flatten1)
        return cnn_eeg

    hidden_units1 = 256
    dp_dense = 0.5
    eeg_channels = 1
    eog_channels = 2
    input_eeg = Input(shape=(data_dim, 1, 3))
    cnn_eeg = cnn_block((data_dim, 1, 3))
    x_eeg = cnn_eeg(input_eeg)
    x = BatchNormalization()(x_eeg)
    x = Dropout(dp_dense)(x)
    x = Dense(units=hidden_units1, activation=act_conv, kernel_initializer=init_conv)(x)
    x = BatchNormalization()(x)
    x = Dropout(dp_dense)(x)
    predictions = Dense(units=n_cl, activation='softmax', kernel_initializer=init_conv)(x)
    model = Model(inputs=[input_eeg], outputs=[predictions])
    return [cnn_eeg, model]
The model is used as follows:
[cnn_eeg, model] = build_model(data_dim, n_channels, n_cl)
Nadam = optimizers.Nadam()
model.compile(optimizer='Nadam', loss='categorical_crossentropy', metrics=['accuracy'], sample_weight_mode=None)
print(cnn_eeg.summary())
print(model.summary())
model.fit_generator(generator_train, steps_per_epoch = steps_per_epoch, class_weight = weight, epochs = 1, verbose=1, callbacks=[history], initial_epoch=0 )
Printout of the model:
Layer (type) Output Shape Param #
input_2 (InputLayer) [(None, 3200, 1, 3)] 0
gaussian_noise (GaussianNois (None, 3200, 1, 3) 0
conv2d (Conv2D) (None, 3200, 1, 32) 320
batch_normalization (BatchNo (None, 3200, 1, 32) 128
activation (Activation) (None, 3200, 1, 32) 0
max_pooling2d (MaxPooling2D) (None, 1600, 1, 32) 0
conv2d_1 (Conv2D) (None, 1600, 1, 64) 6208
batch_normalization_1 (Batch (None, 1600, 1, 64) 256
activation_1 (Activation) (None, 1600, 1, 64) 0
max_pooling2d_1 (MaxPooling2 (None, 800, 1, 64) 0
conv2d_2 (Conv2D) (None, 800, 1, 128) 24704
batch_normalization_2 (Batch (None, 800, 1, 128) 512
activation_2 (Activation) (None, 800, 1, 128) 0
max_pooling2d_2 (MaxPooling2 (None, 400, 1, 128) 0
conv2d_3 (Conv2D) (None, 400, 1, 128) 49280
batch_normalization_3 (Batch (None, 400, 1, 128) 512
activation_3 (Activation) (None, 400, 1, 128) 0
max_pooling2d_3 (MaxPooling2 (None, 200, 1, 128) 0
conv2d_4 (Conv2D) (None, 200, 1, 128) 49280
batch_normalization_4 (Batch (None, 200, 1, 128) 512
activation_4 (Activation) (None, 200, 1, 128) 0
max_pooling2d_4 (MaxPooling2 (None, 100, 1, 128) 0
conv2d_5 (Conv2D) (None, 100, 1, 128) 49280
batch_normalization_5 (Batch (None, 100, 1, 128) 512
activation_5 (Activation) (None, 100, 1, 128) 0
max_pooling2d_5 (MaxPooling2 (None, 50, 1, 128) 0
conv2d_6 (Conv2D) (None, 50, 1, 256) 98560
batch_normalization_6 (Batch (None, 50, 1, 256) 1024
activation_6 (Activation) (None, 50, 1, 256) 0
max_pooling2d_6 (MaxPooling2 (None, 25, 1, 256) 0
conv2d_7 (Conv2D) (None, 25, 1, 256) 196864
batch_normalization_7 (Batch (None, 25, 1, 256) 1024
activation_7 (Activation) (None, 25, 1, 256) 0
max_pooling2d_7 (MaxPooling2 (None, 13, 1, 256) 0
conv2d_8 (Conv2D) (None, 13, 1, 256) 196864
batch_normalization_8 (Batch (None, 13, 1, 256) 1024
activation_8 (Activation) (None, 13, 1, 256) 0
max_pooling2d_8 (MaxPooling2 (None, 7, 1, 256) 0
conv2d_9 (Conv2D) (None, 7, 1, 256) 196864
batch_normalization_9 (Batch (None, 7, 1, 256) 1024
activation_9 (Activation) (None, 7, 1, 256) 0
max_pooling2d_9 (MaxPooling2 (None, 4, 1, 256) 0
conv2d_10 (Conv2D) (None, 4, 1, 256) 196864
batch_normalization_10 (Batc (None, 4, 1, 256) 1024
activation_10 (Activation) (None, 4, 1, 256) 0
max_pooling2d_10 (MaxPooling (None, 2, 1, 256) 0
conv2d_11 (Conv2D) (None, 2, 1, 256) 196864
batch_normalization_11 (Batc (None, 2, 1, 256) 1024
activation_11 (Activation) (None, 2, 1, 256) 0
max_pooling2d_11 (MaxPooling (None, 1, 1, 256) 0
flatten (Flatten) (None, 256) 0
Total params: 1,270,528
Trainable params: 1,266,240
Non-trainable params: 4,288
None
Model: "model_1"
Layer (type) Output Shape Param #
input_1 (InputLayer) [(None, 3200, 1, 3)] 0
model (Functional) (None, 256) 1270528
batch_normalization_12 (Batc (None, 256) 1024
dropout (Dropout) (None, 256) 0
dense (Dense) (None, 256) 65792
batch_normalization_13 (Batc (None, 256) 1024
dropout_1 (Dropout) (None, 256) 0
dense_1 (Dense) (None, 4) 1028
Total params: 1,339,396
Trainable params: 1,334,084
Non-trainable params: 5,312
Below is my PyTorch model:
class MSECNN16s(nn.Module):
    def __init__(self):
        # input shape = (batchsize, 3, 1, windowsize=16*200=3200)
        super(MSECNN16s, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=(1, 3), padding=(0, 1))
        self.conv2 = nn.Conv2d(32, 64, kernel_size=(1, 3), padding=(0, 1))
        self.conv3 = nn.Conv2d(64, 128, kernel_size=(1, 3), padding=(0, 1))
        self.conv4 = nn.ModuleList()
        for x in range(3):
            conv = nn.Conv2d(128, 128, kernel_size=(1, 3), padding=(0, 1))
            nn.init.xavier_uniform_(conv.weight)
            nn.init.zeros_(conv.bias)
            self.conv4.append(conv)
        self.conv5 = nn.Conv2d(128, 256, kernel_size=(1, 3), padding=(0, 1))
        self.conv6 = nn.ModuleList()
        for x in range(5):
            conv = nn.Conv2d(256, 256, kernel_size=(1, 3), padding=(0, 1))
            nn.init.xavier_uniform_(conv.weight)
            nn.init.zeros_(conv.bias)
            self.conv6.append(conv)
        self.fc1 = nn.Linear(256, 256)
        self.fc2 = nn.Linear(256, 4)
        nn.init.xavier_uniform_(self.conv1.weight)
        nn.init.zeros_(self.conv1.bias)
        nn.init.xavier_uniform_(self.conv2.weight)
        nn.init.zeros_(self.conv2.bias)
        nn.init.xavier_uniform_(self.conv3.weight)
        nn.init.zeros_(self.conv3.bias)
        nn.init.xavier_uniform_(self.conv5.weight)
        nn.init.zeros_(self.conv5.bias)
        nn.init.xavier_uniform_(self.fc1.weight)
        nn.init.zeros_(self.fc1.bias)
        nn.init.xavier_uniform_(self.fc2.weight)
        nn.init.zeros_(self.fc2.bias)

    def forward(self, x):
        # x = (batchsize, 1, 3, windowsize=16*200)
        device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        x = x.permute(0, 2, 1, 3)  # convert to (batchsize, 3, 1, 3200)
        std = 0.0005
        x = (torch.randn_like(x) * std) + x
        x = self.conv1(x)  # output: (batchsize, 32, 1, 3200)
        x = nn.BatchNorm2d(32).to(device)(x)
        x = F.relu(x)
        x = nn.MaxPool2d(kernel_size=(1, 2))(x)  # output: (batchsize, 32, 1, 1600)
        x = self.conv2(x)  # output: (batchsize, 64, 1, 1600)
        x = nn.BatchNorm2d(64).to(device)(x)
        x = F.relu(x)
        x = nn.MaxPool2d(kernel_size=(1, 2))(x)  # output: (batchsize, 64, 1, 800)
        x = self.conv3(x)  # output: (batchsize, 128, 1, 800)
        x = nn.BatchNorm2d(128).to(device)(x)
        x = F.relu(x)
        x = nn.MaxPool2d(kernel_size=(1, 2))(x)  # output: (batchsize, 128, 1, 400)
        for conv in self.conv4:
            x = conv(x)
            x = nn.BatchNorm2d(128).to(device)(x)
            x = F.relu(x)
            padding = (0, 0)
            if x.shape[-1] % 2 > 0:
                padding = (0, 1)
            x = nn.MaxPool2d(kernel_size=(1, 2), padding=padding)(x)  # output width: 400 -> 200 -> 100 -> 50
        x = self.conv5(x)  # output: (batchsize, 256, 1, 50)
        x = nn.BatchNorm2d(256).to(device)(x)
        x = F.relu(x)
        x = nn.MaxPool2d(kernel_size=(1, 2))(x)  # output: (batchsize, 256, 1, 25)
        for conv in self.conv6:
            x = conv(x)
            x = nn.BatchNorm2d(256).to(device)(x)
            x = F.relu(x)
            padding = (0, 0)
            if x.shape[-1] % 2 > 0:
                padding = (0, 1)
            x = nn.MaxPool2d(kernel_size=(1, 2), padding=padding)(x)  # output width: 25 -> 13 -> 7 -> 4 -> 2 -> 1
        # x is (batchsize, 256, 1, 1)
        x = x.squeeze()  # x is (batchsize, 256)
        x = nn.BatchNorm1d(256).to(device)(x)
        x = nn.Dropout(p=0.5)(x)
        x = self.fc1(x)
        x = F.relu(x)
        x = nn.BatchNorm1d(256).to(device)(x)
        x = nn.Dropout(p=0.5)(x)
        x = self.fc2(x)
        return x
with the following optimizer and criterion:
model = MSECNN16s()
model.to(device)
criterion = nn.CrossEntropyLoss(weight=torch.Tensor([1.0, 11.1, 102.9, 38.1]).to(device))
# load the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0001 )
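One structural difference worth flagging alongside the training setup: in the forward pass above, every nn.BatchNorm2d, nn.BatchNorm1d, and nn.Dropout is constructed fresh on each call, so their affine parameters are re-initialized every batch, no running statistics accumulate, model.eval() has no effect on them, and they are invisible to model.parameters(). A minimal sketch of the usual pattern for comparison, with layers registered once in __init__ (the Block name is illustrative):

import torch.nn as nn
import torch.nn.functional as F

class Block(nn.Module):
    """One conv -> BatchNorm -> ReLU -> pool stage with persistent layers."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=(1, 3), padding=(0, 1))
        self.bn = nn.BatchNorm2d(out_ch)  # created once: running stats persist
        self.pool = nn.MaxPool2d(kernel_size=(1, 2))

    def forward(self, x):
        return self.pool(F.relu(self.bn(self.conv(x))))

Registered this way, the BatchNorm scale/shift are actually trained by the optimizer and the running mean/variance carry over between batches, matching what Keras' BatchNormalization layers do.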
Any advice is appreciated. Many thanks!
Regards
Edwin

Related

AutoEncoder Conv1D layer changes the output shape and causes ValueError: Dimensions must be equal

I'm trying to build an AutoEncoder with the following configuration:
x = Input(shape=(36,1))
# Encoder
conv1_1 = Conv1D(16, 3, activation='relu', padding='same')(x)
pool1 = MaxPooling1D(2)(conv1_1)
conv1_2 = Conv1D(8, 3, activation='relu', padding='same')(pool1)
pool2 = MaxPooling1D(2)(conv1_2)
conv1_3 = Conv1D(8, 3, activation='relu', padding='same')(pool2)
h = MaxPooling1D(3)(conv1_3)
# Decoder
conv2_1 = Conv1D(8,3, activation='relu', padding='same')(h)
up1 = UpSampling1D(3)(conv2_1)
conv2_2 = Conv1D(8,3, activation='relu', padding='same')(up1)
up2 = UpSampling1D(2)(conv2_2)
conv2_3 = Conv1D(16,3, activation='relu')(up2)
up3 = UpSampling1D(2)(conv2_3)
r = Conv1D(1,3, activation='sigmoid', padding='same')(up3)
The summary is:
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer)            [(None, 36, 1)]   0
conv1d (Conv1D)                 (None, 36, 16)    64
max_pooling1d (MaxPooling1D)    (None, 18, 16)    0
conv1d_1 (Conv1D)               (None, 18, 8)     392
max_pooling1d_1 (MaxPooling1D)  (None, 9, 8)      0
conv1d_2 (Conv1D)               (None, 9, 8)      200
max_pooling1d_2 (MaxPooling1D)  (None, 3, 8)      0
conv1d_3 (Conv1D)               (None, 3, 8)      200
up_sampling1d (UpSampling1D)    (None, 9, 8)      0
conv1d_4 (Conv1D)               (None, 9, 8)      200
up_sampling1d_1 (UpSampling1D)  (None, 18, 8)     0
conv1d_5 (Conv1D)               (None, 16, 16)    400   <-------
up_sampling1d_2 (UpSampling1D)  (None, 32, 16)    0
conv1d_6 (Conv1D)               (None, 32, 1)     49
As you can see, I put the arrow where the output shape changes, and I cannot understand why; it causes the output to have a different shape from the input.
What can I do? Is there a way to work out the best parameters given the input shape?
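Looking at the code, conv2_3 (conv1d_5 in the summary) is the one Conv1D built without padding='same', so its kernel of size 3 trims the length from 18 to 16, and UpSampling1D(2) then gives 32 instead of 36. A hedged sketch of the decoder tail with the padding restored, which keeps the round trip 18 -> 18 -> 36:

conv2_3 = Conv1D(16, 3, activation='relu', padding='same')(up2)  # (None, 18, 16)
up3 = UpSampling1D(2)(conv2_3)                                   # (None, 36, 16)
r = Conv1D(1, 3, activation='sigmoid', padding='same')(up3)      # (None, 36, 1)

As a general rule for sizing such stacks: with padding='same' a Conv1D preserves length, MaxPooling1D(k) divides it by k, and UpSampling1D(k) multiplies it by k, so the pooling factors (here 2, 2, 3) must be mirrored exactly by the upsampling factors for the output shape to match the input.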

How to plot the model output?

My 'z_sample' should be of shape (1, 10) according to the model, but it takes shape (1, 2) and throws the following error:
ValueError: Input 0 of layer "dense_1" is incompatible with the layer: expected axis -1 of input shape to have value 10, but received input with shape (1, 2)
As far as I understand, the error lies in this snippet:
for i, yi in enumerate(grid_y):
    for j, xi in enumerate(grid_x):
        z_sample = np.array([[xi, yi]])
        x_decoded = vae_decoder(z_sample)
How can I write it down correctly to plot the model output?
Here is the full plot function:
def plot_latent_space(n=30, figsize=15):
    digit_size = 28
    scale = 1.5
    figure = np.zeros((digit_size * n, digit_size * n))
    grid_x = np.linspace(-scale, scale, n)
    grid_y = np.linspace(-scale, scale, n)[::-1]
    for i, yi in enumerate(grid_y):
        for j, xi in enumerate(grid_x):
            z_sample = np.array([[xi, yi]])
            x_decoded = vae_decoder(z_sample)
            digit = tf.reshape(x_decoded[0], shape=(digit_size, digit_size))
            figure[
                i * digit_size : (i + 1) * digit_size,
                j * digit_size : (j + 1) * digit_size,
            ] = digit
    plt.figure(figsize=(figsize, figsize))
    start_range = digit_size // 2
    end_range = n * digit_size + start_range
    pixel_range = np.arange(start_range, end_range, digit_size)
    sample_range_x = np.round(grid_x, 1)
    sample_range_y = np.round(grid_y, 1)
    plt.xticks(pixel_range, sample_range_x)
    plt.yticks(pixel_range, sample_range_y)
    plt.xlabel("z[0]")
    plt.ylabel("z[1]")
    plt.imshow(figure, cmap="Greys_r")
    plt.show()
The model output is as follows:
Model: "encoder"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_4 (InputLayer) [(None, 28, 28, 1)] 0 []
conv2d_6 (Conv2D) multiple 156 ['input_4[0][0]']
max_pooling2d_6 (MaxPooling2D) (None, 14, 14, 6) 0 ['conv2d_6[1][0]']
conv2d_7 (Conv2D) (None, 10, 10, 16) 2416 ['max_pooling2d_6[0][0]']
max_pooling2d_7 (MaxPooling2D) (None, 5, 5, 16) 0 ['conv2d_7[0][0]']
flatten_3 (Flatten) (None, 400) 0 ['max_pooling2d_7[0][0]']
dense_6 (Dense) (None, 20) 8020 ['flatten_3[0][0]']
tf.split_2 (TFOpLambda) [(None, 10), 0 ['dense_6[0][0]']
(None, 10)]
tf.math.multiply_4 (TFOpLambda (None, 10) 0 ['tf.split_2[0][1]']
)
tf.compat.v1.shape_2 (TFOpLamb (2,) 0 ['tf.split_2[0][0]']
da)
tf.math.exp_2 (TFOpLambda) (None, 10) 0 ['tf.math.multiply_4[0][0]']
tf.random.normal_2 (TFOpLambda (None, 10) 0 ['tf.compat.v1.shape_2[0][0]']
)
tf.math.multiply_5 (TFOpLambda (None, 10) 0 ['tf.math.exp_2[0][0]',
) 'tf.random.normal_2[0][0]']
tf.__operators__.add_2 (TFOpLa (None, 10) 0 ['tf.split_2[0][0]',
mbda) 'tf.math.multiply_5[0][0]']
dense_7 (Dense) (None, 400) 4400 ['tf.__operators__.add_2[0][0]']
reshape_3 (Reshape) (None, 5, 5, 16) 0 ['dense_7[0][0]']
up_sampling2d_6 (UpSampling2D) multiple 0 ['reshape_3[0][0]',
'conv2d_transpose_8[0][0]']
conv2d_transpose_8 (Conv2DTran (None, 14, 14, 16) 6416 ['up_sampling2d_6[0][0]']
spose)
conv2d_transpose_9 (Conv2DTran (None, 28, 28, 6) 2406 ['up_sampling2d_6[1][0]']
spose)
conv2d_transpose_10 (Conv2DTra (None, 28, 28, 1) 55 ['conv2d_transpose_9[0][0]']
nspose)
==================================================================================================
Total params: 23,869
Trainable params: 23,869
Non-trainable params: 0
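From the summary, the first decoder layer expects a 10-dimensional latent, while the grid loop feeds a 2-D point. One hedged workaround for visualization only, assuming it is acceptable to hold the remaining latent dimensions at zero (latent_dim = 10 is read off the error message):

latent_dim = 10  # the dense layer expects axis -1 to have value 10

for i, yi in enumerate(grid_y):
    for j, xi in enumerate(grid_x):
        z_sample = np.zeros((1, latent_dim))
        z_sample[0, 0] = xi  # sweep only the first two latent dimensions
        z_sample[0, 1] = yi
        x_decoded = vae_decoder(z_sample)

The canonical Keras plot_latent_space example assumes a 2-D latent space; with 10 latent dimensions the grid can only slice through two of them, so retraining the VAE with latent_dim = 2 is the cleaner fix if the full latent-space plot matters.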

Convert a simple cnn from keras to pytorch

Can anyone please help me convert this model to PyTorch? I already tried converting from Keras to PyTorch following "How can I convert this keras cnn model to pytorch version", but the training results were different. Thank you.
input_3d = (1, 64, 96, 96)
pool_3d = (2, 2, 2)
model = Sequential()
model.add(Convolution3D(8, 3, 3, 3, name='conv1', input_shape=input_3d,
                        data_format='channels_first'))
model.add(MaxPooling3D(pool_size=pool_3d, name='pool1'))
model.add(Convolution3D(8, 3, 3, 3, name='conv2',data_format='channels_first'))
model.add(MaxPooling3D(pool_size=pool_3d, name='pool2'))
model.add(Convolution3D(8, 3, 3, 3, name='conv3',data_format='channels_first'))
model.add(MaxPooling3D(pool_size=pool_3d, name='pool3'))
model.add(Flatten())
model.add(Dense(2000, activation='relu', name='dense1'))
model.add(Dropout(0.5, name='dropout1'))
model.add(Dense(500, activation='relu', name='dense2'))
model.add(Dropout(0.5, name='dropout2'))
model.add(Dense(3, activation='softmax', name='softmax'))
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1 (Conv3D) (None, 8, 60, 94, 94) 224
_________________________________________________________________
pool1 (MaxPooling3D) (None, 8, 30, 47, 47) 0
_________________________________________________________________
conv2 (Conv3D) (None, 8, 28, 45, 45) 1736
_________________________________________________________________
pool2 (MaxPooling3D) (None, 8, 14, 22, 22) 0
_________________________________________________________________
conv3 (Conv3D) (None, 8, 12, 20, 20) 1736
_________________________________________________________________
pool3 (MaxPooling3D) (None, 8, 6, 10, 10) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4800) 0
_________________________________________________________________
dense1 (Dense) (None, 2000) 9602000
_________________________________________________________________
dropout1 (Dropout) (None, 2000) 0
_________________________________________________________________
dense2 (Dense) (None, 500) 1000500
_________________________________________________________________
dropout2 (Dropout) (None, 500) 0
_________________________________________________________________
softmax (Dense) (None, 3) 1503
=================================================================
Your PyTorch equivalent of the Keras model would look like this:
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.maxpool = nn.MaxPool3d((2, 2, 2))
        self.conv1 = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3)
        self.conv2 = nn.Conv3d(in_channels=8, out_channels=8, kernel_size=3)
        self.conv3 = nn.Conv3d(in_channels=8, out_channels=8, kernel_size=3)
        self.linear1 = nn.Linear(4800, 2000)
        self.dropout1 = nn.Dropout3d(0.5)
        self.linear2 = nn.Linear(2000, 500)
        self.dropout2 = nn.Dropout3d(0.5)
        self.linear3 = nn.Linear(500, 3)

    def forward(self, x):
        out = self.maxpool(self.conv1(x))
        out = self.maxpool(self.conv2(out))
        out = self.maxpool(self.conv3(out))
        # Flattening process
        b, c, d, h, w = out.size()  # batch_size, channels, depth, height, width
        out = out.view(-1, c * d * h * w)
        out = self.dropout1(self.linear1(out))
        out = self.dropout2(self.linear2(out))
        out = self.linear3(out)
        out = torch.softmax(out, 1)
        return out
A driver program to test the model:
inputs = torch.randn(8, 1, 64, 96, 96)
model = CNN()
outputs = model(inputs)
print(outputs.shape) # torch.Size([8, 3])
You can save the Keras weights and reload them in PyTorch.
The steps are:
Step 0: Train a Model in Keras. ...
Step 1: Recreate & Initialize Your Model Architecture in PyTorch. ...
Step 2: Import Your Keras Model and Copy the Weights. ...
Step 3: Load Those Weights onto Your PyTorch Model. ...
Step 4: Test and Save Your Pytorch Model.
You can follow the example here: https://gereshes.com/2019/06/24/how-to-transfer-a-simple-keras-model-to-pytorch-the-hard-way/
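A minimal sketch of steps 2-3 for one conv and one dense layer, assuming the Keras model above was saved to 'model.h5' (a hypothetical path) and pt_model is an instance of the CNN class from the previous answer; the permutation reorders Keras' (D, H, W, in, out) Conv3D kernels into PyTorch's (out, in, D, H, W) layout:

import torch
from tensorflow import keras

keras_model = keras.models.load_model('model.h5')  # hypothetical path
pt_model = CNN()

# Conv3D kernels: Keras (D, H, W, in, out) -> PyTorch (out, in, D, H, W)
w, b = keras_model.get_layer('conv1').get_weights()
pt_model.conv1.weight.data = torch.from_numpy(w).permute(4, 3, 0, 1, 2).contiguous()
pt_model.conv1.bias.data = torch.from_numpy(b)

# Dense kernels: Keras (in, out) -> PyTorch nn.Linear (out, in)
w, b = keras_model.get_layer('dense1').get_weights()
pt_model.linear1.weight.data = torch.from_numpy(w).T.contiguous()
pt_model.linear1.bias.data = torch.from_numpy(b)

Repeating this for conv2/conv3 and the remaining dense layers, then comparing the two models' outputs on one fixed input, is a quick way to confirm the conversions really match.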

Unet: Multi Class Image Segmentation

I have recently started learning about Image Segmentation and UNet. I am trying to do multi-class image segmentation where I have 7 classes; the input is a (256, 256, 3) RGB image and the output is a (256, 256, 1) grayscale image where each intensity value corresponds to one class. I am doing pixel-wise softmax. I am using sparse categorical cross-entropy so as to avoid one-hot encoding.
def soft1(x):
    return keras.activations.softmax(x, axis=-1)

def conv2d_block(input_tensor, n_filters, kernel_size=3, batchnorm=True):
    x = Conv2D(filters=n_filters, kernel_size=(kernel_size, kernel_size),
               kernel_initializer='he_normal', padding='same')(input_tensor)
    if batchnorm:
        x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = Conv2D(filters=n_filters, kernel_size=(kernel_size, kernel_size),
               kernel_initializer='he_normal', padding='same')(input_tensor)
    if batchnorm:
        x = BatchNormalization()(x)
    x = Activation('relu')(x)
    return x

def get_unet(input_img, n_classes, n_filters=16, dropout=0.1, batchnorm=True):
    # Contracting Path
    c1 = conv2d_block(input_img, n_filters * 1, kernel_size=3, batchnorm=batchnorm)
    p1 = MaxPooling2D((2, 2))(c1)
    p1 = Dropout(dropout)(p1)
    c2 = conv2d_block(p1, n_filters * 2, kernel_size=3, batchnorm=batchnorm)
    p2 = MaxPooling2D((2, 2))(c2)
    p2 = Dropout(dropout)(p2)
    c3 = conv2d_block(p2, n_filters * 4, kernel_size=3, batchnorm=batchnorm)
    p3 = MaxPooling2D((2, 2))(c3)
    p3 = Dropout(dropout)(p3)
    c4 = conv2d_block(p3, n_filters * 8, kernel_size=3, batchnorm=batchnorm)
    p4 = MaxPooling2D((2, 2))(c4)
    p4 = Dropout(dropout)(p4)
    c5 = conv2d_block(p4, n_filters=n_filters * 16, kernel_size=3, batchnorm=batchnorm)
    # Expansive Path
    u6 = Conv2DTranspose(n_filters * 8, (3, 3), strides=(2, 2), padding='same')(c5)
    u6 = concatenate([u6, c4])
    u6 = Dropout(dropout)(u6)
    c6 = conv2d_block(u6, n_filters * 8, kernel_size=3, batchnorm=batchnorm)
    u7 = Conv2DTranspose(n_filters * 4, (3, 3), strides=(2, 2), padding='same')(c6)
    u7 = concatenate([u7, c3])
    u7 = Dropout(dropout)(u7)
    c7 = conv2d_block(u7, n_filters * 4, kernel_size=3, batchnorm=batchnorm)
    u8 = Conv2DTranspose(n_filters * 2, (3, 3), strides=(2, 2), padding='same')(c7)
    u8 = concatenate([u8, c2])
    u8 = Dropout(dropout)(u8)
    c8 = conv2d_block(u8, n_filters * 2, kernel_size=3, batchnorm=batchnorm)
    u9 = Conv2DTranspose(n_filters * 1, (3, 3), strides=(2, 2), padding='same')(c8)
    u9 = concatenate([u9, c1])
    u9 = Dropout(dropout)(u9)
    c9 = conv2d_block(u9, n_filters * 1, kernel_size=3, batchnorm=batchnorm)
    outputs = Conv2D(n_classes, (1, 1))(c9)
    outputs = Reshape((image_height * image_width, 1, n_classes),
                      input_shape=(image_height, image_width, n_classes))(outputs)
    outputs = Activation(soft1)(outputs)
    model = Model(inputs=[input_img], outputs=[outputs])
    print(outputs.shape)
    return model
My Model Summary is:
Model: "model_2"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_12 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
conv2d_211 (Conv2D) (None, 256, 256, 16) 448 input_12[0][0]
__________________________________________________________________________________________________
batch_normalization_200 (BatchN (None, 256, 256, 16) 64 conv2d_211[0][0]
__________________________________________________________________________________________________
activation_204 (Activation) (None, 256, 256, 16) 0 batch_normalization_200[0][0]
__________________________________________________________________________________________________
max_pooling2d_45 (MaxPooling2D) (None, 128, 128, 16) 0 activation_204[0][0]
__________________________________________________________________________________________________
dropout_89 (Dropout) (None, 128, 128, 16) 0 max_pooling2d_45[0][0]
__________________________________________________________________________________________________
conv2d_213 (Conv2D) (None, 128, 128, 32) 4640 dropout_89[0][0]
__________________________________________________________________________________________________
batch_normalization_202 (BatchN (None, 128, 128, 32) 128 conv2d_213[0][0]
__________________________________________________________________________________________________
activation_206 (Activation) (None, 128, 128, 32) 0 batch_normalization_202[0][0]
__________________________________________________________________________________________________
max_pooling2d_46 (MaxPooling2D) (None, 64, 64, 32) 0 activation_206[0][0]
__________________________________________________________________________________________________
dropout_90 (Dropout) (None, 64, 64, 32) 0 max_pooling2d_46[0][0]
__________________________________________________________________________________________________
conv2d_215 (Conv2D) (None, 64, 64, 64) 18496 dropout_90[0][0]
__________________________________________________________________________________________________
batch_normalization_204 (BatchN (None, 64, 64, 64) 256 conv2d_215[0][0]
__________________________________________________________________________________________________
activation_208 (Activation) (None, 64, 64, 64) 0 batch_normalization_204[0][0]
__________________________________________________________________________________________________
max_pooling2d_47 (MaxPooling2D) (None, 32, 32, 64) 0 activation_208[0][0]
__________________________________________________________________________________________________
dropout_91 (Dropout) (None, 32, 32, 64) 0 max_pooling2d_47[0][0]
__________________________________________________________________________________________________
conv2d_217 (Conv2D) (None, 32, 32, 128) 73856 dropout_91[0][0]
__________________________________________________________________________________________________
batch_normalization_206 (BatchN (None, 32, 32, 128) 512 conv2d_217[0][0]
__________________________________________________________________________________________________
activation_210 (Activation) (None, 32, 32, 128) 0 batch_normalization_206[0][0]
__________________________________________________________________________________________________
max_pooling2d_48 (MaxPooling2D) (None, 16, 16, 128) 0 activation_210[0][0]
__________________________________________________________________________________________________
dropout_92 (Dropout) (None, 16, 16, 128) 0 max_pooling2d_48[0][0]
__________________________________________________________________________________________________
conv2d_219 (Conv2D) (None, 16, 16, 256) 295168 dropout_92[0][0]
__________________________________________________________________________________________________
batch_normalization_208 (BatchN (None, 16, 16, 256) 1024 conv2d_219[0][0]
__________________________________________________________________________________________________
activation_212 (Activation) (None, 16, 16, 256) 0 batch_normalization_208[0][0]
__________________________________________________________________________________________________
conv2d_transpose_45 (Conv2DTran (None, 32, 32, 128) 295040 activation_212[0][0]
__________________________________________________________________________________________________
concatenate_45 (Concatenate) (None, 32, 32, 256) 0 conv2d_transpose_45[0][0]
activation_210[0][0]
__________________________________________________________________________________________________
dropout_93 (Dropout) (None, 32, 32, 256) 0 concatenate_45[0][0]
__________________________________________________________________________________________________
conv2d_221 (Conv2D) (None, 32, 32, 128) 295040 dropout_93[0][0]
__________________________________________________________________________________________________
batch_normalization_210 (BatchN (None, 32, 32, 128) 512 conv2d_221[0][0]
__________________________________________________________________________________________________
activation_214 (Activation) (None, 32, 32, 128) 0 batch_normalization_210[0][0]
__________________________________________________________________________________________________
conv2d_transpose_46 (Conv2DTran (None, 64, 64, 64) 73792 activation_214[0][0]
__________________________________________________________________________________________________
concatenate_46 (Concatenate) (None, 64, 64, 128) 0 conv2d_transpose_46[0][0]
activation_208[0][0]
__________________________________________________________________________________________________
dropout_94 (Dropout) (None, 64, 64, 128) 0 concatenate_46[0][0]
__________________________________________________________________________________________________
conv2d_223 (Conv2D) (None, 64, 64, 64) 73792 dropout_94[0][0]
__________________________________________________________________________________________________
batch_normalization_212 (BatchN (None, 64, 64, 64) 256 conv2d_223[0][0]
__________________________________________________________________________________________________
activation_216 (Activation) (None, 64, 64, 64) 0 batch_normalization_212[0][0]
__________________________________________________________________________________________________
conv2d_transpose_47 (Conv2DTran (None, 128, 128, 32) 18464 activation_216[0][0]
__________________________________________________________________________________________________
concatenate_47 (Concatenate) (None, 128, 128, 64) 0 conv2d_transpose_47[0][0]
activation_206[0][0]
__________________________________________________________________________________________________
dropout_95 (Dropout) (None, 128, 128, 64) 0 concatenate_47[0][0]
__________________________________________________________________________________________________
conv2d_225 (Conv2D) (None, 128, 128, 32) 18464 dropout_95[0][0]
__________________________________________________________________________________________________
batch_normalization_214 (BatchN (None, 128, 128, 32) 128 conv2d_225[0][0]
__________________________________________________________________________________________________
activation_218 (Activation) (None, 128, 128, 32) 0 batch_normalization_214[0][0]
__________________________________________________________________________________________________
conv2d_transpose_48 (Conv2DTran (None, 256, 256, 16) 4624 activation_218[0][0]
__________________________________________________________________________________________________
concatenate_48 (Concatenate) (None, 256, 256, 32) 0 conv2d_transpose_48[0][0]
activation_204[0][0]
__________________________________________________________________________________________________
dropout_96 (Dropout) (None, 256, 256, 32) 0 concatenate_48[0][0]
__________________________________________________________________________________________________
conv2d_227 (Conv2D) (None, 256, 256, 16) 4624 dropout_96[0][0]
__________________________________________________________________________________________________
batch_normalization_216 (BatchN (None, 256, 256, 16) 64 conv2d_227[0][0]
__________________________________________________________________________________________________
activation_220 (Activation) (None, 256, 256, 16) 0 batch_normalization_216[0][0]
__________________________________________________________________________________________________
conv2d_228 (Conv2D) (None, 256, 256, 7) 119 activation_220[0][0]
__________________________________________________________________________________________________
reshape_12 (Reshape) (None, 65536, 1, 7) 0 conv2d_228[0][0]
__________________________________________________________________________________________________
activation_221 (Activation) (None, 65536, 1, 7) 0 reshape_12[0][0]
==================================================================================================
Total params: 1,179,511
Trainable params: 1,178,039
Non-trainable params: 1,472
__________________________________________________________________________________________________
Is my model right? Shouldn't the final output be (65536, 1, 1) as I am using softmax?
The code compiles, but the dice coefficient is very low.
Your model should end in (256, 256, 7).
That is 7 classes per pixel, and the shape should then agree with your output images, which are (256, 256, 1). This will work only with 'sparse_categorical_crossentropy' or a custom loss.
So, up to conv2d_228 the model seems fine (didn't look in detail, though).
There is no need for anything that comes after this convolution.
You can place the softmax directly in conv2d_228 or directly after it.
y_train should be (256, 256, 1) for this. (A sketch follows below.)
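Put together, a hedged sketch of that simplified head, assuming y_train holds integer class indices of shape (256, 256, 1):

# Per-pixel softmax over the 7 classes; no Reshape needed.
outputs = Conv2D(n_classes, (1, 1), activation='softmax')(c9)  # (None, 256, 256, 7)
model = Model(inputs=[input_img], outputs=[outputs])

# Integer masks of shape (256, 256, 1) pair with the sparse loss.
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])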
Your output in fact represents each pixel of your image: for each pixel you have an output of 1x7. Since the values in this representation lie between 0 and 1, the output fires for the desired class, and that gives you the segmentation. If it were (65536, 1, 1) you would have a dense label representation, not a categorical one.

building a u-net model for multi-class semantic segmentation

I'm trying to build a U-Net in Keras for multi-class semantic segmentation. The model I have below does not learn anything; it always just predicts the background (first) class.
Is my use of the final 'softmax' layer correct? The documentation shows an axis parameter, but I'm not sure how to set it or what it should be.
def unet(input_shape=(572, 572, 1), classes=2):
    input_image = KL.Input(shape=input_shape)
    contracting_1, pooled_1 = blocks.contracting(input_image, filters=64, block_name="block1")
    contracting_2, pooled_2 = blocks.contracting(pooled_1, filters=128, block_name="block2")
    contracting_3, pooled_3 = blocks.contracting(pooled_2, filters=256, block_name="block3")
    contracting_4, pooled_4 = blocks.contracting(pooled_3, filters=512, block_name="block4")
    contracting_5, _ = blocks.contracting(pooled_4, filters=1024, block_name="block5")
    dropout = KL.Dropout(rate=0.5)(contracting_5)
    expanding_1 = blocks.expanding(dropout, merge_layer=contracting_4, filters=512, block_name="block6")
    expanding_2 = blocks.expanding(expanding_1, merge_layer=contracting_3, filters=256, block_name="block7")
    expanding_3 = blocks.expanding(expanding_2, merge_layer=contracting_2, filters=128, block_name="block8")
    expanding_4 = blocks.expanding(expanding_3, merge_layer=contracting_1, filters=64, block_name="block9")
    class_output = KL.Conv2D(classes, kernel_size=(1, 1), activation='softmax', name='class_output')(expanding_4)
    model = KM.Model(inputs=[input_image], outputs=[class_output])
    return model
blocks:
def contracting(input_layer, filters, kernel_size=(3, 3), padding='same', block_name=""):
    conv_a = KL.Conv2D(filters, kernel_size, activation='relu', padding=padding,
                       name='{}_contracting_conv_a'.format(block_name))(input_layer)
    conv_b = KL.Conv2D(filters, kernel_size, activation='relu', padding=padding,
                       name='{}_contracting_conv_b'.format(block_name))(conv_a)
    pool = KL.MaxPooling2D(pool_size=(2, 2), padding=padding,
                           name='{}_contracting_pool'.format(block_name))(conv_b)
    batch_normalization = KL.BatchNormalization()(pool)
    return conv_b, batch_normalization

def expanding(input_layer, merge_layer, filters, kernel_size=(3, 3), padding='same', block_name=""):
    input_layer = KL.UpSampling2D(size=(2, 2))(input_layer)
    conv_up = KL.Conv2D(filters, kernel_size=(2, 2), activation='relu', padding='same',
                        name='{}_expanding_conv_up'.format(block_name))(input_layer)
    conv_up_height, conv_up_width = int(conv_up.shape[1]), int(conv_up.shape[2])
    merge_height, merge_width = int(merge_layer.shape[1]), int(merge_layer.shape[2])
    crop_top = (merge_height - conv_up_height) // 2
    crop_bottom = (merge_height - conv_up_height) - crop_top
    crop_left = (merge_width - conv_up_width) // 2
    crop_right = (merge_width - conv_up_width) - crop_left
    cropping = ((crop_top, crop_bottom), (crop_left, crop_right))
    merge_layer = KL.Cropping2D(cropping)(merge_layer)
    merged = KL.concatenate([merge_layer, conv_up])
    conv_a = KL.Conv2D(filters, kernel_size, activation='relu', padding=padding,
                       name='{}_expanding_conv_a'.format(block_name))(merged)
    conv_b = KL.Conv2D(filters, kernel_size, activation='relu', padding=padding,
                       name='{}_expanding_conv_b'.format(block_name))(conv_a)
    batch_normalization = KL.BatchNormalization()(conv_b)
    return batch_normalization
compile:
optimizer = keras.optimizers.SGD(lr=0.0001, momentum=0.9)
loss = keras.losses.categorical_crossentropy
metrics = [keras.metrics.categorical_accuracy]
model.compile(optimizer, loss, metrics)
Model Summary:
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_2 (InputLayer) (None, 96, 96, 3) 0
__________________________________________________________________________________________________
block1_contracting_conv_a (Conv (None, 96, 96, 64) 1792 input_2[0][0]
__________________________________________________________________________________________________
block1_contracting_conv_b (Conv (None, 96, 96, 64) 36928 block1_contracting_conv_a[0][0]
__________________________________________________________________________________________________
block1_contracting_pool (MaxPoo (None, 48, 48, 64) 0 block1_contracting_conv_b[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 48, 48, 64) 256 block1_contracting_pool[0][0]
__________________________________________________________________________________________________
block2_contracting_conv_a (Conv (None, 48, 48, 128) 73856 batch_normalization_10[0][0]
__________________________________________________________________________________________________
block2_contracting_conv_b (Conv (None, 48, 48, 128) 147584 block2_contracting_conv_a[0][0]
__________________________________________________________________________________________________
block2_contracting_pool (MaxPoo (None, 24, 24, 128) 0 block2_contracting_conv_b[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 24, 24, 128) 512 block2_contracting_pool[0][0]
__________________________________________________________________________________________________
block3_contracting_conv_a (Conv (None, 24, 24, 256) 295168 batch_normalization_11[0][0]
__________________________________________________________________________________________________
block3_contracting_conv_b (Conv (None, 24, 24, 256) 590080 block3_contracting_conv_a[0][0]
__________________________________________________________________________________________________
block3_contracting_pool (MaxPoo (None, 12, 12, 256) 0 block3_contracting_conv_b[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 12, 12, 256) 1024 block3_contracting_pool[0][0]
__________________________________________________________________________________________________
block4_contracting_conv_a (Conv (None, 12, 12, 512) 1180160 batch_normalization_12[0][0]
__________________________________________________________________________________________________
block4_contracting_conv_b (Conv (None, 12, 12, 512) 2359808 block4_contracting_conv_a[0][0]
__________________________________________________________________________________________________
block4_contracting_pool (MaxPoo (None, 6, 6, 512) 0 block4_contracting_conv_b[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 6, 6, 512) 2048 block4_contracting_pool[0][0]
__________________________________________________________________________________________________
block5_contracting_conv_a (Conv (None, 6, 6, 1024) 4719616 batch_normalization_13[0][0]
__________________________________________________________________________________________________
block5_contracting_conv_b (Conv (None, 6, 6, 1024) 9438208 block5_contracting_conv_a[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 6, 6, 1024) 0 block5_contracting_conv_b[0][0]
__________________________________________________________________________________________________
up_sampling2d_5 (UpSampling2D) (None, 12, 12, 1024) 0 dropout_2[0][0]
__________________________________________________________________________________________________
cropping2d_5 (Cropping2D) (None, 12, 12, 512) 0 block4_contracting_conv_b[0][0]
__________________________________________________________________________________________________
block6_expanding_conv_up (Conv2 (None, 12, 12, 512) 2097664 up_sampling2d_5[0][0]
__________________________________________________________________________________________________
concatenate_5 (Concatenate) (None, 12, 12, 1024) 0 cropping2d_5[0][0]
block6_expanding_conv_up[0][0]
__________________________________________________________________________________________________
block6_expanding_conv_a (Conv2D (None, 12, 12, 512) 4719104 concatenate_5[0][0]
__________________________________________________________________________________________________
block6_expanding_conv_b (Conv2D (None, 12, 12, 512) 2359808 block6_expanding_conv_a[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 12, 12, 512) 2048 block6_expanding_conv_b[0][0]
__________________________________________________________________________________________________
up_sampling2d_6 (UpSampling2D) (None, 24, 24, 512) 0 batch_normalization_15[0][0]
__________________________________________________________________________________________________
cropping2d_6 (Cropping2D) (None, 24, 24, 256) 0 block3_contracting_conv_b[0][0]
__________________________________________________________________________________________________
block7_expanding_conv_up (Conv2 (None, 24, 24, 256) 524544 up_sampling2d_6[0][0]
__________________________________________________________________________________________________
concatenate_6 (Concatenate) (None, 24, 24, 512) 0 cropping2d_6[0][0]
block7_expanding_conv_up[0][0]
__________________________________________________________________________________________________
block7_expanding_conv_a (Conv2D (None, 24, 24, 256) 1179904 concatenate_6[0][0]
__________________________________________________________________________________________________
block7_expanding_conv_b (Conv2D (None, 24, 24, 256) 590080 block7_expanding_conv_a[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 24, 24, 256) 1024 block7_expanding_conv_b[0][0]
__________________________________________________________________________________________________
up_sampling2d_7 (UpSampling2D) (None, 48, 48, 256) 0 batch_normalization_16[0][0]
__________________________________________________________________________________________________
cropping2d_7 (Cropping2D) (None, 48, 48, 128) 0 block2_contracting_conv_b[0][0]
__________________________________________________________________________________________________
block8_expanding_conv_up (Conv2 (None, 48, 48, 128) 131200 up_sampling2d_7[0][0]
__________________________________________________________________________________________________
concatenate_7 (Concatenate) (None, 48, 48, 256) 0 cropping2d_7[0][0]
block8_expanding_conv_up[0][0]
__________________________________________________________________________________________________
block8_expanding_conv_a (Conv2D (None, 48, 48, 128) 295040 concatenate_7[0][0]
__________________________________________________________________________________________________
block8_expanding_conv_b (Conv2D (None, 48, 48, 128) 147584 block8_expanding_conv_a[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 48, 48, 128) 512 block8_expanding_conv_b[0][0]
__________________________________________________________________________________________________
up_sampling2d_8 (UpSampling2D) (None, 96, 96, 128) 0 batch_normalization_17[0][0]
__________________________________________________________________________________________________
cropping2d_8 (Cropping2D) (None, 96, 96, 64) 0 block1_contracting_conv_b[0][0]
__________________________________________________________________________________________________
block9_expanding_conv_up (Conv2 (None, 96, 96, 64) 32832 up_sampling2d_8[0][0]
__________________________________________________________________________________________________
concatenate_8 (Concatenate) (None, 96, 96, 128) 0 cropping2d_8[0][0]
block9_expanding_conv_up[0][0]
__________________________________________________________________________________________________
block9_expanding_conv_a (Conv2D (None, 96, 96, 64) 73792 concatenate_8[0][0]
__________________________________________________________________________________________________
block9_expanding_conv_b (Conv2D (None, 96, 96, 64) 36928 block9_expanding_conv_a[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 96, 96, 64) 256 block9_expanding_conv_b[0][0]
__________________________________________________________________________________________________
class_output (Conv2D) (None, 96, 96, 4) 260 batch_normalization_18[0][0]
==================================================================================================
Total params: 31,039,620
Trainable params: 31,035,780
Non-trainable params: 3,840
__________________________________________________________________________________________________
class percentages in dataset:
{0: 0.6245757457188198,
1: 0.16082110268729075,
2: 0.1188858904157366,
3: 0.09571726117815291}
class 0 is the background
shape of image from generator (rgb): (1, 96, 96, 3)
shape of labels from generator: (1, 96, 96, 4)
There doesn't seem to be anything really wrong with your model.
Softmax is OK, as it defaults to the last axis, and you're clearly using 'channels_last' as the config.
Suggestions are:
Add a few BatchNormalization() layers and decrease your learning rate (this prevents relu from going too fast to "all zeroes").
Check that your output data range is correct, with np.unique(y_train) containing only 0 and 1
Check that every pixel is classified with only one class: (np.sum(y_train, axis=-1) == 1).all() == True.
Check whether your images aren't too biased towards the first class: np.sum(y_train[:,:,:,0]) should not be much bigger than np.sum(y_train[:,:,:,1:]).
If it is, consider fitting with the class_weight parameter, passing weights to balance the loss for each class (check the Keras documentation on fit for how to use it; a weighted-loss sketch follows below).
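Picking up that last suggestion: class_weight in fit does not broadcast cleanly over per-pixel one-hot targets in all Keras versions, so here is a hedged alternative that folds inverse-frequency weights, computed from the class percentages listed above, directly into the loss (reusing the optimizer and metrics variables from the compile block):

import numpy as np
from keras import backend as K

freq = np.array([0.6246, 0.1608, 0.1189, 0.0957])  # class percentages above
w = (1.0 / freq) / (1.0 / freq).sum()              # normalized inverse frequency
class_weights = K.constant(w)

def weighted_categorical_crossentropy(y_true, y_pred):
    # y_true: one-hot (batch, H, W, 4); weight each pixel's loss by its class
    y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
    return -K.mean(K.sum(y_true * K.log(y_pred) * class_weights, axis=-1))

model.compile(optimizer, loss=weighted_categorical_crossentropy, metrics=metrics)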
This model works just fine for me on most segmentation projects; I use cross-entropy for multi-class segmentation and smooth dice for binary classes.
def conv_block(tensor, nfilters, size=3, padding='same', initializer="he_normal"):
    x = Conv2D(filters=nfilters, kernel_size=(size, size), padding=padding, kernel_initializer=initializer)(tensor)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    x = Conv2D(filters=nfilters, kernel_size=(size, size), padding=padding, kernel_initializer=initializer)(x)
    x = BatchNormalization()(x)
    x = Activation("relu")(x)
    return x

def deconv_block(tensor, residual, nfilters, size=3, padding='same', strides=(2, 2)):
    y = Conv2DTranspose(nfilters, kernel_size=(size, size), strides=strides, padding=padding)(tensor)
    y = concatenate([y, residual], axis=3)
    y = conv_block(y, nfilters)
    return y

def Unet(img_height, img_width, nclasses=3, filters=64):
    # down
    input_layer = Input(shape=(img_height, img_width, 3), name='image_input')
    conv1 = conv_block(input_layer, nfilters=filters)
    conv1_out = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = conv_block(conv1_out, nfilters=filters*2)
    conv2_out = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = conv_block(conv2_out, nfilters=filters*4)
    conv3_out = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = conv_block(conv3_out, nfilters=filters*8)
    conv4_out = MaxPooling2D(pool_size=(2, 2))(conv4)
    conv4_out = Dropout(0.5)(conv4_out)
    conv5 = conv_block(conv4_out, nfilters=filters*16)
    conv5 = Dropout(0.5)(conv5)
    # up
    deconv6 = deconv_block(conv5, residual=conv4, nfilters=filters*8)
    deconv6 = Dropout(0.5)(deconv6)
    deconv7 = deconv_block(deconv6, residual=conv3, nfilters=filters*4)
    deconv7 = Dropout(0.5)(deconv7)
    deconv8 = deconv_block(deconv7, residual=conv2, nfilters=filters*2)
    deconv9 = deconv_block(deconv8, residual=conv1, nfilters=filters)
    # output
    output_layer = Conv2D(filters=nclasses, kernel_size=(1, 1))(deconv9)
    output_layer = BatchNormalization()(output_layer)
    output_layer = Activation('softmax')(output_layer)
    model = Model(inputs=input_layer, outputs=output_layer, name='Unet')
    return model
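A usage sketch for the 7-class case from the question above (x_train/y_train/x_val/y_val are placeholders: RGB images and integer masks of shape (256, 256)):

model = Unet(img_height=256, img_width=256, nclasses=7)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',  # integer masks
              metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=8, epochs=50,
          validation_data=(x_val, y_val))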
Sometimes the problem is related to the model architecture. When you are dealing with a complicated dataset for segmentation, you need to enhance the model architecture. I encountered the same problem with a new dataset while the model worked well on another dataset, so I used Res-UNet instead of UNet as the model architecture and the problem was solved.
Hope this helps.
