I'm trying to build an autoencoder with the following configuration:
x = Input(shape=(36,1))
# Encoder
conv1_1 = Conv1D(16, 3, activation='relu', padding='same')(x)
pool1 = MaxPooling1D(2)(conv1_1)
conv1_2 = Conv1D(8, 3, activation='relu', padding='same')(pool1)
pool2 = MaxPooling1D(2)(conv1_2)
conv1_3 = Conv1D(8, 3, activation='relu', padding='same')(pool2)
h = MaxPooling1D(3)(conv1_3)
# Decoder
conv2_1 = Conv1D(8,3, activation='relu', padding='same')(h)
up1 = UpSampling1D(3)(conv2_1)
conv2_2 = Conv1D(8,3, activation='relu', padding='same')(up1)
up2 = UpSampling1D(2)(conv2_2)
conv2_3 = Conv1D(16,3, activation='relu')(up2)
up3 = UpSampling1D(2)(conv2_3)
r = Conv1D(1,3, activation='sigmoid', padding='same')(up3)
The summary is:
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 36, 1)] 0
conv1d (Conv1D) (None, 36, 16) 64
max_pooling1d (MaxPooling1D (None, 18, 16) 0
)
conv1d_1 (Conv1D) (None, 18, 8) 392
max_pooling1d_1 (MaxPooling (None, 9, 8) 0
1D)
conv1d_2 (Conv1D) (None, 9, 8) 200
max_pooling1d_2 (MaxPooling (None, 3, 8) 0
1D)
conv1d_3 (Conv1D) (None, 3, 8) 200
up_sampling1d (UpSampling1D (None, 9, 8) 0
)
conv1d_4 (Conv1D) (None, 9, 8) 200
up_sampling1d_1 (UpSampling (None, 18, 8) 0
1D)
conv1d_5 (Conv1D) (None, 16, 16) 400 <-------
up_sampling1d_2 (UpSampling (None, 32, 16) 0
1D)
conv1d_6 (Conv1D) (None, 32, 1) 49
As you can see, I put an arrow where the output length changes, and I cannot understand why; it causes the output to have a different shape from the input.
What can I do? Is there a way to work out the best parameters given the input shape?
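For what it's worth, the shrink at the arrow comes from the one Conv1D that lacks padding='same': a "valid" convolution with kernel size 3 trims two steps, turning the length-18 input into 16, and the following UpSampling1D(2) then doubles that to 32 instead of 36. A minimal sketch of the corrected decoder, with everything else from the question unchanged:

# Decoder: every Conv1D uses padding='same', so only the
# UpSampling1D layers change the sequence length
conv2_1 = Conv1D(8, 3, activation='relu', padding='same')(h)     # (None, 3, 8)
up1 = UpSampling1D(3)(conv2_1)                                   # (None, 9, 8)
conv2_2 = Conv1D(8, 3, activation='relu', padding='same')(up1)   # (None, 9, 8)
up2 = UpSampling1D(2)(conv2_2)                                   # (None, 18, 8)
conv2_3 = Conv1D(16, 3, activation='relu', padding='same')(up2)  # (None, 18, 16)
up3 = UpSampling1D(2)(conv2_3)                                   # (None, 36, 16)
r = Conv1D(1, 3, activation='sigmoid', padding='same')(up3)      # (None, 36, 1)

As for choosing parameters given the input shape: with padding='same', the length is divided only by the pooling factors (36 → 18 → 9 → 3) and multiplied back by the upsampling factors (3 → 9 → 18 → 36), so it suffices to pick pooling factors whose product divides the input length and mirror them in the decoder.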
I am currently working on a DL project where I would like to reproduce the work from an existing research paper. It is about detecting sleep episodes from EEG data using a CNN model. The source code uses Keras, and I am having a hard time reproducing the model in PyTorch.
The existing Keras model trains very fast, and after 2 epochs it has the following stats (this is a 4-class classification problem):
Training loss = 0.431
Training accuracy = 0.842
On validation set:
Kappa score per class = [0.373 0.5571 0.033 0.129]
Overall kappa = 0.319
accuracy = 0.687
precision = [0.993 0.514 0.029 0.115 ]
recall = [0.696 0.719 0.180 0.573]
f1 = [0.818 0.600 0.05 0.192]
I created the PyTorch version of the model and ran it on the same data. It runs much slower, and after 6 epochs this is what I got:
Training loss = 0.379899
Training accuracy = 0.869
On validation set:
Kappa score per class = [ 0.111 0.079 -0.016 0.040]
Overall kappa = 0.078
accuracy = 0.559
precision = [0.682 0.432 0.012 0.105]
recall = [0.797 0.106 0.035 0.155 ]
f1 = [0.735 0.170 0.018 0.125]
The only differences between the two models are in the initialization, optimizer, and loss function:
The Keras model uses the Nadam optimizer; the PyTorch model uses SGD (I am not aware that Nadam is available in PyTorch — see the sketch after this list).
The Keras model uses glorot_normal for kernel initialization; the PyTorch model uses xavier_uniform.
The Keras model uses "categorical cross-entropy" loss with softmax as the last output layer; the PyTorch model uses CrossEntropyLoss. I do not use Softmax in the last layer since CrossEntropyLoss internally applies LogSoftmax.
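For reference, a minimal sketch narrowing the first two differences, assuming a reasonably recent PyTorch: torch.optim.NAdam has been available since 1.10, and nn.init.xavier_normal_ is PyTorch's name for Keras' glorot_normal (xavier_uniform_ samples from a different distribution). The helper name is illustrative, and model is the MSECNN16s instance defined below:

import torch
import torch.nn as nn

def init_glorot_normal(module):
    # glorot_normal in Keras corresponds to Xavier *normal* in PyTorch
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.xavier_normal_(module.weight)
        nn.init.zeros_(module.bias)

model.apply(init_glorot_normal)

# NAdam, mirroring the Keras optimizer choice (defaults left as-is)
optimizer = torch.optim.NAdam(model.parameters())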
I have been spending a lot of time trying to understand why my PyTorch model performs much worse, to no avail. I would appreciate it if gurus and experts could help advise here.
Here is the Keras model:
def build_model(data_dim, n_channels, n_cl):
    eeg_channels = 1
    act_conv = 'relu'
    init_conv = 'glorot_normal'
    dp_conv = 0.3

    def cnn_block(input_shape):
        input = Input(shape=input_shape)
        x = GaussianNoise(0.0005)(input)
        x = Conv2D(32, (3, 1), strides=(1, 1), padding='same', kernel_initializer=init_conv)(x)
        x = BatchNormalization()(x)
        x = Activation(act_conv)(x)
        x = MaxPooling2D(pool_size=(2, 1), padding='same')(x)
        x = Conv2D(64, (3, 1), strides=(1, 1), padding='same', kernel_initializer=init_conv)(x)
        x = BatchNormalization()(x)
        x = Activation(act_conv)(x)
        x = MaxPooling2D(pool_size=(2, 1), padding='same')(x)
        for i in range(4):
            x = Conv2D(128, (3, 1), strides=(1, 1), padding='same', kernel_initializer=init_conv)(x)
            x = BatchNormalization()(x)
            x = Activation(act_conv)(x)
            x = MaxPooling2D(pool_size=(2, 1), padding='same')(x)
        for i in range(6):
            x = Conv2D(256, (3, 1), strides=(1, 1), padding='same', kernel_initializer=init_conv)(x)
            x = BatchNormalization()(x)
            x = Activation(act_conv)(x)
            x = MaxPooling2D(pool_size=(2, 1), padding='same')(x)
        flatten1 = Flatten()(x)
        cnn_eeg = Model(inputs=input, outputs=flatten1)
        return cnn_eeg

    hidden_units1 = 256
    dp_dense = 0.5
    eeg_channels = 1
    eog_channels = 2

    input_eeg = Input(shape=(data_dim, 1, 3))
    cnn_eeg = cnn_block((data_dim, 1, 3))
    x_eeg = cnn_eeg(input_eeg)
    x = BatchNormalization()(x_eeg)
    x = Dropout(dp_dense)(x)
    x = Dense(units=hidden_units1, activation=act_conv, kernel_initializer=init_conv)(x)
    x = BatchNormalization()(x)
    x = Dropout(dp_dense)(x)
    predictions = Dense(units=n_cl, activation='softmax', kernel_initializer=init_conv)(x)
    model = Model(inputs=[input_eeg], outputs=[predictions])
    return [cnn_eeg, model]
The model is used as follows:
[cnn_eeg, model] = build_model(data_dim, n_channels, n_cl)
Nadam = optimizers.Nadam()
model.compile(optimizer='Nadam', loss='categorical_crossentropy', metrics=['accuracy'], sample_weight_mode=None)
print(cnn_eeg.summary())
print(model.summary())
model.fit_generator(generator_train, steps_per_epoch=steps_per_epoch, class_weight=weight, epochs=1, verbose=1, callbacks=[history], initial_epoch=0)
Printout of the model:
Layer (type) Output Shape Param #
input_2 (InputLayer) [(None, 3200, 1, 3)] 0
gaussian_noise (GaussianNois (None, 3200, 1, 3) 0
conv2d (Conv2D) (None, 3200, 1, 32) 320
batch_normalization (BatchNo (None, 3200, 1, 32) 128
activation (Activation) (None, 3200, 1, 32) 0
max_pooling2d (MaxPooling2D) (None, 1600, 1, 32) 0
conv2d_1 (Conv2D) (None, 1600, 1, 64) 6208
batch_normalization_1 (Batch (None, 1600, 1, 64) 256
activation_1 (Activation) (None, 1600, 1, 64) 0
max_pooling2d_1 (MaxPooling2 (None, 800, 1, 64) 0
conv2d_2 (Conv2D) (None, 800, 1, 128) 24704
batch_normalization_2 (Batch (None, 800, 1, 128) 512
activation_2 (Activation) (None, 800, 1, 128) 0
max_pooling2d_2 (MaxPooling2 (None, 400, 1, 128) 0
conv2d_3 (Conv2D) (None, 400, 1, 128) 49280
batch_normalization_3 (Batch (None, 400, 1, 128) 512
activation_3 (Activation) (None, 400, 1, 128) 0
max_pooling2d_3 (MaxPooling2 (None, 200, 1, 128) 0
conv2d_4 (Conv2D) (None, 200, 1, 128) 49280
batch_normalization_4 (Batch (None, 200, 1, 128) 512
activation_4 (Activation) (None, 200, 1, 128) 0
max_pooling2d_4 (MaxPooling2 (None, 100, 1, 128) 0
conv2d_5 (Conv2D) (None, 100, 1, 128) 49280
batch_normalization_5 (Batch (None, 100, 1, 128) 512
activation_5 (Activation) (None, 100, 1, 128) 0
max_pooling2d_5 (MaxPooling2 (None, 50, 1, 128) 0
conv2d_6 (Conv2D) (None, 50, 1, 256) 98560
batch_normalization_6 (Batch (None, 50, 1, 256) 1024
activation_6 (Activation) (None, 50, 1, 256) 0
max_pooling2d_6 (MaxPooling2 (None, 25, 1, 256) 0
conv2d_7 (Conv2D) (None, 25, 1, 256) 196864
batch_normalization_7 (Batch (None, 25, 1, 256) 1024
activation_7 (Activation) (None, 25, 1, 256) 0
max_pooling2d_7 (MaxPooling2 (None, 13, 1, 256) 0
conv2d_8 (Conv2D) (None, 13, 1, 256) 196864
batch_normalization_8 (Batch (None, 13, 1, 256) 1024
activation_8 (Activation) (None, 13, 1, 256) 0
max_pooling2d_8 (MaxPooling2 (None, 7, 1, 256) 0
conv2d_9 (Conv2D) (None, 7, 1, 256) 196864
batch_normalization_9 (Batch (None, 7, 1, 256) 1024
activation_9 (Activation) (None, 7, 1, 256) 0
max_pooling2d_9 (MaxPooling2 (None, 4, 1, 256) 0
conv2d_10 (Conv2D) (None, 4, 1, 256) 196864
batch_normalization_10 (Batc (None, 4, 1, 256) 1024
activation_10 (Activation) (None, 4, 1, 256) 0
max_pooling2d_10 (MaxPooling (None, 2, 1, 256) 0
conv2d_11 (Conv2D) (None, 2, 1, 256) 196864
batch_normalization_11 (Batc (None, 2, 1, 256) 1024
activation_11 (Activation) (None, 2, 1, 256) 0
max_pooling2d_11 (MaxPooling (None, 1, 1, 256) 0
flatten (Flatten) (None, 256) 0
Total params: 1,270,528
Trainable params: 1,266,240
Non-trainable params: 4,288
None
Model: "model_1"
Layer (type) Output Shape Param #
input_1 (InputLayer) [(None, 3200, 1, 3)] 0
model (Functional) (None, 256) 1270528
batch_normalization_12 (Batc (None, 256) 1024
dropout (Dropout) (None, 256) 0
dense (Dense) (None, 256) 65792
batch_normalization_13 (Batc (None, 256) 1024
dropout_1 (Dropout) (None, 256) 0
dense_1 (Dense) (None, 4) 1028
Total params: 1,339,396
Trainable params: 1,334,084
Non-trainable params: 5,312
Below is my PyTorch model:
class MSECNN16s(nn.Module):
    def __init__(self):
        # input shape = (batchsize, 3, 1, windowsize=16*200=3200)
        super(MSECNN16s, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=(1, 3), padding=(0, 1))
        self.conv2 = nn.Conv2d(32, 64, kernel_size=(1, 3), padding=(0, 1))
        self.conv3 = nn.Conv2d(64, 128, kernel_size=(1, 3), padding=(0, 1))
        self.conv4 = nn.ModuleList()
        for x in range(3):
            conv = nn.Conv2d(128, 128, kernel_size=(1, 3), padding=(0, 1))
            nn.init.xavier_uniform_(conv.weight)
            nn.init.zeros_(conv.bias)
            self.conv4.append(conv)
        self.conv5 = nn.Conv2d(128, 256, kernel_size=(1, 3), padding=(0, 1))
        self.conv6 = nn.ModuleList()
        for x in range(5):
            conv = nn.Conv2d(256, 256, kernel_size=(1, 3), padding=(0, 1))
            nn.init.xavier_uniform_(conv.weight)
            nn.init.zeros_(conv.bias)
            self.conv6.append(conv)
        self.fc1 = nn.Linear(256, 256)
        self.fc2 = nn.Linear(256, 4)
        nn.init.xavier_uniform_(self.conv1.weight)
        nn.init.zeros_(self.conv1.bias)
        nn.init.xavier_uniform_(self.conv2.weight)
        nn.init.zeros_(self.conv2.bias)
        nn.init.xavier_uniform_(self.conv3.weight)
        nn.init.zeros_(self.conv3.bias)
        nn.init.xavier_uniform_(self.conv5.weight)
        nn.init.zeros_(self.conv5.bias)
        nn.init.xavier_uniform_(self.fc1.weight)
        nn.init.zeros_(self.fc1.bias)
        nn.init.xavier_uniform_(self.fc2.weight)
        nn.init.zeros_(self.fc2.bias)

    def forward(self, x):
        # x = (batchsize, 1, 3, windowsize=16*200)
        device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        x = x.permute(0, 2, 1, 3)  # convert to (batchsize, 3, 1, 3200)
        std = 0.0005
        x = (torch.randn_like(x) * std) + x
        x = self.conv1(x)  # output: (batchsize, 32, 1, 3200)
        x = nn.BatchNorm2d(32).to(device)(x)
        x = F.relu(x)
        x = nn.MaxPool2d(kernel_size=(1, 2))(x)  # output: (batchsize, 32, 1, 1600)
        x = self.conv2(x)  # output: (batchsize, 64, 1, 1600)
        x = nn.BatchNorm2d(64).to(device)(x)
        x = F.relu(x)
        x = nn.MaxPool2d(kernel_size=(1, 2))(x)  # output: (batchsize, 64, 1, 800)
        x = self.conv3(x)  # output: (batchsize, 128, 1, 800)
        x = nn.BatchNorm2d(128).to(device)(x)
        x = F.relu(x)
        x = nn.MaxPool2d(kernel_size=(1, 2))(x)  # output: (batchsize, 128, 1, 400)
        for conv in self.conv4:
            x = conv(x)
            x = nn.BatchNorm2d(128).to(device)(x)
            x = F.relu(x)
            padding = (0, 0)
            if x.shape[-1] % 2 > 0:
                padding = (0, 1)
            x = nn.MaxPool2d(kernel_size=(1, 2), padding=padding)(x)  # output widths go 200 -> 100 -> 50
        x = self.conv5(x)  # output: (batchsize, 256, 1, 50)
        x = nn.BatchNorm2d(256).to(device)(x)
        x = F.relu(x)
        x = nn.MaxPool2d(kernel_size=(1, 2))(x)  # output: (batchsize, 256, 1, 25)
        for conv in self.conv6:
            x = conv(x)
            x = nn.BatchNorm2d(256).to(device)(x)
            x = F.relu(x)
            padding = (0, 0)
            if x.shape[-1] % 2 > 0:
                padding = (0, 1)
            x = nn.MaxPool2d(kernel_size=(1, 2), padding=padding)(x)  # output widths go 13 -> 7 -> 4 -> 2 -> 1
        # x is (batchsize, 256, 1, 1)
        x = x.squeeze()  # x is (batchsize, 256)
        x = nn.BatchNorm1d(256).to(device)(x)
        x = nn.Dropout(p=0.5)(x)
        x = self.fc1(x)
        x = F.relu(x)
        x = nn.BatchNorm1d(256).to(device)(x)
        x = nn.Dropout(p=0.5)(x)
        x = self.fc2(x)
        return x
with the following optimizer and criterion:
model = MSECNN16s()
model.to(device)

criterion = nn.CrossEntropyLoss(weight=torch.Tensor([1.0, 11.1, 102.9, 38.1]).to(device))
# load the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0001)
Any advice is appreciated! Many thanks!
Regards
Edwin
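One thing worth checking, flagged editorially: in the forward pass above, nn.BatchNorm2d(...).to(device)(x), nn.BatchNorm1d(...), and nn.Dropout(...) are constructed inside forward, so every call creates a brand-new BatchNorm whose affine parameters and running statistics are freshly initialized, never registered with the module, and never trained — unlike the Keras BatchNormalization layers, which are learned. A minimal sketch of the usual pattern (layer names illustrative, only the first block shown):

import torch.nn as nn
import torch.nn.functional as F

class MSECNN16sFixed(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=(1, 3), padding=(0, 1))
        self.bn1 = nn.BatchNorm2d(32)     # registered once, trained with the model
        self.pool = nn.MaxPool2d(kernel_size=(1, 2))
        self.drop = nn.Dropout(p=0.5)     # module form also respects model.eval()
        # ... remaining conv/bn pairs declared here the same way ...

    def forward(self, x):
        x = self.pool(F.relu(self.bn1(self.conv1(x))))
        # ... rest of the network ...
        return x

A related detail: x.squeeze() drops the batch dimension when the batch size is 1; x.flatten(1) is the safer way to go from (batchsize, 256, 1, 1) to (batchsize, 256).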
Can anyone please help me convert this model to PyTorch? I already tried to convert from Keras to PyTorch as in "How can I convert this keras cnn model to pytorch version", but the training results were different. Thank you.
input_3d = (1, 64, 96, 96)
pool_3d = (2, 2, 2)

model = Sequential()
model.add(Convolution3D(8, 3, 3, 3, name='conv1', input_shape=input_3d,
                        data_format='channels_first'))
model.add(MaxPooling3D(pool_size=pool_3d, name='pool1'))
model.add(Convolution3D(8, 3, 3, 3, name='conv2', data_format='channels_first'))
model.add(MaxPooling3D(pool_size=pool_3d, name='pool2'))
model.add(Convolution3D(8, 3, 3, 3, name='conv3', data_format='channels_first'))
model.add(MaxPooling3D(pool_size=pool_3d, name='pool3'))
model.add(Flatten())
model.add(Dense(2000, activation='relu', name='dense1'))
model.add(Dropout(0.5, name='dropout1'))
model.add(Dense(500, activation='relu', name='dense2'))
model.add(Dropout(0.5, name='dropout2'))
model.add(Dense(3, activation='softmax', name='softmax'))
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1 (Conv3D) (None, 8, 60, 94, 94) 224
_________________________________________________________________
pool1 (MaxPooling3D) (None, 8, 30, 47, 47) 0
_________________________________________________________________
conv2 (Conv3D) (None, 8, 28, 45, 45) 1736
_________________________________________________________________
pool2 (MaxPooling3D) (None, 8, 14, 22, 22) 0
_________________________________________________________________
conv3 (Conv3D) (None, 8, 12, 20, 20) 1736
_________________________________________________________________
pool3 (MaxPooling3D) (None, 8, 6, 10, 10) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4800) 0
_________________________________________________________________
dense1 (Dense) (None, 2000) 9602000
_________________________________________________________________
dropout1 (Dropout) (None, 2000) 0
_________________________________________________________________
dense2 (Dense) (None, 500) 1000500
_________________________________________________________________
dropout2 (Dropout) (None, 500) 0
_________________________________________________________________
softmax (Dense) (None, 3) 1503
=================================================================
Your PyTorch equivalent of the Keras model would look like this:
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.maxpool = nn.MaxPool3d((2, 2, 2))
        self.conv1 = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3)
        self.conv2 = nn.Conv3d(in_channels=8, out_channels=8, kernel_size=3)
        self.conv3 = nn.Conv3d(in_channels=8, out_channels=8, kernel_size=3)
        self.linear1 = nn.Linear(4800, 2000)
        self.dropout1 = nn.Dropout(0.5)  # plain Dropout: the activations are 2-D here
        self.linear2 = nn.Linear(2000, 500)
        self.dropout2 = nn.Dropout(0.5)
        self.linear3 = nn.Linear(500, 3)

    def forward(self, x):
        out = self.maxpool(self.conv1(x))
        out = self.maxpool(self.conv2(out))
        out = self.maxpool(self.conv3(out))
        # Flattening process
        b, c, d, h, w = out.size()  # batch_size, channels, depth, height, width
        out = out.view(b, c * d * h * w)
        # dense1 and dense2 carry ReLU activations in the Keras model, so mirror that
        out = self.dropout1(F.relu(self.linear1(out)))
        out = self.dropout2(F.relu(self.linear2(out)))
        out = self.linear3(out)
        out = torch.softmax(out, 1)
        return out
A driver program to test the model:
inputs = torch.randn(8, 1, 64, 96, 96)
model = CNN()
outputs = model(inputs)
print(outputs.shape) # torch.Size([8, 3])
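One usage caveat, added editorially: if this network is trained with nn.CrossEntropyLoss, the final torch.softmax should be removed, since that loss applies LogSoftmax internally; keep the softmax only when probabilities are needed at inference time. A sketch under that assumption:

criterion = nn.CrossEntropyLoss()                     # expects raw logits
logits = model(inputs)                                # with the final softmax removed
loss = criterion(logits, torch.randint(0, 3, (8,)))   # dummy targets for the driver above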
You can save the Keras weights and reload them in PyTorch. The steps are:
Step 0: Train a model in Keras.
Step 1: Recreate and initialize your model architecture in PyTorch.
Step 2: Import your Keras model and copy the weights.
Step 3: Load those weights onto your PyTorch model.
Step 4: Test and save your PyTorch model.
You can follow the example here (a sketch of steps 2 and 3 follows): https://gereshes.com/2019/06/24/how-to-transfer-a-simple-keras-model-to-pytorch-the-hard-way/
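For concreteness, a minimal sketch of steps 2 and 3 for the model above, under stated assumptions about weight layouts: Keras stores a Conv3D kernel as (kD, kH, kW, in, out) and a Dense kernel as (in, out), while PyTorch expects (out, in, kD, kH, kW) and (out, in). keras_model and pt_model are assumed to be the trained Keras model and the PyTorch CNN from the earlier answer:

import torch

def copy_conv3d(pt_conv, keras_layer):
    # transpose Keras (kD, kH, kW, in, out) to PyTorch (out, in, kD, kH, kW)
    kernel, bias = keras_layer.get_weights()
    pt_conv.weight.data = torch.from_numpy(kernel.transpose(4, 3, 0, 1, 2)).float()
    pt_conv.bias.data = torch.from_numpy(bias).float()

def copy_dense(pt_linear, keras_layer):
    # transpose Keras (in, out) to PyTorch (out, in)
    kernel, bias = keras_layer.get_weights()
    pt_linear.weight.data = torch.from_numpy(kernel.T).float()
    pt_linear.bias.data = torch.from_numpy(bias).float()

copy_conv3d(pt_model.conv1, keras_model.get_layer('conv1'))
copy_conv3d(pt_model.conv2, keras_model.get_layer('conv2'))
copy_conv3d(pt_model.conv3, keras_model.get_layer('conv3'))
copy_dense(pt_model.linear1, keras_model.get_layer('dense1'))
copy_dense(pt_model.linear2, keras_model.get_layer('dense2'))
copy_dense(pt_model.linear3, keras_model.get_layer('softmax'))

Since the Keras model uses data_format='channels_first', its Flatten sees the same (C, D, H, W) ordering as the PyTorch view, so the dense1 weights line up without any extra permutation.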
I have recently started learning about image segmentation and UNet. I am trying to do multi-class image segmentation where I have 7 classes; the input is a (256, 256, 3) RGB image and the output is a (256, 256, 1) grayscale image where each intensity value corresponds to one class. I am doing pixel-wise softmax, and I am using sparse categorical cross-entropy so as to avoid one-hot encoding.
def soft1(x):
    return keras.activations.softmax(x, axis=-1)

def conv2d_block(input_tensor, n_filters, kernel_size=3, batchnorm=True):
    x = Conv2D(filters=n_filters, kernel_size=(kernel_size, kernel_size),
               kernel_initializer='he_normal', padding='same')(input_tensor)
    if batchnorm:
        x = BatchNormalization()(x)
    x = Activation('relu')(x)
    # note: this second Conv2D is also applied to input_tensor, so the first
    # Conv2D above ends up disconnected (hence the skipped conv2d_* indices in
    # the summary below); applying it to x would give the usual U-Net block
    x = Conv2D(filters=n_filters, kernel_size=(kernel_size, kernel_size),
               kernel_initializer='he_normal', padding='same')(input_tensor)
    if batchnorm:
        x = BatchNormalization()(x)
    x = Activation('relu')(x)
    return x

def get_unet(input_img, n_classes, n_filters=16, dropout=0.1, batchnorm=True):
    # Contracting Path
    c1 = conv2d_block(input_img, n_filters * 1, kernel_size=3, batchnorm=batchnorm)
    p1 = MaxPooling2D((2, 2))(c1)
    p1 = Dropout(dropout)(p1)
    c2 = conv2d_block(p1, n_filters * 2, kernel_size=3, batchnorm=batchnorm)
    p2 = MaxPooling2D((2, 2))(c2)
    p2 = Dropout(dropout)(p2)
    c3 = conv2d_block(p2, n_filters * 4, kernel_size=3, batchnorm=batchnorm)
    p3 = MaxPooling2D((2, 2))(c3)
    p3 = Dropout(dropout)(p3)
    c4 = conv2d_block(p3, n_filters * 8, kernel_size=3, batchnorm=batchnorm)
    p4 = MaxPooling2D((2, 2))(c4)
    p4 = Dropout(dropout)(p4)
    c5 = conv2d_block(p4, n_filters=n_filters * 16, kernel_size=3, batchnorm=batchnorm)

    # Expansive Path
    u6 = Conv2DTranspose(n_filters * 8, (3, 3), strides=(2, 2), padding='same')(c5)
    u6 = concatenate([u6, c4])
    u6 = Dropout(dropout)(u6)
    c6 = conv2d_block(u6, n_filters * 8, kernel_size=3, batchnorm=batchnorm)
    u7 = Conv2DTranspose(n_filters * 4, (3, 3), strides=(2, 2), padding='same')(c6)
    u7 = concatenate([u7, c3])
    u7 = Dropout(dropout)(u7)
    c7 = conv2d_block(u7, n_filters * 4, kernel_size=3, batchnorm=batchnorm)
    u8 = Conv2DTranspose(n_filters * 2, (3, 3), strides=(2, 2), padding='same')(c7)
    u8 = concatenate([u8, c2])
    u8 = Dropout(dropout)(u8)
    c8 = conv2d_block(u8, n_filters * 2, kernel_size=3, batchnorm=batchnorm)
    u9 = Conv2DTranspose(n_filters * 1, (3, 3), strides=(2, 2), padding='same')(c8)
    u9 = concatenate([u9, c1])
    u9 = Dropout(dropout)(u9)
    c9 = conv2d_block(u9, n_filters * 1, kernel_size=3, batchnorm=batchnorm)

    outputs = Conv2D(n_classes, (1, 1))(c9)
    outputs = Reshape((image_height * image_width, 1, n_classes),
                      input_shape=(image_height, image_width, n_classes))(outputs)
    outputs = Activation(soft1)(outputs)
    model = Model(inputs=[input_img], outputs=[outputs])
    print(outputs.shape)
    return model
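For context, a minimal sketch of how this builder might be compiled and fed, assuming image_height = image_width = 256 as in the question (the compile line is illustrative; the question does not show it):

input_img = Input((256, 256, 3))
model = get_unet(input_img, n_classes=7)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# the model outputs (batch, 65536, 1, 7) softmax scores, so with sparse
# categorical cross-entropy the integer masks must be reshaped to match the
# flattened layout, e.g. (batch, 65536, 1), rather than fed as (batch, 256, 256, 1)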
My Model Summary is:
Model: "model_2"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_12 (InputLayer) (None, 256, 256, 3) 0
__________________________________________________________________________________________________
conv2d_211 (Conv2D) (None, 256, 256, 16) 448 input_12[0][0]
__________________________________________________________________________________________________
batch_normalization_200 (BatchN (None, 256, 256, 16) 64 conv2d_211[0][0]
__________________________________________________________________________________________________
activation_204 (Activation) (None, 256, 256, 16) 0 batch_normalization_200[0][0]
__________________________________________________________________________________________________
max_pooling2d_45 (MaxPooling2D) (None, 128, 128, 16) 0 activation_204[0][0]
__________________________________________________________________________________________________
dropout_89 (Dropout) (None, 128, 128, 16) 0 max_pooling2d_45[0][0]
__________________________________________________________________________________________________
conv2d_213 (Conv2D) (None, 128, 128, 32) 4640 dropout_89[0][0]
__________________________________________________________________________________________________
batch_normalization_202 (BatchN (None, 128, 128, 32) 128 conv2d_213[0][0]
__________________________________________________________________________________________________
activation_206 (Activation) (None, 128, 128, 32) 0 batch_normalization_202[0][0]
__________________________________________________________________________________________________
max_pooling2d_46 (MaxPooling2D) (None, 64, 64, 32) 0 activation_206[0][0]
__________________________________________________________________________________________________
dropout_90 (Dropout) (None, 64, 64, 32) 0 max_pooling2d_46[0][0]
__________________________________________________________________________________________________
conv2d_215 (Conv2D) (None, 64, 64, 64) 18496 dropout_90[0][0]
__________________________________________________________________________________________________
batch_normalization_204 (BatchN (None, 64, 64, 64) 256 conv2d_215[0][0]
__________________________________________________________________________________________________
activation_208 (Activation) (None, 64, 64, 64) 0 batch_normalization_204[0][0]
__________________________________________________________________________________________________
max_pooling2d_47 (MaxPooling2D) (None, 32, 32, 64) 0 activation_208[0][0]
__________________________________________________________________________________________________
dropout_91 (Dropout) (None, 32, 32, 64) 0 max_pooling2d_47[0][0]
__________________________________________________________________________________________________
conv2d_217 (Conv2D) (None, 32, 32, 128) 73856 dropout_91[0][0]
__________________________________________________________________________________________________
batch_normalization_206 (BatchN (None, 32, 32, 128) 512 conv2d_217[0][0]
__________________________________________________________________________________________________
activation_210 (Activation) (None, 32, 32, 128) 0 batch_normalization_206[0][0]
__________________________________________________________________________________________________
max_pooling2d_48 (MaxPooling2D) (None, 16, 16, 128) 0 activation_210[0][0]
__________________________________________________________________________________________________
dropout_92 (Dropout) (None, 16, 16, 128) 0 max_pooling2d_48[0][0]
__________________________________________________________________________________________________
conv2d_219 (Conv2D) (None, 16, 16, 256) 295168 dropout_92[0][0]
__________________________________________________________________________________________________
batch_normalization_208 (BatchN (None, 16, 16, 256) 1024 conv2d_219[0][0]
__________________________________________________________________________________________________
activation_212 (Activation) (None, 16, 16, 256) 0 batch_normalization_208[0][0]
__________________________________________________________________________________________________
conv2d_transpose_45 (Conv2DTran (None, 32, 32, 128) 295040 activation_212[0][0]
__________________________________________________________________________________________________
concatenate_45 (Concatenate) (None, 32, 32, 256) 0 conv2d_transpose_45[0][0]
activation_210[0][0]
__________________________________________________________________________________________________
dropout_93 (Dropout) (None, 32, 32, 256) 0 concatenate_45[0][0]
__________________________________________________________________________________________________
conv2d_221 (Conv2D) (None, 32, 32, 128) 295040 dropout_93[0][0]
__________________________________________________________________________________________________
batch_normalization_210 (BatchN (None, 32, 32, 128) 512 conv2d_221[0][0]
__________________________________________________________________________________________________
activation_214 (Activation) (None, 32, 32, 128) 0 batch_normalization_210[0][0]
__________________________________________________________________________________________________
conv2d_transpose_46 (Conv2DTran (None, 64, 64, 64) 73792 activation_214[0][0]
__________________________________________________________________________________________________
concatenate_46 (Concatenate) (None, 64, 64, 128) 0 conv2d_transpose_46[0][0]
activation_208[0][0]
__________________________________________________________________________________________________
dropout_94 (Dropout) (None, 64, 64, 128) 0 concatenate_46[0][0]
__________________________________________________________________________________________________
conv2d_223 (Conv2D) (None, 64, 64, 64) 73792 dropout_94[0][0]
__________________________________________________________________________________________________
batch_normalization_212 (BatchN (None, 64, 64, 64) 256 conv2d_223[0][0]
__________________________________________________________________________________________________
activation_216 (Activation) (None, 64, 64, 64) 0 batch_normalization_212[0][0]
__________________________________________________________________________________________________
conv2d_transpose_47 (Conv2DTran (None, 128, 128, 32) 18464 activation_216[0][0]
__________________________________________________________________________________________________
concatenate_47 (Concatenate) (None, 128, 128, 64) 0 conv2d_transpose_47[0][0]
activation_206[0][0]
__________________________________________________________________________________________________
dropout_95 (Dropout) (None, 128, 128, 64) 0 concatenate_47[0][0]
__________________________________________________________________________________________________
conv2d_225 (Conv2D) (None, 128, 128, 32) 18464 dropout_95[0][0]
__________________________________________________________________________________________________
batch_normalization_214 (BatchN (None, 128, 128, 32) 128 conv2d_225[0][0]
__________________________________________________________________________________________________
activation_218 (Activation) (None, 128, 128, 32) 0 batch_normalization_214[0][0]
__________________________________________________________________________________________________
conv2d_transpose_48 (Conv2DTran (None, 256, 256, 16) 4624 activation_218[0][0]
__________________________________________________________________________________________________
concatenate_48 (Concatenate) (None, 256, 256, 32) 0 conv2d_transpose_48[0][0]
activation_204[0][0]
__________________________________________________________________________________________________
dropout_96 (Dropout) (None, 256, 256, 32) 0 concatenate_48[0][0]
__________________________________________________________________________________________________
conv2d_227 (Conv2D) (None, 256, 256, 16) 4624 dropout_96[0][0]
__________________________________________________________________________________________________
batch_normalization_216 (BatchN (None, 256, 256, 16) 64 conv2d_227[0][0]
__________________________________________________________________________________________________
activation_220 (Activation) (None, 256, 256, 16) 0 batch_normalization_216[0][0]
__________________________________________________________________________________________________
conv2d_228 (Conv2D) (None, 256, 256, 7) 119 activation_220[0][0]
__________________________________________________________________________________________________
reshape_12 (Reshape) (None, 65536, 1, 7) 0 conv2d_228[0][0]
__________________________________________________________________________________________________
activation_221 (Activation) (None, 65536, 1, 7) 0 reshape_12[0][0]
==================================================================================================
Total params: 1,179,511
Trainable params: 1,178,039
Non-trainable params: 1,472
__________________________________________________________________________________________________
Is my model right? Shouldn't the final output be (65536, 1, 1), since I am using softmax?
The code compiles, but the dice coefficient is very low.
Your model should end in (256, 256, 7).
That is 7 class scores per pixel, and that shape agrees with your output images, which are (256, 256, 1). This will work only with 'sparse_categorical_crossentropy' or a custom loss.
So, up to conv2d_228 the model seems fine (didn't look in detail, though).
There is no need for anything that comes after that convolution.
You can place the softmax directly in conv2d_228 or directly after it.
y_train should be (256, 256, 1) for this (a sketch follows).
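A minimal sketch of that simplified head, following this answer (everything up to c9 unchanged; the compile line is illustrative):

outputs = Conv2D(n_classes, (1, 1), activation='softmax')(c9)  # (None, 256, 256, 7)
model = Model(inputs=[input_img], outputs=[outputs])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# y_train stays (256, 256, 1), holding one integer class label per pixel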
Each position of your output in fact represents one pixel of your image. For each pixel you get a 1×7 output. Since these are softmax values between 0 and 1, the entry for the desired class fires, and that gives you the segmentation. If the output were (65536, 1, 1) you would have a dense (label) representation rather than a categorical one.
My CNN model contains convolution layers and dense layers. I am able to visualize the images and filters of the convolution layers with the help of the code below, but I am unable to see the output images after the dense layers (only images, because there are no filters there). When I try the code below I get this error:
File "<ipython-input-25-e8e4d4494672>", line 35, in <module>
num_of_featuremaps=feature_maps.shape[2]
IndexError: tuple index out of range
#and after that some blank space
The code is the following:
def get_featuremaps(model, layer_idx, X_batch):
    get_activations = K.function([model.layers[0].input, K.learning_phase()],
                                 [model.layers[layer_idx].output, ])
    activations = get_activations([X_batch, 0])
    return activations

layer_num = 11
filter_num = 0

test_image = x[0]
test_image_show = test_image[:, :, 0]
plt.axis('off')
test_image = np.expand_dims(test_image, axis=0)
print(test_image.shape)

activations = get_featuremaps(model, int(layer_num), test_image)
print(np.shape(activations))

feature_maps = activations[0][0]
print(np.shape(feature_maps))

if K.image_dim_ordering() == 'th':
    feature_maps = np.rollaxis((np.rollaxis(feature_maps, 2, 0)), 2, 0)
print(feature_maps.shape)

fig = plt.figure(figsize=(16, 16))
#plt.imshow(feature_maps[:,:,filter_num],cmap='gray')
#plt.savefig("featuremaps-layer-{}".format(layer_num) + "-filternum-{}".format(filter_num)+'.jpg')

# for a dense layer (layer_num = 11 here), feature_maps is a 1-D vector of
# shape (units,), so shape[2] raises the IndexError shown above
num_of_featuremaps = feature_maps.shape[2]
fig = plt.figure(figsize=(16, 16))
plt.title("featuremaps-layer-{}".format(layer_num))
subplot_num = int(np.ceil(np.sqrt(num_of_featuremaps)))
for i in range(int(num_of_featuremaps)):
    ax = fig.add_subplot(subplot_num, subplot_num, i + 1)
    ax.imshow(feature_maps[:, :, i], cmap='gray')
    plt.xticks([])
    plt.yticks([])
plt.tight_layout()
plt.show()
from mpl_toolkits.axes_grid1 import make_axes_locatable

def nice_imshow(ax, data, vmin=None, vmax=None, cmap=None):
    """Wrapper around pl.imshow"""
    if cmap is None:
        cmap = cm.jet
    if vmin is None:
        vmin = data.min()
    if vmax is None:
        vmax = data.max()
    divider = make_axes_locatable(ax)
    cax = divider.append_axes("right", size="5%", pad=0.05)
    im = ax.imshow(data, vmin=vmin, vmax=vmax, interpolation='nearest', cmap=cmap)
    pl.colorbar(im, cax=cax)
The model looks like the following:
Layer (type) Output Shape Param #
=================================================================
conv2d_37 (Conv2D) (None, 49, 49, 32) 160
_________________________________________________________________
conv2d_38 (Conv2D) (None, 48, 48, 32) 4128
_________________________________________________________________
max_pooling2d_19 (MaxPooling (None, 24, 24, 32) 0
_________________________________________________________________
dropout_28 (Dropout) (None, 24, 24, 32) 0
_________________________________________________________________
conv2d_39 (Conv2D) (None, 23, 23, 64) 8256
_________________________________________________________________
conv2d_40 (Conv2D) (None, 22, 22, 64) 16448
_________________________________________________________________
max_pooling2d_20 (MaxPooling (None, 11, 11, 64) 0
_________________________________________________________________
dropout_29 (Dropout) (None, 11, 11, 64) 0
_________________________________________________________________
flatten_10 (Flatten) (None, 7744) 0
_________________________________________________________________
dense_19 (Dense) (None, 256) 1982720
_________________________________________________________________
dropout_30 (Dropout) (None, 256) 0
_________________________________________________________________
dense_20 (Dense) (None, 2) 514
=================================================================
Total params: 2,012,226
Trainable params: 2,012,226
Non-trainable params: 0
_____________________________________
where pl is pylab and plt is matplotlib.pyplot.
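An editorial sketch for the dense-layer question above: a dense layer yields a 1-D activation vector rather than 2-D feature maps, so it can be drawn as a bar chart, or reshaped into a grid when the unit count is a perfect square. This reuses get_featuremaps and test_image from the question; the layer index is illustrative (dense_19 sits at index 9 in this model):

dense_layer_num = 9                           # a Dense layer, not a Conv2D one
activations = get_featuremaps(model, dense_layer_num, test_image)
dense_vector = activations[0][0]              # shape (units,), here (256,)

plt.figure(figsize=(16, 4))
plt.title("dense activations, layer {}".format(dense_layer_num))
plt.bar(range(len(dense_vector)), dense_vector)

# alternatively, show the vector as a grid when units is a perfect square
side = int(np.sqrt(len(dense_vector)))        # 16 for 256 units
if side * side == len(dense_vector):
    plt.figure(figsize=(4, 4))
    plt.imshow(dense_vector.reshape(side, side), cmap='gray')
plt.show()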