Convert a simple CNN from Keras to PyTorch

Can anyone please help me convert this model to PyTorch? I already tried to convert from Keras to PyTorch following "How can I convert this Keras CNN model to a PyTorch version?", but the training results were different. Thank you.
input_3d = (1, 64, 96, 96)
pool_3d = (2, 2, 2)
model = Sequential()
model.add(Convolution3D(8, 3, 3, 3, name='conv1', input_shape=input_3d,
                        data_format='channels_first'))
model.add(MaxPooling3D(pool_size=pool_3d, name='pool1'))
model.add(Convolution3D(8, 3, 3, 3, name='conv2', data_format='channels_first'))
model.add(MaxPooling3D(pool_size=pool_3d, name='pool2'))
model.add(Convolution3D(8, 3, 3, 3, name='conv3', data_format='channels_first'))
model.add(MaxPooling3D(pool_size=pool_3d, name='pool3'))
model.add(Flatten())
model.add(Dense(2000, activation='relu', name='dense1'))
model.add(Dropout(0.5, name='dropout1'))
model.add(Dense(500, activation='relu', name='dense2'))
model.add(Dropout(0.5, name='dropout2'))
model.add(Dense(3, activation='softmax', name='softmax'))
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv1 (Conv3D) (None, 8, 60, 94, 94) 224
_________________________________________________________________
pool1 (MaxPooling3D) (None, 8, 30, 47, 47) 0
_________________________________________________________________
conv2 (Conv3D) (None, 8, 28, 45, 45) 1736
_________________________________________________________________
pool2 (MaxPooling3D) (None, 8, 14, 22, 22) 0
_________________________________________________________________
conv3 (Conv3D) (None, 8, 12, 20, 20) 1736
_________________________________________________________________
pool3 (MaxPooling3D) (None, 8, 6, 10, 10) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4800) 0
_________________________________________________________________
dense1 (Dense) (None, 2000) 9602000
_________________________________________________________________
dropout1 (Dropout) (None, 2000) 0
_________________________________________________________________
dense2 (Dense) (None, 500) 1000500
_________________________________________________________________
dropout2 (Dropout) (None, 500) 0
_________________________________________________________________
softmax (Dense) (None, 3) 1503
=================================================================

Your PyTorch equivalent of the Keras model would look like this:
import torch
import torch.nn as nn

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.maxpool = nn.MaxPool3d((2, 2, 2))
        self.conv1 = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3)
        self.conv2 = nn.Conv3d(in_channels=8, out_channels=8, kernel_size=3)
        self.conv3 = nn.Conv3d(in_channels=8, out_channels=8, kernel_size=3)
        self.linear1 = nn.Linear(4800, 2000)
        self.dropout1 = nn.Dropout(0.5)  # plain Dropout: the Keras model drops flat feature vectors
        self.linear2 = nn.Linear(2000, 500)
        self.dropout2 = nn.Dropout(0.5)
        self.linear3 = nn.Linear(500, 3)

    def forward(self, x):
        out = self.maxpool(self.conv1(x))
        out = self.maxpool(self.conv2(out))
        out = self.maxpool(self.conv3(out))
        # Flattening process
        b, c, d, h, w = out.size()  # batch_size, channels, depth, height, width
        out = out.view(b, c * d * h * w)
        # dense1 and dense2 use ReLU in the Keras model, so apply it here too
        out = self.dropout1(torch.relu(self.linear1(out)))
        out = self.dropout2(torch.relu(self.linear2(out)))
        out = self.linear3(out)
        out = torch.softmax(out, 1)
        return out
A driver program to test the model:
inputs = torch.randn(8, 1, 64, 96, 96)
model = CNN()
outputs = model(inputs)
print(outputs.shape) # torch.Size([8, 3])

You can save the Keras weights and reload them in PyTorch.
The steps are:
Step 0: Train a model in Keras.
Step 1: Recreate and initialize your model architecture in PyTorch.
Step 2: Import your Keras model and copy the weights.
Step 3: Load those weights onto your PyTorch model.
Step 4: Test and save your PyTorch model.
You can follow the example here: https://gereshes.com/2019/06/24/how-to-transfer-a-simple-keras-model-to-pytorch-the-hard-way/
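For steps 2 and 3, the main gotcha is axis order. Below is a minimal sketch for the model above, assuming keras_model is the trained Keras model and model is the CNN defined earlier (layer names as in the summary). Keras stores Conv3D kernels as (kD, kH, kW, in, out) while PyTorch Conv3d weights are (out, in, kD, kH, kW); Keras Dense kernels are (in, out) while nn.Linear weights are (out, in).
import numpy as np
import torch

def copy_conv(keras_layer, torch_conv):
    # Keras kernel (kD, kH, kW, in, out) -> PyTorch weight (out, in, kD, kH, kW)
    kernel, bias = keras_layer.get_weights()
    torch_conv.weight.data = torch.from_numpy(np.transpose(kernel, (4, 3, 0, 1, 2)).copy())
    torch_conv.bias.data = torch.from_numpy(bias)

def copy_dense(keras_layer, torch_linear):
    # Keras kernel (in, out) -> PyTorch weight (out, in)
    kernel, bias = keras_layer.get_weights()
    torch_linear.weight.data = torch.from_numpy(kernel.T.copy())
    torch_linear.bias.data = torch.from_numpy(bias)

copy_conv(keras_model.get_layer('conv1'), model.conv1)
copy_conv(keras_model.get_layer('conv2'), model.conv2)
copy_conv(keras_model.get_layer('conv3'), model.conv3)
copy_dense(keras_model.get_layer('dense1'), model.linear1)
copy_dense(keras_model.get_layer('dense2'), model.linear2)
copy_dense(keras_model.get_layer('softmax'), model.linear3)
Because the data format is channels_first, Keras's Flatten should produce the same feature ordering as PyTorch's view, so the dense weights line up without reindexing.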

Related

AutoEncoder Conv1D layer changes the output shape and causes ValueError: Dimensions must be equal

I'm trying to build an autoencoder with the following configuration:
x = Input(shape=(36,1))
# Encoder
conv1_1 = Conv1D(16, 3, activation='relu', padding='same')(x)
pool1 = MaxPooling1D(2)(conv1_1)
conv1_2 = Conv1D(8, 3, activation='relu', padding='same')(pool1)
pool2 = MaxPooling1D(2)(conv1_2)
conv1_3 = Conv1D(8, 3, activation='relu', padding='same')(pool2)
h = MaxPooling1D(3)(conv1_3)
# Decoder
conv2_1 = Conv1D(8,3, activation='relu', padding='same')(h)
up1 = UpSampling1D(3)(conv2_1)
conv2_2 = Conv1D(8,3, activation='relu', padding='same')(up1)
up2 = UpSampling1D(2)(conv2_2)
conv2_3 = Conv1D(16,3, activation='relu')(up2)
up3 = UpSampling1D(2)(conv2_3)
r = Conv1D(1,3, activation='sigmoid', padding='same')(up3)
The summary is:
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 36, 1)] 0
conv1d (Conv1D) (None, 36, 16) 64
max_pooling1d (MaxPooling1D (None, 18, 16) 0
)
conv1d_1 (Conv1D) (None, 18, 8) 392
max_pooling1d_1 (MaxPooling (None, 9, 8) 0
1D)
conv1d_2 (Conv1D) (None, 9, 8) 200
max_pooling1d_2 (MaxPooling (None, 3, 8) 0
1D)
conv1d_3 (Conv1D) (None, 3, 8) 200
up_sampling1d (UpSampling1D (None, 9, 8) 0
)
conv1d_4 (Conv1D) (None, 9, 8) 200
up_sampling1d_1 (UpSampling (None, 18, 8) 0
1D)
conv1d_5 (Conv1D) (None, 16, 16) 400 <-------
up_sampling1d_2 (UpSampling (None, 32, 16) 0
1D)
conv1d_6 (Conv1D) (None, 32, 1) 49
As you can see, I put the arrow where the output shape changes, and I cannot understand why; it makes the output differ from the input.
What can I do? Is there a way to work out the right parameters given the input shape?
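The culprit is conv2_3: it is the only decoder Conv1D without padding='same', so with kernel size 3 and the default 'valid' padding the length shrinks from 18 to 18 - 3 + 1 = 16, and UpSampling1D(2) then gives 32 instead of 36. A sketch of the one-line fix:
# Restore padding='same' so the length stays 18; up3 then yields (None, 36, 16)
# and r becomes (None, 36, 1), matching the input shape.
conv2_3 = Conv1D(16, 3, activation='relu', padding='same')(up2)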

Unable to replicate Keras model in PyTorch

I am currently working on a DL project where I would like to reproduce the work from an existing research paper. It is about detecting sleep episodes from EEG data using a CNN model. The source code uses Keras, and I am having a hard time reproducing the model in PyTorch.
The existing Keras model runs very fast, and after 2 epochs it has the following stats (this is a 4-class classification problem):
Training loss = 0.431
Training accuracy = 0.842
On validation set:
Kappa score per class = [0.373 0.5571 0.033 0.129]
Overall kappa = 0.319
accuracy = 0.687
precision = [0.993 0.514 0.029 0.115 ]
recall = [0.696 0.719 0.180 0.573]
f1 = [0.818 0.600 0.05 0.192]
I created the PyTorch version of the model and ran it on the same data. It runs much slower, and after 6 epochs this is what I got:
Training loss = 0.379899
Training accuracy = 0.869
On validation set:
Kappa score per class = [ 0.111 0.079 -0.016 0.040]
Overall kappa = 0.078
accuracy = 0.559
precision = [0.682 0.432 0.012 0.105]
recall = [0.797 0.106 0.035 0.155 ]
f1 = [0.735 0.170 0.018 0.125]
The only differences between these two models are in the initialization, optimizers, and loss function:
The Keras model uses the Nadam optimizer; the PyTorch model uses SGD (I am not aware that Nadam is available in PyTorch).
The Keras model uses glorot_normal for kernel initialization; the PyTorch model uses xavier_uniform.
The Keras model uses the "categorical cross entropy" loss function with softmax as the last output layer; the PyTorch model uses "CrossEntropyLoss". I do not use Softmax in the last layer since CrossEntropyLoss internally applies LogSoftmax.
I have been spending a lot of time trying to understand why my PyTorch model performs much worse, to no avail. I would appreciate it if gurus and experts could advise here.
Here is the Keras model:
def build_model(data_dim, n_channels, n_cl):
    eeg_channels = 1
    act_conv = 'relu'
    init_conv = 'glorot_normal'
    dp_conv = 0.3

    def cnn_block(input_shape):
        input = Input(shape=input_shape)
        x = GaussianNoise(0.0005)(input)
        x = Conv2D(32, (3, 1), strides=(1, 1), padding='same', kernel_initializer=init_conv)(x)
        x = BatchNormalization()(x)
        x = Activation(act_conv)(x)
        x = MaxPooling2D(pool_size=(2, 1), padding='same')(x)
        x = Conv2D(64, (3, 1), strides=(1, 1), padding='same', kernel_initializer=init_conv)(x)
        x = BatchNormalization()(x)
        x = Activation(act_conv)(x)
        x = MaxPooling2D(pool_size=(2, 1), padding='same')(x)
        for i in range(4):
            x = Conv2D(128, (3, 1), strides=(1, 1), padding='same', kernel_initializer=init_conv)(x)
            x = BatchNormalization()(x)
            x = Activation(act_conv)(x)
            x = MaxPooling2D(pool_size=(2, 1), padding='same')(x)
        for i in range(6):
            x = Conv2D(256, (3, 1), strides=(1, 1), padding='same', kernel_initializer=init_conv)(x)
            x = BatchNormalization()(x)
            x = Activation(act_conv)(x)
            x = MaxPooling2D(pool_size=(2, 1), padding='same')(x)
        flatten1 = Flatten()(x)
        cnn_eeg = Model(inputs=input, outputs=flatten1)
        return cnn_eeg

    hidden_units1 = 256
    dp_dense = 0.5
    eeg_channels = 1
    eog_channels = 2
    input_eeg = Input(shape=(data_dim, 1, 3))
    cnn_eeg = cnn_block((data_dim, 1, 3))
    x_eeg = cnn_eeg(input_eeg)
    x = BatchNormalization()(x_eeg)
    x = Dropout(dp_dense)(x)
    x = Dense(units=hidden_units1, activation=act_conv, kernel_initializer=init_conv)(x)
    x = BatchNormalization()(x)
    x = Dropout(dp_dense)(x)
    predictions = Dense(units=n_cl, activation='softmax', kernel_initializer=init_conv)(x)
    model = Model(inputs=[input_eeg], outputs=[predictions])
    return [cnn_eeg, model]
The model is used as follows:
[cnn_eeg, model] = build_model(data_dim, n_channels, n_cl)
Nadam = optimizers.Nadam()
model.compile(optimizer='Nadam', loss='categorical_crossentropy', metrics=['accuracy'], sample_weight_mode=None)
print(cnn_eeg.summary())
print(model.summary())
model.fit_generator(generator_train, steps_per_epoch = steps_per_epoch, class_weight = weight, epochs = 1, verbose=1, callbacks=[history], initial_epoch=0 )
Printout of the model:
Layer (type) Output Shape Param #
input_2 (InputLayer) [(None, 3200, 1, 3)] 0
gaussian_noise (GaussianNois (None, 3200, 1, 3) 0
conv2d (Conv2D) (None, 3200, 1, 32) 320
batch_normalization (BatchNo (None, 3200, 1, 32) 128
activation (Activation) (None, 3200, 1, 32) 0
max_pooling2d (MaxPooling2D) (None, 1600, 1, 32) 0
conv2d_1 (Conv2D) (None, 1600, 1, 64) 6208
batch_normalization_1 (Batch (None, 1600, 1, 64) 256
activation_1 (Activation) (None, 1600, 1, 64) 0
max_pooling2d_1 (MaxPooling2 (None, 800, 1, 64) 0
conv2d_2 (Conv2D) (None, 800, 1, 128) 24704
batch_normalization_2 (Batch (None, 800, 1, 128) 512
activation_2 (Activation) (None, 800, 1, 128) 0
max_pooling2d_2 (MaxPooling2 (None, 400, 1, 128) 0
conv2d_3 (Conv2D) (None, 400, 1, 128) 49280
batch_normalization_3 (Batch (None, 400, 1, 128) 512
activation_3 (Activation) (None, 400, 1, 128) 0
max_pooling2d_3 (MaxPooling2 (None, 200, 1, 128) 0
conv2d_4 (Conv2D) (None, 200, 1, 128) 49280
batch_normalization_4 (Batch (None, 200, 1, 128) 512
activation_4 (Activation) (None, 200, 1, 128) 0
max_pooling2d_4 (MaxPooling2 (None, 100, 1, 128) 0
conv2d_5 (Conv2D) (None, 100, 1, 128) 49280
batch_normalization_5 (Batch (None, 100, 1, 128) 512
activation_5 (Activation) (None, 100, 1, 128) 0
max_pooling2d_5 (MaxPooling2 (None, 50, 1, 128) 0
conv2d_6 (Conv2D) (None, 50, 1, 256) 98560
batch_normalization_6 (Batch (None, 50, 1, 256) 1024
activation_6 (Activation) (None, 50, 1, 256) 0
max_pooling2d_6 (MaxPooling2 (None, 25, 1, 256) 0
conv2d_7 (Conv2D) (None, 25, 1, 256) 196864
batch_normalization_7 (Batch (None, 25, 1, 256) 1024
activation_7 (Activation) (None, 25, 1, 256) 0
max_pooling2d_7 (MaxPooling2 (None, 13, 1, 256) 0
conv2d_8 (Conv2D) (None, 13, 1, 256) 196864
batch_normalization_8 (Batch (None, 13, 1, 256) 1024
activation_8 (Activation) (None, 13, 1, 256) 0
max_pooling2d_8 (MaxPooling2 (None, 7, 1, 256) 0
conv2d_9 (Conv2D) (None, 7, 1, 256) 196864
batch_normalization_9 (Batch (None, 7, 1, 256) 1024
activation_9 (Activation) (None, 7, 1, 256) 0
max_pooling2d_9 (MaxPooling2 (None, 4, 1, 256) 0
conv2d_10 (Conv2D) (None, 4, 1, 256) 196864
batch_normalization_10 (Batc (None, 4, 1, 256) 1024
activation_10 (Activation) (None, 4, 1, 256) 0
max_pooling2d_10 (MaxPooling (None, 2, 1, 256) 0
conv2d_11 (Conv2D) (None, 2, 1, 256) 196864
batch_normalization_11 (Batc (None, 2, 1, 256) 1024
activation_11 (Activation) (None, 2, 1, 256) 0
max_pooling2d_11 (MaxPooling (None, 1, 1, 256) 0
flatten (Flatten) (None, 256) 0
Total params: 1,270,528
Trainable params: 1,266,240
Non-trainable params: 4,288
None
Model: "model_1"
Layer (type) Output Shape Param #
input_1 (InputLayer) [(None, 3200, 1, 3)] 0
model (Functional) (None, 256) 1270528
batch_normalization_12 (Batc (None, 256) 1024
dropout (Dropout) (None, 256) 0
dense (Dense) (None, 256) 65792
batch_normalization_13 (Batc (None, 256) 1024
dropout_1 (Dropout) (None, 256) 0
dense_1 (Dense) (None, 4) 1028
Total params: 1,339,396
Trainable params: 1,334,084
Non-trainable params: 5,312
Below is my PyTorch model:
class MSECNN16s(nn.Module):
    def __init__(self):
        # input shape = (batchsize, 3, 1, windowsize=16*200=3200)
        super(MSECNN16s, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=(1, 3), padding=(0, 1))
        self.conv2 = nn.Conv2d(32, 64, kernel_size=(1, 3), padding=(0, 1))
        self.conv3 = nn.Conv2d(64, 128, kernel_size=(1, 3), padding=(0, 1))
        self.conv4 = nn.ModuleList()
        for x in range(3):
            conv = nn.Conv2d(128, 128, kernel_size=(1, 3), padding=(0, 1))
            nn.init.xavier_uniform_(conv.weight)
            nn.init.zeros_(conv.bias)
            self.conv4.append(conv)
        self.conv5 = nn.Conv2d(128, 256, kernel_size=(1, 3), padding=(0, 1))
        self.conv6 = nn.ModuleList()
        for x in range(5):
            conv = nn.Conv2d(256, 256, kernel_size=(1, 3), padding=(0, 1))
            nn.init.xavier_uniform_(conv.weight)
            nn.init.zeros_(conv.bias)
            self.conv6.append(conv)
        self.fc1 = nn.Linear(256, 256)
        self.fc2 = nn.Linear(256, 4)
        nn.init.xavier_uniform_(self.conv1.weight)
        nn.init.zeros_(self.conv1.bias)
        nn.init.xavier_uniform_(self.conv2.weight)
        nn.init.zeros_(self.conv2.bias)
        nn.init.xavier_uniform_(self.conv3.weight)
        nn.init.zeros_(self.conv3.bias)
        nn.init.xavier_uniform_(self.conv5.weight)
        nn.init.zeros_(self.conv5.bias)
        nn.init.xavier_uniform_(self.fc1.weight)
        nn.init.zeros_(self.fc1.bias)
        nn.init.xavier_uniform_(self.fc2.weight)
        nn.init.zeros_(self.fc2.bias)

    def forward(self, x):
        # x = (batchsize, 1, 3, windowsize=16*200)
        device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        x = x.permute(0, 2, 1, 3)  # convert to (batchsize, 3, 1, 3200)
        std = 0.0005
        x = (torch.randn_like(x) * std) + x
        x = self.conv1(x)  # output: (batchsize, 32, 1, 3200)
        x = nn.BatchNorm2d(32).to(device)(x)
        x = F.relu(x)
        x = nn.MaxPool2d(kernel_size=(1, 2))(x)  # output: (batchsize, 32, 1, 1600)
        x = self.conv2(x)  # output: (batchsize, 64, 1, 1600)
        x = nn.BatchNorm2d(64).to(device)(x)
        x = F.relu(x)
        x = nn.MaxPool2d(kernel_size=(1, 2))(x)  # output: (batchsize, 64, 1, 800)
        x = self.conv3(x)  # output: (batchsize, 128, 1, 800)
        x = nn.BatchNorm2d(128).to(device)(x)
        x = F.relu(x)
        x = nn.MaxPool2d(kernel_size=(1, 2))(x)  # output: (batchsize, 128, 1, 400)
        for conv in self.conv4:
            x = conv(x)
            x = nn.BatchNorm2d(128).to(device)(x)
            x = F.relu(x)
            padding = (0, 0)
            if x.shape[-1] % 2 > 0:
                padding = (0, 1)
            x = nn.MaxPool2d(kernel_size=(1, 2), padding=padding)(x)  # output widths: 200 -> 100 -> 50
        x = self.conv5(x)  # output: (batchsize, 256, 1, 50)
        x = nn.BatchNorm2d(256).to(device)(x)
        x = F.relu(x)
        x = nn.MaxPool2d(kernel_size=(1, 2))(x)  # output: (batchsize, 256, 1, 25)
        for conv in self.conv6:
            x = conv(x)
            x = nn.BatchNorm2d(256).to(device)(x)
            x = F.relu(x)
            padding = (0, 0)
            if x.shape[-1] % 2 > 0:
                padding = (0, 1)
            x = nn.MaxPool2d(kernel_size=(1, 2), padding=padding)(x)  # output widths: 13 -> 7 -> 4 -> 2 -> 1
        # x is (batchsize, 256, 1, 1)
        x = x.squeeze()  # x is (batchsize, 256)
        x = nn.BatchNorm1d(256).to(device)(x)
        x = nn.Dropout(p=0.5)(x)
        x = self.fc1(x)
        x = F.relu(x)
        x = nn.BatchNorm1d(256).to(device)(x)
        x = nn.Dropout(p=0.5)(x)
        x = self.fc2(x)
        return x
with the following optimizer and criterion:
model = MSECNN16s()
model.to(device)
criterion = nn.CrossEntropyLoss(weight=torch.Tensor([1.0, 11.1, 102.9, 38.1]).to(device))
# load the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0001 )
Any advice is appreciated! Many thanks!
Regards,
Edwin
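For what it's worth, two hedged observations on the PyTorch version above. First, recent PyTorch releases do ship Nadam as torch.optim.NAdam, so the optimizers need not differ. Second, the BatchNorm and Dropout modules are constructed inside forward(), so fresh, randomly initialized normalization layers are created on every call: their parameters are never registered with the model, never trained, and never switched to eval mode, which by itself could explain the gap. The usual pattern is to create them once in __init__, as in this minimal sketch (names are illustrative):
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    # BatchNorm is created once here, so its parameters are trained,
    # respond to model.eval(), and move with model.to(device), unlike
    # modules built anew inside forward().
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=(1, 3), padding=(0, 1))
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        # ceil_mode=True mimics Keras MaxPooling2D padding='same' on odd
        # widths (200 -> 100 -> 50 -> 25 -> 13 -> 7 -> 4 -> 2 -> 1).
        return F.max_pool2d(F.relu(self.bn(self.conv(x))), kernel_size=(1, 2), ceil_mode=True)

# Optimizer matching the Keras side (NAdam's default lr is 2e-3, as in Keras Nadam):
# optimizer = torch.optim.NAdam(model.parameters(), lr=2e-3)
Also note that x.squeeze() drops the batch dimension when the batch size is 1; x.flatten(1) is a safer equivalent of the Keras Flatten here.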

Shapes Incompatible in Keras with CNN

I am implementing a network that takes a 2D image and outputs 3D binary voxels for it.
I am using an autoencoder with an LSTM module.
The current shapes of the images and voxels are as follows:
print(x_train.shape)
print(y_train.shape)
>>> (792, 127, 127, 3)
>>> (792, 32, 32, 32)
792 RGB images, 127 x 127
792 corresponding voxel grids, each a 3D binary tensor (32 x 32 x 32)
Running the following encoder model:
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, LeakyReLU, MaxPooling2D, Dense, Flatten, Conv3D, MaxPool3D, GRU, Reshape, UpSampling3D
from tensorflow import keras
enc_filter = [96, 128, 256, 256, 256, 256]
fc_filters = [1024]
model = Sequential()
epochs = 5
batch_size = 24
input_shape=(127,127,3)
model.add(Conv2D(enc_filter[0], kernel_size=(7, 7), strides=(1,1),activation='relu',input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(LeakyReLU(alpha=0.1))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.SGD(lr=0.01),
              metrics=['accuracy'])
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs)
yields the following:
ValueError: Shapes (24, 32, 32, 32) and (24, 1024) are incompatible
Can someone explain why the shapes are incompatible? I tried removing layers and testing others, but everything yields compatibility issues.
Your model has a dense layer with 1024 outputs, but you are passing a (32, 32, 32)-shaped array as the target.
You need to reshape your model output so that it has the proper shape.
This is a dummy model; you need to change the parameters to find a suitable architecture.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, LeakyReLU, MaxPooling2D, Dense, Flatten, Conv3D, MaxPool3D, GRU, Reshape, UpSampling3D
from tensorflow import keras
import numpy as np
# dummy data
x_train = np.random.randn(792, 127, 127, 3)
y_train = np.random.randn(792, 32, 32, 32)
enc_filter = [96, 128, 256, 2]
fc_filters = [1024]
model = Sequential()
epochs = 5
batch_size = 24
input_shape=(127,127,3)
model.add(Conv2D(enc_filter[0], kernel_size=(7, 7), strides=(1,1),activation='relu',input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(LeakyReLU(alpha=0.1))
model.add(Conv2D(enc_filter[1], kernel_size=(7, 7), strides=(1,1),activation='relu',input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(LeakyReLU(alpha=0.1))
model.add(Conv2D(enc_filter[2], kernel_size=(7, 7), strides=(1,1),activation='relu',input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(LeakyReLU(alpha=0.1))
model.add(Conv2D(enc_filter[3], kernel_size=(7, 7), strides=(1,1),activation='relu',input_shape=input_shape)) # bottolneck
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(LeakyReLU(alpha=0.1))
model.add(Flatten())
model.add(Dense(32*32*32, activation='relu'))
model.add(Reshape((32,32,32)))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.SGD(lr=0.01),
              metrics=['accuracy'])
model.summary()
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs)
Model: "sequential_10"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_24 (Conv2D) (None, 121, 121, 96) 14208
_________________________________________________________________
max_pooling2d_24 (MaxPooling (None, 60, 60, 96) 0
_________________________________________________________________
leaky_re_lu_24 (LeakyReLU) (None, 60, 60, 96) 0
_________________________________________________________________
conv2d_25 (Conv2D) (None, 54, 54, 128) 602240
_________________________________________________________________
max_pooling2d_25 (MaxPooling (None, 27, 27, 128) 0
_________________________________________________________________
leaky_re_lu_25 (LeakyReLU) (None, 27, 27, 128) 0
_________________________________________________________________
conv2d_26 (Conv2D) (None, 21, 21, 256) 1605888
_________________________________________________________________
max_pooling2d_26 (MaxPooling (None, 10, 10, 256) 0
_________________________________________________________________
leaky_re_lu_26 (LeakyReLU) (None, 10, 10, 256) 0
_________________________________________________________________
conv2d_27 (Conv2D) (None, 4, 4, 2) 25090
_________________________________________________________________
max_pooling2d_27 (MaxPooling (None, 2, 2, 2) 0
_________________________________________________________________
leaky_re_lu_27 (LeakyReLU) (None, 2, 2, 2) 0
_________________________________________________________________
flatten_10 (Flatten) (None, 8) 0
_________________________________________________________________
dense_1 (Dense) (None, 32768) 294912
_________________________________________________________________
reshape_10 (Reshape) (None, 32, 32, 32) 0
=================================================================
Total params: 2,542,338
Trainable params: 2,542,338
Non-trainable params: 0
In the summary, you can see that I added a dense layer with 32x32x32 neurons and then reshaped it.
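A side note, as an assumption about the task rather than part of the answer above: if each voxel is an independent 0/1 occupancy target, a sigmoid head with binary_crossentropy is usually a better match than categorical_crossentropy, which treats the last axis as mutually exclusive classes. Only the head changes:
model.add(Dense(32 * 32 * 32, activation='sigmoid'))  # per-voxel occupancy probability
model.add(Reshape((32, 32, 32)))
model.compile(loss=keras.losses.binary_crossentropy,
              optimizer=keras.optimizers.SGD(lr=0.01),
              metrics=['accuracy'])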

Custom Loss Function in TF2.0

For an image segmentation problem, I need to write a custom loss function. I am getting the error below.
Code Base: https://www.tensorflow.org/tutorials/images/segmentation
Last layer:
Conv2DTrans (128,128,2) [Note that in my case it is only 2 values]
def call(self, y_true, y_pred):
    y_true = y_true.numpy()
    .....
Error:
AttributeError: 'Tensor' object has no attribute 'numpy'
I tried py_function and numpy_function, but both return the same error.
I also tried:
with tf.compat.v1.Session() as sess:
    for i, j in enumerate(sess.run(y_true), sess.run(y_pred)):
Current Model Layers:
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_82 (InputLayer) [(None, 128, 128, 3) 0
__________________________________________________________________________________________________
model_80 (Model) [(None, 64, 64, 96), 1841984 input_82[0][0]
__________________________________________________________________________________________________
sequential_160 (Sequential) (None, 8, 8, 512) 1476608 model_80[1][4]
__________________________________________________________________________________________________
concatenate_160 (Concatenate) (None, 8, 8, 1088) 0 sequential_160[0][0]
model_80[1][3]
__________________________________________________________________________________________________
sequential_161 (Sequential) (None, 16, 16, 256) 2507776 concatenate_160[0][0]
__________________________________________________________________________________________________
concatenate_161 (Concatenate) (None, 16, 16, 448) 0 sequential_161[0][0]
model_80[1][2]
__________________________________________________________________________________________________
sequential_162 (Sequential) (None, 32, 32, 128) 516608 concatenate_161[0][0]
__________________________________________________________________________________________________
concatenate_162 (Concatenate) (None, 32, 32, 272) 0 sequential_162[0][0]
model_80[1][1]
__________________________________________________________________________________________________
sequential_163 (Sequential) (None, 64, 64, 64) 156928 concatenate_162[0][0]
__________________________________________________________________________________________________
concatenate_163 (Concatenate) (None, 64, 64, 160) 0 sequential_163[0][0]
model_80[1][0]
__________________________________________________________________________________________________
conv2d_transpose_204 (Conv2DTra (None, 128, 128, 2) 2882 concatenate_163[0][0]
==================================================================================================
I need a NumPy array because I want to focus more on the 1s and not on the zeros; right now the metric and accuracy are overwhelmed by the presence of so many zeros.
def tumor_loss(y_true, y_pred):
    y_true = y_true.reshape((SHAPE, SHAPE))
    y_pred = y_pred.reshape((SHAPE, SHAPE))
    y_true_ind = np.where(y_true == 1)[1]
    y_pred_ind = np.where(y_pred == 1)[1]
    if np.array_equal(y_true_ind, y_pred_ind):
        return 0
    if y_true_ind.shape[0] > y_pred_ind.shape[0]:
        return y_true_ind.shape[0] - np.setdiff1d(y_true_ind, y_pred_ind).shape[0]
    else:
        return y_true_ind.shape[0] - np.setdiff1d(y_pred_ind, y_true_ind).shape[0]
If you are running on TF version >= 2.0 and using the Keras API, try compiling with:
model.compile(loss=custom_loss, optimizer='adam', run_eagerly=True)
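A minimal, self-contained sketch of why this helps (the model and data here are dummies): with run_eagerly=True, y_true and y_pred arrive as EagerTensors, so .numpy() works for inspection or masking logic. Note that the returned value must still be built from TF ops if gradients are to flow back to the weights.
import numpy as np
import tensorflow as tf

def custom_loss(y_true, y_pred):
    # Works only because of run_eagerly=True: these are EagerTensors.
    n_positives = float(y_true.numpy().sum())  # plain numpy, fine for inspection
    # Keep the returned value in TF ops so the gradient tape can trace it.
    return tf.reduce_mean(tf.square(y_true - y_pred)) / (1.0 + n_positives)

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(loss=custom_loss, optimizer='adam', run_eagerly=True)
model.fit(np.random.rand(8, 4), np.random.rand(8, 1), epochs=1, verbose=0)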

Keras to Pytorch model translation and input size

I am following a Keras tutorial and want to shadow it in PyTorch, so I am translating. I'm not strongly familiar with either and am coming unstuck on the input size parameter especially, but also on the final layer: do I need another Linear layer? Can anyone translate the following to a PyTorch Sequential definition?
visible = Input(shape=(64,64,1))
conv1 = Conv2D(32, kernel_size=4, activation='relu')(visible)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
conv2 = Conv2D(16, kernel_size=4, activation='relu')(pool1)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
hidden1 = Dense(10, activation='relu')(pool2)
output = Dense(1, activation='sigmoid')(hidden1)
model = Model(inputs=visible, outputs=output)
This is the output of the model:
Layer (type) Output Shape Param #
_________________________________________________________________
input_1 (InputLayer) (None, 64, 64, 1) 0
conv2d_1 (Conv2D) (None, 61, 61, 32) 544
max_pooling2d_1 (MaxPooling2 (None, 30, 30, 32) 0
conv2d_2 (Conv2D) (None, 27, 27, 16) 8208
max_pooling2d_2 (MaxPooling2 (None, 13, 13, 16) 0
dense_1 (Dense) (None, 13, 13, 10) 170
dense_2 (Dense) (None, 13, 13, 1) 11
Total params: 8,933
Trainable params: 8,933
Non-trainable params: 0
What I have worked out lacks a specification for the shape of the input, and I am also a bit perplexed by the translation of stride in the specified Keras model, as it uses stride 2 in the MaxPooling2D but doesn't specify it elsewhere; it is perhaps a toy example.
model = nn.Sequential(
    nn.Conv2d(1, 32, 4),
    nn.ReLU(),
    nn.MaxPool2d(2, 2),
    nn.Conv2d(1, 16, 4),
    nn.ReLU(),
    nn.MaxPool2d(2, 2),
    nn.Linear(10, 1),
    nn.Sigmoid(),
)
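A hedged sketch of a closer translation, assuming the goal is to reproduce the summary above. Three points: Keras MaxPooling2D defaults its stride to the pool size, hence the stride of 2; the second Conv2d must take the 32 channels produced by the first (the attempt above passes 1); and Keras Dense applied to a 4-D tensor acts on the last axis only, which is why the summary keeps the 13x13 grid, so 1x1 convolutions reproduce that per-position Dense in PyTorch's channels-first layout:
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=4),   # (N, 32, 61, 61)
    nn.ReLU(),
    nn.MaxPool2d(2, 2),                # (N, 32, 30, 30)
    nn.Conv2d(32, 16, kernel_size=4),  # (N, 16, 27, 27)
    nn.ReLU(),
    nn.MaxPool2d(2, 2),                # (N, 16, 13, 13)
    nn.Conv2d(16, 10, kernel_size=1),  # per-position Dense(10): (N, 10, 13, 13)
    nn.ReLU(),
    nn.Conv2d(10, 1, kernel_size=1),   # per-position Dense(1): (N, 1, 13, 13)
    nn.Sigmoid(),
)

x = torch.randn(8, 1, 64, 64)
print(model(x).shape)  # torch.Size([8, 1, 13, 13])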
