Multi-class segmentation using PyTorch and U-Net

I am doing land-use classification with 4 classes. From the softmax output of my U-Net model I get a tensor of shape [8, 4, 128, 128], while my mask image is [8, 1, 128, 128]. To calculate the loss I used nn.CrossEntropyLoss. Do I have to make any modification to get good results? I find that the output for a test image is a 4-channel image, which does not look like the mask image.
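A minimal sketch of how those shapes are usually reconciled (the tensors here are placeholders for the ones in the question): nn.CrossEntropyLoss expects raw logits rather than softmax outputs and a target of class indices without a channel dimension, and the 4-channel prediction collapses back to a mask-shaped image via an argmax over the class dimension.
import torch
import torch.nn as nn

logits = torch.randn(8, 4, 128, 128)            # raw U-Net outputs (no softmax)
mask = torch.randint(0, 4, (8, 1, 128, 128))    # class indices 0..3 per pixel

criterion = nn.CrossEntropyLoss()
# nn.CrossEntropyLoss wants logits of shape [N, C, H, W] and a LongTensor
# target of shape [N, H, W], so drop the mask's channel dimension:
loss = criterion(logits, mask.squeeze(1).long())

# To compare against the 1-channel mask, collapse the 4 channels
# with an argmax over the class dimension:
pred_mask = logits.argmax(dim=1, keepdim=True)  # shape [8, 1, 128, 128]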

Related

My confusion matrix shows 16x16 instead of 8x8

from sklearn.metrics import confusion_matrix
import seaborn as sns

cm = confusion_matrix(test_labels, prediction_RF)
print(cm)
sns.heatmap(cm, annot=True)
I'm using a CNN as a feature extractor and then feeding the features into a Random Forest. I previously used the same procedure on a dummy CNN model, and the output confusion matrix was 8x8 (since I have 8 classes). When I try to see my confusion matrix on the VGG16 model, I get a 16x16 matrix, and I also get 0.0 accuracy on VGG16, even though the predictions themselves look decent. The matrix I get on VGG16 is given below.
Matrix on VGG16
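The thread leaves this unanswered, but a 16x16 matrix combined with 0.0 accuracy usually means y_true and y_pred contain two disjoint label sets (for example string labels '0'..'7' versus integers 0..7, or original labels versus label-encoded ones), so confusion_matrix builds one row and column per distinct value. A hedged diagnostic sketch, reusing the variable names from the snippet above:
import numpy as np

# If these two sets do not overlap, sklearn sees 16 distinct classes and
# every prediction lands off the diagonal, giving 0.0 accuracy:
print(np.unique(test_labels))
print(np.unique(prediction_RF))

# Hypothetical fix, assuming a LabelEncoder `le` was fitted on the labels:
# prediction_RF = le.inverse_transform(prediction_RF)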

Output of CNN should be an image

I am pretty new to deep learning, so I have one question:
Assume an input grayscale image of shape (128,128,1). The target (output) is also a (128,128,1)-sized image, e.g. for segmentation, depth prediction etc. Usually, with valid padding, the size of the image shrinks after several convolution layers.
What are decent (maybe not the most sophisticated) ways to keep the size or predict a same-sized image? Is it via same padding? Is it via transpose convolution or upsampling? Should I use an FCN at the end and reshape it to the image size? I am using PyTorch. I would be glad for any hints, because I didn't find much on the internet.
Best
TLDR; You want to look at Deconv networks (Convolution transpose) that help regenerate an image using convolution operations. You want to build an encoder-decoder convolution architecture that compresses an image to a latent representation using convolutions and then decodes an image from this compressed representation. For image segmentation, a popular architecture is U-net.
NOTE: I can't answer for PyTorch, so I will be sharing the TensorFlow equivalent. Please feel free to ignore the code, but since you are looking for the concept, I can help you with what you need to solve this.
You are trying to generate an image as the output of the network.
A series of convolution operations helps to downsample an image. Since you need a 2D matrix (grayscale image) as output, you want to upsample as well. Such a network is called a deconv network.
The first series of layers convolves over the input, 'flattening' it into a vector of channels. The next set of layers uses 2D conv-transpose (deconv) operations to change the channels back into a 2D matrix (grayscale image).
Here is sample code that shows how you can take a (128,128,1) image through a deconv net and back out to a (128,128,1) image.
You can find the corresponding transposed-convolution layer in PyTorch as torch.nn.ConvTranspose2d.
from tensorflow.keras import layers, Model, utils

inp = layers.Input((128,128,1))           # (128,128,1) grayscale input
x = layers.Conv2D(2, (3,3))(inp)          # (126,126,2) \
x = layers.Conv2D(4, (3,3))(x)            # (124,124,4)  > convolution part
x = layers.Conv2D(6, (3,3))(x)            # (122,122,6) /
##########
x = layers.Conv2DTranspose(6, (3,3))(x)   # (124,124,6) \
x = layers.Conv2DTranspose(4, (3,3))(x)   # (126,126,4)  > deconvolution part
out = layers.Conv2DTranspose(1, (3,3))(x) # (128,128,1) same-sized output

model = Model(inp, out)
utils.plot_model(model, show_shapes=True, show_layer_names=False)
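Since the question is about PyTorch, here is a rough equivalent of the sketch above (an assumption added for completeness, not part of the original answer): three Conv2d layers downsample and three ConvTranspose2d layers restore the (128,128) size.
import torch.nn as nn

deconv_net = nn.Sequential(
    nn.Conv2d(1, 2, kernel_size=3),           # (1,128,128) -> (2,126,126)
    nn.Conv2d(2, 4, kernel_size=3),           # -> (4,124,124)
    nn.Conv2d(4, 6, kernel_size=3),           # -> (6,122,122)
    nn.ConvTranspose2d(6, 6, kernel_size=3),  # -> (6,124,124)
    nn.ConvTranspose2d(6, 4, kernel_size=3),  # -> (4,126,126)
    nn.ConvTranspose2d(4, 1, kernel_size=3),  # -> (1,128,128)
)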
Also, if you are looking for tried and tested architectures in this domain, check out U-Net (U-Net: Convolutional Networks for Biomedical Image Segmentation). This is an encoder-decoder (conv2d, conv2d-transpose) architecture that uses a concept called skip connections to avoid information loss and generate better image segmentation masks.

Multi-class segmentation in Keras

I'm trying to implement a multi-class segmentation in Keras:
input image is grayscale (i.e. 1 channel)
ground truth image has 3 channels; each pixel is a one-hot vector of length 3
prediction comes from a standard U-Net trained with categorical_crossentropy, outputting 3 channels (softmax-ed)
What is wrong with this setup? The training loss has some weird behaviour:
in my lucky cases it behaves as expected (decreases)
90% of the time it's stuck at ~0.9
My implementation can be found here
I don't think there is anything wrong with the code: if my ground truth is 1-channel (i.e. 0s everywhere and 1s somewhere) and I use binary_crossentropy + sigmoid as the final activation, I see no weird behaviour.
I'll answer my own question. The solution is to weight each class, i.e. use a weighted cross-entropy loss.
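A minimal sketch of such a weighted categorical cross-entropy in Keras (the weight values are hypothetical; in practice they are typically set inversely proportional to each class's pixel frequency, and `model` stands for the U-Net from the question):
import tensorflow as tf
from tensorflow.keras import backend as K

class_weights = tf.constant([1.0, 5.0, 10.0])  # hypothetical per-class weights

def weighted_categorical_crossentropy(y_true, y_pred):
    y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
    # y_true is one-hot (batch, H, W, 3): pick out each pixel's class weight
    pixel_weights = tf.reduce_sum(class_weights * y_true, axis=-1)
    pixel_loss = -tf.reduce_sum(y_true * tf.math.log(y_pred), axis=-1)
    return tf.reduce_mean(pixel_weights * pixel_loss)

model.compile(optimizer='adam', loss=weighted_categorical_crossentropy)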

Multi-input & output CNN

I have the following problem:
Input: a set of 6 images
Output: a probability for each image determining whether the image is the correct one out of the 6 images
I know how to create a CNN with Keras, but not how to have multiple images as input.
How would one solve this problem?
One way I can think of is to use a pre-trained model (VGG16 etc.), extract the vectors from some intermediate layer, concatenate the 6 vectors together, then feed the result into a neural network (or some other classification model) and train it as a multi-class classification task.
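A minimal sketch of that idea in Keras (the input size, backbone choice, and layer widths are assumptions): a shared, frozen VGG16 embeds each of the 6 images, the vectors are concatenated, and a 6-way softmax scores which image is the correct one.
from tensorflow.keras import layers, Model, applications

backbone = applications.VGG16(include_top=False, pooling='avg',
                              input_shape=(128, 128, 3))
backbone.trainable = False  # use VGG16 purely as a feature extractor

inputs = [layers.Input((128, 128, 3)) for _ in range(6)]
features = [backbone(img) for img in inputs]    # one 512-d vector per image
x = layers.Concatenate()(features)
x = layers.Dense(256, activation='relu')(x)
out = layers.Dense(6, activation='softmax')(x)  # probability per image

model = Model(inputs, out)
model.compile(optimizer='adam', loss='categorical_crossentropy')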
You can also use an Autoencoder and take the anomaly detection approach.

Visualizing convoluational layers in autoencoder

I have built a variational autoencoder using 2D convolutions (Conv2D) in the encoder and decoder. I'm using Keras. In total I have 2 layers with 32 and 64 filters each, with a kernel size of 4x4 and a stride of 2x2 each. My input images are (64, 80, 1). I'm using the MSE loss. Now I would like to visualize the individual convolutional layers (i.e. what they learn), as done here.
So, first I load my model using the load_weights() function, and then I call visualize_layer(encoder, 'conv2d_1') from the above-mentioned code, where conv2d_1 is the layer name of the first convolutional layer in my encoder.
When I do so, I get the following error message:
tensorflow.python.framework.errors_impl.UnimplementedError: Fused conv implementation does not support grouped convolutions for now.
[[{{node conv2d_1/BiasAdd}}]]
When I use the VGG16 model as in the example code it works. Does somebody know how I can adapt the code to work for my case?
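The thread leaves this unanswered. One hedged guess: this error often appears when the channel count of the input fed to a layer doesn't match what the convolution expects, which TensorFlow then misreads as a grouped convolution; the example code generates random 3-channel inputs for VGG16, while this encoder expects 1-channel (64, 80) images. A sketch of the adjustment, assuming the visualization starts gradient ascent from a random image as the original code does:
import numpy as np

# Start from a random input that matches the encoder's own input shape
# (64, 80, 1) instead of VGG16's (224, 224, 3):
input_img_data = np.random.random((1, 64, 80, 1)) * 20 + 128.0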
