CNN: Softmax layer for pixel-wise classification

I want to understand in more detail how a softmax layer can look in a CNN for semantic segmentation / pixel-wise classification of an image. The CNN outputs an image of class labels, where each pixel of the original image gets a label.
After passing a test image through the network, the next-to-last layer outputs N channels at the resolution of the original image. My question is how the softmax layer transforms these N channels into the final image of labels.
Assume we have C classes (the number of possible labels). My suggestion is that for each pixel, its N neurons of the previous layer are connected to C neurons in the softmax layer, where each of the C neurons represents one class. Using the softmax activation function, the sum of the C outputs (for this pixel) equals 1 (which facilitates training of the network). Finally, each pixel is classified as the class with the highest probability (given by the softmax values).
This would mean that the softmax layer consists of C * #pixels neurons. Is my suggestion correct? I didn't find an explanation for this and hope that you can help me.
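For concreteness, this is a tiny sketch (PyTorch, with made-up shapes) of what I imagine happening per pixel:
import torch
logits = torch.randn(1, 4, 8, 8)        # (batch, C=4 class channels, H, W)
probs = torch.softmax(logits, dim=1)    # softmax across the class channels, per pixel
print(probs.sum(dim=1))                 # ~1.0 at every pixel
labels = probs.argmax(dim=1)            # (1, H, W) image of class labels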
Thanks for helping!

The answer is: the softmax layer does not transform these N channels into the final image of labels.
Assuming you have an output of N channels, your question is how to convert it into a 3-channel image for the final output.
The answer is: you don't. Each of those N channels represents a class. The way to go is to create a dummy array with the same height and width and 3 channels.
Now you first have to encode each class with a color, e.g. streets as green, cars as red, etc.
Assume that for height = 5 and width = 5, channel 7 has the max value. Now,
-> if channel 7 represents cars, then you put a red pixel in the dummy array at height = 5 and width = 5.
-> if channel 7 represents streets, then you put a green pixel in the dummy array at height = 5 and width = 5.
So you are trying to find which of the N classes a pixel belongs to, and based on that class you redraw the pixel in a unique color in the dummy array.
This dummy array is called the mask.
For example, assume the input is a brain scan and we are trying to locate the tumor area using pixel-wise classification. Here the number of classes is 2: tumor present and tumor absent. So the softmax layer outputs a 2-channel object, where channel 1 says tumor present and channel 2 says otherwise.
So whenever, for height = X and width = Y, channel 1 has the higher value, we make the pixel dummy[X][Y] white; when channel 2 has the higher value, we make it black.
After that we get a mask image. On its own the mask doesn't tell you much, but when we overlay the two images, you can see where the tumor sits in the original scan.
So basically you try to create the mask image from your N-channel output, and overlaying the mask on the input gives you the final result.
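A minimal sketch (NumPy, with a made-up color table and shapes) of turning a C-channel score map into such a colored mask:
import numpy as np

C, H, W = 3, 4, 4                     # e.g. 3 classes: background, street, car
scores = np.random.randn(C, H, W)     # per-pixel class scores (pre- or post-softmax)

labels = scores.argmax(axis=0)        # (H, W): index of the winning class per pixel

palette = np.array([[0, 0, 0],        # class 0: background -> black
                    [0, 255, 0],      # class 1: street     -> green
                    [255, 0, 0]])     # class 2: car        -> red
mask = palette[labels]                # (H, W, 3): the dummy array / mask image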

Related

Placing modified regions of interest in the original location

Let's say I have a CNN intermediate layer output tensor, call it X, with shape (B, C, H, W): batch, channels, height, and width. I extract the regions of interest (ROIs) from the tensor based on some manually chosen criteria, i.e. I have box coordinates. Assume all the ROIs have the same shape, so the ROI tensor has shape (B, N, C, h, w), where N is the number of ROIs and h and w are the height and width of an ROI. Let's call the ROI tensor Y. Now I perform a differentiable operation on Y (assume convolution); this operation does not alter the dimension or shape of the ROIs. Let's call the modified ROI tensor Y' (shape: B, N, C, h, w).
Now I want to replace the locations where Y was extracted from X with Y'. This modified X is further processed in the subsequent layers of the model. So essentially I do the following:
Y = X[location criteria]
Y’ = some_operation(Y)
X[location criteria] = Y’
The above operation changes X in place, which the PyTorch computational graph cannot keep track of. How do I modify the values of X without causing an error?
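One common workaround (a sketch, not from the question, and simplified to a single ROI) is to write into a clone of X instead of X itself, so autograd never sees an in-place edit of a tensor whose original values it may still need:
import torch

B, C, H, W = 2, 8, 32, 32
X = torch.randn(B, C, H, W, requires_grad=True)
conv = torch.nn.Conv2d(C, C, kernel_size=3, padding=1)   # shape-preserving op

y0, y1, x0, x1 = 4, 12, 10, 18        # illustrative box coordinates for one ROI
Y = X[:, :, y0:y1, x0:x1]             # extract the ROI
Y_mod = conv(Y)                       # differentiable operation, same shape

X_new = X.clone()                     # out-of-place copy keeps the graph intact
X_new[:, :, y0:y1, x0:x1] = Y_mod     # assignment into the clone is tracked by autograd
X_new.sum().backward()                # gradients flow back to X and conv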

What is the upsampling method called 'area' used for?

The PyTorch function torch.nn.functional.interpolate contains several modes for upsampling, such as: nearest, linear, bilinear, bicubic, trilinear, area.
What is the area upsampling mode used for?
As jodag said, it is resizing using adaptive average pooling. While the answer at the link aims to explain what adaptive average pooling is, I find the explanation a bit vague.
TL;DR the area mode of torch.nn.functional.interpolate is probably one of the most intuitive ways to think of when one wants to downsample an image.
You can think of it as applying an averaging Low-Pass Filter(LPF) to the original image and then sampling. Applying an LPF before sampling is to prevent potential aliasing in the downsampled image. Aliasing can result in Moiré patterns in the downscaled image.
It is probably called "area" because it (roughly) preserves the area ratio between the input and output shapes when averaging the input pixels. More specifically, every pixel in the output image will be the average of a respective region in the input image, where the area of this region is roughly the ratio between the input image's area and the output image's area.
Furthermore, the interpolate function with mode = 'area' calls the source function adaptive_avg_pool2d (implemented in C++), which assigns each pixel in the output tensor the average of all pixel intensities within a computed region of the input. That region is computed per pixel and can vary in size for different pixels. It is computed by multiplying the output pixel's height and width indices by the ratio between the input and output height and width (respectively) and then taking once the floor (for the region's starting index) and once the ceil (for the region's ending index) of the resulting value.
Here's an in-depth analysis of what happens in nn.AdaptiveAvgPool2d:
First of all, as stated there you can find the source code for adaptive average pooling (in C++) here: source
Taking a look at the function where the magic happens (or at least the magic on CPU for a single frame), static void adaptive_avg_pool2d_single_out_frame, we have 5 nested loops, running over channel dimension, then width, then height and within the body of the 3rd loop the magic happens:
First compute the region within the input image which is used to calculate the value of the current pixel (recall we had width and height loop to run over all pixels in the output).
How is this done?
Using a simple computation of start and end indices for height and width, as follows: floor((input_height/output_height) * current_output_pixel_height) for the start, ceil((input_height/output_height) * (current_output_pixel_height+1)) for the end, and similarly for the width.
Then, all that is done is to simply average the intensities of all pixels in that region and current channel and place the result in the current output pixel.
I wrote a simple Python snippet that does the same thing, in the same fashion (loops, naive) and produces equivalent results. It takes tensor a and uses adaptive average pool to resize a to shape output_shape in 2 ways - once using the built-in nn.AdaptiveAvgPool2d and once with my translation into Python of the source function in C++: static void adaptive_avg_pool2d_single_out_frame. Built-in function's result is saved into b and my translation is saved into b_hat. You can see that the results are equivalent (you can further play with the spatial shapes and validate this):
import torch
from math import floor, ceil
from torch import nn

a = torch.randn(1, 3, 15, 17)
out_shape = (10, 11)

b = nn.AdaptiveAvgPool2d(out_shape)(a)

b_hat = torch.zeros(b.shape)
for d in range(a.shape[1]):
    for w in range(b_hat.shape[3]):
        for h in range(b_hat.shape[2]):
            startW = floor(w * a.shape[3] / out_shape[1])
            endW = ceil((w + 1) * a.shape[3] / out_shape[1])
            startH = floor(h * a.shape[2] / out_shape[0])
            endH = ceil((h + 1) * a.shape[2] / out_shape[0])
            b_hat[0, d, h, w] = torch.mean(a[0, d, startH:endH, startW:endW])

'''
Prints Mean Squared Error = 0 (or a very small number, due to precision error)
as both outputs are the same, proof of output equivalence:
'''
print(nn.MSELoss()(b_hat, b))
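As an additional sanity check (a small sketch reusing the tensors above), mode='area' in interpolate should give the same result as the adaptive average pooling it wraps:
import torch.nn.functional as F

via_interpolate = F.interpolate(a, size=out_shape, mode='area')
via_adaptive = nn.AdaptiveAvgPool2d(out_shape)(a)
print(torch.allclose(via_interpolate, via_adaptive))   # expected: True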
Looking at the source code it appears area interpolation is equivalent to resizing a tensor via adaptive average pooling. You can refer to this question for an explanation of adaptive average pooling. Therefore area interpolation is more applicable to downsampling than upsampling.

Should Kernel size be same as word size in 1D Convolution?

In CNN literature, it is often illustrated that the kernel size is the same as the size of the longest word in the vocabulary list, as the kernel sweeps across a sentence.
So if we use embeddings to represent the text, shouldn't the kernel size be the same as the embedding dimension, so that it gives the same effect as sweeping word by word?
Yet I see different kernel sizes used, regardless of the word length.
Well... these are 1D convolutions, for which the kernels are 3 dimensional.
It's true that one of these 3 dimensions must match the embedding size (otherwise it would be pointless to have this size)
These three dimensions are:
(length_or_size, input_channels, output_channels)
Where:
length_or_size (kernel_size): anything you want. In the picture, there are 6 different filters with sizes 4, 4, 3, 3, 2, 2, represented by the "vertical" dimension.
input_channels (automatically the embedding_size): the size of the embedding - this is somewhat mandatory (in Keras this is automatic and almost invisible), otherwise the multiplications wouldn't use the entire embedding, which would be pointless. In the picture, the "horizontal" dimension of the filters is constantly 5 (the same as the word vector size - this is not a spatial dimension).
output_channels (filters): anything you want, but it seems the picture is talking about 1 channel only per filter, since it's totally ignored, and if represented would be something like "depth".
So, you're probably confusing which dimensions are which. When you define a conv layer, you do:
Conv1D(filters = output_channels, kernel_size=length_or_size)
While the input_channels come from the embedding (or the previous layer) automatically.
Creating this model in Keras
To create this model, it would be something like:
from keras.layers import Input, Embedding, Conv1D, GlobalMaxPooling1D, Concatenate, Dense
from keras.models import Model

sentence_length = 7
embedding_size = 5
total_words_in_dic = 10000   # vocabulary size - any example value works here

inputs = Input((sentence_length,))
out = Embedding(total_words_in_dic, embedding_size)(inputs)
Now, supposing these filters have 1 channel only (since the image doesn't seem to consider their depth...), we can join them in pairs of 2 channels:
size1 = 4
size2 = 3
size3 = 2
output_channels = 2
activation_function = 'relu'   # not specified in the picture; any activation would do

out1 = Conv1D(output_channels, size1, activation=activation_function)(out)
out2 = Conv1D(output_channels, size2, activation=activation_function)(out)
out3 = Conv1D(output_channels, size3, activation=activation_function)(out)
Now, let's collapse the spatial dimensions and remain with the two channels:
out1 = GlobalMaxPooling1D()(out1)
out2 = GlobalMaxPooling1D()(out2)
out3 = GlobalMaxPooling1D()(out3)
And create the 6 channel output:
out = Concatenate()([out1,out2,out3])
Now there is a mystery jump from 6 channels to 2 channels which cannot be explained by the picture. Perhaps they're applying a Dense layer or something:
#????????????????
out = Dense(2, activation='softmax')(out)
model = Model(inputs, out)
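As a quick check under the same assumptions, printing the kernel shapes of the three convolutions should confirm the (length_or_size, input_channels, output_channels) layout, with the middle axis equal to the embedding size:
for layer in model.layers:
    if isinstance(layer, Conv1D):
        print(layer.name, layer.get_weights()[0].shape)
# Expected: (4, 5, 2), (3, 5, 2) and (2, 5, 2)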

Weights Matrix Final Fully Connected Layer

My question is, I think, too simple, but it's giving me headaches. I think I'm missing either something conceptually in Neural Networks or Tensorflow is returning some wrong layer.
I have a network in which last layer outputs 4800 units. The penultimate layer has 2000 units. I expect my weight matrix for last layer to have the shape (4800, 2000) but when I print out the shape in Tensorflow I see (2000, 4800). Please can someone confirm which shape of weight matrix the last layer should have? Depending on the answer, I can further debug the issue. Thanks.
Conceptually, a neural network layer is often written like y = W*x where * is matrix multiplication, x is an input vector and y an output vector. If x has 2000 units and y 4800, then indeed W should have size (4800, 2000), i.e. 4800 rows and 2000 columns.
However, in implementations we usually work on a batch of inputs X. Say X is (b, 2000) where b is your batch size. We don't want to transform each element of X individually by doing W*x as above since this would be inefficient.
Instead we would like to transform all inputs at the same time. This can be done via Y = X*W.T where W.T is the transpose of W. You can work out that this essentially applies W*x to each row of X (i.e. each input). Y is then a (b, 4800) matrix containing all transformed inputs.
In Tensorflow, the weight matrix is simply saved in this transposed state, since it is usually the form that is needed anyway. Thus, we have a matrix with shape (2000, 4800) (the shape of W.T).
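A quick way to see this (a sketch using tf.keras as an example; the question may use a different TensorFlow API, but the convention is the same):
import tensorflow as tf

layer = tf.keras.layers.Dense(4800)
layer.build(input_shape=(None, 2000))   # 2000 input units from the penultimate layer
print(layer.kernel.shape)               # (2000, 4800), i.e. the shape of W.T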

Output shape of a convolutional layer

I built a convolutional neural network in Keras.
model.add(Convolution1D(nb_filter=111, filter_length=5, border_mode='valid', activation="relu", subsample_length=1))
According to the CS231 lecture, a convolution operation creates a feature map (i.e. activation map) for each filter, which are then stacked together. In my case the convolutional layer has a 300-dimensional input. Hence, I expect the following computation:
Each filter has a window size of 5. Consequently, each filter produces 300-5+1=296 convolutions.
As there are 111 filters there should be a 111*296 output of the convolutional layer.
However, the actual output shapes look differently:
convolutional_layer = model.layers[1]
conv_weights, conv_biases = convolutional_layer.get_weights()
print(conv_weights.shape) # (5, 1, 300, 111)
print(conv_biases.shape) # (111,)
The shape of the bias values makes sense, because there is one bias value for each filter. However, I do not understand the shape of the weights. Apparently, the first dimension depends on the filter size. The third dimension is the number of input neurons, which should have been reduced by the convolution. The last dimension probably refers to the number of filters. This does not make sense, because how should I easily get the feature map for a specific filter?
Keras either uses Theano or Tensorflow as a backend. According to their documentation the output of a convolving operation is a 4d tensor (batch_size, output_channel, output_rows, output_columns).
Can somebody explain the output shape to me in accordance with the CS231 lecture?
Your weight dimensions have to be [filter_height, filter_width, in_channels, out_channels].
With your example, I think the input channel count, which is the depth of the input, is 300, and you want the output channel count to be 111.
The total number of filters is 111, not 300*111.
As you said yourself, there is one bias per filter, so 111 biases for 111 filters.
Each of the 111 filters will produce a convolution on the input.
The weight shape in your case means that you are using a kernel patch of shape 5*1.
The third dimension means that the depth of the input feature map is 300.
The fourth dimension means that the depth of the output feature map is 111.
Actually it makes very good sense. You learn the weights of the filters. Each filter in turn produces an output (a.k.a. an activation map) for your input data.
The first two axes of your conv_weights.shape are the dimensions of the filter that is being learned (as you already mentioned). Your filter_length is 5 x 1. Your input has 300 channels and you want 111 filters, so you end up with 300 * 111 weight slices of size 5 x 1.
I assume that the weights of filter #0 for input channel #0 are something like your_weights[:, :, 0, 0].
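To make the layout concrete, here is an illustrative sketch with the current Keras Conv1D API (the names and the sequence length are assumptions, chosen to mirror the interpretation above: 300 input channels, 111 filters). The feature map of a specific filter is then just one channel of the layer's output:
import numpy as np
from keras.layers import Input, Conv1D
from keras.models import Model

seq_len = 20                                # arbitrary example sequence length
inp = Input((seq_len, 300))                 # 300 = input channels (depth)
conv = Conv1D(filters=111, kernel_size=5)(inp)
m = Model(inp, conv)

w, b = m.layers[1].get_weights()
print(w.shape)    # (5, 300, 111): (kernel_size, in_channels, out_channels)
print(b.shape)    # (111,)

out = m.predict(np.random.randn(1, seq_len, 300))
print(out.shape)              # (1, 16, 111): seq_len - 5 + 1 = 16 steps, one map per filter
feature_map_0 = out[0, :, 0]  # the activation map of filter #0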
