Keras multi-class semantic segmentation labels

For semantic segmentation, you generally end up with the last layer being something like
output = Conv2D(num_classes, (1, 1), activation='softmax')
My question is: how do I prepare the labels for this? For example, suppose I have 10 classes to identify, each marked with a different colour in the label images. For each label image, do I need to mask out one particular colour and turn it into a grayscale image, so that I can compare it with one filter of the model output? Or is there a way to pass the full RGB picture in as the label?

The output of your network will be an image with 10 channels, where each pixel consists of a vector of probabilities that sum to one (due to the softmax), for example [0.1, 0.1, 0.1, 0.05, 0.05, 0.1, 0.1, 0.1, 0.1, 0.2]. You want your label images to have the same shape: an image with 10 channels, where each pixel is a binary vector with a 1 at the index of its class and 0 elsewhere. Your segmentation loss function is then the pixel-wise crossentropy.
For implementation: the softmax in Keras has an axis parameter: https://keras.io/activations/#softmax
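A minimal sketch of such an output head; the input size and hidden layer are illustrative assumptions, and Keras applies categorical_crossentropy pixel-wise for 4D outputs:
from keras.models import Model
from keras.layers import Input, Conv2D

inp = Input((128, 128, 3))  # illustrative input size
x = Conv2D(32, (3, 3), activation='relu', padding='same')(inp)
output = Conv2D(10, (1, 1), activation='softmax')(x)  # 10 class probabilities per pixel
model = Model(inp, output)
model.compile(optimizer='adam', loss='categorical_crossentropy')  # pixel-wise crossentropy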

np_utils.to_categorical(labels, num_classes)
When labels has shape (row, col), the output shape will be (row, col, num_classes).
example:
https://github.com/naomifridman/Unet_Brain_tumor_segmentation
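If your labels start out as RGB images, a minimal sketch of the conversion to one-hot labels; the palette, the stand-in label image, and the class count are illustrative assumptions:
import numpy as np
from keras.utils import to_categorical

# Hypothetical colour -> class mapping; extend with one entry per class.
palette = {0: (0, 0, 0), 1: (255, 0, 0)}
rgb_label = np.zeros((64, 64, 3), dtype=np.uint8)  # stand-in for a real (H, W, 3) label image
label = np.zeros(rgb_label.shape[:2], dtype=np.uint8)
for cls, colour in palette.items():
    label[np.all(rgb_label == colour, axis=-1)] = cls
one_hot = to_categorical(label, num_classes=10)  # shape (64, 64, 10)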

Related

Fit a Gaussian curve with a neural network using Pytorch

Suppose the following model:
import torch.nn as nn

class PGN(nn.Module):
    def __init__(self, input_size):
        super(PGN, self).__init__()
        self.linear = nn.Sequential(
            nn.Linear(in_features=input_size, out_features=128),
            nn.ReLU(),
            nn.Linear(in_features=128, out_features=1)
        )

    def forward(self, x):
        return self.linear(x)
I figure I have to modify the model to fit a 2-dimensional curve.
Is there a way to fit a Gaussian curve with mu=0 and sigma=0 using Pytorch? If so, can you show me?
A neural network can approximate an arbitrary continuous function mapping any number of inputs to outputs of any dimension.
To fit a 2-dimensional curve, your network should be fed vectors of size 2, that is, vectors of x and y coordinates. The output is a single value of size 1.
For training you must generate ground-truth data, that is, a mapping between coordinates (x and y) and the value (z). The loss function should compare this ground-truth value with the estimate of your network.
If this is just an exercise to learn PyTorch and not a real application, you can define a function that, for given x and y, outputs the Gaussian value according to your parameters.
Then during training you randomly choose an x and y, feed them to the network, and backpropagate against the true value.
For a function y = a*exp(-((x-b)^2)/(2c^2)):
Create this mathematical equation and, for some values of x (and a, b, c), compute the outputs y. This will be your training set, with x values as inputs and y values as output labels. Since this is not a linear function, you will have to experiment with the number of layers/neurons and other hyperparameters, but it will give you a good enough approximation. For different values of a, b, c, generate the corresponding data, and maybe try adding those parameters as inputs alongside x.
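A minimal training sketch along these lines, fitting z = exp(-(x^2 + y^2)/2) (i.e. mu = 0, sigma = 1; a sigma of exactly 0 would be degenerate) with the PGN model above; the sample range, batch size, and optimizer settings are illustrative assumptions:
import torch
import torch.nn as nn

model = PGN(input_size=2)  # (x, y) in, z out
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(5000):
    xy = torch.rand(64, 2) * 6 - 3  # random (x, y) pairs in [-3, 3]^2
    z = torch.exp(-(xy ** 2).sum(dim=1, keepdim=True) / 2)  # ground-truth Gaussian values
    loss = loss_fn(model(xy), z)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()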

What is the meaning of the output shape in model.summary of KERAS?

Below is an RNN model. In Keras, the output shape of the following code is (N, 10).
model = Sequential()
model.add(RNN(10, input_shape=(1, look_back)))
I know that N in the output shape (N, 10) is the batch_size. What I want to know is: what does the '10' stand for?
Yes, 10 is D_h. Basically, RNN(units) means the layer returns the vector h_t from the final timestep of the RNN, with its size being the number you specify for the units parameter, which in this case is 10. In other words, the units parameter denotes the dimensionality of the output vector of that particular layer.
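A minimal sketch to see this in model.summary(), assuming RNN above refers to a simple recurrent layer such as SimpleRNN and that look_back is, say, 3:
from keras.models import Sequential
from keras.layers import SimpleRNN

look_back = 3
model = Sequential()
model.add(SimpleRNN(10, input_shape=(1, look_back)))
model.summary()  # Output Shape: (None, 10) -> (batch_size, units)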

Does 1D Convolutional layer support variable sequence lengths?

I have a series of processed audio files I am using as input into a CNN using Keras. Does the Keras 1D Convolutional layer support variable sequence lengths? The Keras documentation makes this unclear.
https://keras.io/layers/convolutional/
At the top of the documentation it mentions you can use (None, 128) for variable-length sequences of 128-dimensional vectors. Yet at the bottom it declares that the input shape must be a
3D tensor with shape: (batch_size, steps, input_dim)
Given the following example, how should I input sequences of variable length into the network?
Let's say I have two examples (a and b) containing X 1-dimensional vectors of length 100 that I want to feed into the Conv1D layer as input:
a.shape = (100, 100)
b.shape = (200, 100)
Can I use an input shape of (2, None, 100)? Or do I need to concatenate these tensors into c, where
c.shape = (300, 100)
and then reshape it to something like
c_reshape.shape = (3, 100, 100)
where 3 is the batch size, 100 is the number of steps, and the second 100 is the input size? The documentation on the input shape is not very clear.
Keras supports variable lengths by using None in the respective dimension when defining the model.
Notice that often input_shape refers to the shape without the batch size.
So, the 3D tensor with shape (batch_size, steps, input_dim) suits perfectly a model with input_shape=(steps, input_dim).
All you need to make this model accept variable lengths is use None in the steps dimension:
input_shape=(None, input_dim)
Numpy limitation
Now, there is a numpy limitation regarding variable lengths: you cannot create a single numpy array with a shape that holds variable-length sequences.
A few solutions are available:
Pad your sequences with dummy values until they all reach the same size, so you can put them into a numpy array of shape (batch_size, length, input_dim). Use Masking layers so the dummy values are ignored (a padding sketch follows this list).
Train with separate numpy arrays of shape (1, length, input_dim), each array having its own length.
Group your sequences by size into smaller arrays.
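A minimal sketch of the padding option, using the a and b examples from the question; pad_sequences is one convenient way to do it, and the dtype and padding side are illustrative choices:
import numpy as np
from keras.preprocessing.sequence import pad_sequences

a = np.random.rand(100, 100)  # 100 steps of 100-dimensional vectors
b = np.random.rand(200, 100)  # 200 steps
batch = pad_sequences([a, b], maxlen=200, dtype='float32', padding='post')
print(batch.shape)  # (2, 200, 100)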
Be careful with layers that don't support variable sizes
In convolutional models using variable sizes you can't, for instance, use Flatten: if it were possible, the result of the flatten would itself have a variable size, and the Dense layers that follow would not be able to have a fixed number of weights.
So, instead of Flatten, you should use GlobalMaxPooling1D or GlobalAveragePooling1D layers.
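Putting it together, a minimal sketch of a variable-length Conv1D model for the 100-dimensional vectors above; the filter count and classification head are illustrative assumptions:
from keras.models import Sequential
from keras.layers import Conv1D, GlobalMaxPooling1D, Dense

model = Sequential()
model.add(Conv1D(64, 3, activation='relu', input_shape=(None, 100)))  # None = variable number of steps
model.add(GlobalMaxPooling1D())  # collapses the variable-length dimension
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy')
With this model, a and b from the question can be fed as separate arrays of shape (1, 100, 100) and (1, 200, 100).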

Convolutional NN for text input in PyTorch

I am trying to implement a text classification model using a CNN. As far as I know, for text data we should use 1D convolutions. I saw an example in PyTorch using Conv2d, but I want to know how I can apply Conv1d to text. Or is it actually not possible?
Here is my model scenario:
Number of in-channels: 1, Number of out-channels: 128
Kernel size : 3 (only want to consider trigrams)
Batch size : 16
So, I will provide tensors of shape <16, 1, 28, 300>, where 28 is the length of a sentence. I want to use Conv1d, which will give me 128 feature maps of length 26 (as I am considering trigrams).
I am not sure how to define nn.Conv1d() for this setting. I can use Conv2d, but I want to know whether it is possible to achieve the same using Conv1d.
This example of feeding Conv1d and Pool1d layers into an RNN resolved my issue.
So, I need to treat the embedding dimension as the number of in-channels when using nn.Conv1d, as follows:
import torch
import torch.nn as nn

m = nn.Conv1d(200, 10, 2)        # in_channels = 200, out_channels = 10, kernel_size = 2
input = torch.randn(10, 200, 5)  # batch = 10, 200 = embedding dim, 5 = seq length
feature_maps = m(input)
print(feature_maps.size())       # torch.Size([10, 10, 4])
Although I don't work with text data, the input tensor in its current form would only work with Conv2d. One possible way to use Conv1d would be to concatenate the embeddings into a tensor of shape, e.g., <16, 1, 28*300>. You can reshape the input with view in PyTorch.
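For the exact scenario in the question, a minimal sketch that instead drops the singleton channel dimension and treats the 300 embedding dimensions as in-channels:
import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=300, out_channels=128, kernel_size=3)
x = torch.randn(16, 28, 300)  # (batch, sentence length, embedding dim)
x = x.transpose(1, 2)         # Conv1d expects (batch, channels, seq_len)
out = conv(x)
print(out.size())             # torch.Size([16, 128, 26]): 128 trigram feature maps of length 26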

How to correctly get layer weights from Conv2D in keras?

I have a Conv2D layer defined as:
Conv2D(96, kernel_size=(5, 5),
       activation='relu',
       input_shape=(image_rows, image_cols, 1),
       kernel_initializer=initializers.glorot_normal(seed),
       bias_initializer=initializers.glorot_uniform(seed),
       padding='same',
       name='conv_1')
This is the first layer in my network.
Input dimensions are 64 by 160, image is 1 channel.
I am trying to visualize weights from this convolutional layer but not sure how to get them.
Here is how I am doing this now:
1. Call
layer.get_weights()[0]
This returns an array of shape (5, 5, 1, 96); the 1 is there because the images are 1-channel.
2. Take the 5 by 5 filters by
layer.get_weights()[0][:,:,:,j][:,:,0]
This is very ugly, but I am not sure how to simplify it; any comments are much appreciated.
I am also not sure about these 5 by 5 squares: are they actually filters?
If not, could anyone please tell me how to correctly grab the filters from the model?
I tried to display the weights like so, but only the first 25. I have the same question that you do: is this the filter, or something else? It doesn't seem to be the same kind of filters that are derived from deep belief networks or stacked RBMs.
Here are the untrained weights, visualized:
and here are the trained weights:
Strangely, there is no change after training! If you compare them, they are identical.
And then the DBN/RBM filters, layer 1 on top and layer 2 on bottom:
If I set kernel_initializer="ones", then I get filters that look good, but the net loss never decreases, even with many trial-and-error changes:
Here is the code to display the 2D conv weights/filters.
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Conv2D, Activation

ann = Sequential()
x = Conv2D(filters=64, kernel_size=(5, 5), input_shape=(32, 32, 3))
ann.add(x)
ann.add(Activation("relu"))
...
# Plot 25 of the 64 filters (first input channel) before training.
x1w = x.get_weights()[0][:, :, 0, :]
for i in range(1, 26):
    plt.subplot(5, 5, i)
    plt.imshow(x1w[:, :, i], interpolation="nearest", cmap="gray")
plt.show()

ann.fit(Xtrain, ytrain_indicator, epochs=5, batch_size=32)

# Plot the same filters again after training.
x1w = x.get_weights()[0][:, :, 0, :]
for i in range(1, 26):
    plt.subplot(5, 5, i)
    plt.imshow(x1w[:, :, i], interpolation="nearest", cmap="gray")
plt.show()
UPDATE
So I tried it again with a learning rate of 0.01 instead of 1e-6, and used images normalized between 0 and 1 instead of 0 and 255, by dividing the images by 255.0. Now the convolution filters are changing, and the output of the first convolutional filter looks like this:
You'll notice the trained filter has changed (though not by much) with a reasonable learning rate:
Here is image seven of the CIFAR-10 test set:
And here is the output of the first convolution layer:
And if I take the last convolution layer (no dense layers in between) and feed it to an untrained classifier, the accuracy is similar to classifying the raw images; but if I train the convolution layers, the output of the last convolution layer increases the accuracy of the classifier (a random forest).
So I would conclude that the convolution layers are indeed filters as well as weights.
In layer.get_weights()[0][:,:,:,:], the four dimensions of [:,:,:,:] are, respectively: the x position of the weight, the y position of the weight, the n-th input channel of the corresponding conv layer (coming from the previous layer; note that if you obtain the weights of the first conv layer on 1-channel images, this number is 1, because only one channel is fed to it), and the k-th filter or kernel of the layer. So the array shape returned by layer.get_weights()[0] can be read as: one input channel drives the layer, and 96 filters of size 5x5 are generated. If you want to reach one of the filters, say the one at index 6, you can type:
print(layer.get_weights()[0][:,:,:,6].squeeze())
However, if you need the filters of the 2nd conv layer (see the model image linked below), then notice that for each of the 32 input channels there are 64 filters. If you want the weights of any one of them, for example the weights of the filter at index 4 for the input channel at index 8, then you should type:
print(layer.get_weights()[0][:,:,8,4].squeeze())
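A worked sketch under the shapes from the question above (5x5 kernels, 1 input channel, 96 filters), where layer stands for the 'conv_1' layer:
weights, biases = layer.get_weights()
print(weights.shape)  # (5, 5, 1, 96): (kernel_h, kernel_w, in_channels, filters)
print(biases.shape)   # (96,)
filter_6 = weights[:, :, :, 6].squeeze()  # one 5x5 filter
print(filter_6.shape)  # (5, 5)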
