1D convolution is pretty simple when it is done by hand. However, I want to implement what is done here using nn.Conv1d, and it is not simple for me to do. In this example h=[1,2,-1], x=[4,1,2,5] and the output is going to be y=[4,9,0,8,8,-5]. To do it using PyTorch we need to define h=nn.Conv1d(in, out, k) and x=torch.tensor(*), and y=h(x) should be the result.
Note: please do not use nn.Conv2d to implement it.
First, you should be aware that the term "convolution" used in basically all literature related to convolutional neural networks (CNNs) actually corresponds to the correlation operation, not the convolution operation.
The only difference (for real-valued inputs) between correlation and convolution is that in convolution the kernel is flipped/mirrored before sliding it across the signal, whereas in correlation no such flipping occurs.
There are also some extra operations that convolution layers in CNNs perform that are not part of the definition of convolution. They apply an offset (a.k.a. bias), they operate on mini-batches, and they map multi-channel inputs to multi-channel outputs.
Therefore, in order to recreate a convolution operation using a convolution layer we should (i) disable bias, (ii) flip the kernel, and (iii) set batch-size, input channels, and output channels to one.
For example, a PyTorch implementation of the convolution operation using nn.Conv1d looks like this:
import torch
from torch import nn
x = torch.tensor([4, 1, 2, 5], dtype=torch.float)
k = torch.tensor([1, 2, -1], dtype=torch.float)
# Define these constants to differentiate the various usages of "1".
BATCH_SIZE, IN_CH, OUT_CH = 1, 1, 1
# Pad with len(k)-1 zeros to ensure all non-zero outputs are computed.
h = nn.Conv1d(IN_CH, OUT_CH, kernel_size=len(k), padding=len(k) - 1, bias=False)
# Copy flipped k into h.weight.
# h.weight is shape (OUT_CH, IN_CH, kernel_size), reshape k accordingly.
# Perform copy inside no_grad context to avoid autograd issues.
with torch.no_grad():
    h.weight.copy_(torch.flip(k, dims=[0]).reshape(OUT_CH, IN_CH, -1))
# Input shape to h is assumed to be (BATCH_SIZE, IN_CH, SIGNAL_LENGTH), reshape x accordingly.
# Output shape of h is (BATCH_SIZE, OUT_CH, OUTPUT_LENGTH), reshape output to 1D signal.
y = h(x.reshape(BATCH_SIZE, IN_CH, -1)).reshape(-1)
which results in
>>> print(y)
tensor([ 4., 9., 0., 8., 8., -5.], grad_fn=<ViewBackward>)
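As a quick cross-check (not part of the snippet above), NumPy's full convolution gives the same sequence:
import numpy as np
# Full ("untruncated") 1D convolution of x = [4, 1, 2, 5] with h = [1, 2, -1].
print(np.convolve([4, 1, 2, 5], [1, 2, -1]))  # [ 4  9  0  8  8 -5]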
Like what nn.Conv2d or nn.AvgPool2d do with a tensor and a kernel size, I would like to calculate the variances of a tensor with a kernel size. How can I achieve this? I guess maybe the source code of PyTorch would need to be touched?
If it's only the variance you are after, you can use the fact that
var(x) = E[x^2] - E[x]^2
Using avg_pool2d you can estimate the local average of x and of x squared:
import torch.nn.functional as nnf
running_var = nnf.avg_pool2d(x**2, kernel_size=2, stride=1) - nnf.avg_pool2d(x, kernel_size=2, stride=1)**2
However, if you want a more general method of performing "sliding window" operations, you should become familiarized with unfold and fold:
u = nnf.unfold(x, kernel_size=2, stride=1) # get all kernel_size patches as vectors
running_var2 = torch.var(u, unbiased=False, dim=1)
# reshape back to original shape ("folding")
running_var2 = running_var2.reshape(x.shape[0], 1, x.shape[2]-1, x.shape[3]-1)
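If it helps, here is a self-contained sketch (shapes are made up for illustration) checking that both approaches give the same local variances for a single-channel input:
import torch
import torch.nn.functional as nnf

# Hypothetical single-channel input; any (N, 1, H, W) tensor works.
x = torch.randn(1, 1, 8, 8)

# Method 1: var(x) = E[x^2] - E[x]^2 via average pooling.
var_pool = nnf.avg_pool2d(x**2, kernel_size=2, stride=1) - nnf.avg_pool2d(x, kernel_size=2, stride=1)**2

# Method 2: extract every 2x2 patch with unfold and take its variance.
u = nnf.unfold(x, kernel_size=2, stride=1)        # (N, C*2*2, num_patches)
var_unfold = torch.var(u, unbiased=False, dim=1)  # (N, num_patches)
var_unfold = var_unfold.reshape(x.shape[0], 1, x.shape[2] - 1, x.shape[3] - 1)

print(torch.allclose(var_pool, var_unfold, atol=1e-6))  # True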
What is the difference between LSTM and LSTMCell in Pytorch (currently version 1.1)? It seems that LSTMCell is a special case of LSTM (i.e. with only one layer, unidirectional, no dropout).
Then, what's the purpose of having both implementations? Unless I'm missing something, it's trivial to use an LSTM object as an LSTMCell (or alternatively, it's pretty easy to use multiple LSTMCells to create the LSTM object).
Yes, you can emulate one with the other; the reason for having them separate is efficiency.
LSTMCell is a cell that takes as arguments:
Input of shape batch × input dimension;
A tuple of LSTM hidden states (h, c), each of shape batch × hidden dimension.
It is a straightforward implementation of the equations.
LSTM is a layer applying an LSTM cell (or multiple LSTM cells) in a "for loop", but the loop is heavily optimized using cuDNN. Its input is
A three-dimensional tensor of inputs of shape sequence length × batch × input dimension (or batch × sequence length × input dimension if batch_first=True);
Optionally, an initial state of the LSTM, i.e., a tuple of hidden and cell states, each of shape num_layers × batch × hidden dimension (with num_layers * num_directions in the first dimension if the LSTM is bidirectional).
You might often want to use the LSTM cell in a different context than applying it over a sequence, e.g., to build an LSTM that operates over a tree-like structure. When you write a decoder in sequence-to-sequence models, you also call the cell in a loop and stop the loop when the end-of-sequence symbol is decoded.
Let me show some specific examples:
# LSTM example:
>>> rnn = nn.LSTM(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> c0 = torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
# LSTMCell example:
>>> rnn = nn.LSTMCell(10, 20)
>>> input = torch.randn(5, 3, 10)  # (time_steps, batch, input_size)
>>> hx = torch.randn(3, 20)
>>> cx = torch.randn(3, 20)
>>> output = []
>>> for i in range(5):
...     hx, cx = rnn(input[i], (hx, cx))
...     output.append(hx)
The key difference:
LSTM: the argument 2 stands for num_layers, the number of stacked recurrent layers. With seq_len = 5 and num_layers = 2, the cell computation is applied 5 * 2 = 10 times. There is no Python loop, but more cell applications.
LSTMCell: in the for loop (seq_len = 5 iterations), the hidden state produced at step i is fed into step i+1. There is only one cell, which is truly recurrent.
If we set num_layers=1 in the LSTM, or stack a second LSTMCell, the two snippets above compute the same thing.
Obviously, it is easier to apply parallel computation in the LSTM case.
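As a hedged sanity check of that claim (sizes and names below are illustrative), one can copy a single-layer LSTM's parameters into an LSTMCell and verify that looping the cell over the sequence reproduces the LSTM's output:
import torch
from torch import nn

torch.manual_seed(0)
seq_len, batch, input_size, hidden_size = 5, 3, 10, 20

lstm = nn.LSTM(input_size, hidden_size, 1)   # single layer
cell = nn.LSTMCell(input_size, hidden_size)

# Copy the layer's parameters into the cell so both compute the same function.
with torch.no_grad():
    cell.weight_ih.copy_(lstm.weight_ih_l0)
    cell.weight_hh.copy_(lstm.weight_hh_l0)
    cell.bias_ih.copy_(lstm.bias_ih_l0)
    cell.bias_hh.copy_(lstm.bias_hh_l0)

x = torch.randn(seq_len, batch, input_size)
h0 = torch.zeros(1, batch, hidden_size)
c0 = torch.zeros(1, batch, hidden_size)

# LSTM: the whole sequence in one (optimized) call.
out_lstm, _ = lstm(x, (h0, c0))

# LSTMCell: explicit Python loop over the time steps.
hx, cx = h0[0], c0[0]
outs = []
for t in range(seq_len):
    hx, cx = cell(x[t], (hx, cx))
    outs.append(hx)
out_cell = torch.stack(outs)

print(torch.allclose(out_lstm, out_cell, atol=1e-6))  # True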
I am training a NN for a regression problem, so the output layer has a linear activation function. The NN output is supposed to be between -20 and 30. My NN performs well most of the time. However, sometimes it gives output greater than 30, which is not desirable for my system. So does anyone know an activation function that can provide such a restriction on the output, or any suggestions on modifying the linear activation function for my application?
I am using Keras with a TensorFlow backend for this application.
What you can do is activate your last layer with a sigmoid (the result will be between 0 and 1) and then add a custom layer to map it to the desired range:
def get_range(input, maxx, minn):
    # Rescale each sample from its own [min, max] to [minn, maxx].
    mn, mx = K.min(input, axis=1, keepdims=True), K.max(input, axis=1, keepdims=True)
    return (maxx - minn) * (input - mn) / (mx - mn) + minn
and then add this to your network :
out = layers.Lambda(get_range, arguments={'maxx': 30, 'minn': -20})(sigmoid_output)
The output will be normalized between 'maxx' and 'minn'.
UPDATE
If you want to clip your data without normalizing all your outputs, do this instead:
def clip(input, maxx, minn):
    return K.clip(input, minn, maxx)
# Apply this to the unbounded (linear) output rather than a sigmoid output:
out = layers.Lambda(clip, arguments={'maxx': 30, 'minn': -20})(linear_output)
What you should do is normalize your target outputs to the range [-1, 1] or [0, 1], then use a tanh (for [-1, 1]) or sigmoid (for [0, 1]) activation at the output, and train the model on the normalized data.
Then you can denormalize the predictions to get values in your original range during inference.
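A minimal sketch of that approach (using tf.keras; the layer sizes, input dimension, and training data are placeholders, not taken from the question):
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

lo, hi = -20.0, 30.0

def normalize(y):          # map [-20, 30] -> [-1, 1]
    return 2.0 * (y - lo) / (hi - lo) - 1.0

def denormalize(y_norm):   # map [-1, 1] -> [-20, 30]
    return (y_norm + 1.0) / 2.0 * (hi - lo) + lo

model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(8,)),
    layers.Dense(1, activation="tanh"),   # output bounded to [-1, 1]
])
model.compile(optimizer="adam", loss="mse")

# Train on normalized targets, then denormalize predictions at inference time:
# model.fit(X_train, normalize(y_train), epochs=10)
# y_pred = denormalize(model.predict(X_test))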
My task is to visualize the plotted weights in a CNN layer. When I passed the parameters filters=32 and kernel_size=(3, 3), I was expecting the output to be 32 matrices, each of size 3x3, by using the .get_weights() function (to extract weights and biases), but I am getting a very oddly nested output.
the output is as follows:
a = model.layers[0].get_weights()
a[0][0][0]
array([[ 2.87332404e-02, -2.80513391e-02,
**... 32 values ...**,
-1.55516148e-01, -1.26494586e-01, -1.36454999e-01,
1.61165968e-02, 7.63138831e-02],
[-5.21791205e-02, 3.13560963e-02, **... 32 values ...**,
-7.63987377e-02, 7.28923678e-02, 8.98564830e-02,
-3.02852653e-02, 4.07049060e-02],
[-7.04478994e-02, 1.33816227e-02,
**... 32 values ...**, -1.99537817e-02,
-1.67200342e-01, 1.15980692e-02]], dtype=float32)
I want to know why I am getting this type of output and how I can get the weights in the expected shape. Thanks in advance.
Weights in a neural network are values that represent the connection strength between input nodes and output nodes (or nodes in the next layer).
A Conv2D layer's weights usually have shape (H, W, I, O), where:
H is kernel height
W is kernel width
I is number of input channels
O is number of output channels
Conv2D weights can be interpreted as the connection strength between a patch of the input channels and the nodes of an output filter/feature map. This way you have weights of shape (H, W) between each input channel and each output channel. It should be noted that the weights are shared among different patches of the same channel.
Consider the following convolution of an (8, 8, 1) input with a (2, 2) kernel producing an (8, 8, 1) output. The weights of this layer have shape (2, 2, 1, 1).
The same input can be used to produce 2 feature maps using two (2, 2) filters as follows. Now the shape of the weights would be (2, 2, 1, 2).
Hope this will clarify how to interpret the shape of convolutional layers.
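A quick sanity check of those shapes (a hypothetical minimal model, written with tf.keras):
from tensorflow import keras
from tensorflow.keras import layers

# Two (2, 2) filters on a single-channel 8x8 input.
m = keras.Sequential([layers.Conv2D(2, (2, 2), padding="same", input_shape=(8, 8, 1))])
w, b = m.layers[0].get_weights()
print(w.shape, b.shape)   # (2, 2, 1, 2) (2,)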
The shape of the kernel weights from a Conv2D layer is (kernel_size[0], kernel_size[1], n_input_channels, filters). So in your case
a = model.layers[0].get_weights()
print(a[0].shape)
# should print (3,3,z,32) if your input has shape (x, y, z)
If you want to print the weights from one of the filters, you can do
a[0][:,:,:,0]
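To get the 32 kernels as 3x3 matrices for visualization, one option (a sketch, assuming a single-channel input so the channel axis can be dropped) is to move the filter axis to the front:
import numpy as np

w = model.layers[0].get_weights()[0]      # shape (3, 3, n_input_channels, 32)
kernels = np.transpose(w, (3, 2, 0, 1))   # (32, n_input_channels, 3, 3)
# For a single-channel input, drop the channel axis to get 32 matrices of shape (3, 3):
if kernels.shape[1] == 1:
    kernels = kernels[:, 0]
print(kernels.shape)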
I have a series of processed audio files I am using as input into a CNN using Keras. Does the Keras 1D Convolutional layer support variable sequence lengths? The Keras documentation makes this unclear.
https://keras.io/layers/convolutional/
At the top of the documentation it mentions you can use (None, 128) for variable-length sequences of 128-dimensional vectors. Yet at the bottom it declares that the input shape must be a
3D tensor with shape: (batch_size, steps, input_dim)
Given the following example, how should I input sequences of variable length into the network?
Let's say I have two examples (a and b), each containing X 1-dimensional vectors of length 100, that I want to feed into the 1DConv layer as input:
a.shape = (100, 100)
b.shape = (200, 100)
Can I use an input shape of (2, None, 100)? Do I need to concatenate these tensors into c where
c.shape = (300, 100)
Then reshape it to be something like
c_reshape.shape = (3, 100, 100)
where 3 is the batch size, 100 is the number of steps, and the second 100 is the input size? The documentation on the input shape is not very clear.
Keras supports variable lengths by using None in the respective dimension when defining the model.
Notice that often input_shape refers to the shape without the batch size.
So, the 3D tensor with shape (batch_size, steps, input_dim) suits perfectly a model with input_shape=(steps, input_dim).
All you need to make this model accept variable lengths is use None in the steps dimension:
input_shape=(None, input_dim)
Numpy limitation
Now, there is a numpy limitation about variable lengths. You cannot create a numpy array with a shape that suits variable lengths.
A few solutions are available:
Pad your sequences with dummy values until they all reach the same size so you can put them into a numpy array of shape (batch_size, length, input_dim). Use Masking layers to ignore the dummy values.
Train with separate numpy arrays of shape (1, length, input_dim), each array having its own length.
Group your sequences by length into smaller arrays.
Be careful with layers that don't support variable sizes
In convolutional models using variable sizes you can't, for instance, use Flatten: if it were possible, the result of the flatten would have a variable size, and the following Dense layers would not be able to have a fixed number of weights. This is impossible.
So, instead of Flatten, you should start using GlobalMaxPooling1D or GlobalAveragePooling1D layers.
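Putting this together, a minimal sketch (using tf.keras; layer sizes, optimizer, and targets are illustrative, not from the question) that accepts variable-length sequences of 100-dimensional vectors and trains on one sequence at a time:
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv1D(64, kernel_size=5, activation="relu", input_shape=(None, 100)),
    layers.GlobalMaxPooling1D(),          # collapses the variable steps dimension
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Each sequence keeps its own length; train with batch size 1.
a = np.random.randn(1, 100, 100)   # (batch, steps, input_dim)
b = np.random.randn(1, 200, 100)
for x, y in [(a, np.zeros((1, 1))), (b, np.ones((1, 1)))]:
    model.train_on_batch(x, y)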