Local fully connected layer - PyTorch

Assume we have a feature representation with kN neurons before the classification layer. Now, the classification layer should produce an output layer of size N with only local connections.
That is, the i-th output neuron is computed using only the k input neurons at locations ik through ik+k-1. Hence, every k consecutive locations in the input layer (with stride k) give a single neuron value at the output.
This can be done with a locally connected layer (LocallyConnected1D) in Keras; however, PyTorch does not seem to have an equivalent.
Weight matrix in a standard linear layer: kN x N = kN^2 variables.
Weight matrix in the local linear layer: k weights for each of the N output neurons = kN variables. (For example, with k = 3 and N = 4 that is 48 variables for the dense layer versus only 12 for the local one.)

This is currently an open issue on the PyTorch issue tracker; in the meantime you can get similar behaviour using fold and unfold. See this answer:
https://github.com/pytorch/pytorch/issues/499#issuecomment-503962218
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalLinear(nn.Module):
    def __init__(self, in_features, local_features, kernel_size, padding=0, stride=1, bias=True):
        super(LocalLinear, self).__init__()
        self.kernel_size = kernel_size
        self.stride = stride
        self.padding = padding
        fold_num = (in_features + 2*padding - self.kernel_size)//self.stride + 1
        self.weight = nn.Parameter(torch.randn(fold_num, kernel_size, local_features))
        self.bias = nn.Parameter(torch.randn(fold_num, local_features)) if bias else None

    def forward(self, x: torch.Tensor):
        x = F.pad(x, [self.padding]*2, value=0)
        # extract sliding windows: (..., fold_num, kernel_size)
        x = x.unfold(-1, size=self.kernel_size, step=self.stride)
        # one small matmul per window position
        x = torch.matmul(x.unsqueeze(2), self.weight).squeeze(2)
        if self.bias is not None:
            x = x + self.bias
        return x
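A quick shape check, with hypothetical numbers (20 input features split into windows of 5 with stride 5 gives 4 output positions):

layer = LocalLinear(in_features=20, local_features=1, kernel_size=5, stride=5)
x = torch.randn(8, 20)   # batch of 8
y = layer(x)
print(y.shape)           # torch.Size([8, 4, 1])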

Related

How to do a weighted pooling in MXNet?

I want to do a 2d convolution operation that uses the same 1x2x4 weight on every channel.
(Note: the input height & width are bigger than our kernel, so I can't just use a dot product.)
How can I do this in MXNet?
I tried to reuse the same instance of a single 2d conv layer on every channel and concatenate the results, but it is incredibly slow:
def Concat(*args, axis=1, **kwargs):
    net = nn.HybridConcatenate(axis=axis, **kwargs)
    net.add(*args)
    return net

def Seq(*args):
    net = nn.HybridSequential()
    net.add(*args)
    return net

class Trim_D1(nn.HybridBlock):
    def __init__(self, from_, to, **kwargs):
        super(Trim_D1, self).__init__(**kwargs)
        self.from_ = from_
        self.to = to

    def forward(self, x):
        return x[:, self.from_:self.to]

PooPool = nn.Conv2D(kernel_size=(2, 4), strides=(2, 4), channels=1, activation=None,
                    use_bias=False, weight_initializer=mx.init.Constant(1/8))

conc = ()
for i in range(40):
    conc += Seq(
        Trim_D1(i, i+1),
        PooPool
    ),

WeightedPool = Concat(*conc)
Ideally I would also want my kernel weights to sum up to 1 in order to resemble weighted average pooling.
Edit: I think I know how to do this. I'm going to edit the Conv2D and _Conv source code so that, instead of creating a weight of CxHxW dimensions, it creates a weight of 1xHxW dimensions and broadcasts it during the convolution operation. For the weights to sum up to 1, a softmax operation additionally has to be applied.
OK, apparently the weights have out_channels x (in_channels / num_group) x H x W dimensions and broadcasting is not allowed during the convolution operation. By setting num_group equal to the number of channels, the second dimension becomes 1, so we only need to repeat the same single kernel along the first axis.
In _Conv.__init__ I discarded the first two dimensions during initialization, so the kernel is only H x W now:
self.weight = Parameter('weight', shape=wshapes[1][2:],
                        init=weight_initializer,
                        allow_deferred_init=True)
In _Conv.hybrid_forward I flatten the weight to 1D in order to apply softmax, then restore the original 2D shape. After that I expand the first two dimensions and repeat along the first one, as mentioned above:
orig_shape = weight.shape
w = mx.nd.softmax(weight.reshape(-1)).reshape(orig_shape)
w = w[None, None, :].repeat(self._kwargs['num_group'], axis=0)
act = getattr(F, self._op_name)(x, w, name='fwd', **self._kwargs)
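Incidentally, the same effect may be achievable without patching _Conv, by softmax-normalizing one shared kernel, repeating it across channels, and calling the grouped convolution operator directly. A rough sketch (shapes and names here are mine, not from the code above):

import mxnet as mx

C = 40                                                    # number of channels
base = mx.nd.random.randn(1, 1, 2, 4)                     # one shared 2x4 kernel
w = mx.nd.softmax(base.reshape(-1)).reshape(base.shape)   # weights sum to 1
w = w.repeat(repeats=C, axis=0)                           # tie the kernel across channels: (C, 1, 2, 4)
x = mx.nd.random.randn(8, C, 16, 32)
# depthwise (grouped) convolution: every channel is pooled with the same kernel
y = mx.nd.Convolution(data=x, weight=w, kernel=(2, 4), stride=(2, 4),
                      num_filter=C, num_group=C, no_bias=True)
print(y.shape)  # (8, 40, 8, 8)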

Understanding the architecture of an LSTM for sequence classification

I have this model in PyTorch that I have been using for sequence classification.
class RoBERT_Model(nn.Module):
    def __init__(self, hidden_size=100):
        super(RoBERT_Model, self).__init__()
        self.hidden_size = hidden_size
        self.lstm = nn.LSTM(768, hidden_size, num_layers=1, bidirectional=False)
        self.out = nn.Linear(hidden_size, 2)

    def forward(self, grouped_pooled_outs):
        # chunks_emb = pooled_out.split_with_sizes(lengt)  # splits the input tensor into a list of tensors, with sublist lengths given by lengt
        seq_lengths = torch.LongTensor([len(x) for x in grouped_pooled_outs])  # length of each sublist
        batch_emb_pad = nn.utils.rnn.pad_sequence(grouped_pooled_outs, padding_value=-91, batch_first=True)  # pad each sublist to the longest one with value -91
        batch_emb = batch_emb_pad.transpose(0, 1)  # (B, L, D) -> (L, B, D)
        lstm_input = nn.utils.rnn.pack_padded_sequence(batch_emb, seq_lengths, batch_first=False, enforce_sorted=False)  # seq_lengths.cpu().numpy()
        packed_output, (h_t, h_c) = self.lstm(lstm_input)  # could also pass an initial (h_t, h_c)
        # output, _ = nn.utils.rnn.pad_packed_sequence(packed_output, padding_value=-91)
        h_t = h_t.view(-1, self.hidden_size)  # (-1, 100)
        return self.out(h_t)  # logits
The issue that I am having is that I am not entirely convinced of what data is being passed to the final classification layer. I believe that only the final LSTM cell in the last layer is being used for classification; that is, there are hidden_size features that are passed to the feedforward layer.
I have depicted what I believe is going on in this figure:
Is this understanding correct? Am I missing anything?
Thanks.
Your code is a basic LSTM for classification, working with a single RNN layer.
In your picture you have multiple LSTM layers, while, in reality, there is only one (H_n^0 in the picture).
Your input to the LSTM is of shape (B, L, D), as correctly pointed out in the comment.
packed_output and h_c are not used at all, hence you can change this line to _, (h_t, _) = self.lstm(lstm_input) in order not to clutter the picture further.
h_t is the output of the last step for each batch element; in general its shape is (D * num_layers, B, hidden_size). As this neural network is not bidirectional D=1, and as you have a single layer num_layers=1 as well, the output is of shape (1, B, hidden_size).
This output is reshaped into an nn.Linear-compatible shape (this line: h_t = h_t.view(-1, self.hidden_size)), which gives you output of shape (B, hidden_size).
This input is fed to a single nn.Linear layer.
In general, the output of the last time step from the RNN is used for each element in the batch (in your picture H_n^0) and simply fed to the classifier.
By the way, having self.out = nn.Linear(hidden_size, 2) in classification is probably counter-productive; most likely you are performing binary classification, in which case self.out = nn.Linear(hidden_size, 1) with torch.nn.BCEWithLogitsLoss could be used. A single logit contains the information whether the label should be 0 or 1; everything smaller than 0 is more likely to be 0 according to the network, everything above 0 is considered a 1 label.
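For illustration, a minimal sketch of that single-logit variant (the surrounding names model, batch, and labels are assumed, not taken from your snippet):

self.out = nn.Linear(hidden_size, 1)        # one logit instead of two

criterion = torch.nn.BCEWithLogitsLoss()
logits = model(batch).squeeze(-1)            # (B, 1) -> (B,)
loss = criterion(logits, labels.float())     # labels are 0.0 or 1.0
preds = (logits > 0).long()                  # positive logit -> class 1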

What does PyTorch classifier output?

So I am new to deep learning and started learning PyTorch. I created a classifier model with the following structure.
class model(nn.Module):
    def __init__(self):
        super(model, self).__init__()
        resnet = models.resnet34(pretrained=True)
        layers = list(resnet.children())[:8]
        self.features1 = nn.Sequential(*layers[:6])
        self.features2 = nn.Sequential(*layers[6:])
        self.classifier = nn.Sequential(nn.BatchNorm1d(512), nn.Linear(512, 3))

    def forward(self, x):
        x = self.features1(x)
        x = self.features2(x)
        x = F.relu(x)
        x = nn.AdaptiveAvgPool2d((1, 1))(x)
        x = x.view(x.shape[0], -1)
        return self.classifier(x)
So basically I wanted to classify among three classes {0, 1, 2}. While evaluating, I passed an image and it returned a tensor with three values like below:
tensor([[-0.1526, 1.3511, -1.0384]], device='cuda:0', grad_fn=<AddmmBackward>)
So my question is: what are these three numbers? Are they probabilities?
P.S. Please pardon me if I asked something too silly.
The final nn.Linear (fully connected) layer of self.classifier produces values that we can call scores, for example [10.3, -3.5, -12.0]; you can see the same in your example: [-0.1526, 1.3511, -1.0384]. These scores are not normalized and cannot be interpreted as probabilities.
They are just a kind of "raw, unscaled" network output. Because the values are not normalized, it's hard to use or interpret them directly, which is why the common practice is to convert them into a normalized probability distribution by applying softmax after the final layer, as @skinny_func has already described. After that you get probabilities in the range 0 to 1, which is a more intuitive representation.
So after training, what you want to do is apply softmax to the output tensor to extract the probability of each class, and then choose the class with the maximal value (highest probability).
In your case:
prob = torch.nn.functional.softmax(model(x), dim=1)
_, pred_class = torch.max(prob, dim=1)
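Plugging in your actual scores, the numbers work out roughly as follows:

import torch
import torch.nn.functional as F

scores = torch.tensor([[-0.1526, 1.3511, -1.0384]])
prob = F.softmax(scores, dim=1)
print(prob)                # tensor([[0.1692, 0.7610, 0.0698]]) -- rows sum to 1
print(prob.argmax(dim=1))  # tensor([1]) -> class 1 wins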

How to compute gradient of the error with respect to the model input?

Given a simple 2-layer neural network, the traditional idea is to compute the gradient w.r.t. the weights/model parameters. For an experiment, I want to compute the gradient of the error w.r.t. the input. Are there existing PyTorch methods that can allow me to do this?
More concretely, consider the following neural network:
import torch.nn as nn
import torch.nn.functional as F

class NeuralNet(nn.Module):
    def __init__(self, n_features, n_hidden, n_classes, dropout):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(n_features, n_hidden)
        self.sigmoid = nn.Sigmoid()
        self.fc2 = nn.Linear(n_hidden, n_classes)
        self.dropout = dropout

    def forward(self, x):
        x = self.sigmoid(self.fc1(x))
        x = F.dropout(x, self.dropout, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
I instantiate the model and an optimizer for the weights as follows:
import torch.optim as optim

model = NeuralNet(n_features=args.n_features,
                  n_hidden=args.n_hidden,
                  n_classes=args.n_classes,
                  dropout=args.dropout)
optimizer_w = optim.SGD(model.parameters(), lr=0.001)
While training, I update the weights as usual. Now, given that I have values for the weights, I should be able to use them to compute the gradient w.r.t. the input, but I am unable to figure out how.
def train(epoch):
    t = time.time()
    model.train()
    optimizer_w.zero_grad()
    output = model(features)
    loss_train = F.nll_loss(output[idx_train], labels[idx_train])
    acc_train = accuracy(output[idx_train], labels[idx_train])
    loss_train.backward()
    optimizer_w.step()
    # grad_features = loss_train.backward() w.r.t. features
    # features -= 0.001 * grad_features

for epoch in range(args.epochs):
    train(epoch)
It is possible; just set input.requires_grad = True for each input batch you're feeding in, and then after loss.backward() you should see that input.grad holds the expected gradient. In other words, if your input to the model (which you call features in your code) is some M x N x ... tensor, then features.grad will be a tensor of the same shape, where each element of grad holds the gradient with respect to the corresponding element of features.
Regarding the error you're getting: PyTorch operations build a tree representing the mathematical operation they describe, which is then used for differentiation. For instance c = a + b will create a tree where a and b are leaf nodes and c is not a leaf (since it results from other expressions). Your model is the expression, and its inputs as well as its parameters are the leaves, whereas all intermediate and final outputs are not leaves. You can think of leaves as "constants" or "parameters" and of all other variables as functions of those. This error message tells you that you can only set requires_grad of leaf variables.
Your problem is that at the first iteration features is random (or however else you initialize it) and is therefore a valid leaf. After your first iteration, features is no longer a leaf, since it becomes an expression calculated based on the previous ones. In pseudocode, you have
f_1 = initial_value        # valid leaf
f_2 = f_1 + your_grad_stuff  # not a leaf: f_2 is a function of f_1
To deal with that you need to use detach, which breaks the links in the tree and makes autograd treat a tensor as if it were constant, no matter how it was created. In particular, no gradient calculations will be backpropagated through detach. So you need something like
features = features.detach() - 0.01 * features.grad
Note: perhaps you need to sprinkle a couple more detaches here and there, which is hard to say without seeing your whole code and knowing the exact purpose.
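Putting the pieces together, one iteration might look roughly like this (a sketch reusing your names model, features, labels, and idx_train; not a drop-in replacement for your training loop):

features = features.clone().detach().requires_grad_(True)  # make the input a leaf
output = model(features)
loss_train = F.nll_loss(output[idx_train], labels[idx_train])
loss_train.backward()
# features.grad now has the same shape as features
with torch.no_grad():
    features = features - 0.001 * features.grad
features = features.detach().requires_grad_(True)          # fresh leaf for the next iteration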

How to correctly implement backpropagation for machine learning the MNIST dataset?

So, I'm using Michael Nielsen's machine learning book as a reference for my code (it is basically identical): http://neuralnetworksanddeeplearning.com/chap1.html
The code in question:
def backpropagate(self, image, image_value):
    # declare two new numpy arrays for the updated weights & biases
    new_biases = [np.zeros(bias.shape) for bias in self.biases]
    new_weights = [np.zeros(weight_matrix.shape) for weight_matrix in self.weights]
    # -------- feed forward --------
    # store all the activations in a list
    activations = [image]
    # declare empty list that will contain all the z vectors
    zs = []
    for bias, weight in zip(self.biases, self.weights):
        print(bias.shape)
        print(weight.shape)
        print(image.shape)
        z = np.dot(weight, image) + bias
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)
    # -------- backward pass --------
    # transpose() returns the numpy array with the rows as columns and columns as rows
    delta = self.cost_derivative(activations[-1], image_value) * sigmoid_prime(zs[-1])
    new_biases[-1] = delta
    new_weights[-1] = np.dot(delta, activations[-2].transpose())
    # l = 1 means the last layer of neurons, l = 2 is the second-last, etc.
    # this takes advantage of Python's ability to use negative indices in lists
    for l in range(2, self.num_layers):
        z = zs[-l]
        sp = sigmoid_prime(z)
        delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
        new_biases[-l] = delta
        new_weights[-l] = np.dot(delta, activations[-l-1].transpose())
    return (new_biases, new_weights)
My algorithm only gets through the first round of backpropagation before this error occurs:
  File "D:/Programming/Python/DPUDS/DPUDS_Projects/Fall_2017/MNIST/network.py", line 97, in stochastic_gradient_descent
    self.update_mini_batch(mini_batch, learning_rate)
  File "D:/Programming/Python/DPUDS/DPUDS_Projects/Fall_2017/MNIST/network.py", line 117, in update_mini_batch
    delta_biases, delta_weights = self.backpropagate(image, image_value)
  File "D:/Programming/Python/DPUDS/DPUDS_Projects/Fall_2017/MNIST/network.py", line 160, in backpropagate
    z = np.dot(weight, activation) + bias
ValueError: shapes (30,50000) and (784,1) not aligned: 50000 (dim 1) != 784 (dim 0)
I get why it's an error: the number of columns in the weight matrix doesn't match the number of rows in the pixel image, so the matrix multiplication fails. Here's where I'm confused -- there are 30 neurons used in the backpropagation, with 50,000 images being evaluated. My understanding is that each of the 50,000 should have 784 weights attached, one for each pixel. But when I modify the code accordingly:
count = 0
for bias, weight in zip(self.biases, self.weights):
    print(bias.shape)
    print(weight[count].shape)
    print(image.shape)
    z = np.dot(weight[count], image) + bias
    zs.append(z)
    activation = sigmoid(z)
    activations.append(activation)
    count += 1
I still get a similar error:
ValueError: shapes (50000,) and (784,1) not aligned: 50000 (dim 0) != 784 (dim 0)
I'm just really confuzzled by all the linear algebra involved and I think I'm just missing something about the structure of the weight matrix. Any help at all would be greatly appreciated.
It looks like the issue is in your changes to the original code.
I've downloaded the example from the link you provided and it works without any errors.
Here is the full source code I used:
import cPickle
import gzip
import numpy as np
import random

def load_data():
    """Return the MNIST data as a tuple containing the training data,
    the validation data, and the test data.

    The ``training_data`` is returned as a tuple with two entries.
    The first entry contains the actual training images. This is a
    numpy ndarray with 50,000 entries. Each entry is, in turn, a
    numpy ndarray with 784 values, representing the 28 * 28 = 784
    pixels in a single MNIST image.

    The second entry in the ``training_data`` tuple is a numpy ndarray
    containing 50,000 entries. Those entries are just the digit
    values (0...9) for the corresponding images contained in the first
    entry of the tuple.

    The ``validation_data`` and ``test_data`` are similar, except
    each contains only 10,000 images.

    This is a nice data format, but for use in neural networks it's
    helpful to modify the format of the ``training_data`` a little.
    That's done in the wrapper function ``load_data_wrapper()``, see
    below.
    """
    f = gzip.open('../data/mnist.pkl.gz', 'rb')
    training_data, validation_data, test_data = cPickle.load(f)
    f.close()
    return (training_data, validation_data, test_data)
def load_data_wrapper():
    """Return a tuple containing ``(training_data, validation_data,
    test_data)``. Based on ``load_data``, but the format is more
    convenient for use in our implementation of neural networks.

    In particular, ``training_data`` is a list containing 50,000
    2-tuples ``(x, y)``. ``x`` is a 784-dimensional numpy.ndarray
    containing the input image. ``y`` is a 10-dimensional
    numpy.ndarray representing the unit vector corresponding to the
    correct digit for ``x``.

    ``validation_data`` and ``test_data`` are lists containing 10,000
    2-tuples ``(x, y)``. In each case, ``x`` is a 784-dimensional
    numpy.ndarray containing the input image, and ``y`` is the
    corresponding classification, i.e., the digit values (integers)
    corresponding to ``x``.

    Obviously, this means we're using slightly different formats for
    the training data and the validation / test data. These formats
    turn out to be the most convenient for use in our neural network
    code."""
    tr_d, va_d, te_d = load_data()
    training_inputs = [np.reshape(x, (784, 1)) for x in tr_d[0]]
    training_results = [vectorized_result(y) for y in tr_d[1]]
    training_data = zip(training_inputs, training_results)
    validation_inputs = [np.reshape(x, (784, 1)) for x in va_d[0]]
    validation_data = zip(validation_inputs, va_d[1])
    test_inputs = [np.reshape(x, (784, 1)) for x in te_d[0]]
    test_data = zip(test_inputs, te_d[1])
    return (training_data, validation_data, test_data)

def vectorized_result(j):
    """Return a 10-dimensional unit vector with a 1.0 in the jth
    position and zeroes elsewhere. This is used to convert a digit
    (0...9) into a corresponding desired output from the neural
    network."""
    e = np.zeros((10, 1))
    e[j] = 1.0
    return e
class Network(object):

    def __init__(self, sizes):
        """The list ``sizes`` contains the number of neurons in the
        respective layers of the network. For example, if the list
        was [2, 3, 1] then it would be a three-layer network, with the
        first layer containing 2 neurons, the second layer 3 neurons,
        and the third layer 1 neuron. The biases and weights for the
        network are initialized randomly, using a Gaussian
        distribution with mean 0, and variance 1. Note that the first
        layer is assumed to be an input layer, and by convention we
        won't set any biases for those neurons, since biases are only
        ever used in computing the outputs from later layers."""
        self.num_layers = len(sizes)
        self.sizes = sizes
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]

    def feedforward(self, a):
        """Return the output of the network if ``a`` is input."""
        for b, w in zip(self.biases, self.weights):
            a = sigmoid(np.dot(w, a)+b)
        return a
    def SGD(self, training_data, epochs, mini_batch_size, eta,
            test_data=None):
        """Train the neural network using mini-batch stochastic
        gradient descent. The ``training_data`` is a list of tuples
        ``(x, y)`` representing the training inputs and the desired
        outputs. The other non-optional parameters are
        self-explanatory. If ``test_data`` is provided then the
        network will be evaluated against the test data after each
        epoch, and partial progress printed out. This is useful for
        tracking progress, but slows things down substantially."""
        if test_data: n_test = len(test_data)
        n = len(training_data)
        for j in xrange(epochs):
            random.shuffle(training_data)
            mini_batches = [
                training_data[k:k+mini_batch_size]
                for k in xrange(0, n, mini_batch_size)]
            for mini_batch in mini_batches:
                self.update_mini_batch(mini_batch, eta)
            if test_data:
                print "Epoch {0}: {1} / {2}".format(
                    j, self.evaluate(test_data), n_test)
            else:
                print "Epoch {0} complete".format(j)
    def update_mini_batch(self, mini_batch, eta):
        """Update the network's weights and biases by applying
        gradient descent using backpropagation to a single mini batch.
        The ``mini_batch`` is a list of tuples ``(x, y)``, and ``eta``
        is the learning rate."""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        for x, y in mini_batch:
            delta_nabla_b, delta_nabla_w = self.backprop(x, y)
            nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
            nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
        self.weights = [w-(eta/len(mini_batch))*nw
                        for w, nw in zip(self.weights, nabla_w)]
        self.biases = [b-(eta/len(mini_batch))*nb
                       for b, nb in zip(self.biases, nabla_b)]
    def backprop(self, x, y):
        """Return a tuple ``(nabla_b, nabla_w)`` representing the
        gradient for the cost function C_x. ``nabla_b`` and
        ``nabla_w`` are layer-by-layer lists of numpy arrays, similar
        to ``self.biases`` and ``self.weights``."""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        # feedforward
        activation = x
        activations = [x]  # list to store all the activations, layer by layer
        zs = []  # list to store all the z vectors, layer by layer
        for b, w in zip(self.biases, self.weights):
            z = np.dot(w, activation)+b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # backward pass
        delta = self.cost_derivative(activations[-1], y) * \
            sigmoid_prime(zs[-1])
        nabla_b[-1] = delta
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        # Note that the variable l in the loop below is used a little
        # differently to the notation in Chapter 2 of the book. Here,
        # l = 1 means the last layer of neurons, l = 2 is the
        # second-last layer, and so on. It's a renumbering of the
        # scheme in the book, used here to take advantage of the fact
        # that Python can use negative indices in lists.
        for l in xrange(2, self.num_layers):
            z = zs[-l]
            sp = sigmoid_prime(z)
            delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
        return (nabla_b, nabla_w)
    def evaluate(self, test_data):
        """Return the number of test inputs for which the neural
        network outputs the correct result. Note that the neural
        network's output is assumed to be the index of whichever
        neuron in the final layer has the highest activation."""
        test_results = [(np.argmax(self.feedforward(x)), y)
                        for (x, y) in test_data]
        return sum(int(x == y) for (x, y) in test_results)

    def cost_derivative(self, output_activations, y):
        """Return the vector of partial derivatives \partial C_x /
        \partial a for the output activations."""
        return (output_activations-y)

#### Miscellaneous functions
def sigmoid(z):
    """The sigmoid function."""
    return 1.0/(1.0+np.exp(-z))

def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    return sigmoid(z)*(1-sigmoid(z))

training_data, validation_data, test_data = load_data_wrapper()
net = Network([784, 30, 10])
net.SGD(training_data, 30, 10, 3.0, test_data=test_data)
Additional info:
However, I would recommend using one of the existing frameworks, for example Keras, so that you don't reinvent the wheel.
Also note that the code above is the book's original Python 2; it was also checked with Python 3.6 after a few mechanical changes.
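A minimal sketch of those Python 3 adjustments (assuming the rest of the code stays exactly as above):

import pickle  # cPickle does not exist in Python 3
import gzip

def load_data():
    f = gzip.open('../data/mnist.pkl.gz', 'rb')
    # the MNIST pickle was written under Python 2, hence the latin1 encoding
    training_data, validation_data, test_data = pickle.load(f, encoding='latin1')
    f.close()
    return (training_data, validation_data, test_data)

# elsewhere: xrange -> range, print "..." -> print(...), and wrap the
# zip(...) calls in load_data_wrapper in list(...), since zip is lazy in Python 3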
Kudos on digging into Nielsen's code. It's a great resource to develop thorough understanding of NN principles. Too many people leap ahead to Keras without knowing what goes on under the hood.
Each training example doesn't get its own weights. Each of the 784 features does. If each example got its own weights then each weight set would overfit to its corresponding training example. Also, if you later used your trained network to run inference on a single test example, what would it do with 50,000 sets of weights when presented with just one handwritten digit? Instead, each of the 30 neurons in your hidden layer learns a set of 784 weights, one for each pixel, that offers high predictive accuracy when generalized to any handwritten digit.
Import network.py and instantiate a Network class like this without modifying any code:
net = network.Network([784, 30, 10])
...which gives you a network with 784 input neurons, 30 hidden neurons and 10 output neurons. Your weight matrices will have dimensions [30, 784] and [10, 30], respectively. When you feed the network an input array of dimensions [784, 1], the matrix multiplication that gave you an error is valid because dim 1 of the weight matrix equals dim 0 of the input array (both 784).
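A quick way to convince yourself of the shapes (a sketch; it assumes the book's unmodified network.py is importable):

import numpy as np
import network

net = network.Network([784, 30, 10])
print([w.shape for w in net.weights])  # [(30, 784), (10, 30)]
x = np.random.randn(784, 1)            # a single fake "image"
print(net.feedforward(x).shape)        # (10, 1) -- one activation per digit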
Your problem is not the implementation of backprop but rather setting up a network architecture appropriate for the shape of your input data. If memory serves, Nielsen leaves backprop as a black box in chapter 1 and doesn't dive into it until chapter 2. Keep at it, and good luck!
