PyTorch beginner: tensor.new() method

Hi everyone, I have a small question.
What is the purpose of the method tensor.new(...) in PyTorch? I didn't find anything about it in the documentation. It looks like it creates a new tensor (as the name suggests), but why don't we just use a torch.Tensor constructor instead of this new method, which requires an existing tensor?
Thank you in advance.

As the documentation of tensor.new() says:
Constructs a new tensor of the same data type as self tensor.
Also note:
For CUDA tensors, this method will create new tensor on the same device as this tensor.
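For example, a minimal sketch of the legacy call (the second part assumes a CUDA device is available):
import torch
t = torch.randn(3, 4)
u = t.new(2, 2)               # uninitialized 2x2 tensor, same dtype and device as t
print(u.type(), u.device)     # torch.FloatTensor cpu
t_cuda = torch.randn(3, 4, device='cuda')
print(t_cuda.new(2, 2).device)  # cuda:0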

It seems that in newer versions of PyTorch there are various new_* methods intended to replace this "legacy" new method.
So if you have some tensor t = torch.randn((3, 4)) then you can construct a new one with the same type and device using one of these methods, depending on your goals:
t = torch.randn((3, 4))
a = t.new_tensor([1, 2, 3]) # same type, device, new data
b = t.new_empty((3, 4)) # same type, device, non-initialized
c = t.new_zeros((2, 3)) # same type, device, filled with zeros
...
for x in (t, a, b, c):
    print(x.type(), x.device, x.size())
# torch.FloatTensor cpu torch.Size([3, 4])
# torch.FloatTensor cpu torch.Size([3])
# torch.FloatTensor cpu torch.Size([3, 4])
# torch.FloatTensor cpu torch.Size([2, 3])

Here is a simple use case and example using new(), since without one the utility of this function is not very clear.
Suppose you want to add Gaussian noise to a tensor (or Variable) without knowing a priori what its datatype is.
This will create a tensor of Gaussian noise with the same shape and data type as a Variable X:
noise_like_grad = X.data.new(X.size()).normal_(0,0.01)
This example also illustrates the new(size) form, which gives a tensor of the same type and size as X.
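In current PyTorch (where Variables and tensors are merged), the same pattern can be written with the new_* or *_like factories; a rough equivalent, keeping the names from above, would be:
noise_like_grad = X.new_empty(X.size()).normal_(0, 0.01)
# or, for a floating-point X:
noise_like_grad = torch.randn_like(X) * 0.01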

I've found an answer. It is used to create a new tensor with the same type.

Related

How to get a 2D output from linear layer in pytorch?

I would like to project a tensor into a space with an additional dimension.
I tried
torch.nn.Linear(
    in_features=num_inputs,
    out_features=(num_inputs, num_additional),
)
But this results in an error, since out_features must be an int.
A workaround would be to
torch.nn.Linear(
    in_features=num_inputs,
    out_features=num_inputs * num_additional,
)
and then change the view of the output:
output.view(batch_size, num_inputs, num_additional)
But I imagine this workaround will get tricky to read, especially when a projection into more than one additional dimension is desired.
Is there a more direct way to code this operation?
Perhaps the source code for Linear could be changed
https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html#Linear
to accept more dimensions for the weight and bias initialization; F.linear also seems like it would need to be replaced with a different function.
IMO the workaround you provided is already clear enough. However, if you want to express this as a single operation, you can always write your own module by subclassing torch.nn.Linear:
import numpy as np
import torch

class MultiDimLinear(torch.nn.Linear):
    def __init__(self, in_features, out_shape, **kwargs):
        self.out_shape = out_shape
        out_features = np.prod(out_shape)
        super().__init__(in_features, out_features, **kwargs)

    def forward(self, x):
        out = super().forward(x)
        return out.reshape((len(x), *self.out_shape))

if __name__ == '__main__':
    tmp = torch.empty((32, 10))
    linear = MultiDimLinear(in_features=10, out_shape=(10, 10))
    out = linear(tmp)
    print(out.shape)  # (32, 10, 10)
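If subclassing feels like too much, roughly the same behaviour can also be composed from built-in modules (a sketch assuming PyTorch >= 1.7, which has nn.Unflatten):
import torch
layer = torch.nn.Sequential(
    torch.nn.Linear(10, 10 * 10),
    torch.nn.Unflatten(1, (10, 10)),   # reshape the 100 output features into (10, 10)
)
print(layer(torch.empty((32, 10))).shape)  # torch.Size([32, 10, 10])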
Another way would be to use torch.einsum
https://pytorch.org/docs/stable/generated/torch.einsum.html
torch.einsum can prevent summation across dimensions in tensor-to-tensor multiplication operations, which allows separate multiplication operations to happen in parallel. (I do not know whether this necessarily results in GPU efficiency, i.e. whether the operations still occur in the same kernel; in fact, it may be slower: https://github.com/pytorch/pytorch/issues/32591)
The way this would work is to directly initialize the weight and bias tensors (look at the source code of the torch Linear layer for that).
Say that the input (X) has dimensions (a, b), where a is the batch size.
Say that you want to pass this input through a series of classifiers, represented as a single weight tensor (W) with dimensions (c, d, e), where c is the number of classifiers and e is the number of classes for each classifier.
import torch
x = torch.arange(2*4).view(2, 4)
w = torch.arange(5*4*6).view(5, 4, 6)
torch.einsum('ab, cbe -> ace', x, w)
in the last line, a and b are the dimensions of the input as mentioned above. What might be the tricky part is that c, b, and e are the dimensions of the classifier weight tensor; I didn't use d, I used b instead. That is because the vector multiplication happens along that dimension for the input tensor and the weight tensor. So that's why the left side of the einsum equation is ab, cbe. The right side of the einsum equation is simply which dimensions to exclude from summation.
The final dimensions we want is (a, c, e). a is the batch size, c is the number of classifiers, and e is the number of classes for each classifier. We do not want to add those values, so to preserve their separation, the left side of the equation is ace.
For those unfamiliar with einsum, this will be harder to read than the workaround I created (though I highly recommend learning it, because it gets very easy and intuitive very fast, even though it's a bit tricky at first: https://www.youtube.com/watch?v=pkVwUVEHmfI ).
However, for parallelizing certain operations (especially on the GPU), it seems that einsum is the only way to do it. For example, suppose that in my previous example I didn't want to use a classification head yet and just wanted to project to multiple dimensions.
import torch
x = torch.arange(2*4).view(2, 4)
w = torch.arange(5*4*4).view(5, 4, 4)
y = torch.einsum('ab, cbe -> ace', x, w)
And say I do a few other operations on y, perhaps some non-linear operations, activations, etc.
z = f(y)
z will still have the dimensions 2, 5, 4: batch size 2, 5 hidden states per batch element, and the dimension of those hidden states is 4.
And then I want to apply a classifier to each separate tensor.
w2 = torch.arange(4*2).view(4, 2)
final = torch.einsum('fgh, hj -> fgj', z, w2)
Quick refresher: 2 is the batch size, 5 is the number of classifiers, and 2 is the number of outputs for each classifier.
The output dimensions f, g, j (2, 5, 2) are not summed across and are thus preserved in the output.
As noted in the GitHub issue linked above, this may be slower than just using regular linear layers; there may only be efficiency gains when a very large number of parallel operations is involved.
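As a sanity check on the shapes above (a small sketch with random data), the einsum should match looping over the classifier dimension with ordinary matrix multiplications:
import torch
x = torch.randn(2, 4)       # (batch a, features b)
w = torch.randn(5, 4, 6)    # (classifiers c, features b, classes e)
out = torch.einsum('ab, cbe -> ace', x, w)
looped = torch.stack([x @ w[c] for c in range(5)], dim=1)
print(out.shape, torch.allclose(out, looped))  # torch.Size([2, 5, 6]) True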

Is there a way to compute a circulant matrix in Pytorch?

I want a function similar to scipy.linalg.circulant (https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.circulant.html) to create a circulant matrix using PyTorch. I need this as part of my deep learning model (in order to reduce over-parametrization in some of my fully connected layers, as suggested in https://arxiv.org/abs/1907.08448, Fig. 3).
The input of the function shall be a 1D torch tensor, and the output should be the 2D circulant matrix.
You can make use of unfold to extract sliding windows, but to get the correct order you need to flip (and later unflip) the tensor, and first concatenate the flipped tensor with part of itself.
circ = lambda v: torch.cat([f := v.flip(0), f[:-1]]).unfold(0, len(v), 1).flip(0)
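A quick check of the one-liner above (it needs Python 3.8+ for the walrus operator); the result matches scipy.linalg.circulant([0, 1, 2]):
v = torch.tensor([0, 1, 2])
print(circ(v))
# tensor([[0, 2, 1],
#         [1, 0, 2],
#         [2, 1, 0]])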
Here is a generic function for PyTorch tensors to get the circulant matrix along one dimension. It's based on unfold and works for building a 2D circulant matrix as well as for higher-dimensional tensors.
def circulant(tensor, dim):
    """get a circulant version of the tensor along the {dim} dimension.
    The additional axis is appended as the last dimension.
    E.g. tensor=[0,1,2], dim=0 --> [[0,1,2],[2,0,1],[1,2,0]]"""
    S = tensor.shape[dim]
    tmp = torch.cat([tensor.flip((dim,)), torch.narrow(tensor.flip((dim,)), dim=dim, start=0, length=S - 1)], dim=dim)
    return tmp.unfold(dim, S, 1).flip((-1,))
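A quick usage sketch of the function above, first on a vector and then along one dimension of a matrix:
import torch
print(circulant(torch.arange(3), dim=0))
# tensor([[0, 1, 2],
#         [2, 0, 1],
#         [1, 2, 0]])
print(circulant(torch.arange(6).view(2, 3), dim=1).shape)  # torch.Size([2, 3, 3]), one circulant per row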
Essentially, this is a PyTorch version of scipy.linalg.circulant that also works for multi-dimensional tensors.
Also a similar question: Create array/tensor of cycle shifted arrays

Is there a element-map function in pytorch?

I'm new to PyTorch and I come from functional programming languages (where the map function is used everywhere). The problem is that I have a tensor and I want to apply some operation to each element of the tensor. The operation may vary, so I need a function like this:
map : (Numeric -> Numeric) -> Tensor -> Tensor
e.g. map(lambda x: x if x < 255 else -1, tensor) # the example is simple, but the lambda may be very complex
Is there such a function in PyTorch? How should I implement such function?
Most mathematical operations that are implemented for tensors (and similarly for ndarrays in NumPy) are actually applied element-wise, so you could write, for instance:
mask = tensor < 255
result = tensor * mask + (-1) * ~mask
This is a quite general approach. For the case you have right now, where you only want to modify certain elements, you can also apply "logical indexing", which lets you overwrite the current tensor in place:
tensor[~mask] = -1  # i.e. tensor[tensor >= 255] = -1
So in Python there actually is a map() function, but usually there are better ways to do it (better in Python; in other languages, like Haskell, map/fmap is obviously preferred in most contexts).
So the key take-away here is that the preferred method is to take advantage of vectorization. This also makes the code more efficient, as those tensor operations are implemented in a low-level language, while map() is nothing but a Python for loop that is a lot slower.
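For the specific lambda in the question, torch.where expresses the same element-wise choice in a single vectorized call (a minimal sketch, using the threshold 255 from the question):
import torch
tensor = torch.tensor([10, 255, 300, 42])
result = torch.where(tensor < 255, tensor, torch.tensor(-1))
print(result)  # tensor([ 10,  -1,  -1,  42])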

In PyTorch, what makes a tensor have non-contiguous memory?

According to this SO and this PyTorch discussion, PyTorch's view function works only on contiguous memory, while reshape does not. In the second link, the author even claims:
[view] will raise an error on a non-contiguous tensor.
But when does a tensor have non-contiguous memory?
This is a very good answer, which explains the topic in the context of NumPy. PyTorch works essentially the same way. Its docs don't generally mention whether function outputs are (non-)contiguous, but that's something that can be guessed based on the kind of operation (with some experience and understanding of the implementation). As a rule of thumb, most operations preserve contiguity as they construct new tensors. You may see non-contiguous outputs if an operation works on the array in place and changes its striding. A couple of examples below:
import torch
t = torch.randn(10, 10)
def check(ten):
    print(ten.is_contiguous())
check(t) # True
# flip sets the stride to negative, but element j is still adjacent to
# element i, so it is contiguous
check(torch.flip(t, (0,))) # True
# if we take every 2nd element, adjacent elements in the resulting array
# are not adjacent in the input array
check(t[::2]) # False
# if we transpose, we lose contiguity, as in case of NumPy
check(t.transpose(0, 1)) # False
# if we transpose twice, we first lose and then regain contiguity
check(t.transpose(0, 1).transpose(0, 1)) # True
In general, if you have a non-contiguous tensor t, you can make it contiguous by calling t = t.contiguous(). If t is already contiguous, the call to t.contiguous() is essentially a no-op, so you can do that without risking a big performance hit.
I think your title, contiguous memory, is a bit misleading. As I understand it, contiguous in PyTorch refers to whether neighboring elements in the tensor are actually next to each other in memory. Let's take a simple example:
x = torch.tensor([[1, 2, 3], [4, 5, 6]]) # x is contiguous
y = torch.transpose(x, 0, 1) # y is non-contiguous
According to the documentation of transpose():
Returns a tensor that is a transposed version of input. The given dimensions dim0 and dim1 are swapped.
The resulting out tensor shares its underlying storage with the input tensor, so changing the content of one would change the content of the other.
So x and y in the above example share the same memory space. But if you check their contiguity with is_contiguous(), you will find that x is contiguous and y is not. Now you can see that contiguity does not refer to contiguous memory.
Since x is contiguous, x[0][0] and x[0][1] are next to each other in memory. But y[0][0] and y[0][1] are not. That is what contiguous means.
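A small check of that claim (same x and y as above):
import torch
x = torch.tensor([[1, 2, 3], [4, 5, 6]])
y = x.transpose(0, 1)
print(x.is_contiguous(), x.stride())   # True (3, 1)
print(y.is_contiguous(), y.stride())   # False (1, 3)
print(x.data_ptr() == y.data_ptr())    # True -- same underlying storage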

How to correctly implement a batch-input LSTM network in PyTorch?

This release of PyTorch seems to provide PackedSequence for variable-length inputs to recurrent neural networks. However, I found it's a bit hard to use it correctly.
Using pad_packed_sequence to recover the output of an RNN layer that was fed by pack_padded_sequence, we get a T x B x N tensor outputs, where T is the max number of time steps, B is the batch size and N is the hidden size. I found that for short sequences in the batch, the subsequent outputs are all zeros.
Here are my questions.
For a single-output task where one needs the last output of each sequence, a simple outputs[-1] will give a wrong result since this tensor contains lots of zeros for short sequences. One needs to construct indices from the sequence lengths to fetch the individual last output for each sequence. Is there a simpler way to do that?
For a multiple-output task (e.g. seq2seq), one usually adds a linear layer N x O, reshapes the batch outputs T x B x O into TB x O, and computes the cross-entropy loss with the true targets TB (usually integers in a language model). In this situation, do these zeros in the batch output matter?
Question 1 - Last Timestep
This is the code that I use to get the output of the last timestep. I don't know if there is a simpler solution; if there is, I'd like to know it. I followed this discussion and grabbed the relevant code snippet for my last_timestep method. This is my forward:
class BaselineRNN(nn.Module):
    def __init__(self, **kwargs):
        ...

    def last_timestep(self, unpacked, lengths):
        # Index of the last output for each sequence.
        idx = (lengths - 1).view(-1, 1).expand(unpacked.size(0),
                                               unpacked.size(2)).unsqueeze(1)
        return unpacked.gather(1, idx).squeeze()

    def forward(self, x, lengths):
        embs = self.embedding(x)
        # pack the batch
        packed = pack_padded_sequence(embs, list(lengths.data),
                                      batch_first=True)
        out_packed, (h, c) = self.rnn(packed)
        out_unpacked, _ = pad_packed_sequence(out_packed, batch_first=True)
        # get the outputs from the last *non-masked* timestep for each sentence
        last_outputs = self.last_timestep(out_unpacked, lengths)
        # project to the classes using a linear layer
        logits = self.linear(last_outputs)
        return logits
Question 2 - Masked Cross Entropy Loss
Yes, by default the zero padded timesteps (targets) matter. However, it is very easy to mask them. You have two options, depending on the version of PyTorch that you use.
PyTorch 0.2.0: PyTorch now supports masking directly in CrossEntropyLoss, with the ignore_index argument. For example, in language modeling or seq2seq, where I add zero padding, I mask the zero-padded words (targets) simply like this:
loss_function = nn.CrossEntropyLoss(ignore_index=0)
PyTorch 0.1.12 and older: In older versions of PyTorch, masking was not supported, so you had to implement your own workaround. The solution that I used was masked_cross_entropy.py by jihunchoi. You may also be interested in this discussion.
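For the seq2seq case in question 2, a minimal sketch of the ignore_index pattern (made-up shapes, padding index 0) might look like this:
import torch
import torch.nn as nn
T, B, O = 5, 3, 10                          # time steps, batch size, number of classes
logits = torch.randn(T, B, O)               # batch outputs after the linear layer
targets = torch.randint(1, O, (T, B))
targets[3:, 1] = 0                          # pretend the 2nd sequence is padded after step 3
loss_function = nn.CrossEntropyLoss(ignore_index=0)
loss = loss_function(logits.view(T * B, O), targets.view(T * B))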
A few days ago, I found this method which uses indexing to accomplish the same task with a one-liner.
I have my dataset batch first ([batch size, sequence length, features]), so for me:
unpacked_out = unpacked_out[np.arange(unpacked_out.shape[0]), lengths - 1, :]
where unpacked_out is the output of torch.nn.utils.rnn.pad_packed_sequence.
I have compared it with the method described here, which looks similar to the last_timestep() method Christos Baziotis is using above (also recommended here), and the results are the same in my case.
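A toy check of that one-liner against a gather-based variant of last_timestep() (hypothetical shapes, batch-first):
import torch
unpacked_out = torch.randn(4, 7, 16)       # (batch, max sequence length, hidden size)
lengths = torch.tensor([7, 3, 5, 1])
# fancy indexing, as in the one-liner above
last_a = unpacked_out[torch.arange(unpacked_out.shape[0]), lengths - 1, :]
# gather-based variant, as in last_timestep()
idx = (lengths - 1).view(-1, 1, 1).expand(-1, 1, unpacked_out.size(2))
last_b = unpacked_out.gather(1, idx).squeeze(1)
print(last_a.shape, torch.allclose(last_a, last_b))  # torch.Size([4, 16]) True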
