Why is tensor.view() not working in PyTorch?

I have the following piece of code.
embedded = self.embedding(input).view(1, 1, -1)
embedded = self.drop(embedded)
print(embedded[0].size(), hidden[0].size())
concatenated_output = torch.cat((embedded[0], hidden[0]), 1)
The last line of the code is giving me the following error.
RuntimeError: inconsistent tensor sizes at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.9_1487344852722/work/torch/lib/THC/generic/THCTensorMath.cu:141
Please note that when I print the tensor shapes at line 3 of the snippet (the print statement), I get the following output.
torch.Size([1, 300]) torch.Size([1, 1, 300])
Why am I getting a [1, 300] shape for the embedded tensor even though I used view(1, 1, -1)?
Any help would be appreciated!

embedded was a 3D tensor and hidden was a tuple of two elements (the hidden state and the cell state), each of which is a 3D tensor. hidden was the output of an LSTM layer; in PyTorch, an LSTM returns the hidden state h and the cell state c as a tuple, which is what confused me about the error. Also, printing embedded[0] indexes into the first dimension, so a [1, 1, 300] tensor shows up as [1, 300]; view() itself worked fine.
So I updated the last line of the code as follows, and it solved the problem.
concatenated_output = torch.cat((embedded, hidden[0]), 1)
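For reference, a minimal sketch of the shapes involved (the embedding size 300 is taken from the printed output, and hidden_h stands in for hidden[0]):
import torch

embedded = torch.randn(1, 1, 300)   # what view(1, 1, -1) produces
hidden_h = torch.randn(1, 1, 300)   # h from the (h, c) tuple returned by the LSTM

print(embedded[0].size())           # torch.Size([1, 300]) -- indexing drops dim 0
print(embedded.size())              # torch.Size([1, 1, 300])

# Concatenating the full 3D tensors along dim 1 works fine:
concatenated_output = torch.cat((embedded, hidden_h), 1)
print(concatenated_output.size())   # torch.Size([1, 2, 300])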

Related

PyTorch is creating a non-empty tensor with torch.empty((x, y))

I just want to create an empty tensor containing only zero values, but here is what I am getting so far. Source: https://pytorch.org/docs/stable/generated/torch.empty.html
a = torch.empty((2, 3), dtype=torch.int32, device='cuda')
a
tensor([[16843009,        1,        1],
        [       0,        1,        0]], device='cuda:0', dtype=torch.int32)
My question is: why? Is it a bug, or is this expected behaviour?
If you want a tensor of zeros, use torch.zeros.
torch.empty allocates a tensor but it does not initialise the contents, meaning that the tensor will contain whatever data happened to occupy that region of memory already.
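A quick sketch of the difference (CPU here for simplicity; the uninitialised values returned by torch.empty will vary from run to run):
import torch

a = torch.empty((2, 3), dtype=torch.int32)  # uninitialised: contents are arbitrary
b = torch.zeros((2, 3), dtype=torch.int32)  # explicitly filled with zeros

print(a)  # arbitrary values, e.g. tensor([[16843009, 1, 1], [0, 1, 0]], ...)
print(b)  # tensor([[0, 0, 0], [0, 0, 0]], dtype=torch.int32)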

only one element tensors can be converted to Python scalars error with pca.fit_transform

I am trying to perform dimensionality reduction using PCA on outputs, a list of tensors in which each tensor has a shape of (1, 3, 32, 32). Here is the code:
from sklearn.decomposition import PCA
pca = PCA(10)
pca_result = pca.fit_transform(outputs)
But I keep getting this error, no matter what I try:
ValueError: only one element tensors can be converted to Python scalars
I know that the tensors of size (1, 3, 32, 32) are causing the issue, since PCA is looking for a single element per entry, as the error puts it, but I do not know how to solve it.
I have tried flattening each tensor by looping over outputs (I don't know if this is the right way to solve the issue) using the following code, but it leads to an error in PCA:
new_outputs = []
for i in outputs:
    for j in i:
        j = j.cpu()
        j = j.detach().numpy()
        j = j.flatten()
        new_outputs.append(j)
pca_result = pca.fit_transform(new_outputs)
I would appreciate it if anybody could help with this error and tell me whether the flattening approach I took is correct.
PS: I have read the existing posts (post1, post2) discussing this error, but none of them solved my problem.
Assuming your tensors are stored in a matrix with a shape like (10, 3, 32, 32), where 10 is the number of tensors, you should flatten them like this:
import torch
from sklearn.decomposition import PCA
data= torch.rand((10, 3, 32, 32))
pca = PCA(10)
pca_result = pca.fit_transform(data.flatten(start_dim=1))
data.flatten(start_dim=1) reshapes your data to shape (10, 3*32*32).
The error you posted is actually related to one of the posts you linked. PCA's fit_transform() expects an array-like input, and you provided a list of tensors.
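For the original list of (1, 3, 32, 32) tensors, here is a minimal sketch of how to build the (N, 3*32*32) array that PCA expects (outputs below is random stand-in data; detach()/cpu() only matter if the real tensors require grad or live on the GPU):
import torch
from sklearn.decomposition import PCA

outputs = [torch.rand(1, 3, 32, 32) for _ in range(50)]  # stand-in for the real list

# Stack the list into one (50, 3, 32, 32) tensor, then flatten each sample.
data = torch.cat(outputs, dim=0).detach().cpu().flatten(start_dim=1).numpy()

pca = PCA(10)
pca_result = pca.fit_transform(data)
print(pca_result.shape)  # (50, 10)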

Multi-dimensional tensor dot product in pytorch

I have two tensors, each of shape (8, 1, 128), as follows.
q_s.shape
Out[161]: torch.Size([8, 1, 128])
p_s.shape
Out[162]: torch.Size([8, 1, 128])
These two tensors represent a batch of eight 128-dimensional vectors. I want the dot product of batch q_s with batch p_s, i.e. one scalar per pair. How can I do this? I tried the torch.tensordot function as follows, and it works, but it also does extra work that I don't need. See the following example.
dt = torch.tensordot(q_s, p_s, dims=([1,2], [1,2]))
dt
Out[176]:
tensor([[0.9051, 0.9156, 0.7834, 0.8726, 0.8581, 0.7858, 0.7881, 0.8063],
        [1.0235, 1.5533, 1.2155, 1.2048, 1.3963, 1.1310, 1.1724, 1.0639],
        [0.8762, 1.3490, 1.2923, 1.0926, 1.4703, 0.9566, 0.9658, 0.8558],
        [0.8136, 1.0611, 0.9131, 1.1636, 1.0969, 0.9443, 0.9587, 0.8521],
        [0.6104, 0.9369, 0.9576, 0.8773, 1.3042, 0.7900, 0.8378, 0.6136],
        [0.8623, 0.9678, 0.8163, 0.9727, 1.1161, 1.6464, 0.9765, 0.7441],
        [0.6911, 0.8392, 0.6931, 0.7325, 0.8239, 0.7757, 1.0456, 0.6657],
        [0.8493, 0.8174, 0.8041, 0.9013, 0.8003, 0.7451, 0.7408, 1.1771]],
       grad_fn=<AsStridedBackward>)
dt.shape
Out[177]: torch.Size([8, 8])
As you can see, this produces a tensor of size (8, 8), with the dot products I want lying on the diagonal. Is there a way to obtain the smaller tensor of shape (8, 1) that contains only those diagonal elements? To be clear, the diagonal elements are the required dot products of the two batches: the element at index [0][0] is the dot product of q_s[0] and p_s[0], the element at index [1][1] is the dot product of q_s[1] and p_s[1], and so on.
Is there a better way to obtain the desired dot product in PyTorch?
You can do it directly:
a = torch.rand(8, 1, 128)
b = torch.rand(8, 1, 128)
torch.sum(a * b, dim=(1, 2))
# tensor([29.6896, 30.4994, 32.9577, 30.2220, 33.9913, 35.1095, 32.3631, 30.9153])
torch.diag(torch.tensordot(a, b, dims=([1,2], [1,2])))
# tensor([29.6896, 30.4994, 32.9577, 30.2220, 33.9913, 35.1095, 32.3631, 30.9153])
If you set dim=2 in the sum (instead of dim=(1, 2)), you will get a tensor with shape (8, 1).
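Equivalent one-liners for the same batched dot product, sketched with standard PyTorch ops (tensor names follow the answer above):
import torch

a = torch.rand(8, 1, 128)
b = torch.rand(8, 1, 128)

# Batched matrix multiply: (8, 1, 128) x (8, 128, 1) -> (8, 1, 1), reshaped to (8, 1)
dots_bmm = torch.bmm(a, b.transpose(1, 2)).view(8, 1)

# Einstein summation over the last two dims: one scalar per batch element, shape (8,)
dots_einsum = torch.einsum('bij,bij->b', a, b).unsqueeze(1)

print(torch.allclose(dots_bmm, dots_einsum))  # True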

Pytorch summary only works for one specific input size for U-Net

I am trying to implement the U-Net architecture in PyTorch. When I print the model using print(model) I get the correct architecture,
but when I try to print the summary using the following input size (or any other input size, for that matter):
from torchsummary import summary
summary(model, input_size=(13, 572, 572))
I get an error:
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 70 and 71 in dimension 2 at /Users/distiller/project/conda/conda-bld/pytorch_1579022061893/work/aten/src/TH/generic/THTensor.cpp:612
However, it works perfectly if I give the input size as input_size=(3, 224, 224) (like it worked for this person here). I am baffled.
Can someone help me figure out what's wrong?
Edit: I have used the model architecture from here.
The UNet architecture you provided doesn't support that shape (unless the depth parameter is <= 3). Ultimately, the reason is that a downsampling operation isn't invertible with respect to size, since multiple input shapes map to the same output shape. For example, consider
>>> torch.nn.functional.max_pool2d(torch.zeros(1, 1, 10, 10), 2).shape
torch.Size([1, 1, 5, 5])
>>> torch.nn.functional.max_pool2d(torch.zeros(1, 1, 11, 11), 2).shape
torch.Size([1, 1, 5, 5])
So the question is: given only that the output shape is 5x5, what was the shape of the input? Was it 10x10 or 11x11? The same phenomenon applies to downsampling via strided convolutions.
The problem is that the UNet class tries to combine features from the downsampling half of the network with features in the upsampling half. If it "guesses wrong" about the original shape during upsampling, you will receive a dimension-mismatch error.
To avoid this issue, you'll need to ensure that the height and width of your input data are multiples of 2**(depth-1). So, for the default depth=5, you need the input image height and width to be a multiple of 16 (e.g. 560 or 576). Alternatively, since 572 is divisible by 4, you could set depth=3 to make it work.
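If you want to keep depth=5 with an arbitrary input size, one option is to pad the input up to the nearest multiple of 2**(depth-1) before feeding it to the network. A minimal sketch (pad_to_multiple is a hypothetical helper, not part of the linked UNet code):
import torch
import torch.nn.functional as F

def pad_to_multiple(x, multiple):
    # Pad the last two dims (H, W) of a (N, C, H, W) tensor up to a multiple.
    h, w = x.shape[-2:]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    # F.pad's 4-tuple pads (left, right) of W and (top, bottom) of H.
    return F.pad(x, (0, pad_w, 0, pad_h))

x = torch.zeros(1, 13, 572, 572)
x_padded = pad_to_multiple(x, 2 ** (5 - 1))  # depth=5 -> multiple of 16
print(x_padded.shape)                        # torch.Size([1, 13, 576, 576])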

expected input to have 4 dimensions, but got array with shape

I have this error
Error when checking input: expected input_13 to have 4 dimensions, but got array with shape (7, 100, 100)
For the following code, how should I reshape the array to have 4 dimensions? I searched for it but didn't understand the previous solutions. Please ask if anything is unclear; it's a very common issue with convolutional neural networks.
inputs=Input(shape=(100,100,1))
x=Conv2D(16,(3,3), padding='same')(inputs)
x=Activation('relu')(x)
x=Conv2D(8,(3,3))(x)
x=Activation('relu')(x)
x=MaxPooling2D(pool_size=(2,2))(x)
x=Dropout(0.2)(x)
x=Dense(num_classes)(x)
output=Activation('softmax')(x)
model=Model([inputs], output)
If x is your data array, you should simply apply the following transformation:
x = x.reshape((-1, 100, 100, 1))
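A quick sketch with the shape from the error message (x here is random stand-in data):
import numpy as np

x = np.random.rand(7, 100, 100)      # the shape reported in the error
x = x.reshape((-1, 100, 100, 1))     # add the trailing channel dimension
print(x.shape)                       # (7, 100, 100, 1)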
