I'm trying to slice a PyTorch tensor my_tensor of dimensions s x b x c so that the slicing along the first dimension varies according to a tensor indices of length b, to the effect of:
my_tensor[0:indices, torch.arange(0, b, dtype=torch.long), :] = something
The code above doesn't work; it fails with the error TypeError: tuple indices must be integers or slices, not tuple.
What I'm aiming for is, for example, if indices = torch.tensor([3, 5, 4]) then:
my_tensor[0:3, 0, :] = something
my_tensor[0:5, 1, :] = something
my_tensor[0:4, 2, :] = something
I'm hoping for a tensorized way to do this so I don't have to resort to a for loop. Also, the method needs to be compatible with TorchScript. Thanks very much.
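One loop-free approach (a sketch, not taken from the original thread, assuming something broadcasts against the selected rows) is to build a boolean mask by comparing torch.arange(s) against indices and then assign through the mask; comparisons and boolean-mask assignment are both TorchScript-friendly:
import torch

s, b, c = 6, 3, 4                                   # example sizes, assumed for illustration
my_tensor = torch.zeros(s, b, c)
indices = torch.tensor([3, 5, 4])                   # length b, as in the question
something = 1.0

# mask[i, j] is True exactly when i < indices[j]; shape (s, b)
mask = torch.arange(s).unsqueeze(1) < indices.unsqueeze(0)
my_tensor[mask] = something                         # same effect as my_tensor[0:indices[j], j, :] = something for each j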
I am trying to perform dimensionality reduction using PCA, where outputs is a list of tensors, each of shape (1, 3, 32, 32). Here is the code:
from sklearn.decomposition import PCA
pca = PCA(10)
pca_result = pca.fit_transform(outputs)
But I keep getting this error, regardless of what I try:
ValueError: only one element tensors can be converted to Python scalars
I know that the tensors of size (1, 3, 32, 32) are causing the issue, since the error says it is looking for one-element tensors, but I don't know how to solve it.
I have tried flattening each tensor by looping over outputs (I don't know if this is the right way to solve the issue) using the following code, but it still leads to an error in PCA:
new_outputs = []
for i in outputs:
    for j in i:
        j = j.cpu()
        j = j.detach().numpy()
        j = j.flatten()
        new_outputs.append(j)
pca_result = pca.fit_transform(new_outputs)
I would appreciate it if anybody could help with this error and tell me whether the flattening approach I took is correct.
PS: I have read the existing posts (post1, post2) discussing this error, but none of them solved my problem.
Assuming your tensors are stored in a single tensor of shape (10, 3, 32, 32), where 10 is the number of tensors, you should flatten each one like this:
import torch
from sklearn.decomposition import PCA
data = torch.rand((10, 3, 32, 32))
pca = PCA(10)
pca_result = pca.fit_transform(data.flatten(start_dim=1))
data.flatten(start_dim=1) reshapes your data to (10, 3*32*32).
The error you posted is actually related to one of the posts you linked: the PCA estimator's fit() method expects an array-like object, but you provided a list of tensors.
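Since the question describes outputs as a Python list of (1, 3, 32, 32) tensors rather than a single stacked tensor, a minimal sketch of the conversion (with a synthetic list standing in for the real outputs, and assuming the tensors may live on the GPU and require grad) could look like this:
import torch
from sklearn.decomposition import PCA

outputs = [torch.rand(1, 3, 32, 32) for _ in range(20)]    # stand-in for the list described in the question

data = torch.cat(outputs, dim=0)                           # shape (20, 3, 32, 32)
data = data.detach().cpu()                                 # detach from the graph and move to CPU for NumPy/sklearn

pca = PCA(10)
pca_result = pca.fit_transform(data.flatten(start_dim=1))  # input shape (20, 3*32*32)
print(pca_result.shape)                                    # (20, 10)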
I have two tensors, each of shape (8, 1, 128), as follows.
q_s.shape
Out[161]: torch.Size([8, 1, 128])
p_s.shape
Out[162]: torch.Size([8, 1, 128])
The two tensors above represent a batch of eight 128-dimensional vectors. I want the dot product of batch q_s with batch p_s. How can I do this? I tried the torch.tensordot function as follows, and it contains the values I want, but it also does extra work that I don't need. See the following example.
dt = torch.tensordot(q_s, p_s, dims=([1,2], [1,2]))
dt
Out[176]:
tensor([[0.9051, 0.9156, 0.7834, 0.8726, 0.8581, 0.7858, 0.7881, 0.8063],
        [1.0235, 1.5533, 1.2155, 1.2048, 1.3963, 1.1310, 1.1724, 1.0639],
        [0.8762, 1.3490, 1.2923, 1.0926, 1.4703, 0.9566, 0.9658, 0.8558],
        [0.8136, 1.0611, 0.9131, 1.1636, 1.0969, 0.9443, 0.9587, 0.8521],
        [0.6104, 0.9369, 0.9576, 0.8773, 1.3042, 0.7900, 0.8378, 0.6136],
        [0.8623, 0.9678, 0.8163, 0.9727, 1.1161, 1.6464, 0.9765, 0.7441],
        [0.6911, 0.8392, 0.6931, 0.7325, 0.8239, 0.7757, 1.0456, 0.6657],
        [0.8493, 0.8174, 0.8041, 0.9013, 0.8003, 0.7451, 0.7408, 1.1771]],
       grad_fn=<AsStridedBackward>)
dt.shape
Out[177]: torch.Size([8, 8])
As we can see, this produces a tensor of size (8, 8) with the dot products I want lying on the diagonal. Is there a different way to obtain a smaller tensor of shape (8, 1) that contains only those diagonal elements? To be clear, the diagonal elements are the required dot products between the two batches: the element at index [0][0] is the dot product of q_s[0] and p_s[0], the element at index [1][1] is the dot product of q_s[1] and p_s[1], and so on.
Is there a better way to obtain the desired dot product in PyTorch?
You can do it directly:
a = torch.rand(8, 1, 128)
b = torch.rand(8, 1, 128)
torch.sum(a * b, dim=(1, 2))
# tensor([29.6896, 30.4994, 32.9577, 30.2220, 33.9913, 35.1095, 32.3631, 30.9153])
torch.diag(torch.tensordot(a, b, dims=([1, 2], [1, 2])))
# tensor([29.6896, 30.4994, 32.9577, 30.2220, 33.9913, 35.1095, 32.3631, 30.9153])
If you sum over dim=2 only, you will get a tensor of shape (8, 1).
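An equivalent formulation (a sketch, not part of the original answer) uses batched matrix multiplication or einsum, which yields the (8, 1) shape directly:
import torch

a = torch.rand(8, 1, 128)
b = torch.rand(8, 1, 128)

# (8, 1, 128) @ (8, 128, 1) -> (8, 1, 1); squeeze the last dim to get (8, 1)
dots_bmm = torch.bmm(a, b.transpose(1, 2)).squeeze(-1)

# the same batched dot product written with einsum, also shape (8, 1)
dots_einsum = torch.einsum('bik,bik->bi', a, b)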
I have this error
Error when checking input: expected input_13 to have 4 dimensions, but got array with shape (7, 100, 100)
For the following code, how should I reshape the array so it has 4 dimensions? I searched for this but didn't understand the previous solutions. Please ask if anything is unclear; this is a very common issue in convolutional neural networks.
from keras.layers import Input, Conv2D, Activation, MaxPooling2D, Dropout, Dense
from keras.models import Model

inputs=Input(shape=(100,100,1))
x=Conv2D(16,(3,3), padding='same')(inputs)
x=Activation('relu')(x)
x=Conv2D(8,(3,3))(x)
x=Activation('relu')(x)
x=MaxPooling2D(pool_size=(2,2))(x)
x=Dropout(0.2)(x)
x=Dense(num_classes)(x)
output=Activation('softmax')(x)
model=Model([inputs], output)
If x is your data array, you should simply apply the following transformation:
x = x.reshape((-1, 100, 100, 1))
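Equivalently (a sketch assuming x is a NumPy array of shape (7, 100, 100), as the error message suggests), you can add the trailing channel axis with np.expand_dims before calling model.fit:
import numpy as np

x = np.random.rand(7, 100, 100)    # stand-in for the data described in the error
x = np.expand_dims(x, axis=-1)     # same effect as x.reshape((-1, 100, 100, 1))
print(x.shape)                     # (7, 100, 100, 1)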
I have the following piece of code.
embedded = self.embedding(input).view(1, 1, -1)
embedded = self.drop(embedded)
print(embedded[0].size(), hidden[0].size())
concatenated_output = torch.cat((embedded[0], hidden[0]), 1)
The last line of the code is giving me the following error.
RuntimeError: inconsistent tensor sizes at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.9_1487344852722/work/torch/lib/THC/generic/THCTensorMath.cu:141
Please note that when I print the tensor shapes at line 3, I get the following output.
torch.Size([1, 300]) torch.Size([1, 1, 300])
Why am I getting a [1, 300] shape for the embedded tensor even though I used the view method as view(1, 1, -1)?
Any help would be appreciated!
embedded was a 3-D tensor of shape [1, 1, 300], so indexing it with embedded[0] drops the first dimension and gives the [1, 300] shape seen in the print. hidden was the output from an LSTM layer: in PyTorch, LSTM returns the hidden state [h] and the cell state [c] as a tuple, each a 3-D tensor, and this tuple is what confused me about the error.
So, I updated the last line of the code as follows and it solved the problem.
concatenated_output = torch.cat((embedded, hidden[0]), 1)
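A minimal sketch of the shape mismatch (with sizes assumed to match the printed shapes) shows why the original call failed and why the updated one works:
import torch

embedded = torch.rand(1, 1, 300)            # result of .view(1, 1, -1)
hidden = (torch.rand(1, 1, 300),            # LSTM hidden state h
          torch.rand(1, 1, 300))            # LSTM cell state c

print(embedded[0].shape, hidden[0].shape)   # torch.Size([1, 300]) torch.Size([1, 1, 300])

# embedded[0] is 2-D while hidden[0] is 3-D, so torch.cat over dim 1 fails;
# concatenating the full 3-D embedded with hidden[0] works:
concatenated_output = torch.cat((embedded, hidden[0]), 1)
print(concatenated_output.shape)            # torch.Size([1, 2, 300])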