Convert a List of Tensors to a Tensor - pytorch

I have a list of tensors like this:
[tensor(-2.9222, grad_fn=<SqueezeBackward1>), tensor(-2.8192, grad_fn=<SqueezeBackward1>), tensor(-3.1894, grad_fn=<SqueezeBackward1>), tensor(-2.9048, grad_fn=<SqueezeBackward1>)]
I want it to be in this form:
tensor([-0.5575, -0.9004, -0.8491, ..., -0.7345, -0.6729, -0.7553],
grad_fn=<SqueezeBackward1>)
How can I do this?
I appreciate the help.

Since these tensors are 0-dimensional, torch.cat will not work, but you can use torch.stack (which creates a new dimension along which to concatenate):
a = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(2.0, requires_grad=True)
torch.stack([a, b], dim=0)
>>> tensor([1., 2.], grad_fn=<StackBackward>)
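
Applied to a list like the one in the question (a sketch; the values below are stand-ins for the asker's 0-d tensors):
import torch
losses = [torch.tensor(v, requires_grad=True) for v in [-2.9222, -2.8192, -3.1894, -2.9048]]
stacked = torch.stack(losses)
print(stacked)  # tensor([-2.9222, -2.8192, -3.1894, -2.9048], grad_fn=<StackBackward0>)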

Related

Slicing a tensor with a dimension varying

I'm trying to slice a PyTorch tensor my_tensor of dimensions s x b x c so that the slicing along the first dimension varies according to a tensor indices of length b, to the effect of:
my_tensor[0:indices, torch.arange(0, b, dtype=torch.long), :] = something
The code above doesn't work, raising TypeError: tuple indices must be integers or slices, not tuple.
What I'm aiming for is, for example, if indices = torch.tensor([3, 5, 4]) then:
my_tensor[0:3, 0, :] = something
my_tensor[0:5, 1, :] = something
my_tensor[0:4, 2, :] = something
I'm hoping for a tensorized way to do this so I don't have to resort to a for loop. Also, the method needs to be compatible with TorchScript. Thanks very much.
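
One common tensorized approach (a sketch, not from the original thread; the concrete sizes below are hypothetical) is to build a boolean mask with torch.arange and assign through it:
import torch
# hypothetical sizes matching the question's s x b x c layout
s, b, c = 6, 3, 2
my_tensor = torch.zeros(s, b, c)
indices = torch.tensor([3, 5, 4])
something = 1.0
# mask[i, j] is True exactly when i < indices[j]
mask = torch.arange(s).unsqueeze(1) < indices.unsqueeze(0)  # shape (s, b)
my_tensor[mask] = something  # broadcasts over the trailing c dimension
Boolean-mask assignment like this avoids the Python loop; whether it scripts cleanly should still be verified against your TorchScript version.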

Slice PyTorch tensors which are saved in a list

I have the following code segment to generate random samples. The generated samples form a list, where each entry of the list is a tensor. Each tensor has two elements. I would like to extract the first element from all tensors in the list, and the second element from all tensors as well. How can I perform this kind of tensor slicing operation?
import torch
import pyro
import pyro.distributions as dist
num_samples = 250
# note that both covariance matrices are diagonal
mu1 = torch.tensor([0., 5.])
sig1 = torch.tensor([[2., 0.], [0., 3.]])
dist1 = dist.MultivariateNormal(mu1, sig1)
samples1 = [pyro.sample('samples1', dist1) for _ in range(num_samples)]
samples1
I'd recommend torch.cat with a list comprehension:
col1 = torch.cat([t[0:1] for t in samples1])  # t[0:1] keeps a 1-element tensor; torch.cat can't take the 0-d t[0]
col2 = torch.cat([t[1:2] for t in samples1])
Docs for torch.cat: https://pytorch.org/docs/stable/generated/torch.cat.html
Alternatively:
You could turn your list of 1D tensors into a single big 2D tensor using torch.stack, then do a normal slice:
samples1_t = torch.stack(samples1)
col1 = samples1_t[:, 0] # : means all rows
col2 = samples1_t[:, 1]
Docs for torch.stack: https://pytorch.org/docs/stable/generated/torch.stack.html
I should mention that PyTorch tensors support unpacking out of the box: you can unpack the first axis into multiple variables without additional considerations. Here torch.stack will output a tensor of shape (rows, cols); we just need to transpose it to (cols, rows) and unpack:
>>> c1, c2 = torch.stack(samples1).T
So you get c1 and c2 shaped (rows,):
>>> c1
tensor([0.6433, 0.4667, 0.6811, 0.2006, 0.6623, 0.7033])
>>> c2
tensor([0.2963, 0.2335, 0.6803, 0.1575, 0.9420, 0.6963])
Other answers that suggest .stack() or .cat() are perfectly fine from a PyTorch perspective.
However, since the context of the question involves pyro, may I add the following:
Since you are doing IID samples
[pyro.sample('samples1', dist1) for _ in range(num_samples)]
A better way to do it with pyro is
dist1 = dist.MultivariateNormal(mu1, sig1).expand([num_samples])
This tells pyro that the distribution is batched with a batch size of num_samples. Sampling from this will produce
>>> dist1.sample()
tensor([[-0.8712,  6.6087],
        [ 1.6076, -0.2939],
        [ 1.4526,  6.1777],
        ...
        [-0.0168,  7.5085],
        [-1.6382,  2.1878]])
Now it's easy to solve your original question. Just slice it like
samples = dist1.sample()
samples[:, 0] # all first elements
samples[:, 1] # all second elements
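Putting the two pieces together (a sketch; mu1, sig1, and num_samples as defined in the question):
import torch
import pyro
import pyro.distributions as dist
mu1 = torch.tensor([0., 5.])
sig1 = torch.tensor([[2., 0.], [0., 3.]])
# one batched sample statement replaces the Python loop
dist1 = dist.MultivariateNormal(mu1, sig1).expand([250])
samples = pyro.sample('samples1', dist1)  # shape (250, 2)
col1, col2 = samples[:, 0], samples[:, 1]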

Using tensordot with torch.sparse tensors

Is it possible to use a similar method as "tensordot" with torch.sparse tensors?
I am trying to apply a 4-dimensional tensor to a 2-dimensional tensor. This is possible using torch or numpy. However, I have not found a way to do it using torch.sparse without first making the sparse tensor dense with .to_dense().
More precisely, here is what I want to do without using ".to_dense()":
import torch
import torch.sparse
nb_x = 4
nb_y = 3
coordinates = torch.LongTensor([[0,1,2],[0,1,2],[0,1,2],[0,1,2]])
values = torch.FloatTensor([1,2,3])
tensor4D = torch.sparse.FloatTensor(coordinates,values,torch.Size([nb_x,nb_y,nb_x,nb_y]))
inp = torch.rand((nb_x,nb_y))
#what I want to do
out = torch.tensordot(tensor4D.to_dense(),inp,dims=([2,3],[0,1]))
print(inp)
print(out)
Alternatively, here is a similar code using numpy:
import numpy as np
tensor4D = np.zeros((4,3,4,3))
tensor4D[0,0,0,0] = 1
tensor4D[1,1,1,1] = 2
tensor4D[2,2,2,2] = 3
inp = np.random.rand(4,3)
out = np.tensordot(tensor4D,inp)
print(inp)
print(out)
Thanks for helping!
Your specific tensordot can be cast to a simple matrix multiplication by "squeezing" the first two and last two dimensions of tensor4D.
In short, what you want to do is
raw = tensor4D.view(nb_x*nb_y, nb_x*nb_y) @ inp.flatten()
out = raw.view(nb_x, nb_y)
However, since view and reshape are not implemented for sparse tensors, you'll have to do it manually:
sz = tensor4D.shape
idx = tensor4D._indices()
# map each 4-d index (i, j, k, l) to the 2-d index (i*sz[1] + j, k*sz[3] + l)
coeff = torch.tensor([[sz[1], 1, 0, 0], [0, 0, sz[3], 1]])
reshaped = torch.sparse.FloatTensor(coeff @ idx, tensor4D._values(), torch.Size([nb_x*nb_y, nb_x*nb_y]))
# once we reshaped tensor4D it's all downhill from here
raw = torch.sparse.mm(reshaped, inp.flatten()[:, None])
out = raw.reshape(nb_x, nb_y)
print(out)
And the output is
tensor([[0.4180, 0.0000, 0.0000],
[0.0000, 0.6025, 0.0000],
[0.0000, 0.0000, 0.5897],
[0.0000, 0.0000, 0.0000]])
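As a quick sanity check (a sketch reusing the variables above), the sparse result can be compared against the dense tensordot from the question:
dense_out = torch.tensordot(tensor4D.to_dense(), inp, dims=([2, 3], [0, 1]))
print(torch.allclose(out, dense_out))  # expected: True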
Indeed, this works very well, thank you for your answer!
The weakness of this method, it seems to me, is that it is hard to generalize.
In fact, inp and out are supposed to be images. Here they are black-and-white images, since there are only two dimensions: height and width.
If instead I take RGB images, then I will have to consider 6-D tensors acting on 3-D tensors. I can still apply the same trick by "squeezing" the first three dimensions together and the last three dimensions together, but it seems to me that this will get involved very quickly (maybe I am wrong), whereas using tensordot would be much simpler to generalize; see the sketch after this comment.
Therefore, I am going to use the solution you proposed, but I am still interested if someone finds another solution.
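The trick does generalize; a hypothetical helper (flatten_coeff is an illustrative name, not from the thread) that builds the coefficient matrix for an arbitrary split of the dimensions might look like this:
import torch
def flatten_coeff(shape, split):
    # build the 2 x n matrix mapping an n-d sparse index to a 2-d index,
    # flattening dims [0, split) into rows and [split, n) into columns (row-major)
    n = len(shape)
    coeff = torch.zeros(2, n, dtype=torch.long)
    stride = 1
    for d in reversed(range(split)):
        coeff[0, d] = stride
        stride *= shape[d]
    stride = 1
    for d in reversed(range(split, n)):
        coeff[1, d] = stride
        stride *= shape[d]
    return coeff
# for the 4-d case above, flatten_coeff(tensor4D.shape, 2) reproduces
# [[sz[1], 1, 0, 0], [0, 0, sz[3], 1]]; for 6-d tensors acting on RGB images, use split=3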

Pytorch sum over a list of tensors along an axis

I have a list of tensors of the same shape.
I would like to sum the entire list of tensors along an axis.
Does torch.cumsum perform this op along a dim?
If so, does it require the list to be converted to a single tensor and summed over?
You don't need cumsum; sum is your friend.
And yes, you should first convert the list into a single tensor with stack or cat, depending on your needs. Something like this:
import torch
my_list = [torch.randn(3, 5), torch.randn(3, 5)]
result = torch.stack(my_list, dim=0).sum(dim=0).sum(dim=0)  # first sum reduces over the list, second over dim 0 of each tensor
print(result.shape)  # torch.Size([5])
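If you only want the elementwise sum across the list (keeping each tensor's (3, 5) shape), a single reduction is enough; a sketch reusing my_list from above:
summed = torch.stack(my_list, dim=0).sum(dim=0)
print(summed.shape)  # torch.Size([3, 5])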

Tensorflow Expanding Placeholder Dimensions

I have a 2-D placeholder tensor with dimensions (2, 2). How can I expand the columns (keeping the same number of dimensions) so that the new tensor is (2, 3), and assign a constant value to the new column?
For example, the current data may look like
[[2,2], [2,2]]
And I want to transform it through TensorFlow into (prepending a constant column of 1s):
[[1,2,2], [1,2,2]]
You can use the tf.concat() op to concatenate a constant with your placeholder:
placeholder = tf.placeholder(tf.int32, shape=[2, 2])
prefix_column = tf.constant([[1], [1]])
expanded_placeholder = tf.concat([prefix_column, placeholder], axis=1)
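To see it run (a sketch assuming the TF 1.x API used in the answer):
import tensorflow as tf  # TF 1.x
placeholder = tf.placeholder(tf.int32, shape=[2, 2])
prefix_column = tf.constant([[1], [1]])
expanded_placeholder = tf.concat([prefix_column, placeholder], axis=1)
with tf.Session() as sess:
    result = sess.run(expanded_placeholder, feed_dict={placeholder: [[2, 2], [2, 2]]})
    print(result)  # [[1 2 2] [1 2 2]]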
