Tensorflow Expanding Placeholder Dimensions - python-3.x

I have a 2-D placeholder tensor with shape (2, 2). How can I add a column (keeping the same number of rows) so that the new tensor has shape (2, 3), and assign a constant value to the new column?
For example, the current data may look like
[[2,2], [2,2]]
And I want to transform it with TensorFlow, prepending a constant column of 1s, into:
[[1,2,2], [1,2,2]]

You can use the tf.concat() op to concatenate a constant with your placeholder:
placeholder = tf.placeholder(tf.int32, shape=[2, 2])
prefix_column = tf.constant([[1], [1]])  # shape (2, 1): one 1 per row
expanded_placeholder = tf.concat([prefix_column, placeholder], axis=1)  # shape (2, 3)
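To check the result end to end, here is a minimal sketch (assuming TensorFlow 1.x, since tf.placeholder is graph-mode API; the graph is rebuilt here so the snippet is self-contained):
import tensorflow as tf

placeholder = tf.placeholder(tf.int32, shape=[2, 2])
prefix_column = tf.constant([[1], [1]])
expanded_placeholder = tf.concat([prefix_column, placeholder], axis=1)

with tf.Session() as sess:
    # feed the original (2, 2) data; the result has the constant column prepended
    print(sess.run(expanded_placeholder, feed_dict={placeholder: [[2, 2], [2, 2]]}))
    # [[1 2 2]
    #  [1 2 2]]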

Related

How to get data in a tensor with some coordinates efficiently?

There is a 5-dimensional tensor dense_volume1, and I have many 4-element index tensors coord inside a list coords.
How can I index the original value tensor efficiently?
The code below seems too slow:
feats = []
for coord in coords:
    feats.append(dense_volume1[coord[3], :, coord[0], coord[1], coord[2]])
Programmatically indexing with coord
You can permute the axes of dense_volume1 such that you are able to index with sequential dimensions from coord. In other words, by building a tensor dv such that:
dense_volume1[coord[3], :, coord[0], coord[1], coord[2]]
Is equal to
dv[coord[0], coord[1], coord[2], coord[3], :] # trailing ':' is not necessary
You can do so with torch.permute:
>>> dv = dense_volume1.permute(2,3,4,0,1) # the returned tensor is a view
Then you can simply index with coord by unpacking it into a tuple:
>>> for coord in coords:
...     feats.append(dv[tuple(coord)])
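If the Python loop itself is the bottleneck, a fully vectorized sketch (assuming the coords can be stacked into one (N, 4) integer tensor; shapes below are made up for illustration) gathers all features in a single advanced-indexing call:
import torch

dense_volume1 = torch.rand(7, 16, 4, 5, 6)      # hypothetical (batch, channel, x, y, z)
coords = [torch.tensor([1, 2, 3, 0]), torch.tensor([0, 4, 5, 6])]

dv = dense_volume1.permute(2, 3, 4, 0, 1)       # (x, y, z, batch, channel), a view
C = torch.stack(coords)                         # (N, 4)
feats = dv[C[:, 0], C[:, 1], C[:, 2], C[:, 3]]  # (N, channel), one indexing call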

Expand the tensor by several dimensions

In PyTorch, given a tensor of size [3], how do you expand it by several dimensions to size [3,2,5,5] such that the added dimensions hold the corresponding values from the original tensor? For example, given the size-[3] vector [1,2,3], the first [2,5,5] slice should have all values 1, the second all values 2, and the third all values 3.
In addition, how to expand the vector of size [3,2] to [3,2,5,5]?
One way to do it that I can think of is to create a tensor of the same size with torch.ones_like and then use einsum, but I think there should be an easier way.
You can first unsqueeze the appropriate number of singleton dimensions, then expand to a view at the target shape with torch.Tensor.expand:
>>> x = torch.rand(3)
>>> target = [3,2,5,5]
>>> x[:, None, None, None].expand(target)
A nice workaround is to use torch.Tensor.reshape or torch.Tensor.view to perform multiple unsqueezes at once:
>>> x.view(-1, 1, 1, 1).expand(target)
This allows for a more general approach to handle any arbitrary target shape:
>>> x.view(len(x), *(1,)*(len(target)-1)).expand(target)
For an even more general implementation, where x can be multi-dimensional:
>>> x = torch.rand(3, 2)
# just to make sure the target shape is valid w.r.t. x
>>> assert list(x.shape) == list(target[:x.ndim])
>>> x.view(*x.shape, *(1,)*(len(target)-x.ndim)).expand(target)
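Putting it together, a small sketch checking both the 1-D and 2-D cases (expand_to is just an illustrative helper name, not a library function):
import torch

def expand_to(x, target):
    # append singleton dims until x has len(target) dims, then expand (a view, no copy)
    return x.view(*x.shape, *(1,) * (len(target) - x.ndim)).expand(target)

x1 = torch.tensor([1., 2., 3.])
y1 = expand_to(x1, [3, 2, 5, 5])
assert y1.shape == (3, 2, 5, 5) and (y1[2] == 3).all()  # third [2,5,5] block is all 3s

x2 = torch.rand(3, 2)
y2 = expand_to(x2, [3, 2, 5, 5])
assert (y2[1, 0] == x2[1, 0]).all()                     # each [5,5] block repeats one scalar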

Slicing a tensor with a dimension varying

I'm trying to slice a PyTorch tensor my_tensor of dimensions s x b x c so that the slicing along the first dimension varies according to a tensor indices of length b, to the effect of:
my_tensor[0:indices, torch.arange(0, b, dtype=torch.long), :] = something
The code above doesn't work; it fails with TypeError: tuple indices must be integers or slices, not tuple.
What I'm aiming for is, for example, if indices = torch.tensor([3, 5, 4]) then:
my_tensor[0:3, 0, :] = something
my_tensor[0:5, 1, :] = something
my_tensor[0:4, 2, :] = something
I'm hoping for a tensorized way to do this so I don't have to resort to a for loop. Also, the method needs to be compatible with TorchScript. Thanks very much.
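One common tensorized approach (a minimal sketch, not part of the original thread, assuming something is a scalar) is to build a boolean mask from torch.arange and fill through it with masked_fill, which is also TorchScript-friendly:
import torch

s, b, c = 6, 3, 4
my_tensor = torch.zeros(s, b, c)
indices = torch.tensor([3, 5, 4])
something = 1.0

# mask[i, j] is True exactly when i < indices[j], i.e. rows 0:indices[j] of column j
mask = torch.arange(s).unsqueeze(1) < indices.unsqueeze(0)        # shape (s, b)
my_tensor = my_tensor.masked_fill(mask.unsqueeze(-1), something)  # broadcast over c
For a tensor-valued something, torch.where(mask.unsqueeze(-1), something, my_tensor) is the analogous scriptable form.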

slice Pytorch tensors which are saved in a list

I have the following code segment to generate random samples. The generated samples form a list, where each entry is a tensor with two elements. I would like to extract the first element from every tensor in the list, and likewise the second element. How can I perform this kind of tensor slicing?
import torch
import pyro
import pyro.distributions as dist
num_samples = 250
# note that both covariance matrices are diagonal
mu1 = torch.tensor([0., 5.])
sig1 = torch.tensor([[2., 0.], [0., 3.]])
dist1 = dist.MultivariateNormal(mu1, sig1)
samples1 = [pyro.sample('samples1', dist1) for _ in range(num_samples)]
samples1
I'd recommend torch.cat with a list comprehension; slice with t[0:1] rather than index with t[0], so each piece stays 1-D (torch.cat cannot concatenate zero-dimensional tensors):
col1 = torch.cat([t[0:1] for t in samples1])
col2 = torch.cat([t[1:2] for t in samples1])
Docs for torch.cat: https://pytorch.org/docs/stable/generated/torch.cat.html
ALTERNATIVELY
You could turn your list of 1D tensors into a single big 2D tensor using torch.stack, then do a normal slice:
samples1_t = torch.stack(samples1)
col1 = samples1_t[:, 0] # : means all rows
col2 = samples1_t[:, 1]
Docs for torch.stack: https://pytorch.org/docs/stable/generated/torch.stack.html
I should mention that PyTorch tensors support unpacking out of the box: you can unpack the first axis into multiple variables without additional considerations. Here torch.stack outputs a tensor of shape (rows, cols); we just need to transpose it to (cols, rows) and unpack:
>>> c1, c2 = torch.stack(samples1).T
So you get c1 and c2 shaped (rows,):
>>> c1
tensor([0.6433, 0.4667, 0.6811, 0.2006, 0.6623, 0.7033])
>>> c2
tensor([0.2963, 0.2335, 0.6803, 0.1575, 0.9420, 0.6963])
Other answers that suggest .stack() or .cat() are perfectly fine from PyTorch perspective.
However, since the context of the question involves pyro, may I add the following:
Since you are doing IID samples
[pyro.sample('samples1', dist1) for _ in range(num_samples)]
A better way to do it with pyro is
dist1 = dist.MultivariateNormal(mu1, sig1).expand([num_samples])
This tells pyro that the distribution is batched with a batch size of num_samples. Sampling from this will produce
>>> dist1.sample()
tensor([[-0.8712, 6.6087],
        [ 1.6076, -0.2939],
        [ 1.4526, 6.1777],
        ...
        [-0.0168, 7.5085],
        [-1.6382, 2.1878]])
Now it's easy to solve your original question. Just slice it like:
samples = dist1.sample()
samples[:, 0] # all first elements
samples[:, 1] # all second elements
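As a self-contained version of this approach (a sketch under the same setup as the question):
import torch
import pyro.distributions as dist

num_samples = 250
mu1 = torch.tensor([0., 5.])
sig1 = torch.tensor([[2., 0.], [0., 3.]])

dist1 = dist.MultivariateNormal(mu1, sig1).expand([num_samples])
samples = dist1.sample()                   # shape (num_samples, 2)
col1, col2 = samples[:, 0], samples[:, 1]  # all first / all second elements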

Pytorch sum over a list of tensors along an axis

I have a list of tensors of the same shape.
I would like to sum the entire list of tensors along an axis.
Does torch.cumsum perform this op along a dim? If so, does it require the list to be converted to a single tensor first?
you don't need cumsum; sum is your friend.
And yes, you should first convert the list into a single tensor with stack or cat, depending on your needs. Something like this:
import torch
my_list = [torch.randn(3, 5), torch.randn(3, 5)]
# stack adds a new dim 0 of size len(my_list); the first sum collapses it,
# the second sums over the rows of the resulting (3, 5) tensor
result = torch.stack(my_list, dim=0).sum(dim=0).sum(dim=0)
print(result.shape)  # torch.Size([5])
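For what it's worth, two equivalent one-liners under the same setup (a sketch; sum accepts a tuple of dims, and Python's builtin sum adds the tensors elementwise):
import torch

my_list = [torch.randn(3, 5), torch.randn(3, 5)]
result_a = torch.stack(my_list, dim=0).sum(dim=(0, 1))  # also torch.Size([5])
result_b = sum(my_list).sum(dim=0)                      # builtin sum: elementwise addition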
