Left shift tensor in PyTorch

I have a tensor a of shape (1, N, 1). I need to left shift the tensor along dimension 1 and add a new value as replacement. I have found a way to make this work and following is the code.
a = torch.from_numpy(np.array([1, 2, 3]))
a = a.unsqueeze(0).unsqueeze(2) # (1, 3, 1), my data resembles this shape, hence the two unsqueeze calls
# want to left shift a along dim 1 and insert a new value at the end
# I achieve the required shifts using the following code
b = a.squeeze()
c = b.roll(shifts=-1)
c[-1] = 4
c = c.unsqueeze(0).unsqueeze(2)
# c = [[[2], [3], [4]]]
My question is, is there a simpler way to do this? Thanks.

You don't actually need to squeeze your input tensor a, perform the operations, and then unsqueeze it again. Instead, you can do those two operations directly on a as follows:
# No need to squeeze
c = torch.roll(a, shifts=-1, dims=1)
c[:,-1,:] = 4
# No need to unsqueeze
# c = [[[2], [3], [4]]]
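If you prefer to avoid the in-place assignment, a roll-free alternative (just a sketch, assuming the new value is 4 as above) is to drop the first element along dim 1 and concatenate the new value at the end:
new_val = torch.full((1, 1, 1), 4, dtype=a.dtype)
c = torch.cat([a[:, 1:, :], new_val], dim=1)
# c = [[[2], [3], [4]]]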

Related

compress two consecutive dimensions in PyTorch

I'm looking for a function such as Tensor.compress(*dims), where dims=(int ...) is a consecutive sequence of integers:
a = torch.rand(2,2,3)
b = a.compress(0,1)
b.size()
>>> (4,3)
I know view would work; however, if I don't know the shape of a in advance, I have to do an extra operation to acquire its size before calling view, which is not what I want.
You do not have to explicitly "do the math"; Tensor.view can do some of it for you if you pass -1 as the size of one of the dimensions:
b = a.view(-1, *a.shape[2:])
b.shape
>>> torch.Size([4, 3])
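If you want this for arbitrary consecutive dimensions rather than only the leading ones, a small helper along the same lines may be handy (a sketch; merge_dims is a hypothetical name, not part of PyTorch):
def merge_dims(t, d):
    # flatten dimensions d and d+1 into one, letting -1 infer the merged size
    return t.reshape(*t.shape[:d], -1, *t.shape[d + 2:])

a = torch.rand(2, 2, 3)
merge_dims(a, 0).shape
>>> torch.Size([4, 3])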

Find n smallest values in a list of tensors

I am trying to find the indices of the n smallest values in a list of tensors in pytorch. Since these tensors might contain many non-unique values, I cannot simply compute percentiles to obtain the indices. The ordering of non-unique values does not matter however.
I came up with the following solution but am wondering if there is a more elegant way of doing it:
import torch
n = 10
tensor_list = [torch.randn(10, 10), torch.zeros(20, 20), torch.ones(30, 10)]
all_sorted, all_sorted_idx = torch.sort(torch.cat([t.view(-1) for t in tensor_list]))
cum_num_elements = torch.cumsum(torch.tensor([t.numel() for t in tensor_list]), dim=0)
cum_num_elements = torch.cat([torch.tensor([0]), cum_num_elements])
split_indeces_lt = [all_sorted_idx[:n] < cum_num_elements[i + 1] for i, _ in enumerate(cum_num_elements[1:])]
split_indeces_ge = [all_sorted_idx[:n] >= cum_num_elements[i] for i, _ in enumerate(cum_num_elements[:-1])]
split_indeces = [all_sorted_idx[:n][torch.logical_and(lt, ge)] - c for lt, ge, c in zip(split_indeces_lt, split_indeces_ge, cum_num_elements[:-1])]
n_smallest = [t.view(-1)[idx] for t, idx in zip(tensor_list, split_indeces)]
Ideally a solution would pick a random subset of the non-unique values instead of picking the entries of the first tensor of the list.
PyTorch does provide a more elegant (I think) way to do it, with torch.unique_consecutive.
I'm going to work on a single tensor rather than a list of tensors because, as you did yourself, there is just a cat to do; unraveling the indices afterward is not hard either.
# We want to find the n=3 min values and positions in t
n = 3
t = torch.tensor([1,2,3,2,0,1,4,3,2])
# To get a random occurrence, we create a random permutation
randomizer = torch.randperm(len(t))
# first, we sort t, and get the indices
sorted_t, idx_t = t[randomizer].sort()
# small util function to extract only the n smallest values and positions
head = lambda v,w : (v[:n], w[:n])
# use unique_consecutive to remove duplicates
uniques_t, counts_t = head(*torch.unique_consecutive(sorted_t, return_counts=True))
# counts_t.cumsum gives us the position of the unique values in sorted_t
uniq_idx_t = torch.cat([torch.tensor([0]), counts_t.cumsum(0)[:-1]], 0)
# And now, we have the positions of uniques_t values in t :
final_idx_t = randomizer[idx_t[uniq_idx_t]]
print(uniques_t, final_idx_t)
#>>> tensor([0,1,2]), tensor([4,0,1])
#>>> tensor([0,1,2]), tensor([4,5,8])
#>>> tensor([0,1,2]), tensor([4,0,8])
EDIT: I think the added permutation addresses your need to pick a random occurrence.
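For the original list-of-tensors setting, mapping flat indices (into the concatenated tensor) back to a tensor index plus an offset within that tensor is straightforward too. A minimal sketch, where flat_idx stands for indices like final_idx_t above, computed on the concatenated tensor (the example values are made up):
import torch
tensor_list = [torch.randn(10, 10), torch.zeros(20, 20), torch.ones(30, 10)]
flat_idx = torch.tensor([5, 150, 420])  # example indices into the concatenated tensor
# cumulative element counts: tensor i covers the range [offsets[i], offsets[i+1])
offsets = torch.cumsum(torch.tensor([0] + [t.numel() for t in tensor_list]), dim=0)
which = torch.bucketize(flat_idx, offsets[1:], right=True)  # which tensor each index falls into
local = flat_idx - offsets[which]                           # position in that tensor's flattened view
print(which, local)
#>>> tensor([0, 1, 1]) tensor([  5,  50, 320])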

How to use built-in `slice` to access 2-d blocks in 2-d array?

I have a 2-d numpy array, for which I would like to modify 2-d blocks (like a 3x3 sub-block on a 9x9 sudoku board). Instead of using fancy indexing, I would like to use the built-in slice. Is there a way to make this work? I am thinking that the stride argument (third arg of slice) can be used to do this somehow, but I can't quite figure it out. My attempt is below.
import numpy as np
# make sample array (dim-1)
x = np.linspace(1, 81, 81).astype(int)
i = slice(0, 3)
print(x[i])
# [1 2 3]
# make sample array (dim-2)
X = x.reshape((9, 9))
Say I wanted to access the first 3 rows and first 3 columns of X. I can do it with fancy indexing as:
print(X[:3, :3])
# [[ 1 2 3]
# [10 11 12]
# [19 20 21]]
Trying to use similar logic to the dim-1 case with slice:
j = np.array([slice(0,3), slice(0,3)]) # wrong way to access
print(X[j])
Throws the following error:
IndexError: arrays used as indices must be of integer (or boolean) type
If you subscript with X[:3, :3], then behind the scenes you pass a tuple of slices, namely (slice(3), slice(3)).
So you can construct a j with:
j = (slice(3), slice(3))
or you can obtain the a, b block with:
j = (slice(3*a, 3*a+3), slice(3*b, 3*b+3))
so here a=0 and b=1, for example, will yield the X[0:3, 3:6] part: a block that contains the first three rows and the second three columns.
or you can make a tuple with a variable number of items. For example, for an n-dimensional array you can make an n-tuple where each item is a slice(3) object:
j = (slice(3),) * n
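Applied to the 9x9 sudoku-style array from the question, a quick sketch that uses such tuples of slices to modify every 3x3 block (here simply adding 100 to each block) could look like:
import numpy as np
X = np.arange(1, 82).reshape(9, 9)  # same 9x9 array as in the question
for a in range(3):
    for b in range(3):
        j = (slice(3 * a, 3 * a + 3), slice(3 * b, 3 * b + 3))
        block = X[j]   # basic slicing returns a view of the (a, b) block
        block += 100   # in-place change is reflected in X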

Preserve dimension while reshaping

Let's say we have a tensor of size B x C x W x H (as common for batches of images), and we want to reshape it to B x M where M = C*W*H. Is there a built in way to do so without explicitly mentioning B?
If we know B in advance we can do following, even without explicitly knowing any of the three C,W,H:
a = torch.randn(20,3,512,512)
b = a.reshape((20, -1)) #we can use -1 to infer the dimension `M`
But can we also do so without knowing B?
(I know we could obviously find B using B = a.shape[0], but my question is whether it is possible without knowing B either.)
The only other way would be to calculate the second dimension and use -1 for the first.
a = torch.randn(20,3,512,512)
print(a.shape)
b = a.reshape((20, -1))
print(b.shape)
b = a.reshape((-1, 786432)) # 3*512*512
print(b.shape)
torch.Size([20, 3, 512, 512])
torch.Size([20, 786432])
torch.Size([20, 786432])
This is because there can be only one -1 in a reshape.
In principle you could make it a generic function that works with any batch size by simply using the first dimension of the input, e.g.:
a = torch.randn(20, 3, 512, 512)
b = a.reshape((a.shape[0], -1))
You can wrap it in a function and just call it whenever necessary.
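A minimal sketch of such a wrapper (flatten_batch is just an illustrative name):
import torch

def flatten_batch(t):
    # keep the first (batch) dimension, flatten the remaining ones into a single dimension
    # torch.flatten(t, start_dim=1) would produce the same shape
    return t.reshape(t.shape[0], -1)

b = flatten_batch(torch.randn(20, 3, 512, 512))
print(b.shape)
torch.Size([20, 786432])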

Rearranging a 3-D array using indices from sorting?

I have a 3-D array of random numbers of size [channels = 3, height = 10, width = 10].
Then I sorted it along the columns using PyTorch's sort and obtained the indices as well.
Now, I would like to return to the original matrix using these indices. I currently use for loops to do this (without considering the batches). The code is:
import torch
torch.manual_seed(1)
ch = 3
h = 10
w = 10
inp_unf = torch.randn(ch,h,w)
inp_sort, indices = torch.sort(inp_unf,1)
resort = torch.zeros(inp_sort.shape)
for i in range(ch):
    for j in range(inp_sort.shape[1]):
        for k in range(inp_sort.shape[2]):
            temp = inp_sort[i, j, k]
            resort[i, indices[i, j, k], k] = temp
I would like it to be vectorized considering batches as well i.e.input size is [batch, channel, height, width].
Using Tensor.scatter_()
You can directly scatter the sorted tensor back into its original state using the indices provided by sort():
torch.zeros(ch,h,w).scatter_(dim=1, index=indices, src=inp_sort)
The intuition is based on the previous answer below. As scatter() is basically the reverse of gather(), inp_reunf = inp_sort.gather(dim=1, index=reverse_indices) gives the same result as inp_reunf.scatter_(dim=1, index=indices, src=inp_sort):
Previous answer
Note: while correct, this is probably less performant, as it calls the sort() operation a second time.
You need to obtain the sorting "reverse indices", which can be done by "sorting the indices returned by sort()".
In other words, given x_sort, indices = x.sort(), you have x[indices] -> x_sort ; while what you want is reverse_indices such that x_sort[reverse_indices] -> x.
This can be obtained as follows: _, reverse_indices = indices.sort().
import torch
torch.manual_seed(1)
ch, h, w = 3, 10, 10
inp_unf = torch.randn(ch,h,w)
inp_sort, indices = inp_unf.sort(dim=1)
_, reverse_indices = indices.sort(dim=1)
inp_reunf = inp_sort.gather(dim=1, index=reverse_indices)
print(torch.equal(inp_unf, inp_reunf))
# True
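The scatter-based approach from the first part extends directly to a batched input of shape [batch, channel, height, width]; if the sort is along the height axis, that is simply dim=2. A minimal sketch under that assumption:
import torch
torch.manual_seed(1)
batch, ch, h, w = 4, 3, 10, 10
inp_unf = torch.randn(batch, ch, h, w)
inp_sort, indices = inp_unf.sort(dim=2)
# scatter the sorted values back to their original positions along dim 2
resort = torch.zeros_like(inp_sort).scatter_(dim=2, index=indices, src=inp_sort)
print(torch.equal(inp_unf, resort))
# True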
