Tensor mean across 4 dimensions - pytorch

I have a PyTorch tensor like
import torch

x = torch.arange(1, 601)
x = x.reshape(20, 5, -1).float()
This looks like
tensor([[[ 1., 2., 3., 4., 5., 6.],
[ 7., 8., 9., 10., 11., 12.],
[ 13., 14., 15., 16., 17., 18.],
[ 19., 20., 21., 22., 23., 24.],
[ 25., 26., 27., 28., 29., 30.]],
[[ 31., 32., 33., 34., 35., 36.],
[ 37., 38., 39., 40., 41., 42.],
[ 43., 44., 45., 46., 47., 48.],
[ 49., 50., 51., 52., 53., 54.],
[ 55., 56., 57., 58., 59., 60.]],
[[ 61., 62., 63., 64., 65., 66.],
[ 67., 68., 69., 70., 71., 72.],
[ 73., 74., 75., 76., 77., 78.],
[ 79., 80., 81., 82., 83., 84.],
[ 85., 86., 87., 88., 89., 90.]],
...
I'd like to add together the rows that share the same index across blocks. That is, I want to sum across axis 0 every first row:
[ 1., 2., 3., 4., 5., 6.]
[ 31., 32., 33., 34., 35., 36.]
[ 61., 62., 63., 64., 65., 66.]
:
:
and then sum across axis 0 every second row:
[ 7., 8., 9., 10., 11., 12.]
[ 37., 38., 39., 40., 41., 42.]
[ 67., 68., 69., 70., 71., 72.]
:
:
etc.
How would I do that in PyTorch?

You can achieve this by flattening the last two axes, splitting on that flattened axis, and stacking the result back into a single tensor. Here are the steps:
>>> x = torch.arange(1, 91, dtype=float).reshape(3, 5, -1)
>>> x
tensor([[[ 1., 2., 3., 4., 5., 6.],
[ 7., 8., 9., 10., 11., 12.],
[13., 14., 15., 16., 17., 18.],
[19., 20., 21., 22., 23., 24.],
[25., 26., 27., 28., 29., 30.]],
[[31., 32., 33., 34., 35., 36.],
[37., 38., 39., 40., 41., 42.],
[43., 44., 45., 46., 47., 48.],
[49., 50., 51., 52., 53., 54.],
[55., 56., 57., 58., 59., 60.]],
[[61., 62., 63., 64., 65., 66.],
[67., 68., 69., 70., 71., 72.],
[73., 74., 75., 76., 77., 78.],
[79., 80., 81., 82., 83., 84.],
[85., 86., 87., 88., 89., 90.]]], dtype=torch.float64)
First flatten axis=1 and axis=2:
>>> x.flatten(1)
tensor([[ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27., 28., 29., 30.],
[31., 32., 33., 34., 35., 36., 37., 38., 39., 40., 41., 42., 43., 44., 45., 46., 47., 48., 49., 50., 51., 52., 53., 54., 55., 56., 57., 58., 59., 60.],
[61., 62., 63., 64., 65., 66., 67., 68., 69., 70., 71., 72., 73., 74., 75., 76., 77., 78., 79., 80., 81., 82., 83., 84., 85., 86., 87., 88., 89., 90.]], dtype=torch.float64)
We now want to split the flattened axis into chunks of sizes (x.size(2),)*x.size(1), which here corresponds to (6, 6, 6, 6, 6).
>>> x.flatten(1).split((x.size(2),)*x.size(1), dim=1)
The above returns a tuple containing the splits; to reassemble them into a single tensor, use torch.stack:
>>> torch.stack(x.flatten(1).split((x.size(2),)*x.size(1), dim=1))
tensor([[[ 1., 2., 3., 4., 5., 6.],
[31., 32., 33., 34., 35., 36.],
[61., 62., 63., 64., 65., 66.]],
[[ 7., 8., 9., 10., 11., 12.],
[37., 38., 39., 40., 41., 42.],
[67., 68., 69., 70., 71., 72.]],
[[13., 14., 15., 16., 17., 18.],
[43., 44., 45., 46., 47., 48.],
[73., 74., 75., 76., 77., 78.]],
[[19., 20., 21., 22., 23., 24.],
[49., 50., 51., 52., 53., 54.],
[79., 80., 81., 82., 83., 84.]],
[[25., 26., 27., 28., 29., 30.],
[55., 56., 57., 58., 59., 60.],
[85., 86., 87., 88., 89., 90.]]], dtype=torch.float64)
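For completeness, the stacked tensor groups the same-index rows together, so the sum the question asks for is a reduction over dim=1. A short sketch; note the same result also comes directly from x.sum(dim=0), and .mean gives the mean from the title:
import torch

x = torch.arange(1, 91, dtype=float).reshape(3, 5, -1)
stacked = torch.stack(x.flatten(1).split((x.size(2),) * x.size(1), dim=1))

row_sums = stacked.sum(dim=1)               # shape (5, 6): one summed row per row index
assert torch.equal(row_sums, x.sum(dim=0))  # the equivalent one-liner
row_means = stacked.mean(dim=1)             # equals x.mean(dim=0)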

Related

Pytorch pairwise concatenation of tensors

I'd like to compute a pairwise concatenation over a specific dimension in a batched manner.
For instance,
x = torch.tensor([[[0],[1],[2]],[[3],[4],[5]]])
x.shape = torch.Size([2, 3, 1])
I would like to get y such that y is the concatenation of all pairs of vectors along one dimension, i.e.:
y = torch.tensor([[[[0,0],[0,1],[0,2]],[[1,0],[1,1],[1,2]], [[2,0], [2,1], [2,2]]],
[[[3,3],[3,4],[3,5]],[[4,3],[4,4],[4,5]], [[5,3],[5,4],[5,5]]]])
y.shape = torch.Size([2, 3, 3, 2])
So essentially, for each x[i,:], you generate all pairs of vectors and you concatenate them on the last dimension.
Is there a straightforward way of doing that?
You can do this without loops, using torch.cartesian_prod and advanced indexing. The trick is to build every ordered pair of row indices once and use it to index the tensor; indexing the batch dimension with : applies the gather to all batch elements at once.
x = torch.tensor([[[  0.,   1.,   2.],
                   [  3.,   4.,   5.],
                   [  0.,  -1.,  -2.],
                   [ -3.,  -4.,  -5.]],
                  [[  0.,  10.,  20.],
                   [ 30.,  40.,  50.],
                   [  0., -10., -20.],
                   [-30., -40., -50.]]])
idx_pairs = torch.cartesian_prod(torch.arange(x.shape[1]), torch.arange(x.shape[1]))
y = x[:, idx_pairs].view(x.shape[0], x.shape[1], x.shape[1], -1)
print(y)
tensor([[[[ 0., 1., 2., 0., 1., 2.],
[ 0., 1., 2., 3., 4., 5.],
[ 0., 1., 2., 0., -1., -2.],
[ 0., 1., 2., -3., -4., -5.]],
[[ 3., 4., 5., 0., 1., 2.],
[ 3., 4., 5., 3., 4., 5.],
[ 3., 4., 5., 0., -1., -2.],
[ 3., 4., 5., -3., -4., -5.]],
[[ 0., -1., -2., 0., 1., 2.],
[ 0., -1., -2., 3., 4., 5.],
[ 0., -1., -2., 0., -1., -2.],
[ 0., -1., -2., -3., -4., -5.]],
[[ -3., -4., -5., 0., 1., 2.],
[ -3., -4., -5., 3., 4., 5.],
[ -3., -4., -5., 0., -1., -2.],
[ -3., -4., -5., -3., -4., -5.]]],
[[[ 0., 10., 20., 0., 10., 20.],
[ 0., 10., 20., 30., 40., 50.],
[ 0., 10., 20., 0., -10., -20.],
[ 0., 10., 20., -30., -40., -50.]],
[[ 30., 40., 50., 0., 10., 20.],
[ 30., 40., 50., 30., 40., 50.],
[ 30., 40., 50., 0., -10., -20.],
[ 30., 40., 50., -30., -40., -50.]],
[[ 0., -10., -20., 0., 10., 20.],
[ 0., -10., -20., 30., 40., 50.],
[ 0., -10., -20., 0., -10., -20.],
[ 0., -10., -20., -30., -40., -50.]],
[[-30., -40., -50., 0., 10., 20.],
[-30., -40., -50., 30., 40., 50.],
[-30., -40., -50., 0., -10., -20.],
[-30., -40., -50., -30., -40., -50.]]]])
One possible way to do that would be:
all_ordered_idx_pairs = torch.cartesian_prod(torch.arange(x.shape[1]), torch.arange(x.shape[1]))
y = torch.stack([x[i][all_ordered_idx_pairs] for i in range(x.shape[0])])
After reshaping the tensor:
y = y.view(x.shape[0], x.shape[1], x.shape[1], -1)
you get:
y = torch.tensor([[[[0,0],[0,1],[0,2]],[[1,0],[1,1],[1,2]], [[2,0], [2,1], [2,2]]],
[[[3,3],[3,4],[3,5]],[[4,3],[4,4],[4,5]], [[5,3],[5,4],[5,5]]]])
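For completeness, the same pairing can also be built with pure broadcasting, without any index tensor. A sketch using the tensor from the question:
import torch

x = torch.tensor([[[0], [1], [2]], [[3], [4], [5]]])  # shape (2, 3, 1)
B, N, D = x.shape

left = x.unsqueeze(2).expand(B, N, N, D)   # left[b, i, j] == x[b, i]
right = x.unsqueeze(1).expand(B, N, N, D)  # right[b, i, j] == x[b, j]
y = torch.cat((left, right), dim=-1)       # shape (2, 3, 3, 2)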

Best way to cut a pytorch tensor into overlapping chunks?

If for instance I have:
eg6 = torch.tensor([
[ 1., 7., 13., 19.],
[ 2., 8., 14., 20.],
[ 3., 9., 15., 21.],
[ 4., 10., 16., 22.],
[ 5., 11., 17., 23.],
[ 6., 12., 18., 24.]])
batch1 = eg6
batch2 = -eg6
x = torch.cat((batch1,batch2)).view(2,6,4)
And then I want to slice it up into overlapping chunks, like a sliding window function, and have the chunks be batch-processable. For example, and just looking at the first dimension, I want 1,2,3, 3,4,5, 5,6 (or 5,6,0).
It seems unfold() kind of does what I want. It transposes the last two dimensions for some reason, but that is easy enough to repair. Changing it from shape [2,3,3,4] to [6,3,4] requires a memory copy, but I believe that is unavoidable?
SZ=3
x2 = x.unfold(1,SZ,2).transpose(2,3).reshape(-1,SZ,4)
This works perfectly when x is of shape [2,7,4]. But with only 6 rows, it throws away the final row.
Is there a version of unfold() that can be told to use all data, ideally taking a pad character?
Or do I need to pad x before calling unfold()? What is the best way to do that? I'm wondering if "pad" is the wrong word, as I'm only finding functions that want to put padding characters at both ends, with convolutions in mind.
Aside: Looking at the source of unfold, it seems the strange transpose is there deliberately and explicitly?! For that reason, and the undesired chop behaviour, it made me think the correct answer to my question might be to write a new low-level function. But that is too much effort for me, at least for today... (I think a second function for the backwards pass would also need to be written.)
The operation performed here is similar to how a 1D convolution behaves, with kernel_size=SZ and stride=2. As you noticed, if you don't provide sufficient padding (you're correct on the wording), the last elements won't be used.
A general approach (for any SZ and any input length x.size(1), keeping the stride of 2 from your example) is to figure out whether padding is necessary, and if so how much is needed.
The number of chunks is given by out = floor((x.size(1) - SZ)/2 + 1).
The number of unused trailing elements is x.size(1) - ((out - 1)*2 + SZ).
If the number of unused elements is non-zero, you need to add a padding of (out*2 + SZ) - x.size(1).
This example won't need padding:
>>> x = torch.stack((torch.tensor([
[ 1., 7., 13., 19.],
[ 2., 8., 14., 20.],
[ 3., 9., 15., 21.],
[ 4., 10., 16., 22.],
[ 5., 11., 17., 23.]]),)*2)
>>> x.shape
torch.Size([2, 5, 4])
>>> out = floor((x.size(1) - SZ)/2 + 1)
2
>>> unused = x.size(1) - ((out - 1)*2 + SZ)
0
While this one will:
>>> x = torch.stack((torch.tensor([
[ 1., 7., 13., 19.],
[ 2., 8., 14., 20.],
[ 3., 9., 15., 21.],
[ 4., 10., 16., 22.],
[ 5., 11., 17., 23.],
[ 6., 12., 18., 24.]]),)*2)
>>> x.shape
torch.Size([2, 6, 4])
>>> out = floor((x.size(1) - SZ)/2 + 1)
2
>>> unused = x.size(1) - ((out - 1)*2 + SZ)
1
>>> p = out*2 + SZ - x.size(1)
1
Now, to actually add the padding you could just use torch.cat with a block of zeros, although the built-in nn.functional.pad would work as well:
>>> torch.cat((x, torch.zeros(x.size(0), p, x.size(2))), dim=1)
tensor([[[ 1., 7., 13., 19.],
[ 2., 8., 14., 20.],
[ 3., 9., 15., 21.],
[ 4., 10., 16., 22.],
[ 5., 11., 17., 23.],
[ 6., 12., 18., 24.],
[ 0., 0., 0., 0.]],
[[ 1., 7., 13., 19.],
[ 2., 8., 14., 20.],
[ 3., 9., 15., 21.],
[ 4., 10., 16., 22.],
[ 5., 11., 17., 23.],
[ 6., 12., 18., 24.],
[ 0., 0., 0., 0.]]])
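Putting it all together, here is a minimal end-to-end sketch (assuming zero padding is acceptable for the trailing chunk) that computes the padding, applies it only when needed, and then unfolds:
import torch
from math import floor

eg6 = torch.arange(1., 25.).view(4, 6).t().contiguous()  # the 6x4 example from the question
x = torch.cat((eg6, -eg6)).view(2, 6, 4)

SZ, stride = 3, 2
out = floor((x.size(1) - SZ) / stride + 1)
unused = x.size(1) - ((out - 1) * stride + SZ)
if unused > 0:
    p = out * stride + SZ - x.size(1)  # rows of zero padding needed
    x = torch.cat((x, torch.zeros(x.size(0), p, x.size(2))), dim=1)

chunks = x.unfold(1, SZ, stride).transpose(2, 3).reshape(-1, SZ, x.size(2))
# chunks.shape == torch.Size([6, 3, 4]): rows 1-3, 3-5, 5-6+pad for each batch element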

Mismatch between Scipy stat (KS-test) distribution and histogram plot of the data set

I have a dataset like this
import numpy as np

y = np.array([ 25., 20., 10., 31., 30., 66., 13., 5., 9., 2., 4.,
9., 6., 26., 72., 7., 5., 18., 8., 12., 4., 7.,
114., 5., 6., 17., 39., 4., 5., 42., 63., 3., 6.,
16., 17., 4., 27., 18., 3., 7., 48., 24., 72., 21.,
12., 13., 106., 120., 5., 34., 52., 22., 2., 8., 9.,
5., 35., 4., 4., 1., 56., 1., 17., 34., 3., 5.,
17., 17., 10., 48., 9., 195., 20., 60., 5., 77., 114.,
59., 1., 1., 1., 67., 9., 4., 1., 13., 6., 46.,
40., 8., 6., 1., 2., 1., 1., 1., 7., 6., 53.,
6., 3., 4., 2., 1., 1., 5., 1., 5., 1., 7.,
1., 1.])
The corresponding histogram for this data is produced by:
import matplotlib.pyplot as plt

number_of_bins = len(y)
bin_cutoffs = np.linspace(np.percentile(y, 0), np.percentile(y, 99), number_of_bins)
h = plt.hist(y, bins=bin_cutoffs, color='red')
I test the dataset to find the best-fitting distribution and its parameters via the Kolmogorov-Smirnov test in scipy.stats, with the following code (taken from How to find probability distribution and parameters for real data? (Python 3)):
import scipy.stats as st

def get_best_distribution(data):
    dist_names = ["norm", "exponweib", "weibull_max", "weibull_min", "expon", "pareto", "genextreme", "gamma", "beta"]
    dist_results = []
    params = {}
    for dist_name in dist_names:
        dist = getattr(st, dist_name)
        param = dist.fit(data)
        params[dist_name] = param
        # Applying the Kolmogorov-Smirnov test
        D, p = st.kstest(data, dist_name, args=param)
        print("p value for " + dist_name + " = " + str(p))
        dist_results.append((dist_name, p))
    # select the best fitted distribution
    best_dist, best_p = max(dist_results, key=lambda item: item[1])
    # store the name of the best fit and its p value
    print("Best fitting distribution: " + str(best_dist))
    print("Best p value: " + str(best_p))
    print("Parameters for the best fit: " + str(params[best_dist]))
    return best_dist, best_p, params[best_dist]
The result shows that it's a genextreme distribution:
('genextreme',
0.1823402997669471,
(-1.119997717132149, 5.036499415233003, 6.2122664378291175))
The fitted curve using these parameters follows (plot omitted); a minimal way to reproduce the overlay, reusing y, bin_cutoffs and the imports from above and normalizing the histogram so its scale matches the pdf, would be:
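best_dist, best_p, best_params = get_best_distribution(y)

xs = np.linspace(y.min(), np.percentile(y, 99), 200)
plt.hist(y, bins=bin_cutoffs, density=True, color='red')    # density=True matches the pdf scale
plt.plot(xs, getattr(st, best_dist).pdf(xs, *best_params))  # unpack (shape, loc, scale)
plt.show()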
From my understanding, the histogram suggests an exponential distribution, but the KS test picks another. Can anyone explain why this is happening, or whether something is wrong?

Does conv_transpose2d do a convolution or a cross-correlation operator?

The nn documentation mentions that it does a cross-correlation; however, my results indicate it does a convolution.
import torch
import torch.nn.functional as F
im = torch.Tensor([[0,1],[2,3]]).unsqueeze(0).unsqueeze(0)
kernel = torch.Tensor([[0,1,2],[3,4,5], [6,7,8]]).unsqueeze(0).unsqueeze(0)
op = F.conv_transpose2d(im, kernel, stride=2)
print(op)
It outputs :
tensor([[[[ 0., 0., 0., 1., 2.],
[ 0., 0., 3., 4., 5.],
[ 0., 2., 10., 10., 14.],
[ 6., 8., 19., 12., 15.],
[12., 14., 34., 21., 24.]]]])
which would be the result of a convolution; I had expected the cross-correlation result to be:
tensor([[[[ 0., 0., 8., 7., 6.],
[ 0., 0., 5., 4., 3.],
[16., 14., 38., 22., 18.],
[10., 8., 21., 12., 9.],
[ 4., 2., 6., 3., 0.]]]])
Did I misunderstand something?
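One way to probe this numerically (a sketch, relying on the standard identity rather than anything from the docs): a stride-2 transposed convolution should equal an ordinary conv2d, i.e. a cross-correlation, applied to the zero-interleaved and fully padded input with the spatially flipped kernel.
import torch
import torch.nn.functional as F

im = torch.Tensor([[0, 1], [2, 3]]).unsqueeze(0).unsqueeze(0)
kernel = torch.Tensor([[0, 1, 2], [3, 4, 5], [6, 7, 8]]).unsqueeze(0).unsqueeze(0)

up = torch.zeros(1, 1, 3, 3)       # zero-interleave the input by the stride
up[..., ::2, ::2] = im
padded = F.pad(up, (2, 2, 2, 2))   # pad by kernel_size - 1 on every side
ref = F.conv2d(padded, torch.flip(kernel, dims=[2, 3]))

assert torch.equal(ref, F.conv_transpose2d(im, kernel, stride=2))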

Need pytorch help doing 2D convolutions of N images with N kernels all at once

I have a torch tensor which is a stack of images. Let's say for kicks it is
import torch as th

im = th.arange(4*5*6, dtype=th.float32).view(4, 5, 6)
which is a 4x5x6 tensor, meaning four 5x6 images stacked vertically.
I want to convolve each layer with its own 2-D kernel so that
I_{out,j} = k_j*I_{in,j}, j=(1...4)
I can obviously do this with a for loop, but I'd like to take advantage of GPU acceleration and do all the convolutions at the same time. No matter what I try, I've only been able to use torch's conv2d or conv3d to produce a single output layer that is the sum of all the 2d convolutions. Or I can make 4 layers where each is the same sum of all the 2d convolutions. Here's a concrete example. Let's use im as defined above. Say that the kernel is defined by
k = th.zeros((4, 3, 3), dtype=th.float32)
n = -1
for i in range(2):
    for j in range(2):
        n += 1
        k[n, i, j] = 1
        k[n, 2, 2] = 1
print(k)
tensor([[[1., 0., 0.],
[0., 0., 0.],
[0., 0., 1.]],
[[0., 1., 0.],
[0., 0., 0.],
[0., 0., 1.]],
[[0., 0., 0.],
[1., 0., 0.],
[0., 0., 1.]],
[[0., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]]])
and from above, im is
tensor([[[ 0., 1., 2., 3., 4., 5.],
[ 6., 7., 8., 9., 10., 11.],
[ 12., 13., 14., 15., 16., 17.],
[ 18., 19., 20., 21., 22., 23.],
[ 24., 25., 26., 27., 28., 29.]],
[[ 30., 31., 32., 33., 34., 35.],
[ 36., 37., 38., 39., 40., 41.],
[ 42., 43., 44., 45., 46., 47.],
[ 48., 49., 50., 51., 52., 53.],
[ 54., 55., 56., 57., 58., 59.]],
[[ 60., 61., 62., 63., 64., 65.],
[ 66., 67., 68., 69., 70., 71.],
[ 72., 73., 74., 75., 76., 77.],
[ 78., 79., 80., 81., 82., 83.],
[ 84., 85., 86., 87., 88., 89.]],
[[ 90., 91., 92., 93., 94., 95.],
[ 96., 97., 98., 99., 100., 101.],
[102., 103., 104., 105., 106., 107.],
[108., 109., 110., 111., 112., 113.],
[114., 115., 116., 117., 118., 119.]]])
The right answer is easy if I do the for loop:
import torch.nn.functional as F

for i in range(4):
    print(F.conv2d(im[i].expand(1, 1, 5, 6), k[i].expand(1, 1, 3, 3)))
tensor([[[[14., 16., 18., 20.],
[26., 28., 30., 32.],
[38., 40., 42., 44.]]]])
tensor([[[[ 75., 77., 79., 81.],
[ 87., 89., 91., 93.],
[ 99., 101., 103., 105.]]]])
tensor([[[[140., 142., 144., 146.],
[152., 154., 156., 158.],
[164., 166., 168., 170.]]]])
tensor([[[[201., 203., 205., 207.],
[213., 215., 217., 219.],
[225., 227., 229., 231.]]]])
As I noted earlier, the only thing I've been able to get is one sum of those four output images (or four copies of the same summed layer):
F.conv2d(im.expand(1,4,5,6),k.expand(1,4,3,3))
tensor([[[[430., 438., 446., 454.],
[478., 486., 494., 502.],
[526., 534., 542., 550.]]]])
I'm certain that what I want to do is possible, I just haven't been able to wrap my head around it yet. Does anyone have a solution to offer?
This is pretty straightforward if you use a grouped convolution.
From the nn.Conv2d documentation:
"At groups=in_channels, each input channel is convolved with its own set of filters."
Which is exactly what we want.
The shape of the weights argument to F.conv2d needs some care, since it changes depending on the value of groups. The first dimension of the weights should just be out_channels, which is 4 in this case. The second dimension, according to the F.conv2d docs, should be in_channels / groups, which is 1. So we can perform the operation using
F.conv2d(im.unsqueeze(0), k.unsqueeze(1), groups=4).squeeze(0)
which produces a tensor of shape [4,3,4] with values
tensor([[[ 14., 16., 18., 20.],
[ 26., 28., 30., 32.],
[ 38., 40., 42., 44.]],
[[ 75., 77., 79., 81.],
[ 87., 89., 91., 93.],
[ 99., 101., 103., 105.]],
[[140., 142., 144., 146.],
[152., 154., 156., 158.],
[164., 166., 168., 170.]],
[[201., 203., 205., 207.],
[213., 215., 217., 219.],
[225., 227., 229., 231.]]])
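As a quick sanity check (a sketch rebuilding the tensors from the question), the grouped call reproduces the loop exactly:
import torch as th
import torch.nn.functional as F

im = th.arange(4*5*6, dtype=th.float32).view(4, 5, 6)
k = th.zeros((4, 3, 3), dtype=th.float32)
n = -1
for i in range(2):
    for j in range(2):
        n += 1
        k[n, i, j] = 1
        k[n, 2, 2] = 1

grouped = F.conv2d(im.unsqueeze(0), k.unsqueeze(1), groups=4).squeeze(0)
looped = th.stack([F.conv2d(im[i].expand(1, 1, 5, 6),
                            k[i].expand(1, 1, 3, 3)).squeeze() for i in range(4)])
assert th.equal(grouped, looped)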
