Bitwise operations in PyTorch

Could someone help me perform a bitwise AND operation on two tensors in PyTorch 1.4?
Apparently I could only find NOT and XOR operations in the official documentation.

I don't see them in the docs, but it looks like &, |, __and__, __or__, __xor__, etc. are bitwise:
>>> torch.tensor([1, 2, 3, 4]).__xor__(torch.tensor([1, 1, 1, 1]))
tensor([0, 3, 2, 5])
>>> torch.tensor([1, 2, 3, 4]) | torch.tensor([1, 1, 1, 1])
tensor([1, 3, 3, 5])
>>> torch.tensor([1, 2, 3, 4]) & torch.tensor([1, 1, 1, 1])
tensor([1, 0, 1, 0])
>>> torch.tensor([1, 2, 3, 4]).__and__(torch.tensor([1, 1, 1, 1]))
tensor([1, 0, 1, 0])
See https://github.com/pytorch/pytorch/pull/1556
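If you can move past 1.4, later releases also expose these as named functions (torch.bitwise_and, torch.bitwise_or, torch.bitwise_xor), which is presumably why the 1.4 docs list only NOT and XOR:
>>> torch.bitwise_and(torch.tensor([1, 2, 3, 4]), torch.tensor([1, 1, 1, 1]))
tensor([1, 0, 1, 0])
>>> torch.bitwise_or(torch.tensor([1, 2, 3, 4]), torch.tensor([1, 1, 1, 1]))
tensor([1, 3, 3, 5])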

Check this. There is no bitwise and/or operation for tensors in Torch. There are element-wise operations implemented in Torch, but not bitwise ones.
However, if you can represent each bit as a separate tensor dimension, you can use element-wise operations.
For example:
a = torch.Tensor{0,1,1,0}
b = torch.Tensor{0,1,0,1}
torch.cmul(a,b):eq(1)
0
1
0
0
[torch.ByteTensor of size 4]
torch.add(a,b):ge(1)
0
1
1
1
[torch.ByteTensor of size 4]
Hope this helps.
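For completeness, here is the same element-wise trick in modern PyTorch, a minimal sketch assuming the inputs are 0/1 tensors:
import torch

a = torch.tensor([0, 1, 1, 0])
b = torch.tensor([0, 1, 0, 1])

print((a * b).eq(1))  # AND: tensor([False,  True, False, False])
print((a + b).ge(1))  # OR:  tensor([False,  True,  True,  True])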

Related

Subtract the elements of every possible pair of a torch Tensor efficiently

I have a huge torch Tensor and I'm looking for an efficient approach to subtract the elements of every pair of that Tensor.
Of course I could use two nested for loops, but that wouldn't be efficient.
For example, given
[1, 2, 3, 4]
The output I want is
[1-2, 1-3, 1-4, 2-3, 2-4, 3-4]
You can do this easily:
>>> x = torch.tensor([1, 2, 3, 4])
>>> x[:, None] - x[None, :]
tensor([[ 0, -1, -2, -3],
        [ 1,  0, -1, -2],
        [ 2,  1,  0, -1],
        [ 3,  2,  1,  0]])
See the broadcasting semantics notes in the PyTorch documentation for more details.
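If you only want each unordered pair once, as in the question, one option (a sketch using torch.triu_indices) is to keep just the strictly upper-triangular entries of that matrix:
>>> i, j = torch.triu_indices(4, 4, offset=1)  # index pairs above the diagonal
>>> (x[:, None] - x[None, :])[i, j]
tensor([-1, -2, -3, -1, -2, -1])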

Alternating concatenation of tensors

I have two tensors of shape [2, 1, 9] and [2, 1, 3]. I'd like to interleave them along the third dimension, inserting one element of the second tensor after every three elements of the first (i.e. at every fourth position).
For example:
a = [[[1,2,3,4,5,6,7,8,9]],[[11,12,13,14,15,16,17,18,19]]]
b = [[[10, 20, 30]], [[1, 2, 3]]]
result = [[[1,2,3,10,4,5,6,20,7,8,9,30]],[[11,12,13,1,14,15,16,2,17,18,19,3]]]
How can I do this in PyTorch?
This would do the trick:
torch.cat([a.reshape(2, 1, 3, 3), b.reshape(2, 1, 3, 1)], dim=-1).reshape(2, 1, -1)
There's probably a smarter way to do this, but hey, it works.
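To see why it works, here is the same computation spelled out on the tensors from the question: grouping a into chunks of three and b into chunks of one lines each b element up after its three a elements:
import torch

a = torch.tensor([[[1, 2, 3, 4, 5, 6, 7, 8, 9]],
                  [[11, 12, 13, 14, 15, 16, 17, 18, 19]]])  # shape [2, 1, 9]
b = torch.tensor([[[10, 20, 30]],
                  [[1, 2, 3]]])                             # shape [2, 1, 3]

chunks_a = a.reshape(2, 1, 3, 3)  # three groups of three per row
chunks_b = b.reshape(2, 1, 3, 1)  # one element per group
result = torch.cat([chunks_a, chunks_b], dim=-1).reshape(2, 1, -1)
# tensor([[[ 1,  2,  3, 10,  4,  5,  6, 20,  7,  8,  9, 30]],
#         [[11, 12, 13,  1, 14, 15, 16,  2, 17, 18, 19,  3]]])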

Count unique elements in a PyTorch tensor

Suppose I have the following tensor: y = torch.randint(0, 3, (10,)). How would you go about counting the 0s, 1s and 2s in there?
The only way I can think of is using collections.Counter(y), but I was wondering if there is a more "pytorch" way of doing this. A use case, for example, would be building the confusion matrix for predictions.
You can use torch.unique with the return_counts option:
>>> x = torch.randint(0, 3, (10,))
>>> x
tensor([1, 1, 0, 2, 1, 0, 1, 1, 2, 1])
>>> x.unique(return_counts=True)
(tensor([0, 1, 2]), tensor([2, 6, 2]))
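If the values are known to be non-negative integers in a fixed range, as class labels are, torch.bincount is another option; its minlength argument guarantees a slot for every class even if some class never occurs:
>>> x = torch.tensor([1, 1, 0, 2, 1, 0, 1, 1, 2, 1])
>>> torch.bincount(x, minlength=3)
tensor([2, 6, 2])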

How to initialize columns in a hybrid sparse tensor

How do I initialize a hybrid tensor torch.sparse_coo_tensor in PyTorch (one dimension is sparse and the other is not) which has the following dense representation?
array([[1, 0, 5, 0],
       [2, 0, 6, 0],
       [3, 0, 7, 0],
       [4, 0, 8, 0]])
What should I put into the indices argument?
How to initialize
Something like this:
import torch
indices = torch.tensor([[0, 0, 1, 1, 2, 2, 3, 3], [0, 2, 0, 2, 0, 2, 0, 2]])
tensor = torch.sparse_coo_tensor(
    indices, torch.tensor([1, 5, 2, 6, 3, 7, 4, 8]), size=(4, 4)
)
Given above:
indices - the first row specifies the row coordinate and the second row the column coordinate of each non-zero value. Read column-wise, they form pairs, in this case: (0, 0), (0, 2), (1, 0), (1, 2)... and so on
values - the values located at those pairs, so 1 goes at coordinate (0, 0), 5 at (0, 2), 2 at (1, 0), and so it goes.
size - the total size of the matrix; optional, as in this case it can be inferred from the input
There are 8 pairs and 8 values; there are also other ways to specify it, but the idea holds.
And a quick check:
print(tensor)
print(tensor.to_dense())
Gives us:
tensor(indices=tensor([[0, 0, 1, 1, 2, 2, 3, 3],
                       [0, 2, 0, 2, 0, 2, 0, 2]]),
       values=tensor([1, 5, 2, 6, 3, 7, 4, 8]),
       size=(4, 4), nnz=8, layout=torch.sparse_coo)
tensor([[1, 0, 5, 0],
        [2, 0, 6, 0],
        [3, 0, 7, 0],
        [4, 0, 8, 0]])
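As an aside, the tensor above is fully sparse rather than hybrid. If you really want a hybrid layout (one sparse dimension, one dense), note that COO sparse dimensions must come first; one way, sketched below, is to store the transpose of the target matrix, whose non-zero rows are 0 and 2, with one dense row of values per sparse index:
import torch

indices = torch.tensor([[0, 2]])      # the two non-zero rows of the transpose
values = torch.tensor([[1, 2, 3, 4],  # one dense row per sparse index
                       [5, 6, 7, 8]])
hybrid = torch.sparse_coo_tensor(indices, values, size=(4, 4))
print(hybrid.to_dense().T)  # transposing back recovers the matrix from the question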
Whether to initialize
If your actual data is only 50% sparse, you shouldn't use a COO tensor. It will save some memory, but operations on it will be much slower, so keep that in mind.

Explanation of slicing in PyTorch

Why is the output the same every time?
a = torch.tensor([0, 1, 2, 3, 4])
a[-2:] = torch.tensor([[[5, 6]]])
a
tensor([0, 1, 2, 5, 6])
a = torch.tensor([0, 1, 2, 3, 4])
a[-2:] = torch.tensor([[5, 6]])
a
tensor([0, 1, 2, 5, 6])
a = torch.tensor([0, 1, 2, 3, 4])
a[-2:] = torch.tensor([5, 6])
a
tensor([0, 1, 2, 5, 6])
PyTorch is following NumPy here, which allows assignment to slices as long as the shapes are compatible, meaning that the two sides have the same shape or the right-hand side is broadcastable to the shape of the slice. Starting with the trailing dimensions, two arrays are broadcastable if they only differ in dimensions where one of them has size 1. So in this case
a = torch.tensor([0, 1, 2, 3, 4])
b = torch.tensor([[[5, 6]]])
print(a[-2:].shape, b.shape)
>> torch.Size([2]) torch.Size([1, 1, 2])
PyTorch will perform the following comparisons:
a[-2:].shape[-1] and b.shape[-1] are equal, so the last dimension is compatible
a[-2:].shape[-2] does not exist, but b.shape[-2] is 1, so they are compatible
a[-2:].shape[-3] does not exist, but b.shape[-3] is 1, so they are compatible
All dimensions are compatible, so b can be broadcast to a
Finally, PyTorch will convert b to tensor([5, 6]) before performing the assignment, thus producing the result:
a[-2:] = b
print(a)
>> tensor([0, 1, 2, 5, 6])
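For contrast, a right-hand side whose trailing dimension neither matches the slice nor has size 1 cannot be broadcast, so the assignment fails:
a = torch.tensor([0, 1, 2, 3, 4])
a[-2:] = torch.tensor([5, 6, 7])  # raises a RuntimeError: a size-3 tensor cannot be broadcast to the slice's shape [2]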
