Count unique elements in a PyTorch tensor

Suppose I have the following tensor: y = torch.randint(0, 3, (10,)). How would you go about counting the 0's, 1's, and 2's in there?
The only way I can think of is collections.Counter(y), but I was wondering if there is a more "PyTorch" way of doing this. A use case, for example, would be building the confusion matrix for predictions.

You can use torch.unique with the return_counts option:
>>> x = torch.randint(0, 3, (10,))
>>> x
tensor([1, 1, 0, 2, 1, 0, 1, 1, 2, 1])
>>> x.unique(return_counts=True)
(tensor([0, 1, 2]), tensor([2, 6, 2]))
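If the values are known to be non-negative integers, torch.bincount is an alternative that returns only the counts, indexed by value. As a sketch, the same idea extends to the confusion-matrix use case mentioned in the question by encoding each (target, prediction) pair as a single integer; the class count n below is an assumption:

import torch

y = torch.randint(0, 3, (10,))
counts = torch.bincount(y, minlength=3)    # counts[k] = number of k's in y

n = 3                                      # number of classes (assumed)
target = torch.randint(0, n, (100,))
pred = torch.randint(0, n, (100,))
# conf[t][p] = how often class t was predicted as p
conf = torch.bincount(n * target + pred, minlength=n * n).reshape(n, n)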

Related

Subtract the elements of every possible pair of a torch Tensor efficiently

I have a huge torch Tensor and I'm looking for an efficient approach to subtract the elements of every pair of that Tensor.
Of course I could use two nested for loops, but that wouldn't be efficient.
For example giving
[1, 2, 3, 4]
The output I want is
[1-2, 1-3, 1-4, 2-3, 2-4, 3-4]
You can do this easily:
>>> x = torch.tensor([1, 2, 3, 4])
>>> x[:, None] - x[None, :]
tensor([[ 0, -1, -2, -3],
        [ 1,  0, -1, -2],
        [ 2,  1,  0, -1],
        [ 3,  2,  1,  0]])
see more details here.
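Note the matrix above contains every difference twice (with opposite signs) plus the zero diagonal. If you only want the flat list of pairs the question asks for, a minimal sketch using torch.triu_indices:

import torch

x = torch.tensor([1, 2, 3, 4])
diff = x[:, None] - x[None, :]                       # full pairwise matrix
i, j = torch.triu_indices(len(x), len(x), offset=1)  # pairs above the diagonal
pairs = diff[i, j]                                   # tensor([-1, -2, -3, -1, -2, -1])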

Is there any way to create a tensor with a specific pattern in Pytorch?

I'm working with a linear transformation of the form Y=Q(X+A), where X is the input tensor and Y is the output, and Q and A are two tensors to be learned. Q is an arbitrary tensor, therefore I can use nn.Linear. But A is a (differentiable) tensor that has some specific pattern, as a short example,
A = [[a0, a1, a2, a2, a2],
     [a1, a0, a1, a2, a2],
     [a2, a1, a0, a1, a2],
     [a2, a2, a1, a0, a1],
     [a2, a2, a2, a1, a0]]
So I cannot define such a pattern in nn.Linear. Is there any way to define such a tensor in Pytorch?
This looks like a Toeplitz matrix. A possible implementation in PyTorch is:
def toeplitz(c, r):
    # Concatenate the first row with the reversed first column (minus the
    # shared corner element) so every diagonal value appears exactly once.
    vals = torch.cat((r, c[1:].flip(0)))
    shape = len(c), len(r)
    # All (i, j) index pairs of the output matrix.
    i, j = torch.ones(*shape).nonzero().T
    # vals[j - i] picks the value of diagonal j - i (negative indices wrap around).
    return vals[j - i].reshape(*shape)
In your case with a0 as 0, a1 as 1 and a2 as 2:
>>> toeplitz(torch.tensor([0,1,2,2,2]), torch.tensor([0,1,2,2,2]))
tensor([[0, 1, 2, 2, 2],
        [1, 0, 1, 2, 2],
        [2, 1, 0, 1, 2],
        [2, 2, 1, 0, 1],
        [2, 2, 2, 1, 0]])
For a more detailed explanation refer to my other answer here.
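Since A has to be learned, it is worth noting that the same indexing trick works directly on an nn.Parameter, because advanced indexing is differentiable. A minimal sketch, assuming the banded rule A[i][j] = a[min(|i - j|, 2)] inferred from the pattern above (in a real model you would rebuild A in each forward pass, since a changes):

import torch
import torch.nn as nn

a = nn.Parameter(torch.randn(3))   # learnable a0, a1, a2
idx = torch.arange(5)
index = (idx[:, None] - idx[None, :]).abs().clamp(max=2)
A = a[index]                       # A[i, j] = a[min(|i - j|, 2)]; the gather is differentiable
A.sum().backward()                 # demo: a.grad counts how often each scalar appears in A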

Bitwise operations in Pytorch

Could someone help me perform bitwise AND operations on two tensors in PyTorch 1.4?
Apparently I could only find NOT and XOR operations in the official documentation.
I don't see them in the docs, but it looks like &, |, __and__, __or__, __xor__, etc. are bitwise:
>>> torch.tensor([1, 2, 3, 4]).__xor__(torch.tensor([1, 1, 1, 1]))
tensor([0, 3, 2, 5])
>>> torch.tensor([1, 2, 3, 4]) | torch.tensor([1, 1, 1, 1])
tensor([1, 3, 3, 5])
>>> torch.tensor([1, 2, 3, 4]) & torch.tensor([1, 1, 1, 1])
tensor([1, 0, 1, 0])
>>> torch.tensor([1, 2, 3, 4]).__and__(torch.tensor([1, 1, 1, 1]))
tensor([1, 0, 1, 0])
See https://github.com/pytorch/pytorch/pull/1556
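For what it's worth, later PyTorch releases also expose these as named functions, torch.bitwise_and, torch.bitwise_or and torch.bitwise_xor, which read more explicitly than the dunder methods:

import torch

a = torch.tensor([1, 2, 3, 4])
b = torch.tensor([1, 1, 1, 1])
torch.bitwise_and(a, b)   # tensor([1, 0, 1, 0])
torch.bitwise_or(a, b)    # tensor([1, 3, 3, 5])
torch.bitwise_xor(a, b)   # tensor([0, 3, 2, 5])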
Note that the following applies to (Lua) Torch rather than PyTorch. There is no bitwise and/or operation for tensors in Torch; there are element-wise operations, but not bitwise ones.
However, if you can represent each bit as a separate tensor dimension, you can use element-wise operations.
For example:
a = torch.Tensor{0,1,1,0}
b = torch.Tensor{0,1,0,1}
torch.cmul(a,b):eq(1)
0
1
0
0
[torch.ByteTensor of size 4]
torch.add(a,b):ge(1)
0
1
1
1
[torch.ByteTensor of size 4]
Hope this will help you.

Explanation for slicing in Pytorch

Why is the output the same every time?
a = torch.tensor([0, 1, 2, 3, 4])
a[-2:] = torch.tensor([[[5, 6]]])
a
tensor([0, 1, 2, 5, 6])
a = torch.tensor([0, 1, 2, 3, 4])
a[-2:] = torch.tensor([[5, 6]])
a
tensor([0, 1, 2, 5, 6])
a = torch.tensor([0, 1, 2, 3, 4])
a[-2:] = torch.tensor([5, 6])
a
tensor([0, 1, 2, 5, 6])
PyTorch follows NumPy here, which allows assignment to slices as long as the shapes are compatible, meaning that the two sides have the same shape or the right-hand side is broadcastable to the shape of the slice. Starting from the trailing dimensions, two arrays are broadcastable if, for each dimension, the sizes are equal, one of them is 1, or one of them does not exist. So in this case
a = torch.tensor([0, 1, 2, 3, 4])
b = torch.tensor([[[5, 6]]])
print(a[-2:].shape, b.shape)
>> torch.Size([2]) torch.Size([1, 1, 2])
PyTorch will perform the following comparisons:
a[-2:].shape[-1] and b.shape[-1] are equal, so the last dimension is compatible
a[-2:].shape[-2] does not exist, but b.shape[-2] is 1, so they are compatible
a[-2:].shape[-3] does not exist, but b.shape[-3] is 1, so they are compatible
All dimensions are compatible, so b can be broadcast to the shape of a[-2:]
Finally, PyTorch will squeeze b down to tensor([5, 6]) before performing the assignment, producing the result:
a[-2:] = b
print(a)
>> tensor([0, 1, 2, 5, 6])
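Conversely, a right-hand side that cannot be broadcast to the slice's shape is rejected. A small sketch of the failing case:

import torch

a = torch.tensor([0, 1, 2, 3, 4])
# Shape (3,) cannot be broadcast to the slice's shape (2,),
# so this assignment raises a RuntimeError:
a[-2:] = torch.tensor([5, 6, 7])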

Not able to use Stratified-K-Fold on multi label classifier

The following code is used to do K-Fold validation, but I am unable to train the model as it throws the error
ValueError: Error when checking target: expected dense_14 to have shape (7,) but got array with shape (1,)
My target variable has 7 classes. I am using LabelEncoder to encode the classes into numbers.
After seeing this error, I switched to MultiLabelBinarizer to encode the classes, but then I get the following error:
ValueError: Supported target types are: ('binary', 'multiclass'). Got 'multilabel-indicator' instead.
The following is the code for KFold validation
skf = StratifiedKFold(n_splits=10, shuffle=True)
scores = np.zeros(10)
idx = 0
for index, (train_indices, val_indices) in enumerate(skf.split(X, y)):
    print("Training on fold " + str(index+1) + "/10...")
    # Generate batches from indices
    xtrain, xval = X[train_indices], X[val_indices]
    ytrain, yval = y[train_indices], y[val_indices]
    model = None
    model = load_model()  # defined above
    scores[idx] = train_model(model, xtrain, ytrain, xval, yval)
    idx += 1
print(scores)
print(scores.mean())
I don't know what to do. I want to use Stratified K Fold on my model. Please help me.
MultiLabelBinarizer returns a vector which is of the length of your number of classes.
If you look at how StratifiedKFold splits your dataset, you will see that it only accepts a one-dimensional target variable, whereas you are trying to pass a target variable with dimensions [n_samples, n_classes].
A stratified split basically preserves your class distribution, and if you think about it, that does not make a lot of sense for a multi-label classification problem.
If you want to preserve the distribution in terms of the different combinations of classes in your target variable, then the answer here explains two ways in which you can define your own stratified split function.
UPDATE:
The logic is something like this:
Assume you have n classes and your target variable is a combination of these n classes; then you will have 2^n - 1 combinations (not including all 0s). You can now create a new target variable by treating each combination as a new label.
For example, if n=3, you will have 7 unique combinations:
1. [1, 0, 0]
2. [0, 1, 0]
3. [0, 0, 1]
4. [1, 1, 0]
5. [1, 0, 1]
6. [0, 1, 1]
7. [1, 1, 1]
Map all your labels to this new target variable. You can now look at your problem as simple multi-class classification instead of multi-label classification.
Now you can directly use StratifiedKFold with y_new as your target. Once the splits are done, you can map your labels back.
Code sample:
import numpy as np
np.random.seed(1)
y = np.random.randint(0, 2, (10, 7))
y = y[np.where(y.sum(axis=1) != 0)[0]]
OUTPUT:
array([[1, 1, 0, 0, 1, 1, 1],
       [1, 1, 0, 0, 1, 0, 1],
       [1, 0, 0, 1, 0, 0, 0],
       [1, 0, 0, 1, 0, 0, 0],
       [1, 0, 0, 0, 1, 1, 1],
       [1, 1, 0, 0, 0, 1, 1],
       [1, 1, 1, 1, 0, 1, 1],
       [0, 0, 1, 0, 0, 1, 1],
       [1, 0, 1, 0, 0, 1, 1],
       [0, 1, 1, 1, 1, 0, 0]])
Label encode your class vectors:
from sklearn.preprocessing import LabelEncoder
def get_new_labels(y):
    y_new = LabelEncoder().fit_transform([''.join(str(l)) for l in y])
    return y_new
y_new = get_new_labels(y)
OUTPUT:
array([7, 6, 3, 3, 2, 5, 8, 0, 4, 1])
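Putting it together, a minimal end-to-end sketch. The synthetic X and y here are placeholders, and the data is made larger because StratifiedKFold requires every class (here, every label combination) to occur at least n_splits times, which the tiny 10-row example above does not satisfy:

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import LabelEncoder

np.random.seed(1)
y = np.random.randint(0, 2, (200, 3))             # multi-label targets, n = 3
y = y[y.sum(axis=1) != 0]                         # drop all-zero rows
X = np.random.randn(len(y), 5)                    # dummy features
y_new = LabelEncoder().fit_transform([str(l) for l in y])

skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=1)
for train_idx, val_idx in skf.split(X, y_new):    # stratify on combination labels
    ytrain, yval = y[train_idx], y[val_idx]       # map back to original multi-labels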
