I want to sample a tensor of probability distributions with shape (N, C, H, W), where dimension 1 (size C) contains normalized probability distributions over C possibilities. Is there a PyTorch function to efficiently sample all the distributions in the tensor in parallel? I only need to sample each distribution once, so the result could be either a one-hot tensor with the same shape or a tensor of indices with shape (N, 1, H, W).
I didn't find a single function that does this, but I was able to sample the tensor in several steps: compute the (reverse) cumulative probabilities, sample each position independently, and then pick the first position that sampled a 1 along the distribution dimension:
# entry c holds the sum of probabilities from class c to the end of dim 1
reverse_cumulative = torch.flip(torch.cumsum(torch.flip(probabilities, [1]), dim=1), [1])
# conditional probability of class c given that no earlier class was chosen
cumulative = probabilities / reverse_cumulative
# independent Bernoulli draw at every (n, c, h, w) position
sampled = (torch.rand(cumulative.shape, device=device()) <= cumulative)
# one_hot presumably holds the class index along dim 1; positions that did not
# fire are pushed past every valid index so they lose the min below
idxs = sampled * one_hot
idxs[~sampled] = self.tile_count
# the first firing index along the class dimension is the sample, shape (N, H, W)
sampled_idxs = idxs.min(dim=1).indices
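For reference, here is a minimal self-contained version of the same idea. The definitions of one_hot and tile_count are assumptions about what the snippet above uses (an index ramp along the class dimension and a value larger than any valid class index):

import torch

N, C, H, W = 2, 4, 3, 3
probabilities = torch.softmax(torch.randn(N, C, H, W), dim=1)

tile_count = C                                  # assumed: larger than any valid class index
one_hot = torch.arange(C).view(1, C, 1, 1)      # assumed: class index at each slot along dim 1

reverse_cumulative = torch.flip(torch.cumsum(torch.flip(probabilities, [1]), dim=1), [1])
conditional = probabilities / reverse_cumulative        # P(class c | no earlier class chosen)
sampled = torch.rand_like(conditional) <= conditional   # independent Bernoulli per slot
idxs = sampled * one_hot
idxs[~sampled] = tile_count                             # slots that did not fire lose the min
sampled_idxs = idxs.min(dim=1).indices                  # shape (N, H, W), one index per pixel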
I am implementing the paper "Deep multiscale convolutional feature learning for weakly supervised localization of chest pathologies in X-ray images".
According to my understanding, the layer relevance weights belong to the last layer of each dense block.
I tried implementing the weight constraints as shown below:
def weight_constraints(self):
    weights = {'feat1': self.model.features.denseblock2.denselayer12.conv2.weight.data,
               'feat2': self.model.features.denseblock3.denselayer24.conv2.weight.data,
               'feat3': self.model.features.denseblock4.denselayer16.conv2.weight.data}
    sum(weights.values()) == 1
    for i in weights.keys():
        w = weights[i]
        w1 = w.clamp(min=0)
        weights[i] = w1
    return weights
weights = self.weight_constraints()
for i in weights.keys():
    w = weights[i]
    l = logits[i]
    p = torch.matmul(w, l[0])
    sum = sum + p
where logits is a dictionary containing the output of the FC layer from each block, as shown in the diagram.
logits = {'feat1': [tensor([[-0.0630]], ...ackward0>)], 'feat2': [tensor([[-0.0323]], ...ackward0>)], 'feat3': [tensor([[-8.2897e-06...ackward0>)]}
I get the following error:
mat1 and mat2 shapes cannot be multiplied (12288x3 and 1x1)
Is this the right approach?
The paper states:
"The logit response from all the layers have same dimension (equal to the number of category for classification) and now can be combined using class specific convex combination to obtain the probability score for the class pc."
The matmul function you used performs matrix multiplication; it requires mat1.shape[-1] == mat2.shape[-2].
If you assume sum(w) == 1 and torch.all(w > 0), you can compute the convex combination of l as (w * l).sum(-1), i.e. multiply w and l element-wise, broadcasting over the batch dimensions of l; this requires w.shape[-1] == l.shape[-1] (presumably 3).
If you want to stick with matmul, you can add one dimension to w and l and perform the vector product as a matrix multiplication: torch.matmul(w[..., None, :], l[..., :, None]).
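As a quick numerical sanity check (the shapes here are assumed: one weight per feature layer and a batch of one logit vector), both forms give the same convex combination:

import torch

w = torch.tensor([0.2, 0.3, 0.5])             # convex weights: non-negative, sum to 1
l = torch.tensor([[-0.063, -0.032, -0.001]])  # shape (1, 3): one logit per feature layer

combined = (w * l).sum(-1)                                    # element-wise product, then sum
combined_mm = torch.matmul(w[..., None, :], l[..., :, None])  # shape (1, 1, 1)

print(combined, combined_mm.flatten())  # same value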
I have a 3D tensor of size, say, 100x5x2, and the mean of that tensor across axis=1, which has shape 100x2.
100 here is the batch size. Normally, without the batch dimension, dividing a tensor of shape 5x2 by one of shape 2 works perfectly, but with the batched 3D tensor I get an error.
a = torch.rand(5,2)
b = torch.rand(2)
z=a/b
gives me the expected answer.
a = torch.rand(100,5,2)
b = torch.rand(100,2)
z=a/b
gives me the following error:
The size of tensor a (5) must match the size of tensor b (100) at non-singleton dimension 1.
How do I divide these tensors so that my output has shape 100x5x2? Something like bmm, but for division?
Simply do:
z = a / b.unsqueeze(1)
This adds an extra dimension to b, making it of shape (100, 1, 2), which is compatible for broadcasting with a.
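Putting it together with the shapes from the question:

import torch

a = torch.rand(100, 5, 2)
b = torch.rand(100, 2)

z = a / b.unsqueeze(1)   # b becomes (100, 1, 2) and broadcasts against (100, 5, 2)
print(z.shape)           # torch.Size([100, 5, 2])

# b[:, None, :] is an equivalent way to insert the singleton dimension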
I want to build a model that predicts the next character based on the previous characters.
I have split the text into sequences of integers of length 100 (using a Dataset and DataLoader).
Dimensions of my input and target variables are:
inputs dimension: (batch_size, sequence_length), in my case (128, 100)
targets dimension: (batch_size, sequence_length), in my case (128, 100)
After the forward pass, my predictions have dimension (batch_size, sequence_length, vocabulary_size), which in my case is (128, 100, 44).
But when I calculate the loss with nn.CrossEntropyLoss():
import torch
from torch import nn

batch_size = 128
sequence_length = 100
number_of_classes = 44
# creates random tensor of your output shape
output = torch.rand(batch_size,sequence_length, number_of_classes)
# creates tensor with random targets
target = torch.randint(number_of_classes, (batch_size,sequence_length)).long()
# define loss function and calculate loss
criterion = nn.CrossEntropyLoss()
loss = criterion(output, target)
print(loss)
I get an error:
ValueError: Expected target size (128, 44), got torch.Size([128, 100])
Question is: how should I handle the calculation of the loss function for many-to-many LSTM prediction, especially the sequence dimension? According to nn.CrossEntropyLoss, the dimensions must be (N, C, d1, d2, ..., dK), where N is the batch size and C the number of classes. But what is d? Is it related to the sequence length?
As a general comment, let me just say that you have asked many different questions, which makes it difficult for someone to answer. I suggest asking just one question per StackOverflow post, even if that means making several posts. I will answer just the main question that I think you are asking: "why is my code crashing and how to fix it?" and hopefully that will clear up your other questions.
Per your code, the output of your model has dimensions (128, 100, 44) = (N, D, C). Here N is the minibatch size, C is the number of classes, and D is the extra dimensionality of your input (here, the sequence length). The cross entropy loss you are using expects the output to have dimension (N, C, D) and the target to have dimension (N, D). To clear up the documentation that says (N, C, D1, D2, ..., Dk), remember that your input can be an arbitrary tensor of any dimensionality. In your case inputs have length 100, but nothing stops someone from making a model with, say, a 100x100 image as input. (In that case the loss would expect the output to have dimension (N, C, 100, 100).) But in your case your input is one-dimensional, so you have just a single D = 100 for the length of your input.
Now we see the error: outputs should be (N, C, D), but yours is (N, D, C). Your targets have the correct dimensions of (N, D). You have two paths to fix the issue. The first is to change the structure of your network so that its output is (N, C, D); this may or may not be easy or what you want in the context of your model. The second option is to transpose your axes at the time of loss computation using torch.transpose: https://pytorch.org/docs/stable/generated/torch.transpose.html
batch_size = 128
sequence_length = 100
number_of_classes = 44
# creates random tensor of your output shape (N, D, C)
output = torch.rand(batch_size,sequence_length, number_of_classes)
# transposes dimensionality to (N, C, D)
transposed_output = torch.transpose(output, 1, 2)
# creates tensor with random targets
target = torch.randint(number_of_classes, (batch_size,sequence_length)).long()
# define loss function and calculate loss
criterion = nn.CrossEntropyLoss()
loss = criterion(transposed_output, target)
print(loss)
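An equivalent alternative (a common pattern rather than a requirement) is to flatten the batch and sequence dimensions together and compute the loss on 2D logits; with the default mean reduction this gives the same value:

flat_output = output.reshape(-1, number_of_classes)  # (128*100, 44)
flat_target = target.reshape(-1)                     # (128*100,)
loss = criterion(flat_output, flat_target)
print(loss)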
I saw a Sudoku solver CNN that uses sparse categorical cross-entropy as a loss function in the TensorFlow framework. I am wondering if there is a similar function for PyTorch? If not, how could I calculate the loss of a 2D array using PyTorch?
Here is an example of usage of nn.CrossEntropyLoss for image segmentation with a batch of size 1, width 2, height 2 and 3 classes.
Image segmentation is a classification problem at pixel level. Of course you can also use nn.CrossEntropyLoss for basic image classification as well.
The Sudoku problem in the question can be seen as an image segmentation problem where you have 10 classes (the 10 digits), though neural networks are not appropriate for solving combinatorial problems like Sudoku, which already have efficient exact algorithms.
nn.CrossEntropyLoss accepts ground truth labels directly as integers in [0, N_CLASSES) (no need to one-hot encode the labels):
import torch
from torch import nn
import numpy as np
# logits predicted
x = np.array([[
[[1,0,0],[1,0,0]], # predict class 0 for pixel (0,0) and class 0 for pixel (0,1)
[[0,1,0],[0,0,1]], # predict class 1 for pixel (1,0) and class 2 for pixel (1,1)
]])*5 # multiply by 5 to give bigger losses
print("logits map :")
print(x)
# ground truth labels
y = np.array([[
[0,1], # must predict class 0 for pixel (0,0) and class 1 for pixel (0,1)
[1,2], # must predict class 1 for pixel (1,0) and class 2 for pixel (1,1)
]])
print("\nlabels map :")
print(y)
x=torch.Tensor(x).permute((0,3,1,2)) # shape of preds must be (N, C, H, W) instead of (N, H, W, C)
y=torch.Tensor(y).long() # shape of labels must be (N, H, W) and type must be long integer
losses = nn.CrossEntropyLoss(reduction="none")(x, y) # reduction="none" to get the loss by pixel
print("\nLosses map :")
print(losses)
# notice that the loss is big only for pixel (0,1) where we predicted 0 instead of 1
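For the Sudoku case in the question, the same call would apply, presumably with logits of shape (N, 10, 9, 9) and integer labels of shape (N, 9, 9) (the batch size and the random tensors here are placeholders):

import torch
from torch import nn

N = 4                                     # placeholder batch of 4 puzzles
logits = torch.randn(N, 10, 9, 9)         # 10 classes per cell of the 9x9 grid
labels = torch.randint(0, 10, (N, 9, 9))  # ground-truth digit per cell
loss = nn.CrossEntropyLoss()(logits, labels)
print(loss)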
The previous layer is an embedding of size (V classes, K output dim). I want to introduce a weight matrix of size K x T. The weights will be trainable (as will the embeddings). Together they generate a V x T matrix that will be used downstream.
1) How might I go about this?
2) Will this mess with the gradients?
It's basically vector x matrix.
Example: embedding vocab = 10, dim K = 4, so for a particular member of the vocabulary, my embedding weights are a vector of size (1, 4) (think row vector).
For each row vector I want to multiply by a weight matrix of size 4x10, yielding a 1x10 vector (or layer). The weight matrix is common to all members of the vocabulary.
This 1x10 vector will be the input to the next layer.
What you want is a Dense layer, just without a bias. A Dense layer internally has a matrix that is common to all inputs; it does not vary with the input.
So this can be implemented as:
x = Dense(10, use_bias=False)(some_input_tensor)
No activation function is needed since you just want the matrix multiplication.
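If you are working in PyTorch rather than Keras, the equivalent would be a bias-free nn.Linear applied after the embedding; a minimal sketch using the sizes from the example above (layer names are my own):

import torch
from torch import nn

vocab_size, K, T = 10, 4, 10
embedding = nn.Embedding(vocab_size, K)    # trainable (V, K) embedding table
projection = nn.Linear(K, T, bias=False)   # trainable (K, T) weight shared by all tokens

tokens = torch.tensor([0, 3, 7])           # some vocabulary indices
out = projection(embedding(tokens))        # shape (3, T); gradients flow through both layers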