How to add to pytorch tensor at indices? - pytorch

I have to admit, I'm a bit confused by the scatter* and index* operations - I'm not sure any of them do exactly what I'm looking for, which is very simple:
Given some 2-D tensor
z = tensor([[1., 1., 1., 1.],
            [1., 1., 1., 1.],
            [1., 1., 1., 1.]])
And a list (or tensor?) of 2-d indexes:
inds = tensor([[0, 0],
               [1, 1],
               [1, 2]])
I want to add a scalar to z at those indexes (and do it efficiently):
znew = z.something_add(inds, 3)
->
znew = tensor([[4., 1., 1., 1.],
               [1., 4., 4., 1.],
               [1., 1., 1., 1.]])
If I have to I can make that scalar a tensor of whatever shape (where all elements = 3), but I'd rather not...

You must provide two lists to your indexing: the first with the row positions and the second with the column positions. In your example, it would be:
z[[0, 1, 1], [0, 1, 2]] += 3
torch.Tensor indexing follows NumPy. See https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#integer-array-indexing for more details.

This code achieves what you want:
z_new = z.clone() # copy the tensor
z_new[inds[:, 0], inds[:, 1]] += 3 # modify selected indices of new tensor
In PyTorch, you can index each axis of a tensor with another tensor.
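One caveat that neither answer mentions: with += through advanced indexing, duplicate index pairs are only added once. If inds can contain repeats, index_put with accumulate=True sums every occurrence instead; a minimal sketch, assuming the z and inds from the question:
import torch
z = torch.ones(3, 4)
inds = torch.tensor([[0, 0], [1, 1], [1, 2]])
# out-of-place index_put leaves z untouched; accumulate=True sums duplicate indices
znew = z.index_put((inds[:, 0], inds[:, 1]),
                   torch.full((inds.shape[0],), 3.), accumulate=True)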

Related

How can I replace part of a matrix with a new matrix in torch

Consider the following matrices:
>>> a = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> a
tensor([[1., 2., 3.],
        [4., 5., 6.],
        [7., 8., 9.]])
>>> b = torch.tensor([[1, 1], [1, 1]])
>>> b
tensor([[1, 1],
        [1, 1]])
I want to replace four elements of a with b, where their indices are specified by X = [0, 2] and Y = [0, 2].
To have:
>>> a
tensor([[1., 2., 1.],
        [4., 5., 6.],
        [1., 8., 1.]])
I am looking for an operation like scatter or index_put to update the matrix in a few commands (not loops).
If we take X and Y as two tensors of row and column indices, the following works:
a[X.reshape(-1,1), Y] = b
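For completeness, a runnable version of this answer (a sketch, with the assumptions that X and Y are made tensors so reshape works, and that the source dtype is cast to match the destination):
import torch
a = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
b = torch.tensor([[1, 1], [1, 1]])
X = torch.tensor([0, 2])
Y = torch.tensor([0, 2])
# X.reshape(-1, 1) is a column, so it broadcasts against Y to index the 2x2 block
a[X.reshape(-1, 1), Y] = b.float()  # cast: index assignment needs matching dtypes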

Broadcasting in PyTorch

To better understand how nn.BatchNorm2d works, I wanted to recreate the following lines of code:
import torch
import torch.nn as nn

input = torch.randint(1, 5, size=(2, 2, 3, 3)).float()
batch_norm = nn.BatchNorm2d(2)
output = batch_norm(input)
print(input)
print(output)
tensor([[[[3., 3., 2.],
          [3., 2., 2.],
          [4., 2., 1.]],

         [[1., 1., 2.],
          [3., 2., 1.],
          [1., 1., 1.]]],


        [[[4., 3., 3.],
          [4., 1., 4.],
          [1., 3., 2.]],

         [[2., 1., 4.],
          [4., 2., 1.],
          [4., 1., 3.]]]])
tensor([[[[ 0.3859,  0.3859, -0.6064],
          [ 0.3859, -0.6064, -0.6064],
          [ 1.3783, -0.6064, -1.5988]],

         [[-0.8365, -0.8365,  0.0492],
          [ 0.9349,  0.0492, -0.8365],
          [-0.8365, -0.8365, -0.8365]]],


        [[[ 1.3783,  0.3859,  0.3859],
          [ 1.3783, -1.5988,  1.3783],
          [-1.5988,  0.3859, -0.6064]],

         [[ 0.0492, -0.8365,  1.8206],
          [ 1.8206,  0.0492, -0.8365],
          [ 1.8206, -0.8365,  0.9349]]]])
To achieve this, I first calculated the mean and variance for each channel:
my_mean = (torch.mean(input, dim=[0, 2, 3]))
my_var = (torch.var(input, dim=[0, 2, 3]))
print(my_mean, my_var)
tensor([2.8333, 2.5556])
tensor([1.3235, 1.3203])
This seems reasonable: I have the mean and variance for each channel across the whole batch. Then I wanted to simply subtract the mean from the input and divide by the variance. This is where problems arise, since I do not know how to properly set up the mean and variance. PyTorch does not seem to broadcast properly:
my_output = (input - my_mean) / my_var
RuntimeError: The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 3
I then wanted to reshape the mean and variance in the appropriate shape, such that each value is repeated 25 times in a 5x5 shape
First try:
my_mean.repeat(25).reshape(3, 5, 5)
But this also results in an error. What is the best way to achieve my goal?
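For reference, a minimal sketch of the missing step (not from the thread; it assumes the (2, 2, 3, 3) input above). Two details matter for matching nn.BatchNorm2d: it uses the biased variance (unbiased=False) and divides by the standard deviation plus eps (default eps=1e-5), not by the variance. Reshaping the per-channel statistics to (1, C, 1, 1) lets them broadcast over batch, height, and width:
my_mean = input.mean(dim=[0, 2, 3])                # shape (C,)
my_var = input.var(dim=[0, 2, 3], unbiased=False)  # biased, as BatchNorm uses
my_output = (input - my_mean[None, :, None, None]) / torch.sqrt(
    my_var[None, :, None, None] + 1e-5)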

How to evaluate a pyTorch/DGL tensor

From a DGL graph I want to see the adjacency matrix with
adjM = g.adjacency_matrix()
adjM
and I get the following which is fine:
tensor(indices=tensor([[0, 0, 0, 1],
                       [1, 2, 3, 3]]),
       values=tensor([1., 1., 1., 1.]),
       size=(4, 4), nnz=4, layout=torch.sparse_coo)
Now I want to have the adjacency matrix and the node values each by itself. I imagine something of this kind:
adjMatrix = adjM.indices # or
adjMatrix = adjM[0]
nodeValues = adjM.values # or
nodeValues = adjM[1]
But this form is not accepted by PyTorch/DGL.
My beginner's questions:
how do I do this correctly and successfully?
Is there a tutorial for a newbie? (I have searched a lot just for this detail!)
In the DGL documentation you will find the usage of dgl.adj(). As the docs say, the return value is the adjacency matrix, and its type is SparseTensor.
I noticed that the output you posted is a SparseTensor.
You can try it as follows to get the entire adjacency matrix.
I create a DGL graph g and get its adjacency matrix as adj:
g = dgl.graph(([0, 1, 2], [1, 2, 3]))
adj = g.adj()
adj
output is:
tensor(indices=tensor([[0, 1, 2],
                       [1, 2, 3]]),
       values=tensor([1., 1., 1.]),
       size=(4, 4), nnz=3, layout=torch.sparse_coo)
We can see that adj is stored as a sparse tensor in COO format. We can use the following code to verify that adj is a SparseTensor:
adj.is_sparse
output:
True
so we can use to_dense() to get the dense adjacency matrix
adj.to_dense()
the result is:
tensor([[0., 1., 0., 0.],
        [0., 0., 1., 0.],
        [0., 0., 0., 1.],
        [0., 0., 0., 0.]])
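To get the indices and the values each by themselves, as the question asks, the sparse COO accessors should work (a sketch; a sparse tensor must be coalesced before indices() and values() can be called):
adj = adj.coalesce()        # required before calling indices()/values()
edge_index = adj.indices()  # tensor([[0, 1, 2], [1, 2, 3]])
edge_vals = adj.values()    # tensor([1., 1., 1.])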
When you have a problem with DGL you can check the Deep Graph Library Tutorials and Documentation.

PyTorch: Differentiable operations to go from coordinate tensor to grid tensor

I have a tensor that looks like
coords = torch.Tensor([[0, 0, 1, 2],
                       [0, 2, 2, 2]])
The first row is the x-coordinates of objects on a grid and the second row is the corresponding y-coordinates.
I need a differentiable way (i.e. gradients can flow) to go from this tensor to the corresponding "grid" tensor, where a 1 represents the presence of an object in that location (row index, column index) and 0 represents no object:
grid = torch.Tensor([[1, 0, 1],
                     [0, 0, 1],
                     [0, 0, 1]])
In general, coords can be large (the grid size is 300x300). If coords was a sparse tensor I could simply call to_dense on it, but for various reasons specific to my application I cannot store coords as sparse. Additionally, I cannot create a new sparse tensor from coords and call to_dense on it because creating a new tensor is not differentiable.
Any help is appreciated!
I'm not sure what you mean by 'differentiable', but here's a simple way to do it using advanced indexing.
grid = torch.zeros(3, 3)  # start from an all-zero grid
coords = coords.long()    # indices must be an integer dtype
grid[coords[0], coords[1]] = 1
tensor([[1., 0., 1.],
        [0., 0., 1.],
        [0., 0., 1.]])
I think Torch doesn't have detailed documentation about this, but NumPy does; it is probably very similar for torch.
This is also possible:
coords = coords.long()
grid[coords[0], coords[1]] = torch.Tensor([1, 2, 3, 4])
tensor([[1., 0., 2.],
        [0., 0., 3.],
        [0., 0., 4.]])
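On the differentiability point (an addition, not from the thread): the out-of-place index_put is differentiable with respect to the values being placed, so gradients can flow into whatever fills the grid. A minimal sketch:
import torch
coords = torch.tensor([[0, 0, 1, 2],
                       [0, 2, 2, 2]])
values = torch.ones(4, requires_grad=True)  # gradients flow into these
grid = torch.zeros(3, 3).index_put((coords[0], coords[1]), values)
grid.sum().backward()
print(values.grad)  # tensor([1., 1., 1., 1.])
Note that the coordinates themselves remain non-differentiable either way, since integer indices carry no gradient.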
Say
coords = [[0, 0, 1, 2],
          [0, 2, 2, 2]]
Then this converts the nested list into the coords tensor:
torch.stack([torch.tensor(x) for x in coords])

keras and nlp - when to use .texts_to_matrix instead of .texts_to_sequences?

Keras offers a couple of helper functions to process text:
texts_to_sequences and texts_to_matrix
It seems that most people use texts_to_sequences, but it is unclear to me why one is picked over the other and under what conditions you might want to use texts_to_matrix.
texts_to_matrix is easy to understand. It converts texts to a matrix whose columns refer to words and whose cells carry the number of occurrences or a presence flag. Such a design is useful for the direct application of ML algorithms (logistic regression, decision tree, etc.).
texts_to_sequences creates lists of integers representing the words. Certain components, like Keras embedding layers, require this format for preprocessing.
Consider the example below.
import pandas as pd
from tensorflow.keras.preprocessing.text import Tokenizer

txt = ['Python is great and useful', 'Python is easy to learn', 'Python is easy to implement']
txt = pd.Series(txt)
tok = Tokenizer(num_words=10)
tok.fit_on_texts(txt)
mat_texts = tok.texts_to_matrix(txt, mode='count')
mat_texts
Output:
array([[0., 1., 1., 0., 0., 1., 1., 1., 0., 0.],
       [0., 1., 1., 1., 1., 0., 0., 0., 1., 0.],
       [0., 1., 1., 1., 1., 0., 0., 0., 0., 1.]])
tok.get_config()['word_index']
Output:
'{"python": 1, "is": 2, "easy": 3, "to": 4, "great": 5, "and": 6, "useful": 7, "learn": 8, "implement": 9}'
mat_texts_seq = tok.texts_to_sequences(txt)
mat_texts_seq
Output:
[[1, 2, 5, 6, 7], [1, 2, 3, 4, 8], [1, 2, 3, 4, 9]]
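As a follow-up (not part of the answer above): the sequences are usually padded to a fixed length before they go into an embedding layer, for example with pad_sequences; a sketch assuming tensorflow.keras:
from tensorflow.keras.preprocessing.sequence import pad_sequences

padded = pad_sequences(mat_texts_seq, maxlen=6)  # zero-pads on the left by default
# padded now has shape (3, 6) and can feed an Embedding(input_dim=10, ...) layer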
