Creating a distance matrix from a matrix/list - python-3.x

I have created a list from the input, and from that list I have also created a matrix.
list-
('A', 'B', 3)
('A', 'D', 4)
('B', 'D', 4)
('B', 'H', 5)
('C', 'L', 2)
('D', 'F', 1)
('F', 'H', 3)
('G', 'H', 2)
('G', 'Y', 2)
('I', 'J', 6)
('I', 'K', 4)
matrix-
['A' 'B' '3']
['A' 'D' '4']
['B' 'D' '4']
['B' 'H' '5']
['C' 'L' '2']
['D' 'F' '1']
['F' 'H' '3']
['G' 'H' '2']
['G' 'Y' '2']
['I' 'J' '6']
['I' 'K' '4']
However, I want to create a distance matrix from the above matrix or list and then print it. What would be the correct approach to implement this? In each row of the matrix above, the first two entries are the starting and ending nodes and the third entry is the distance between them. I am happy to give further clarification if required. A sample distance matrix is:
[[0, 10, 15, 20],
[10, 0, 35, 25],
[15, 35, 0, 30],
[20, 25, 30, 0]]

I don't think this question is homework, because its specific input format like ('A', 'B', 3) doesn't look like a typical textbook exercise to me, so I decided to help you. Still, it's better if you also try another idea of your own, to help yourself improve at coding. One approach is to assign a number to each character; then you can address the rows and columns of the distance matrix with the numbers associated with the letters. For example, map 'A' to 1, 'B' to 2, and so on:
A ┌ 0 3 0 ┐        1 ┌ 0 3 0 ┐
B │ 3 0 0 │ ─────► 2 │ 3 0 0 │
C └ 0 0 0 ┘        3 └ 0 0 0 ┘
So in this example, 1 stands for 'A' and so on. How does this help? Example: we have a tuple like ('A', 'B', 3); I treat it as (1, 2, 3), and then I can use the first two values of each tuple as indices into the distance matrix:
Distance Matrix                          column 2
                                       ┌─────────────┐
                                 row 1 │ 3  ...      │
('A', 'B', 3) ────► (1, 2, 3) ────────►│ .        .  │
                                       │ .        .  │
                                       └─────────────┘
So first of all I'll create an input list as you mentioned. I'll name it lis:
lis = [('A', 'B', 3),
('A', 'D', 4),
('B', 'D', 4),
('B', 'H', 5),
('C', 'L', 2),
('D', 'F', 1),
('F', 'H', 3),
('G', 'H', 2),
('G', 'Y', 2),
('I', 'J', 6),
('I', 'K', 4)]
Then I detect the unique letters in lis using set and set.union, since letters can appear in both the first and the second element of each tuple:
items = set.union(set([item[0].upper() for item in lis]) , set([item[1].upper() for item in lis]))
Then I'll make a dictionary that assigns a number to each letter, following alphabetical order:
value = dict(zip(sorted(items), range(26)))
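With the input above, value maps each letter that actually occurs to its row/column index (note that, unlike the 1-based sketch earlier, the code numbers from 0):
value
# {'A': 0, 'B': 1, 'C': 2, 'D': 3, 'F': 4, 'G': 5, 'H': 6, 'I': 7, 'J': 8, 'K': 9, 'L': 10, 'Y': 11}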
Then I'll create a zero matrix using numpy.zeros:
import numpy as np
dist_matrix = np.zeros((len(items) , len(items)))
The last step is assigning the third value of each tuple to the related position in the distance matrix:
for i in range(len(lis)):
    # upper triangle
    dist_matrix[value[lis[i][0]], value[lis[i][1]]] = lis[i][2]
    # lower triangle
    dist_matrix[value[lis[i][1]], value[lis[i][0]]] = lis[i][2]
"""
Example:
[0 3 0]
[3 0 0]
[0 0 0]
"""
dist_matrix
This gives me:
array([[0., 3., 0., 4., 0., 0., 0., 0., 0., 0., 0., 0.],
[3., 0., 0., 4., 0., 0., 5., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 2., 0.],
[4., 4., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 3., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 2., 0., 0., 0., 0., 2.],
[0., 5., 0., 0., 3., 2., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 6., 4., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 6., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 4., 0., 0., 0., 0.],
[0., 0., 2., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 2., 0., 0., 0., 0., 0., 0.]])
Appendix
All code at one glance:
import numpy as np
lis = [('A', 'B', 3),
('A', 'D', 4),
('B', 'D', 4),
('B', 'H', 5),
('C', 'L', 2),
('D', 'F', 1),
('F', 'H', 3),
('G', 'H', 2),
('G', 'Y', 2),
('I', 'J', 6),
('I', 'K', 4)]
items = set.union(set([item[0].upper() for item in lis]) , set([item[1].upper() for item in lis]))
value = dict(zip(sorted(items), range(26)))
dist_matrix = np.zeros((len(items) , len(items)))
for i in range(len(lis)):
    # upper triangle
    dist_matrix[value[lis[i][0]], value[lis[i][1]]] = lis[i][2]
    # lower triangle
    dist_matrix[value[lis[i][1]], value[lis[i][0]]] = lis[i][2]
"""
Example:
[0 3 0]
[3 0 0]
[0 0 0]
"""
dist_matrix

Since you did not show any attempts, here are some ideas to get you started.
Convert a character to a number, e.g. an index location:
ord('C') - ord('A')
Creating a matrix:
If you want a fixed-size, pre-allocated matrix instead of nested lists with flexible sizes, the numpy library can help you. You can then access fields and set values while looping through your data, as in the sketch below.
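A minimal sketch combining both ideas (my own illustration, assuming single-uppercase-letter node names stored in a list of tuples called edges):
import numpy as np

edges = [('A', 'B', 3), ('A', 'D', 4), ('B', 'D', 4)]          # shortened sample input

size = ord(max(max(s, e) for s, e, _ in edges)) - ord('A') + 1  # matrix size from the largest letter
dist = np.zeros((size, size))

for start, end, weight in edges:
    i, j = ord(start) - ord('A'), ord(end) - ord('A')
    dist[i, j] = weight
    dist[j, i] = weight                                         # keep the matrix symmetric

print(dist)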
If you are really interested in analyzing a network, you could have a look into networkx.
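If you take the networkx route, a rough sketch could look like this (my own illustration, not a complete solution; edges that are absent come out as 0 in the array):
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([('A', 'B', 3), ('A', 'D', 4), ('B', 'D', 4)])  # shortened sample input

# weighted adjacency (distance) matrix, rows/columns in sorted node order
dist = nx.to_numpy_array(G, nodelist=sorted(G.nodes()))
print(dist)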
Please take these ideas as a starting point and refine your question with more precise obstacles you encounter while trying to solve your problem.

Related

Numpy: Generate matrix recursively

Is there a smart way to recursively generate matrices with increasing sizes in numpy?
I do have a generator matrix which is
g = np.array([[1, 0], [1, 1]])
And in every further iteration, the size of both axes doubles, making a new matrix of the format:
g_n = [[g_{n-1}, 0], [g_{n-1}, g_{n-1}]]
which means that the new version would be:
g = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [1, 0, 1, 0], [1, 1, 1, 1]])
Is there an easy way to obtain something like that?
I could also generate a matrix of size (len(g)*2, len(g)*2) and try to fill it manually in two for-loops, but that seems extremely annoying.
Is there a better way?
PS: For those of you curious about it, the matrix is the generator matrix for polar codes.
IIUC, one way using numpy.block:
g = np.array([[1, 0], [1, 1]])
g = np.block([[g, np.zeros(g.shape)], [g, g]])
Output (iteration 1):
array([[1., 0., 0., 0.],
[1., 1., 0., 0.],
[1., 0., 1., 0.],
[1., 1., 1., 1.]])
Output (iteration 2):
array([[1., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 0., 0., 0., 0., 0., 0.],
[1., 0., 1., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 0., 0., 0., 0.],
[1., 0., 0., 0., 1., 0., 0., 0.],
[1., 1., 0., 0., 1., 1., 0., 0.],
[1., 0., 1., 0., 1., 0., 1., 0.],
[1., 1., 1., 1., 1., 1., 1., 1.]])
I don't see a straight-forward way to generate g_n, but you can reduce the two for-loops to one (along n) with:
# another sample
g = np.array([[1, 0], [2, 3]])
g = (np.array([[g, np.zeros_like(g)], [g, g]])
     .swapaxes(1, 2)
     .reshape(2*g.shape[0], 2*g.shape[1]))
Output:
array([[1, 0, 0, 0],
[2, 3, 0, 0],
[1, 0, 1, 0],
[2, 3, 2, 3]])
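To get g_n for an arbitrary number of doublings with the np.block approach above, the step can simply be repeated in a loop (a small sketch of my own; n counts the doublings):
import numpy as np

def polar_generator(n):
    # start from the base 2x2 generator and double it n times
    g = np.array([[1, 0], [1, 1]])
    for _ in range(n):
        g = np.block([[g, np.zeros(g.shape, dtype=g.dtype)], [g, g]])
    return g

print(polar_generator(2))   # 8x8 matrix with the same values as "iteration 2" above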

All possible concatenations of two tensors in PyTorch

Suppose I have two tensors S and T defined as:
S = torch.rand((3,2,1))
T = torch.ones((3,2,1))
We can think of these as containing batches of tensors with shapes (2, 1). In this case, the batch size is 3.
I want to concatenate all possible pairings between batches. A single concatenation of batches produces a tensor of shape (4, 1). And there are 3*3 combinations so ultimately, the resulting tensor C must have a shape of (3, 3, 4, 1).
One solution is to do the following:
C = torch.zeros((S.shape[0], T.shape[0], 4, 1))   # pre-allocate the result
for i in range(S.shape[0]):
    for j in range(T.shape[0]):
        C[i, j, :, :] = torch.cat((S[i, :, :], T[j, :, :]))
But the for loop doesn't scale well to large batch sizes. Is there a PyTorch command to do this?
I don't know of any out-of-the-box command that does such an operation. However, you can pull it off in a straightforward way using a single matrix multiplication.
The trick is to start from the already stacked S, T tensor and then multiply it with a properly chosen mask tensor that selects all pairs of batch elements. In this method, keeping track of shapes and dimension sizes is essential.
The stack is given by (notice the reshape, we essentially flatten the batch elements from S and T into a single batch axis on ST):
>>> ST = torch.stack((S, T)).reshape(6, 2)
>>> ST
tensor([[0.7792, 0.0095],
[0.1893, 0.8159],
[0.0680, 0.7194],
[1.0000, 1.0000],
[1.0000, 1.0000],
[1.0000, 1.0000]])
# ST.shape = (6, 2)
You can retrieve all (S[i], T[j]) pairs using range and itertools.product:
>>> from itertools import product
>>> indices = torch.tensor(list(product(range(0, 3), range(3, 6))))
>>> indices
tensor([[0, 3],
[0, 4],
[0, 5],
[1, 3],
[1, 4],
[1, 5],
[2, 3],
[2, 4],
[2, 5]])
# indices.shape = (9, 2)
From there, we construct one-hot-encodings of the indices using torch.nn.functional.one_hot:
>>> from torch.nn.functional import one_hot
>>> mask = one_hot(indices).float()
>>> mask
tensor([[[1., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0.]],
[[1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0.]],
[[1., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1.]],
[[0., 1., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0.]],
[[0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0.]],
[[0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 1.]],
[[0., 0., 1., 0., 0., 0.],
[0., 0., 0., 1., 0., 0.]],
[[0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 1., 0.]],
[[0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 1.]]])
# mask.shape = (9, 2, 6)
Finally, we compute the matrix multiplication and reshape it to the final form:
>>> (mask @ ST).reshape(3, 3, 4, 1)
tensor([[[[0.7792],
[0.0095],
[1.0000],
[1.0000]],
[[0.7792],
[0.0095],
[1.0000],
[1.0000]],
[[0.7792],
[0.0095],
[1.0000],
[1.0000]]],
[[[0.1893],
[0.8159],
[1.0000],
[1.0000]],
[[0.1893],
[0.8159],
[1.0000],
[1.0000]],
[[0.1893],
[0.8159],
[1.0000],
[1.0000]]],
[[[0.0680],
[0.7194],
[1.0000],
[1.0000]],
[[0.0680],
[0.7194],
[1.0000],
[1.0000]],
[[0.0680],
[0.7194],
[1.0000],
[1.0000]]]])
I initially went with torch.einsum: torch.einsum('bf,pib->pif', ST, mask). But I later realized that bf,pib->pif reduces nicely to a simple torch.Tensor.matmul operation if we switch the two operands, i.e. pib,bf->pif (subscript b is reduced in the middle).
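For comparison, here is a shorter alternative of my own (not part of the answer above) that avoids the mask entirely by broadcasting both batches and concatenating along the feature dimension:
# S and T have shape (3, 2, 1); C gets shape (3, 3, 4, 1) with C[i, j] = cat(S[i], T[j])
C = torch.cat((
    S.unsqueeze(1).expand(-1, T.shape[0], -1, -1),   # (3, 3, 2, 1), S[i] repeated along dim 1
    T.unsqueeze(0).expand(S.shape[0], -1, -1, -1),   # (3, 3, 2, 1), T[j] repeated along dim 0
), dim=2)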
In numpy, this kind of problem is handled with np.meshgrid:
https://stackoverflow.com/a/35608701/3259896
So in PyTorch it would be:
torch.stack(
    torch.meshgrid(x, y)
).T.reshape(-1, 2)
where x and y are your two lists. You can use any number of them: x, y, z, etc.
You then reshape according to the number of lists you use: for three lists use .reshape(-1, 3), for four use .reshape(-1, 4), and so on.
So for 5 tensors, use
torch.stack(
    torch.meshgrid(a, b, c, d, e)
).T.reshape(-1, 5)

sklearn ndcg_score returned incorrect result

I am working on a project that involves the use of NDCG (normalized discounted cumulative gain), and I understand the method's underlying calculations.
So I imported ndcg_score from sklearn.metrics, and then pass in a ground truth array and another array to the ndcg_score function to calculate their NDCG score. The ground truth array has the values [5, 4, 3, 2, 1] while the other array has the values [5, 4, 3, 2, 0], so only the last element is different in these 2 arrays.
from sklearn.metrics import ndcg_score
from numpy import array

user_ndcg = ndcg_score(array([[5, 4, 3, 2, 1]]), array([[5, 4, 3, 2, 0]]))
I was expecting the result to be around 0.96233 (9.88507/10.27192). However, user_ndcg actually returned 1.0, which surprised me. Initially I thought this was due to rounding, but this is not the case, because when I experimented with another pair of arrays, ndcg_score(array([[5, 4, 3, 2, 1]]), array([[5, 4, 0, 2, 0]])), it correctly returned 0.98898.
Does anyone know whether this could be a bug with the sklearn ndcg_score function, or whether I was doing something wrong with my code?
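For reference, the 0.96233 the asker expects comes from a plain DCG / ideal-DCG calculation along these lines (my own reconstruction of that arithmetic, not part of the question):
import numpy as np

rel_ideal = np.array([5, 4, 3, 2, 1])        # relevances in the ideal order
rel_pred  = np.array([5, 4, 3, 2, 0])        # relevances in the predicted order
discounts = 1 / np.log2(np.arange(2, 7))     # 1 / log2(rank + 1) for ranks 1..5

dcg  = (rel_pred  * discounts).sum()         # ~ 9.88507
idcg = (rel_ideal * discounts).sum()         # ~ 10.27192
print(dcg / idcg)                            # ~ 0.96233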
I am assuming you are trying to predict six different classes for this problem (0, 1, 2, 3, 4 and 5). If you want to evaluate the ndcg for five different observations, you have to pass the function two arrays of shape (5, 6) each.
That is, you have to transform your ground truth and predictions into arrays with five rows and six columns.
import numpy as np
from sklearn.metrics import ndcg_score

# Current form of ground truth and predictions
y_true = [5, 4, 3, 2, 1]
y_pred = [5, 4, 3, 2, 0]
# Transform ground truth to ndarray
y_true_nd = np.zeros(shape=(5, 6))
y_true_nd[np.arange(5), y_true] = 1
# Transform predictions to ndarray
y_pred_nd = np.zeros(shape=(5, 6))
y_pred_nd[np.arange(5), y_pred] = 1
# Calculate ndcg score
ndcg_score(y_true_nd, y_pred_nd)
> 0.8921866522394966
Here's what y_true_nd and y_pred_nd look like:
y_true_nd
array([[0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 1., 0.],
[0., 0., 0., 1., 0., 0.],
[0., 0., 1., 0., 0., 0.],
[0., 1., 0., 0., 0., 0.]])
y_pred_nd
array([[0., 0., 0., 0., 0., 1.],
[0., 0., 0., 0., 1., 0.],
[0., 0., 0., 1., 0., 0.],
[0., 0., 1., 0., 0., 0.],
[1., 0., 0., 0., 0., 0.]])

How to map element in pytorch tensor to id?

Given a tensor:
A = torch.tensor([2., 3., 4., 5., 6., 7.])
Then, give each element in A an id:
id = torch.arange(A.shape[0], dtype = torch.int) # tensor([0,1,2,3,4,5])
In other words, id of 2. in A is 0 and id of 3. in A is 1:
2. -> 0
3. -> 1
4. -> 2
5. -> 3
6. -> 4
7. -> 5
Then, I have a new tensor:
B = torch.tensor([3., 6., 6., 5., 4., 4., 4.])
Is there any way in PyTorch to map each element in B to its id?
In other words, I want to obtain tensor([1, 4, 4, 3, 2, 2, 2]), in which each element is id of the element in B.
What you ask can be done by slowly iterating over the whole B tensor, checking each of its elements against all elements of A, and then retrieving the index of each element:
In [*]: for x in B:
   ...:     print(torch.where(x == A)[0][0])
   ...:
tensor(1)
tensor(4)
tensor(4)
tensor(3)
tensor(2)
tensor(2)
tensor(2)
Here I used torch.where to find all the True elements in the matrix x == A, where x takes the value of each element of B. This is really slow, but it allows you to add functionality for cases where some elements of B do not appear in A.
The fast and dirty method to get what you want with linear algebra operations is:
In [*]: (B.view(-1,1) == A).int().argmax(dim=1)
Out[*]: tensor([1, 4, 4, 3, 2, 2, 2])
This trick takes advantage of the fact that argmax returns the first 'max' index of each vector in dim=1.
Big warning here: if an element of B does not exist in A, no error will be raised and the result will silently be 0 for all such elements.
In [*]: C = torch.tensor([100, 1000, 1, 3, 9999])
In [*]: (C.view(-1,1) == A).int().argmax(dim=1)
Out[*]: tensor([0, 0, 0, 1, 0])
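One way to guard against that silent failure (my own sketch, not part of the answer) is to check membership explicitly before trusting the argmax result:
matches = B.view(-1, 1) == A                  # (len(B), len(A)) boolean matrix
assert matches.any(dim=1).all(), "some elements of B are missing from A"
ids = matches.int().argmax(dim=1)             # now safe: every row has at least one True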
I don't think there is such a function in PyTorch to map a tensor.
It also seems quite unreasonable to solve this by comparing each value from B to the values in A.
Here are two possible solutions to solve this problem.
Using a dictionary as a map
You can use a dictionary. It is not much of a pure-PyTorch solution, but it will most probably be the fastest and safest way.
Just create a dict to map each element to an id, then use it to map B:
>>> map = {x.item(): i for i, x in enumerate(A)}
>>> torch.tensor([map[x.item()] for x in B])
tensor([1, 4, 4, 3, 2, 2, 2])
Change of basis approach
An alternative only using torch.Tensors. This will require the values you want to map - the content of A - to be integers because they will be used to index a tensor.
Encode the content of A into one-hot encodings:
>>> A_enc = torch.zeros((int(A.max())+1,)*2)
>>> A_enc[A, torch.arange(A.shape[0])] = 1
>>> A_enc
tensor([[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 1., 0., 0.]])
We'll use A_enc as our basis to map integers:
>>> v = torch.argmax(A_enc, dim=0)
tensor([0, 0, 0, 1, 2, 3, 4, 5])
Now, given an integer, for instance x=3, we can encode it as a one-hot vector: x_enc = [0, 0, 0, 1, 0, 0, 0, 0]. Then we use v to map it: a simple dot product <v, x_enc> gives 1, which is the desired result (the first element of the mapped B). But instead of handling one x_enc at a time, we compute the matrix multiplication between v and the encoded B. First encode B, then compute the matrix multiplication v @ B_enc:
>>> B_enc = torch.zeros(A_enc.shape[0], B.shape[0])
>>> B_enc[B, torch.arange(B.shape[0])] = 1
>>> B_enc
tensor([[0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 1., 1., 1.],
[0., 0., 0., 1., 0., 0., 0.],
[0., 1., 1., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0.]])
>>> v @ B_enc.long()
tensor([1, 4, 4, 3, 2, 2, 2])
Note - you will have to define your tensors with Long type.
There is a similar question for numpy, so my answer is heavily inspired by its solution. I will compare some of the mentioned methods using perfplot, and I will also generalize the problem to applying an arbitrary mapping to a tensor (yours is just a specific case).
For the analysis, I will assume the mapping contains all the unique elements in the tensor and that the number of mapped elements is small and constant.
import torch


def apply(a: torch.Tensor, ids: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    mapping = {k.item(): v.item() for k, v in zip(a, ids)}
    return b.clone().apply_(lambda x: mapping.__getitem__(x))


def bucketize(a: torch.Tensor, ids: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    mapping = {k.item(): v.item() for k, v in zip(a, ids)}
    # From `https://stackoverflow.com/questions/13572448`.
    palette, key = zip(*mapping.items())
    key = torch.tensor(key)
    palette = torch.tensor(palette)
    index = torch.bucketize(b.ravel(), palette)
    remapped = key[index].reshape(b.shape)
    return remapped


def iterate(a: torch.Tensor, ids: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    mapping = {k.item(): v.item() for k, v in zip(a, ids)}
    return torch.tensor([mapping[x.item()] for x in b])


def argmax(a: torch.Tensor, ids: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return (b.view(-1, 1) == a).int().argmax(dim=1)


if __name__ == "__main__":
    import perfplot

    a = torch.arange(2, 8)
    ids = torch.arange(0, 6)

    perfplot.show(
        setup=lambda n: torch.randint(2, 8, (n,)),
        kernels=[
            lambda x: apply(a, ids, x),
            lambda x: bucketize(a, ids, x),
            lambda x: iterate(a, ids, x),
            lambda x: argmax(a, ids, x),
        ],
        labels=["apply", "bucketize", "iterate", "argmax"],
        n_range=[2 ** k for k in range(25)],
        xlabel="len(a)",
    )
Running this yields the following plot:
Hence depending on the number of elements in your tensor you can pick either the argmax method (with the caveats mentioned and the restriction that you have to map the values from 0 to N), apply, or bucketize.
Now, if we increase the number of elements to be mapped to, let's say, tens of thousands, i.e. a = torch.arange(2, 10002) and ids = torch.arange(0, 10000), we get the following results:
This means the speed advantage of bucketize only becomes visible for larger arrays, but it still outperforms the other methods (the argmax run was killed, so I had to remove it from this comparison).
Last, if the mapping does not contain all the keys present in the tensor, we can first build a dictionary mapping all unique keys to themselves and then update it:
mapping = {x.item(): x.item() for x in torch.unique(a)}
mapping.update({k.item(): v.item() for k, v in zip(a, ids)})
Now, if the number of unique elements you want to map is orders of magnitude larger than the array itself, this may shift the value of n at which bucketize becomes faster than apply (since for apply you can change mapping.__getitem__(x) to mapping.get(x, x)).
I guess there is an easier way: create an array as a mapper, cast your tensors back into np.ndarray first, and then index into it.
import numpy as np

# A and B are the tensors defined in the question
a_array = A.numpy().astype(int)
b_array = B.numpy().astype(int)

mapper = np.zeros(10)
for i, x in enumerate(a_array):
    mapper[x] = i

out = torch.Tensor(mapper[b_array])

Interpretation of in_channels and out_channels in Conv2D in Pytorch Convolution Neural Networks (CNN)

Let us suppose I have a CNN whose first two layers are:
inp_conv = Conv2D(in_channels=1,out_channels=6,kernel_size=(3,3))
Please correct me if I am wrong, but I think this line of code can be read as follows:
There is a single grayscale image coming in as input, and we apply 6 different kernels of the same size (3, 3) to produce 6 different feature maps from that single image.
And if I have a second Conv2D layer just after first one as
second_conv_connected_to_inp_conv = Conv2D(in_channels=6,out_channels=12,kernel_size=(3,3))
What does this mean in terms of out_channels? Are there going to be 12 new feature maps for each of the 6 feature maps coming out of the first layer, OR are there going to be 12 feature maps in total from the 6 incoming feature maps?
To go from 6 channels to 12 channels in your second convolution layer, we take 12 filters of size 6x3x3. Each 6x3x3 filter gives a single channel as output when the dot product is performed. Since we take 12 of those 6x3x3 filters, we get exactly 12 channels as output. For more information check this link:
https://cs231n.github.io/convolutional-networks/#conv
Edit: Think of it this way: we have 6 input channels, i.e. HxWx6, where H is the height and W is the width of the image. Since there are 6 channels, each filter consists of 6 slices of size 3x3 (assuming the kernel size is 3). After the dot product we again get 6 channels, but now we add up those 6 resulting channels to get a single output channel. This operation is performed 12 times to get 12 channels.
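A quick way to verify the shape bookkeeping described above (a small sketch of my own): the second layer's weight tensor holds 12 filters, each of which spans all 6 input channels.
import torch.nn as nn

layer = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=(3, 3))
print(layer.weight.shape)   # torch.Size([12, 6, 3, 3]) -> 12 filters of size 6x3x3
print(layer.bias.shape)     # torch.Size([12]) -> one bias per output channel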
For each out_channel, you have a set of kernels for each in_channel.
Equivalently, each out_channel has an in_channel x height x width kernel:
import torch.nn as nn

for i in nn.Conv2d(in_channels=2, out_channels=3, kernel_size=(4, 5)).parameters():
    print(i)
Output:
Parameter containing:
tensor([[[[-0.0012, 0.0848, -0.1301, -0.1164, -0.0609],
[ 0.0424, -0.0031, 0.1254, -0.0140, 0.0418],
[-0.0478, -0.0311, -0.1511, -0.1047, -0.0652],
[ 0.0059, 0.0625, 0.0949, -0.1072, -0.0689]],
[[ 0.0574, 0.1313, -0.0325, 0.1183, -0.0255],
[ 0.0167, 0.1432, -0.1467, -0.0995, -0.0400],
[-0.0616, 0.1366, -0.1025, -0.0728, -0.1105],
[-0.1481, -0.0923, 0.1359, 0.0706, 0.0766]]],
[[[ 0.0083, -0.0811, 0.0268, -0.1476, -0.1142],
[-0.0815, 0.0998, 0.0927, -0.0701, -0.0057],
[ 0.1011, 0.1572, 0.0628, 0.0214, 0.1060],
[-0.0931, 0.0295, -0.1226, -0.1096, -0.0817]],
[[ 0.0715, 0.0636, -0.0937, 0.0478, 0.0868],
[-0.0200, 0.0060, 0.0366, 0.0981, 0.1518],
[-0.1218, -0.0579, 0.0621, 0.1310, 0.1376],
[ 0.1395, 0.0315, -0.1375, 0.0145, -0.0989]]],
[[[-0.1474, 0.1405, 0.1202, -0.1577, 0.0296],
[-0.0266, -0.0260, -0.0724, 0.0608, -0.0937],
[ 0.0580, 0.0800, 0.1132, 0.0591, -0.1565],
[-0.1026, 0.0789, 0.0331, -0.1233, -0.0910]],
[[ 0.1487, 0.1065, -0.0689, -0.0398, -0.1506],
[-0.0028, -0.1191, -0.1220, -0.0087, 0.0237],
[-0.0648, 0.0938, -0.0962, 0.1435, 0.1084],
[-0.1333, -0.0394, 0.0071, 0.0231, 0.0375]]]], requires_grad=True)
Parameter containing:
tensor([ 0.0620, 0.0095, -0.0771], requires_grad=True)
A more detailed example going from 1 channel input, through 2 and 4 channel convolutions:
import torch
import torch.nn as nn

torch.manual_seed(0)

input0 = torch.randint(-1, 1, (1, 1, 8, 8)).type(torch.FloatTensor)
print('input0:', input0.size())
print(input0.data)

layer0 = nn.Conv2d(in_channels=1, out_channels=2, kernel_size=2, stride=2, padding=0, bias=False)
print('\nlayer0:')
for i in layer0.parameters():
    print(i.size())
    i.data = torch.randint(-1, 1, i.size()).type(torch.FloatTensor)
    print(i.data)

output0 = layer0(input0)
print('\noutput0:', output0.size())
print(output0.data)

print('\nlayer1:')
layer1 = nn.Conv2d(in_channels=2, out_channels=4, kernel_size=2, stride=2, padding=0, bias=False)
for i in layer1.parameters():
    print(i.size())
    i.data = torch.randint(-1, 1, i.size()).type(torch.FloatTensor)
    print(i.data)

output1 = layer1(output0)
print('\noutput1:', output1.size())
print(output1.data)
output:
input0: torch.Size([1, 1, 8, 8])
tensor([[[[-1., 0., 0., -1., 0., 0., 0., 0.],
[ 0., 0., 0., -1., -1., 0., -1., -1.],
[-1., -1., -1., 0., -1., 0., 0., -1.],
[-1., 0., 0., 0., 0., -1., 0., -1.],
[ 0., -1., 0., 0., -1., 0., 0., -1.],
[-1., 0., -1., 0., 0., 0., 0., 0.],
[-1., 0., -1., 0., 0., 0., 0., -1.],
[ 0., -1., -1., 0., 0., -1., 0., -1.]]]])
layer0:
torch.Size([2, 1, 2, 2])
tensor([[[[-1., -1.],
[-1., 0.]]],
[[[ 0., -1.],
[ 0., -1.]]]])
output0: torch.Size([1, 2, 4, 4])
tensor([[[[1., 1., 1., 1.],
[3., 1., 1., 1.],
[2., 1., 1., 1.],
[1., 2., 0., 1.]],
[[0., 2., 0., 1.],
[1., 0., 1., 2.],
[1., 0., 0., 1.],
[1., 0., 1., 2.]]]])
layer1:
torch.Size([4, 2, 2, 2])
tensor([[[[-1., -1.],
[-1., -1.]],
[[ 0., -1.],
[ 0., -1.]]],
[[[ 0., 0.],
[ 0., 0.]],
[[ 0., -1.],
[ 0., 0.]]],
[[[ 0., 0.],
[-1., 0.]],
[[ 0., -1.],
[-1., 0.]]],
[[[-1., -1.],
[-1., -1.]],
[[ 0., 0.],
[-1., -1.]]]])
output1: torch.Size([1, 4, 2, 2])
tensor([[[[-8., -7.],
[-6., -6.]],
[[-2., -1.],
[ 0., -1.]],
[[-6., -3.],
[-2., -2.]],
[[-7., -7.],
[-7., -6.]]]])
Breaking down the linear algebra:
import numpy as np

np.sum(
    # kernel for layer1, in_channel 0, out_channel 0
    # multiplied by output0, channel 0, top left corner
    (np.array([[-1., -1.],
               [-1., -1.]]) *
     np.array([[1., 1.],
               [3., 1.]])) +
    # kernel for layer1, in_channel 1, out_channel 0
    # multiplied by output0, channel 1, top left corner
    (np.array([[ 0., -1.],
               [ 0., -1.]]) *
     np.array([[0., 2.],
               [1., 0.]]))
)
This will be equal to output1, channel 0, top left corner:
-8.0
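The same check can be done directly on the tensors from the snippet above (my own sketch): slice the top-left 2x2 window of output0 and multiply it with the first filter of layer1.
# out_channel-0 kernel of layer1 times the top-left 2x2 window of output0,
# summed over both input channels; this reproduces output1[0, 0, 0, 0]
manual = (layer1.weight.data[0] * output0.data[0, :, :2, :2]).sum()
print(manual)                      # tensor(-8.)
print(output1.data[0, 0, 0, 0])    # tensor(-8.)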
