Let the tensor shown below be the representation of two sentences (batch_size = 2), each composed of 3 words (max_length = 3), with each word represented by a vector of dimension 5 (hidden_size = 5), obtained as output from a neural network:
net_output
# tensor([[[0.7718, 0.3856, 0.2545, 0.7502, 0.5844],
# [0.4400, 0.3753, 0.4840, 0.2483, 0.4751],
# [0.4927, 0.7380, 0.1502, 0.5222, 0.0093]],
# [[0.5859, 0.0010, 0.2261, 0.6318, 0.5636],
# [0.0996, 0.2178, 0.9003, 0.4708, 0.7501],
# [0.4244, 0.7947, 0.5711, 0.0720, 0.1106]]])
Also consider the following attention scores:
att_scores
# tensor([[0.2425, 0.5279, 0.2295],
# [0.2461, 0.4789, 0.2751]])
Which efficient approach allows aggregating the vectors in net_output, weighted by att_scores, into a tensor of shape (2, 5)?
This should work:
weighted = (net_output * att_scores[..., None]).sum(dim=1)
This broadcasts the attention weights over the hidden dimension, multiplies each word vector by its weight element-wise, and then sums over the word dimension, producing a tensor of shape (2, 5).
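Equivalently, the same weighted aggregation can be written with torch.einsum or a batched matrix multiplication; a minimal sketch with the shapes from the example:
import torch

net_output = torch.rand(2, 3, 5)  # (batch_size, max_length, hidden_size)
att_scores = torch.rand(2, 3)     # (batch_size, max_length)

# weight each word vector by its attention score and sum over the word dimension
weighted_einsum = torch.einsum('blh,bl->bh', net_output, att_scores)

# same result via batched matrix multiplication: (2, 1, 3) @ (2, 3, 5) -> (2, 1, 5) -> (2, 5)
weighted_bmm = torch.bmm(att_scores.unsqueeze(1), net_output).squeeze(1)

print(weighted_einsum.shape)  # torch.Size([2, 5])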
Related
I have a use-case where I have to compute the FFT of a given tensor. Here, the real FFT is applied to each of the 10 rows (along the last dimension), which gives a tensor of shape (10, 11) post FFT.
# Random data-
x = torch.rand((10, 20))
# Compute RFFT of 'x'-
x_fft = torch.fft.rfft(x)
# Sanity check-
x.shape, x_fft.shape
# (torch.Size([10, 20]), torch.Size([10, 11]))
# FFT for the first 2 rows are-
x_fft[:2, :]
'''
tensor([[12.2561+0.0000j, 0.7551-1.2075j, 1.1119-0.0458j, -0.2814-1.5266j,
1.4083-0.7302j, 0.6648+0.3311j, 0.3969+0.0632j, -0.8031-0.1904j,
-0.4206+0.9066j, -0.2149+0.9160j, 0.4800+0.0000j],
[ 9.8967+0.0000j, -0.5100-0.2377j, -0.6344+2.2406j, 0.4584-1.0705j,
0.2235+0.4788j, -0.3923+0.8205j, -1.0372-0.0292j, -1.6368+0.5517j,
1.5093+0.0419j, 0.5755-1.2133j, 2.9269+0.0000j]])
'''
# The goal is to have, for each row, a 1-D vector of size 22 (the 11 complex values
# with their real and imaginary parts interleaved).
# So, for the first row, the desired 1-D vector (size = 22) is:
'''
[12.2561, 0.0000, 0.7551, -1.2075, 1.1119, -0.0458, -0.2814, -1.5266,
 1.4083, -0.7302, 0.6648, 0.3311, 0.3969, 0.0632, -0.8031, -0.1904,
 -0.4206, 0.9066, -0.2149, 0.9160, 0.4800, 0.0000]
'''
Here, you are taking the real and imaginary components and placing them adjacent to each other.
Adjacent means:
[a_1_real, a_1_imag, a_2_real, a_2_imag, a_3_real, a_3_imag, ....., a_n_real, a_n_imag]
Since for each row, you get 11 FFT complex numbers, a_n = a_11.
How to go about it?
Your question comes down to: how to interleave two tensors. Given two tensors x and y of the same shape (here, the real and imaginary parts of the FFT output), you can do so with a combination of stack, transpose, and reshape.
>>> torch.stack((x,y),1).transpose(1,2).reshape(2,-1)
tensor([[ 1.1547e+01, 0.0000e+00, 1.3786e+00, -8.1970e-01, -3.2118e-02,
-2.3900e-02, -3.2898e-01, -3.4610e-01, -1.7916e-01, 1.2308e+00,
-5.4203e-01, 1.2580e-01, 8.5273e-01, 8.9980e-01, -2.7096e+00,
-3.8060e-01, 3.0016e-01, -4.5240e-01, -7.7809e-02, 4.5630e-01,
-4.5805e-03, 0.0000e+00],
[ 1.1106e+01, 0.0000e+00, 1.3362e-01, 1.3830e-01, -7.4233e-01,
7.7570e-01, -9.9461e-01, 1.0834e+00, 1.6952e+00, 5.2920e-01,
-1.1884e+00, -2.5970e-01, -8.7958e-01, 4.3180e-01, -9.3039e-01,
8.8130e-01, -1.0048e+00, 1.2823e+00, 2.0595e-01, -6.5170e-01,
1.7209e+00, 0.0000e+00]])
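For this particular FFT use-case, the interleaved layout can also be obtained directly from the complex tensor with torch.view_as_real, which appends a trailing dimension holding the real and imaginary parts; a minimal sketch:
import torch

x = torch.rand((10, 20))
x_fft = torch.fft.rfft(x)  # complex tensor of shape (10, 11)

# view_as_real gives shape (10, 11, 2): [..., 0] is the real part, [..., 1] the imaginary part
interleaved = torch.view_as_real(x_fft).reshape(x_fft.size(0), -1)

print(interleaved.shape)  # torch.Size([10, 22])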
I have two tensors of the same shape, and I want to calculate the pairwise Sinkhorn distance between them using GeomLoss.
What I have tried:
import torch
import geomloss # pip install git+https://github.com/jeanfeydy/geomloss
a = torch.rand((8,4))
b = torch.rand((8,4))
geomloss.SamplesLoss('sinkhorn')(a,b)
# ^ input shape [batch, feature_dim]
# will return a scalar value
geomloss.SamplesLoss('sinkhorn')(a.unsqueeze(1),b.unsqueeze(1))
# ^ input shape [batch, n_points, feature_dim]
# will return a tensor of size [batch] of distances between a[i] and b[i] for each i
However, I would like to compute the pairwise distances, where the resulting tensor has shape [batch, batch]. To achieve this, I tried the following to use broadcasting:
geomloss.SamplesLoss('sinkhorn')(a.unsqueeze(0), b.unsqueeze(1))
But I got this error message:
ValueError: Samples x and y should have the same batchsize.
Since the documentation doesn't give examples of how to use the distance's forward function, here's a way to do it, which requires calling the distance function batch times.
We will construct the distance matrix row by row. Row i contains the distances a[i]<->b[0], a[i]<->b[1], ..., a[i]<->b[batch-1]. To do so we need, for each row i, an (8, 4) tensor consisting of a[i] repeated 8 times.
This will do:
a_i = torch.stack(8*[a[i]], dim=0)
Then we calculate the distances between a[i] and each sample in b:
dist(a_i.unsqueeze(1), b.unsqueeze(1))
Repeating this for all batch rows, we can stack the results into the final distance matrix.
Here's the complete code:
batch = a.shape[0]
dist = geomloss.SamplesLoss('sinkhorn')
distances = [dist(torch.stack(batch*[a[i]]).unsqueeze(1), b.unsqueeze(1)) for i in range(batch)]
D = torch.stack(distances)
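Putting it all together, a self-contained sketch (repeat(batch, 1) is equivalent to the torch.stack construction above) along with the resulting shape:
import torch
import geomloss

a = torch.rand((8, 4))
b = torch.rand((8, 4))
batch = a.shape[0]
dist = geomloss.SamplesLoss('sinkhorn')

# row i holds the distances between a[i] and every b[j]
D = torch.stack([
    dist(a[i].repeat(batch, 1).unsqueeze(1), b.unsqueeze(1))
    for i in range(batch)
])
print(D.shape)  # torch.Size([8, 8]); D[i, j] is the Sinkhorn distance between a[i] and b[j]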
I have time-series sequences whose lengths I needed to fix to a common value by padding with zeros; using keras.layers.Masking in Keras I could have those padded zeros ignored in further computations. I am wondering how this can be done in PyTorch.
Either I need to do the padding in PyTorch because PyTorch cannot handle sequences of varying lengths, in which case what is the equivalent of Keras's Masking layer in PyTorch, or, if PyTorch does handle sequences of varying lengths, how is that done?
You can use the PackedSequence class as the equivalent of Keras masking; you can find more utilities in torch.nn.utils.rnn.
Here is an example of packing variable-length sequence inputs for an RNN:
import torch
import torch.nn as nn

batch_size = 3
max_length = 3
hidden_size = 2
n_layers = 1
feature_dim = 1

# padded container, batch_first layout: (batch, seq, feature)
batch_in = torch.zeros((batch_size, max_length, feature_dim))

# data: three sequences of actual lengths 3, 2 and 1, padded with zeros
batch_in[0] = torch.FloatTensor([[1], [2], [3]])
batch_in[1] = torch.FloatTensor([[1], [2], [0]])
batch_in[2] = torch.FloatTensor([[1], [0], [0]])

seq_lengths = [3, 2, 1]  # unpadded length of each sequence, sorted in decreasing order

# pack it: the padded timesteps are dropped
pack = nn.utils.rnn.pack_padded_sequence(batch_in, seq_lengths, batch_first=True)
>>> pack.data.squeeze(), pack.batch_sizes
(tensor([1., 1., 1., 2., 2., 3.]), tensor([3, 2, 1]))

# initialize
rnn = nn.RNN(feature_dim, hidden_size, n_layers, batch_first=True)
h0 = torch.randn(n_layers, batch_size, hidden_size)

# forward: the RNN consumes the packed sequence directly, skipping the padding
out, _ = rnn(pack, h0)

# unpack back to a padded tensor of shape (batch, max_length, hidden_size)
unpacked, unpacked_len = nn.utils.rnn.pad_packed_sequence(out, batch_first=True)
>>> unpacked.shape, unpacked_len
(torch.Size([3, 3, 2]), tensor([3, 2, 1]))
You may also find this article useful (jump to the section "How the PackedSequence object works"): link
You can use a packed sequence to mask timesteps in the sequence dimension:
batch_mask = ...  # boolean mask, e.g. of shape (seq, batch)
gru = nn.GRU(input_size, hidden_size)  # recurrent module, constructed once

# move the padding to the end of each sequence so it is cut off when packing
compact_seq = torch.zeros_like(x)
for i, seq_len in enumerate(batch_mask.sum(0)):
    compact_seq[:seq_len, i] = x[batch_mask[:, i], i]

# pack along the sequence dimension using the per-sample lengths
packed_x = pack_padded_sequence(compact_seq, batch_mask.sum(0).cpu().numpy(), enforce_sorted=False)
packed_scores, rnn_hxs = gru(packed_x, rnn_hxs)

# restore the sequence dimension
scores, _ = pad_packed_sequence(packed_scores)

# restore the original order, moving the padding back into place
scores = torch.zeros((*batch_mask.shape, scores.size(-1))).to(scores.device).masked_scatter(batch_mask.unsqueeze(-1), scores)
To mask in the batch dimension instead, use a masked select/scatter:
batch_mask = torch.any(x, -1).unsqueeze(-1)  # boolean mask of shape (batch, 1)
gru_cell = nn.GRUCell(input_size, hidden_size)  # recurrent cell, constructed once

# keep only the batch elements that are currently active
batch_x = torch.masked_select(x, batch_mask).reshape(-1, x.size(-1))
batch_rnn_hxs = torch.masked_select(rnn_hxs, batch_mask).reshape(-1, rnn_hxs.size(-1))
batch_rnn_hxs = gru_cell(batch_x, batch_rnn_hxs)

# restore the full batch
rnn_hxs = rnn_hxs.masked_scatter(batch_mask, batch_rnn_hxs)
Note that using masked_scatter is safe for gradient backpropagation.
I'm building a convolutional net in Keras that assigns multiple classes to an image. Given that the image has 9 points of interest, each of which can be classified in one of three ways, I wanted to add 27 output neurons with a softmax activation that computes a probability distribution over each consecutive triple of neurons.
Is it possible to do that? I know I can simply add a big softmax layer but this would result in a probability distribution over all output neurons which is too broad for my application.
In the most naive implementation, you can reshape your data and you'll get exactly what you described: "probability for each consecutive triplet".
You take the output with 27 classes, shaped like (batch_size,27) and reshape it:
model.add(Reshape((9,3)))
model.add(Activation('softmax'))
Take care to reshape your y_true data as well. Or add yet another reshape in the model to restore the original form:
model.add(Reshape((27,)))
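For illustration, a minimal sketch of how matching y_true data could be prepared (the label array and batch size here are hypothetical):
import numpy as np

# hypothetical labels: for each of 32 images, 9 points of interest, each one of 3 classes
point_classes = np.random.randint(0, 3, size=(32, 9))

# one-hot per point -> shape (32, 9, 3), matching a model that ends with Reshape((9, 3)) + softmax
y_true = np.eye(3)[point_classes]

# or flatten to (32, 27) if the model restores the flat form with Reshape((27,))
y_true_flat = y_true.reshape(32, 27)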
In more elaborate solutions, you'd probably separate the 9 points of interest according to their locations (if they have roughly fixed locations) and make parallel paths. For instance, suppose your 9 locations are evenly spaced rectangles and you want to use the same net and classes for all of those segments:
from keras.layers import Input, Conv2D, Flatten, Dense, Lambda, Concatenate
from keras.models import Model

inputImage = Input((height, width, channels))

# supposing the width and height are multiples of 3, for simplicity in this example
recHeight = height // 3
recWidth = width // 3

# create layers here without calling them
someConv1 = Conv2D(...)
someConv2 = Conv2D(...)
flatten = Flatten()
classificator = Dense(..., activation='softmax')

outputs = []
for i in range(3):
    for j in range(3):
        fromH = i * recHeight
        toH = fromH + recHeight
        fromW = j * recWidth
        toW = fromW + recWidth

        # bind the loop variables as defaults so each Lambda keeps its own crop window
        imagePart = Lambda(
            lambda x, fromH=fromH, toH=toH, fromW=fromW, toW=toW: x[:, fromH:toH, fromW:toW, :],
            output_shape=(recHeight, recWidth, channels)
        )(inputImage)

        # using the same net and classes for all segments
        # if this is not true, create new layers here instead of reusing the same ones
        output = someConv1(imagePart)
        output = someConv2(output)
        output = flatten(output)
        output = classificator(output)
        outputs.append(output)

outputs = Concatenate()(outputs)
model = Model(inputImage, outputs)
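If the final output is kept flat, a (batch, 27) vector made up of 9 softmax groups, the loss also has to treat each consecutive triple as its own distribution. A possible sketch using the Keras backend (grouped_crossentropy is a name introduced here, not part of Keras):
from keras import backend as K

def grouped_crossentropy(y_true, y_pred):
    # view every consecutive triple as its own categorical distribution
    y_true = K.reshape(y_true, (-1, 9, 3))
    y_pred = K.reshape(y_pred, (-1, 9, 3))
    # cross-entropy per point of interest, averaged over the 9 points
    return K.mean(K.categorical_crossentropy(y_true, y_pred), axis=-1)

model.compile(optimizer='adam', loss=grouped_crossentropy)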
Given a dataset of n samples and m features, and using sklearn.neural_network.MLPClassifier, how can I set hidden_layer_sizes to start with m inputs? For instance, I understand that hidden_layer_sizes = (10, 10) means there are 2 hidden layers of 10 neurons (i.e., units) each, but I don't know whether this also implies 10 inputs.
Thank you
This classifier/regressor, as implemented, does this automatically when fit is called.
This can be seen in its code here.
Excerpt:
n_samples, n_features = X.shape

# Ensure y is 2D
if y.ndim == 1:
    y = y.reshape((-1, 1))

self.n_outputs_ = y.shape[1]

layer_units = ([n_features] + hidden_layer_sizes +
               [self.n_outputs_])
As you can see, the hidden_layer_sizes you provide is wrapped between layer dimensions derived from your data inside .fit(): n_features for the input layer and n_outputs_ for the output layer. This is why the docstring describes the tuple length as n_layers - 2:
Parameters
hidden_layer_sizes : tuple, length = n_layers - 2, default (100,)
The ith element represents the number of neurons in the ith hidden layer.
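So hidden_layer_sizes only describes the hidden layers; the m-unit input layer and the output layer are added automatically. A quick illustration (the synthetic dataset below is just for the shape check):
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# n = 200 samples, m = 20 features, 3 classes
X, y = make_classification(n_samples=200, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(10, 10), max_iter=1000, random_state=0)
clf.fit(X, y)

# weight matrices: input (20 features) -> 10 -> 10 -> output (3 classes)
print([w.shape for w in clf.coefs_])  # [(20, 10), (10, 10), (10, 3)]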