How to make this 1 cell LSTM network? - keras

I would like to build an LSTM network that learns to output the first value of the sequence whenever there is a 0 in the sequence, and 0 whenever there is any other value.
Example:
x = 9 8 3 1 0 3 4
y = 0 0 0 0 9 0 0
The network memorizes a value and gives it back when it receives a special signal.
I think I can do that with one LSTM cell, as in the figure: weights in red, biases inside the gray area.
Here is my model:
from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense
import numpy as np

model2 = Sequential()
model2.add(LSTM(input_dim=1, output_dim=1, return_sequences=True, name='lstm'))
model2.add(TimeDistributed(Dense(output_dim=1, activation='linear')))
model2.compile(loss='mse', optimizer='rmsprop')
and here is how I set the weights of my cell; however, I am not at all sure about the order:
# w : weights of x_t
# u : weights of h_{t-1}
# order of array: input_gate, new_input, forget_gate, output_gate
# (Tensorflow order)
w = np.array([[0, 1, 0, -100]], dtype=np.float32)
u = np.array([[1, 0, 0, 0]], dtype=np.float32)
biases = np.array([0, 0, 1, 1], dtype=np.float32)
model2.get_layer('lstm').set_weights([w, u, biases])
Am I right with the weights? Is it as I put it in the figure?
For this to work, the cell needs the right initial values. How do I set the initial value c of the cell state and h of the previous output? I saw this in the source code:
h_tm1 = states[0] # previous memory state
c_tm1 = states[1] # previous carry state
but I couldn't find how to use that.

Why not do this manually? It's an exact calculation that is easy to compute directly. You don't need weights for it, and the task is certainly not differentiable with respect to weights anyway.
Given an input tensor with shape (batch, steps, features):
def processSequence(x):
    initial = x[:, 0:1]
    zeros = K.cast(K.equal(x, 0), K.floatx())
    return initial * zeros
model.add(Lambda(processSequence))
Warning: if you're intending to use this with inputs from other layers, the probability of finding a zero will be so small that this layer will be useless.
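For reference, here is a minimal end-to-end sketch of that Lambda, assuming Keras with the TensorFlow backend and the 7-step sequence from the question (the input shape is illustrative):

import numpy as np
from keras.models import Sequential
from keras.layers import Lambda
from keras import backend as K

def processSequence(x):
    initial = x[:, 0:1]
    zeros = K.cast(K.equal(x, 0), K.floatx())
    return initial * zeros

model = Sequential()
model.add(Lambda(processSequence, input_shape=(7, 1)))

x = np.array([9, 8, 3, 1, 0, 3, 4], dtype=np.float32).reshape(1, 7, 1)
print(model.predict(x).reshape(-1))  # [0. 0. 0. 0. 9. 0. 0.]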

Expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]

I read this question but it doesn't seem to answer my question :(
So basically I'm trying to vectorize the game Snake so it can run faster.
Here is my code so far:
import torch
import torch.nn.functional as F

device = torch.device("cpu")

class SnakeBoard:
    def __init__(self, board=None):
        if board is not None:
            self.channels = board
        else:
            # 0 - Food, 1 - Head, 2 - Body
            self.channels = torch.zeros(1, 3, 15, 17, device=device)
            # Initialize game channels
            self.channels[:, 0, 7, 12] = 1
            self.channels[:, 1, 7, 5] = 1
            self.channels[:, 2, 7, 2:6] = torch.arange(1, 5)
        self.move()

    def move(self):
        self.channels[:, 2] -= 1
        F.relu(self.channels[:, 2], inplace=True)
        # Up movement test
        F.conv2d(self.channels[:, 1], torch.tensor([[[0, 1, 0], [0, 0, 0], [0, 0, 0]]]), padding=1)

SnakeBoard()
The first dimension in channels represents the batch size, the second dimension represents the 3 channels of the snake game (food, head, and body), and the third and fourth dimensions represent the height and width of the board.
Unfortunately, when running the code I get the error: Expected stride to be a single integer value or a list of 1 values to match the convolution dimensions, but got stride=[1, 1]
How can I fix that?
The dimensions of the inputs for the convolution are not correct for a 2D convolution. Let's have a look at the dimensions you're passing to F.conv2d:
self.channels[:, 1].size()
# => torch.Size([1, 15, 17])
torch.tensor([[[0,1,0],[0,0,0],[0,0,0]]]).size()
# => torch.Size([1, 3, 3])
The correct sizes should be:
input: (batch_size, in_channels, height, width)
weight: (out_channels, in_channels, kernel_height, kernel_width)
Because your weight has only 3 dimensions, it is considered to be a 1D convolution, but since you called F.conv2d, the stride and padding are 2-tuples, so it won't work.
For the input, you indexed the second dimension, which selects a single element along that dimension and eliminates it. To keep the dimension, you can index it with a slice of just one element.
And for the weight, you are missing one dimension as well, which can simply be added directly. Also, your weight is of type torch.long, since you are only using integers in the tensor creation, but the weight needs to be of type torch.float.
F.conv2d(self.channels[:, 1:2], torch.tensor([[[[0,1,0],[0,0,0],[0,0,0]]]], dtype=torch.float), padding=1)
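With those shapes, the call goes through. A quick check (a sketch reusing channels from the question's class) confirms the output size matches the board:

out = F.conv2d(self.channels[:, 1:2],
               torch.tensor([[[[0, 1, 0], [0, 0, 0], [0, 0, 0]]]], dtype=torch.float),
               padding=1)
print(out.size())  # torch.Size([1, 1, 15, 17])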
On a different note, I don't think that convolutions are appropriate for this use case, because you're not using a key property of the convolution, which is to capture the surroundings. That is far too many unnecessary computations to achieve what you want; most of them are multiplications by 0.
For example, a move up is much easier to achieve by removing the first row and adding a new row of zeros at the end, so everything is shifted up (assuming that the first row is the top and the last row is the bottom of the board).
head = self.channels[:, 1:2]
batch_size, channels, height, width = head.size()
# Take everything but the first row of the head
# Add a row of zeros to the end by concatenating them across the height (dimension 2)
new_head = torch.cat([head[:, :, 1:], torch.zeros(batch_size, channels, 1, width)], dim=2)
# Or if you want to wrap it around the board, it's even simpler.
# Move the first row to the end
wrap_around_head = torch.cat([head[:, :, 1:], head[:, :, 0:1]], dim=2)
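A quick sanity check of the shift on a small hypothetical board (a 1x1x4x3 head channel) shows the behaviour:

import torch

head = torch.zeros(1, 1, 4, 3)
head[:, :, 2, 1] = 1  # head at row 2, column 1
batch_size, channels, height, width = head.size()
new_head = torch.cat([head[:, :, 1:], torch.zeros(batch_size, channels, 1, width)], dim=2)
print(new_head[0, 0])
# tensor([[0., 0., 0.],
#         [0., 1., 0.],
#         [0., 0., 0.],
#         [0., 0., 0.]])
# the head moved from row 2 to row 1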

What would be the equivalent of keras.layers.Masking in pytorch?

I have time-series sequences whose lengths I needed to fix by padding zeroes into a matrix. With keras.layers.Masking in Keras I could ignore those padded zeros in further computations. I am wondering how this can be done in PyTorch?
Either PyTorch can't handle sequences of varying lengths, in which case I need to do the padding myself and I'd like to know the equivalent of Keras's Masking layer, or PyTorch does handle sequences of varying lengths, in which case: how is it done?
You can use the PackedSequence class as the equivalent of Keras masking. You can find more utilities in torch.nn.utils.rnn.
Here is an example of packing variable-length sequence inputs for an RNN:
import torch
import torch.nn as nn
from torch.autograd import Variable
batch_size = 3
max_length = 3
hidden_size = 2
n_layers =1
# container
batch_in = torch.zeros((batch_size, 1, max_length))
#data
vec_1 = torch.FloatTensor([[1, 2, 3]])
vec_2 = torch.FloatTensor([[1, 2, 0]])
vec_3 = torch.FloatTensor([[1, 0, 0]])
batch_in[0] = vec_1
batch_in[1] = vec_2
batch_in[2] = vec_3
batch_in = Variable(batch_in)
seq_lengths = [3, 2, 1] # length of each sequence in the batch (sorted in decreasing order)
# pack it
pack = torch.nn.utils.rnn.pack_padded_sequence(batch_in, seq_lengths, batch_first=True)
>>> pack
PackedSequence(data=Variable containing:
1 2 3
1 2 0
1 0 0
[torch.FloatTensor of size 3x3]
, batch_sizes=[3])
# initialize
rnn = nn.RNN(max_length, hidden_size, n_layers, batch_first=True)
h0 = Variable(torch.randn(n_layers, batch_size, hidden_size))
#forward
out, _ = rnn(pack, h0)
# unpack
unpacked, unpacked_len = torch.nn.utils.rnn.pad_packed_sequence(out)
>>> unpacked
Variable containing:
(0 ,.,.) =
-0.7883 -0.7972
0.3367 -0.6102
0.1502 -0.4654
[torch.FloatTensor of size 1x3x2]
You may also find this article useful. [Jump to the section "How the PackedSequence object works"] - link
You can use a packed sequence to mask a timestep in the sequence dimension:
batch_mask = ...  # boolean mask, e.g. (seq x batch)
# move the padding to the right place so it will be cut off when packing
compact_seq = torch.zeros_like(x)
for i, seq_len in enumerate(batch_mask.sum(0)):
    compact_seq[:seq_len, i] = x[batch_mask[:, i], i]
# pack in the sequence dimension (the number of agents)
packed_x = pack_padded_sequence(compact_seq, batch_mask.sum(0).cpu().numpy(), enforce_sorted=False)
packed_scores, rnn_hxs = gru(packed_x, rnn_hxs)  # gru is an nn.GRU module
# restore the sequence dimension
scores, _ = pad_packed_sequence(packed_scores)
# restore the order, moving the padding back into place
scores = torch.zeros((*batch_mask.shape, scores.size(-1))).to(scores.device).masked_scatter(batch_mask.unsqueeze(-1), scores)
Alternatively, use masked select/scatter to mask in the batch dimension:
batch_mask = torch.any(x, -1).unsqueeze(-1)  # boolean mask (batch, 1)
batch_x = torch.masked_select(x, batch_mask).reshape(-1, x.size(-1))
batch_rnn_hxs = torch.masked_select(rnn_hxs, batch_mask).reshape(-1, rnn_hxs.size(-1))
batch_rnn_hxs = gru_cell(batch_x, batch_rnn_hxs)  # gru_cell is an nn.GRUCell module
rnn_hxs = rnn_hxs.masked_scatter(batch_mask, batch_rnn_hxs)  # restore the batch
Note that using the scatter function is safe for gradient backpropagation.
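As a minimal, self-contained sketch of the batch-dimension variant (the shapes and the GRU cell here are hypothetical), you can verify that gradients flow back through the scatter:

import torch
import torch.nn as nn

batch, hidden = 4, 3
x = torch.randn(batch, hidden)
rnn_hxs = torch.randn(batch, hidden, requires_grad=True)
gru_cell = nn.GRUCell(hidden, hidden)
batch_mask = torch.tensor([[True], [False], [True], [True]])  # (batch, 1)

# select the active rows, update them with the cell, scatter them back
batch_x = torch.masked_select(x, batch_mask).reshape(-1, hidden)
batch_hxs = torch.masked_select(rnn_hxs, batch_mask).reshape(-1, hidden)
new_hxs = rnn_hxs.masked_scatter(batch_mask, gru_cell(batch_x, batch_hxs))

new_hxs.sum().backward()
print(rnn_hxs.grad)  # gradients reach both the updated and the untouched rows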

Simple vanilla RNN doesn't pass gradient check

I recently tried to implement a vanilla RNN from scratch. I implemented everything and even ran a seemingly OK example! Yet I noticed the gradient check is not successful: only some parts (specifically the weight and bias for the output) pass the gradient check, while the other weights (Whh, Whx) don't.
I followed Karpathy/Coursera's implementation and made sure everything was implemented. Yet Karpathy/Coursera's code passes the gradient check and mine doesn't. I have no clue at this point what is causing this!
Here are the snippets responsible for the backward pass in the original code:
def rnn_step_backward(dy, gradients, parameters, x, a, a_prev):
    gradients['dWya'] += np.dot(dy, a.T)
    gradients['dby'] += dy
    da = np.dot(parameters['Wya'].T, dy) + gradients['da_next']  # backprop into h
    daraw = (1 - a * a) * da  # backprop through tanh nonlinearity
    gradients['db'] += daraw
    gradients['dWax'] += np.dot(daraw, x.T)
    gradients['dWaa'] += np.dot(daraw, a_prev.T)
    gradients['da_next'] = np.dot(parameters['Waa'].T, daraw)
    return gradients

def rnn_backward(X, Y, parameters, cache):
    # Initialize gradients as an empty dictionary
    gradients = {}
    # Retrieve from cache and parameters
    (y_hat, a, x) = cache
    Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
    # each one should be initialized to zeros of the same dimension as its corresponding parameter
    gradients['dWax'], gradients['dWaa'], gradients['dWya'] = np.zeros_like(Wax), np.zeros_like(Waa), np.zeros_like(Wya)
    gradients['db'], gradients['dby'] = np.zeros_like(b), np.zeros_like(by)
    gradients['da_next'] = np.zeros_like(a[0])
    ### START CODE HERE ###
    # Backpropagate through time
    for t in reversed(range(len(X))):
        dy = np.copy(y_hat[t])
        # this means: subtract the correct answer from the predicted value
        # (1 - the predicted value, where the correct index is specified by Y[t])
        dy[Y[t]] -= 1
        gradients = rnn_step_backward(dy, gradients, parameters, x[t], a[t], a[t-1])
    ### END CODE HERE ###
    return gradients, a
and this is my implementation:
def rnn_cell_backward(self, xt, h, h_prev, output, true_label, dh_next):
    """
    Runs a single backward pass once.
    Inputs:
    - xt: The input data of shape (Batch_size, input_dim_size)
    - h: The next hidden state at timestep t (which comes from the forward pass)
    - h_prev: The previous hidden state at timestep t-1
    - output: The output at the current timestep
    - true_label: The label for the current timestep, used for calculating loss
    - dh_next: The gradient of the hidden state h (dh), which in the beginning
      is zero and is updated as we go backward in the backpropagation.
      The dh for the next round comes from 'dh_prev', as we will see shortly!
      Just remember that the backward pass is essentially a loop: we start at the end
      and traverse back to the beginning!
    Returns:
    - dW1: The gradient for W1
    - dW2: The gradient for W2
    - dW3: The gradient for W3
    - dbh: The gradient for bh
    - dbo: The gradient for bo
    - dh_prev: The gradient for the previous hidden state at timestep t-1. This will be
      used as the next dh for the next round of backpropagation.
    - per_ts_loss: The loss for the current timestep.
    """
    e = np.copy(output)
    # correct idx for each row (sample)!
    idxs = np.argmax(true_label, axis=1)
    # number of rows (samples) in our batch
    rows = np.arange(e.shape[0])
    # This is the vectorized version of error_t = output_t - label_t, or simply e = output[t] - 1,
    # where t refers to the index at which the label is 1.
    e[rows, idxs] -= 1
    # This is used for our loss, to see how well we are doing during training.
    per_ts_loss = output[rows, idxs].sum()
    # must have the shape of W3, which is (vocabsize_or_output_dim_size, hidden_state_size)
    dW3 = np.dot(e.T, h)
    # dbo = e.1; since we have a batch we use np.sum
    # e is a vector; when it is subtracted from the label, the result will be added to dbo
    dbo = np.sum(e, axis=0)
    # when calculating dh, we also add the dh from the next timestep;
    # when we are in the last timestep, dh_next is initially zero.
    dh = np.dot(e, self.W3) + dh_next  # from the later cell
    # the input part
    dtanh = (1 - h * h) * dh
    # dbh = dtanh.1; we use sum since we have a batch
    dbh = np.sum(dtanh, axis=0)
    # compute the gradient of the loss with respect to W1
    # this is actually not needed! we only care about tunable
    # parameters, so we are only after W1, W2, W3, dbh and dbo
    # dxt = np.dot(dtanh, W1.T)
    # must have the shape of (vocab_size, hidden_state_size)
    dW1 = np.dot(xt.T, dtanh)
    # compute the gradient with respect to W2
    dh_prev = np.dot(dtanh, self.W2)
    # shape must be (HiddenSize, HiddenSize)
    dW2 = np.dot(h_prev.T, dtanh)
    return dW1, dW2, dW3, dbh, dbo, dh_prev, per_ts_loss
def rnn_layer_backward(self, Xt, labels, H, O):
    """
    Runs a full backward pass on the given data and returns the gradients.
    Inputs:
    - Xt: The input data of shape (Batch_size, timesteps, input_dim_size)
    - labels: The labels for the input data
    - H: The hidden states for the current layer, produced in the forward pass,
      of shape (Batch_size, timesteps, HiddenStateSize)
    - O: The output for the current layer, of shape (Batch_size, timesteps, outputsize)
    Returns:
    - dW1: The gradient for W1
    - dW2: The gradient for W2
    - dW3: The gradient for W3
    - dbh: The gradient for bh
    - dbo: The gradient for bo
    - dh: The gradient for the hidden state at timestep t
    - loss: The current loss
    """
    dW1 = np.zeros_like(self.W1)
    dW2 = np.zeros_like(self.W2)
    dW3 = np.zeros_like(self.W3)
    dbh = np.zeros_like(self.bh)
    dbo = np.zeros_like(self.bo)
    dh_next = np.zeros_like(H[:, 0, :])
    hprev = None
    _, T_x, _ = Xt.shape
    loss = 0
    for t in reversed(range(T_x)):
        # this if-else block can be removed! for hprev, we could simply
        # use H[:, t - 1, :] instead, but I also add this in case it makes
        # a difference! so far I have not seen any difference though!
        if t > 0:
            hprev = H[:, t - 1, :]
        else:
            hprev = np.zeros_like(H[:, 0, :])
        dw_1, dw_2, dw_3, db_h, db_o, dh_prev, e = self.rnn_cell_backward(Xt[:, t, :],
                                                                          H[:, t, :],
                                                                          hprev,
                                                                          O[:, t, :],
                                                                          labels[:, t, :],
                                                                          dh_next)
        dh_next = dh_prev
        dW1 += dw_1
        dW2 += dw_2
        dW3 += dw_3
        dbh += db_h
        dbo += db_o
        # Update the loss by subtracting the cross-entropy term of this timestep from it.
        loss -= np.log(e)
    return dW1, dW2, dW3, dbh, dbo, dh_next, loss
I have commented everything and provided a minimal example to demonstrate this here:
My code (doesn't pass gradient check)
And here is the implementation that I used as my guide. This is from karpathy/Coursera and passes all the gradient checks!: original code
At this point I have no idea why this is not working. I'm a beginner in Python, so that could be why I can't find the issue.
Two months later, I think I found the culprit! I should have changed the following line:
# compute the gradient with respect to W2
dh_prev = np.dot(dtanh, self.W2)
to
# compute the gradient with respect to W2
# note the transpose here!
dh_prev = np.dot(dtanh, self.W2.T)
When I was initially writing the backward pass, I only paid attention to the dimensions, and that made me make this mistake. This is actually an example of the kind of feature-mixing that can happen with mindless/blind reshaping/transposing (or failing to transpose!).
To see what has gone wrong here, let me give an example.
Suppose we have a matrix of people's features, with one row dedicated to each person; our matrix would then look like this:
Features | Age | height(cm) | weight(kg)
---------+-----+------------+-----------
         |  20 |    185     |     75
         |  85 |    155     |     95
         |  40 |    205     |    120
Now if we make this into a NumPy array, we will have the following:
m = np.array([[20, 185, 75],
[85, 155, 95],
[40, 205, 120]])
A simple 3x3 array, right?
Now, the way we interpret our matrix is very important: each row and each column has a specific meaning. Each person is described by a row, and each column is a specific feature.
So, you see, there is a "structure" in the matrix we represent our data with.
In other words, each data item is represented as a row, and each column specifies a single feature. When multiplying by another matrix, these semantics must be respected: for the product to make sense, the rows and columns being combined have to refer to the same things.
Let's have an example to make this clearer.
Suppose we have two matrices:
m1 = np.array([[20, 185, 75],
[85, 155, 95],
[40, 205, 120]])
m2 = np.array([[0.9, 0.8, 0.85],
[0.1, 0.5, 0.4],
[0.6, 0.9, 0.8]])
These two matrices contain data arranged in rows, so multiplying them results in the correct answer. However, altering the order of the data, using a transpose for example, destroys the semantics, and we end up multiplying unrelated data!
In my case, I needed to transpose the second matrix to make the order right for the operation at hand, and that fixed the gradient check!
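To make the point concrete, here is a tiny hypothetical numeric illustration. Because W2 is square (hidden x hidden), both products have a valid shape, so only a gradient check catches the difference:

import numpy as np

dtanh = np.array([[1.0, 2.0]])   # (batch=1, hidden=2)
W2 = np.array([[1.0, 0.0],
               [5.0, 1.0]])      # (hidden, hidden)

print(np.dot(dtanh, W2))    # [[11.  2.]]  <- what I had (wrong)
print(np.dot(dtanh, W2.T))  # [[ 1.  7.]]  <- with the transpose (correct)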

how can I insert a Tensor into another Tensor in pytorch

I have a PyTorch tensor of shape (batch_size, step, vec_size), for example a Tensor(32, 64, 128); let's call it A.
I have another Tensor(batch_size, vec_size), e.g. Tensor(32, 128); let's call it B.
I want to insert B at a certain position along axis 1 of A. The insert positions are given in a Tensor(batch_size), named P.
I understand there is no empty tensor (like an empty list) in PyTorch, so I initialize A as zeros and add B at a certain position along axis 1 of A.
A = Variable(torch.zeros(batch_size, step, vec_size))
What I'm doing is like:
for i in range(batch_size):
    pos = P[i]
    A[i][pos] = A[i][pos] + B[i]
But I get an Error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
Then I tried making a clone of A inside the loop:
for i in range(batch_size):
    A_clone = A.clone()
    pos = P[i]
    A_clone[i][pos] = A_clone[i][pos] + B[i]
This is very slow for autograd, I wonder if there any better solutions? Thank you.
You can use a mask instead of cloning.
See the code below
# setup
batch, step, vec_size = 64, 10, 128
A = torch.rand((batch, step, vec_size))
B = torch.rand((batch, vec_size))
pos = torch.randint(10, (64,)).long()
# computations
# create a mask that is 0 at the positions to be replaced
mask = torch.ones((batch, step)).view(batch, step, 1).float()
mask[torch.arange(batch), pos] = 0
# expand B to the same dimensions as A and compute the result
result = A * mask + B.unsqueeze(dim=1).expand([-1, step, -1]) * (1 - mask)
This way you avoid using for loops and cloning as well.
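For what it's worth, a quick check against a loop version (same setup as above; note the mask replaces rather than adds, which matches the question since A starts as zeros) should confirm the two agree:

expected = A.clone()
for i in range(batch):
    expected[i, pos[i]] = B[i]
print(torch.allclose(result, expected))  # True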

Implementing word dropout in pytorch

I want to add word dropout to my network so that I can have sufficient training examples for training the embedding of the "unk" token. As far as I'm aware, this is standard practice. Let's assume the index of the unk token is 0, and the index for padding is 1 (we can switch them if that's more convenient).
This is a simple CNN network which implements word dropout the way I would have expected it to work:
class Classifier(nn.Module):
    def __init__(self, params):
        super(Classifier, self).__init__()
        self.params = params
        self.word_dropout = nn.Dropout(params["word_dropout"])
        self.pad = torch.nn.ConstantPad1d(max(params["window_sizes"]) - 1, 1)
        self.embedding = nn.Embedding(params["vocab_size"], params["word_dim"], padding_idx=1)
        self.convs = nn.ModuleList([nn.Conv1d(1, params["feature_num"], params["word_dim"] * window_size,
                                              stride=params["word_dim"], bias=False)
                                    for window_size in params["window_sizes"]])
        self.dropout = nn.Dropout(params["dropout"])
        self.fc = nn.Linear(params["feature_num"] * len(params["window_sizes"]), params["num_classes"])

    def forward(self, x, l):
        x = self.word_dropout(x)
        x = self.pad(x)
        embedded_x = self.embedding(x)
        embedded_x = embedded_x.view(-1, 1, x.size()[1] * self.params["word_dim"])  # [batch_size, 1, seq_len * word_dim]
        features = [F.relu(conv(embedded_x)) for conv in self.convs]
        pooled = [F.max_pool1d(feat, feat.size()[2]).view(-1, self.params["feature_num"]) for feat in features]
        pooled = torch.cat(pooled, 1)
        pooled = self.dropout(pooled)
        logit = self.fc(pooled)
        return logit
Don't mind the padding - PyTorch doesn't have an easy way of using non-zero padding in CNNs, much less trainable non-zero padding, so I'm doing it manually. Dropout likewise doesn't allow me to drop to a non-zero value, and I want to keep the padding token separate from the unk token. I'm keeping it in my example because it's the reason for this question's existence.
This doesn't work because dropout wants Float Tensors so that it can scale them properly, while my input is Long Tensors that don't need to be scaled.
Is there an easy way of doing this in pytorch? I essentially want to use LongTensor-friendly dropout (bonus: better if it will let me specify a dropout constant that isn't 0, so that I could use zero padding).
Actually I would do it outside of your model, before converting your input into a LongTensor.
This would look like this:
import random

def add_unk(input_token_id, p):
    # random.random() gives you a value between 0 and 1
    # to avoid switching your padding to 0 we add 'input_token_id > 1'
    if random.random() < p and input_token_id > 1:
        return 0
    else:
        return input_token_id

# then you have your input token_id
# for this example I take just a random number, let's say 127
input_token_id = 127
# let p be your probability for UNK
p = 0.01
your_input_tensor = torch.LongTensor([add_unk(input_token_id, p)])
Edit:
So there are two options that come to mind, both of which are GPU-friendly. In general, both solutions should be much more efficient.
Option one - Doing computation directly in forward():
If you're not using torch.utils and don't have plans to use it later, this is probably the way to go.
Instead of doing the computation beforehand, we just do it in the forward() method of the main PyTorch class. However, I see no (simple) way of doing this in torch 0.3.1, so you would need to upgrade to version 0.4.0:
So imagine x is your input vector:
>>> x = torch.tensor(range(10))
>>> x
tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
probs is a vector of uniform random values, which we can later check against our dropout probability:
>>> probs = torch.empty(10).uniform_(0, 1)
>>> probs
tensor([ 0.9793, 0.1742, 0.0904, 0.8735, 0.4774, 0.2329, 0.0074,
0.5398, 0.4681, 0.5314])
Now we apply the dropout probabilities probs on our input x:
>>> torch.where(probs > 0.2, x, torch.zeros(10, dtype=torch.int64))
tensor([ 0, 0, 0, 3, 4, 5, 0, 7, 8, 9])
Note: To see some effect, I chose a dropout probability of 0.2 here. In reality you probably want it to be smaller.
You can pick any token/id you like for this; here is an example with 42 as the unknown token id:
>>> unk_token = 42
>>> torch.where(probs > 0.2, x, torch.empty(10, dtype=torch.int64).fill_(unk_token))
tensor([ 0, 42, 42, 3, 4, 5, 42, 7, 8, 9])
torch.where comes with PyTorch 0.4.0:
https://pytorch.org/docs/master/torch.html#torch.where
I don't know about the shapes of your network, but your forward() should look something like this then (when using mini-batching you need to flatten the input before applying dropout):
def forward_train(self, x, l):
    # probabilities
    probs = torch.empty(x.size(0)).uniform_(0, 1)
    # applying word dropout
    x = torch.where(probs > 0.02, x, torch.zeros(x.size(0), dtype=torch.int64))
    # continue like before ...
    x = self.pad(x)
    embedded_x = self.embedding(x)
    embedded_x = embedded_x.view(-1, 1, x.size()[1] * self.params["word_dim"])  # [batch_size, 1, seq_len * word_dim]
    features = [F.relu(conv(embedded_x)) for conv in self.convs]
    pooled = [F.max_pool1d(feat, feat.size()[2]).view(-1, self.params["feature_num"]) for feat in features]
    pooled = torch.cat(pooled, 1)
    pooled = self.dropout(pooled)
    logit = self.fc(pooled)
    return logit
Note: I named the function forward_train(), so you should use another forward() without dropout for evaluation/prediction. But you could also use an if condition together with the module's training mode, as sketched below.
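A variant of that idea, using the module's built-in training flag instead of two separate methods (a sketch under the same hypothetical shapes):

def forward(self, x, l):
    if self.training:  # True after model.train(), False after model.eval()
        probs = torch.empty(x.size(0)).uniform_(0, 1)
        x = torch.where(probs > 0.02, x, torch.zeros(x.size(0), dtype=torch.int64))
    # ... continue like before ...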
Option two: using torch.utils.data.Dataset:
If you're using the Dataset provided by torch.utils, it is very easy to do this kind of pre-processing efficiently. Used with a DataLoader, it benefits from multi-process data loading, so the code sample above just has to be executed in the __getitem__ method of your Dataset class.
This could look like this:
def __getitem__(self, index):
    'Generates one sample of data'
    # Select sample
    ID = self.input_tokens[index]
    # Load data and get label
    # using the add_unk function from the code above
    X = torch.LongTensor(add_unk(ID, p=0.01))
    y = self.targets[index]
    return X, y
This is a bit out of context and doesn't look very elegant, but I think you get the idea. According to this blog post by Shervine Amidi at Stanford, it should be no problem to do more complex pre-processing steps in this function:
"Since our code [the Dataset] is designed to be multicore-friendly, note that you can do more complex operations instead (e.g. computations from source files) without worrying that data generation becomes a bottleneck in the training process."
The linked blog post, "A detailed example of how to generate your data in parallel with PyTorch", also provides a good guide for implementing data generation with Dataset and DataLoader.
I guess you'll prefer option one - only two lines and it should be very efficient. :)
Good luck!
