How can I apply CUDA to a custom model in PyTorch?

The inputs are a dictionary of tensors, so during training I move each tensor to the CUDA device to use the GPU. My custom model is shown below, and I also moved the model to CUDA.
class EmbeddingLayer(nn.Module):
    def __init__(self):
        super(EmbeddingLayer, self).__init__()

        # other features
        self.other_features_embedding = []
        for feature_name in OTHER_FEATURES:
            vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
            embedding_dims = int(math.sqrt(len(vocabulary)))
            embedding = nn.Embedding(len(vocabulary)+1, embedding_dims)
            self.other_features_embedding.append(embedding)

        # transformer features
        item_vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY['item']
        self.item_embedding_dims = int(math.sqrt(len(item_vocabulary)))
        self.item_embedding = nn.Embedding(len(item_vocabulary)+1, self.item_embedding_dims)

    def forward(self, inputs):
        # other features
        encoded_other_features = []
        for i, feature_name in enumerate(OTHER_FEATURES):
            embedding = self.other_features_embedding[i](inputs[feature_name])
            encoded_other_features.append(embedding)
        encoded_other_features = torch.cat(encoded_other_features, -1)

        # transformer features
        encoded_sequence_item = self.item_embedding(inputs['sequence_item'])
        encoded_target_item = self.item_embedding(inputs['target_item'])

        positions = inputs['target_timestamp'].repeat(sequence_length-1, 1).transpose(0, 1) - inputs['sequence_timestamp']
        encoded_positions = positions.repeat(1, self.item_embedding_dims).reshape(-1, self.item_embedding_dims, sequence_length-1).transpose(1, 2)

        encoded_sequence_item_with_position = encoded_sequence_item + encoded_positions

        encoded_transformer_features = torch.cat(
            (encoded_sequence_item_with_position,
             encoded_target_item.reshape(-1, 1, self.item_embedding_dims)), 1)

        return encoded_other_features, encoded_transformer_features
class BST(nn.Module):
    def __init__(self, hidden_units, dropout, num_heads):
        super(BST, self).__init__()
        ...
        self.embedding_layer = EmbeddingLayer()
        ...

    def forward(self, inputs):
        other_features, transformer_features = self.embedding_layer(inputs)
        ...
        return self.output(features)

model = BST([256, 128], 0.3, 1)
model.to(device)
def train(model, optimizer, dataloader):
    model.train()
    for inputs in tqdm(dataloader, total=len(dataloader)):
        # move every input tensor in the dict to the same device as the model
        for k, v in inputs.items():
            inputs[k] = v.to(device)

        model.zero_grad()
        pred = model(inputs)
        ...
But the following error occurs:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select)
I think the error occurs at the embedding lookup in EmbeddingLayer. How can I fix this error so that I can use the GPU during training?

Your plain Python list of nn.Module objects does not register the embedding layers as submodules of your layer, so model.to(device) never moves them to the GPU. To properly register a list of modules you should use nn.ModuleList. Therefore, replace the loop in your __init__ with the following:
embeddings = []
for feature_name in OTHER_FEATURES:
    vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
    embedding_dims = int(math.sqrt(len(vocabulary)))
    embedding = nn.Embedding(len(vocabulary)+1, embedding_dims)
    embeddings.append(embedding)

# nn.ModuleList registers each embedding as a submodule, so it follows model.to(device)
self.other_features_embedding = nn.ModuleList(embeddings)
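Alternatively, since nn.ModuleList accepts any iterable of modules, you can keep your original loop unchanged and simply wrap the finished list in one line right after it (a minimal sketch of the same fix):

# equivalent fix: wrap the plain list built by the existing loop
self.other_features_embedding = nn.ModuleList(self.other_features_embedding)

Either way, the embeddings become registered submodules, so model.to(device) and model.parameters() will include them.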

Related

Computing the Hessian of a Simple NN in PyTorch w.r.t. Parameters

I am relatively new to PyTorch and I am trying to compute the Hessian of a very simple feedforward network with respect to its weights. I am trying to get torch.autograd.functional.hessian to work. I have been digging through the forums, and since this is a relatively new function added to PyTorch, I am unable to find much information on it. Here is my simple network architecture, which is from some sample code on Kaggle for MNIST.
class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.l1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.l3 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = self.l1(x)
        x = self.relu(x)
        x = self.l3(x)
        return F.log_softmax(x, dim=1)

net = Network()
optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0.9)
loss_func = nn.CrossEntropyLoss()
and I am running the NN for a bunch of epochs like:
for e in range(epochs):
    for i in range(0, x.shape[0], batch_size):
        x_mini = x[i:i + batch_size]
        y_mini = y[i:i + batch_size]

        x_var = Variable(x_mini)
        y_var = Variable(y_mini)

        optimizer.zero_grad()
        net_out = net(x_var)

        loss = loss_func(net_out, y_var)
        loss.backward()
        optimizer.step()

        if i % 100 == 0:
            loss_log.append(loss.data)
Then, I add all the parameters to a list and make a tensor out of it as below:
param_list = []
for param in net.parameters():
    param_list.append(param.view(-1))
param_list = torch.cat(param_list)
Finally, I am trying to compute the Hessian of the converged network by running:
hessian = torch.autograd.functional.hessian(loss_func, param_list, create_graph=True)
but it gives me this error:
TypeError: forward() missing 1 required positional argument: 'target'
Any help would be appreciated.
Computing the Hessian with respect to the parameters of a model (as opposed to the inputs to the model) isn't really well supported right now. There's some work being done on this at https://github.com/pytorch/pytorch/issues/49171, but for the moment it's very inconvenient.
Your code has a few other problems -- where you're passing loss_func, you should be passing a function that constructs the computation graph. Also, you never specify the input to the network or the target for the loss function.
Here's some code that cheats a little bit to use the existing functional interface to compute the hessian of the model weights, and concatenates everything together to give the same form as what you were trying to do:
# Pick a random input to the network
src = torch.rand(1, 2)
# Say our target for our loss is all ones
dst = torch.ones(1, dtype=torch.long)
keys = list(net.state_dict().keys())
parameters = list(net.parameters())
sizes = [x.view(-1).shape[0] for x in parameters]
ndims = sum(sizes)
def hessian_hack(*params):
    # Temporarily replace each registered parameter with the corresponding
    # entry of `params` so autograd can differentiate through them.
    for i in range(len(keys)):
        path = keys[i].split('.')
        cur = net
        for f in range(0, len(path)-1):
            cur = cur.__getattr__(path[f])
        cur.__delattr__(path[-1])
        cur.__setattr__(path[-1], params[i])
    return loss_func(net(src), dst)
# sub_hessians[i][f] is the hessian of parameter i vs parameter f
sub_hessians = torch.autograd.functional.hessian(
    hessian_hack,
    tuple(parameters),
    create_graph=True)

# We can combine them all into a nice big hessian.
hessian = torch.cat([
    torch.cat([
        sub_hessians[i][f].reshape(sizes[i], sizes[f])
        for f in range(len(sub_hessians[i]))
    ], axis=1)
    for i in range(len(sub_hessians))
], axis=0)
print(hessian)
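As a quick sanity check on the assembled matrix (a short sketch reusing the ndims and hessian defined above), it should be square with one row and one column per scalar parameter and, for a twice-differentiable loss like this one, symmetric up to numerical error:

print(hessian.shape)                                   # torch.Size([ndims, ndims])
print(torch.allclose(hessian, hessian.T, atol=1e-6))   # expect True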

PyTorch nn.Transformer learns to copy target

I'm trying to train a Transformer Seq2Seq model using the nn.Transformer class. I believe I am implementing it wrong, since when I train it, it seems to fit too fast, and during inference it repeats itself often. This seems like a masking issue in the decoder: when I remove the target mask, the training performance is the same, which leads me to believe I am doing the target masking wrong. Here is my model code:
class TransformerModel(nn.Module):
    def __init__(self,
                 vocab_size, input_dim, heads, feedforward_dim, encoder_layers, decoder_layers,
                 sos_token, eos_token, pad_token, max_len=200, dropout=0.5,
                 device=(torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu"))):
        super(TransformerModel, self).__init__()

        self.target_mask = None
        self.embedding = nn.Embedding(vocab_size, input_dim, padding_idx=pad_token)
        self.pos_embedding = nn.Embedding(max_len, input_dim, padding_idx=pad_token)
        self.transformer = nn.Transformer(
            d_model=input_dim, nhead=heads, num_encoder_layers=encoder_layers,
            num_decoder_layers=decoder_layers, dim_feedforward=feedforward_dim,
            dropout=dropout)
        self.out = nn.Sequential(
            nn.Linear(input_dim, feedforward_dim),
            nn.ReLU(),
            nn.Linear(feedforward_dim, vocab_size))

        self.device = device
        self.max_len = max_len
        self.sos_token = sos_token
        self.eos_token = eos_token

    # Initialize all weights to be uniformly distributed between -initrange and initrange
    def init_weights(self):
        initrange = 0.1
        self.encoder.weight.data.uniform_(-initrange, initrange)
        self.decoder.bias.data.zero_()
        self.decoder.weight.data.uniform_(-initrange, initrange)

    # Generate mask covering the top right triangle of a matrix
    def generate_square_subsequent_mask(self, size):
        mask = (torch.triu(torch.ones(size, size)) == 1).transpose(0, 1)
        mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
        return mask

    def forward(self, src, tgt):
        # src: (Max source seq len, batch size, 1)
        # tgt: (Max target seq len, batch size, 1)

        # Embed source and target with normal and positional embeddings
        embedded_src = (self.embedding(src) +
                        self.pos_embedding(
                            torch.arange(0, src.shape[1]).to(self.device).unsqueeze(0).repeat(src.shape[0], 1)))

        # Generate target mask
        target_mask = self.generate_square_subsequent_mask(size=tgt.shape[0]).to(self.device)
        embedded_tgt = (self.embedding(tgt) +
                        self.pos_embedding(
                            torch.arange(0, tgt.shape[1]).to(self.device).unsqueeze(0).repeat(tgt.shape[0], 1)))

        # Feed through model
        outputs = self.transformer(src=embedded_src, tgt=embedded_tgt, tgt_mask=target_mask)
        outputs = F.log_softmax(self.out(outputs), dim=-1)
        return outputs
For those having the same problem, my issue was that I wasn't properly adding the SOS token to the target I was feeding the model, and the EOS token to the target I was using in the loss function.
For reference:
The target fed to the model should be: [SOS] ....
And the target used for the loss should be: .... [EOS]
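In other words, the decoder input and the loss target are two shifted copies of the same sequence. A minimal sketch (the function name and the 1-D LongTensor tokens are illustrative; sos_token / eos_token are the special token ids used by the model above):

import torch

def make_targets(tokens, sos_token, eos_token):
    # target fed to the decoder (teacher forcing): [SOS] t1 t2 ... tn
    decoder_input = torch.cat([torch.tensor([sos_token]), tokens])
    # target used by the loss:                      t1 t2 ... tn [EOS]
    loss_target = torch.cat([tokens, torch.tensor([eos_token])])
    return decoder_input, loss_target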

How to fix 'No gradients provided for any variable' error when using ctc_loss in Tensorflow

I am trying to implement Baidu's Deep Speech 2 model in TensorFlow 2.0.0alpha0. I am having trouble optimizing the TensorFlow ctc_loss using a tf.GradientTape() object to calculate the gradients.
I am currently passing a tensor of shape (batch_size, max_step, feats) to my model and then passing the computed logits to the loss function. I have also tried passing a sparse tensor, but this does not work either.
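For reference, a minimal sketch of the argument shapes tf.nn.ctc_loss expects in TF 2.x; the sizes and names below are illustrative and not taken from the model that follows:

import tensorflow as tf

batch_size, max_time, num_classes = 2, 50, 29                         # illustrative sizes
logits = tf.random.normal([max_time, batch_size, num_classes])        # time-major logits
labels = tf.constant([[1, 2, 3, 0], [4, 5, 0, 0]], dtype=tf.int32)    # dense, zero-padded
label_length = tf.constant([3, 2], dtype=tf.int32)
logit_length = tf.fill([batch_size], max_time)

loss = tf.nn.ctc_loss(labels=labels, logits=logits,
                      label_length=label_length, logit_length=logit_length,
                      logits_time_major=True, blank_index=-1)
print(loss.shape)  # one loss value per batch element: (2,)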
Here is the code for creating my model
import tensorflow as tf

class DeepSpeech2(tf.keras.Model):
    def __init__(self, vocab_size, conv_filters=[11], conv_kernel_sizes=[1280], conv_strides=[2],
                 recur_sizes=[100], rnn_type='gru', bidirect_rnn=False, batch_norm=True,
                 learning_rate=1e-3, name='DeepSpeech2'):
        super(DeepSpeech2, self).__init__()
        self._vocab_size = vocab_size
        self._conv_filters = conv_filters
        self._conv_kernel_sizes = conv_kernel_sizes
        self._conv_strides = conv_strides
        self._recur_sizes = recur_sizes
        self._rnn_type = rnn_type
        self._bidirect_rnn = bidirect_rnn
        self._batch_norm = batch_norm
        self._learning_rate = learning_rate
        self._name = name
        self._conv_batch_norm = None

        with tf.name_scope(self._name):
            self._convolution = [tf.keras.layers.Conv1D(filters=conv_filters[i],
                                     kernel_size=conv_kernel_sizes[i], strides=conv_strides[i],
                                     padding='valid', activation='relu',
                                     name='conv1d_{}'.format(i))
                                 for i in range(len(self._conv_filters))]

            if self._batch_norm:
                self._conv_batch_norm = tf.keras.layers.BatchNormalization(name='bn_conv_1d')

            if self._rnn_type == 'gru':
                rnn_init = tf.keras.layers.GRU
            elif self._rnn_type == 'lstm':
                rnn_init = tf.keras.layers.LSTM
            else:
                raise Exception("Invalid rnn_type: '{}' (must be 'lstm' or 'gru')"
                                .format(self._rnn_type))

            self._rnn = []
            for i, r in enumerate(self._recur_sizes):
                layer = rnn_init(r, activation='relu', return_sequences=True,
                                 name='{}_{}'.format(self._rnn_type, i))
                if self._bidirect_rnn:
                    layer = tf.keras.layers.Bidirectional(layer)
                self._rnn.append(layer)
                if self._batch_norm:
                    self._rnn.append(tf.keras.layers.BatchNormalization())

            self._fc = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(
                self._vocab_size, name='fc', activation='linear'))

            self._optimizer = tf.keras.optimizers.Adam(lr=self._learning_rate)

    def __call__(self, specs):
        with tf.name_scope(self._name):
            feats = specs
            for layer in self._convolution:
                feats = layer(feats)
            if self._conv_batch_norm:
                feats = self._conv_batch_norm(feats)

            rnn_outputs = feats
            for layer in self._rnn:
                rnn_outputs = layer(rnn_outputs)

            outputs = self._fc(rnn_outputs)
            return tf.transpose(outputs, (1, 0, 2))
    @tf.function
    def train_step(self, specs, spec_lengths, labels, label_lengths):
        with tf.GradientTape() as tape:
            logits = self.__call__(specs)
            loss = tf.nn.ctc_loss(labels=labels, logits=logits,
                                  label_length=label_lengths, logit_length=spec_lengths)
            cost = tf.reduce_sum(loss)
            decoded, neg_sum_logits = tf.nn.ctc_greedy_decoder(logits, label_lengths)

        gradients = tape.gradient(cost, self.trainable_variables)
        self._optimizer.apply_gradients(zip(gradients, self.trainable_variables))

        return (decoded[0].indices, decoded[0].values, decoded[0].dense_shape), cost
I am currently getting the following error
ValueError: No gradients provided for any variable: ['DeepSpeech2/conv1d_0/kernel:0', 'DeepSpeech2/conv1d_0/bias:0', 'DeepSpeech2/bn_conv_1d/gamma:0', 'DeepSpeech2/bn_conv_1d/beta:0', 'DeepSpeech2/gru_0/kernel:0', 'DeepSpeech2/gru_0/recurrent_kernel:0', 'DeepSpeech2/gru_0/bias:0', 'DeepSpeech2/batch_normalization_v2/gamma:0', 'DeepSpeech2/batch_normalization_v2/beta:0', 'DeepSpeech2/time_distributed/kernel:0', 'DeepSpeech2/time_distributed/bias:0'].
The error occurs at the line where the gradients are applied by the optimizer. When I print out my gradients variable, it is just a list of None values.
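One way to narrow this down is to check which variables the tape actually watched and which gradient entries come back as None; a hedged diagnostic sketch (ds2 and the batch tensor names below are illustrative, not from the code above):

import tensorflow as tf

# ds2 = DeepSpeech2(vocab_size=...)  # plus one batch: specs, spec_lengths, labels, label_lengths
with tf.GradientTape() as tape:
    logits = ds2(specs)
    cost = tf.reduce_sum(tf.nn.ctc_loss(labels=labels, logits=logits,
                                        label_length=label_lengths,
                                        logit_length=spec_lengths))

print([v.name for v in tape.watched_variables()])   # should list the conv/gru/fc weights
grads = tape.gradient(cost, ds2.trainable_variables)
print([g is None for g in grads])                   # True entries mark a broken path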
From what I understand, this error indicates that there is no path from the variables to the loss in the graph, but I'm not sure why I am getting this. Any help would be greatly appreciated!

Keras Custom Layer Error (Operation IsVariableInitialized has been marked as not fetchable)

I'm trying to create a custom Keras layer on a toy dataset, and am having issues. At a high level, I want to create an "Input Gate" layer, which would have trainable weights to turn each column of input on or off. So I'm starting with just trying to multiply the inputs by a sigmoid'd version of the learned weights. My code is as follows:
### This is my custom layer
class InputGate(Layer):
    def __init__(self, **kwargs):
        super(InputGate, self).__init__(**kwargs)

    def build(self, input_shape):
        self.kernel = self.add_weight(name='input_gate',
                                      shape=input_shape[1:],
                                      initializer='random_uniform',
                                      trainable=True)
        super(InputGate, self).build(input_shape)  # Be sure to call this somewhere!

    def call(self, inputs):
        gate_amount = K.sigmoid(self.kernel)
        return inputs * gate_amount

    def get_config(self):
        config = {}
        base_config = super(InputGate, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

    def compute_output_shape(self, input_shape):
        return input_shape


def create_linear_model(x, y, num_noise_vars=0, reg_strength=0):
    new_x = get_x_with_noise(x, num_noise_vars=num_noise_vars)
    model = Sequential([
        InputGate(input_shape=(1+num_noise_vars,)),
        Dense(1, kernel_regularizer=l2(reg_strength))
    ])
    model.compile(optimizer="rmsprop", loss="mse")
    model.optimizer.lr = 0.001
    return {"model": model, "new_x": new_x}


def get_x_with_noise(x, num_noise_vars):
    noise_vars = []
    for noise_var in range(num_noise_vars):
        noise_vars.append(np.random.random(len(x)))
    noise_vars.append(x)
    x_with_noise = noise_vars
    new_x = np.array(list(zip(*x_with_noise)))
    return new_x


x = np.random.random(500)
y = (x * 3) + 10
num_noise_vars = 5

info = create_linear_model(x, y, num_noise_vars=num_noise_vars)
model = info["model"]
new_x = info["new_x"]
results = model.fit(new_x, y, epochs=num_epochs, verbose=0)
And then I get the following error:
ValueError: Operation 'input_gate_14/IsVariableInitialized' has been marked as not fetchable.
This layer is mostly taken from the docs (https://keras.io/layers/writing-your-own-keras-layers/). I'm using Keras 2.0.9 with the TensorFlow backend on a CPU (MacBook Air).
This layer seems as simple as can be, and googling the error leads me to discussions that don't seem relevant. Anyone have ideas of what's causing this?
Any help is much appreciated! Thanks!

Create a custom regularizer on weights of 2 layers

Here is a small snippet of my code describing the custom regularizer that I want to implement.
# Code adapted from https://github.com/keras-team/keras/issues/5563
class CustomRegularization(Layer):
    def __init__(self, **kwargs):
        super(CustomRegularization, self).__init__(**kwargs)

    def call(self, x, mask=None):
        ld = x[0]
        rd = x[1]
        reg = K.dot(K.transpose(ld), rd)
        reg_norm = K.sqrt(K.sum(K.square(reg)))
        self.add_loss(reg_norm, x)
        return ld

    def compute_output_shape(self, input_shape):
        return (input_shape[0][0], input_shape[0][1])


def model():
    input1 = Input(shape=(224, 224, 3))
    input2 = Input(shape=(224, 224, 3))
    inp1 = Flatten()(input1)
    inp2 = Flatten()(input2)

    layer1 = Dense(1024, activation="sigmoid")
    x1_1 = layer1(inp1)
    x2_1 = layer1(inp2)

    layer2 = Dense(1024, activation="sigmoid")
    x1_2 = layer2(inp1)
    x2_2 = layer2(inp2)

    # get weights of layer1 and layer2
    layer1_wt = layer1.trainable_weights[0]
    layer2_wt = layer2.trainable_weights[0]

    # This is a regularization term on the weights of layer1 and layer2.
    regularization = CustomRegularization()([layer1_wt, layer2_wt])

    model = Model([input1, input2], [x1_2, x2_2, regularization])
    return model


if __name__ == "__main__":
    m = model()
This returns the error AttributeError: 'Variable' object has no attribute '_keras_history', and the model cannot be created.
I know that this error is due to incompatible outputs (since the inputs are Keras Input layers). [For more details refer to @fchollet's comment on issue #7362.]
The main problem here is that layer1.trainable_weights[0] and layer2.trainable_weights[0] are tf.Variable objects (TensorFlow variables) and not Keras tensors. I need to convert them to Keras tensors. How do I do that?
