Update net.parameters() without .data - pytorch

Is there any way to update the net parameters with some other tensors that carry gradients as well?
I want to do something like the following:
grads = torch.autograd.grad(loss, net.parameters(), create_graph=True)
for param, gi in zip(net.parameters(), grads):
    param -= eps * gi
And I want each param to carry the grad_fn of gi.

You can do this by wrapping the whole loop with torch.no_grad():
grads = torch.autograd.grad(loss, net.parameters(), create_graph=True)
with torch.no_grad():
    for param, gi in zip(net.parameters(), grads):
        param -= eps * gi
Alternatively you can use the in-place copy_() on param's data property:
grads = torch.autograd.grad(loss, net.parameters(), create_graph=True)
for param, gi in zip(net.parameters(), grads):
    param.data.copy_(param.data - eps * gi)
As far as I have tested, both methods update the parameters the same way.
I haven't found any way to copy the grad_fn property, though. As a workaround you could copy into gi instead of param; this overwrites the values of grads with what would have been the new parameters of the model:
grads = torch.autograd.grad(loss, net.parameters(), create_graph=True)
for param, gi in zip(net.parameters(), grads):
    gi.data.copy_(param.data - eps * gi)
In case you need to keep grads unmodified, just clone it before entering the loop.
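For example, a minimal sketch of that (the name grads_backup is just illustrative):
grads = torch.autograd.grad(loss, net.parameters(), create_graph=True)
grads_backup = [gi.clone() for gi in grads]  # keeps the original gradient values
for param, gi in zip(net.parameters(), grads):
    gi.data.copy_(param.data - eps * gi)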

Related

Custom activation function dependant on other output nodes in Keras

I would like to predict a multi-dimensional array using Long Short-Term Memory (LSTM) networks while imposing restrictions on the shape of the surface of interest.
I thought to accomplish this by setting some elements of the output (regions of the surface) in a functional relationship to others (simple scaling conditions).
Is it possible to set such custom activation functions for the output, whose arguments are other output nodes, in Keras?
If not, is there any other interface that allows this? Do you have a link to any manual or reference?
The keras-team answered a question on GitHub about how to make a custom activation function.
There is also a question with code for a custom activation function.
These pages may help you!
Additional comment
These pages were not enough for this question, so I add the comment below.
Maybe PyTorch is better for customization than Keras. I tried to write such a network, though it is a very simple one, based on the PyTorch tutorials and "Extending PyTorch with Custom Activation Functions".
I made a custom activation function in which the 1st element (counting from 0) of the output vector is equal to twice the 0th element. A very simple network with one layer was used for training. After training, I checked that the condition was satisfied.
import torch
import matplotlib.pyplot as plt

# Define the custom activation function
# reference: https://towardsdatascience.com/extending-pytorch-with-custom-activation-functions-2d8b065ef2fa
def silu(input):
    # the name is kept from the referenced SiLU tutorial, but this activation
    # actually forces element 1 of each output row to be twice element 0
    input[:, 1] = input[:, 0] * 2
    return input

class SiLU(torch.nn.Module):
    def __init__(self):
        super().__init__()  # init the base class

    def forward(self, input):
        return silu(input)  # apply the custom activation defined above

# Training
# reference: https://pytorch.org/tutorials/beginner/pytorch_with_examples.html
k = 10
x = torch.rand([k, 3])
y = x * 2
model = torch.nn.Sequential(
    torch.nn.Linear(3, 3),
    SiLU()  # custom activation function
)
loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-3
for t in range(2000):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad

# Check the behaviour
yy = model(x)  # predicted
print('ground truth')
print(y)
print('predicted')
print(yy)

# Examples for the first five data points
colorlist = ['#e41a1c', '#377eb8', '#4daf4a', '#984ea3', '#ff7f00']
plt.figure()
for i in range(5):
    plt.plot(y[i, :].detach().numpy(), linestyle="solid", label="ground truth_" + str(i), color=colorlist[i])
    plt.plot(yy[i, :].detach().numpy(), linestyle="dotted", label="predicted_" + str(i), color=colorlist[i])
plt.legend()

# Check if the custom activation works correctly
plt.figure()
plt.plot(yy[:, 0].detach().numpy() * 2, label='0th * 2')
plt.plot(yy[:, 1].detach().numpy(), label='1th')
plt.legend()
print(yy[:, 0] * 2)
print(yy[:, 1])
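For comparison, here is a minimal, untested sketch of how the same constraint might be expressed in Keras with a Lambda layer (the layer sizes and function names are illustrative, not taken from the original question):
from keras.models import Sequential
from keras.layers import Dense, Lambda
from keras import backend as K

def tie_outputs(x):
    # force the 1st output element (counting from 0) to be twice the 0th
    col0 = x[:, 0:1]
    return K.concatenate([col0, col0 * 2, x[:, 2:]], axis=-1)

model = Sequential([
    Dense(3, input_shape=(3,)),
    Lambda(tie_outputs),  # "activation" whose output depends on other output nodes
])
model.compile(loss='mse', optimizer='adam')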

How to compute the parameter importance in pytorch?

I want to develop a lifelong learning system, so I need to prevent important parameters from changing. I read the related paper 'Memory Aware Synapses: Learning what (not) to forget', where such a method is described. I need to calculate the gradient of each parameter corresponding to each input image, so how should I write my code in PyTorch?
You can do it using the standard optimization procedure and the .backward() method on your loss function.
First, scaling as defined in your link:
class Scaler:
    def __init__(self, parameters, delta):
        # materialize the generator so step() can be called more than once
        self.parameters = list(parameters)
        self.delta = delta

    def step(self):
        """Multiplies gradients in place."""
        for param in self.parameters:
            if param.grad is None:
                raise ValueError("backward() has to be called before running scaler")
            param.grad *= self.delta
One can use it just like optimizer.step(); see the example and the comments below:
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(10, 100), torch.nn.ReLU(), torch.nn.Linear(100, 1)
)
scaler = Scaler(model.parameters(), delta=0.001)
optimizer = torch.optim.Adam(model.parameters())
criterion = torch.nn.MSELoss()
X, y = torch.randn(64, 10), torch.randn(64, 1)

# Optimization loop
EPOCHS = 10
for _ in range(EPOCHS):
    output = model(X)
    loss = criterion(output, y)
    loss.backward()  # Now model has the gradients
    optimizer.step()  # Optimize model's parameters
    print(next(model.parameters()).grad)
    scaler.step()  # Scale gradients
    optimizer.zero_grad()  # Zero gradients before the next step
After scaler.step() you will have the scaled gradients available in param.grad for each parameter (just as they are accessed inside Scaler's step method), so you can do whatever you want with them.
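For the importance weights from the MAS paper specifically, a minimal sketch (not from the original answer; the name importance is illustrative) would accumulate, per parameter, the absolute gradient of the squared L2 norm of the model output over the inputs:
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(10, 100), torch.nn.ReLU(), torch.nn.Linear(100, 1)
)
X = torch.randn(64, 10)

importance = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
for x in X:
    model.zero_grad()
    out = model(x.unsqueeze(0))
    out.pow(2).sum().backward()  # gradient of the squared output norm for this input
    for name, p in model.named_parameters():
        importance[name] += p.grad.abs() / len(X)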

Using weights in CrossEntropyLoss and BCELoss (PyTorch)

I am training a PyTorch model to perform binary classification. My minority class makes up about 10% of the data, so I want to use a weighted loss function. The docs for BCELoss and CrossEntropyLoss say that I can use a 'weight' for each sample.
However, when I declare CE_loss = nn.BCELoss() or nn.CrossEntropyLoss() and then do CE_Loss(output, target, weight=batch_weights), where output, target, and batch_weights are Tensors of batch_size, I get the following error message:
forward() got an unexpected keyword argument 'weight'
Another way you could accomplish your goal is to use reduction='none' when initializing the loss and then multiply the resulting tensor by your weights before computing the mean.
e.g.
loss = torch.nn.BCELoss(reduction='none')
model = torch.sigmoid
weights = torch.rand(10,1)
inputs = torch.rand(10,1)
targets = torch.rand(10,1)
intermediate_losses = loss(model(inputs), targets)
final_loss = torch.mean(weights*intermediate_losses)
Of course for your scenario you still would need to calculate the weights tensor. But hopefully this helps!
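For completeness, a hedged sketch (not part of the original answer) of building that weights tensor from fixed class weights, assuming binary 0/1 targets and a roughly 10% minority class (the weight values are illustrative):
minority_weight, majority_weight = 9.0, 1.0
targets = (torch.rand(10, 1) > 0.9).float()  # fake binary labels for the sketch
weights = torch.where(targets == 1.0,
                      torch.tensor(minority_weight),
                      torch.tensor(majority_weight))
intermediate_losses = loss(model(inputs), targets)
final_loss = torch.mean(weights * intermediate_losses)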
Could it be that you want to apply separate fixed weights to all elements of class 0 and class 1 in your dataset? (It is not clear what value you are passing for batch_weights here.) If so, that is not what the weight parameter in BCELoss does: it expects a separate weight for every ELEMENT in the batch, not for every CLASS. There are several ways around this. You could construct a weight table for every element. Alternatively, you could use a custom loss function that does what you want:
def BCELoss_class_weighted(weights):
    def loss(input, target):
        input = torch.clamp(input, min=1e-7, max=1 - 1e-7)
        bce = - weights[1] * target * torch.log(input) \
              - (1 - target) * weights[0] * torch.log(1 - input)
        return torch.mean(bce)
    return loss
Note that it is important to add a clamp to avoid numerical instability.
HTH Jeroen
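For example, a usage sketch of the function above (the weight values are illustrative, assuming class 1 is the 10% minority):
criterion = BCELoss_class_weighted(weights=[1.0, 9.0])
preds = torch.sigmoid(torch.randn(10, 1))       # model outputs after a sigmoid
targets = torch.randint(0, 2, (10, 1)).float()  # binary labels
weighted_loss = criterion(preds, targets)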
The issue is where you are providing the weight parameter. As mentioned in the docs, here, the weight parameter should be provided during module instantiation.
For example, something like,
import torch
from torch import nn

weights = torch.FloatTensor([2.0, 1.2])
loss = nn.BCELoss(weight=weights)
You can find a more concrete example here or another helpful PT forum discussion here.
You need to pass the weights like below:
CE_loss = CrossEntropyLoss(weight=[…])
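For instance, a small illustrative sketch (the class weights here are made up, one entry per class):
import torch
from torch import nn

class_weights = torch.tensor([1.0, 9.0])  # e.g. upweight the ~10% minority class
ce_loss = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(8, 2)           # [batch, num_classes]
targets = torch.randint(0, 2, (8,))  # class indices
loss = ce_loss(logits, targets)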
This is similar to the idea of @Jeroen Vuurens, but the class weights are determined by the target mean (with this choice, samples with target 1 get weight 1/y_train_mean and samples with target 0 get weight 1/(1 - y_train_mean)):
y_train_mean = y_train.mean()
bi_cls_w2 = 1/(1 - y_train_mean)
bi_cls_w1 = 1/y_train_mean - bi_cls_w2
bce_loss = nn.BCELoss(reduction='none')
loss_fun = lambda pred, target: ((bi_cls_w1*target + bi_cls_w2) * bce_loss(pred, target)).mean()

BERT document embedding

I am trying to do document embedding using BERT. The code I use is a combination of two sources: the BERT Document Classification Tutorial with Code and the BERT Word Embeddings Tutorial. Below is the code; I feed the first 510 tokens of each document to the BERT model. Finally, I apply K-means clustering to these embeddings, but the members of each cluster are TOTALLY irrelevant. I am wondering how this is possible. Maybe something is wrong with my code. I would appreciate it if you could take a look at my code and tell me if there is something wrong with it. I use Google Colab to run this code.
# text_to_embedding function
import torch
from keras.preprocessing.sequence import pad_sequences

# assumed: `device` was defined elsewhere in the original notebook
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def text_to_embedding(tokenizer, model, in_text):
    '''
    Uses the provided BERT 'model' and 'tokenizer' to generate a vector
    representation of the input string, 'in_text'.
    Returns the vector stored as a numpy ndarray.
    '''

    # ===========================
    # STEP 1: Tokenization
    # ===========================

    MAX_LEN = 510

    # 'encode' will:
    # (1) Tokenize the sentence.
    # (2) Prepend the '[CLS]' token to the start.
    # (3) Append the '[SEP]' token to the end.
    # (4) Map tokens to their IDs.
    input_ids = tokenizer.encode(
        in_text,                  # sentence to encode.
        add_special_tokens=True,  # Add '[CLS]' and '[SEP]'.
        max_length=MAX_LEN,       # Truncate all sentences.
        # return_tensors='pt'     # Return pytorch tensors.
    )

    # Pad our input tokens. Truncation was handled above by the 'encode'
    # function, which also makes sure that the '[SEP]' token is placed at the
    # end *after* truncating.
    # Note: 'pad_sequences' expects a list of lists, but we only have one
    # piece of text, so we surround 'input_ids' with an extra set of brackets.
    results = pad_sequences([input_ids], maxlen=MAX_LEN, dtype="long",
                            value=0, truncating="post", padding="post")

    # Remove the outer list.
    input_ids = results[0]

    # Create attention masks.
    attn_mask = [int(i > 0) for i in input_ids]

    # Cast to tensors.
    input_ids = torch.tensor(input_ids)
    attn_mask = torch.tensor(attn_mask)

    # Add an extra dimension for the "batch" (even though there is only one
    # input in this batch).
    input_ids = input_ids.unsqueeze(0)
    attn_mask = attn_mask.unsqueeze(0)

    # ===========================
    # STEP 2: Run the model
    # ===========================

    # Put the model in evaluation mode--the dropout layers behave differently
    # during evaluation.
    model.eval()

    # Copy the inputs to the GPU.
    input_ids = input_ids.to(device)
    attn_mask = attn_mask.to(device)

    # Telling the model not to build the backward graph will make this
    # a little quicker.
    with torch.no_grad():
        # Forward pass, returns hidden states and predictions.
        # This will return the logits rather than the loss because we have
        # not provided labels.
        outputs = model(
            input_ids=input_ids,
            token_type_ids=None,
            attention_mask=attn_mask)

        hidden_states = outputs[2]

        # Sentence vectors:
        # To get a single vector for our entire sentence we have multiple
        # application-dependent strategies, but a simple approach is to
        # average the second-to-last hidden layer of each token, producing
        # a single 768-length vector.
        # `hidden_states` has shape [13 x 1 x ? x 768]
        # `token_vecs` is a tensor with shape [? x 768]
        token_vecs = hidden_states[-2][0]

        # Calculate the average of all ? token vectors.
        sentence_embedding = torch.mean(token_vecs, dim=0)

    # Move to the CPU and convert to a numpy ndarray.
    sentence_embedding = sentence_embedding.detach().cpu().numpy()
    return sentence_embedding

from transformers import BertTokenizer, BertModel

# Load the pre-trained model (weights).
model = BertModel.from_pretrained(
    'bert-base-uncased',
    output_hidden_states=True,  # Whether the model returns all hidden-states.
)
model.cuda()

# Load the BERT tokenizer.
print('Loading BERT tokenizer...')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
I don't know if it solves your problem, but here are my two cents:
You don't have to calculate the attention mask and do the padding manually. Have a look at the documentation. Just call the tokenizer itself:
results = tokenizer(in_text, max_length=MAX_LEN, truncation=True)
input_ids = results.input_ids
attn_mask = results.attention_mask
# Cast to tensors
...
Instead of using the average of the second-to-last hidden layer, you can try the same thing with the last hidden layer, or you can use the vector representing the [CLS] token from the last layer.
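For example, a minimal sketch (not from the original answer) of taking the [CLS] vector from the last hidden layer, reusing the outputs tuple from the question's code:
last_hidden = outputs[0]           # last hidden layer, shape [1, seq_len, 768]
cls_embedding = last_hidden[0, 0]  # the [CLS] token sits at the first position
sentence_embedding = cls_embedding.detach().cpu().numpy()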

How to restore KerasClassifier?

After saving the weights and JSON configuration of a KerasClassifier model (https://github.com/keras-team/keras/blob/master/keras/wrappers/scikit_learn.py) I need to restore it and verify the results.
But if I restore the weights and the model, then I have a Sequential object; how can I rebuild the original KerasClassifier from that?
I'm not sure I understood you correctly, but I propose the following solution. KerasClassifier inherits from BaseWrapper, which has the following __init__ signature:
def __init__(self, build_fn=None, **sk_params):
    self.build_fn = build_fn
    self.sk_params = sk_params
    self.check_params(sk_params)
Okay, what are build_fn and sk_params?
The build_fn should construct, compile and return a Keras model, which
will then be used to fit/predict. One of the following
three values could be passed to build_fn:
1. A function
2. An instance of a class that implements the __call__ method
3. None. This means you implement a class that inherits from either
KerasClassifier or KerasRegressor. The __call__ method of the
present class will then be treated as the default build_fn.
...
sk_params takes both model parameters and fitting parameters. Legal model
parameters are the arguments of build_fn. Note that like all other
estimators in scikit-learn, build_fn should provide default values for
its arguments, so that you could create the estimator without passing any
values to sk_params.
...
(Some comments are omitted; you can read them in full at this and this link.)
Since build_fn expects a function that returns a compiled Keras model (no matter whether it is a Sequential or just a Model), you can pass a function that returns the loaded model.
Edit: you also have to call fit with some data to restore the model when using this approach.
load model as build_fn
The fit method invokes build_fn, hence each time you try to train such a classifier you will load and then fit the loaded classifier.
For example:
from keras.models import load_model  # or another method, but this one is the simplest
from keras.wrappers.scikit_learn import KerasClassifier

def load_trained_model(*args, **kwargs):
    """probably this function expects sk_params, so you can use it in theory"""
    # renamed so it does not shadow keras' load_model and recurse
    path = "my_model.hd5"
    return load_model(path)

keras_classifier = KerasClassifier(load_trained_model, **sk_params)  # use your sk_params
keras_classifier.fit(X_tr, y_tr)  # I use a slice of shape (1, input_shape) to train
This will work, as the loaded model is already trained and compiled. But it slightly shifts your model's weights, even if you call it with a batch of size 1 and for 1 epoch.
load via build_fn closure
Also, you can load the model first (if you wish to provide the path easily and it's unacceptable to hardcode it), then return a function that is acceptable as a build_fn:
def load_model_return_build_fn(path):
    model = load_model(path)

    def build_fn(*args, **kwargs):
        """probably this function expects sk_params"""
        return model  # defined above

    return build_fn

build_fn = load_model_return_build_fn("model.hd5")
keras_classifier = KerasClassifier(build_fn, **sk_params)  # use your sk_params
keras_classifier.fit(X_tr, y_tr)  # I use a slice of shape (1, input_shape) to train
assign a model to its attribute
If you just plan to load and use a pre-trained model, you can load it with any method, assign it to the model attribute, and not call fit.
build_fn = load_model_return_build_fn("model.hd5")
# or any function which really builds and compiles a model
keras_classifier = KerasClassifier(build_fn, **sk_params)  # use your sk_params
keras_classifier.model = build_fn()  # assign the loaded model here, don't call fit
In that case you set the model explicitly on its attribute. Note that build_fn should still be a correct build_fn, otherwise it won't pass the self.check_params(sk_params) check.
Inherit from KerasClassifier (not as easy as I thought)
After all, the best solution I know is to inherit from KerasClassifier and add a load and/or from_file method.
import numpy as np
from keras.models import load_model
from keras.wrappers.scikit_learn import KerasClassifier

class KerasClassifierLoadable(KerasClassifier):
    @classmethod
    def from_file(cls, path, *args, **kwargs):
        keras_classifier = cls(*args, **kwargs)
        keras_classifier.model = load_model(path)
        outp_shape = keras_classifier.model.layers[-1].output_shape[-1]
        if outp_shape > 1:
            keras_classifier.classes_ = np.arange(outp_shape, dtype='int32')
        else:
            raise ValueError("Inconsistent output shape: outp_shape={}".format(outp_shape))
        keras_classifier.n_classes_ = len(keras_classifier.classes_)
        return keras_classifier

    def load(self, path):
        self.model = load_model(path)
        outp_shape = self.model.layers[-1].output_shape[-1]
        if outp_shape > 1:
            self.classes_ = np.arange(outp_shape, dtype='int32')
        else:
            raise ValueError("Inconsistent output shape: outp_shape={}".format(outp_shape))
        self.n_classes_ = len(self.classes_)
Here we should set self.classes_ to the correct class labels, but I just use integer values from range(0, n_classes).
Usage (the build_fn can be any appropriate build_fn):
keras_classifier = KerasClassifierLoadable.from_file("model.hd5", build_fn=build_fn)
keras_classifier = KerasClassifierLoadable(build_fn=build_fn)
keras_classifier.load("model.hd5")
If you have two files, model.json and model.h5 (the weights), then you can easily load the model and use it as you want.
from keras.models import model_from_json
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
loaded_model.load_weights("model.h5")
# evaluate loaded model on test data
loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
score = loaded_model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))
