pytorch: how to directly find the gradient of a loss w.r.t. a given variable (coming from theano)

In Theano, it was very easy to get the gradient of a given loss w.r.t. some variable:
loss = f(x, w)
dl_dw = tt.grad(loss, wrt=w)
I get that pytorch goes by a different paradigm, where you'd do something like:
loss = f(x, w)
loss.backward()
dl_dw = w.grad
The thing is, I might not want to do a full backward pass through the graph - just along the path needed to get to w.
I know you can define Variables with requires_grad=False if you don't want to backpropagate through them. But then you have to decide that at the time of variable creation (and the requires_grad=False property is attached to the variable, rather than to the call that computes the gradient, which seems odd).
My question is: is there some way to backpropagate on demand (i.e. only backpropagate along the path needed to compute dl_dw, as you would in Theano)?

It turns out that this is really easy. Just use torch.autograd.grad.
Example:
import torch
import numpy as np
from torch.autograd import grad
x = torch.autograd.Variable(torch.from_numpy(np.random.randn(5, 4)))
w = torch.autograd.Variable(torch.from_numpy(np.random.randn(4, 3)), requires_grad=True)
y = torch.autograd.Variable(torch.from_numpy(np.random.randn(5, 3)))
loss = ((x.mm(w) - y)**2).sum()
(d_loss_d_w, ) = grad(loss, w)
assert np.allclose(d_loss_d_w.data.numpy(), (x.transpose(0, 1).mm(x.mm(w)-y)*2).data.numpy())
Thanks to JerryLin for answering the question here.
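As a side note: on PyTorch 0.4 and later the Variable wrapper is deprecated, so a minimal sketch of the same computation with plain tensors (same shapes as above) would look like this:
import torch
import numpy as np
from torch.autograd import grad
# plain tensors; only w needs gradients
x = torch.from_numpy(np.random.randn(5, 4))
w = torch.from_numpy(np.random.randn(4, 3)).requires_grad_(True)
y = torch.from_numpy(np.random.randn(5, 3))
loss = ((x.mm(w) - y) ** 2).sum()
(d_loss_d_w,) = grad(loss, w)  # only traverses the path from loss back to w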

Related

calculate adjusted R2 using GridSearchCV

I am trying to use GridSearchCV with multiple scoring metrics, one of which is the adjusted R2. The latter, as far as I know, is not implemented in scikit-learn. I would like to confirm whether my approach is the correct one to implement the adjusted R2.
Using the scores implemented in scikit-learn (in the example below, MAE and R2), I can do something like shown below (in this dummy example I am ignoring good practices, like feature scaling and a suitable number of iterations for SVR):
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import r2_score, mean_absolute_error
#generate input
X = np.random.normal(75, 10, (1000, 2))
y = np.random.normal(200, 20, 1000)
#perform grid search
params = {"degree": [2, 3], "max_iter": [10]}
grid = GridSearchCV(SVR(), param_grid=params,
                    scoring={"MAE": "neg_mean_absolute_error", "R2": "r2"},
                    refit="R2")
grid.fit(X, y)
The example above will report the MAE and R2 for each cross-validated partition and will refit the best parameters based on the best R2. Following this example, I have attempted to do the same using a custom scorer:
def adj_r2(true, pred, p=2):
    '''p is the number of independent variables and n is the sample size'''
    n = true.size
    return 1 - ((1 - r2_score(true, pred)) * (n - 1))/(n-p-1)

scorer = make_scorer(adj_r2)
grid = GridSearchCV(SVR(), param_grid=params,
                    scoring={"MAE": "neg_mean_absolute_error", "adj R2": scorer},
                    refit="adj R2")
grid.fit(X, y)
#print(grid.cv_results_)
The code above appears to generate values for the "adj R2" scorer. I have two questions:
Is the approach used above technically correct coding-wise?
If the approach is correct, how can I define p (number of independent variables) in a dynamic way? As you can see, I had to force a default when defining the function, but I would like to be able to define p in GridSearchCV.
Firstly, an adjusted R2 score is not available in sklearn so far, because the API of scoring functions only takes y_true and y_pred. Hence, measuring the dimensions of X is out of the question.
We can work around this for the SearchCVs, though.
The scorer needs to have a signature of (estimator, X, y). This has been done with make_scorer here.
I have provided a simplified version of that here, wrapping the r2 scorer.
def adj_r2(estimator, X, y_true):
    n, p = X.shape
    pred = estimator.predict(X)
    return 1 - ((1 - r2_score(y_true, pred)) * (n - 1))/(n-p-1)

grid = GridSearchCV(SVR(), param_grid=params,
                    scoring={"MAE": "neg_mean_absolute_error",
                             "adj R2": adj_r2},
                    refit="adj R2")
grid.fit(X, y)
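To double-check that both metrics were actually recorded, one option (just a sketch, assuming the fit above completed) is to look at the per-scorer entries in cv_results_, which scikit-learn names mean_test_<scorer name>:
print(grid.best_params_)                  # chosen by refit="adj R2"
print(grid.cv_results_["mean_test_MAE"])
print(grid.cv_results_["mean_test_adj R2"])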

Why is the gradient computed with GradientTape wrong in this case (using tfp.vi.monte_carlo_variational_loss)

The gradients from tf.GradientTape seem not to match the correct minimum in the function I'm trying to minimise.
I'm trying to use TensorFlow Probability's black-box variational inference (with TF2), using tf.GradientTape and a Keras optimizer, calling apply_gradients. The surrogate posterior is a simple 1d Normal. I'm trying to approximate a mixture of two Normals, see the pdist function below. For simplicity I just try to optimise the scale parameter.
Current code:
from scipy.special import erf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow_probability import distributions as tfd
def pdist(x):
    return (.5/np.sqrt(2*np.pi)) * np.exp((-(x+3)**2)/2) + (.5/np.sqrt(2*np.pi)) * np.exp((-(x-3)**2)/2)

def logpdist(x):
    logp = np.log(1e-30 + pdist(x))
    assert np.all(np.isfinite(logp))
    return logp
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)
mu = tf.Variable(0.0,dtype=tf.float64)
scale = tf.Variable(1.0,dtype=tf.float64)
for it in range(100):
    with tf.GradientTape() as tape:
        surrogate_posterior = tfd.Normal(mu, scale)
        elbo_loss = tfp.vi.monte_carlo_variational_loss(logpdist, surrogate_posterior, sample_size=10000)
    gradients = tape.gradient(elbo_loss, [scale])
    optimizer.apply_gradients(zip(gradients, [scale]))
    if it % 10 == 0: print(scale.numpy(), gradients[0].numpy(), elbo_loss.numpy())
Output (showing every 10th iteration):
SCALE GRAD ELBO_LOSS
1.100, -1.000, 2.697
2.059, -0.508, 1.183
2.903, -0.354, 0.859 <<< (right answer about here)
3.636, -0.280, 1.208
4.283, -0.237, 1.989
4.869, -0.208, 3.021
5.411, -0.187, 4.310
5.923, -0.170, 5.525
6.413, -0.157, 7.250
6.885, -0.146, 8.775
For some reason the gradient doesn't reflect the true gradient, which should be about zero around scale=2.74.
Why does the gradient not relate to the actual elbo_loss?
Hopefully someone can elaborate on why the previous implementation failed (and also why it didn't raise an exception, but instead just gave the wrong answer). Anyway, I found I could fix it by ensuring that the key expressions used the TensorFlow maths library and not NumPy's. Specifically, replacing the two methods above with:
def pdist(x):
    return (.5/np.sqrt(2*np.pi)) * tf.exp((-(x+3)**2)/2) + (.5/np.sqrt(2*np.pi)) * tf.exp((-(x-3)**2)/2)

def logpdist(x):
    return tf.math.log(pdist(x))
The stochastic optimisation now works.
Output:
2.020, -0.874, 1.177
2.399, -0.393, 0.916
2.662, -0.089, 0.857
2.761, 0.019, 0.850
2.765, 0.022, 0.843
2.745, -0.006, 0.851
2.741, 0.017, 0.845
2.752, 0.005, 0.852
2.744, 0.015, 0.852
2.747, 0.013, 0.862
I'm not going to accept my own answer as I'd be grateful if some answers could be given that give intuition about why this now works and why it failed previously (and why the failure mode wasn't an exception or similar but instead an incorrect gradient).
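Not a full answer, but a minimal sketch of what I believe is the mechanism: NumPy operations pull eager tensors out of the graph (they become plain arrays), so the tape cannot see them, whereas the tf.* versions keep the path to the variable alive:
import numpy as np
import tensorflow as tf

x = tf.Variable(1.5)
with tf.GradientTape(persistent=True) as tape:
    y_tf = tf.exp(x)             # recorded on the tape
    y_np = np.exp(x.numpy())     # a plain float, detached from the graph
print(tape.gradient(y_tf, x))                # a finite tensor, roughly exp(1.5)
print(tape.gradient(tf.constant(y_np), x))   # None: no path back to x
In the original code only logpdist went through NumPy; the log q(z) part of the ELBO still depended on scale through TF ops, which would explain why the gradient was wrong but finite rather than raising an exception.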

What is the best way to find a function to closely approximate this data?

I am working with Python and linear regression, but can't seem to find a way to generate an accurate function. The following graph was generated from a 1000 element list of values.
I have tried scikit-learn, but I can't get it to actually learn and improve the estimate.
Ideally, the function will closely mirror the graph. The graph itself is blatantly sinusoidal, so I imagine that this might be straightforward.
Here is an example using RandomForestRegressor. It's based on a tutorial I did to learn, so the intellectual property might belong to somebody else. If anybody knows the proper reference, please comment/edit!
I think this fits your data - however I'd like to add that this creates/trains a random forest model, not a function in the sense of a physical description of the process that generates the data.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
rng = np.random.RandomState(42)
x = 10 * rng.rand(200)
def model(x, sigma=0.3):
    fast_oscillation = np.sin(5 * x)
    slow_oscillation = np.sin(0.5 * x)
    noise = sigma * rng.randn(len(x))
    return slow_oscillation + fast_oscillation + noise
y = model(x)
forest = RandomForestRegressor(200)
forest.fit(x[:, None], y)
xfit = np.linspace(0, 10, 1000)
yfit = forest.predict(xfit[:, None])
ytrue = model(xfit, sigma=0)
plt.errorbar(x, y, 0.3, fmt='o', alpha=0.6)
plt.plot(xfit, yfit, '-r')
plt.plot(xfit, ytrue, '-k', alpha=0.5)
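If an explicit functional form is wanted rather than a trained model, and the data really is sinusoidal, something like scipy.optimize.curve_fit could be tried instead. The model form and starting values below are only illustrative (fitting a single sine to the two-component synthetic data above will be rough, but it shows the mechanics):
from scipy.optimize import curve_fit

def sine_model(x, amplitude, frequency, phase, offset):
    return amplitude * np.sin(frequency * x + phase) + offset

# reuse the synthetic x, y generated above (or substitute your own 1000 values)
popt, pcov = curve_fit(sine_model, x, y, p0=[1.0, 1.0, 0.0, 0.0], maxfev=10000)
print(popt)  # fitted amplitude, frequency, phase, offset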

Element wise calculation breaks autograd

I am using PyTorch to calculate the loss for a logistic regression (I know PyTorch can do this automatically, but I have to do it myself). My function is defined below, but the cast to torch.tensor breaks autograd and gives me w.grad = None. I'm new to PyTorch, so I'm sorry.
logistic_loss = lambda X,y,w: torch.tensor([torch.log(1 + torch.exp(-y[i] * torch.matmul(w, X[i,:]))) for i in range(X.shape[0])], requires_grad=True)
Your post isn't very clear on details and this is a monster of a one-liner. I first reworked it to make a minimal, complete, verifiable example. Please correct me if I misunderstood your intentions and please do it yourself next time.
import torch
# unroll the one-liner to have an easier time understanding what's going on
def logistic_loss(X, y, w):
    elementwise = []
    for i in range(X.shape[0]):
        mm = torch.matmul(w, X[i, :])
        exp = torch.exp(-y[i] * mm)
        elementwise.append(torch.log(1 + exp))
    return torch.tensor(elementwise, requires_grad=True)
# I assume these are the expected dimensions of your input
X = torch.randn(5, 30, requires_grad=True)
y = torch.randn(5)
w = torch.randn(30)
# I assume you backpropagate from a reduced version
# of your sum, because you can't call .backward on multi-dimensional
# tensors
loss = logistic_loss(X, y, w).mean()
loss.backward()
print(X.grad)
The simplest solution to your problem is to replace torch.tensor(elementwise, requires_grad=True) with torch.stack(elementwise). You can think of torch.tensor as a constructor for entirely new tensors; if your tensor is more the result of some mathematical expression, you should use operations like torch.stack or torch.cat.
That being said, this code is still wildly inefficient because you do manual looping over i. Instead, you could write simply
def logistic_loss_vectorized(X, y, w):
    mm = torch.matmul(X, w)
    exp = torch.exp(-y * mm)
    return torch.log(1 + exp)
which is mathematically equivalent, but will be much faster in practice, because it allows for better parallelization due to lack of explicit looping.
Note that there is still a numerical issue with this code - you're taking a logarithm of an exponential, but the intermediate result, called exp, is likely to attain very high values, causing loss of precision. There are workarounds for that, which is why the loss functions provided by PyTorch are preferable.
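For the record, one common workaround (not the only one) is to notice that log(1 + exp(z)) is the softplus function, which PyTorch implements in a numerically stable way, so the vectorized loss can also be written as:
import torch
import torch.nn.functional as F

def logistic_loss_stable(X, y, w):
    # softplus(z) = log(1 + exp(z)), evaluated without overflowing for large z
    return F.softplus(-y * torch.matmul(X, w))
This is mathematically the same as the vectorized version above, but avoids forming the large intermediate exponential.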

scikit-learn: how to check coefficient significance

I tried to do a logistic regression with sklearn for a rather large dataset with ~600 dummy and only a few interval variables (and 300K rows in my dataset), and the resulting confusion matrix looks suspicious. I wanted to check the significance of the returned coefficients and run an ANOVA, but I cannot find how to access them. Is it possible at all? And what is the best strategy for data that contains lots of dummy variables? Thanks a lot!
Scikit-learn deliberately does not support statistical inference. If you want out-of-the-box coefficient significance tests (and much more), you can use the Logit estimator from statsmodels. This package mimics the interface of glm models in R, so you may find it familiar.
If you still want to stick to scikit-learn's LogisticRegression, you can use an asymptotic approximation to the distribution of the maximum likelihood estimates. Precisely, for a vector of maximum likelihood estimates theta, its variance-covariance matrix can be estimated as inverse(H), where H is the Hessian matrix of the log-likelihood at theta. This is exactly what the function below does:
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression
def logit_pvalue(model, x):
    """ Calculate z-scores for scikit-learn LogisticRegression.
    parameters:
        model: fitted sklearn.linear_model.LogisticRegression with intercept and large C
        x: matrix on which the model was fit
    This function uses asymptotics for maximum likelihood estimates.
    """
    p = model.predict_proba(x)
    n = len(p)
    m = len(model.coef_[0]) + 1
    coefs = np.concatenate([model.intercept_, model.coef_[0]])
    x_full = np.matrix(np.insert(np.array(x), 0, 1, axis=1))
    ans = np.zeros((m, m))
    for i in range(n):
        ans = ans + np.dot(np.transpose(x_full[i, :]), x_full[i, :]) * p[i, 1] * p[i, 0]
    vcov = np.linalg.inv(np.matrix(ans))
    se = np.sqrt(np.diag(vcov))
    t = coefs / se
    p = (1 - norm.cdf(abs(t))) * 2
    return p
# test p-values
x = np.arange(10)[:, np.newaxis]
y = np.array([0,0,0,1,0,0,1,1,1,1])
model = LogisticRegression(C=1e30).fit(x, y)
print(logit_pvalue(model, x))
# compare with statsmodels
import statsmodels.api as sm
sm_model = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print(sm_model.pvalues)
sm_model.summary()
The outputs of print() are identical, and they happen to be coefficient p-values.
[ 0.11413093 0.08779978]
[ 0.11413093 0.08779979]
sm_model.summary() also prints a nicely formatted HTML summary.
