Lack of Sparse Solution with L1 Regularization in Pytorch - python-3.x

I am trying to apply L1 regularization to the first layer of a simple neural network (1 hidden layer). I looked into some other posts on StackOverflow that apply L1 regularization using PyTorch to figure out how it should be done (references: Adding L1/L2 regularization in PyTorch?, In Pytorch, how to add L1 regularizer to activations?). No matter how high I increase lambda (the L1 regularization strength parameter), I do not get true zeros in the first weight matrix. Why would this be? (Code is below.)
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
class Network(nn.Module):
    def __init__(self, nf, nh, nc):
        super(Network, self).__init__()
        self.lin1 = nn.Linear(nf, nh)
        self.lin2 = nn.Linear(nh, nc)
    def forward(self, x):
        l1out = F.relu(self.lin1(x))
        out = F.softmax(self.lin2(l1out))
        return out, l1out

def l1loss(layer):
    return torch.norm(layer.weight.data, p=1)
nf=10
nc=2
nh=6
learningrate=0.02
lmbda=10.
batchsize=50
net=Network(nf,nh,nc)
crit=nn.MSELoss()
optimizer=torch.optim.Adagrad(net.parameters(),lr=learningrate)
xtr=torch.Tensor(xtr)
ytr=torch.Tensor(ytr)
#ytr=torch.LongTensor(ytr)
xte=torch.Tensor(xte)
yte=torch.LongTensor(yte)
#cyte=torch.Tensor(yte)
it=200
for epoch in range(it):
    per = torch.randperm(len(xtr))
    for i in range(0, len(xtr), batchsize):
        ind = per[i:i+batchsize]
        bx, by = xtr[ind], ytr[ind]
        optimizer.zero_grad()
        output, l1out = net(bx)
        # l1reg = l1loss(net.lin1)
        loss = crit(output, by) + lmbda*l1loss(net.lin1)
        loss.backward()
        optimizer.step()
    print('Epoch [%i/%i], Loss: %.4f' % (epoch+1, it, np.float32(loss.data.numpy())))
corr = 0
tot = 0
for x, y in list(zip(xte, yte)):
    output, _ = net(x)
    _, pred = torch.max(output, -1)
    tot += 1  # y.size(0)
    corr += (pred == y).sum()
print(corr)
Note: The data has 10 features (2 classes and 800 training samples) and only the first 2 are relevant (by design) so one would assume true zeros should be easy enough to learn.

Your use of layer.weight.data removes the parameter (which is a PyTorch Variable) from its automatic differentiation context, making it a constant when the optimiser takes the gradients. This results in zero gradients, so the L1 penalty has no effect on training.
If you remove the .data, the norm is computed on the PyTorch Variable itself and the gradients will be correct.
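For example, the penalty helper could be written directly on the parameter (a minimal sketch of the corrected function):
def l1loss(layer):
    # layer.weight is an autograd-tracked Parameter, so the L1 norm
    # takes part in backpropagation; .data would detach it.
    return torch.norm(layer.weight, p=1)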
For more information on PyTorch's automatic differentiation mechanics, see the autograd documentation and tutorials.

Related

Why my cross entropy loss function does not converge?

I tried to write a cross entropy loss function myself. My loss function gives the same loss value as the official one, but when I use my loss function in the code instead of the official cross entropy loss function, the code does not converge. When I use the official cross entropy loss function, the code converges. Here is my code; please give me some suggestions. Thanks very much.
The input 'out' is a tensor (B x C) and 'label' contains class indices (1 x B).
from torch.autograd import Variable

class MylossFunc(nn.Module):
    def __init__(self):
        super(MylossFunc, self).__init__()
    def forward(self, out, label):
        out = torch.nn.functional.softmax(out, dim=1)
        n = len(label)
        loss = torch.FloatTensor([0])
        loss = Variable(loss, requires_grad=True)
        tmp = torch.log(out)
        #print(out)
        for i in range(n):
            loss = loss - torch.max(tmp[i][label[i]], torch.scalar_tensor(-100))/n
        loss = torch.sum(loss)
        return loss
Instead of using torch.softmax and torch.log, you should use torch.log_softmax, otherwise your training will become unstable with nan values everywhere.
This happens because when you take the softmax of your logits using the following line:
out = torch.nn.functional.softmax(out, dim=1)
you might get a zero in one of the components of out, and applying torch.log to it gives -inf (log(0) is undefined), which then turns into nan values once gradients are taken. That is why PyTorch (and other common libraries) provides a single stable operation, log_softmax, to avoid the numerical instabilities that occur when you apply torch.softmax and torch.log separately.
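For instance, the forward pass above could be rewritten along these lines (a sketch that keeps the question's clamping at -100; variable names are illustrative):
def forward(self, out, label):
    # log_softmax is numerically stable, unlike log(softmax(out))
    log_probs = torch.nn.functional.log_softmax(out, dim=1)
    n = len(label)
    loss = 0.
    for i in range(n):
        loss = loss - torch.max(log_probs[i][label[i]], torch.tensor(-100.)) / n
    return loss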

Why unused embedding vector changed after backward?

I'm partially fine-tuning an embedding matrix (just some particular indices). But during the backward/update phase, other embeddings, which are not expected to be modified, are changed as well.
Here is an example:
import torch
from torch import nn
embeds = nn.Embedding(10, 3)
optim = torch.optim.Adam(params=embeds.parameters(), lr=0.001, weight_decay=1e-6)
indexes = torch.tensor([1,2,3]) # expected to finetune
bce_loss = nn.BCELoss()
optim.zero_grad()
loss = bce_loss(embeds(indexes).sum(dim=-1), torch.zeros(3))
loss.backward()
optim.step()
I only want to fine-tune the 1st, 2nd and 3rd embeddings, but after the backward pass and optimizer step, the entire embedding matrix has changed. Why? And how can I solve this problem? Thanks!
The change in the embedding matrix after optim.step() is not entirely due to the loss you defined (i.e. BCELoss). You are overlooking the other part of the loss that is implicitly defined by the weight_decay parameter in your optimizer. When you supply a non-zero weight decay, it adds a second component to the loss that ranges over all parameters of the model. Just turn that off:
optim = torch.optim.Adam(..., weight_decay=0.)
and you will see the expected result.
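A quick way to verify this (a minimal sketch; BCEWithLogitsLoss is swapped in for BCELoss so that the unbounded embedding sums are valid inputs):
import torch
from torch import nn

embeds = nn.Embedding(10, 3)
before = embeds.weight.detach().clone()
optim = torch.optim.Adam(embeds.parameters(), lr=0.001, weight_decay=0.)  # no weight decay
indexes = torch.tensor([1, 2, 3])
optim.zero_grad()
loss = nn.BCEWithLogitsLoss()(embeds(indexes).sum(dim=-1), torch.zeros(3))
loss.backward()
optim.step()
# Only rows 1, 2, 3 receive a gradient, so only they should change:
print((embeds.weight.detach() != before).any(dim=-1))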

Different loss values and accuracies of MLP regressor in keras and scikit-learn

I have a neural network with one hidden layer implemented in both Keras and scikit-learn for solving a regression problem. In scikit-learn I used the MLPRegressor class with mostly default parameters, and in Keras I have a hidden Dense layer with parameters set to the same defaults as scikit-learn (which uses Adam with the same learning rate and epsilon, and a batch_size of 200). When I train the networks, the scikit-learn model has a loss value that is about half of Keras's, and its accuracy (measured as mean absolute error) is also better. Shouldn't the loss values be similar, if not identical, and the accuracies also be similar? Has anyone experienced something similar and been able to make the Keras model more accurate?
Scikit-learn model:
clf = MLPRegressor(hidden_layer_sizes=(1600,), max_iter=1000, verbose=True, learning_rate_init=.001)
Keras model:
inputs = keras.Input(shape=(cols,))
x = keras.layers.Dense(1600, activation='relu', kernel_initializer="glorot_uniform", bias_initializer="glorot_uniform", kernel_regularizer=keras.regularizers.L2(.0001))(inputs)
outputs = keras.layers.Dense(1,kernel_initializer="glorot_uniform", bias_initializer="glorot_uniform", kernel_regularizer=keras.regularizers.L2(.0001))(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer=keras.optimizers.Adam(epsilon=1e-8, learning_rate=.001),loss="mse")
model.fit(x=X, y=y, epochs=1000, batch_size=200)
It is because the formula for the mean squared error (MSE) loss in scikit-learn is different from that of TensorFlow.
From the source code of scikit-learn:
def squared_loss(y_true, y_pred):
    return ((y_true - y_pred) ** 2).mean() / 2
while MSE from tensorflow:
backend.mean(math_ops.squared_difference(y_pred, y_true), axis=-1)
As you can see, the scikit-learn one is divided by 2, which is consistent with what you said:
the scikit-learn model has a loss value that is about half of keras
This implies that the models from Keras and scikit-learn actually achieve similar performance. It also implies that a learning rate of 0.001 in scikit-learn is not equivalent to the same learning rate in TensorFlow.
Also, another smaller but significant difference is the formula of L2 regularization.
From the source code of scikit-learn,
# Add L2 regularization term to loss
values = 0
for s in self.coefs_:
    s = s.ravel()
    values += np.dot(s, s)
loss += (0.5 * self.alpha) * values / n_samples
while that of tensorflow is loss = l2 * reduce_sum(square(x)).
Therefore, with the same L2 regularization parameter, the TensorFlow model has stronger regularization, which results in a poorer fit to the training data.
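If you want to make the two objectives roughly proportional, one adjustment (a sketch, assuming the training data lives in X and using MLPRegressor's default alpha=0.0001) is to divide the Keras L2 coefficient by the number of training samples:
# scikit-learn: MSE/2 + 0.5*alpha*||w||^2/n_samples
# Keras:        MSE   + l2*||w||^2
# With l2 = alpha / n_samples, the Keras objective is (approximately) twice
# the scikit-learn one, so both favour the same weights.
alpha = 0.0001                   # MLPRegressor default
l2 = alpha / X.shape[0]          # assumes X holds the training data
reg = keras.regularizers.L2(l2)
x = keras.layers.Dense(1600, activation='relu', kernel_regularizer=reg)(inputs)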

Keras: Pixelwise class imbalance in binary image segmentation

I have a task in which I input a 500x500x1 image and get out a 500x500x1 binary segmentation. When it is working correctly, only a small fraction of the 500x500 pixels should be triggered (small "targets"). I'm using a sigmoid activation at the output. Since such a small fraction is desired to be positive, the training tends to stall with all outputs at zero, or very close to it. I've written my own loss function that partially deals with this, but I'd like to use binary cross entropy with class weighting if possible.
My question is in two parts:
If I naively apply binary_crossentropy as the loss to my 500x500x1 output, will it apply on a per pixel basis as desired?
Is there a way for keras to apply class weighting with the single sigmoid output per pixel?
To answer your questions:
Yes, binary_crossentropy will be applied per pixel, provided you feed your image segmentation network pairs of the form (500x500x1 grayscale image, 500x500x1 corresponding mask).
By passing the 'class_weight' parameter to model.fit().
Suppose you have 2 classes with a 90%-10% distribution. Then you may want to penalise your algorithm 9 times more when it makes a mistake on the less well represented class (the class with 10% in this case). Suppose you have 900 examples of class 0 and 100 examples of class 1.
Then your class weights dictionary (there are multiple ways to compute it; what is important is to assign a greater weight to the less well represented class) could be:
class_weights = {0: 1000/900, 1: 1000/100}
Example: model.fit(X_train, Y_train, epochs=30, batch_size=32, class_weight=class_weights)
NOTE: class_weight is only available for 2D targets. For 3D or higher-dimensional outputs, one should use 'sample_weight' instead, so for segmentation purposes you would rather use the sample_weight parameter, as sketched below.
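A minimal sketch of the per-pixel weighting route, assuming a Keras version that supports sample_weight_mode="temporal" and a model whose output has been reshaped to (500*500, 1) so the per-timestep weighting machinery applies (X_train and y_train are hypothetical names):
import numpy as np

# y_train: flattened masks of shape (num_samples, 500*500, 1)
model.compile(optimizer="adam", loss="binary_crossentropy",
              sample_weight_mode="temporal")
# weight positive pixels 9x more than background pixels
pixel_weights = np.where(y_train[..., 0] == 1, 9.0, 1.0)   # shape (num_samples, 500*500)
model.fit(X_train, y_train, epochs=30, batch_size=8, sample_weight=pixel_weights)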
The biggest gain, however, will come from using other loss functions. Other losses, apart from binary_crossentropy and categorical_crossentropy, inherently perform better on unbalanced datasets. Dice loss is one such loss function.
Keras implementation:
smooth = 1.

def dice_coef(y_true, y_pred):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return 1 - dice_coef(y_true, y_pred)
You can also use the sum of binary_crossentropy and other losses as a loss function if it suits you, i.e. loss = dice_loss + bce, as in the sketch below.
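A sketch of such a combined loss, built on the dice_coef_loss defined above (binary_crossentropy is Keras' per-pixel loss; the function name here is just illustrative):
from keras.losses import binary_crossentropy

def bce_dice_loss(y_true, y_pred):
    # pixel-wise binary cross-entropy plus the Dice loss from above
    return binary_crossentropy(y_true, y_pred) + dice_coef_loss(y_true, y_pred)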

sklearn auc ValueError: Only one class present in y_true

I searched Google and saw a couple of StackOverflow posts about this error, but they do not cover my case.
I use Keras to train a simple neural network and make some predictions on the split test dataset. But when I use roc_auc_score to calculate the AUC, I get the following error:
"ValueError: Only one class present in y_true. ROC AUC score is not defined in that case.".
I inspected the target label distribution, and it is highly imbalanced. Some labels (out of 29 in total) have only 1 instance, so it is likely that they have no positive instance in the test labels. Hence sklearn's roc_auc_score function reports the "only one class" problem. That's reasonable.
But I'm curious: when I use sklearn's cross_val_score function, it handles the AUC calculation without error.
my_metric = 'roc_auc'
scores = cross_validation.cross_val_score(myestimator, data, labels,
                                          cv=5, scoring=my_metric)
I wonder what happens inside cross_val_score. Is it because cross_val_score uses a stratified cross-validation data split?
UPDATE
I continued digging, but still can't find the reason for the difference. I see that cross_val_score calls check_scoring(estimator, scoring=None, allow_none=False) to return a scorer, and check_scoring calls get_scorer(scoring), which returns scorer=SCORERS[scoring].
And SCORERS['roc_auc'] is roc_auc_scorer, which is created by
roc_auc_scorer = make_scorer(roc_auc_score, greater_is_better=True,
                             needs_threshold=True)
So it's still using the roc_auc_score function. I don't get why cross_val_score behaves differently from directly calling roc_auc_score.
I think your hunch is correct. The AUC (area under the ROC curve) needs a sufficient number of samples of both classes in order to make sense.
By default, cross_val_score calculates the performance metric on each fold separately. Another option is to use cross_val_predict and compute the AUC over all folds combined.
You could do something like:
from sklearn.metrics import roc_auc_score
from sklearn.cross_validation import cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
class ProbaEstimator(LogisticRegression):
    """
    This little hack is needed because `cross_val_predict`
    uses `estimator.predict(X)` internally.
    Replace `LogisticRegression` with whatever classifier you like.
    """
    def predict(self, X):
        return super(self.__class__, self).predict_proba(X)[:, 1]
# some example data
X, y = make_classification()
# define your estimator
estimator = ProbaEstimator()
# get predictions
pred = cross_val_predict(estimator, X, y, cv=5)
# compute AUC score
roc_auc_score(y, pred)
