I am working on a project using Keras with the TensorFlow back-end. For business reasons, I need to give a high weight to certain kinds of errors, so I implemented a customized loss function. For example:
error = np.abs(y_true - predict)
if error > low_limit:
    error = error * 10
I found that this customized loss function really changes the error value displayed during training. What I am wondering is whether this new loss function can actually change the behavior of backpropagation during training, because I did not see much difference in the weights of my model.
Short answer: yes, this loss function does change the behavior of backpropagation, but I am quite surprised that this loss function works at all...
Anyway, I feel the following loss function makes more sense:
import keras
from keras import backend as K

def my_mae(y_true, y_pred, low_bound=1e-3, coef=10.):
    raw_mae = keras.losses.mae(y_true, y_pred)
    # 1.0 where the per-sample MAE is at or below the bound, 0.0 otherwise
    mask = K.cast(raw_mae <= low_bound, dtype='float32')
    # equivalent to: new_mae = mask * raw_mae * coef + (1 - mask) * raw_mae
    new_mae = (1 + mask * (coef - 1)) * raw_mae
    return new_mae
which gives extra loss when a sample loss is too low.
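For completeness, a minimal sketch (the model architecture here is hypothetical) of how such a custom loss is wired into training; Keras calls the function with (y_true, y_pred), so the extra keyword arguments keep their defaults:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(32, activation='relu', input_shape=(16,)),
                    Dense(1)])
model.compile(optimizer='adam', loss=my_mae)  # any callable (y_true, y_pred) -> loss works here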
I would like to create a loss function that takes the predictions from my network and maps the values onto the normal distribution, before performing some more calculations. Below is a snippet of what I'm doing
@staticmethod
def gaussian_preds(preds):
    preds_argsort = torch.argsort(preds)
    fractional_ranks = (preds_argsort + 1) / (preds_argsort.shape[0] + 1)
    gaussian_preds = torch.special.ndtri(fractional_ranks)
    return gaussian_preds
However, the torch.argsort function obviously kills the gradient, so I cannot use this in the loss function. Does anyone have any ideas, workarounds, or approximations that would get me there?
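Not part of the original question, but one common workaround is to replace the hard ranking with a soft, differentiable approximation based on pairwise sigmoid comparisons. A minimal sketch (the function name and temperature parameter are my own; the temperature trades rank accuracy against gradient magnitude):

import torch

def soft_fractional_ranks(preds, temperature=0.1):
    # soft indicator of "preds[j] < preds[i]" for every pair (j, i)
    diffs = preds.unsqueeze(0) - preds.unsqueeze(1)          # diffs[j, i] = preds[i] - preds[j]
    soft_counts = torch.sigmoid(diffs / temperature).sum(0)  # ~ (# elements below) + 0.5
    return (soft_counts + 0.5) / (preds.shape[0] + 1)        # stays strictly inside (0, 1)

preds = torch.randn(8, requires_grad=True)
gaussianized = torch.special.ndtri(soft_fractional_ranks(preds))
gaussianized.sum().backward()   # gradients now flow back into preds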
I'm writing a custom loss function for a (sort of) semantic segmentation task where I compute binary crossentropy (using keras backend) for the target (2-d array) and predictions. The overall loss is a sum of 4 different loss functions.
In two of those functions I need to make a customized target array and compute binary crossentropy with these targets and the predictions. Here I would like to ignore and not calculate the loss where label is 1 (foreground) in the target array.
In pytorch's nll_loss() there is a parameter 'ignore_index' which is similar to what I'm trying to do.
I'm trying to achieve something like:
def binary_crossentropy(y_true, y_pred, ignore_label=1):
    if ignore_label == 1:
        return -(1 - y_true) * log(1 - y_pred)
    if ignore_label == 0:
        return -y_true * log(y_pred)
    return -(y_true * log(y_pred) + (1 - y_true) * log(1 - y_pred))
But for a keras custom loss function.
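Not from the original post, but a sketch of one way this is commonly written as a Keras custom loss, masking out the positions whose target equals the ignored label (the function name and the averaging choice are mine):

from keras import backend as K

def masked_binary_crossentropy(y_true, y_pred, ignore_label=1):
    # 1.0 where the target differs from the ignored label, 0.0 elsewhere
    keep = K.cast(K.not_equal(y_true, ignore_label), K.floatx())
    y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
    bce = -(y_true * K.log(y_pred) + (1.0 - y_true) * K.log(1.0 - y_pred))
    # average only over the positions that are kept
    return K.sum(bce * keep) / K.maximum(K.sum(keep), 1.0)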
I'm writing a custom layer for a TF Keras application. This layer should be able to perform a 2D convolution with additional masking information.
The layer is quite simple (omitting the init and compute_output_shape functions):
def build(self, input_shape):
    ks = self.kernel_size + (int(input_shape[0][-1]), self.filters)
    self.kernel = self.add_weight(name='kernel', shape=ks)
    self.ones = self.add_weight(name='ones', shape=ks,
                                trainable=False, initializer=initializers.get('ones'))
    self.bias = self.add_weight(name='bias', shape=(self.filters,))

def call(self, x):
    img, msk = x
    # img = tf.multiply(img, msk)
    img = tf.nn.convolution(img, self.kernel)
    msk = tf.nn.convolution(msk, self.ones)
    # img = tf.divide(img, msk)
    img = bias_add(img, self.bias)
    return [img, msk]
The problem lies within those two commented-out lines. They should just perform a simple, element-wise multiplication and division. If they are commented out, everything works fine. If I comment just one of them back in, the accuracy of my model drops by a factor of around 2-3.
For testing, I simply used a mask of ones. That should have no influence on the output of this layer or its performance (in terms of accuracy).
I tried this with the current version of TF (r1.12), the current nightly (r1.13), and the 2.0 preview. I also tried replacing the troublesome lines with, e.g., Keras Lambda and Multiply layers.
This may or may not be related to this problem:
Custom TF-Keras Layer performs worse than built-in layer
Mathematically, the element-wise operations shouldn't have an impact (as long as the mask consists only of ones).
They also shouldn't have an impact on the performance of this layer, since they influence neither the weights nor the data.
I don't know why this happens and hope some of you have an idea.
EDIT: Added kernel initializer, which I forgot before
I want to make a Conv network and I wish to use the ReLU activation function. Can someone please give me a clue about the correct way to initialize weights (I'm using Theano)?
Thanks
I'm not sure there is a hard and fast best way to initialize weights and bias for a ReLU layer.
Some claim that (a slightly modified version of) Xavier initialization works well with ReLUs. Others claim that small Gaussian random weights plus a bias of 1 work well (ensuring the weighted sum of positive inputs remains positive and thus does not end up in the ReLU's zero region).
In Theano, these can be achieved like this (assuming weights post-multiply the input):
w = theano.shared((numpy.random.randn(in_size, out_size) * 0.1).astype(theano.config.floatX))
b = theano.shared(numpy.ones(out_size, dtype=theano.config.floatX))
or
w = theano.shared((numpy.random.randn(in_size, out_size) * numpy.sqrt(2.0 / (in_size + out_size))).astype(theano.config.floatX))
b = theano.shared(numpy.zeros(out_size, dtype=theano.config.floatX))
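For reference (not in the original answer), the ReLU-specific scheme from He et al. scales the Gaussian by sqrt(2 / fan_in) rather than by the combined fan-in/fan-out; a sketch in the same style, assuming in_size and out_size are defined as above:

w = theano.shared((numpy.random.randn(in_size, out_size)
                   * numpy.sqrt(2.0 / in_size)).astype(theano.config.floatX))
b = theano.shared(numpy.zeros(out_size, dtype=theano.config.floatX))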
I just applied the log loss in sklearn for logistic regression: http://scikit-learn.org/stable/modules/generated/sklearn.metrics.log_loss.html
My code looks something like this:
from sklearn import cross_validation
from sklearn.cross_validation import KFold

def perform_cv(clf, X, Y, scoring):
    kf = KFold(X.shape[0], n_folds=5, shuffle=True)
    kf_scores = []
    for train, _ in kf:
        X_sub = X[train, :]
        Y_sub = Y[train]
        # apply 'log_loss' as the scoring function
        scores = cross_validation.cross_val_score(clf, X_sub, Y_sub, cv=5, scoring='log_loss')
        kf_scores.append(scores.mean())
    return kf_scores
However, I'm wondering why the resulting logarithmic losses are negative. I'd expect them to be positive since in the documentation (see my link above) the log loss is multiplied by a -1 in order to turn it into a positive number.
Am I doing something wrong here?
Yes, this is supposed to happen. It is not a 'bug' as others have suggested. The actual log loss is simply the positive version of the number you're getting.
Scikit-learn's unified scoring API always maximizes the score, so scores that need to be minimized are negated for the API to work correctly: a returned score is negated when it is one that should be minimized and left positive when it should be maximized.
This is also described in sklearn GridSearchCV with Pipeline and in scikit-learn cross validation, negative values with mean squared error; a similar discussion can be found here.
This way, a higher score means better performance (less loss).
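A minimal sketch of that convention against a recent scikit-learn (where the scorer has since been renamed 'neg_log_loss'); the dataset and model are arbitrary, purely for illustration:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

neg_scores = cross_val_score(clf, X, y, cv=5, scoring='neg_log_loss')
print(neg_scores)          # negative numbers, because the API maximizes scores
print(-neg_scores.mean())  # flip the sign to recover the usual positive log loss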
I cross-checked the sklearn implementation with several other methods. It seems to be an actual bug within the framework. Instead, consider the following code for calculating the log loss:
import scipy as sp

def llfun(act, pred):
    epsilon = 1e-15
    pred = sp.maximum(epsilon, pred)
    pred = sp.minimum(1 - epsilon, pred)
    ll = sum(act * sp.log(pred) + sp.subtract(1, act) * sp.log(sp.subtract(1, pred)))
    ll = ll * -1.0 / len(act)
    return ll
Also take into account that act and pred have to be Nx1 column vectors.
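Not part of the original answer, but a quick sanity check of this helper against sklearn.metrics.log_loss on toy data (values chosen purely for illustration, and assuming the older scipy where the NumPy aliases used above still exist):

import numpy as np
from sklearn.metrics import log_loss

act = np.array([1, 0, 1, 1, 0])             # true binary labels
pred = np.array([0.9, 0.2, 0.8, 0.6, 0.3])  # predicted probabilities of the positive class

print(llfun(act, pred))     # manual log loss from the snippet above
print(log_loss(act, pred))  # sklearn's metric on the same data

Both calls produce the same positive value, which supports the other answer's point that the negative numbers come from the scoring API's sign convention rather than from the metric itself.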