How To Calculate F1-Score For Multilabel Classification? - scikit-learn

I am trying to calculate the f1_score, but I get warnings in some cases when I use the sklearn f1_score method.
I have a multilabel prediction problem with 5 classes.
import numpy as np
from sklearn.metrics import f1_score, precision_recall_fscore_support
y_true = np.zeros((1,5))
y_true[0,0] = 1 # => label = [[1, 0, 0, 0, 0]]
y_pred = np.zeros((1,5))
y_pred[:] = 1 # => prediction = [[1, 1, 1, 1, 1]]
result_1 = f1_score(y_true=y_true, y_pred=y_pred, labels=None, average="weighted")
print(result_1) # prints 1.0
result_2 = precision_recall_fscore_support(y_true=y_true, y_pred=y_pred, labels=None, average="weighted")
print(result_2) # prints: (1.0, 1.0, 1.0, None) for precision/recall/fbeta_score/support
When I use average="samples" instead of "weighted", I get (0.1, 1.0, 0.1818..., None). Is the "weighted" option not useful for a multilabel problem, or how do I use the f1_score method correctly?
I also get a warning when using average="weighted":
"UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples."

It works if you add slightly more data:
import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score
y_true = np.array([[1,0,0,0], [1,1,0,0], [1,1,1,1]])
y_pred = np.array([[1,0,0,0], [1,1,1,0], [1,1,1,1]])
recall_score(y_true=y_true, y_pred=y_pred, average='weighted')
>>> 1.0
precision_score(y_true=y_true, y_pred=y_pred, average='weighted')
>>> 0.9285714285714286
f1_score(y_true=y_true, y_pred=y_pred, average='weighted')
>>> 0.95238095238095244
The data shows we have not missed any true positives, i.e. there are no false negatives (recall_score equals 1). However, we have predicted one false positive in the second observation, which leads to a precision_score of ~0.93.
As both precision_score and recall_score are non-zero with the weighted parameter, the f1_score therefore exists. I believe your original case is ill-defined due to the lack of information in the example: four of the five labels have no true samples, which is exactly what the UndefinedMetricWarning is pointing at.
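To see where the warning in the original one-sample example comes from, it helps to look at the per-label scores. A minimal sketch, assuming a recent scikit-learn (0.22+) where the scorers accept a zero_division argument:
import numpy as np
from sklearn.metrics import f1_score
y_true = np.array([[1, 0, 0, 0, 0]])
y_pred = np.array([[1, 1, 1, 1, 1]])
# Per-label F1: label 0 is predicted perfectly; labels 1-4 have no true samples,
# so their recall (and hence F1) is ill-defined and falls back to 0.
print(f1_score(y_true, y_pred, average=None, zero_division=0))  # [1. 0. 0. 0. 0.]
# "weighted" weights each label's F1 by its support (number of true samples);
# only label 0 has any support, so the weighted average is 1.0.
print(f1_score(y_true, y_pred, average="weighted", zero_division=0))  # 1.0
With average="samples" the score is instead computed per sample, so the four spurious positive predictions drag the per-sample precision (and F1) down, which explains the difference you observed.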

Related

Azure AutoML: how to find the best threshold in a precision recall curve?

I use AutoML for a classification problem. I obtained the following precision recall curve:
Is it possible to find the best threshold that maximizes the F-score from this curve? And how?
Using preprocessing and optimization mechanisms we can improve the F1 score. In the example below I tried to reproduce this with a StandardScaler step in the pipeline.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression as lrs
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
# X_train, X_test, y_train, y_test are assumed to be defined beforehand
pipeline = make_pipeline(StandardScaler(), lrs(random_state=1))
# Fit the pipeline on two features and compute the ROC curve
#
pipeline.fit(X_train[:,[2, 13]],y_train)
probs = pipeline.predict_proba(X_test[:,[2, 13]])
fpr1, tpr1, thresholds = roc_curve(y_test, probs[:, 1], pos_label=1)
roc_auc1 = auc(fpr1, tpr1)
#
# Fit the pipeline on two different features and compute the ROC curve
#
pipeline.fit(X_train[:,[4, 14]],y_train)
probs2 = pipeline.predict_proba(X_test[:,[4, 14]])
fpr2, tpr2, thresholds = roc_curve(y_test, probs2[:, 1], pos_label=1)
roc_auc2 = auc(fpr2, tpr2)
#
# Fit the pipeline on all features and compute the ROC curve
#
pipeline.fit(X_train,y_train)
probs3 = pipeline.predict_proba(X_test)
fpr3, tpr3, thresholds = roc_curve(y_test, probs3[:, 1], pos_label=1)
roc_auc3 = auc(fpr3, tpr3)
fig, ax = plt.subplots(figsize=(7.5, 7.5))
plt.plot(fpr1, tpr1, label='ROC Curve 1 (AUC = %0.2f)' % (roc_auc1))
plt.plot(fpr2, tpr2, label='ROC Curve 2 (AUC = %0.2f)' % (roc_auc2))
plt.plot(fpr3, tpr3, label='ROC Curve 3 (AUC = %0.2f)' % (roc_auc3))
plt.plot([0, 1], [0, 1], linestyle='--', color='red', label='Random Classifier')
plt.plot([0, 0, 1], [0, 1, 1], linestyle=':', color='green', label='Perfect Classifier')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend(loc="lower right")
plt.show()
With StandardScaler in the pipeline we get the best F1 score along with the maximum AUC.
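Coming back to the threshold question: scikit-learn's precision_recall_curve returns precision/recall values together with the thresholds that produce them, so the F1-maximizing threshold can be read straight off the curve. A minimal sketch, assuming the binary labels y_test and the probabilities probs from the snippet above:
import numpy as np
from sklearn.metrics import precision_recall_curve
precision, recall, thresholds = precision_recall_curve(y_test, probs[:, 1], pos_label=1)
# precision/recall contain one more point than thresholds, so drop the last entry
f1_scores = 2 * precision[:-1] * recall[:-1] / (precision[:-1] + recall[:-1] + 1e-12)
best_idx = np.argmax(f1_scores)
print("Best threshold: %.3f (F1 = %.3f)" % (thresholds[best_idx], f1_scores[best_idx]))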

Difference between F1-score and Accuracy when computing micro-average [duplicate]

I have tried many examples with F1 micro and Accuracy in scikit-learn and in all of them, I see that F1 micro is the same as Accuracy. Is this always true?
Script
from sklearn import svm
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
from sklearn.metrics import f1_score, accuracy_score
# prepare dataset
iris = load_iris()
X = iris.data[:, :2]
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# svm classification
clf = svm.SVC(kernel='rbf', gamma=0.7, C = 1.0).fit(X_train, y_train)
y_predicted = clf.predict(X_test)
# performance
print "Classification report for %s" % clf
print metrics.classification_report(y_test, y_predicted)
print("F1 micro: %1.4f\n" % f1_score(y_test, y_predicted, average='micro'))
print("F1 macro: %1.4f\n" % f1_score(y_test, y_predicted, average='macro'))
print("F1 weighted: %1.4f\n" % f1_score(y_test, y_predicted, average='weighted'))
print("Accuracy: %1.4f" % (accuracy_score(y_test, y_predicted)))
Output
Classification report for SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape=None, degree=3, gamma=0.7, kernel='rbf',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False)
             precision    recall  f1-score   support

          0       1.00      0.90      0.95        10
          1       0.50      0.88      0.64         8
          2       0.86      0.50      0.63        12

avg / total       0.81      0.73      0.74        30
F1 micro: 0.7333
F1 macro: 0.7384
F1 weighted: 0.7381
Accuracy: 0.7333
F1 micro = Accuracy
In classification tasks for which every test case is guaranteed to be assigned to exactly one class, micro-F is equivalent to accuracy. It won't be the case in multi-label classification.
This is because we are dealing with multi-class classification, where every test sample belongs to exactly one class rather than multiple labels. In that setting every misclassification is simultaneously a false positive (for the predicted class) and a false negative (for the true class), so the micro-averaged totals line up with plain accuracy.
Formula-wise, the F1 score is 2 * precision * recall / (precision + recall).
Micro-averaged precision, recall, F1 and accuracy are all equal for cases in which every instance must be classified into one (and only one) class. A simple way to see this is by looking at the formulas precision=TP/(TP+FP) and recall=TP/(TP+FN). The numerators are the same, and every FN for one class is another class's FP, which makes the denominators the same as well. If precision = recall, then F1 will also be equal.
For any such inputs you should be able to show that:
from sklearn.metrics import accuracy_score as acc
from sklearn.metrics import f1_score as f1
assert f1(y_true, y_pred, average='micro') == acc(y_true, y_pred)
I had the same issue so I investigated and came up with this:
Just thinking about the theory, it is impossible for accuracy and the F1-score to be identical on every single dataset. The reason is that the F1-score is independent of the true negatives, while accuracy is not.
By taking a dataset where f1 = acc and adding true negatives to it, you get f1 != acc (note that f1 below is called with its default average='binary', not 'micro'):
>>> from sklearn.metrics import accuracy_score as acc
>>> from sklearn.metrics import f1_score as f1
>>> y_pred = [0, 1, 1, 0, 1, 0]
>>> y_true = [0, 1, 1, 0, 0, 1]
>>> acc(y_true, y_pred)
0.6666666666666666
>>> f1(y_true,y_pred)
0.6666666666666666
>>> y_true = [0, 1, 1, 0, 1, 0, 0, 0, 0]
>>> y_pred = [0, 1, 1, 0, 0, 1, 0, 0, 0]
>>> acc(y_true, y_pred)
0.7777777777777778
>>> f1(y_true,y_pred)
0.6666666666666666
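For contrast, a quick check with micro-averaging on the same arrays brings the score back in line with accuracy:
>>> f1(y_true, y_pred, average='micro')
0.7777777777777778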

How to plot ROC Curve for multiclass data and measure MAUC from confusion matrix

I have used backpropagation on a dataset which has 3 classes: L, B, R. I have also made a confusion matrix after training the neural network.
Actual class array:
sample_test = array([0, 1, 0, 2, 0, 2, 1, 1, 0, 1, 1, 1], dtype=int64)
Predicted class array:
yp = array([0, 1, 0, 2, 0, 2, 0, 1, 0, 1, 1, 1], dtype=int64)
Code for confusion matrix:
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels

class_names = ['B', 'R', 'L']

def plot_confusion_matrix(y_true, y_pred, classes,
                          normalize=False,
                          title=None,
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if not title:
        if normalize:
            title = 'Normalized confusion matrix'
        else:
            title = 'Confusion matrix, without normalization'
    # Compute confusion matrix
    cm = confusion_matrix(y_true, y_pred)
    # Only use the labels that appear in the data
    classes = [0, 1, 2]
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)
    fig, ax = plt.subplots()
    im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
    ax.figure.colorbar(im, ax=ax)
    # We want to show all ticks...
    ax.set(xticks=np.arange(cm.shape[1]),
           yticks=np.arange(cm.shape[0]),
           # ... and label them with the respective list entries
           xticklabels=classes, yticklabels=classes,
           title=title,
           ylabel='True label',
           xlabel='Predicted label')
    # Rotate the tick labels and set their alignment.
    plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
             rotation_mode="anchor")
    # Loop over data dimensions and create text annotations.
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i in range(cm.shape[0]):
        for j in range(cm.shape[1]):
            ax.text(j, i, format(cm[i, j], fmt),
                    ha="center", va="center",
                    color="white" if cm[i, j] > thresh else "black")
    fig.tight_layout()
    return ax

np.set_printoptions(precision=2)

# Plot non-normalized confusion matrix
plot_confusion_matrix(sample_test, yp, classes=class_names,
                      title='Confusion matrix, without normalization')

# Plot normalized confusion matrix
plot_confusion_matrix(sample_test, yp, classes=class_names, normalize=True,
                      title='Normalized confusion matrix')
plt.show()
Output:
Now I want to plot a ROC curve for this and calculate the MAUC. I looked at the documentation but can't properly understand what to do.
I would be very grateful if anyone could give me some suggestions on how to do that. Thanks in advance.
The ROC is calculated per class: treat each class as the "positive" class and the other classes as the "negative" classes. Note that you first have to use predict_proba() to get the predicted probability per class. Something like this:
import pandas as pd
import seaborn as sns
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.metrics import roc_auc_score
iris = sns.load_dataset('iris')
X = iris.drop('species',axis=1)
y = iris['species']
X_train, X_test, y_train, y_test = train_test_split(X,y)
le = preprocessing.LabelEncoder()
le.fit(y_train)
le.transform(y_train)
model = DecisionTreeClassifier(max_depth=1)
model.fit(X_train,le.transform(y_train))
predictions = pd.DataFrame(model.predict_proba(X_test), columns=list(le.inverse_transform(model.classes_)))
print(roc_auc_score((y_test == 'versicolor').astype(float), predictions['versicolor']))
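To actually plot one ROC curve per class (one-vs-rest), the same idea can be put into a loop. A sketch, assuming y_test, le and predictions from the snippet above:
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
plt.figure()
for cls in le.classes_:
    # Treat `cls` as the positive class and every other class as negative
    y_true_bin = (y_test == cls).astype(int)
    fpr, tpr, _ = roc_curve(y_true_bin, predictions[cls])
    plt.plot(fpr, tpr, label='%s (AUC = %0.2f)' % (cls, auc(fpr, tpr)))
plt.plot([0, 1], [0, 1], linestyle='--', color='grey')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend(loc='lower right')
plt.show()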
Regarding your second question, calculating the multiclass AUC (MAUC): there are open-source implementations for it. Specifically, such solutions implement equations 3 and 7 of Hand and Till's original 2001 paper.
Take a look at this solution:
import numpy as np
import itertools

# pairwise class AUC
def a_value(y_true, y_pred_prob, zero_label=0, one_label=1):
    """
    Approximates the AUC by the method described in Hand and Till 2001,
    equation 3.

    NB: The class labels should be in the set [0, n-1] where n = # of classes.
    The class probability should be at the index of its label in the predicted
    probability list.

    Args:
        y_true: actual labels of test data
        y_pred_prob: predicted class probabilities
        zero_label: label treated as the positive class
        one_label: label treated as the negative class
    Returns:
        The A-value as a floating point.
    """
    # Keep only the samples belonging to the two classes being compared
    idx = np.isin(y_true, [zero_label, one_label])
    labels = y_true[idx]
    prob = y_pred_prob[idx, zero_label]
    sorted_ranks = labels[np.argsort(prob)]
    n0 = np.count_nonzero(sorted_ranks == zero_label)
    n1 = np.count_nonzero(sorted_ranks == one_label)
    # Sum of the (1-based) ranks of the zero_label samples
    sum_ranks = np.sum(np.where(sorted_ranks == zero_label)) + n0
    return (sum_ranks - (n0 * (n0 + 1) / 2.0)) / float(n0 * n1)  # Eqn 3 of the original paper

def MAUC(y_true, y_pred_prob, num_classes):
    """
    Calculates the MAUC over a set of multi-class probabilities and
    their labels. This is equation 7 in Hand and Till's 2001 paper.

    NB: The class labels should be in the set [0, n-1] where n = # of classes.
    The class probability should be at the index of its label in the
    probability list.

    Args:
        y_true: actual labels of test data
        y_pred_prob: predicted class probabilities
        num_classes (int): The number of classes in the dataset.
    Returns:
        The MAUC as a floating point value.
    """
    # Find all pairwise comparisons of labels
    class_pairs = [x for x in itertools.combinations(range(num_classes), 2)]
    # Have to take the average of the A value with both classes acting as label 0,
    # as this gives different outputs for more than 2 classes
    sum_avals = 0
    for pairing in class_pairs:
        sum_avals += (a_value(y_true, y_pred_prob, zero_label=pairing[0], one_label=pairing[1]) +
                      a_value(y_true, y_pred_prob, zero_label=pairing[1], one_label=pairing[0])) / 2.0
    return sum_avals * (2 / float(num_classes * (num_classes - 1)))  # Eqn 7 of the original paper
Credit: Pritom Saha Akash
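A hypothetical usage example, wiring MAUC up to the iris snippet earlier in this answer (it assumes y_test, le and the predictions DataFrame are still in scope):
# Labels must be integers in [0, n-1]; the probability columns must be in the same
# label order, which le.transform / model.classes_ guarantees here.
y_true_int = le.transform(y_test)
print("MAUC:", MAUC(y_true_int, predictions.values, num_classes=3))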

Micro F1 score in Scikit-Learn with Class imbalance

I have some class imbalance and a simple baseline classifier that assigns the majority class to every sample:
from sklearn.metrics import precision_score, recall_score, confusion_matrix
y_true = [0,0,0,1]
y_pred = [0,0,0,0]
confusion_matrix(y_true, y_pred)
This yields
[[3, 0],
[1, 0]]
This means TP=3, FP=1, FN=0.
So far, so good. Now I want to calculate the micro average of precision and recall.
precision_score(y_true, y_pred, average='micro') # yields 0.75
recall_score(y_true, y_pred, average='micro') # yields 0.75
I am OK with the precision, but why is recall not 1.0? How can they ever be the same in this example, given that FP > 0 and FN == 0? I know it must have to do with the micro-averaging, but I can't wrap my head around this one.
Yes, it's because of micro-averaging. See the documentation here to learn how it's calculated:
Note that if all labels are included, “micro”-averaging in a
multiclass setting will produce precision, recall and f-score that are all
identical to accuracy.
As you can see on the linked page, with micro-averaging both precision and recall are computed from counts summed over all classes: precision = sum(TP) / (sum(TP) + sum(FP)) and recall = sum(TP) / (sum(TP) + sum(FN)).
In your example, class 0 contributes TP=3, FP=1, FN=0 and class 1 contributes TP=0, FP=0, FN=1, so the single misclassified sample appears in both denominators.
So in your case, micro-averaged recall is calculated as
R = number of correct predictions / total predictions = 3/4 = 0.75
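A quick check that all of the micro-averaged quantities coincide here, using the same arrays:
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
y_true = [0, 0, 0, 1]
y_pred = [0, 0, 0, 0]
print(precision_score(y_true, y_pred, average='micro'))  # 0.75
print(recall_score(y_true, y_pred, average='micro'))     # 0.75
print(f1_score(y_true, y_pred, average='micro'))         # 0.75
print(accuracy_score(y_true, y_pred))                    # 0.75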

How to get the resulting AUC using scikit-learn

Hi, I want to combine a train/test split with cross-validation and get the results as AUC.
With my first approach I get it, but only with accuracy.
from sklearn.model_selection import train_test_split
# split data into train+validation set and test set
X_trainval, X_test, y_trainval, y_test = train_test_split(dataset.data, dataset.target)
# split train+validation set into training and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(X_trainval, y_trainval)
# train the classifier
clf.fit(X_train, y_train)
# evaluate the classifier on the validation set
score = clf.score(X_valid, y_valid)
# retrain on the combined training & validation set and evaluate on the test set
clf.fit(X_trainval, y_trainval)
test_score = clf.score(X_test, y_test)
And I cannot find how to apply roc_auc. Please help.
Using scikit-learn you can do:
import numpy as np
from sklearn import metrics
y = np.array([1, 1, 2, 2])
scores = np.array([0.1, 0.4, 0.35, 0.8])
fpr, tpr, thresholds = metrics.roc_curve(y, scores, pos_label=2)
Now we get:
print(fpr)
array([ 0. , 0.5, 0.5, 1. ])
print(tpr)
array([ 0.5, 0.5, 1. , 1. ])
print(thresholds)
array([ 0.8 , 0.4 , 0.35, 0.1 ])
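To collapse this example curve into a single number, these arrays can be passed straight to metrics.auc:
print(metrics.auc(fpr, tpr))
# 0.75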
In your code, after training your classifier, get the predictions with:
y_preds = clf.predict(X_test)
And then use this to calculate the auc value:
from sklearn.metrics import roc_curve, auc
fpr, tpr, thresholds = roc_curve(y, y_preds, pos_label=1)
auc_roc = auc(fpr, tpr)
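Note that roc_curve is designed for continuous scores rather than hard 0/1 predictions, so if your classifier exposes predict_proba (an assumption about your clf), a common alternative sketch is:
from sklearn.metrics import roc_auc_score
# Probability of the positive class for each test sample
y_scores = clf.predict_proba(X_test)[:, 1]
auc_roc = roc_auc_score(y_test, y_scores)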
