I am doing a multiclass classification problem. There are 46 unique classes in my dataset. I have computed the AUC score for each class and plotted it, but I want to plot the AUC scores for several types of models in one graph, i.e. for LogisticRegression, XGBoost and two more models used to solve the multiclass problem. My code so far:
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import roc_curve, auc
import pandas as pd

n_classes = 46
best_C = 1000
best_gamma = 0.0001
svc_model_grid_param = SVC(C=best_C, kernel="rbf", gamma=best_gamma)
model_OVR_svc = OneVsRestClassifier(svc_model_grid_param)
y_score = model_OVR_svc.fit(X_train, y_train).decision_function(X_valid)

# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()

# calculate dummies once
y_test_dummies = pd.get_dummies(y_valid, drop_first=False).values
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_test_dummies[:, i], y_score[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])
Plotting--
import matplotlib.pyplot as plt

lists = sorted(roc_auc.items())  # sorted by key, returns a list of tuples
x, y = zip(*lists)               # unpack a list of pairs into two tuples
plt.xlabel('Class')
plt.ylabel('AUC Score')
plt.plot(x, y)
plt.show()
Graph: (per-class AUC plot for the SVC model shown above)
What I want to do: the same kind of plot, but with one line per model (LogisticRegression, XGBoost, etc.) on a single set of axes.
Can anyone help me do this? Thanks in advance.
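One way to approach this is sketched below. It is only a rough sketch that reuses the per-class AUC computation from the code above: it assumes X_train, y_train, X_valid, y_valid and n_classes are defined as before, that each model exposes either decision_function or predict_proba with one column per class, and it uses RandomForestClassifier as a stand-in for the extra models (an XGBClassifier could be added to the dict the same way).
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve, auc
import pandas as pd
import matplotlib.pyplot as plt

def per_class_auc(model, X_train, y_train, X_valid, y_valid, n_classes):
    """Fit the model and return a list of one-vs-rest AUC scores, one per class."""
    model.fit(X_train, y_train)
    if hasattr(model, "decision_function"):
        y_score = model.decision_function(X_valid)
    else:
        y_score = model.predict_proba(X_valid)
    # one indicator column per class, as in the SVC example above
    y_valid_dummies = pd.get_dummies(y_valid, drop_first=False).values
    aucs = []
    for i in range(n_classes):
        fpr, tpr, _ = roc_curve(y_valid_dummies[:, i], y_score[:, i])
        aucs.append(auc(fpr, tpr))
    return aucs

models = {
    "SVC (OvR)": model_OVR_svc,
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(),
}
for name, model in models.items():
    aucs = per_class_auc(model, X_train, y_train, X_valid, y_valid, n_classes)
    plt.plot(range(n_classes), aucs, marker=".", label=name)
plt.xlabel("Class")
plt.ylabel("AUC Score")
plt.legend()
plt.show()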
I have used back propagation on a dataset which has 3 classes: L, B, R. I have also made a confusion matrix after making the neural network.
Actual class array:
sample_test = np.array([0, 1, 0, 2, 0, 2, 1, 1, 0, 1, 1, 1])
Predicted class array:
yp = np.array([0, 1, 0, 2, 0, 2, 0, 1, 0, 1, 1, 1])
Code for confusion matrix:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels

class_names = ['B', 'R', 'L']
def plot_confusion_matrix(y_true, y_pred, classes,
                          normalize=False,
                          title=None,
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if not title:
        if normalize:
            title = 'Normalized confusion matrix'
        else:
            title = 'Confusion matrix, without normalization'

    # Compute confusion matrix
    cm = confusion_matrix(y_true, y_pred)
    # Only use the labels that appear in the data, mapped to the given class names
    classes = np.array(classes)[unique_labels(y_true, y_pred)]
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)

    fig, ax = plt.subplots()
    im = ax.imshow(cm, interpolation='nearest', cmap=cmap)
    ax.figure.colorbar(im, ax=ax)
    # We want to show all ticks...
    ax.set(xticks=np.arange(cm.shape[1]),
           yticks=np.arange(cm.shape[0]),
           # ... and label them with the respective list entries
           xticklabels=classes, yticklabels=classes,
           title=title,
           ylabel='True label',
           xlabel='Predicted label')

    # Rotate the tick labels and set their alignment.
    plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
             rotation_mode="anchor")

    # Loop over data dimensions and create text annotations.
    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i in range(cm.shape[0]):
        for j in range(cm.shape[1]):
            ax.text(j, i, format(cm[i, j], fmt),
                    ha="center", va="center",
                    color="white" if cm[i, j] > thresh else "black")
    fig.tight_layout()
    return ax


np.set_printoptions(precision=2)

# Plot non-normalized confusion matrix
plot_confusion_matrix(sample_test, yp, classes=class_names,
                      title='Confusion matrix, without normalization')

# Plot normalized confusion matrix
plot_confusion_matrix(sample_test, yp, classes=class_names, normalize=True,
                      title='Normalized confusion matrix')

plt.show()
Output:
Now I want to plot a ROC curve for this and calculate the MAUC. I looked at the documentation but can't properly understand what to do.
I would be very grateful if anyone could give me some suggestions on how to do that. Thanks in advance.
The ROC is calculated per class: treat each class as the "positive" class and all the other classes as the "negative" class. Note that you first have to use predict_proba() to get the predicted probability per class. Something like this:
import seaborn as sns
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.metrics import roc_auc_score

iris = sns.load_dataset('iris')
X = iris.drop('species', axis=1)
y = iris['species']
X_train, X_test, y_train, y_test = train_test_split(X, y)

le = preprocessing.LabelEncoder()
le.fit(y_train)

model = DecisionTreeClassifier(max_depth=1)
model.fit(X_train, le.transform(y_train))

predictions = pd.DataFrame(model.predict_proba(X_test),
                           columns=list(le.inverse_transform(model.classes_)))
print(roc_auc_score((y_test == 'versicolor').astype(float), predictions['versicolor']))
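To get a per-class AUC for every class at once, here is a small sketch reusing the objects above; it assumes every class actually appears in y_test:
# Sketch: one-vs-rest AUC for each class of the fitted decision tree
per_class_auc = {}
for cls in le.classes_:
    # treat `cls` as the positive class and everything else as negative
    per_class_auc[cls] = roc_auc_score((y_test == cls).astype(float), predictions[cls])
print(per_class_auc)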
Regarding your second question about calculating the multiclass AUC (MAUC), there are open-source implementations for it. Specifically, such solutions implement equations 3 and 7 of Hand and Till's original 2001 paper.
Take a look at this solution:
import numpy as np
import itertools


# pairwise class AUC
def a_value(y_true, y_pred_prob, zero_label=0, one_label=1):
    """
    Approximates the pairwise AUC by the method described in Hand and Till 2001,
    equation 3.

    NB: The class labels should be in the set [0, n-1] where n = # of classes.
    The class probability should be at the index of its label in the predicted
    probability list.

    Args:
        y_true: actual labels of the test data
        y_pred_prob: predicted class probabilities, shape (n_samples, n_classes)
        zero_label: the label to treat as class "0" in this pairwise comparison
        one_label: the label to treat as class "1" in this pairwise comparison
    Returns:
        The A-value as a floating point.
    """
    # Keep only the samples belonging to the two classes being compared
    idx = np.isin(y_true, [zero_label, one_label])
    labels = y_true[idx]
    prob = y_pred_prob[idx, zero_label]
    sorted_ranks = labels[np.argsort(prob)]

    n0 = np.count_nonzero(sorted_ranks == zero_label)
    n1 = np.count_nonzero(sorted_ranks == one_label)
    # Sum of the (1-based) ranks of the zero_label samples
    sum_ranks = np.sum(np.where(sorted_ranks == zero_label)) + n0
    return (sum_ranks - (n0 * (n0 + 1) / 2.0)) / float(n0 * n1)  # Eqn 3 of the original paper


def MAUC(y_true, y_pred_prob, num_classes):
    """
    Calculates the MAUC over a set of multi-class probabilities and
    their labels. This is equation 7 in Hand and Till's 2001 paper.

    NB: The class labels should be in the set [0, n-1] where n = # of classes.
    The class probability should be at the index of its label in the
    probability list.

    Args:
        y_true: actual labels of the test data
        y_pred_prob: predicted class probabilities, shape (n_samples, n_classes)
        num_classes (int): the number of classes in the dataset
    Returns:
        The MAUC as a floating point value.
    """
    # Find all pairwise comparisons of labels
    class_pairs = [x for x in itertools.combinations(range(num_classes), 2)]

    # Have to take the average of the A-value with both classes acting as label 0,
    # as this gives different outputs for more than 2 classes
    sum_avals = 0
    for pairing in class_pairs:
        sum_avals += (a_value(y_true, y_pred_prob, zero_label=pairing[0], one_label=pairing[1]) +
                      a_value(y_true, y_pred_prob, zero_label=pairing[1], one_label=pairing[0])) / 2.0

    return sum_avals * (2 / float(num_classes * (num_classes - 1)))  # Eqn 7 of the original paper
Credit: Pritom Saha Akash
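As a usage sketch with the decision-tree example above: MAUC expects integer labels in [0, n-1] and a probability matrix whose columns follow that same order, which holds here because the tree was fit on le.transform(y_train).
# Sketch: MAUC for the iris decision tree fitted earlier
y_true_int = le.transform(y_test)             # integer labels 0..n-1
y_pred_prob = model.predict_proba(X_test)     # columns ordered as model.classes_ = [0, 1, 2]
print(MAUC(y_true_int, y_pred_prob, num_classes=len(le.classes_)))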
In FastText I want to change the balance between precision and recall. Can it be done?
If you're referring to the Python fasttext implementation, then I'm afraid there is no simple built-in method to do this. What you can do is look at the returned probabilities and call an AUC or ROC curve plot method of your choice on the probability lists. Here is a code example that does just this for a binary classifier:
import re

# label the data (fasttext_classifier is an already-trained binary fastText model)
labels, probabilities = fasttext_classifier.predict([re.sub('\n', ' ', sentence)
                                                     for sentence in test_sentences])

# convert fasttext multilabel results to a binary classifier (probability of TRUE)
labels = list(map(lambda x: x == ['__label__TRUE'], labels))
probabilities = [probability[0] if label else (1 - probability[0])
                 for label, probability in zip(labels, probabilities)]
And then you are free to build your metrics using the common sklearn methods:
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import f1_score
from matplotlib import pyplot

# testy holds the true binary labels of test_sentences
roc_auc = roc_auc_score(testy, probabilities)
print('ROC AUC=%.3f' % roc_auc)

# calculate roc curve
fpr, tpr, _ = roc_curve(testy, probabilities)
# plot the roc curve for the model
pyplot.plot(fpr, tpr, marker='.', label='ROC curve')
# axis labels (TPR is also known as sensitivity; FPR is 1 - specificity)
pyplot.xlabel('False Positive Rate')
pyplot.ylabel('True Positive Rate')
# show the legend
pyplot.legend()
# show the plot
pyplot.show()

precision_values, recall_values, _ = precision_recall_curve(testy, probabilities)
f1 = f1_score(testy, labels)
# summarize scores
print('f1=%.3f auc=%.3f' % (f1, roc_auc))
# plot the precision-recall curve
pyplot.plot(recall_values, precision_values, marker='.', label='Precision-Recall')
# axis labels
pyplot.xlabel('Recall')
pyplot.ylabel('Precision')
# show the legend
pyplot.legend()
# show the plot
pyplot.show()
The command-line fastText version has a threshold parameter, and you can perform multiple runs with different thresholds, but this is needlessly time-consuming.
I have 21 classes. I am using RandomForest. I want to plot a ROC curve, so I checked the example in scikit ROC with SVM
The example uses SVM. SVM has parameters like probability and decision_function_shape, which RF does not.
So how can I binarize RandomForest and plot a ROC?
Thank you
EDIT
To create the fake data: there are 20 features and 21 classes (3 samples for each class).
import numpy as np
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedShuffleSplit, cross_val_score

df = pd.DataFrame(np.random.rand(63, 20))
label = np.arange(len(df)) // 3 + 1
df['label'] = label

x = df.drop('label', axis=1)
y = df['label']

# TO TRAIN THE MODEL: IT IS A STRATIFIED SHUFFLED SPLIT
clf = make_pipeline(RandomForestClassifier())
result_list = []
xSSSmean10 = []
for i in range(10):
    sss = StratifiedShuffleSplit(n_splits=10, test_size=0.1, random_state=i)
    scoresSSS = cross_val_score(clf, x, y, cv=sss)
    xSSSmean10.append(scoresSSS.mean())
result_list.append(xSSSmean10)
For multilabel random forest, each of your 21 labels has a binary classification, and you can create a ROC curve for each of the 21 classes.
Your y_train should be a matrix of 0 and 1 for each label.
Assume you have fit a multilabel random forest from sklearn called rf, and have X_test and y_test after a train-test split. You can plot the ROC curve in Python for your first label using this:
from sklearn import metrics
probs = rf.predict_proba(X_test)
fpr, tpr, threshs = metrics.roc_curve(y_test['name_of_your_first_tag'], probs[0][:, 1])
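To repeat this for all 21 labels on one figure, here is a rough sketch; it assumes y_test is a 0/1 DataFrame whose columns are in the same order as the outputs rf was trained on:
import matplotlib.pyplot as plt
from sklearn import metrics

probs = rf.predict_proba(X_test)   # list of (n_samples, 2) arrays, one per label
for k, label_name in enumerate(y_test.columns):
    fpr, tpr, _ = metrics.roc_curve(y_test[label_name], probs[k][:, 1])
    plt.plot(fpr, tpr, label='%s (AUC=%.2f)' % (label_name, metrics.auc(fpr, tpr)))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(fontsize='small')
plt.show()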
Hope this helps. If you provide your code and data I could write this more specifically.
I have been trying to evaluate the performance of my one-class SVM. I have tried plotting an ROC curve using scikit-learn, and the results have been a bit bizarre.
from sklearn.model_selection import train_test_split
from sklearn.svm import OneClassSVM
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt

X_train, X_test = train_test_split(compressed_dataset, test_size=0.5, random_state=42)
clf = OneClassSVM(nu=0.1, kernel="rbf", gamma=0.1)
y_score = clf.fit(X_train).decision_function(X_test)

pred = clf.predict(X_train)
fpr, tpr, thresholds = roc_curve(pred, y_score)
roc_auc = auc(fpr, tpr)

# Plotting roc curve
plt.figure()
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.legend(loc="lower right")
plt.show()
The ROC curve I get:
Can somebody help me out with this?
What is bizarre about this plot? You fixed a single set of nu and gamma, so your model is neither over- nor underfitting. Moving the threshold (which is the ROC variable) does not lead to 100% TPR. Try a high gamma and a very small nu (which upper-bounds the training errors) and you will get more "typical" plots.
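For example, a sketch along those lines; it assumes you also have ground-truth inlier/outlier labels for the held-out samples (y_test, encoded as +1/-1), since an ROC curve is only meaningful against true labels:
from sklearn.svm import OneClassSVM
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt

clf = OneClassSVM(nu=0.01, kernel="rbf", gamma=10)   # very small nu, high gamma
clf.fit(X_train)
scores = clf.decision_function(X_test)               # higher score = more "normal"
fpr, tpr, _ = roc_curve(y_test, scores)              # y_test: +1 inlier, -1 outlier (assumed)
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % auc(fpr, tpr))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc='lower right')
plt.show()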
In my opinion, get the scores:
pred_scores = clf.score_samples(X_train)
Then the pred_scores need to be min-max normalized before they are used (for example, to compute an ROC curve).
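A minimal sketch of that normalization step; the normalized scores can then be passed to roc_curve in place of the raw decision values, provided true labels are available:
import numpy as np

pred_scores = clf.score_samples(X_train)
# min-max normalize the scores into the [0, 1] range
norm_scores = (pred_scores - pred_scores.min()) / (pred_scores.max() - pred_scores.min())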