Classification score: SVM

I am using libsvm for multi-class classification. How can I attach a classification score to each class, so that I can compare the confidence of the classifications? For a given sample the output would be:
Class 1: score1
Class 2: score2
Class 3: score3
Class 4: score4

You can use a one-vs-all approach: treat the problem as a set of two-class classifications and request decision values from libSVM. For each classifier, one class is taken as the positive class and all remaining classes together as the negative class.
Then compare the decision values of the resulting classifiers and assign the sample to the class with the highest decision value. For example, if sample 1 has decision value 0.54 for class 1, 0.64 for class 2, 0.43 for class 3, and 0.80 for class 4, you would classify it as class 4.
You can also classify using probability estimates instead of decision values via the -b option in libSVM.
Hope this helps.
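A minimal sketch of the idea above using scikit-learn's SVC, which wraps libsvm (the dataset and parameters here are placeholders, not the asker's setup):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# decision_function_shape='ovr' exposes one decision value per class;
# probability=True enables Platt-scaled estimates (the '-b' style output)
clf = SVC(kernel='rbf', decision_function_shape='ovr', probability=True)
clf.fit(X, y)

sample = X[:1]
scores = clf.decision_function(sample)[0]  # one decision value per class
probs = clf.predict_proba(sample)[0]       # one probability per class

for label, score, prob in zip(clf.classes_, scores, probs):
    print(f"Class {label}: decision value {score:.3f}, probability {prob:.3f}")

# assign the sample to the class with the highest score
print("Predicted:", clf.classes_[np.argmax(scores)])
```

The same per-class weights and kernels as in command-line libsvm apply; only the interface differs.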

Another option is to use the LIBLINEAR package, which internally implements the one-vs-all strategy for solving multi-class problems. In LIBSVM, the multi-class implementation is based on the one-vs-one strategy.

Related

Multiclass AUC with 95% confidence interval

I am currently trying to figure out whether there is a way to get the 95% CI of the AUC in Python. Currently, I have a ypred list that contains the highest-probability class predictions for the 4 classes I have (so either a 0/1/2/3 at each position) and a yactual list which contains the actual labels at each position. How exactly do I go about bootstrapping samples for multiple classes?
Edit: Currently I calculate the AUC with a one-vs-all scheme: I take the AUC for each class versus the rest and average those 4 values to get the final AUC.
Performing a one-vs-all classification scheme for each class and reporting out per class was good enough.
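One way to bootstrap a CI for the macro one-vs-rest AUC is to resample sample indices with replacement and recompute the AUC each time. A sketch with made-up labels and scores (AUC needs per-class probability scores, not the hard 0/1/2/3 predictions):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# placeholder data: 4-class labels and per-class probability scores
y_true = rng.integers(0, 4, size=200)
y_score = rng.random((200, 4))
y_score /= y_score.sum(axis=1, keepdims=True)

aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), size=len(y_true))  # resample with replacement
    if len(np.unique(y_true[idx])) < 4:
        continue  # skip resamples missing a class (AUC is undefined there)
    aucs.append(roc_auc_score(y_true[idx], y_score[idx],
                              multi_class='ovr', average='macro'))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"95% CI for macro one-vs-rest AUC: [{lo:.3f}, {hi:.3f}]")
```

`multi_class='ovr', average='macro'` is exactly the per-class-versus-rest averaging described in the edit.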

What is the classifier used in scikit-learn's VotingClassifier?

I looked at the scikit-learn documentation, but it is not clear to me what sort of classification method is used under the hood of the VotingClassifier. Is it logistic regression, an SVM, or some sort of tree method?
I'm interested in ways to vary the classifier method used under the hood. If scikit-learn does not offer such an option, is there a Python package that integrates easily with scikit-learn and offers such functionality?
EDIT:
I meant the classifier method used for the second level model. I'm perfectly aware that the first level classifiers can be any type of classifier supported by scikit-learn.
The second level classifier uses the predictions of the first level classifiers as inputs. So my question is - what method does this second level classifier use? Is it logistic regression? Or something else? Can I change it?
General
The VotingClassifier is not limited to one specific method/algorithm. You can choose multiple different algorithms and combine them to one VotingClassifier. See example below:
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

iris = datasets.load_iris()
X, y = iris.data[:, 1:3], iris.target
clf1 = LogisticRegression(max_iter=1000)
clf2 = RandomForestClassifier(n_estimators=50)
clf3 = SVC()
eclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('svm', clf3)], voting='hard')
Read more about the usage here: VotingClassifier-Usage.
When it comes down to how the VotingClassifier "votes" you can either specify voting='hard' or voting='soft'. See the paragraph below for more detail.
Voting
Majority Class Labels (Majority/Hard Voting)
In majority voting, the predicted class label for a particular sample is the class label that represents the majority (mode) of the class labels predicted by each individual classifier.
E.g., if the prediction for a given sample is
classifier 1 -> class 1
classifier 2 -> class 1
classifier 3 -> class 2
the VotingClassifier (with voting='hard') would classify the sample as "class 1" based on the majority class label.
Source: scikit-learn-majority-class-labels-majority-hard-voting
Weighted Average Probabilities (Soft Voting)
In contrast to majority voting (hard voting), soft voting returns the class label as the argmax of the sum of predicted probabilities.
Specific weights can be assigned to each classifier via the weights parameter. When weights are provided, the predicted class probabilities for each classifier are collected, multiplied by the classifier weight, and averaged. The final class label is then derived from the class label with the highest average probability.
Source/Read more here: scikit-learn-weighted-average-probabilities-soft-voting
The VotingClassifier does not fit any meta-model on the output of the first-level classifiers.
It just aggregates the output of each first-level classifier: by the mode if voting is hard, or by averaging the predicted probabilities if voting is soft.
In simple terms, the VotingClassifier does not learn anything from the first level of classifiers. It only consolidates the output of the individual classifiers.
If you want your meta-model to actually learn something, try boosting models such as AdaBoost or gradient boosting.
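The two aggregation rules can be sketched in plain NumPy. The probabilities and weights below are made up for illustration:

```python
import numpy as np

# hypothetical predicted probabilities from three first-level classifiers
# for one sample over three classes
p1 = np.array([0.2, 0.5, 0.3])
p2 = np.array([0.6, 0.3, 0.1])
p3 = np.array([0.3, 0.4, 0.3])

weights = np.array([1.0, 1.0, 2.0])  # per-classifier weights

# soft voting: weighted average of probabilities, then argmax
avg = np.average(np.vstack([p1, p2, p3]), axis=0, weights=weights)
print("averaged probabilities:", avg)       # [0.35 0.4  0.25]
print("soft-vote prediction:", int(np.argmax(avg)))  # 1

# hard voting: mode of the individual argmax predictions
votes = [int(np.argmax(p)) for p in (p1, p2, p3)]   # [1, 0, 1]
print("hard-vote prediction:", max(set(votes), key=votes.count))  # 1
```

Note that no parameter is fitted anywhere in this aggregation step, which is the point of the answer above.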

What is the difference between "predict" and "predict_classes" functions in Keras?

What is the difference between the predict and predict_classes functions in Keras?
Why doesn't the Model object have a predict_classes function?
predict returns the raw output of the model (scores or probabilities), while predict_classes returns the predicted class label. Although they seem similar, there are some differences:
Imagine you are trying to predict whether a picture shows a dog or a cat (you have a classifier):
predict will return something like 0.6 for cat and 0.4 for dog.
predict_classes will return the index of the class with the maximum value; for example, if cat scores 0.6 and dog 0.4, it will return 0 when the class cat is at index 0.
Now imagine you are trying to predict house prices (you have a regressor):
predict will return the predicted price.
predict_classes does not make sense here since you do not have a classifier.
TL;DR: use predict_classes for classifiers (outputs are labels) and predict for regression (outputs are continuous).
Hope it helps!
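Note that in recent TensorFlow/Keras versions predict_classes has been removed; the equivalent is an argmax over predict's output. A minimal sketch with placeholder probabilities standing in for model.predict(x):

```python
import numpy as np

# hypothetical output of model.predict(x) for a cat-vs-dog classifier,
# one row per sample, one column per class
probs = np.array([[0.6, 0.4],
                  [0.3, 0.7]])

# predict_classes is equivalent to the argmax over the last axis
classes = np.argmax(probs, axis=-1)
print(classes)  # [0 1] -> first sample is "cat" (index 0), second is "dog"
```

This argmax form works identically for Sequential and functional-API models, which sidesteps the missing-method issue below.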
For your second question, the answer is here:
The predict_classes method is only available for the Sequential class, not for the Model class.
You can check this answer:
Why doesn't the Model object have a predict_classes function?
The answer was given here in this github issue. (Nevertheless, this is still a very complex explanation. Any help is welcomed)
For models that have more than one output, these concepts are ill-defined. And it would be a bad idea to make available something in the one-output case but not in other cases (inconsistent API).
For the Sequential model, the reason this is supported is for backwards compatibility only.
predict_classes is missing from the functional API.

Scikit-learn precision_recall_fscore_support multi-class

I am trying to get the precision, recall, and f-score for multi-class classification with scikit-learn. My classes have labels 0 and 1, but this is NOT binary classification. The scikit-learn precision_recall_fscore_support() method assumes my classification is binary and reports results only for class 1. If I convert my labels to strings, it requires pos_label. If I provide pos_label='1', it again reports results only for class 1.
How do I make it consider '0' and '1' as two independent classes and show me averaged results for both, not just class 1?
The solution is the argument pos_label=None.
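In current scikit-learn versions, the average parameter controls this instead: average=None reports per-class scores for every label, and average='macro' gives their unweighted mean. A sketch with made-up labels:

```python
from sklearn.metrics import precision_recall_fscore_support

# made-up predictions for a two-label problem
y_true = [0, 1, 0, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

# average=None -> one score per class, covering both 0 and 1
prec, rec, f1, support = precision_recall_fscore_support(
    y_true, y_pred, labels=[0, 1], average=None)
print(prec, rec, f1, support)

# average='macro' -> unweighted mean over the two classes
p, r, f, _ = precision_recall_fscore_support(y_true, y_pred, average='macro')
print(p, r, f)
```

Passing labels explicitly guarantees both classes appear in the output even if one is absent from y_pred.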

Classifying an unknown class in LIbsvm

I have a dataset of 2598 instances, in which 108 belong to Class 1 and 87 belong to Class 2. I need to classify all remaining instances as Class 1, Class 2, or not belonging to either class. Is it possible to do this with Libsvm, since I am training on Class 1 and Class 2 but need to find the instances that belong to neither class?
Any help in this regard is appreciated.
Use multi-class classification and train the 'rest' as class '3'.
The problem here is that the number of class-3 examples is too low:
class 1: 129
class 2: 239
class 3: 30
How do I define weights for these classes in libsvm? I used multi_class_learn as referred to in libsvm, but I could not assign weights there and the prediction accuracy dropped too low.
Is there any other package where I can do multi-class SVM with weights more easily?
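libsvm itself supports per-class penalty weights via the -wi training option (e.g. -w3 8 to penalize errors on class 3 more heavily). If Python is an option, scikit-learn's SVC exposes the same libsvm weights through class_weight. A sketch with made-up data mimicking the counts above (the weight values are illustrative, not tuned):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# made-up imbalanced data: 129 / 239 / 30 instances per class
X = rng.normal(size=(398, 5))
y = np.array([1] * 129 + [2] * 239 + [3] * 30)

# class_weight scales the C penalty per class, so errors on the
# minority class 3 cost more during training
clf = SVC(kernel='rbf', class_weight={1: 1.0, 2: 1.0, 3: 8.0})
clf.fit(X, y)
print(clf.predict(X[:5]))
```

class_weight='balanced' computes these weights automatically as inversely proportional to class frequencies.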
