Scikit-learn precision_recall_fscore_support multi-class

I am trying to get the precision, recall and f-score for multi-class classification with scikit-learn. My classes are labeled 0 and 1, but this is NOT binary classification. The scikit-learn precision_recall_fscore_support() method assumes that my classification is binary and reports results only for class 1. If I convert my labels to strings, then it requires pos_label. If I provide pos_label='1', then again it reports results only for class 1.
How do I make it consider '0' and '1' as two independent classes and show me averaged results for both, not just for class 1?

The solution is to pass the argument pos_label=None.
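For instance, a minimal sketch with made-up labels and predictions (depending on the scikit-learn version you may also want to set average explicitly, or pass average=None to get one score per class):

```python
from sklearn.metrics import precision_recall_fscore_support

# toy labels and predictions, just for illustration
y_true = [0, 1, 0, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

# treat 0 and 1 as two ordinary classes and average their scores
p, r, f, _ = precision_recall_fscore_support(
    y_true, y_pred, pos_label=None, average='macro')
print(p, r, f)

# average=None returns per-class precision, recall, f-score and support instead
print(precision_recall_fscore_support(y_true, y_pred, average=None))
```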

Related

Get the probability of a sample in sklearn.linear_model.LogisticRegression instead of class label

I am using sklearn.linear_model.LogisticRegression for a text classification project. With the features I have extracted, the samples mostly receive a low probability score, so when I use predict() those samples are always classified as class 0. What I want instead is to get the actual probabilities for the samples and choose the top 25%-30% based on the probability score. How do I get the probability score for a sample? In linear regression, predict() provides the actual output, but that is not the case for logistic regression. I am not restricted to sklearn; a different package would also work.
To make it clearer: I want the predict function to return the actual probability value (the output of the sigmoid function) instead of the class label, the way linear regression's predict function returns a continuous value.
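One way to get this in scikit-learn is predict_proba. A hedged sketch (the feature matrix, labels and the 25% cutoff below are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# stand-in features and labels for illustration only
X = np.random.rand(100, 5)
y = np.random.randint(0, 2, size=100)

clf = LogisticRegression().fit(X, y)

# predict_proba gives one column per class; take the column for class 1
proba_class1 = clf.predict_proba(X)[:, 1]

# keep the top 25% of samples by predicted probability of class 1
top_k = int(0.25 * len(proba_class1))
top_idx = np.argsort(proba_class1)[::-1][:top_k]
print(top_idx)
```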

Interpreting coefficientMatrix, interceptVector and Confusion matrix on multinomial logistic regression

Can anyone explain how to interpret the coefficientMatrix, interceptVector and confusion matrix of a multinomial logistic regression?
According to Spark documentation:
Multiclass classification is supported via multinomial logistic (softmax) regression. In multinomial logistic regression, the algorithm produces K sets of coefficients, or a matrix of dimension K×J where K is the number of outcome classes and J is the number of features. If the algorithm is fit with an intercept term then a length K vector of intercepts is available.
I ran an example using Spark ML 2.3.0 and got the result shown in the screenshot (not reproduced here).
If I analyse what I get:
The coefficientMatrix has dimensions 5 × 11.
The interceptVector has dimension 5.
If so, why does the confusion matrix have dimensions 4 × 4?
Also, can anyone give an interpretation of the coefficientMatrix and interceptVector?
Why do I get negative coefficients?
If 5 is the number of classes after classification, why do I get 4 rows in the confusion matrix?
EDIT
I forgot to mention that I am still a beginner in machine learning and that my search on Google didn't help, so maybe I'll get an upvote :)
Regarding the 4x4 confusion matrix: I imagine that when you split your data into test and train, there were 5 classes present in your training set and only 4 classes present in your test set. This can easily happen if the distribution of your response variable is imbalanced.
You'll want to perform a stratified split between test and train prior to modeling. If you are working with pyspark, you may find this library helpful: https://github.com/databricks/spark-sklearn
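As a rough illustration (the tiny DataFrame and its label column below are assumptions, not from the original post), an approximate stratified split can be done directly in PySpark with sampleBy:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# tiny made-up dataset standing in for the real one (5 classes, 1 feature)
df = spark.createDataFrame(
    [(float(i % 5), float(i)) for i in range(100)], ["label", "feature"])

# approximate an 80/20 stratified split so every class appears in both sets
labels = [row["label"] for row in df.select("label").distinct().collect()]
train = df.sampleBy("label", fractions={l: 0.8 for l in labels}, seed=42)
test = df.subtract(train)
```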
Now regarding negative coefficients for a multi-class logistic regression: as you mentioned, your returned coefficientMatrix has shape 5 × 11.
Spark generated five sets of coefficients, one per class, in a one-vs-all fashion. The 1st set corresponds to the model where the positive class is the 1st label and the negative class is composed of all other labels. Let's say the 1st coefficient for this model is -2.23. To interpret this coefficient we take the exponential of -2.23, which is approximately 0.10. The interpretation is then: 'With a one-unit increase in the 1st feature, we expect the odds of the positive label to be reduced by roughly 90%.'
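A quick check of that arithmetic in Python (the coefficient -2.23 is the hypothetical value used in the answer, not an actual model output):

```python
import math

coef = -2.23
odds_ratio = math.exp(coef)   # ~0.108: multiplicative change in odds per one-unit increase
reduction = 1 - odds_ratio    # ~0.89, i.e. the odds shrink by roughly 90%
print(f"odds ratio: {odds_ratio:.3f}, odds reduction: {reduction:.0%}")
```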

sklearn NB classifier: How to get the actual probabilities of individual samples?

I am making a machine learning program which classifies words into one of the following categories: Hardware, Software, None_of_these. I use the Multinomial Naive Bayes classifier from sklearn.
The function predict() gives me the prediction for every word; however, I can't see the actual probability (a float ranging from 0 to 1.0) that the word matches the predicted category. I didn't find this on sklearn's site either.
Is there a function which gives me the probability of every sample?
Never mind, I found the solution:
predict_proba(X) returns probability estimates for the test vector X.
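For example, a small sketch (the words, labels and character n-gram features below are invented for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# made-up training words and categories
words = ["motherboard", "compiler", "keyboard", "kernel", "banana"]
labels = ["Hardware", "Software", "Hardware", "Software", "None_of_these"]

vec = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3))
X = vec.fit_transform(words)
clf = MultinomialNB().fit(X, labels)

# one probability per class (columns ordered as in clf.classes_); each row sums to 1
print(clf.classes_)
print(clf.predict_proba(vec.transform(["graphics card"])))
```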

sci-kit SVM multi-class classification with unseen label

Is there any possibility to configure an SVM classifier from scikit-learn such that:
1.) the SVM classifier is trained with examples from classes 0, ..., n - 1
2.) if none of the single classifiers (one-vs-rest) delivers a positive result (class membership), then the output is a designated label n which means "none of them"?
Thanks!
By construction, the OvR multiclass wrapper sklearn.multiclass.OneVsRestClassifier selects the class with the maximum decision_function output (or the maximum predict_proba) as the predicted class. This means that there will always be a predicted class.
If you wanted, for example, to predict "none of these" when decision_function / predict_proba stay under a certain threshold for all OvR sub-problems, then you would have to write this estimator yourself, but you could take inspiration from the code of sklearn.multiclass.OneVsRestClassifier and just modify the decision logic.
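A minimal sketch of that idea (the class name, threshold and "none" label are assumptions for illustration, not part of scikit-learn's API):

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

class ThresholdedOvR:
    """Predict a designated 'none of them' label when no OvR classifier fires."""

    def __init__(self, threshold=0.0, none_label=-1):
        self.ovr = OneVsRestClassifier(LinearSVC())
        self.threshold = threshold      # minimum decision value to accept a class
        self.none_label = none_label    # e.g. n, if training labels are 0..n-1

    def fit(self, X, y):
        self.ovr.fit(X, y)
        return self

    def predict(self, X):
        # assumes 3+ training classes, so decision_function is (n_samples, n_classes)
        scores = self.ovr.decision_function(X)
        pred = self.ovr.classes_[scores.argmax(axis=1)]
        # fall back to the "none of them" label when every score is below threshold
        pred[scores.max(axis=1) < self.threshold] = self.none_label
        return pred
```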

Regarding Probability Estimates predicted by LIBSVM

I am attempting 3-class classification using an SVM classifier. How do we interpret the probability estimates predicted by LIBSVM? Are they based on the perpendicular distance of the instance from the maximal-margin hyperplane?
Kindly throw some light on the interpretation of the probability estimates predicted by the LIBSVM classifier. The parameters C and gamma are first tuned, and then probability estimates are output by using the -b option with both training and testing.
Multiclass SVM is always decomposed into several binary classifiers (typically a set of one-vs-all classifiers). Any binary SVM classifier's decision function outputs a (signed) distance to the separating hyperplane. In short, an SVM maps the input domain to a one-dimensional real number (the decision value). The predicted label is determined by the sign of the decision value. The most common technique to obtain probabilistic output from SVM models is so-called Platt scaling (see the paper by the LIBSVM authors).
Is it based on perpendicular distance of the instance from the maximal margin hyperplane?
Yes. Any classifier that outputs such a one-dimensional real value can be post-processed to yield probabilities, by calibrating a logistic function on the decision values of the classifier. This is the exact same approach as in standard logistic regression.
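As a small, hedged illustration in scikit-learn (whose SVC wraps LIBSVM; the toy dataset is made up): setting probability=True enables the same Platt-style calibration that LIBSVM's -b option performs, turning decision values into per-class probabilities.

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# toy 3-class problem standing in for the real data
X, y = make_classification(n_samples=300, n_classes=3, n_informative=4, random_state=0)

# probability=True fits a sigmoid (Platt scaling) on top of the decision values
clf = SVC(C=1.0, gamma="scale", probability=True, random_state=0).fit(X, y)

print(clf.decision_function(X[:3]))   # signed distances used by the decision rule
print(clf.predict_proba(X[:3]))       # calibrated per-class probabilities
```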
SVM performs binary classification. In order to achieve multiclass classification, libsvm decomposes the problem into several binary problems. What you get when you invoke -b is the probability related to this technique, which you can find explained here.
