I know that one can evaluate the AUC of sklearn.svm.SVC by passing probability=True to the constructor and having the SVM predict probabilities, but I'm not sure how to evaluate sklearn.svm.LinearSVC's AUC. Does anyone have any idea how?
I'd like to use LinearSVC over SVC because LinearSVC seems to train faster on data with many attributes.
You can use the CalibratedClassifierCV class to extract the probabilities. Here is an example with code.
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV
from sklearn import datasets
#Load iris dataset
iris = datasets.load_iris()
X = iris.data[:, :2] # Using only two features
y = iris.target #3 classes: 0, 1, 2
linear_svc = LinearSVC() #The base estimator
# The calibrated classifier, which can produce probability estimates
calibrated_svc = CalibratedClassifierCV(linear_svc,
                                        method='sigmoid',  # sigmoid uses Platt scaling; see the docs for other methods
                                        cv=3)
calibrated_svc.fit(X, y)
# predict
prediction_data = [[2.3, 5],
                   [4, 7]]
predicted_probs = calibrated_svc.predict_proba(prediction_data)  # important to use predict_proba
print(predicted_probs)
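Since the question is ultimately about AUC: once you have calibrated probabilities, you can pass them to roc_auc_score. A minimal sketch (reusing X, y and the imports above, holding out a test set, and using one-vs-rest averaging for the three iris classes):
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
# hold out a test set so the AUC is measured on unseen data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
calibrated_svc = CalibratedClassifierCV(LinearSVC(), method='sigmoid', cv=3)
calibrated_svc.fit(X_train, y_train)
probs = calibrated_svc.predict_proba(X_test)            # shape (n_samples, 3)
print(roc_auc_score(y_test, probs, multi_class='ovr'))  # one-vs-rest AUC over the 3 classes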
Looks like it's not possible.
https://github.com/scikit-learn/scikit-learn/issues/4820
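That said, if the goal is just the AUC, you don't actually need predict_proba: roc_auc_score also accepts the continuous scores returned by decision_function, which LinearSVC does provide. A minimal binary-classification sketch (the breast cancer data is chosen here only for illustration):
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import roc_auc_score
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LinearSVC().fit(X_train, y_train)
# decision_function returns signed distances to the hyperplane, which is
# enough for a ranking-based metric like ROC AUC
print(roc_auc_score(y_test, clf.decision_function(X_test)))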
In sklearn.model_selection.cross_validate, is there a way to output the samples/indices that were used as the test set by the CV splitter for each fold?
There's an option to specify the cross-validation generator via the cv parameter:
cv int, cross-validation generator or an iterable, default=None
Determines the cross-validation splitting strategy. Possible inputs
for cv are:
None, to use the default 5-fold cross validation,
int, to specify the number of folds in a (Stratified)KFold,
CV splitter,
An iterable yielding (train, test) splits as arrays of indices.
For int/None inputs, if the estimator is a classifier and y is either
binary or multiclass, StratifiedKFold is used. In all other cases,
KFold is used. These splitters are instantiated with shuffle=False so
the splits will be the same across calls.
If you provide it as an input to cross_validate:
from sklearn import datasets, linear_model
from sklearn.model_selection import cross_validate
from sklearn.model_selection import KFold
diabetes = datasets.load_diabetes()
X = diabetes.data[:150]
y = diabetes.target[:150]
lasso = linear_model.Lasso()
kf = KFold(5, random_state=99, shuffle=True)
cv_results = cross_validate(lasso, X, y, cv=kf)
You can extract the test indices like this:
idx = [test_index for train_index, test_index in kf.split(X)]
The first element of the list holds the test indices for the first fold, and so on.
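As a sanity check (a sketch reusing lasso, kf, X, y and cv_results from above), you can refit on each training split and score on the matching test split; the per-fold R^2 values should line up with cv_results['test_score']:
import numpy as np
manual_scores = []
for train_index, test_index in kf.split(X):
    lasso.fit(X[train_index], y[train_index])
    manual_scores.append(lasso.score(X[test_index], y[test_index]))
print(np.allclose(manual_scores, cv_results['test_score']))  # expected: True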
For a multiclass problem, I can use sklearn's RandomForestClassifier out of the box:
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
data = load_iris(as_frame=True)
y = data['target']
X = data['data']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
RF = RandomForestClassifier(random_state=42)
RF.fit(X_train, y_train)
y_score = RF.predict_proba(X_test) # shape is (30, 3)
Or I can replace the above
RF = RandomForestClassifier(random_state=42)
RF.fit(X_train, y_train)
y_score = RF.predict_proba(X_test) # shape is (30, 3)
with this to train 3 binary-classification models:
from sklearn.multiclass import OneVsRestClassifier
RF = OneVsRestClassifier(RandomForestClassifier(random_state=42))
RF.fit(X_train, y_train)
y_score = RF.predict_proba(X_test) # shape is (30, 3)
I can then go on and binarize the output and use y_score to plot ROC curves as per the official docs.
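(Roughly, the binarization step I have in mind is something like the sketch below, with names as in the code above:)
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_auc_score
# binarize the test labels into a (30, 3) indicator matrix, one column per class
y_test_bin = label_binarize(y_test, classes=[0, 1, 2])
# e.g. a macro-averaged one-vs-rest AUC from the (30, 3) score matrix
print(roc_auc_score(y_test_bin, y_score, average='macro'))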
I am unsure which approach to take: the standard RandomForest multiclass approach, or the OneVsRest approach? For some models, such as support vector classifiers, one must use OvO or OvR for multiclass problems. However, RandomForest is different, since multiclass support is native to the algorithm.
What makes me lean towards OvR in this case is that the official docs state:
ROC curves are typically used in binary classification to study the output of a classifier
and OvR is binary classification...
As an R user, I wanted to also get up to speed on scikit.
Creating linear regression models is fine, but I can't seem to find a reasonable way to get a standard summary of the regression output.
Code example:
# Linear Regression
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LinearRegression
# Load the diabetes datasets
dataset = datasets.load_diabetes()
# Fit a linear regression model to the data
model = LinearRegression()
model.fit(dataset.data, dataset.target)
print(model)
# Make predictions
expected = dataset.target
predicted = model.predict(dataset.data)
# Summarize the fit of the model
mse = np.mean((predicted-expected)**2)
print(model.intercept_, model.coef_, mse)
print(model.score(dataset.data, dataset.target))
Issues:
Seems like the intercept and coefficients are built into the model, and I just print them (second-to-last line) to see them.
What about all the other standard regression outputs like R^2, adjusted R^2, p-values, etc.? If I read the examples correctly, it seems like you have to write a function/equation for each of these and then print it.
So, is there no standard summary output for linear regression models?
Also, in my printed array of coefficients, there are no variable names associated with them; I just get the numeric array. Is there a way to print the coefficients together with the variables they belong to?
My printed output:
LinearRegression(copy_X=True, fit_intercept=True, normalize=False)
152.133484163 [ -10.01219782 -239.81908937 519.83978679 324.39042769 -792.18416163
476.74583782 101.04457032 177.06417623 751.27932109 67.62538639] 2859.69039877
0.517749425413
Notes: I started off with Linear, Ridge and Lasso and have gone through the examples. Below is for basic OLS.
There is no R-style regression summary report in sklearn. The main reason is that sklearn is used for predictive modelling / machine learning, and the evaluation criteria are based on performance on previously unseen data (such as predictive R^2 for regression).
There does exist a summary function for classification called sklearn.metrics.classification_report which calculates several types of (predictive) scores on a classification model.
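For example (a minimal sketch with made-up labels):
from sklearn.metrics import classification_report
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]
# per-class precision, recall, F1 and support, plus averaged rows
print(classification_report(y_true, y_pred))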
For a more classic statistical approach, take a look at statsmodels.
I use:
import numpy as np
import sklearn.metrics as metrics

def regression_results(y_true, y_pred):
    # Regression metrics
    explained_variance = metrics.explained_variance_score(y_true, y_pred)
    mean_absolute_error = metrics.mean_absolute_error(y_true, y_pred)
    mse = metrics.mean_squared_error(y_true, y_pred)
    mean_squared_log_error = metrics.mean_squared_log_error(y_true, y_pred)
    median_absolute_error = metrics.median_absolute_error(y_true, y_pred)
    r2 = metrics.r2_score(y_true, y_pred)
    print('explained_variance: ', round(explained_variance, 4))
    print('mean_squared_log_error: ', round(mean_squared_log_error, 4))
    print('r2: ', round(r2, 4))
    print('MAE: ', round(mean_absolute_error, 4))
    print('MSE: ', round(mse, 4))
    print('RMSE: ', round(np.sqrt(mse), 4))
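Usage with the diabetes fit from the question would then look something like this (assuming expected and predicted as defined there):
regression_results(expected, predicted)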
The statsmodels package gives a quite decent summary:
from statsmodels.api import OLS
# note: statsmodels' OLS does not add an intercept unless you add a constant column yourself
OLS(dataset.target, dataset.data).fit().summary()
You can do this using statsmodels:
import statsmodels.api as sm
X = sm.add_constant(X.ravel())
results = sm.OLS(y, X).fit()
results.summary()
results.summary() will organize the results into three tables.
You can use the following to get a summary table:
import statsmodels.api as sm
#log_clf = LogisticRegression()
log_clf = sm.Logit(y_train, X_train)
classifier = log_clf.fit()
y_pred = classifier.predict(X_test)
print(classifier.summary2())
Use model.summary() after predict. Note that sklearn's LinearRegression does not provide a summary() method, so this only works if model is a statsmodels results object (e.g. from sm.OLS(...).fit()).
# Linear Regression
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LinearRegression
# load the diabetes datasets
dataset = datasets.load_diabetes()
# fit a linear regression model to the data
model = LinearRegression()
model.fit(dataset.data, dataset.target)
print(model)
# make predictions
expected = dataset.target
predicted = model.predict(dataset.data)
# >>>>>>> Print out the statistics <<<<<<<<<<<<<
# (requires a statsmodels results object; sklearn's LinearRegression has no summary())
model.summary()
# summarize the fit of the model
mse = np.mean((predicted-expected)**2)
print(model.intercept_, model.coef_, mse)
print(model.score(dataset.data, dataset.target))
I'm performing classification on a dataset and I'm using cross-validation for modelling. Cross-validation gives the accuracy for each fold, but since the classes are imbalanced, accuracy is not the right measure. I want to get ROC AUC instead of accuracy.
cross_val_score supports a large number of scoring options.
The exhaustive list is mentioned here.
['accuracy', 'recall_samples', 'f1_macro', 'adjusted_rand_score',
'recall_weighted', 'precision_weighted', 'recall_macro',
'homogeneity_score', 'neg_mean_squared_log_error', 'recall_micro',
'f1', 'neg_log_loss', 'roc_auc', 'average_precision', 'f1_weighted',
'r2', 'precision_macro', 'explained_variance', 'v_measure_score',
'neg_mean_absolute_error', 'completeness_score',
'fowlkes_mallows_score', 'f1_micro', 'precision_samples',
'mutual_info_score', 'neg_mean_squared_error', 'balanced_accuracy',
'neg_median_absolute_error', 'precision_micro',
'normalized_mutual_info_score', 'adjusted_mutual_info_score',
'precision', 'f1_samples', 'brier_score_loss', 'recall']
Here is an example showing how to use roc_auc.
>>> from sklearn import datasets, linear_model
>>> from sklearn.model_selection import cross_val_score
>>> import numpy as np
>>> X, y = datasets.load_breast_cancer(return_X_y=True)
>>> model = linear_model.SGDClassifier(max_iter=50, random_state=7)
>>> print(cross_val_score(model, X, y, cv=5, scoring = 'roc_auc'))
[0.96382429 0.96996124 0.95573441 0.96646546 0.91113347]
I'm trying to recompute the grid.best_score_ I obtained on my own data, without success... So I tried it on a conventional dataset, but with no more success. Here is the code:
from sklearn import datasets
from sklearn import linear_model
from sklearn.cross_validation import ShuffleSplit
from sklearn import grid_search
from sklearn.metrics import r2_score
import numpy as np
lr = linear_model.LinearRegression()
boston = datasets.load_boston()
target = boston.target
param_grid = {'fit_intercept':[False]}
cv = ShuffleSplit(target.size, n_iter=5, test_size=0.30, random_state=0)
grid = grid_search.GridSearchCV(lr, param_grid, cv=cv)
grid.fit(boston.data, target)
# got cv score computed by gridSearchCV :
print(grid.best_score_)
0.677708680059
# now try a custom computation of cv score
cv_scores = []
for (train, test) in cv:
y_true = target[test]
y_pred = grid.best_estimator_.predict(boston.data[test,:])
cv_scores.append(r2_score(y_true, y_pred))
print(np.mean(cv_scores))
0.703865991851
I can't see why it's different. GridSearchCV is supposed to use the scorer from LinearRegression, which is the R^2 score. Maybe the way I compute the CV score is not the one used to compute best_score_... I'm asking here before going through the GridSearchCV code.
Unless refit=False is passed to the GridSearchCV constructor, the winning estimator is refit on the entire dataset at the end of fit. best_score_ is the average cross-validation score of the winning configuration, while best_estimator_ is that configuration refit on all the data; scoring best_estimator_ on the CV test splits is therefore optimistic, because those samples were part of its training data. To reproduce best_score_, fit a fresh estimator on each fold's training split only:
lr2 = linear_model.LinearRegression(fit_intercept=False)
scores2 = [lr2.fit(boston.data[train,:], target[train]).score(boston.data[test,:], target[test])
for train, test in cv]
print(np.mean(scores2))
Will print 0.67770868005943297.
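Equivalently (a sketch using the same old-style imports as the question), cross_val_score over the same cv with the winning fit_intercept=False should reproduce grid.best_score_:
from sklearn.cross_validation import cross_val_score
# mean R^2 over the same ShuffleSplit folds, with the winning parameter setting
scores = cross_val_score(linear_model.LinearRegression(fit_intercept=False),
                         boston.data, target, cv=cv)
print(np.mean(scores))  # should match grid.best_score_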