How to compare the slopes of two regression lines? - statistics

Assume that you have two numpy arrays, blue_y and light_blue_y, defined as follows:
import numpy as np
x=np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
blue_y = np.array([0.94732871, 0.85729212, 0.86039587, 0.89169027, 0.90817473, 0.93606619, 0.93890423, 1., 0.97783521, 0.93035495])
light_blue_y = np.array([0.81346023, 0.72248919, 0.72406021, 0.74823437, 0.77759055, 0.81167983, 0.84050726, 0.90357904, 0.97354455, 1. ])
blue_m, blue_b = np.polyfit(x, blue_y, 1)
light_blue_m, light_blue_b = np.polyfit(x, light_blue_y, 1)
After fitting linear regression lines to these two numpy arrays I get the following slopes:
>>> blue_m
0.009446010787878795
>>> light_blue_m
0.028149985151515147
How can I compare these two slopes and show whether or not they are statistically different from each other?

import numpy as np
import statsmodels.api as sm
x=np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
blue_y = np.array([0.94732871, 0.85729212, 0.86039587, 0.89169027, 0.90817473, 0.93606619, 0.93890423, 1., 0.97783521, 0.93035495])
light_blue_y = np.array([0.81346023, 0.72248919, 0.72406021, 0.74823437, 0.77759055, 0.81167983, 0.84050726, 0.90357904, 0.97354455, 1. ])
For simplicity, we can look at the difference between the two series: if they had the same constant and the same slope, that should show up in a linear regression of the difference.
y = blue_y-light_blue_y
# Fit a linear regression to the difference
mod = sm.OLS(y, sm.add_constant(x))
res = mod.fit()
Printing the summary,
print(res.summary())
gives the following output:
==============================================================================
Dep. Variable:                      y   R-squared:                       0.647
Model:                            OLS   Adj. R-squared:                  0.603
Method:                 Least Squares   F-statistic:                     14.67
Date:                Mon, 08 Feb 2021   Prob (F-statistic):            0.00502
Time:                        23:12:25   Log-Likelihood:                 18.081
No. Observations:                  10   AIC:                            -32.16
Df Residuals:                       8   BIC:                            -31.56
Df Model:                           1
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          0.1775      0.026      6.807      0.000       0.117       0.238
x1            -0.0187      0.005     -3.830      0.005      -0.030      -0.007
==============================================================================
Omnibus:                        0.981   Durbin-Watson:                   0.662
Prob(Omnibus):                  0.612   Jarque-Bera (JB):                0.780
Interpretation:
The constant and the slope are both significantly different from zero (p-values 0.000 and 0.005), so blue_y and light_blue_y differ in both the constant term and the slope. Note that the x1 coefficient, -0.0187, is exactly the difference in slopes: blue_m - light_blue_m = 0.0094 - 0.0281 = -0.0187.
Alternative:
A more traditional approach is to run the linear regression for both cases and then carry out your own F-test on the slopes, as described here:
https://towardsdatascience.com/fisher-test-for-regression-analysis-1e1687867259#:~:text=The%20F%2Dtest%2C%20when%20used,its%20use%20in%20linear%20regression.
https://www.real-statistics.com/regression/hypothesis-testing-significance-regression-line-slope/comparing-slopes-two-independent-samples/
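For illustration, here is a minimal sketch of that pooled-regression approach (assuming the x, blue_y and light_blue_y arrays from the question): stack both series, add a group indicator and an x-by-group interaction, and then F-test whether the interaction term (the slope difference) is zero.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

x = np.arange(10)
blue_y = np.array([0.94732871, 0.85729212, 0.86039587, 0.89169027, 0.90817473,
                   0.93606619, 0.93890423, 1., 0.97783521, 0.93035495])
light_blue_y = np.array([0.81346023, 0.72248919, 0.72406021, 0.74823437, 0.77759055,
                         0.81167983, 0.84050726, 0.90357904, 0.97354455, 1.])

df = pd.DataFrame({
    'x': np.concatenate([x, x]),
    'y': np.concatenate([blue_y, light_blue_y]),
    'group': [0] * len(x) + [1] * len(x),   # 0 = blue, 1 = light blue
})

# 'group' shifts the intercept, 'x:group' shifts the slope of the second series
model = ols('y ~ x + group + x:group', data=df).fit()
print(model.f_test('x:group = 0'))  # H0: both series have the same slope
A small p-value here would indicate that the two slopes differ.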

Related

How to statistically compare the intercept and slope of two different linear regression models in Python?

I have two series of data as below. I want to create an OLS linear regression model for df1 and another OLS linear regression model for df2. And then statistically test if the y-intercepts of these two linear regression models are statistically different (p<0.05), and also test if the slopes of these two linear regression models are statistically different (p<0.05). I did the following
import numpy as np
import math
import matplotlib.pyplot as plt
import pandas as pd
import statsmodels.api as sm
np.inf == float('inf')
data1 = [1, 3, 45, 0, 25, 13, 43]
data2 = [1, 1, 1, 1, 1, 1, 1]
df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
fig, ax = plt.subplots()
df1.plot(figsize=(20, 10), linewidth=5, fontsize=18, ax=ax, kind='line')
df2.plot(figsize=(20, 10), linewidth=5, fontsize=18, ax=ax, kind='line')
plt.show()
model1 = sm.OLS(df1, df1.index)
model2 = sm.OLS(df2, df2.index)
results1 = model1.fit()
results2 = model2.fit()
print(results1.summary())
print(results2.summary())
Results #1
                               OLS Regression Results
=======================================================================================
Dep. Variable:                      0   R-squared (uncentered):              0.625
Model:                            OLS   Adj. R-squared (uncentered):         0.563
Method:                 Least Squares   F-statistic:                         10.02
Date:                Mon, 01 Mar 2021   Prob (F-statistic):                 0.0194
Time:                        20:34:34   Log-Likelihood:                    -29.262
No. Observations:                   7   AIC:                                 60.52
Df Residuals:                       6   BIC:                                 60.47
Df Model:                           1
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
x1             5.6703      1.791      3.165      0.019       1.287      10.054
==============================================================================
Omnibus:                          nan   Durbin-Watson:                   2.956
Prob(Omnibus):                    nan   Jarque-Bera (JB):                0.769
Skew:                           0.811   Prob(JB):                        0.681
Kurtosis:                       2.943   Cond. No.                         1.00
==============================================================================
Results #2
                               OLS Regression Results
=======================================================================================
Dep. Variable:                      0   R-squared (uncentered):              0.692
Model:                            OLS   Adj. R-squared (uncentered):         0.641
Method:                 Least Squares   F-statistic:                         13.50
Date:                Mon, 01 Mar 2021   Prob (F-statistic):                 0.0104
Time:                        20:39:14   Log-Likelihood:                    -5.8073
No. Observations:                   7   AIC:                                 13.61
Df Residuals:                       6   BIC:                                 13.56
Df Model:                           1
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
x1             0.2308      0.063      3.674      0.010       0.077       0.384
==============================================================================
Omnibus:                          nan   Durbin-Watson:                   0.148
Prob(Omnibus):                    nan   Jarque-Bera (JB):                0.456
Skew:                           0.000   Prob(JB):                        0.796
Kurtosis:                       1.750   Cond. No.                         1.00
==============================================================================
This is as far as I have got, but I think something is wrong. Neither of these regression outcomes seems to show the y-intercept. Also, I expect the coef in results #2 to be 0, since I expect the slope to be 0 when all the values are 1, but the result shows 0.2308. Any suggestions or guiding material will be greatly appreciated.
In statsmodels an OLS model does not fit an intercept by default (see the docs).
exog array_like
A nobs x k array where nobs is the number of observations and k is the number of regressors. An intercept is not included by default and should be added by the user. See statsmodels.tools.add_constant.
The documentation on the exog argument of the OLS constructor suggests using this feature of the tools module in order to add an intercept to the data.
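For example, a minimal sketch (reusing the data from the question) of how add_constant changes the fit so the summary reports an intercept (const) as well as a slope:
import pandas as pd
import statsmodels.api as sm

df1 = pd.DataFrame([1, 3, 45, 0, 25, 13, 43])
df2 = pd.DataFrame([1, 1, 1, 1, 1, 1, 1])

X1 = sm.add_constant(df1.index.to_numpy(dtype=float))  # prepends a column of ones
X2 = sm.add_constant(df2.index.to_numpy(dtype=float))

results1 = sm.OLS(df1, X1).fit()
results2 = sm.OLS(df2, X2).fit()
print(results1.params)  # const (intercept) and x1 (slope)
print(results2.params)  # slope is ~0 for the all-ones series, as expected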
To perform a hypothesis test on the values of the coefficients, this question provides some guidance. Unfortunately, that only works if the variances of the residual errors are the same.
We can start by checking whether the residuals of the two models have the same variance (using Levene's test) and ignore the coefficients of the regression models for now.
import numpy as np
import pandas as pd
from scipy.stats import levene
from statsmodels.tools import add_constant
from statsmodels.formula.api import ols ## use formula api to make the tests easier
np.inf == float('inf')
data1 = [1, 3, 45, 0, 25, 13, 43]
data2 = [1, 1, 1, 1, 1, 1, 1]
df1 = add_constant(pd.DataFrame(data1)) ## add a constant column so we fit an intercept
df1 = df1.reset_index() ## just doing this to make the index a column of the data frame
df1 = df1.rename(columns={'index':'x', 0:'y'}) ## the old index will now be called x and the old values are now y
df2 = add_constant(pd.DataFrame(data2)) ## this does nothing because the y column is already a constant
df2 = df2.reset_index()
df2 = df2.rename(columns={'index':'x', 0:'y'}) ## the old index will now be called x and the old values are now y
formula1 = 'y ~ x + const' ## define formulae
formula2 = 'y ~ x'
model1 = ols(formula1, df1).fit()
model2 = ols(formula2, df2).fit()
print(levene(model1.resid, model2.resid))
The output of the levene test looks like this:
LeveneResult(statistic=7.317386741297884, pvalue=0.019129208414097015)
So we can reject the null hypothesis that the residual distributions have the same variance at alpha=0.05.
There is no point in testing the linear regression coefficients now, because the residuals don't have the same distributions. It is important to remember that in a regression problem it doesn't make sense to compare the regression coefficients independent of the data they are fit on. The distribution of the regression coefficients depends on the distribution of the data.
Let's see what happens when we try the proposed test anyway. Combining the instructions above with this method from the OLS package yields the following code:
## stack the data and add the indicator variable as described in:
## stackexchange question:
df1['c'] = 1 ## add indicator variable that tags the first group of points
df_all = pd.concat([df1, df2], ignore_index=True).drop('const', axis=1) ## DataFrame.append was removed in pandas 2.0, so use pd.concat
df_all = df_all.rename(columns={'index':'x', 0:'y'}) ## the old index will now be called x and the old values are now y
df_all = df_all.fillna(0) ## a bunch of the values are missing in the indicator columns after stacking
df_all['int'] = df_all['x'] * df_all['c'] # construct the interaction column
print(df_all) ## look at the data
formula = 'y ~ x + c + int' ## define the linear model using the formula api
result = ols(formula, df_all).fit()
hypotheses = '(c = 0), (int = 0)'
f_test = result.f_test(hypotheses)
print(f_test)
The result of the f-test looks like this:
<F test: F=array([[4.01995453]]), p=0.05233934453138028, df_denom=10, df_num=2>
The result of the f-test means that we just barely fail to reject the null hypotheses specified in the hypotheses variable, namely that the coefficient of the indicator variable 'c' and the coefficient of the interaction term 'int' are both zero.
From this example it is clear that the f test on the regression coefficients is not very powerful if the residuals do not have the same variance.
Note that the given example has so few points that it is hard for the statistical tests to clearly distinguish the two cases, even though to the human eye they are very different. This is because the statistical tests are designed to make few assumptions about the data, but those assumptions hold up better the more data you have. When testing statistical methods to see if they accord with your expectations, it is often best to start by constructing large samples with little noise and then see how well the methods work as your data sets get smaller and noisier.
For the sake of completeness, I will construct an example where the Levene test fails to distinguish the two regression models but the f test succeeds. The idea is to compare the regression of a noisy data set with the regression of its reverse. The distribution of residual errors will be the same, but the relationship between the variables will be very different. Note that this would not work if we simply reversed the noisy dataset from the previous example, because that data is so noisy the f test cannot distinguish between the positive and negative slope.
import numpy as np
import pandas as pd
from scipy.stats import levene
from statsmodels.tools import add_constant
from statsmodels.formula.api import ols ## use formula api to make the tests easier
n_samples = 6
noise = np.random.randn(n_samples) * 5
data1 = np.linspace(0, 30, n_samples) + noise
data2 = data1[::-1] ## reverse the time series
df1 = add_constant(pd.DataFrame(data1)) ## add a constant column so we fit an intercept
df1 = df1.reset_index() ## just doing this to make the index a column of the data frame
df1 = df1.rename(columns={'index':'x', 0:'y'}) ## the old index will now be called x and the old values are now y
df2 = add_constant(pd.DataFrame(data2)) ## add a constant column so we fit an intercept
df2 = df2.reset_index()
df2 = df2.rename(columns={'index':'x', 0:'y'}) ## the old index will now be called x and the old values are now y
formula1 = 'y ~ x + const' ## define formulae
formula2 = 'y ~ x'
model1 = ols(formula1, df1).fit()
model2 = ols(formula2, df2).fit()
print(levene(model1.resid, model2.resid))
## stack the data and add the indicator variable as described in:
## stackexchange question:
df1['c'] = 1 ## add indicator variable that tags the first group of points
df_all = pd.concat([df1, df2], ignore_index=True).drop('const', axis=1) ## DataFrame.append was removed in pandas 2.0, so use pd.concat
df_all = df_all.rename(columns={'index':'x', 0:'y'}) ## the old index will now be called x and the old values are now y
df_all = df_all.fillna(0) ## a bunch of the values are missing in the indicator columns after stacking
df_all['int'] = df_all['x'] * df_all['c'] # construct the interaction column
print(df_all) ## look at the data
formula = 'y ~ x + c + int' ## define the linear model using the formula api
result = ols(formula, df_all).fit()
hypotheses = '(c = 0), (int = 0)'
f_test = result.f_test(hypotheses)
print(f_test)
The results of the Levene test and the f test follow:
LeveneResult(statistic=5.451203655948632e-31, pvalue=1.0)
<F test: F=array([[10.62788052]]), p=0.005591319998324387, df_denom=8, df_num=2>
A final note: since we are doing multiple comparisons on this data and stopping if we get a significant result (i.e. if the Levene test rejects the null we quit; if it doesn't, we do the f test), this is a stepwise hypothesis test and we are actually inflating our false-positive error rate. We should correct our p-values for multiple comparisons before we report our results. Note that the f test is already doing this for the hypotheses we test about the regression coefficients. I am a bit fuzzy on the underlying assumptions of these testing procedures, so I am not 100% sure that you are better off making the following correction, but keep it in mind in case you feel you are getting false positives too often.
from statsmodels.sandbox.stats.multicomp import multipletests
print(multipletests([1, .005591], .05)) ## correct our p-values given that we did two comparisons
The output looks like this:
(array([False, True]), array([1. , 0.01115074]), 0.025320565519103666, 0.025)
This means we rejected the second null hypothesis under the correction, and the corrected p-values look like [1., 0.011150]. The last two values are corrections to your significance level under two different correction methods.
I hope this helps anyone trying to do this type of work. If anyone has anything to add I would welcome comments. This isn't my area of expertise so I could be making some mistakes.

Modifying old GaussianProcess example to run with GaussianProcessRegressor

I have an example from a data science book I am trying to run in a Jupyter notebook. The code snippet looks like this:
from sklearn.gaussian_process import GaussianProcess
# define the model and draw some data
model = lambda x: x * np.sin(x)
xdata = np.array([1, 3, 5, 6, 8])
ydata = model(xdata)
# Compute the Gaussian process fit
gp = GaussianProcess(corr='cubic', theta0=1e-2, thetaL=1e-4, thetaU=1E-1,
                     random_start=100)
gp.fit(xdata[:, np.newaxis], ydata)
xfit = np.linspace(0, 10, 1000)
yfit, MSE = gp.predict(xfit[:, np.newaxis], eval_MSE=True)
dyfit = 2 * np.sqrt(MSE) # 2*sigma ~ 95% confidence region
Since GaussianProcess has been deprecated and replaced with GaussianProcessRegressor, I tried to fix the code snippet to look like this:
from sklearn.gaussian_process import GaussianProcessRegressor
# define the model and draw some data
model = lambda x: x * np.sin(x)
xdata = np.array([1, 3, 5, 6, 8])
ydata = model(xdata)
# Compute the Gaussian process fit
gp = GaussianProcessRegressor(random_state=100)
gp.fit(xdata[:, np.newaxis], ydata)
xfit = np.linspace(0, 10, 1000)
yfit, MSE = gp.predict(xfit[:, np.newaxis])
dyfit = 2 * np.sqrt(MSE) # 2*sigma ~ 95% confidence region
but I get a ValueError
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-47-c04ac57d1897> in <module>
11
12 xfit = np.linspace(0, 10, 1000)
---> 13 yfit, MSE = gp.predict(xfit[:, np.newaxis])
14 dyfit = 2 * np.sqrt(MSE) # 2*sigma ~ 95% confidence region
ValueError: too many values to unpack (expected 2)
I'm a bit unsure why the predict function complains here.
The error has the answer.
At yfit, MSE = gp.predict(xfit[:, np.newaxis]) you are trying to assign the result of predict to two variables, while predict by default only returns a single numpy.ndarray.
To solve this issue, run
yfit = gp.predict(xfit[:, np.newaxis])
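If you also want the uncertainty band that eval_MSE=True used to provide, GaussianProcessRegressor.predict accepts return_std=True and returns the predictive standard deviation (not the MSE), so no square root is needed. A minimal sketch along the lines of the original example:
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

model = lambda x: x * np.sin(x)
xdata = np.array([1, 3, 5, 6, 8])
ydata = model(xdata)

gp = GaussianProcessRegressor(random_state=100)
gp.fit(xdata[:, np.newaxis], ydata)

xfit = np.linspace(0, 10, 1000)
# return_std=True returns the predictive standard deviation alongside the mean
yfit, sigma = gp.predict(xfit[:, np.newaxis], return_std=True)
dyfit = 2 * sigma  # ~95% confidence region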

What equation is Statsmodels OLS using to create example fit line

I'm a novice at statistical analysis and was looking into using statsmodels. As part of my research I ran into the following set of examples.
The section "OLS non-linear curve but linear in parameters" is confusing the heck out of me. Example follows:
import numpy as np
import statsmodels.api as sm

np.random.seed(9876789)
nsample = 50
sig = 0.5
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size = nsample)
res = sm.OLS(y, X).fit()
print(res.summary())
This shows the following summary results:
                            OLS Regression Results
==============================================================================
Dep. Variable:                      y   R-squared:                       0.933
Model:                            OLS   Adj. R-squared:                  0.928
Method:                 Least Squares   F-statistic:                     211.8
Date:                Tue, 28 Feb 2017   Prob (F-statistic):           6.30e-27
Time:                        21:33:30   Log-Likelihood:                -34.438
No. Observations:                  50   AIC:                             76.88
Df Residuals:                      46   BIC:                             84.52
Df Model:                           3
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
x1             0.4687      0.026     17.751      0.000       0.416       0.522
x2             0.4836      0.104      4.659      0.000       0.275       0.693
x3            -0.0174      0.002     -7.507      0.000      -0.022      -0.013
const          5.2058      0.171     30.405      0.000       4.861       5.550
==============================================================================
Omnibus:                        0.655   Durbin-Watson:                   2.896
Prob(Omnibus):                  0.721   Jarque-Bera (JB):                0.360
Skew:                           0.207   Prob(JB):                        0.835
Kurtosis:                       3.026   Cond. No.                         221.
==============================================================================
When you plot all of this you get:
Plot of Data, Fit and True Line
What's confusing me is I can't figure out how the fit comes out of the coefficients shown in the summary table. My understanding is that these coefficients for the linear fit should correspond to an equation of the form X1 * x^3 + X2 * x^2 + X3 * x + Const, but that does not result in the curve seen. My next thought was that it might be building the equation from the columns of the X matrix, therefore something like X1 * x + X2 * sin(x) + X3 * (x-5)^2 + Const. That also does not work.
What does seem to work is a polynomial fit with a degree of around 10, which I found using np.polyfit(x, y, 10). (The coefficients of which are not similar to the OLS ones, plus there are 6 more.)
So my question is: what equation is OLS using to produce the predicted values? How do the coefficients relate to it? When no equation is specified (assuming it's using something different from normal polynomial equations), how does it determine what to use, or what's the best fit?
One note: I've figured out that I can force it to do what I was expecting by changing the x values used to a different matrix via np.vander():
X = np.vander(x, 4)
This produces results in line with what I was expecting and in line with np.polyfit.
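For what it's worth, here is a small sketch (reproducing the example above) that checks how the coefficients relate to the fitted curve: the predicted values are simply the design matrix times the estimated coefficients, so each column of X (x, sin(x), (x-5)^2 and the constant) acts as a basis function.
import numpy as np
import statsmodels.api as sm

np.random.seed(9876789)
nsample = 50
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x - 5)**2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.]
y = np.dot(X, beta) + 0.5 * np.random.normal(size=nsample)

res = sm.OLS(y, X).fit()
# The fitted curve is X @ res.params, i.e. roughly
# 0.4687*x + 0.4836*sin(x) - 0.0174*(x - 5)**2 + 5.2058
print(np.allclose(X @ res.params, res.fittedvalues))  # True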

calculate precision and recall in a confusion matrix

Suppose I have a confusion matrix like the one below. How can I calculate precision and recall?
First, your matrix is arranged upside down.
You want to arrange your labels so that the true positives sit on the diagonal [(0,0), (1,1), (2,2)]; this is the arrangement you're going to find with confusion matrices generated by sklearn and other packages.
Once we have things sorted in the right direction, we can take a page from this answer and say that:
True positives are on the diagonal.
False positives are the column-wise sums, without the diagonal.
False negatives are the row-wise sums, without the diagonal.
Then we take the formulas for precision and recall from the sklearn docs.
And put it all into code:
import numpy as np
cm = np.array([[2,1,0], [3,4,5], [6,7,8]])
true_pos = np.diag(cm)
false_pos = np.sum(cm, axis=0) - true_pos
false_neg = np.sum(cm, axis=1) - true_pos
precision = np.sum(true_pos / (true_pos + false_pos))
recall = np.sum(true_pos / (true_pos + false_neg))
Since we remove the true positives to define false_positives/negatives only to add them back... we can simplify further by skipping a couple of steps:
true_pos = np.diag(cm)
precision = np.sum(true_pos / np.sum(cm, axis=0))
recall = np.sum(true_pos / np.sum(cm, axis=1))
I don't think you need the summation at the end. Without the summation, your method is correct; it gives precision and recall for each class.
If you intend to calculate average precision and recall, then you have two options: micro- and macro-averaging.
Read more here http://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
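For example, a minimal sketch of both averaging options, using the cm from the snippet above:
import numpy as np

cm = np.array([[2, 1, 0], [3, 4, 5], [6, 7, 8]])
true_pos = np.diag(cm)
precision_per_class = true_pos / np.sum(cm, axis=0)
recall_per_class = true_pos / np.sum(cm, axis=1)

# Macro average: unweighted mean of the per-class scores
macro_precision = precision_per_class.mean()
macro_recall = recall_per_class.mean()

# Micro average: pool the counts first; for a full multiclass confusion
# matrix this reduces to overall accuracy, diag(cm).sum() / cm.sum()
micro_precision = true_pos.sum() / cm.sum()
micro_recall = true_pos.sum() / cm.sum()
print(macro_precision, macro_recall, micro_precision, micro_recall)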
For the sake of completeness and future reference: given a list of ground-truth labels (gt) and predictions (pd), the following code snippet computes the confusion matrix and then calculates precision and recall.
from sklearn.metrics import confusion_matrix
gt = [1,1,2,2,1,0]
pd = [1,1,1,1,2,0]
cm = confusion_matrix(gt, pd)
#rows = gt, col = pred
#compute tp, tp_and_fn and tp_and_fp w.r.t all classes
tp_and_fn = cm.sum(1)
tp_and_fp = cm.sum(0)
tp = cm.diagonal()
precision = tp / tp_and_fp
recall = tp / tp_and_fn
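As a quick cross-check (using the gt and pd lists above), sklearn can compute the same per-class values directly from the labels:
from sklearn.metrics import precision_score, recall_score

gt = [1, 1, 2, 2, 1, 0]
pd = [1, 1, 1, 1, 2, 0]
print(precision_score(gt, pd, average=None))  # one value per class
print(recall_score(gt, pd, average=None))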
Given:
hypothetical confusion matrix (cm)
cm =
[[ 970 1 2 1 1 6 10 0 5 0]
[ 0 1105 7 3 1 6 0 3 16 0]
[ 9 14 924 19 18 3 13 12 24 4]
[ 3 10 35 875 2 34 2 14 19 19]
[ 0 3 6 0 903 0 9 5 4 32]
[ 9 6 4 28 10 751 17 5 24 9]
[ 7 2 6 0 9 13 944 1 7 0]
[ 3 11 17 3 16 3 0 975 2 34]
[ 5 38 10 16 7 28 5 4 830 20]
[ 5 3 5 13 39 10 2 34 5 853]]
Goal:
Precision and recall for each class, using map() to perform the element-wise division.
from operator import truediv
import numpy as np
tp = np.diag(cm)
prec = list(map(truediv, tp, np.sum(cm, axis=0)))
rec = list(map(truediv, tp, np.sum(cm, axis=1)))
print ('Precision: {}\nRecall: {}'.format(prec, rec))
Result:
Precision: [0.959, 0.926, 0.909, 0.913, 0.896, 0.880, 0.941, 0.925, 0.886, 0.877]
Recall: [0.972, 0.968, 0.888, 0.863, 0.937, 0.870, 0.954, 0.916, 0.861, 0.880]
Please note: 10 classes, 10 precisions and 10 recalls.
Agreeing with gruangly and EuWern, I modified PabTorre's solution accordingly to generate precision and recall per class.
Also, given my use case (NER) where a model could:
Never predict a class that is present in the input text (i.e. a column of zeros, i.e. TP:0, FP:0, FN: all), causing a nan in the precision array, or
Predict a class that is completely absent in the input text (i.e. a row of zeros, i.e. TP:0, FN:0, FP: all), causing a nan in the recall array...
I wrap the array with numpy.nan_to_num() to convert any nan to zero. This is not a mathematical decision, but a per-use-case, functional decision about how to handle never-predicted or never-occurring classes.
import numpy
confusion_matrix = numpy.array([
[ 5, 0, 0, 0, 0, 3],
[ 0, 2, 0, 1, 0, 5],
[ 0, 0, 0, 3, 5, 7],
[ 0, 0, 0, 9, 0, 0],
[ 0, 0, 0, 9, 32, 3],
[ 0, 0, 0, 0, 0, 0]
])
true_positives = numpy.diag(confusion_matrix)
false_positives = numpy.sum(confusion_matrix, axis=0) - true_positives
false_negatives = numpy.sum(confusion_matrix, axis=1) - true_positives
precision = numpy.nan_to_num(numpy.divide(true_positives, (true_positives + false_positives)))
recall = numpy.nan_to_num(numpy.divide(true_positives, (true_positives + false_negatives)))
print(true_positives) # [ 5 2 0 9 32 0 ]
print(false_positives) # [ 0 0 0 13 5 18 ]
print(false_negatives) # [ 3 6 15 0 12 0 ]
print(precision) # [1. 1. 0. 0.40909091 0.86486486 0. ]
print(recall) # [0.625 0.25 0. 1. 0.72727273 0. ]
import numpy as np

n_classes = 3
cm = np.array([[0, 1, 2],
               [5, 4, 3],
               [8, 7, 6]])
sp = []
f1 = []
gm = []
sens = []
acc = []
for c in range(n_classes):
    tp = cm[c, c]
    fp = sum(cm[:, c]) - cm[c, c]
    fn = sum(cm[c, :]) - cm[c, c]
    tn = sum(np.delete(sum(cm) - cm[c, :], c))

    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    specificity = tn / (tn + fp)
    f1_score = 2 * ((precision * recall) / (precision + recall))
    g_mean = np.sqrt(recall * specificity)

    sp.append(specificity)
    f1.append(f1_score)
    gm.append(g_mean)
    sens.append(recall)
    acc.append(tp)

    print("for class {}: recall {}, specificity {}, precision {}, f1 {}, gmean {}".format(
        c, round(recall, 4), round(specificity, 4), round(precision, 4), round(f1_score, 4), round(g_mean, 4)))
print("sp: ", np.average(sp))
print("f1: ", np.average(f1))
print("gm: ", np.average(gm))
print("sens: ", np.average(sens))
print("accuracy: ", np.sum(acc)/np.sum(cm))

Scikit Learn Logistic Regression confusion

I'm having some trouble understanding scikit-learn's LogisticRegression() method. Here's a simple example:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
# Create a sample dataframe
data = [['Age', 'ZepplinFan'], [13, 0], [25, 0], [40, 1], [51, 0], [55, 1], [58, 1]]
columns=data.pop(0)
df = pd.DataFrame(data=data, columns=columns)
   Age  ZepplinFan
0   13           0
1   25           0
2   40           1
3   51           0
4   55           1
5   58           1
# Fit Logistic Regression
lr = LogisticRegression()
lr.fit(X=df[['Age']], y = df['ZepplinFan'])
# View the coefficients
lr.intercept_ # returns -0.56333276
lr.coef_ # returns 0.02368826
# Predict for new values
xvals = np.arange(-10,70,1)
predictions = lr.predict_proba(X=xvals[:,np.newaxis])
probs = [y for [x, y] in predictions]
# Plot the fitted model
plt.plot(xvals, probs)
plt.scatter(df.Age.values, df.ZepplinFan.values)
plt.show()
Obviously this doesn't appear to be a good fit. Furthermore, when I do this exercise in R I get different coefficients and a model that makes more sense.
lapply(c("data.table","ggplot2"), require, character.only=T)
dt <- data.table(Age=c(13, 25, 40, 51, 55, 58), ZepplinFan=c(0, 0, 1, 0, 1, 1))
mylogit <- glm(ZepplinFan ~ Age, data = dt, family = "binomial")
newdata <- data.table(Age=seq(10,70,1))
newdata[, ZepplinFan:=predict(mylogit, newdata=newdata, type="response")]
mylogit$coeff
(Intercept)         Age
    -4.8434      0.1148
ggplot()+geom_point(data=dt, aes(x=Age, y=ZepplinFan))+geom_line(data=newdata, aes(x=Age, y=ZepplinFan))
What am I missing here?
The problem you are facing is related to the fact that scikit-learn uses regularized logistic regression. The regularization term allows for controlling the trade-off between the fit to the data and generalization to future unknown data. The parameter C is used to control the regularization; in your case:
lr = LogisticRegression(C=100)
will generate what you are looking for.
As you have discovered, changing the value of the intercept_scaling parameter also achieves a similar effect. The reason is again regularization, or rather how it affects the estimation of the bias (intercept) of the regression. A larger intercept_scaling parameter will effectively reduce the impact of regularization on the bias.
For more information about the implementation of LR and solvers used by scikit-learn, check: http://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
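For a rough feel of how C changes the fit, here is a minimal sketch reusing the data from the question; as C grows and the regularization weakens, the coefficients should move toward the unregularized R glm() estimates quoted above:
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({'Age': [13, 25, 40, 51, 55, 58],
                   'ZepplinFan': [0, 0, 1, 0, 1, 1]})

# Larger C means weaker regularization
for C in (1.0, 100, 1e6):
    lr = LogisticRegression(C=C, max_iter=10000)
    lr.fit(df[['Age']], df['ZepplinFan'])
    print(C, lr.intercept_, lr.coef_)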
