How does logistic regression build Sigmoid curve from categorical dependent variable? - scikit-learn

I'm exploring the scikit-learn logistic regression algorithm. I understand that as part of the training, the algorithm builds a regression curve where the y-variable ranges from 0 to 1 (the sigmoid S-curve). The y-variable is treated as a continuous variable here (although in reality it is a discrete variable).
How is the algorithm able to learn the S-curve when the training dataset reflects reality and contains the y-variable only as a discrete variable? There is no probability estimate in the training data, so I'm wondering how the algorithm is able to learn the S-curve.

There is no probability estimate in the training
Sure, but we pretend there is for modeling purposes. We want to maximize the probability of, as you call it, “reality”—if the observed response (the discrete value you refer to) is a 0, we want to predict that with probability 1; similarly, if the response is a 1, we want to predict that with probability 1.
Fitting the model to one data point, getting the right answer with probability 1, would be easy. Of course, we have more than one data point, and we have to balance these concerns across all of them. We want the predicted value sigmoid(weights * features) to be close to the true response (0 or 1) for every data point, but there may not be a way to set the parameters of the model to achieve this. (That is, the data may not be linearly separable.)
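To make that balancing act concrete, here is a minimal sketch (the data and the weights are made up for illustration) of the quantity the training procedure works with: the predicted probability sigmoid(weights * features) for each sample, and the log loss (negative log-likelihood) that the fit tries to make small across all samples at once.

import numpy as np

def sigmoid(z):
    # squash a real-valued score into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# toy data: four samples, two features, discrete 0/1 responses
X = np.array([[0.5, 1.2], [1.5, -0.3], [-1.0, 0.8], [-2.0, -1.5]])
y = np.array([1, 1, 0, 0])

weights = np.array([1.0, 0.5])   # one candidate setting of the parameters
p = sigmoid(X @ weights)         # predicted probability for each sample

# log loss (negative log-likelihood): small only when p is close to y for every sample
log_loss = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
print(p)
print(log_loss)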

Good question! The fitting process in logistic regression is a search procedure that seeks the beta coefficients that minimize the error between the probabilities predicted by the model (continuous values) and the data (discrete values).
In logistic regression, you model probabilities using a logistic function (also known as a sigmoid function):
XB = B0 + B1 * X1 + B2 * X2 + ... + BN * XN
p(X) = e^(XB) / (1 + e^(XB))
The algorithm finds the beta coefficients by maximum likelihood estimation. The quantity being minimized is called the cost function; for logistic regression it is the negative log-likelihood (log loss):
cost = -sum( y_i * log(p(X_i)) + (1 - y_i) * log(1 - p(X_i)) )
(Squared error, sum (p(X_i) - y_i)^2, is sometimes shown to build intuition, but it is not what maximum likelihood minimizes.)
An initial set of betas is picked at random, the cost is calculated, and the algorithm then picks a new set of betas that results in a lower cost. The algorithm stops searching for new betas when the decrease in cost is smaller than a given threshold (set by the tol parameter in sklearn).
The way the model converges to the final set of coefficients depends on the solver parameter. Each solver has a different way of converging to the final set of betas, but they usually converge to the same results.
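As a small, hedged illustration of the above (synthetic data, scikit-learn's default regularisation left in place), the snippet below fits LogisticRegression on purely discrete 0/1 labels and then reads the learned S-curve back out of predict_proba; tol and solver are the parameters referred to above.

import numpy as np
from sklearn.linear_model import LogisticRegression

# synthetic example: discrete 0/1 labels that depend noisily on one feature
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(-4, 4, size=(200, 1)), axis=0)
y = (X[:, 0] + rng.normal(scale=1.0, size=200) > 0).astype(int)

# tol is the stopping threshold, solver is the optimisation routine mentioned above
model = LogisticRegression(solver="lbfgs", tol=1e-4)
model.fit(X, y)

# predict_proba traces out the fitted S-curve even though y was only ever 0 or 1
grid = np.linspace(-4, 4, 9).reshape(-1, 1)
print(np.round(model.predict_proba(grid)[:, 1], 3))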

Related

r2_score is -18.709, Why?

I'm conducting multiple linear regression in Python. To the best of my knowledge, r2_score is supposed to be in the range of -1 to 1, but I obtained -18.709.
What is causing this result and how can I correct it? The code and result look as follows:
# calculate R^2
from sklearn.metrics import r2_score
score = r2_score(y_test, y_pred)
print(score)
The output:
-18.7097
Its prediction result is as follows:
y_pred = model.predict(X_test)
print(y_pred)
Result:
[ 25000. 123000. 73000. 103000.]
The coefficient of determination R^2 is, in its most general definition, the same quantity as the Nash–Sutcliffe model efficiency coefficient (explanation below):
There are cases where the computational definition of R2 can yield negative values, depending on the definition used. This can arise when the predictions that are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data. Even if a model-fitting procedure has been used, R2 may still be negative, for example when linear regression is conducted without including an intercept, or when a non-linear function is used to fit the data. In cases where negative values arise, the mean of the data provides a better fit to the outcomes than do the fitted function values, according to this particular criterion. Since the most general definition of the coefficient of determination is also known as the Nash–Sutcliffe model efficiency coefficient, this last notation is preferred in many fields, because denoting a goodness-of-fit indicator that can vary from −∞ to 1 (i.e., it can yield negative values) with a squared letter is confusing.
Source: Wikipedia
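To see concretely that r2_score can go far below -1, here is a short sketch (the numbers are just stand-ins, borrowed from the predictions printed above): predicting the mean of the observed values scores exactly 0, and anything that fits worse than the mean scores negative, with no lower bound.

from sklearn.metrics import r2_score

# stand-in numbers, only to show that r2_score has no lower bound of -1
y_true = [25000, 123000, 73000, 103000]

# predicting the mean of y_true gives R^2 = 0 by definition
mean_pred = [sum(y_true) / len(y_true)] * len(y_true)
print(r2_score(y_true, mean_pred))   # 0.0

# predictions much worse than the mean push R^2 far below zero
bad_pred = [0, 0, 0, 0]
print(r2_score(y_true, bad_pred))    # a large negative value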

Why are my r^2 values so consistently negative?

I'm not sure if the problem is with my regression estimator models, or with my understanding of what the r^2 measure of fittedness actually means. I am working on a project using scikit-learn and ~11 different regression estimators in order to produce (rough!) predictions of baseball fantasy performance. Certain models always fare better than others (Decision Tree Regression and Extra Tree Regression produce the worst r^2 scores, while ElasticNetCV and LassoCV produce the best r^2 scores and every once in a while might even be slightly positive!).
If a horizontal line produces an r^2 score of 0, then even if all my models were worthless, had literally zero predictive value, and were spitting out numbers entirely at random, shouldn't I get small positive numbers for r^2 sometimes, from sheer dumb luck alone? Eight of the 11 estimators I use, despite running over different datasets hundreds of times, have never once produced even a tiny positive number for r^2.
Am I misunderstanding how r^2 works?
I am not switching the order in sklearn's .score function either; I have double-checked this many times. When I do put y_pred and y_true in the wrong order, it yields r^2 values that are hugely negative (like < -50).
The fact that that's the case actually adds to my confusion about how r^2 here is a measure of fittedness, but I digress...
## I don't know whether I'm supposed to include my df4 or even a
## sample, but suffice it to say here is just a single row to show what
## kind of data we have. It is all normalized and/or z-scored.
"""
>> print(df4.head(1))
HomeAway ParkFactor Salary HandedVs Hand oppoBullpen \
Points
3.0 1.0 -1.229 -0.122111 1.0 0.0 -0.90331
RibRunHistory BibTibHistory GrabBagHistory oppoTotesRank \
Points
3.0 0.964943 0.806874 -0.224993 -0.846859
oppoSwipesRank oppoWalksRank Temp Precip WindSpeed \
Points
3.0 -1.40371 -1.159115 -0.665324 -0.380048 -0.365671
WindDirection oppoPositFantasy oppoFantasy
Points
3.0 0.229944 -1.011505 0.919269
"""
from sklearn.linear_model import ElasticNetCV
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split, cross_val_score

def ElasticNetValidation(df4):
    X = df4.values
    y = df4.index          # the target ('Points') lives in the DataFrame index
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)
    ENTrain = ElasticNetCV(cv=20)
    ENTrain.fit(X_train, y_train)
    y_pred = ENTrain.predict(X_test)
    EN = ElasticNetCV(cv=20)
    ENModel = EN.fit(X, y)
    print('ElasticNet R^2: ' + str(r2_score(y_test, y_pred)))
    scores = cross_val_score(ENModel, X, y, cv=20)
    print("ElasticNet Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
    return ENModel
When I run this estimator, along with the ten other regression estimators I have been experimenting with, both r2_score() and cross_val_score().mean() show negative numbers nearly every time. Certain estimators ALWAYS produce negative scores that are not even close to zero (decision tree regressor, extra tree regressor). Certain estimators fare better and even sometimes produce a tiny positive score, never more than 0.01 though, and even those estimators (ElasticNetCV, LassoCV, LinearRegression) are negative most of the time, albeit only slightly negative.
Even if the models I'm building are horrible, say they are totally random and have no predictive power whatsoever when it comes to the target: shouldn't they predict better than a plain horizontal line about as often as not? How is it that an unrelated model predicts more POORLY than a horizontal line so consistently?
You most likely have issues with overfitting. As you mentioned correctly, negative R2 values can occur if your model performs worse than just fitting an intercept term. Your models probably do not capture any 'real' underlying dependence but merely fit random noise. You are calculating the R2 score on a small test set, and it is very well possible that this fitting of noise consistently yields worse results on the test set than a simple intercept term.
This is a typical case of bias-variance tradeoff. Your models have low bias and high variance and therefore perform poorly on the test data. There are certain models that aim at reducing overfit / variance, for example the Lasso and Elastic Net. These models actually are among the models that you see performing better.
In order to convince yourself that the sklearn's r2_score function works properly and to get familiarised with it, I would recommend that you first fit and predict your model on training data only (leave out the CV as well). R2 can never be negative in this case. Also make sure that your models include an intercept term (wherever available).
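As a sketch of that sanity check (entirely synthetic data standing in for df4), fitting ElasticNetCV with an intercept and scoring it on the same data it was trained on should never give a negative R^2:

import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.metrics import r2_score

# synthetic stand-in for df4: 200 samples, 17 features, noisy linear target
rng = np.random.RandomState(42)
X = rng.normal(size=(200, 17))
y = X @ rng.normal(size=17) + rng.normal(scale=5.0, size=200)

model = ElasticNetCV(cv=20, fit_intercept=True)
model.fit(X, y)

# scoring on the training data itself: with an intercept this cannot be negative
print(r2_score(y, model.predict(X)))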

Model evaluation: model.score vs. ROC curve (AUC indicator)

I want to evaluate a logistic regression model (binary event) using two measures:
1. model.score and the confusion matrix, which give me 81% classification accuracy
2. the ROC curve (using AUC), which gives back a 50% value
Are these two results in contradiction? Is that possible? I'm probably missing something but still can't find it.
from sklearn.metrics import accuracy_score, confusion_matrix, roc_curve, roc_auc_score

y_pred = log_model.predict(X_test)
accuracy_score(y_test, y_pred)
cm = confusion_matrix(y_test, y_pred)
y_test.count()
print(cm)
# roc_curve returns fpr first, then tpr
fpr, tpr, _ = roc_curve(y_test, y_pred, drop_intermediate=False)
roc = roc_auc_score(y_test, y_pred)
The accuracy score is calculated based on the assumption that a class is selected if it has a prediction probability of more than 50%. This means that you are looking only at 1 case (one working point) out of many. Let's say you'd like to classify an instance as '0' even if it has a probability greater than 30% (this may happen if one of your classes is more important for you, and its a-priori probability is very low). In this case - you will have a very different confusion matrix with a different accuracy ([TP+TN]/[ALL]). The ROC auc score examines all of these working points and gives you an estimation of your overall model. A score of 50% means that the model is equal to a random selection of classes based on your a-priori probabilities of the classes. You would like the ROC to be much higher to say that you have a good model.
So in the above case, you can say that your model does not have good predictive strength. As a matter of fact, a better prediction would be to predict everything as "1"; in your case that would lead to an accuracy above 99%.
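To see the "working points" idea in code, here is a hedged sketch on synthetic, imbalanced data (not the asker's dataset): the AUC computed from the predicted probabilities uses the ranking over all thresholds, while accuracy and an AUC computed from the hard 0/1 predictions only reflect the single 50% working point.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# imbalanced synthetic problem standing in for the asker's data
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

log_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

y_pred = log_model.predict(X_test)               # hard labels: one working point
y_score = log_model.predict_proba(X_test)[:, 1]  # probabilities: all working points

print(accuracy_score(y_test, y_pred))
print(roc_auc_score(y_test, y_pred))    # AUC of the single thresholded prediction
print(roc_auc_score(y_test, y_score))   # AUC over the full range of thresholds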

Interpreting coefficientMatrix, interceptVector and Confusion matrix on multinomial logistic regression

Can anyone explain how to interpret the coefficientMatrix, interceptVector, and confusion matrix of a multinomial logistic regression?
According to Spark documentation:
Multiclass classification is supported via multinomial logistic (softmax) regression. In multinomial logistic regression, the algorithm produces K sets of coefficients, or a matrix of dimension K×J where K is the number of outcome classes and J is the number of features. If the algorithm is fit with an intercept term then a length K vector of intercepts is available.
I ran an example using Spark ML 2.3.0 and I got this result.
If I analyse what I get:
The coefficientMatrix has dimensions of 5 * 11.
The interceptVector has a dimension of 5.
If so, why does the confusion matrix have dimensions of 4 * 4?
Also, can anyone give an interpretation of the coefficientMatrix and interceptVector?
Why do I get negative coefficients?
If 5 is the number of classes after classification, why do I get 4 rows in the confusion matrix?
EDIT
I forgot to mention that I am still a beginner in machine learning and that my search on Google didn't help, so maybe I'll get an upvote :)
Regarding the 4x4 confusion matrix: I imagine that when you split your data into test and train, there were 5 classes present in your training set and only 4 classes present in your test set. This can easily happen if the distribution of your response variable is imbalanced.
You'll want to try to perform some stratified split between test and train prior to modeling. If you are working with pyspark, you may find this library helpful: https://github.com/databricks/spark-sklearn
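If the split can be done outside Spark, a stratified split is straightforward with scikit-learn's train_test_split (a sketch with toy labels standing in for your 5-class response):

import numpy as np
from sklearn.model_selection import train_test_split

# imbalanced 5-class toy labels standing in for the real response variable
rng = np.random.RandomState(0)
y = rng.choice(5, size=500, p=[0.55, 0.25, 0.10, 0.07, 0.03])
X = rng.normal(size=(500, 11))

# stratify=y keeps the class proportions in both splits, so all 5 classes
# show up in the test set as long as each class has a few examples
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
print(np.bincount(y_test))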
Now regarding negative coefficients for a multi-class Logistic Regression: As you mentioned, your returned coefficientMatrix shape is 5x11.
Spark generated five models via a one-vs-all approach. The 1st model corresponds to the model where the positive class is the 1st label and the negative class is composed of all other labels. Let's say the 1st coefficient for this model is -2.23. To interpret this coefficient, we take the exponential of -2.23, which is approximately 0.11. Interpretation: with a one-unit increase of the 1st feature, we expect the odds of the positive label to be reduced by roughly 90%.
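As a quick numeric check of that interpretation (plain Python, using the hypothetical coefficient of -2.23 from above):

import math

coef = -2.23                    # the example coefficient for the 1st feature
odds_ratio = math.exp(coef)     # multiplicative change in the odds per unit increase
print(odds_ratio)               # ~0.11
print(1 - odds_ratio)           # ~0.89, i.e. roughly a 90% reduction in the odds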

scikit learn linearsvc confidence

I'm using scikit-learn's LinearSVC SVM implementation, and I'm trying to understand the multi-class prediction. Looking at coef_ and intercept_ I can get the hyperplane weights. For example, on my learning problem with two features and four labels I get
f0 = 1.99861379*x1 - 0.09489263*x2 + 0.89433196
f1 = -2.04309715*x1 - 3.51285420*x2 - 3.1206355
f2 = 0.73536996*x1 + 2.52111207*x2 - 3.04176149
f3 = -0.56607817*x1 - 0.16981337*x2 - 0.92804815
When I use the decision_function method I get the values that correspond to the above functions. But the documentation says
The confidence score for a sample is the signed distance of that
sample to the hyperplane.
But decision_function does not return the signed distance, it just returns f().
To be more specific, I'm assuming that LinearSVC uses the standard trick of having a constant 1 feature to represent a threshold. (This might be wrong.) For my example problem this gives a three-dimensional feature space where instances are always of the form (1, x1, x2). Assuming no other threshold term, the algorithm learns a hyperplane w = (w0, w1, w2) that goes through the origin in this three-dimensional space. Now I get a point to predict, call it z = (1, a, b). What is the signed distance (margin) of this point to the hyperplane? It's just dot(w, z) / norm(w), where norm(w) is the Euclidean (2-)norm of w. But the LinearSVC code is returning dot(w, z).
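To check this numerically, I would expect something like the following sketch (synthetic data, not my actual problem) to turn the raw decision_function values into signed distances, by dividing each column by the 2-norm of the corresponding row of coef_:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# toy multi-class problem: two features, four labels, like my example above
X, y = make_classification(n_samples=400, n_features=2, n_informative=2,
                           n_redundant=0, n_classes=4, n_clusters_per_class=1,
                           random_state=0)

clf = LinearSVC(max_iter=10000).fit(X, y)

scores = clf.decision_function(X[:5])       # raw dot(w, z) + b for each class
norms = np.linalg.norm(clf.coef_, axis=1)   # ||w|| for each class's hyperplane
print(scores)
print(scores / norms)                       # signed distances to each hyperplane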
Thanks,
Chris
