Scikit-learn MLPRegressor - How to get results independent from random seed? - scikit-learn

I'm trying to learn a sine function with an MLP. Unfortunately, the results depend drastically on the random seed.
How can I adjust the MLPRegressor so that the results become less dependent on the random seed?
Code:
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

LOOK_BACK = 10
x = np.linspace(-10, 10, 1000)
y = np.sin(x)

# build a sliding-window dataset: predict the next value from the last LOOK_BACK values
dataX, dataY = [], []
for i in range(len(y) - LOOK_BACK - 1):
    dataX.append(y[i:(i + LOOK_BACK)])
    dataY.append(y[i + LOOK_BACK])
x_train = np.array(dataX)
y_train = np.array(dataY)

for i in range(10):
    print("np.random.seed(%d)" % i)
    np.random.seed(i)
    model = MLPRegressor(activation='tanh', solver='adam')
    model.fit(x_train, y_train)
    train_predict = model.predict(x_train)
    print('MSE train:', mean_squared_error(y_train, train_predict))
Output:
np.random.seed(0)
MSE train: 0.00167560534562
np.random.seed(1)
MSE train: 0.0050531872206
np.random.seed(2)
MSE train: 0.00279393534973
np.random.seed(3)
MSE train: 0.00224293537385
np.random.seed(4)
MSE train: 0.00154350859516
np.random.seed(5)
MSE train: 0.00383997358155
np.random.seed(6)
MSE train: 0.0265389606087
np.random.seed(7)
MSE train: 0.00195637404624
np.random.seed(8)
MSE train: 0.000590823529864
np.random.seed(9)
MSE train: 0.00393172460516
Seeds 6, 8 and 9 produce MSEs of different orders of magnitude. How can I prevent this?

Multi-layer perceptrons, like other neural network architectures, suffer from the fact that their loss functions have numerous local optima. All gradient-based algorithms therefore depend heavily on the chosen initialization. Rather than seeing this as undesirable, you can view the initialization (controlled through random_state) as an additional hyperparameter that gives you flexibility.
Just for the record, the differences in your MSE are not that big, and if your goal is to overfit the training data as closely as possible, set the regularization parameter alpha to zero (the default value is alpha=0.0001).
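For example, a minimal sketch combining both ideas (the random_state value here is arbitrary, it is only fixed for reproducibility):
# fix the initialization and drop the L2 penalty to fit the training data as closely as possible
model = MLPRegressor(activation='tanh', solver='adam',
                     alpha=0.0, random_state=42, max_iter=1000)
model.fit(x_train, y_train)
print('MSE train:', mean_squared_error(y_train, model.predict(x_train)))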

Related

Cross-validation when using the Gaussian Naive Bayes model

Well, I am trying to solve this classification problem, which involves the Gaussian Naive Bayes algorithm.
Question:
Classification
Consider the data in the file linked below. Train a Gaussian Naive Bayes classifier using the holdout method of cross-validation (use the first 700 lines as the training set and the rest as the test set). What is the accuracy on the training set? What is the accuracy on the test set? Then repeat the training with the Leave-One-Out method. What is the average accuracy on the training set? What is the average accuracy on the test set?
My solution that I am not sure about:
Basic Code (Full code in the Collab link below):
#Using Holdout
from sklearn.metrics import confusion_matrix, accuracy_score
y_pred_train = classifier.predict(X_train)
cm0 = confusion_matrix(y_train, y_pred_train )
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(accuracy_score(y_test, y_pred))
print(accuracy_score(y_train, y_pred_train))
My Answer for the holdout:
[[ 23 51]
[ 21 205]]
0.76
0.7871428571428571
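For context, the holdout split used before the snippet above follows the statement of the exercise; a rough sketch (the real loading code is in the Colab, so the file name and column layout here are placeholders):
import pandas as pd
from sklearn.naive_bayes import GaussianNB

df = pd.read_csv("data.csv")          # placeholder name for the linked file
X = df.iloc[:, :-1].values            # features: all columns except the last
y = df.iloc[:, -1].values             # labels: last column
X_train, y_train = X[:700], y[:700]   # first 700 rows for training
X_test, y_test = X[700:], y[700:]     # the rest for testing
classifier = GaussianNB().fit(X_train, y_train)
y_pred = classifier.predict(X_test)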
LOO:
#Using LOO
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import LeaveOneOut
#This is where I got the code: https://machinelearningmastery.com/loocv-for-evaluating-machine-learning-algorithms/
cv = LeaveOneOut()
accuracies = cross_val_score(estimator=classifier, X = X_train, y = y_pred_train, scoring='accuracy',cv=cv)
print(f"Accuracy Train {accuracies.mean()}")
print(f"Standard Deviation {accuracies.std()}")
accuraciestest = cross_val_score(estimator=classifier, X = X_test, y = y_test, scoring='accuracy', cv=cv)
print(f"Accuracy Test {accuraciestest.mean()}")
print(f"Standard Deviation Test {accuraciestest.std()}")
My Answer for the LeaveOneOut:
Accuracy Train 0.9771428571428571
Standard Deviation 0.1494479637785374
Accuracy Test 0.7433333333333333
Standard Deviation Test 0.43679387460092534
Data:
https://drive.google.com/file/d/1v9V-007yV3vVckPcQN0Q5VuNZYF_JjBW/view?usp=sharing
Colabs Link: https://colab.research.google.com/drive/1X68-Li6FacnAAQ4ASg3mmqdrU15v2ReP?usp=sharing

Scikit-learn output in a pleasant way [duplicate]

As an R user, I wanted to also get up to speed on scikit.
Creating linear regression models is fine, but I can't seem to find a reasonable way to get a standard summary of the regression output.
Code example:
# Linear Regression
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LinearRegression
# Load the diabetes datasets
dataset = datasets.load_diabetes()
# Fit a linear regression model to the data
model = LinearRegression()
model.fit(dataset.data, dataset.target)
print(model)
# Make predictions
expected = dataset.target
predicted = model.predict(dataset.data)
# Summarize the fit of the model
mse = np.mean((predicted-expected)**2)
print(model.intercept_, model.coef_, mse)
print(model.score(dataset.data, dataset.target))
Issues:
It seems like the intercept and coefficients are built into the model, and I just print them (second-to-last line) to see them.
What about all the other standard regression output like R^2, adjusted R^2, p-values, etc.? If I read the examples correctly, it seems like you have to write a function/equation for each of these and then print it.
So, is there no standard summary output for linear regression models?
Also, the printed array of coefficients carries no variable names; I just get the numeric array. Is there a way to print the coefficients together with the variables they belong to?
My printed output:
LinearRegression(copy_X=True, fit_intercept=True, normalize=False)
152.133484163 [ -10.01219782 -239.81908937 519.83978679 324.39042769 -792.18416163
476.74583782 101.04457032 177.06417623 751.27932109 67.62538639] 2859.69039877
0.517749425413
Notes: Started off with Linear, Ridge and Lasso. I have gone through the examples. Below is for the basic OLS.
There is no R-style regression summary report in sklearn. The main reason is that sklearn is used for predictive modelling / machine learning, where the evaluation criteria are based on performance on previously unseen data (such as predictive r^2 for regression).
There is a summary function for classification called sklearn.metrics.classification_report, which calculates several types of (predictive) scores on a classification model.
For a more classic statistical approach, take a look at statsmodels.
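For a classification model, that report takes one line (assuming y_test and y_pred come from any fitted classifier):
from sklearn.metrics import classification_report
# per-class precision, recall, f1-score and support, plus overall averages
print(classification_report(y_test, y_pred))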
I use:
import numpy as np
import sklearn.metrics as metrics

def regression_results(y_true, y_pred):
    # Regression metrics
    explained_variance = metrics.explained_variance_score(y_true, y_pred)
    mean_absolute_error = metrics.mean_absolute_error(y_true, y_pred)
    mse = metrics.mean_squared_error(y_true, y_pred)
    mean_squared_log_error = metrics.mean_squared_log_error(y_true, y_pred)
    median_absolute_error = metrics.median_absolute_error(y_true, y_pred)
    r2 = metrics.r2_score(y_true, y_pred)
    print('explained_variance: ', round(explained_variance, 4))
    print('mean_squared_log_error: ', round(mean_squared_log_error, 4))
    print('r2: ', round(r2, 4))
    print('MAE: ', round(mean_absolute_error, 4))
    print('MSE: ', round(mse, 4))
    print('RMSE: ', round(np.sqrt(mse), 4))
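Example usage, assuming model is any fitted regressor and X_test/y_test are held-out data:
y_pred = model.predict(X_test)
regression_results(y_test, y_pred)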
The statsmodels package gives quite a decent summary:
from statsmodels.api import OLS
OLS(dataset.target,dataset.data).fit().summary()
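Note that calling OLS directly on dataset.data fits the model without an intercept; if you want one, add a constant column first (a small sketch):
import statsmodels.api as sm
# add_constant prepends a column of ones so the fit includes an intercept term
X_const = sm.add_constant(dataset.data)
print(sm.OLS(dataset.target, X_const).fit().summary())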
You can do this using statsmodels:
import statsmodels.api as sm
X = sm.add_constant(X.ravel())
results = sm.OLS(y, X).fit()
results.summary()
results.summary() will organize the results into three tables.
You can use the following option to have a summary table:
import statsmodels.api as sm
#log_clf = LogisticRegression()
log_clf =sm.Logit(y_train,X_train)
classifier = log_clf.fit()
y_pred = classifier.predict(X_test)
print(classifier.summary2())
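Keep in mind that statsmodels' Logit predict returns probabilities rather than class labels, so threshold them if you need hard predictions:
# turn predicted probabilities into 0/1 class labels
y_pred_class = (y_pred >= 0.5).astype(int)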
Use model.summary() after predict
# Linear Regression
import numpy as np
from sklearn import datasets
from sklearn.linear_model import LinearRegression
# load the diabetes datasets
dataset = datasets.load_diabetes()
# fit a linear regression model to the data
model = LinearRegression()
model.fit(dataset.data, dataset.target)
print(model)
# make predictions
expected = dataset.target
predicted = model.predict(dataset.data)
# >>>>>>>Print out the statistics<<<<<<<<<<<<<
model.summary()
# summarize the fit of the model
mse = np.mean((predicted-expected)**2)
print(model.intercept_, model.coef_, mse)
print(model.score(dataset.data, dataset.target))

Can I use keras.losses.binary_crossentropy(y_true,y_pred) without training process?

I am new to Keras. I want to know the loss of certain instances, so I have the y_true and y_pred of these data instances. I want to call the loss function to calculate the loss, but I only get Tensor("Mean_5:0", shape=(), dtype=float32). How can I evaluate the value of this tensor? Is it similar to TensorFlow, i.e. by calling loss.eval()?
y_pred is calculated by:
y_pred = self.model.predict(x, batch_size=self.batch_size)
y_true is also an available list.
How to use binary_crossentropy()?
You almost had the answer.
from keras import backend
from keras.losses import binary_crossentropy
y_true = backend.variable(y_true)
y_pred = backend.variable(y_pred)
# calculate the average cross-entropy
mean_ce = backend.eval(binary_crossentropy(y_true, y_pred))
print('Average Cross Entropy: %.3f nats' % mean_ce)
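If you are on TensorFlow 2.x with tf.keras, eager execution makes this even simpler; a minimal standalone sketch:
import numpy as np
import tensorflow as tf

# labels and predicted probabilities for a few instances, shaped (n_samples, 1)
y_true = np.array([[0.], [1.], [1.], [0.]], dtype="float32")
y_pred = np.array([[0.1], [0.8], [0.6], [0.3]], dtype="float32")

per_sample = tf.keras.losses.binary_crossentropy(y_true, y_pred)  # shape (n_samples,)
print("Per-sample loss:", per_sample.numpy())
print("Average loss: %.3f" % float(tf.reduce_mean(per_sample)))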

Keras and Sklearn logreg returning different results

I'm comparing the results of a logistic regressor written in Keras to the default sklearn LogisticRegression. My input is one-dimensional. My output has two classes and I'm interested in the probability that the output belongs to class 1.
I'm expecting the results to be almost identical, but they are not even close.
Here is how I generate my random data. Note that X_train and X_test are still vectors; I'm just using capital letters because I'm used to it. Also, there is no need for scaling in this case.
X = np.linspace(0, 1, 10000)
y = np.random.sample(X.shape)
y = np.where(y<X, 1, 0)
Here's cumsum of y plotted over X. Doing a regression here is not rocket science.
I do a standard train-test-split:
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train = X_train.reshape(-1,1)
X_test = X_test.reshape(-1,1)
Next, I train a default logistic regressor:
from sklearn.linear_model import LogisticRegression
sk_lr = LogisticRegression()
sk_lr.fit(X_train, y_train)
sklearn_logreg_result = sk_lr.predict_proba(X_test)[:,1]
And a logistic regressor that I write in Keras:
from keras.models import Sequential
from keras.layers import Dense
keras_lr = Sequential()
keras_lr.add(Dense(1, activation='sigmoid', input_dim=1))
keras_lr.compile(loss='mse', optimizer='sgd', metrics=['accuracy'])
_ = keras_lr.fit(X_train, y_train, verbose=0)
keras_lr_result = keras_lr.predict(X_test)[:,0]
And a hand-made solution:
pearson_corr = np.corrcoef(X_train.reshape(X_train.shape[0],), y_train)[0,1]
b = pearson_corr * np.std(y_train) / np.std(X_train)
a = np.mean(y_train) - b * np.mean(X_train)
handmade_result = (a + b * X_test)[:,0]
I expect all three to deliver similar results, but here is what happens. This is a reliability diagram using 100 bins.
I have played around with loss functions and other parameters, but the Keras logreg stays roughly like this. What might be causing the problem here?
edit: Using binary crossentropy is not the solution here, as shown by this plot (note that the input data has changed between the two plots).
While both implementations are a form of logistic regression, there are quite a few differences. Both solutions converge to a comparable minimum (0.75/0.76 accuracy), but they are not identical:
Optimizer - Keras uses vanilla SGD, whereas sklearn's LogisticRegression is based on liblinear, which implements a trust-region Newton method.
Regularization - sklearn has built-in L2 regularization; the Keras model as written has none.
Weights - the weights are randomly initialized and probably sampled from different distributions.
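A sketch of how the Keras model could be brought closer to sklearn's defaults, assuming that translating sklearn's C=1.0 penalty into a per-sample L2 factor of 1/(C*n) is a close enough approximation, plus log-loss and more epochs:
from keras.models import Sequential
from keras.layers import Dense
from keras import regularizers

# approximate sklearn's default L2 penalty (C=1.0) as a per-sample factor
l2_factor = 1.0 / (1.0 * len(X_train))

keras_lr = Sequential()
keras_lr.add(Dense(1, activation='sigmoid', input_dim=1,
                   kernel_regularizer=regularizers.l2(l2_factor)))
# log-loss instead of MSE, and enough epochs for plain SGD to converge
keras_lr.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
keras_lr.fit(X_train, y_train, epochs=100, batch_size=32, verbose=0)
keras_lr_result = keras_lr.predict(X_test)[:, 0]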

scikit-learn cross_validation over-fitting or under-fitting

I'm using scikit-learn cross-validation (http://scikit-learn.org/stable/modules/cross_validation.html) and get, for example, a 0.82 mean score (r2_scorer).
How can I tell whether I am over-fitting or under-fitting using scikit-learn functions?
Unfortunately, I can confirm that there is no built-in tool to compare train and test scores in a CV setup; the cross_val_score tool only reports test scores.
You can set up your own loop with the train_test_split function as in Ando's answer, but you can also use any other CV scheme.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import get_scorer

scorer = get_scorer('r2')
cv = KFold(n_splits=5)
train_scores, test_scores = [], []
for train, test in cv.split(X):
    regressor.fit(X[train], y[train])
    train_scores.append(scorer(regressor, X[train], y[train]))
    test_scores.append(scorer(regressor, X[test], y[test]))
mean_train_score = np.mean(train_scores)
mean_test_score = np.mean(test_scores)
If you compute the mean train and test scores with cross-validation, you can then find out whether you are:
Underfitting: the train score is far from the perfect score (which is 1.0 for r2).
Overfitting: the train and test scores are not close to one another (the mean test score is significantly lower than the mean train score).
Note: you can be both significantly underfitting and overfitting at the same time if your model is inadequate and your data is too noisy.
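More recent scikit-learn releases also provide cross_validate, which can report train scores directly; a minimal sketch (assuming a regressor and data X, y as above):
from sklearn.model_selection import cross_validate

cv_results = cross_validate(regressor, X, y, cv=5, scoring='r2',
                            return_train_score=True)
print("mean train r2:", cv_results['train_score'].mean())
print("mean test r2:", cv_results['test_score'].mean())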
You should compare your scores when testing on the training and the test data. If the scores are close to equal, you are likely underfitting. If they are far apart, you are likely overfitting (unless you are using a method such as random forest).
To compute the scores for both train and test data, you can use something along the following lines (assuming your data is in variables X and Y):
from sklearn import svm
from sklearn.model_selection import train_test_split

# do five iterations
for i in range(5):
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.4)
    # your predictor, a linear SVM in this example
    clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
    print("Test score", clf.score(X_test, y_test))
    print("Train score", clf.score(X_train, y_train))
