What does "n_features" and "centers" parameters mean in make_blobs in SciKit? - scikit-learn

I have gone through the documentation for the n_features and centers parameters of the make_blobs function in scikit-learn, but the explanations I've found are not clear to me since I am new to scikit-learn and to the underlying mathematics. What do these two parameters, n_features and centers, do in a call like the one below?
make_blobs(n_samples=50, n_features=2, centers=2, random_state=75)
Thank you in advance.

The make_blobs function is part of sklearn.datasets (it used to live in sklearn.datasets.samples_generator, which has since been deprecated). All of the functions in that module generate data samples or datasets. In machine learning, which is what scikit-learn is all about, datasets are used to train models and evaluate their performance. Here is an example of how to evaluate a KNN classifier:
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
X, y = make_blobs(n_features=2, centers=3)
X_train, X_test, y_train, y_test = train_test_split(X, y)
model = KNeighborsClassifier()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
acc = accuracy_score(y_test, y_pred) * 100
print('accuracy: {}%'.format(acc))
Now, as you mentioned, n_features determines how many columns, or features, the generated dataset will have. In machine learning, features are the numerical characteristics of the data. For example, the Iris dataset has 4 features (sepal length, sepal width, petal length and petal width), so it has 4 numerical columns. By increasing n_features in make_blobs we add more features and hence increase the dimensionality, and with it the complexity, of the generated dataset.
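For instance, a minimal sketch that just inspects the shape of the returned array shows how n_features maps to columns:
from sklearn.datasets import make_blobs

# X has one column per feature; y holds the cluster (class) label of each sample
X, y = make_blobs(n_samples=50, n_features=2, centers=2, random_state=75)
print(X.shape)  # (50, 2) -> 50 samples, 2 features

X, y = make_blobs(n_samples=50, n_features=5, centers=2, random_state=75)
print(X.shape)  # (50, 5) -> same number of samples, now 5 features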
As for centers, it is easiest to understand by visualising the generated dataset. I use matplotlib for that:
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
# plot 1
X, y = make_blobs(n_features=2, centers=1)
plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.title('centers = 1')
plt.savefig('centers_1.png')
# plot 2
X, y = make_blobs(n_features=2, centers=2)
plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.title('centers = 2')
# plot 3
X, y = make_blobs(n_features=2, centers=3)
plt.figure()
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.title('centers = 3')
plt.show()
If you run the code above, you can easily see that centers corresponds to the number of classes generated in the data. The term centers is used because samples that belong to the same class tend to gather close to a center (a coordinate in feature space).
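Note that centers also accepts, instead of an integer, an array of center coordinates with shape (n_centers, n_features), so you can place the blobs yourself. A minimal sketch:
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt

# place three blobs at explicit coordinates instead of letting make_blobs choose them
X, y = make_blobs(n_samples=150,
                  centers=[[0, 0], [5, 5], [0, 5]],
                  cluster_std=0.8,
                  random_state=75)
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.title('centers given as coordinates')
plt.show()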

Related

visualize predict_proba for multiclass classification

With model.predict_proba(X) I just get a big array with lots of numbers.
I am looking for a way to visualize the probabilities of a classification for all classes (in my case 13). I use a RandomForestClassifier.
Any recommendation?
Heatmaps would be a nice way to visualise a 2D matrix. Of course, if the number of records in your X is large, it is hard to visualise everything in a single go, and you probably have to sample records instead. Here I'm showing the visual for the first 10 records, labelling the predicted classes wherever the predicted probability is greater than 0.1.
Check out this example:
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import numpy as np
X, y = make_classification(n_samples=10000, n_features=40,
                           n_informative=30, n_classes=13,
                           n_redundant=0, n_clusters_per_class=1,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X,y, random_state=42)
forest = RandomForestClassifier(n_estimators=10, random_state=42).fit(X_train, y_train)
pred = forest.predict_proba(X_test)[:10]
fig, ax = plt.subplots(figsize= (20,8))
im = ax.imshow(pred, cmap='Blues')
ax.grid(axis='y')
ax.set_xticklabels([])
ax.set_yticks(np.arange(pred.shape[0]))
plt.ylabel('Records', fontsize='xx-large')
plt.xlabel('Classes', fontsize='xx-large')
fig.colorbar(im, ax=ax, fraction=0.046, pad=0.04)
for i in range(pred.shape[0]):
    for j in range(13):
        if pred[i, j] > .1:
            ax.text(j, i, j,
                    ha="center", va="center", color="w", fontsize=30)
plt.show()
If your input space is 2D, or if you use some dimensionality reduction technique to embed it in 2D, you could plot the multiclass decision surface:
import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets
import sklearn.ensemble

# generate toy data
X, y = sklearn.datasets.make_blobs(n_samples=1000, centers=13)
# fit classifier
clf = sklearn.ensemble.RandomForestClassifier().fit(X, y)
# create decision surface
xx, yy = np.meshgrid(np.linspace(-13, 12, 100),
                     np.linspace(-13, 12, 100))
Z = clf.predict(np.array([xx.ravel(), yy.ravel()]).T)
Z = Z.reshape(xx.shape)
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
ax.scatter(X[:, 0], X[:, 1], c=y, cmap='Paired')
ax.contourf(xx, yy, Z, cmap='Paired', alpha=0.5)
plt.show()
Note this is only shading per label (predict not predict_proba) but you may be able to extend this to shade differently based on the probability.
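One possible way to extend that (a sketch, not a definitive recipe) is to overlay the classifier's confidence, i.e. the maximum class probability returned by predict_proba on the same grid:
import numpy as np
import matplotlib.pyplot as plt
import sklearn.datasets
import sklearn.ensemble

X, y = sklearn.datasets.make_blobs(n_samples=1000, centers=13)
clf = sklearn.ensemble.RandomForestClassifier().fit(X, y)

xx, yy = np.meshgrid(np.linspace(-13, 12, 100),
                     np.linspace(-13, 12, 100))
grid = np.array([xx.ravel(), yy.ravel()]).T

Z = clf.predict(grid).reshape(xx.shape)                        # predicted label per grid point
conf = clf.predict_proba(grid).max(axis=1).reshape(xx.shape)   # confidence = max class probability

fig, ax = plt.subplots(figsize=(8, 8))
ax.scatter(X[:, 0], X[:, 1], c=y, cmap='Paired')
ax.contourf(xx, yy, Z, cmap='Paired', alpha=0.3)               # shade by predicted label
ax.contourf(xx, yy, conf, levels=10, cmap='Greys', alpha=0.2)  # darker where the model is more confident
plt.show()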

Confidence score too low

I'm wondering why the model score is so low, only 0.13. I already made sure the data is clean and scaled and that the features are highly correlated, but the score of the linear regression model is still very low. Why is this happening, and how can I solve it? This is my code:
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing
path = r"D:\python projects\avocado.csv"
df = pd.read_csv(path)
df = df.reset_index(drop=True)
df.set_index('Date', inplace=True)
df = df.drop(['Unnamed: 0', 'year', 'type', 'region', 'AveragePrice'], axis=1)
df.rename(columns={'4046': 'Small HASS sold',
                   '4225': 'Large HASS sold',
                   '4770': 'XLarge HASS sold'},
          inplace=True)
print(df.head())
sns.heatmap(df.corr())
sns.pairplot(df)
df.plot()
_=plt.xticks(rotation=20)
forecast_line = 35
df['target'] = df['Total Volume'].shift(-forecast_line)
X = np.array(df.drop(['target'], axis=1))
X = preprocessing.scale(X)
X_lately = X[-forecast_line:]
X = X[:-forecast_line]
df.dropna(inplace=True)
y = np.array(df['target'])
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.2)
lr = LinearRegression()
lr.fit(X_train,y_train)
confidence = lr.score(X_test,y_test)
print(confidence)
This is the link to the dataset I use: https://www.kaggle.com/neuromusic/avocado-prices
So the score function you are using is:
Return the coefficient of determination R^2 of the prediction.
The coefficient R^2 is defined as (1 - u/v), where u is the residual
sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum
of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible
score is 1.0 and it can be negative (because the model can be
arbitrarily worse). A constant model that always predicts the expected
value of y, disregarding the input features, would get a R^2 score of
0.0.
So, as you can see, you are already above the constant prediction.
My advice: try to plot your data to see what kind of regression you should use. Here you can find an overview of the available linear models: https://scikit-learn.org/stable/modules/linear_model.html
Logistic regression makes sense if your data follows a logistic curve, which means that your target values are mostly close to 0 or to 1, with not many points in between.
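As a starting point, here is a minimal sketch (assuming the df, X and y built in the question, with the renamed columns and the 'target' column) that scatters each feature against the target and compares the linear model's R^2 with a trivial mean-predicting baseline from sklearn's DummyRegressor:
import matplotlib.pyplot as plt
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# assumes df, X, y exist as in the question's code
for col in df.drop(['target'], axis=1).columns:
    plt.figure()
    plt.scatter(df[col], df['target'], s=5)
    plt.xlabel(col)
    plt.ylabel('target')
plt.show()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
baseline = DummyRegressor(strategy='mean').fit(X_train, y_train)
lr = LinearRegression().fit(X_train, y_train)
print('baseline R^2:', baseline.score(X_test, y_test))  # close to 0 by construction
print('linear   R^2:', lr.score(X_test, y_test))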

Generating a Learning Curve for Logistic Regression

I have a dataset of 260 microscopic images. I want to generate a learning curve for the logistic regression algorithm, but I am getting the error "'module' object is not iterable". Perhaps I don't understand something basic, as I'm a beginner who is just learning Python.
from sklearn.model_selection import train_test_split
from imutils import paths
from scipy import misc
import numpy as np
import argparse
import imutils
import cv2
import os
from matplotlib import pyplot as plt
from sklearn.model_selection import learning_curve
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.metrics import precision_score
from sklearn.model_selection import cross_val_score
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=None, train_sizes=np.linspace(50, 80, 110)):
    """
    Generate a simple plot of the test and training learning curve.
    Parameters
    ----------
    estimator : object type that implements the "fit" and "predict" methods
        An object of that type which is cloned for each validation.
    title : string
        Title for the chart.
    X : array-like, shape (n_samples, n_features)
        Training vector, where n_samples is the number of samples and
        n_features is the number of features.
    y : array-like, shape (n_samples) or (n_samples, n_features), optional
        Target relative to X for classification or regression;
        None for unsupervised learning.
    cv : int, cross-validation generator or an iterable, optional
        Determines the cross-validation splitting strategy.
        Possible inputs for cv are:
          - None, to use the default 3-fold cross-validation,
          - integer, to specify the number of folds.
          - :term:`CV splitter`,
          - An iterable yielding (train, test) splits as arrays of indices.
        For integer/None inputs, if ``y`` is binary or multiclass,
        :class:`StratifiedKFold` used. If the estimator is not a classifier
        or if ``y`` is neither binary nor multiclass, :class:`KFold` is used.
        Refer :ref:`User Guide <cross_validation>` for the various
        cross-validators that can be used here.
    n_jobs : int or None, optional (default=None)
        Number of jobs to run in parallel.
        ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
        ``-1`` means using all processors. See :term:`Glossary <n_jobs>`
        for more details.
    train_sizes : array-like, shape (n_ticks,), dtype float or int
        Relative or absolute numbers of training examples that will be used to
        generate the learning curve. If the dtype is float, it is regarded as a
        fraction of the maximum size of the training set (that is determined
        by the selected validation method), i.e. it has to be within (0, 1].
        Otherwise it is interpreted as absolute sizes of the training sets.
        Note that for classification the number of samples usually have to
        be big enough to contain at least one sample from each class.
        (default: np.linspace(0.1, 1.0, 5))
    """
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")
    plt.legend(loc="best")
    return plt
#training with logistic regression
clfLR = LogisticRegression(random_state=0, solver='lbfgs')
clfLR.fit(trainFeat,trainLabels)
acc = clfLR.score(testFeat, testLabels)
print("accuracy of Logistic regression ",acc)
I am facing this problem only when I want to plot the curve. The rest of the code works fine.
#plotting the curve
estimator =LogisticRegression()
train_sizes, train_scores, valid_scores = plot_learning_curve(
    estimator, 'logistic learning curve ', trainFeat, trainLabels, cv=5, n_jobs=4, train_sizes=[50, 80, 110])
print(train_sizes)
plt.show()
[Screenshot of the error and of the learning curve omitted]
Try running the code in the Jupyter online IDE. It plots automatically if you add a "%matplotlib" line to the import section.
If you want to keep working in your current IDE, share your full error message so maybe I can help you. You are probably missing one of the imports, or it might be a Python 2/3 problem.
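For example, in a Jupyter notebook the import cell would look something like this (a sketch; %matplotlib inline is a common variant of that magic):
# first cell of a Jupyter notebook
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve, train_test_split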

ML Model not predicting properly

I am trying to create an ML model (regression) using various techniques like SMR, Logistic Regression, and others. With all of these techniques, I'm not able to get accuracy higher than 35%. Here's what I'm doing:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X_data = [X_data_distance]
X_data = np.vstack(X_data).astype(np.float64)
X_data = X_data.T
y_data = X_data_orders
# print(X_data.shape)  # (10000, 1)
# print(y_data.shape)  # (10000,)
X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, test_size=0.33, random_state=42)
svr_rbf = SVC(kernel='rbf', C=1.0)
svr_rbf.fit(X_train, y_train)
plt.plot(X_data_distance, svr_rbf.predict(X_data), color='red', label='RBF model')
For the plot, I'm getting the following:
I have tried various parameter tunings: changing C and gamma, and even trying different kernels, but nothing changes the accuracy. I even tried SVR and Logistic Regression instead of SVC, but nothing helps. I tried different scalings for the training input data, like StandardScaler() and scale().
I used this as a reference
What should I do?
As a rule of thumb, we usually follow this convention:
For little number of features, go with Logistic Regression.
For a lot of features but not a lot of data, go with SVM.
For a lot of features and a lot of data, go with Neural Network.
Because your dataset has 10K cases, it'd be better to use Logistic Regression, because SVM will take forever to finish!
Nevertheless, because your dataset contains a lot of classes, there is a chance of class imbalance in your splits. Thus I tried to work around this problem by using stratified splitting (StratifiedShuffleSplit and StratifiedKFold) instead of train_test_split, which doesn't guarantee balanced classes in the splits.
Moreover, I used GridSearchCV with StratifiedKFold to perform cross-validation in order to tune the parameters and try all the different solvers.
So the full implementation is as follows:
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, StratifiedKFold, StratifiedShuffleSplit
import numpy as np


def getDataset(path, x_attr, y_attr):
    """
    Extract dataset from CSV file
    :param path: location of csv file
    :param x_attr: list of Features Names
    :param y_attr: Y header name in CSV file
    :return: tuple, (X, Y)
    """
    df = pd.read_csv(path)
    X = np.array(df[x_attr]).reshape(len(df), len(x_attr))
    Y = np.array(df[y_attr])
    return X, Y


def stratifiedSplit(X, Y):
    sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
    train_index, test_index = next(sss.split(X, Y))
    X_train, X_test = X[train_index], X[test_index]
    Y_train, Y_test = Y[train_index], Y[test_index]
    return X_train, X_test, Y_train, Y_test


def run(X_data, Y_data):
    X_train, X_test, Y_train, Y_test = stratifiedSplit(X_data, Y_data)
    param_grid = {'C': [0.01, 0.1, 1, 10, 100, 1000], 'penalty': ['l1', 'l2'],
                  'solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga']}
    model = LogisticRegression(random_state=0)
    clf = GridSearchCV(model, param_grid, cv=StratifiedKFold(n_splits=10))
    clf.fit(X_train, Y_train)
    print(accuracy_score(Y_train, clf.best_estimator_.predict(X_train)))
    print(accuracy_score(Y_test, clf.best_estimator_.predict(X_test)))


X_data, Y_data = getDataset("data - Sheet1.csv", ['distance'], 'orders')
run(X_data, Y_data)
Despite all the attempts with different algorithms, the accuracy didn't exceed 36%!
Why is that?
If you want a person to recognize/classify other people by their T-shirt color, you cannot say: "hey, if it's red that means he's John, and if it's red it's Peter, but if it's red it's Aisling!" They would say, "really, what the heck is the difference?"
And that's exactly what is in your dataset!
Simply run print(len(np.unique(X_data))) and print(len(np.unique(Y_data))) and you'll find that the numbers are striking; in a nutshell you have:
Number of Cases: 10000 !!
Number of Classes: 118 !!
Number of Unique Inputs (i.e. Features): 66 !!
All classes share a huge amount of information, which makes it impressive to get even up to 36% accuracy!
In other words, you have no informative features, which leads to a lack of uniqueness in each class's model!
What to do?
I believe you are not allowed to remove some classes, so the only two solutions you have are:
Either live with this very valid result.
Or add more informative feature(s).
Update
Now that you have provided the same dataset but with more features (i.e. the complete set of features), the situation is different.
I recommend you do the following:
Pre-process your dataset (i.e. prepare it by imputing missing values or deleting rows that contain missing values, converting dates to some numerical representation, etc.).
Check which features are most important to the Orders classes; you can achieve that by using forests of trees to evaluate feature importances. There is a complete and simple example of how to do that in scikit-learn (see also the sketch after this list).
Create a new version of the dataset, but this time with Orders as the Y response and the above-found features as the X variables.
Follow the same GridSearchCV and StratifiedKFold procedure that I showed you in the implementation above.
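A minimal sketch of the feature-importance step (X_full, Y_orders and feature_names below are placeholders for your own feature matrix, labels and column names):
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X_full: (n_samples, n_features) array with all candidate features (placeholder)
# Y_orders: the Orders labels (placeholder); feature_names: list of column names (placeholder)
forest = RandomForestClassifier(n_estimators=250, random_state=0)
forest.fit(X_full, Y_orders)

importances = forest.feature_importances_
ranking = np.argsort(importances)[::-1]  # indices of features, most important first
for idx in ranking:
    print(feature_names[idx], round(importances[idx], 3))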
Hint
As mentioned by Vivek Kumar in the comments, a stratify parameter has been added to the train_test_split function in a scikit-learn update.
It works by passing the array-like ground truth, so you don't need my workaround in the stratifiedSplit(X, Y) function above.
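In other words, the stratified split above could be replaced by something like this (a sketch, reusing the X_data and Y_data loaded in the implementation):
from sklearn.model_selection import train_test_split

# stratify=Y_data keeps the class proportions the same in the train and test splits
X_train, X_test, Y_train, Y_test = train_test_split(
    X_data, Y_data, test_size=0.2, random_state=0, stratify=Y_data)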

Generating Difficult Classification Data Sets using scikit-learn

I am trying to generate a range of synthetic data sets using make_classification in scikit-learn, with varying sample sizes, prevalences (i.e., proportions of the positive class), and accuracies. Varying the sample size and prevalence is fairly straightforward, but I am having difficulty generating any data sets that have less than 50% accuracy using logistic regression. Playing around with the number of informative columns, the number of clusters per class, and the flip_y parameter (which randomly flips the class of a given proportion of observations) seems to reduce the accuracy, but not as much as I would like. Is there a way to vary the parameters of make_classification so as to reduce this further (e.g., to 20%)?
Thanks!
Generally, the combination of a fairly low number of samples n_samples, a high label-flipping probability flip_y, and a large number of classes n_classes should get you where you want to be.
You can try the following:
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
# 2-class problem
X, y = make_classification(n_samples=100, n_informative=2, flip_y=0.8, random_state=42)
cross_val_score(estimator=lr, X=X, y=y, scoring='accuracy', cv=10)
# Output
array([ 0.54545455, 0.27272727, 0.45454545, 0.2 , 0.4 ,
0.5 , 0.7 , 0.55555556, 0.55555556, 0.44444444])
# 8-class problem
X, y = make_classification(n_samples=100, n_classes=8, n_informative=4, n_clusters_per_class=1, flip_y=0.5, random_state=42)
cross_val_score(estimator=lr, X=X, y=y, scoring='accuracy', cv=5)
# Output
array([ 0.16666667, 0.19047619, 0.15 , 0.16666667, 0.29411765])
In case you go with binary classification only, you should choose flip_y carefully. If, for example, you choose flip_y to be high, that means you flip almost every label, hence making the problem easier (the consistency is preserved).
Hence, in binary classification, flip_y is effectively min(flip_y, 1 - flip_y), and setting it to 0.5 will make the classification really hard.
Another thing you can do: after creating the data, do dimension reduction, using PCA:
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
import numpy as np
clf = LogisticRegression()
X, y = make_classification(n_samples=10000, n_informative=18, n_features=20, flip_y=0.15, random_state=217)
print(cross_val_score(estimator=clf, X=X, y=y, scoring='accuracy', cv=4))
# prints [ 0.80287885  0.7904  0.796  0.78751501]
pca = PCA(n_components=10)
X = pca.fit_transform(X)
print(cross_val_score(estimator=clf, X=X, y=y, scoring='accuracy', cv=4))
# prints [ 0.76409436  0.7684  0.7628  0.75830332]
You can reduce n_components to get even poorer results, while keeping the original number of features:
pca = PCA(n_components=1)
X = pca.fit_transform(X)
X = np.concatenate((X, np.random.rand(X.shape[0], 19)), axis=1)  # concatenating random features
print(cross_val_score(estimator=clf, X=X, y=y, scoring='accuracy', cv=4))
# prints [ 0.5572  0.566  0.5552  0.5664]
Getting less than 50% accuracy is 'hard': even when you score against completely random feature vectors, the expected accuracy is still 0.5:
X = np.random.rand(10000, 20)
print(np.average(cross_val_score(estimator=clf, X=X, y=y, scoring='accuracy', cv=100)))
# prints 0.501489999
So 55% accuracy is considered very low.
