How to apply random forest properly? - python-3.x

I am new to machine learning and Python. I am trying to apply a random forest to predict a binary target. My data has 24 predictors (1000 observations); one is categorical (gender) and the rest are numerical. The numerical features are of two kinds: volumes of money in euros (very skewed and large in scale) and counts (e.g. the number of transactions at an ATM). I have transformed the large-scale features and performed imputation. Finally, I checked correlation and collinearity and removed some features on that basis (as a result I had 24 features). Now when I fit the random forest it is always perfect on the training set, while the cross-validation scores are not nearly as good, and applying it to the test set gives very low recall values. How should I remedy this?
def classification_model(model, data, predictors, outcome):
    # Fit the model:
    model.fit(data[predictors], data[outcome])
    # Make predictions on the training set:
    predictions = model.predict(data[predictors])
    # Print accuracy
    accuracy = metrics.accuracy_score(predictions, data[outcome])
    print("Accuracy : %s" % "{0:.3%}".format(accuracy))
    # Perform k-fold cross-validation with 5 folds
    kf = KFold(data.shape[0], n_folds=5)
    error = []
    for train, test in kf:
        # Filter training data
        train_predictors = (data[predictors].iloc[train, :])
        # The target we're using to train the algorithm.
        train_target = data[outcome].iloc[train]
        # Training the algorithm using the predictors and target.
        model.fit(train_predictors, train_target)
        # Record the score from each cross-validation run
        error.append(model.score(data[predictors].iloc[test, :], data[outcome].iloc[test]))
    print("Cross-Validation Score : %s" % "{0:.3%}".format(np.mean(error)))
    # Fit the model again so that it can be referred to outside the function:
    model.fit(data[predictors], data[outcome])
outcome_var = 'Sold'
model = RandomForestClassifier(n_estimators=20)
predictor_var = train.drop('Sold', axis=1).columns.values
classification_model(model,train,predictor_var,outcome_var)
#Create a series with feature importances:
featimp = pd.Series(model.feature_importances_, index=predictor_var).sort_values(ascending=False)
print(featimp)
outcome_var = 'Sold'
model = RandomForestClassifier(n_estimators=20, max_depth=20, oob_score = True)
predictor_var = ['fet1','fet2','fet3','fet4']
classification_model(model,train,predictor_var,outcome_var)

With a random forest it is very easy to overfit. To resolve this you need to do the hyperparameter search a little more rigorously to find the best values to use. [Here](http://scikit-learn.org/stable/auto_examples/model_selection/randomized_search.html) is the relevant example from the scikit-learn documentation.
Your model is overfitting, and you need to search for the hyperparameters that work best. The link provides implementations of both grid search and randomized search for hyperparameter estimation.
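For illustration, here is a minimal randomized-search sketch. The parameter ranges below are arbitrary assumptions, not tuned recommendations, and train, predictor_var and outcome_var refer to the variables used in the question's code:

from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Illustrative search space; adjust ranges to your data.
param_dist = {
    "n_estimators": randint(50, 500),       # number of trees
    "max_depth": [3, 5, 10, 20, None],      # limiting depth helps against overfitting
    "min_samples_leaf": randint(1, 20),     # larger leaves give smoother trees
    "max_features": ["sqrt", "log2", None], # features considered per split
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_dist,
    n_iter=50,            # number of sampled parameter settings
    scoring="recall",     # the question cares about recall
    cv=5,
    n_jobs=-1,
    random_state=0,
)
# search.fit(train[predictor_var], train[outcome_var])
# print(search.best_params_, search.best_score_)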
It will also be worthwhile to go through this MIT Artificial Intelligence lecture to get a deeper theoretical orientation: https://www.youtube.com/watch?v=UHBmv7qCey4&t=318s.
Hope this helps!

Related

How to speed up the KNN algorithm for face recognition in real time

I am working on face detection and recognition, where I want to detect faces in real time.
But when it comes to training, it takes a very long time to train on the data.
Is it possible to reduce the training time? Can anyone help me out with this problem?
'''
def train(train_dir, model_save_path=None, n_neighbors=None, knn_algo='ball_tree', verbose=False):
    X = []
    y = []
    # Loop through each person in the training set
    for class_dir in tqdm(os.listdir(train_dir)):
        if not os.path.isdir(os.path.join(train_dir, class_dir)):
            continue
        # Loop through each training image for the current person
        for img_path in image_files_in_folder(os.path.join(train_dir, class_dir)):
            image = face_recognition.load_image_file(img_path)
            face_bounding_boxes = face_recognition.face_locations(image)
            if len(face_bounding_boxes) != 1:
                # If there are no people (or too many people) in a training image, skip the image.
                if verbose:
                    print("Image {} not suitable for training: {}".format(img_path, "Didn't find a face" if len(face_bounding_boxes) < 1 else "Found more than one face"))
            else:
                # Add face encoding for current image to the training set
                X.append(face_recognition.face_encodings(image, known_face_locations=face_bounding_boxes)[0])
                y.append(class_dir.split('_')[0])
    # Determine how many neighbors to use for weighting in the KNN classifier
    if n_neighbors is None:
        n_neighbors = int(round(math.sqrt(len(X))))
        if verbose:
            print("Chose n_neighbors automatically:", n_neighbors)
    # Create and train the KNN classifier
    knn_clf = neighbors.KNeighborsClassifier(n_neighbors=n_neighbors, algorithm=knn_algo, weights='distance')
    print(knn_clf)
    knn_clf.fit(X, y)
    # Save the trained KNN classifier
    if model_save_path is not None:
        with open(model_save_path, 'wb') as f:
            pickle.dump(knn_clf, f)
    return knn_clf
'''
This is the final call:
'''
def trainer():
    # STEP 1: Train the KNN classifier and save it to disk
    # Once the model is trained and saved, you can skip this step next time.
    print("Training KNN classifier...")
    classifier = train("app/facerec/dataset", model_save_path="app/facerec/models/trained_model.clf", n_neighbors=3)
    print("Training complete!")
'''
I also want to know: instead of rewriting the 'trained_model.clf' file, is there any possibility of updating it instead?
Training a kNN model shouldn't impose a high runtime overhead. After all, the straightforward ("exact search") model is lazy: it stores the vectors and performs a brute-force search at query (or classification) time.
I suspect the embedding computations dominate your training time.
As mentioned by #johncasey, you might want to use approximate-kNN models (or similarity search engines). There are many open-source similarity search libraries. Yet, if you need a production-ready, robust, real-time, efficient solution, then you should check out pinecone.io. (Disclaimer: I work for Pinecone.)
The k-NN algorithm has O(n) query-time complexity. I recommend using an approximate nearest neighbour (ANN) algorithm instead; its time complexity is much lower. For example, Google image search is based on this kind of algorithm.
Spotify Annoy, Facebook Faiss and nmslib are ANN libraries.
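As a rough sketch of that idea with Annoy: the 128-dimension size matches the face_recognition encodings built in train() above, while the tree count and index file name are arbitrary assumptions:

from annoy import AnnoyIndex

dim = 128                          # face_recognition encodings are 128-d
index = AnnoyIndex(dim, "euclidean")
for i, encoding in enumerate(X):   # X as built in train() above
    index.add_item(i, encoding)
index.build(10)                    # 10 trees; more trees = better recall, slower build
index.save("face_index.ann")
# At query time, ids of the 3 closest stored encodings:
# neighbour_ids = index.get_nns_by_vector(query_encoding, 3)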

Cross-validation in sklearn using a custom CV

I am dealing with a binary classification problem.
I have two lists of indices, listTrain and listTest, which are partitions of the training set (the actual test set will be used only later). I would like to use the samples associated with listTrain to estimate the parameters, and the samples associated with listTest to evaluate the error in a cross-validation process (hold-out set approach).
However, I have not been able to find the correct way to pass this to sklearn's GridSearchCV.
The documentation says that I should create "An iterable yielding (train, test) splits as arrays of indices", but I do not know how to create this.
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=custom_cv, n_jobs=-1, verbose=0, scoring=errorType)
So, my question is: how do I create custom_cv based on these indices to be used in this method?
X and y are, respectively, the feature matrix and the vector of labels.
Example: suppose I have a single hyperparameter alpha that belongs to the set {1, 2, 3}. I would like to set alpha=1, estimate the parameters of the model (for instance the coefficients of a regression) using the samples associated with listTrain, and evaluate the error using the samples associated with listTest. Then I repeat the process for alpha=2 and finally for alpha=3, and choose the alpha that minimizes the error.
EDIT: Actual answer to the question. Try passing the cv argument a generator of the indices:
def index_gen(listTrain, listTest):
    yield listTrain, listTest

grid_search = GridSearchCV(estimator=model, param_grid=param_grid,
                           cv=index_gen(listTrain, listTest), n_jobs=-1,
                           verbose=0, scoring=errorType)
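If I'm not mistaken, cv also accepts a plain iterable of (train, test) index arrays, so a one-element list gives the same single hold-out split without defining a generator:

# Equivalent, as far as I know: a one-element list of (train, test) indices.
custom_cv = [(listTrain, listTest)]
grid_search = GridSearchCV(estimator=model, param_grid=param_grid,
                           cv=custom_cv, n_jobs=-1, verbose=0,
                           scoring=errorType)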
Original answer, before the edit:
As mentioned in the comment by desertnaut, what you are trying to do is bad ML practice, and you will end up with a biased estimate of the generalisation performance of the final model. Using the test set in the manner you're proposing will effectively leak test set information into the training stage, and give you an overestimate of the model's capability to classify unseen data. What I suggest in your case:
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=5,
                           n_jobs=-1, verbose=0, scoring=errorType)
grid_search.fit(x[listTrain], y[listTrain])
Now your training set will be split into 5 folds (you can choose the number here); for a specific set of hyperparameters the model is trained on 4 of those folds and tested on the fold that was left out. This is repeated 5 times, until every training example has been part of a left-out fold. The whole procedure is done for each hyperparameter setting you are testing (5 x 3 in this case).
grid_search.best_params_ will give you a dictionary of the parameters that performed the best over all 5 folds. These are the parameters that you use to train your final classifier, using again only the training set:
clf = LogisticRegression(**grid_search.best_params_).fit(x[listTrain], y[listTrain])
Now, finally your classifier is tested on the test set and an unbiased estimate of the generalisation performance is given:
predictions = clf.predict(x[listTest])
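As a small follow-up sketch (assuming y is an array of true labels aligned with x), the held-out performance can then be summarised with, for example:

from sklearn.metrics import classification_report

# Unbiased estimate: compare the held-out predictions with the true labels.
print(classification_report(y[listTest], predictions))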

How to get Precision, Recall, Accuracy and F1 for Binary Class

I'm building a machine learning model using Apache Spark's ML library, let's say a RandomForestClassifier.
I divide the dataset into training and test sets as below:
(tr, test) = dataframe.randomSplit([0.8, 0.2], seed=23)
apply the model
rf = RandomForestClassifier(numTrees=10, featuresCol="features",
                            labelCol="label")
model = rf.fit(tr)
prediction = model.transform(test)
evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction")
evaluator.evaluate(prediction)
I'm under the impression that this gives me AUC which is not accuracy. How do I get the Precision, recall, F1 and accuracy for this model?
My class variable is binary (0 or 1).
AUC is the area under the ROC curve. It has nothing to do with accuracy, but in my opinion it is a more useful metric, since it gives a much better overview of your model's capability.
All the metrics you need are here:
https://spark.apache.org/docs/latest/mllib-evaluation-metrics.html#binary-classification
Take notice that all the metrics are computed towards one label (depending on whether your true positives are the 0s or the 1s). If you have a class imbalance and you compute your metrics for the majority class (let's say the 1s), then your results might be misleading, so use the label that is more important for your model to classify correctly.
Please, read the documentation carefully before using the metrics to fully understand what they are all about.
Cheers.
You can use MulticlassMetrics to get precision and recall:
from pyspark.mllib.evaluation import MulticlassMetrics

predictionAndLabels = prediction.select("prediction", "label").rdd
# Instantiate the metrics object
multi_metrics = MulticlassMetrics(predictionAndLabels)
precision_score = multi_metrics.weightedPrecision
recall_score = multi_metrics.weightedRecall
Alternatively, you can get the Confusion Matrix and calculate your own.
confusion_matrix = multi_metrics.confusionMatrix().toArray()
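For a binary 0/1 problem, a sketch of deriving the usual metrics from that 2x2 matrix; this assumes rows are true labels and columns are predictions, ordered 0 then 1, with 1 as the positive class:

# Derive binary metrics from the 2x2 confusion matrix; assumes rows = true
# labels, columns = predictions, label order 0 then 1, positive class = 1.
tn, fp, fn, tp = confusion_matrix.ravel()
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / (tn + fp + fn + tp)
print(precision, recall, f1, accuracy)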

Get feature importance PySpark Naive Bayes classifier

I have a Naive Bayes classifier that I wrote in Python using a Pandas data frame, and now I need it in PySpark. My problem here is that I need the feature importance of each column, and when looking through the PySpark ML documentation I couldn't find any info on it.
Does anyone know if I can get the feature importance with the Naive Bayes Spark MLlib?
The code using Python is the following. The feature importance is retrieved with .coef_
df = df.fillna(0).toPandas()
X_df = df.drop(['NOT_OPEN', 'unique_id'], axis = 1)
X = X_df.values
Y = df['NOT_OPEN'].values.reshape(-1,1)
mnb = BernoulliNB(fit_prior=True)
y_pred = mnb.fit(X, Y).predict(X)
estimator = mnb.fit(X, Y)
# coef_: for a binary classification problem this is the log of the estimated probability of a feature given the positive class, i.e. higher values mean more important features for the positive class.
feature_names = X_df.columns
coefs_with_fns = sorted(zip(estimator.coef_[0], feature_names))
If you're interested in an equivalent of coef_, the property you're looking for is NaiveBayesModel.theta:
log of class conditional probabilities.
New in version 2.0.0.
i.e.
model = ... # type: NaiveBayesModel
model.theta.toArray() # type: numpy.ndarray
The resulting array is of size (number-of-classes, number-of-features), and rows correspond to consecutive labels.
It is probably better to evaluate the difference
log(P(feature_X | positive)) - log(P(feature_X | negative))
as a feature importance, because we are interested in the discriminative power of each feature_X (NB is, of course, a generative model).
Extreme example: some feature_X1 has the same value across all + and - samples, so it has no discriminative power.
The probability of this feature value is high for both + and - samples, but the difference of the log probabilities is 0.
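A rough sketch of that idea, assuming a fitted binary NaiveBayesModel where class index 1 is the positive class and feature_names is an assumed list of column names in the same order as the feature vector:

# Rank features by log P(feature|positive) - log P(feature|negative);
# theta has shape (n_classes, n_features).
theta = model.theta.toArray()
log_diff = theta[1] - theta[0]
ranking = sorted(zip(log_diff, feature_names), reverse=True)
for score, name in ranking[:10]:
    print(name, score)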

How to tune weights in Voting Classifier (Sklearn)

I am trying to do the following:
vc = VotingClassifier(estimators=[('gbc', GradientBoostingClassifier()),
                                  ('rf', RandomForestClassifier()),
                                  ('svc', SVC(probability=True))],
                      voting='soft', n_jobs=-1)
params = {'weights': [[1, 2, 3], [2, 1, 3], [3, 2, 1]]}
grid_search = GridSearchCV(param_grid=params, estimator=vc)
grid_search.fit(X_new, y)
print(grid_search.best_score_)
In this, I want to tune the weights parameter. If I use GridSearchCV it takes a lot of time, since it needs to fit the model for each candidate, which I guess should not be required. Something like the prefit option used by SelectFromModel in sklearn.feature_selection would be better.
Is there any other option, or am I misinterpreting something?
The following code (in my repo) would do this.
It contains a class VotingClassifierCV. It first makes cross-validated predictions for all classifiers, then loops over all weight combinations, choosing the best one using the pre-calculated predictions.
A more compute-friendly way would be to first tune each classifier's parameters separately on your training data, and then weight each classifier proportionally to its score on your target metric (say accuracy_score) on your validation data.
# parameter tune
models = {
    'rf': GridSearchCV(RandomForestClassifier(), rf_params).fit(X_train, y_train),
    'svc': GridSearchCV(SVC(), svc_params).fit(X_train, y_train),
}
# relative weights
model_scores = {
    name: sklearn.metrics.accuracy_score(
        y_validate,
        model.predict(X_validate),
        normalize=True
    )
    for name, model in models.items()
}
total_score = sum(model_scores.values())
# combine the parts
combined_model = VotingClassifier(
    list(models.items()),
    weights=[
        model_scores[name] / total_score
        for name in models.keys()
    ]
).fit(X_learn, y_learn)
Finally, you may fit the combined model with your learning (train + validate) data & evaluate with your test data.
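For instance, a minimal sketch of that final step, with X_test / y_test as the untouched test split:

from sklearn.metrics import accuracy_score

# Final, unbiased check of the weighted ensemble on the untouched test split.
print("test accuracy:", accuracy_score(y_test, combined_model.predict(X_test)))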
