I have a dataset which has a fraud_label and a set of other feature variables. How can I find the best rule that identifies fraud_label correctly with the best precision and recall values? Examples of features are number_of_site_visits, external_fraud_score, etc. I need to be able to come up with a rule which says that if number_of_site_visits is less than X and external_fraud_score is greater than Y, then we get the best precision and recall. I have to do this in Python, and any help or direction you can provide would be very helpful.
I have tried a Random Forest model, but that gives me feature importances and not exact threshold values.
A good way to find a rule that identifies fraud_label with the best precision and recall is to use a supervised machine learning algorithm such as logistic regression or a support vector machine. You can train a model on your dataset, use it to predict fraud_label, and then evaluate the predictions with metrics such as precision and recall.
You can also use grid search with cross-validation to find the optimal parameters for your model, which will help you identify the best thresholds for each feature variable and arrive at a rule with the best precision and recall values.
In Python, you can use the scikit-learn library to implement these algorithms.
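A minimal sketch of that workflow in scikit-learn, assuming the data sits in a CSV with the columns mentioned in the question (the file name and the grid of C values are placeholders, not part of the original question):

    import pandas as pd
    from sklearn.model_selection import train_test_split, GridSearchCV
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import precision_score, recall_score

    df = pd.read_csv("fraud_data.csv")  # hypothetical file; replace with your dataset
    X = df.drop(columns=["fraud_label"])
    y = df["fraud_label"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=42)

    # Grid search over the regularization strength, optimizing F1
    # (which balances precision and recall) with 5-fold cross-validation.
    grid = GridSearchCV(LogisticRegression(max_iter=1000),
                        param_grid={"C": [0.01, 0.1, 1, 10]},
                        scoring="f1", cv=5)
    grid.fit(X_train, y_train)

    pred = grid.predict(X_test)
    print("precision:", precision_score(y_test, pred))
    print("recall:", recall_score(y_test, pred))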
Related
I am classifying data with categorical variables. It is data where people have provided information.
My training dataset is of varying quality. I have greater confidence in some of the data, i.e. I am more confident that people have provided correct information, whereas for some of the data I am not so sure.
How can I pass this information into a classification algorithm such as Naive Bayes or K nearest neighbour?
Or should I instead look to another algorithm?
I think what you want to do is provide individual weights (for the importance/confidence) of each data point you have.
For instance, if you are very certain that one data point is of higher quality and should have a higher weight than others in which you are less confident, you can specify that when fitting your classifier.
scikit-learn provides, for instance, the Gaussian Naive Bayes classifier (GaussianNB) for that.
Here, you can specify sample_weight when calling the fit() method.
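A minimal sketch, using tiny made-up arrays just for illustration; the confidence values here are placeholders for whatever per-row confidence you have:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]])
    y = np.array([0, 0, 1, 1])
    confidence = np.array([1.0, 0.3, 1.0, 0.5])  # one weight per row; higher = more trusted

    clf = GaussianNB()
    clf.fit(X, y, sample_weight=confidence)  # weights each sample's contribution to the fit
    print(clf.predict([[1.2, 1.9]]))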
I am a novice in DS/ML. I am trying to solve the Titanic case study on Kaggle, but my approach has not been systematic so far. I have used correlation to find relationships between variables and have used KNN and Random Forest classification, but my models' performance has not improved. I have selected features based on the correlation between variables.
Please guide me: are there scikit-learn methods which can be used to identify features that contribute significantly to the prediction?
Through various boosting techniques you can improve accuracy to approximately 99%; I suggest you use Gradient Boosting.
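A minimal sketch, assuming X is a DataFrame of already-encoded numeric Titanic features and y is the "Survived" column (both assumed to be prepared beforehand); the boosted model's feature_importances_ can then be used to see which features contribute most:

    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    gbc = GradientBoostingClassifier(random_state=0)
    print("CV accuracy:", cross_val_score(gbc, X, y, cv=5).mean())

    gbc.fit(X, y)
    # Rank features by how much they reduce impurity across the boosted trees.
    for name, score in sorted(zip(X.columns, gbc.feature_importances_),
                              key=lambda t: t[1], reverse=True):
        print(f"{name}: {score:.3f}")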
I am using Random Forests in scikit-learn. I used feature_importances_ to see how important each feature is for the prediction goal, but I don't understand what this score is. Googling feature_importances_ says it is the mean decrease in impurity, but I'm still confused about whether this is the same as the mean decrease in Gini impurity. If so, how is it calculated for trees and for random forests? Besides the math, I want to really understand what it means.
The feature_importances_ attribute will tell you how much each feature contributes towards the prediction (information gain).
A random forest splits on the independent variables, or features, based on Gini impurity, information gain, chi-square, or entropy. The features that contribute the most to the information gain get a high score.
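A minimal sketch on a built-in dataset, showing what the scores look like in practice: each value is the impurity decrease attributed to a feature, averaged over all trees and normalized so the importances sum to 1.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_iris(return_X_y=True)
    feature_names = load_iris().feature_names

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    for name, importance in zip(feature_names, rf.feature_importances_):
        print(f"{name}: {importance:.3f}")
    print("sum:", rf.feature_importances_.sum())  # normalized to 1.0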
I am working with sklearn's implementation of KNN. While my input data has about 20 features, I believe some of the features are more important than others. Is there a way to:
1. Set the feature weights for each feature when "training" the KNN learner.
2. Learn what the optimal weight values are, with or without pre-processing the data.
On a related note, I understand that generally KNN does not require training, but since sklearn implements it using KD-trees, the tree must be generated from the training data. However, this sounds like it's turning KNN into a binary tree problem. Is that the case?
Thanks.
kNN is simply based on a distance function. When you say "feature two is more important than others", it usually means that a difference in feature two is worth, say, 10x a difference in the other coordinates. A simple way to achieve this is by multiplying coordinate #2 by its weight, so you put into the tree not the original coordinates but the coordinates multiplied by their respective weights.
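A minimal sketch of that idea with a made-up weight vector (the data and the 10x weight are purely illustrative): the weighted column dominates the Euclidean distance, and the same scaling must be applied at query time.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    X = np.array([[1.0, 0.1], [1.1, 0.2], [0.9, 0.9], [1.0, 1.0]])
    y = np.array([0, 0, 1, 1])

    weights = np.array([1.0, 10.0])   # per-feature weights (assumed)
    X_weighted = X * weights          # multiply each coordinate by its weight

    knn = KNeighborsClassifier(n_neighbors=3).fit(X_weighted, y)
    print(knn.predict(np.array([[1.0, 0.85]]) * weights))  # scale the query the same way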
In case your features are combinations of the coordinates, you might need to apply an appropriate matrix transform to your coordinates before applying the weights; see PCA (principal component analysis). PCA is likely to help you with question 2.
The answer to question 2 is called "metric learning" and is currently not implemented in scikit-learn. Using the popular Mahalanobis distance amounts to rescaling the data using StandardScaler. Ideally you would want your metric to take the labels into account.
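A minimal sketch of the rescaling mentioned above, on a built-in dataset: standardizing the features before KNN corresponds to a per-feature (diagonal) Mahalanobis-style rescaling.

    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
    print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())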
I am running a random forest classifier with scikit-learn and would like to calculate a precision metric (how many predictions matched the target value) as part of the results. Is there a built-in option to do that? If not, what would be the easiest way to implement it?
Thanks!
Yes, see the reference documentation on performance metrics: http://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics
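A minimal sketch, assuming clf, X_test, and y_test are your already-fitted random forest and held-out data (names assumed, not from the question). Note that precision_score measures the correctness of positive predictions; "how many predictions matched the target" is accuracy_score.

    from sklearn.metrics import precision_score, accuracy_score

    y_pred = clf.predict(X_test)  # clf: your fitted RandomForestClassifier (assumed)
    print("precision:", precision_score(y_test, y_pred))
    print("accuracy:", accuracy_score(y_test, y_pred))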