I use DecisionTreeClassifier from sklearn.
I need to set the splitter (the feature to split on) and min_samples_leaf for a particular tree node, rather than for the whole tree.
How can I do it?
You cannot define min_samples_leaf for a single node: to comply with a rule on that individual node, the model would probably end up assigning fewer samples to other nodes than the tree-wide min_samples_leaf allows.
If you are dealing with an imbalanced dataset, I suggest you oversample or undersample your data before feeding it to the model, or set the class weights manually.
According to scikit-learn's user guide:
Balance your dataset before training to prevent the tree from being biased toward the classes that are dominant. Class balancing can be done by sampling an equal number of samples from each class, or preferably by normalizing the sum of the sample weights (sample_weight) for each class to the same value.
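For example, here is a minimal sketch of the class-weight route using DecisionTreeClassifier's built-in class_weight option (the dataset is a made-up toy example):
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Made-up imbalanced dataset: roughly 90% class 0, 10% class 1
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# class_weight="balanced" normalizes the total sample weight per class,
# which is the reweighting the user guide describes
clf = DecisionTreeClassifier(class_weight="balanced", min_samples_leaf=5)
clf.fit(X, y)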
I have trained a model using lightgbm.sklearn.LGBMClassifier from the lightgbm package. I can find out the number and names of the columns of the training data from the model, but I have not found a way to find the number of rows of the training data. Is it possible to do so? The best solution would be to obtain the training data itself from the model, but I have not come across anything like that.
# This gives the number of the columns the model is trained with
lgbm_model.n_features_
# Any way to find out the row number of the training data as well?
lgbm_model.n_instances_ # does not exist!
The tree structure of a LightGBM model includes information about how many records from the training data would fall into each node in the tree if that node were a leaf node. In LightGBM's code, this value is called internal_count.
Since all data matches the root node of each tree, in most situations you can use that information to figure out, given a LightGBM model, how many instances were in the training data.
Consider the following example, using lightgbm==3.3.2 and Python 3.8.8.
import lightgbm as lgb
from sklearn.datasets import make_blobs

# Train a small classifier on 1234 samples
X, y = make_blobs(n_samples=1234, centers=[[-4, -4], [-4, 4]])
clf = lgb.LGBMClassifier(num_iterations=10, subsample=0.5)
clf.fit(X, y)

# The root node of the first tree matches all training rows,
# so its internal_count is the training set size
num_data = clf.booster_.dump_model()["tree_info"][0]["tree_structure"]["internal_count"]
print(num_data)
# 1234
This will work in most cases. There are two special circumstances where this number could be misleading as an answer to the question "how much data was used to train this model?":

- if you set bagging_fraction<1.0, then at each iteration LightGBM will only use a fraction of the training data to evaluate splits (see the LightGBM docs for details on bagging_fraction)
- if you use "training continuation", where you take an existing model and perform additional boosting rounds, and you use a different training set for those additional boosting rounds, then "how much data was used to train this model" will have a complicated answer that depends on which range of boosting rounds you're referring to by "this model"
I have my own dataset containing two classes, let's say 0 and 1. Besides, there is a large portion of nodes whose class is unlabeled. My goal is to predict these unlabeled nodes using a GCN, but I am confused about how to deal with these unlabeled nodes in PyTorch Geometric.
As far as I can think, I could label the nodes with 3 classes: 0, 1, and unknown. But doesn't that mean I am classifying the dataset into three classes? (That's not what I want, since unknown is not a class.)
Another way to deal with these nodes is to ignore them and simply run PyG on the labeled nodes. But then it seems these unlabeled nodes (which do have features) are useless in the dataset?
That very much depends on your use case and the data!
Case 1 - Graph Autoencoder
For this case let's assume the task is to find similar tweets. A way of doing this is to train a Graph Autoencoder (see example).
This approach is completely unsupervised and thus does not need any data to be labeled.
The resulting model should be able to generate an embedding for each node (in this case each tweet) so that the distance between similar tweets is lower than between dissimilar ones (measured, e.g., by cosine distance).
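A minimal sketch of such an autoencoder with PyG's GAE wrapper (the layer sizes here are invented for illustration):
import torch
from torch_geometric.nn import GAE, GCNConv

class Encoder(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels, out_channels):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.conv2 = GCNConv(hidden_channels, out_channels)

    def forward(self, x, edge_index):
        return self.conv2(self.conv1(x, edge_index).relu(), edge_index)

# No labels involved: the model is trained to reconstruct edges
model = GAE(Encoder(in_channels=32, hidden_channels=64, out_channels=16))
# Given node features x and connectivity edge_index:
# z = model.encode(x, edge_index)
# loss = model.recon_loss(z, edge_index)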
Case 2 - Semi-Supervised GCN
Another case would be to classify tweets as advertisement vs. non-advertisement. Since the idea behind GCNs is to train in a semi-supervised manner, it is no problem to only have labels for some of the tweets.
In order to tell PyG which nodes have labels and should be used for training, you can define a train_mask. All nodes with missing labels will still technically need a y value, which can be set to -1.
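A minimal sketch of this masking pattern, using a made-up toy graph (sizes and labels are invented for illustration):
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph: 5 nodes with 4 features each
x = torch.randn(5, 4)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]], dtype=torch.long)

# Unlabeled nodes get a placeholder label of -1
y = torch.tensor([0, 1, -1, -1, 1])
train_mask = y >= 0  # only labeled nodes enter the loss
data = Data(x=x, edge_index=edge_index, y=y, train_mask=train_mask)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(4, 16)
        self.conv2 = GCNConv(16, 2)

    def forward(self, data):
        h = self.conv1(data.x, data.edge_index).relu()
        return self.conv2(h, data.edge_index)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
model.train()
for _ in range(50):
    optimizer.zero_grad()
    out = model(data)
    # Loss is computed only on the nodes selected by train_mask
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()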
I'm using GradientBoostingClassifier for my imbalanced labeled datasets. It seems class_weight doesn't exist as a parameter for this classifier in sklearn. I see I can pass sample_weight to fit, but I cannot use it when I deal with VotingClassifier or GridSearch. Could someone help?
Currently there isn't a way to use class_weight for GB in sklearn.
Don't confuse this with sample_weight.
Sample weights change the loss function and the score that you're trying to optimize. They are often used with survey data, where sampling approaches have gaps.
Class weights are used to correct class imbalances as a proxy for over-/under-sampling. There is no direct way to do that for GB in sklearn (you can do it for random forests, though).
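As a workaround, you can derive per-sample weights from the class distribution and pass them to fit; a sketch on a made-up imbalanced dataset:
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.utils.class_weight import compute_sample_weight

# Made-up imbalanced dataset: ~90% class 0, ~10% class 1
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# Per-sample weights that mimic class_weight="balanced"
sw = compute_sample_weight(class_weight="balanced", y=y)

clf = GradientBoostingClassifier()
clf.fit(X, y, sample_weight=sw)

# GridSearchCV forwards fit parameters to the underlying estimator,
# so the same weights can be used during a grid search:
search = GridSearchCV(GradientBoostingClassifier(), {"n_estimators": [50, 100]})
search.fit(X, y, sample_weight=sw)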
Very late, but I hope it can be useful for other members.
In Zichen Wang's article on towardsdatascience.com, point 5, Gradient Boosting, says:
For instance, Gradient Boosting Machines (GBM) deals with class imbalance by constructing successive training sets based on incorrectly classified examples. It usually outperforms Random Forest on imbalanced datasets.
And a chart shows that half of the gradient boosting models have an AUROC over 80%. So, considering how GB models perform and how they are built, it seems unnecessary to introduce a class_weight parameter, as is done for RandomForestClassifier in the sklearn package.
In the book Introduction to Machine Learning with Python by Andreas C. Müller and Sarah Guido (2017 edition, page 89, Chapter 2, Supervised Learning, section Ensembles of Decision Trees, sub-section Gradient boosted regression trees (gradient boosting machines)):
They are generally a bit more sensitive to parameter settings than random forests, but can provide better accuracy if the parameters are set correctly.
Now, if you still have scoring problems due to imbalanced category proportions in the target variable, it is possible that your data should be split so that different models can be applied to different parts, because the data may not be as homogeneous as it seems. In other words, there may be a hidden variable, missing from your training set, that strongly influences the results; in that case even the best GB model will struggle to score correctly, because crucial information never appears in the feature matrix it is given.
Some updates:
I found, by chance, that some libraries implement it as a parameter of their gradient boosting objects. This is the case for H2O, where the balance_classes parameter is documented as:
Balance training data class counts via over/under-sampling (for imbalanced data).
Type: bool (default: False).
If you want to stick with sklearn, you should do as HakunaMaData said: over-/under-sample, because that is ultimately what the other libraries do internally when such a parameter exists.
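A minimal sketch of upsampling the minority class with plain sklearn utilities (X, y, and the class labels 0/1 here are assumptions):
import numpy as np
from sklearn.utils import resample

# Assume y contains classes 0 (majority) and 1 (minority)
X_min, y_min = X[y == 1], y[y == 1]
X_maj, y_maj = X[y == 0], y[y == 0]

# Upsample the minority class to the majority size (with replacement)
X_up, y_up = resample(X_min, y_min, replace=True,
                      n_samples=len(y_maj), random_state=0)
X_bal = np.vstack([X_maj, X_up])
y_bal = np.concatenate([y_maj, y_up])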
I am using Spark MLlib to train data for classification using the Random Forest algorithm. MLlib provides a RandomForest class with a trainClassifier method that does what's required.
Can I set a threshold value while training the dataset, similar to the cutoff option provided in R's randomForest package?
http://cran.r-project.org/web/packages/randomForest/randomForest.pdf
I found that MLlib's RandomForest class only provides options to pass the number of trees, impurity, number of classes, etc., but there is nothing like a threshold or cutoff option available. Can it be done in any way?
The short version is no; if we look at RandomForestClassifier.scala you can see that it always simply selects the max. You could override the predict function if you wanted, but it's not super clean. I've added a JIRA to track adding this.
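For what it's worth, the newer DataFrame-based spark.ml API does expose per-class thresholds; a sketch in PySpark (train_df is an assumed DataFrame with label and features columns):
from pyspark.ml.classification import RandomForestClassifier

# thresholds rescales class probabilities before the argmax,
# playing a role similar to R's cutoff option
rf = RandomForestClassifier(labelCol="label", featuresCol="features",
                            thresholds=[0.7, 0.3])
model = rf.fit(train_df)  # train_df: assumed DataFrame of (label, features)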
I am working with sklearn's implementation of KNN. While my input data has about 20 features, I believe some of the features are more important than others. Is there a way to:
1. set the feature weights for each feature when "training" the KNN learner.
2. learn what the optimal weight values are, with or without pre-processing the data.
On a related note, I understand that generally KNN does not require training, but since sklearn implements it using KD-trees, a tree must be built from the training data. However, this sounds like it's turning KNN into a binary tree problem. Is that the case?
Thanks.
kNN is simply based on a distance function. When you say "feature two is more important than others", it usually means that a difference in feature two is worth, say, 10x a difference in the other coordinates. A simple way to achieve this is to multiply coordinate #2 by its weight. So you put into the tree not the original coordinates, but the coordinates multiplied by their respective weights.
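A minimal sketch of that trick (the weight values and toy data here are invented):
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical weights for a 4-feature dataset; feature two counts 10x
w = np.array([1.0, 10.0, 1.0, 1.0])

X = np.random.rand(100, 4)           # toy data
y = np.random.randint(0, 2, 100)

# Scale each column by its weight, then fit KNN on the scaled data
knn = KNeighborsClassifier(n_neighbors=5).fit(X * w, y)
pred = knn.predict(X[:5] * w)        # apply the same scaling at query time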
In case your features are combinations of the coordinates, you might need to apply an appropriate matrix transform to your coordinates before applying the weights; see PCA (principal component analysis). PCA is likely to help you with question 2.
The answer to question 2 is called "metric learning", and it is currently not implemented in scikit-learn. Using the popular Mahalanobis distance amounts to rescaling the data using StandardScaler. Ideally, you would want your metric to take the labels into account.