catboost eval_metrics return value - decision-tree

I'm using CatBoostClassifier's eval_metrics to compute some metrics on my test set, and I'm confused about its output. For a given metric, by default it seems to return an array whose size equals the number of iterations.
This seems to be inconsistent with the predict function, which returns a single value only. Which number in the array returned by eval_metrics is consistent with the predict function?
I checked the documentation at https://catboost.ai/docs/concepts/python-reference_catboostclassifier_eval-metrics.html#python-reference_catboostclassifier_eval-metrics__output-format, but it's still not clear to me.

The CatBoost classifier is an ensemble classifier that uses boosting. Simply put, boosting algorithms iteratively train weak learners (decision trees in this case) to make predictions; each new tree learns from the collective errors made by the previous, weaker trees and tries to correct them. CatBoost is based on gradient boosting, which I won't delve too deeply into. What is relevant here is that a number of weaker trees are generated in the process, and when you call the eval_metrics() method you get the eval metric after each of the generated trees. You specify the number of trees when you provide iterations, num_boost_round, n_estimators or num_trees when creating the model (if not specified, the default is 1000).
The other arguments you pass to eval_metrics() define the range of trees evaluated: from ntree_start to ntree_end at intervals of eval_period. If these aren't provided, you get the specified metrics for every tree count up to the full ensemble, which is why you get a list of values. The last value in that list, computed with all trees, is the one consistent with predict(), because predict() uses the full ensemble by default.
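To make that concrete, here is a minimal sketch with made-up data, showing that eval_metrics() returns one value per iteration and that the last value, computed on the full ensemble, is the one that lines up with predict():

```python
# Minimal sketch with made-up data: one metric value per iteration,
# the last one corresponding to the full ensemble used by predict().
import numpy as np
from catboost import CatBoostClassifier, Pool

X = np.random.rand(100, 5)
y = np.random.randint(0, 2, 100)

model = CatBoostClassifier(iterations=50, verbose=False)
model.fit(X, y)

pool = Pool(X, y)
# Returns a dict: {'Accuracy': [score after 1 tree, after 2 trees, ...]}
scores = model.eval_metrics(pool, metrics=['Accuracy'],
                            ntree_start=0, ntree_end=0, eval_period=1)

print(len(scores['Accuracy']))   # 50 -> one value per iteration
print(scores['Accuracy'][-1])    # metric using all trees, consistent with predict()
```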

Related

In the scikit learn implementation of LDA what is the difference between transform and decision_function?

I am currently working on a project that uses Linear Discriminant Analysis to transform some high-dimensional feature set into a scalar value according to some binary labels.
So I train LDA on the data and the labels and then use either transform(X) or decision_function(X) to project the data into a one-dimensional space.
I would like to understand the difference between these two functions. My intuition would be that the decision_function(X) would be transform(X) + bias, but this is not the case.
Also, I found that those two functions give a different AUC score, and thus indicate that it is not a monotonic transformation as I would have thought.
In the documentation, it states that the transform(X) projects the data to maximize class separation, but I would have expected decision_function(X) to do this.
I hope someone could help me understand the difference between these two.
LDA projects your multivariate data onto a 1D space. The projection is based on a linear combination of all your attributes (columns in X), with the weights of each attribute determined by maximizing the class separation. Subsequently, a threshold value in the 1D space is determined which gives the best classification results. transform(X) gives you the value of each observation in this 1D space, x' = transform(X). decision_function(X) gives you the log-likelihood ratio of the positive class for each observation, log(P(y=1|x') / P(y=0|x')).
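A minimal sketch with made-up data showing the two calls side by side; the output shapes and the relationship between decision_function() and predict() are the points of interest:

```python
# Minimal sketch with made-up data: transform() vs decision_function() for LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X = np.random.rand(200, 10)
y = np.random.randint(0, 2, 200)

lda = LinearDiscriminantAnalysis().fit(X, y)

projection = lda.transform(X)       # position of each sample in the 1D discriminant space
scores = lda.decision_function(X)   # signed score per sample; > 0 means the positive class
labels = lda.predict(X)             # equivalent to thresholding decision_function at 0

print(projection.shape, scores.shape)                     # (200, 1) and (200,)
print(np.array_equal(labels, (scores > 0).astype(int)))   # True
```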

Is there a built in sklearn cross validation function for only validating against higher indices?

I'd like to use GridSearchCV, but with the condition that the lowest index of the data in the validation set be greater than the largest index in the training set. The reason is that the data is ordered in time, and future data gives unfair insight that would inflate the score. There's some discussion on this:
If the data ordering is not arbitrary (e.g. samples with the same class label are contiguous), shuffling it first may be essential to get a meaningful cross-validation result. However, the opposite may be true if the samples are not independently and identically distributed. For example, if samples correspond to news articles, and are ordered by their time of publication, then shuffling the data will likely lead to a model that is overfit and an inflated validation score: it will be tested on samples that are artificially similar (close in time) to training samples.
but it's not clear to me whether any of the splitting methods listed can accomplish what I'm looking for. It seems to be the case that I can define an iterable of indices and pass that into cv, but in that case it's not clear how many I should define (does it always use all of them? do different tests get different indices?)
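One way to get the constraint described above is to pass cv an explicit iterable of (train_indices, test_indices) pairs in which every validation index comes after every training index. GridSearchCV uses every split you supply and reports the mean score across them. A minimal sketch with made-up data and split points:

```python
# Minimal sketch with made-up data and split points: a custom cv iterable for
# GridSearchCV where the validation indices always come after the training indices.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

X = np.random.rand(100, 3)
y = np.random.rand(100)

# Each element is one (train_indices, test_indices) split; all of them are used
# and the reported score is the mean over the splits.
splits = [
    (np.arange(0, 60), np.arange(60, 80)),
    (np.arange(0, 80), np.arange(80, 100)),
]

search = GridSearchCV(Ridge(), param_grid={'alpha': [0.1, 1.0, 10.0]}, cv=splits)
search.fit(X, y)
print(search.best_params_)
```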

why is sklearn.feature_selection.RFECV giving different results for each run

I tried to do feature selection with RFECV, but it gives different results each time. Does cross-validation divide the sample X into random chunks or into sequential deterministic chunks?
Also, why is the score different for grid_scores_ and score(X,y)? Why are the scores sometimes negative?
Does cross-validation divide the sample X into random chunks or into sequential deterministic chunks?
CV divides the data into deterministic chunks by default. You can change this behaviour by setting the shuffle parameter to True.
However, RFECV uses sklearn.model_selection.StratifiedKFold if the y is binary or multiclass.
This means that it will split the data such that each fold has the same (or nearly the same) ratio of classes. In order to do this, the exact data in each fold can change slightly in different iterations of CV. However, this should not cause major changes in the data.
If you are passing a CV iterator using the cv parameter, then you can fix the splits by specifying a random state. The random state is linked to random decisions made by the algorithm. Using the same random state each time will ensure the same behaviour.
Also, why is the score different for grid_scores_ and score(X,y)?
grid_scores_ is an array of cross-validation scores, one per subset of features that was evaluated: the first entry is the score with the smallest number of features kept, and the last entry is the score with all features. The number of features removed between consecutive subsets is equal to the value of the step parameter, which is 1 by default.
score(X, y) selects the optimal number of features and returns the score for those features.
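A minimal sketch with made-up data illustrating both points: an explicit CV iterator with a fixed random_state makes the runs reproducible, and grid_scores_ can be compared with score(X, y):

```python
# Minimal sketch with made-up data: reproducible RFECV splits and the two scores.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Fixing random_state on the splitter removes the run-to-run variation in the folds.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
selector = RFECV(LogisticRegression(max_iter=1000), step=1, cv=cv)
selector.fit(X, y)

# One cross-validated score per feature-subset size (older sklearn attribute;
# recent versions expose the same information as cv_results_['mean_test_score']).
print(selector.grid_scores_)
# Score of the final model refit with the selected (optimal) number of features.
print(selector.score(X, y))
```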
Why are the scores sometimes negative?
This depends on the estimator and scorer you are using. If you have not set a scorer, RFECV will use the estimator's default score function. Generally this is accuracy, but in your particular case it might be something that can return a negative value (for example, the default R² score of regression estimators is negative when the model fits worse than the mean).

How to provide weighted eval set to XGBClassifier.fit()?

From the sklearn-style API of XGBClassifier, we can provide eval examples for early-stopping.
eval_set (list, optional) – A list of (X, y) pairs to use as a validation set for early-stopping
However, the format only mentions a pair of features and labels. So if the doc is accurate, there is no place to provide weights for these eval examples.
Am I missing anything?
If it's not achievable in the sklearn-style, is it supported in the original (i.e. non-sklearn) XGBClassifier API? A short example will be nice, since I never used that version of the API.
As of a few weeks ago, there is a new parameter for the fit method, sample_weight_eval_set, that allows you to do exactly this. It takes a list of weight arrays, i.e. one per evaluation set. I don't think this feature has made it into a stable release yet, but it is available right now if you compile xgboost from source.
https://github.com/dmlc/xgboost/blob/b018ef104f0c24efaedfbc896986ad3ed1b66774/python-package/xgboost/sklearn.py#L235
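A minimal sketch with made-up data of how the parameter is used, assuming a build of xgboost that already includes it:

```python
# Minimal sketch with made-up data: passing weights for the evaluation set.
import numpy as np
from xgboost import XGBClassifier

X_train = np.random.rand(500, 5)
y_train = np.random.randint(0, 2, 500)
w_train = np.random.rand(500)

X_val = np.random.rand(100, 5)
y_val = np.random.randint(0, 2, 100)
w_val = np.random.rand(100)      # weights for the eval examples

model = XGBClassifier(n_estimators=200)
model.fit(
    X_train, y_train,
    sample_weight=w_train,
    eval_set=[(X_val, y_val)],
    sample_weight_eval_set=[w_val],   # one array of weights per (X, y) pair in eval_set
    early_stopping_rounds=10,         # recent xgboost versions set this on the constructor instead
    verbose=False,
)
```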
EDIT - UPDATED per conversation in comments
Given that you have a target-variable representing real-valued gain/loss values which you would like to classify as "gain" or "loss", and you would like to make sure the validation-set of the classifier weighs the large-absolute-value gains/losses heaviest, here are two possible approaches:
Create a custom classifier which is just an XGBRegressor fed to a threshold, where the real-valued regression predictions are converted to 1/0 or "gain"/"loss" classifications. The .fit() method of this classifier would just call .fit() of the regressor, while its .predict() method would call .predict() of the regressor and then return the thresholded category predictions (a rough sketch of this wrapper appears below, after both options).
You mentioned you would like to try weighting the treatment of the records in your validation set, but there is no option for this in xgboost. The way to implement it would be through a custom eval metric. However, you pointed out that eval_metric must be able to return a score for a single label/prediction record at a time, so it couldn't accept all your row values and perform the weighting inside the eval metric. The solution you mentioned in your comment was to "create a callable which has a ref to all validation examples, pass the indices (instead of labels and scores) into eval_set, use the indices to fetch labels and scores from within the callable and return metric for each validation examples." This should also work.
I would tend to prefer option 1 as the more straightforward of the two, but trying both approaches and comparing the results is generally a good idea if you have the time, so I'm interested in how these turn out for you.
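A rough sketch of option 1 with placeholder data; the class name, the threshold value and the abs-value weighting are illustrative choices, not something fixed by the discussion above:

```python
# Rough sketch of option 1: an XGBRegressor whose real-valued predictions are
# thresholded into "gain"/"loss" classes. Names, threshold and weights are placeholders.
import numpy as np
from xgboost import XGBRegressor

class ThresholdedXGBRegressor:
    """Regress the real-valued gain/loss, then threshold the prediction into a class."""

    def __init__(self, threshold=0.0, **xgb_params):
        self.threshold = threshold
        self.regressor = XGBRegressor(**xgb_params)

    def fit(self, X, y, sample_weight=None, **fit_params):
        # Large-absolute-value gains/losses can be emphasised via sample_weight,
        # e.g. sample_weight=np.abs(y), for the training and/or eval data.
        self.regressor.fit(X, y, sample_weight=sample_weight, **fit_params)
        return self

    def predict(self, X):
        # 1 = "gain", 0 = "loss"
        return (self.regressor.predict(X) > self.threshold).astype(int)

# Usage with made-up data:
X = np.random.rand(200, 4)
y = np.random.randn(200)                     # real-valued gains/losses
clf = ThresholdedXGBRegressor(threshold=0.0, n_estimators=100)
clf.fit(X, y, sample_weight=np.abs(y))
labels = clf.predict(X)
```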

Spark Naive Bayes Result accuracy (Spark ML 1.6.0) [duplicate]

I am using Spark ML to optimise a Naive Bayes multi-class classifier.
I have about 300 categories and I am classifying text documents.
The training set is balanced enough and there are about 300 training examples for each category.
All looks good and the classifier is working with acceptable precision on unseen documents. But what I am noticing is that when classifying a new document, very often, the classifier assigns a high probability to one of the categories (the prediction probability is almost equal to 1), while the other categories receive very low probabilities (close to zero).
What are the possible reasons for this?
I would like to add that in Spark ML there is something called "raw prediction", and when I look at it, I can see negative numbers, but they have more or less comparable magnitude, so even the category with the high probability has a comparable raw prediction score. I am having difficulty interpreting these scores.
Let's start with a very informal description of the Naive Bayes classifier. If C is the set of all classes, d is a document, and x_i are its features, Naive Bayes returns:

argmax_{c in C} P(c|d) = argmax_{c in C} P(d|c) * P(c) / P(d)

Since P(d) is the same for all classes we can simplify this to

argmax_{c in C} P(d|c) * P(c)

where

P(d|c) = P(x_1, x_2, ..., x_n | c)

Since we assume that features are conditionally independent given the class (that is why it is naive) we can further simplify this (with a Laplace correction on each P(x_i|c) to avoid zeros) to:

argmax_{c in C} P(c) * prod_i P(x_i|c)

The problem with this expression is that in any non-trivial case it is numerically equal to zero (a product of many small probabilities underflows). To avoid this we use the following property:

log(a * b) = log(a) + log(b)

and replace the initial condition with:

argmax_{c in C} [ log P(c) + sum_i log P(x_i|c) ]
These are the values you get as the raw prediction. Since each element is negative (the logarithm of a value in (0, 1]), the whole expression is negative as well. As you discovered yourself, these values are further normalized so that the largest one becomes equal to 1, and then each is divided by the sum of the normalized values.
It is important to note that while the values you get are not strictly P(c|d), they preserve all the important properties. The order and the ratios are exactly (ignoring possible numerical issues) the same. If one class gets a prediction close to one while no other class comes near it, it means that, given the evidence, it is a very strong prediction. So it is actually something you want to see.
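A small sketch of that normalization in plain NumPy (not Spark code; the raw values are made up):

```python
# Minimal sketch with made-up raw values: turning log-space raw predictions
# into the probability column described above.
import numpy as np

raw = np.array([-85.2, -90.7, -110.3])    # log P(c) + sum_i log P(x_i|c), one value per class

shifted = np.exp(raw - raw.max())         # the largest value becomes exp(0) = 1
probabilities = shifted / shifted.sum()   # divide by the sum so they add up to 1

print(probabilities)        # roughly [0.996, 0.004, ~0] - one class dominates
print(probabilities.sum())  # 1.0
```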

Resources