Can BaggingClassifier manually define multiple base_estimator in Sklearn?

I'm trying to use BaggingClassifier from Sklearn to define multiple base estimators. From my understanding, it would look something like this:
clf = BaggingClassifier(base_estimator=[SVC(), DecisionTreeClassifier()], n_estimators=3, random_state=0)
But BaggingClassifier here doesn't take a list as its base_estimator.
I assume I could switch to StackingRegressor(estimators=) to define multiple estimators manually, but it would be a pain to list out, say, 100 estimators, not to mention the many permutations and combinations of the base estimators.
Can you help me understand how to define multiple base_estimator in sklearn.BaggingClassifier?

You can only pass one estimator to base_estimator. The whole idea behind BaggingClassifier is to train copies of a single model on random samples of the training data in an attempt to reduce its variance.
If you need two or more estimators, each trained on random subsets of the data, I suggest two options (sketched in the code after this list):
Create your own voting process from two separate bagging classifiers.
Train two different BaggingClassifiers and pass them to sklearn.ensemble.StackingClassifier.
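A minimal sketch of both options; the iris data is used purely for illustration, and base_estimator is the parameter name in older scikit-learn releases (it is renamed to estimator from version 1.2 on):
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier, StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# One bagging ensemble per type of base estimator.
bag_svc = BaggingClassifier(base_estimator=SVC(), n_estimators=10, random_state=0)
bag_tree = BaggingClassifier(base_estimator=DecisionTreeClassifier(), n_estimators=10, random_state=0)

# Option 1: combine them with a (hard) vote.
voter = VotingClassifier(estimators=[('bag_svc', bag_svc), ('bag_tree', bag_tree)])
voter.fit(X, y)

# Option 2: stack them, with a meta-learner on top of their predictions.
stacker = StackingClassifier(
    estimators=[('bag_svc', bag_svc), ('bag_tree', bag_tree)],
    final_estimator=LogisticRegression(),
)
stacker.fit(X, y)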

Related

sklearn.ensemble: Can you use fewer estimators than the number trained in the final model?

Most sklearn.ensemble models (GradientBoostingClassifier, RandomForestClassifier, etc.) take an n_estimators parameter for the number of estimators in the ensemble. If you've trained a model with X estimators, can you use fewer than X estimators for prediction? This can be useful for model selection.
Example: having trained 800 trees, you might want to see how a 400-tree model performs. Given that you already have the 800-tree model, you should be able to predict with just the first 400 trees rather than training again.
This can be done with boosting models, but a bagging model like random forest may not offer this option. Decision trees in boosting models are built sequentially, so using the first 400 of the 800 trees makes sense. Trees in a random forest have no sequence, however, so you would have to randomly sample 400 trees, which I don't think the module offers.
The boosting models (GradientBoostingClassifier, AdaBoostClassifier, and HistGradientBoostingClassifier) all support this through the staged_* methods (staged_predict, staged_predict_proba, etc.). You don't directly set the number of estimators; instead, you get all the partial predictions and can extract whichever one(s) you want.
For others like RandomForestClassifier there isn't built-in support, but you can access its estimators_ and do the aggregation of the predictions yourself (see the sketch below). You can also overwrite the estimators_ attribute with a subset (in a deep copy of the estimator, say) and then use the predict functionality directly; I wouldn't count on that working in future versions, but it does work as of 0.22.
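A rough sketch of both routes on a toy dataset, purely for illustration; staged_predict is the relevant staged_* method for the boosting model, and slicing estimators_ is the manual route for the forest:
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Boosting: staged_predict yields predictions after 1, 2, ..., n_estimators stages,
# so element 399 is what a 400-tree model would predict.
gbc = GradientBoostingClassifier(n_estimators=800, random_state=0).fit(X, y)
pred_400 = list(gbc.staged_predict(X))[399]

# Random forest: no built-in support, but you can aggregate a subset of
# estimators_ yourself (soft vote over the first 400 trees, as the forest does internally).
rf = RandomForestClassifier(n_estimators=800, random_state=0).fit(X, y)
subset = rf.estimators_[:400]
proba = np.mean([tree.predict_proba(X) for tree in subset], axis=0)
pred_rf_400 = rf.classes_[proba.argmax(axis=1)]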

Average weights from two .h5 files in Keras

I have trained two models on different datasets and saved the weights of each model as ModelA.h5 and ModelB.h5.
I want to average these weights, create a new file called ModelC.h5, and load it into the same model architecture.
How do I do it?
Weights of models trained on different datasets can't simply be averaged like this. It would be like training one person to classify 1000 images into 5 classes, training another person to classify a different 1000 images into the same 5 classes, and then trying to merge the two people into one.
Rather, what you can do is take an ensemble of the two networks. There are multiple ways to ensemble the predictions of both models, such as max voting, averaging or weighted averaging, bagging and boosting, etc. Ensembling helps combine weak classifiers into one strong classifier; a prediction-averaging sketch is shown below the link.
You can refer to this link to read more about different types of ensemble: Link
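A minimal averaging-ensemble sketch, assuming the two .h5 files hold full models saved with model.save() and that both models share the same input shape and classes; if the files contain only weights, rebuild the architecture first and call load_weights instead:
import numpy as np
from tensorflow import keras

model_a = keras.models.load_model('ModelA.h5')
model_b = keras.models.load_model('ModelB.h5')

def ensemble_predict(x):
    # Average the two models' predicted class probabilities, then take the argmax.
    mean_proba = np.mean([model_a.predict(x), model_b.predict(x)], axis=0)
    return mean_proba.argmax(axis=-1)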

How to convert a tf.estimator to a keras model?

The tf.estimator package defines a lot of estimators. I want to use them in Keras.
I checked the TF docs; there's only one conversion method, which converts a keras.Model to a tf.estimator, but no way to convert from an estimator to a Model.
For example, if we want to convert the following estimator:
tf.estimator.DNNLinearCombinedRegressor
How could it be converted into Keras Model?
You cannot, because estimators can run arbitrary code in their model_fn functions, while Keras models must be much more structured: whether sequential or functional, they basically must consist of layers.
A Keras model is a very specific type of object that can therefore be easily wrapped and plugged into other abstractions.
Estimators are based on arbitrary Python code with arbitrary control flow and so it's quite tricky to force any structure onto them.
Estimators support 3 modes - train, eval and predict. Each of these could in theory have completely independent flows, with different weights, architectures etc. This is almost unthinkable in Keras and would essentially amount to 3 separate models.
Keras, in contrast, supports 2 modes - train and test (a distinction that is necessary for things like Dropout and regularisation).
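For completeness, the one conversion that does exist is the forward direction the question mentions; a minimal sketch, assuming a TensorFlow version that still ships the estimator API (tf.keras.estimator.model_to_estimator is deprecated in recent releases):
import tensorflow as tf

# A small, compiled Keras model (compilation is required before conversion).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Wrap the Keras model as a tf.estimator.Estimator; the reverse is not provided.
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)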

What will GridsearchCV choose if there are multiple estimators having the same score?

I'm using RandomForestClassifier in sklearn and GridSearchCV to get the best estimator.
I'm wondering: when there are many estimators (from simple ones to complex ones) with the same score in GridSearchCV, which estimator will GridSearchCV return? The simplest one, or a random one?
GridSearchCV does not assess model complexity (though that would be a neat feature). Neither does it choose among the best models randomly.
Instead, GridSearchCV simply performs an np.argmin() on the stored errors. See the corresponding line in the source code.
Now, according to the NumPy docs,
In case of multiple occurrences of the minimum values, the indices corresponding to the first occurrence are returned.
That is, GridSearchCV will always select the first among the best models.
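A tiny illustration of that tie-breaking behaviour, using made-up error values purely for illustration:
import numpy as np

# Suppose three parameter settings produced these cross-validation errors,
# with the first and third tied for the minimum.
errors = np.array([0.10, 0.25, 0.10])
print(np.argmin(errors))  # prints 0: the first of the tied candidates wins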

How to adopt multiple different loss functions at different steps of an LSTM in Keras

I have a set of sentences and their scores, and I would like to train a marking system that can predict the score for a given sentence. One example looks like this:
(X =Tomorrow is a good day, Y = 0.9)
I would like to use an LSTM to build such a marking system and also consider the sequential relationship between the words in the sentence, so the training example shown above is transformed as follows:
(x1=Tomorrow, y1=is) (x2=is, y2=a) (x3=a, y3=good) (x4=day, y4=0.9)
When training this LSTM, I would like the first three time steps to use a softmax classifier and the final step to use MSE, so the loss used in this LSTM is obviously composed of two different loss functions. It seems Keras does not provide a way to address my problem directly. In addition, I am not sure whether my method of building the marking system is correct or not.
Keras supports multiple loss functions as well:
model = Model(inputs=inputs, outputs=[lang_model, sent_model])
model.compile(optimizer='sgd',
              loss=['categorical_crossentropy', 'mse'],
              metrics=['accuracy'],
              loss_weights=[1., 1.])
Based on your explanation, I think you need a model that first predicts a token based on the previous tokens (in the NLP domain this is usually called a language model), and then computes a score, which I assume is a sentiment score (the idea applies to other domains as well).
To do so, you can train your language model with an LSTM and use the last output of the LSTM for your ranking task. To this end, you need to define two loss functions: categorical_crossentropy for the language model and MSE for the ranking task; a rough sketch of such a model follows the tutorial link below.
This tutorial would be helpful: https://www.pyimagesearch.com/2018/06/04/keras-multiple-outputs-and-multiple-losses/
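A rough sketch of such a two-headed model, with made-up vocabulary size, sequence length and layer sizes; the output names lang_model and sent_model simply mirror the compile() call above:
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, seq_len, embed_dim = 10000, 20, 64   # assumed values

inputs = keras.Input(shape=(seq_len,), dtype='int32')
x = layers.Embedding(vocab_size, embed_dim)(inputs)
lstm_out = layers.LSTM(128, return_sequences=True)(x)

# Head 1: language-model head, predicting the next token at every time step.
lang_model = layers.TimeDistributed(
    layers.Dense(vocab_size, activation='softmax'), name='lang_model')(lstm_out)

# Head 2: score head, regressing the sentence score from the last LSTM state.
last_state = layers.Lambda(lambda t: t[:, -1, :])(lstm_out)
sent_model = layers.Dense(1, name='sent_model')(last_state)

model = keras.Model(inputs=inputs, outputs=[lang_model, sent_model])
model.compile(optimizer='sgd',
              loss=['categorical_crossentropy', 'mse'],
              loss_weights=[1., 1.])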
