LightGBM and XGBoost models can be dumped to plain text files containing human-readable model structure. In the end, they are just tree ensembles.
Is there any library to load these dumped models into the scikit-learn framework, e.g. to construct sklearn ensembles with the same splits and values?
That could be quite convenient, as there are some nice libraries built on the sklearn API, e.g. treeinterpreter.
For XGBoost you can use the xgbfir library, which parses the XGBoost model and displays feature interactions and rankings. Install it with:
pip install xgbfir
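A minimal usage sketch, based on xgbfir's README (the dataset and output file name are just placeholders):
import xgboost as xgb
import xgbfir
from sklearn.datasets import load_iris

# fit any XGBoost model, then dump feature interaction rankings to an Excel file
iris = load_iris()
model = xgb.XGBClassifier().fit(iris.data, iris.target)
xgbfir.saveXgbFI(model, feature_names=iris.feature_names, OutputXlsxFile='irisFI.xlsx')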
For LightGBM, I'm not aware of good options. Microsoft's LightGBM library allows PMML export, so perhaps you could export the model and then use a PMML parser.
I am building an API for training models, and figured I wanted to use ONNX to send the models back and forth.
I am testing with an sklearn XGBoost model, and it seems that it is a requirement to fit the model before I can export it to ONNX.
I want to define a custom or standard sklearn model, convert it to ONNX for transport, reopen and train it, and save it in ONNX.
Is this feasible at all?
My end goal is to have an API that can accept any sklearn, TensorFlow or similar model in an untrained state and then train it on the server.
ONNX is used to deliver model results, including pre- and post-processing or other manipulations, "in production".
The assumption is that the model is already trained and you only need to "predict" (or whatever similar action) on new data.
It sounds like what you need is Python (or other) code that will receive your API calls, translate them into the appropriate models, train the models, and then, if you want to be independent from an MLOps point of view, convert the result to ONNX.
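For that last step, here is a minimal sketch using the skl2onnx package; the input name, shape, and model choice are assumptions you would adapt to your data:
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# train a model first; skl2onnx converts only fitted estimators
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier().fit(X, y)

# declare the input signature, then serialize to an .onnx file
onnx_model = convert_sklearn(model, initial_types=[('input', FloatTensorType([None, 4]))])
with open('model.onnx', 'wb') as f:
    f.write(onnx_model.SerializeToString())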
I have a gradient boosting model saved in .pkl format. I have to load this model in TensorFlow.js. I can see that there is a way to load a Keras model, but I can't find a way to load an sklearn model. Is it possible to do this?
It is not possible to load an sklearn model in TensorFlow.js. TensorFlow.js can only load models written in TensorFlow.
Though I haven't tried it myself, I think you could possibly rewrite the classifier in TensorFlow (optionally behind a scikit-learn-style wrapper). That model can then be saved and converted to a format that can be loaded in TensorFlow.js.
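To illustrate that route (a sketch, not tested against your model: the architecture and data are placeholders), you would train an equivalent Keras model, save it, and convert it with the tensorflowjs_converter CLI that ships with the tensorflowjs pip package:
import numpy as np
import tensorflow as tf

# placeholder data standing in for the original training set
X = np.random.rand(100, 4).astype('float32')
y = np.random.randint(0, 2, size=100)

# a small Keras classifier taking over the sklearn model's role
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X, y, epochs=5, verbose=0)
model.save('model.h5')

# then, on the command line:
#   tensorflowjs_converter --input_format keras model.h5 web_model/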
If we want to use weights from the pretrained BioBERT model, we can execute the following command (here wrapped in os.system from Python) after downloading all the required BioBERT files.
os.system('python3 extract_features.py \
--input_file=trial.txt \
--vocab_file=vocab.txt \
--bert_config_file=bert_config.json \
--init_checkpoint=biobert_model.ckpt \
--output_file=output.json')
The above command reads a single file containing text and writes the extracted vectors to another file. The problem is that this does not scale easily to very large datasets containing thousands of sentences/paragraphs.
Is there a way to extract these features on the fly (using an embedding layer), as can be done for word2vec vectors in PyTorch or TF 1.3?
Note: BioBERT checkpoints do not exist for TF 2.0, so I guess there is no way to do this with TF 2.0 unless someone generates TF 2.0-compatible checkpoint files.
I will be grateful for any hint or help.
You can get the contextual embeddings on the fly, but the total time spent on getting the embeddings will always be the same. There are two options: 1. import BioBERT into the Transformers package and use it in PyTorch (which I would do), or 2. use the original codebase.
1. Import BioBERT into the Transformers package
The most convenient way of using pre-trained BERT models is the Transformers package. It was primarily written for PyTorch, but it also works with TensorFlow. It does not ship with BioBERT out of the box, so you need to convert it from the TensorFlow format yourself. There is a convert_tf_checkpoint_to_pytorch.py script that does that. People have had some issues with this script and BioBERT (this seems to be resolved).
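A sketch of the conversion call, assuming the script's standard flags and BioBERT's usual file names (adapt the paths to your download):
python convert_tf_checkpoint_to_pytorch.py \
  --tf_checkpoint_path biobert_model.ckpt \
  --bert_config_file bert_config.json \
  --pytorch_dump_path directory_with_converted_model/pytorch_model.bin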
After you convert the model, you can load it like this.
import torch
from transformers import BertTokenizer, BertModel

# Load tokenizer and model from the converted checkpoint directory
tokenizer = BertTokenizer.from_pretrained('directory_with_converted_model')
model = BertModel.from_pretrained('directory_with_converted_model')

# Call the model in a standard PyTorch way; the input must be a batched LongTensor
input_ids = torch.tensor([tokenizer.encode("Cool biomedical tetra-hydro-sentence.", add_special_tokens=True)])
with torch.no_grad():
    embeddings = model(input_ids)[0]  # last hidden states, one vector per token
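The first element of the model output is the sequence of last-layer hidden states, one vector per token; for a sentence-level feature you would typically pool these (e.g. mean-pool them or take the first, [CLS], vector).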
2. Use the BioBERT codebase directly
You can get the embeddings on the fly basically by reusing the code in extract_features.py. On lines 346-382, they initialize the model. You get the embeddings by calling estimator.predict(...).
For that, you need to format the input yourself. First, format the string (using the code on lines 326-337) and then call convert_examples_to_features on it.
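Roughly, the flow looks like the sketch below; the helper names (InputExample, convert_examples_to_features, input_fn_builder) come from extract_features.py, and estimator/tokenizer are assumed to be initialized as on lines 346-382:
# a sketch, assuming the helpers from extract_features.py are importable
from extract_features import InputExample, convert_examples_to_features, input_fn_builder

# wrap a raw string the same way the script's read_examples does
examples = [InputExample(unique_id=0, text_a="Cool biomedical sentence.", text_b=None)]
features = convert_examples_to_features(examples, seq_length=128, tokenizer=tokenizer)

# feed the features to the already-built estimator and read off the layer outputs
input_fn = input_fn_builder(features=features, seq_length=128)
for result in estimator.predict(input_fn, yield_single_examples=True):
    layer_output = result["layer_output_0"]  # embeddings for the first requested layer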
I am using gradient boosting regressor to build a predictive model.
After all the tuning/CV, I finally get my prediction right. I am now thinking about dumping the model to a file so that my production C++ program can load and use it.
It seems that sklearn provides model persistence through pickle, but I am wondering if there is a way to convert the pickled model into text, like XGBoost has. My production code is C++, so having pickle as the medium is really not handy.
Is there a 'dumpModel' function in the library?
Does anyone have any experience with this?
Thanks
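For reference, sklearn has no dumpModel, but the fitted trees are exposed through public attributes (estimators_ and each tree's tree_ arrays), so a plain-text dump can be written by hand. A minimal sketch (structure only; a full C++ reimplementation would also need the learning rate and the initial prediction):
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# fit a small ensemble on placeholder data, then walk and print each tree
X, y = make_regression(n_samples=100, n_features=4, random_state=0)
model = GradientBoostingRegressor(n_estimators=3).fit(X, y)

with open('model.txt', 'w') as f:
    for i, tree in enumerate(model.estimators_[:, 0]):
        t = tree.tree_
        f.write('tree %d\n' % i)
        for node in range(t.node_count):
            if t.children_left[node] == -1:  # leaf node
                f.write('  node %d leaf value=%f\n' % (node, t.value[node][0][0]))
            else:
                f.write('  node %d split feature=%d threshold=%f left=%d right=%d\n'
                        % (node, t.feature[node], t.threshold[node],
                           t.children_left[node], t.children_right[node]))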
I have trained an SVM (SVC) using scikit-learn over half a terabyte of data. The model is working fine and I need to port it to C, but I don't want to re-train the SVM from scratch because it takes way too long. Is there a way to easily export the model generated by scikit-learn and import it into LibSVM? Internally scikit-learn uses LibSVM, so theoretically it should be possible, but I haven't been able to find anything in the documentation. Any suggestions?
Is there a way to easily export the model generated by scikit-learn and import it into LibSVM?
No. The scikit-learn version of LIBSVM has been hacked up severely to fit it into the Python environment and the model is stored as NumPy/SciPy data structures.
Your best shot is to study the SVM decision function and reimplement it in C. The support vectors can be obtained from the SVC object as NumPy arrays, which are easily translated to C arrays.
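As a concrete illustration of that route, a sketch assuming a fitted binary RBF-kernel SVC named clf (the attribute names are sklearn's documented API, except _gamma, which is a private attribute holding the numeric gamma resolved at fit time, so verify it against your sklearn version):
import numpy as np

# extract the pieces of the decision function from a fitted binary SVC `clf`
sv = clf.support_vectors_    # support vectors, shape (n_sv, n_features)
alpha = clf.dual_coef_[0]    # signed dual coefficients y_i * alpha_i
b = clf.intercept_[0]        # bias term
gamma = clf._gamma           # numeric gamma resolved at fit time

def decision_function(x):
    # RBF kernel against every support vector, then the weighted sum plus bias
    k = np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))
    return np.dot(alpha, k) + b

# sv, alpha, b, and gamma can each be dumped as literal C arrays/constants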