ML models run very slowly on transformed text data - scikit-learn

I have a lemmatized, TF-IDF-vectorized text dataset with 32K rows and max_features = 10002, for a regression problem.
Why do the ensemble models (RandomForest, ExtraTrees, GradientBoosting) and the SVM model (SVR) take about 5-6 hours to fit the data, while DecisionTree and KNN fit much faster?
Is there any remedy to speed up the fitting?
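Kernel SVR scales roughly quadratically-to-cubically with the number of samples, and tree ensembles rescan all 10,002 columns at every split, which is the usual explanation for multi-hour fits. Below is a minimal sketch of common remedies, assuming a sparse TF-IDF matrix; the stand-in data and the names X_tfidf/y are placeholders, not from the question:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import LinearSVR

# Stand-in for the 32K x 10002 TF-IDF matrix; replace with your own data.
X_tfidf = sparse_random(2000, 10002, density=0.01, format="csr", random_state=0)
y = np.random.rand(2000)

# 1) Kernel SVR scales ~O(n^2..n^3) with samples; LinearSVR works
#    directly on sparse input and fits far faster.
svr = LinearSVR(C=1.0, max_iter=5000)
svr.fit(X_tfidf, y)

# 2) Shrink the 10,002 sparse columns before the tree ensembles see them
#    (latent semantic analysis); trees scan every feature at every split.
svd = TruncatedSVD(n_components=300, random_state=0)
X_reduced = svd.fit_transform(X_tfidf)

# 3) Parallelize the forest across all CPU cores.
rf = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
rf.fit(X_reduced, y)
```

LinearSVR keeps the input sparse, while TruncatedSVD shrinks the feature space before the slower ensembles ever see it.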

Related

Multilabel text classification with BERT and highly imbalanced training data

I'm trying to train a multilabel text classification model using BERT. Each piece of text can belong to 0 or more of a total of 485 classes. My model consists of a dropout layer and a linear layer added on top of the pooled output from the bert-base-uncased model from Hugging Face. The loss function I'm using is the BCEWithLogitsLoss in PyTorch.
I have millions of labeled observations to train on. But the training data are highly unbalanced, with some labels appearing in less than 10 observations and others appearing in more than 100K observations! I'd like to get a "good" recall.
My first attempt at training without adjusting for data imbalance produced a micro recall rate of 70% (good enough) but a macro recall rate of 45% (not good enough). These numbers indicate that the model isn't performing well on underrepresented classes.
How can I effectively adjust for the data imbalance during training to improve the macro recall rate? I see we can provide label weights to the BCEWithLogitsLoss loss function. But given the very high imbalance in my data, leading to weights in the range of 1 to 1M, can I actually get the model to converge? My initial experiments show the weighted loss going up and down during training.
Alternatively, is there a better approach than using BERT + dropout + linear layer for this type of task?
In your case it might be helpful to balance the labels in the training data. You have a lot of data, so you can afford to lose part of it by balancing. But before you do this, I recommend reading this answer about balancing classes in training data.
If you really only care about recall, you could try tuning your model to maximize recall.
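On the weighting question itself, here is a minimal sketch of clamped per-label pos_weight for BCEWithLogitsLoss; the label_matrix array and the clamp ceiling of 100 are illustrative choices, not values from the question:

```python
import numpy as np
import torch
import torch.nn as nn

num_labels = 485
# Hypothetical binary label matrix (n_samples x 485); use your real labels.
label_matrix = (np.random.rand(10000, num_labels) < 0.02).astype(np.float32)

pos_counts = label_matrix.sum(axis=0) + 1.0       # +1 avoids division by zero
neg_counts = label_matrix.shape[0] - pos_counts
pos_weight = neg_counts / pos_counts              # raw ratios can span 1..1M

# Clamping (or a sqrt transform) keeps extreme weights from making the
# weighted loss oscillate during training.
pos_weight = np.clip(pos_weight, 1.0, 100.0)

criterion = nn.BCEWithLogitsLoss(
    pos_weight=torch.tensor(pos_weight, dtype=torch.float32)
)
```

Clamping (or a square-root transform) is a common way to stop extreme weights from producing the loss oscillation described above.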

sklearn HistGradientBoostingClassifier with large unbalanced data

I've been using Sklearn HistGradientBoostingClassifier to classify some data. My experiment is multi-class classification with single label predictions (20 labels).
My experiments cover two cases. The first measures the accuracy of the algorithm without data augmentation (around 3,000 unbalanced samples). The second measures accuracy with data augmentation (around 12,000 unbalanced samples). I am using default parameters.
In the first case, HistGradientBoostingClassifier shows an accuracy of around 86.0%. However, with data augmentation, the results show weak accuracy, around 23%.
I suspect this drop comes from the unbalanced dataset, but since HistGradientBoostingClassifier in sklearn has no built-in feature for handling unbalanced datasets, I cannot verify that.
Has anyone had the same kind of problem with a large dataset and HistGradientBoostingClassifier?
Edit: I tried other algorithms with the same data split, and the results seem normal (accuracy around 5% higher with data augmentation). I am wondering why I only get this with HistGradientBoostingClassifier.
Accuracy is a poor metric when dealing with imbalanced data. Suppose I have a 90:10 split between class 0 and class 1: a DummyClassifier that only predicts class 0 will achieve 90% accuracy.
You'll have to look at precision, recall, f1, confusion matrix, and not just accuracy alone.
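As a concrete illustration, here is a small sketch of the majority-class trap with scikit-learn's metrics (the 90:10 toy arrays are made up for the example):

```python
from sklearn.metrics import classification_report, confusion_matrix

y_true = [0] * 9 + [1]        # a 90:10 class split
y_pred = [0] * 10             # dummy model: always the majority class

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, zero_division=0))
# Accuracy is 90%, yet recall for class 1 is 0.0: exactly the trap above.
```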
I have found something that could explain the weak accuracy of HistGradientBoostingClassifier with default parameters on the augmented dataset of roughly 12,000 samples.
I compared HistGradientBoostingClassifier and LightGBM on the same data split (sklearn's HistGradientBoostingClassifier is inspired by Microsoft's LightGBM). HistGradientBoostingClassifier shows a weak accuracy of 24.7% and LightGBM a strong one of 87.5%.
As I read in sklearn's and Microsoft's docs, HistGradientBoostingClassifier "cannot handle properly" an unbalanced dataset, while LightGBM can. The latter has this parameter: class_weight (dict, 'balanced' or None, optional (default=None)) (found on that page).
My hypothesis is that the dataset becomes more unbalanced with augmentation and that, without any feature in HistGradientBoostingClassifier to handle unbalanced data, the algorithm is misled.
Also, as mentioned by Hanafi Haffidz in the comments, the algorithm could tend to overfit with default parameters.
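For reference, a minimal sketch of the quoted LightGBM parameter, with make_classification standing in for the real 12,000-sample augmented split:

```python
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification

# Stand-in for the real 12,000-sample, 20-class augmented split.
X_train, y_train = make_classification(
    n_samples=12000, n_features=40, n_informative=20,
    n_classes=20, random_state=0,
)

# class_weight='balanced' reweights each class inversely to its frequency,
# the knob the question says HistGradientBoostingClassifier lacked.
clf = LGBMClassifier(class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
```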

Would training a BERT Multi-Label Classifier for 100 labels decrease accuracy a lot?

I am trying to train a text classifier that can classify a sentence as being of a certain query type. I have used the BERT model and trained a multi-label classifier which does the job with 90% accuracy for about 20 labels.
My question: if I have to train the model for 100/200 labels, would the accuracy be impacted severely?
If your class distributions do not have a large overlap and you have a good amount of training data representing each class, your accuracy should not be severely impacted. For a data-hungry model like BERT, it's all about data. If you have a large amount of data representing each of your 100/200 classes, you are good to go.

How to improve validation accuracy when training a convolutional neural network?

I am training a CNN model (built with Keras). The input data consists of around 10,200 images over 120 classes. Plotting the data frequency, I can see that the number of samples per class is more or less uniform.
The problem I am facing is that the training loss goes down with epochs, but the validation loss first falls and then keeps increasing. The accuracy plots reflect this: training accuracy finally settles at 0.94, while validation accuracy stays around 0.08.
Basically, it's a case of overfitting.
I am using a learning rate of 0.005 and dropout of 0.25.
What measures can I take to get better validation accuracy? Is it possible that the sample size for each class is too small and I need data augmentation to get more data points?
It's hard to say what the reason could be. First, you can try classical regularization techniques like reducing the size of your model, adding dropout, or adding l1/l2 regularizers to the layers. But this is more like randomly guessing the model's hyperparameters and hoping for the best.
The scientific approach would be to look at your model's outputs, try to understand why it produces them, and check your pipeline. Did you have a look at the outputs (are they all the same)? Did you preprocess the validation data the same way as the training data? Did you make a stratified train/test split, i.e. keep the class distribution the same in both sets? Is the data shuffled when you feed it to your model?
In the end you have only about ~85 images per class, which is really not a lot; compare CIFAR-10 and CIFAR-100 with 6000/600 images per class, or ImageNet with ~20K classes and 14M images (~500 images per class). So data augmentation could be beneficial as well; see the sketch below.
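As a starting point, here is a minimal sketch of such augmentation in Keras, assuming the images arrive as NumPy arrays; the shapes and augmentation ranges are illustrative, not taken from the question:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Stand-in arrays; replace with the real ~10,200-image dataset.
x_train = np.random.rand(100, 64, 64, 3).astype("float32")
y_train = np.random.randint(0, 120, size=(100,))

datagen = ImageDataGenerator(
    rotation_range=15,        # small random rotations
    width_shift_range=0.1,    # horizontal jitter
    height_shift_range=0.1,   # vertical jitter
    zoom_range=0.1,
    horizontal_flip=True,     # only if flipping preserves the label
)

# Feed augmented batches to an already-built model, e.g.:
# model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=50)
```

Whether flips and rotations are label-preserving depends on your images, so treat the ranges as knobs to validate, not defaults.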

Is there any way to evaluate RMSE and MAE on out-of-bag data for a random forest regressor?

I am working with a random forest regression model for my thesis. My dataset is small (about 3,000 samples and 20 features) and the model is overfitting the training data. Since the dataset is small, I don't want to split it into train (train + OOB) and test sets, so I am using a bagging regressor to avoid the overfitting problem.
I'm trying to evaluate performance metrics for the regression model. I can calculate RMSE and MAE values for the training set, but I don't know how to check these metrics on the out-of-bag data. Any suggestions?
Thanks in advance.
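For what it's worth, scikit-learn's RandomForestRegressor stores per-sample out-of-bag predictions when oob_score=True, which is enough to compute both metrics. A minimal sketch, with make_regression standing in for the real ~3,000-sample dataset:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Stand-in for the real ~3,000-sample, 20-feature dataset.
X, y = make_regression(n_samples=3000, n_features=20, noise=10.0, random_state=0)

rf = RandomForestRegressor(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)

# oob_prediction_ holds, for each training row, the average prediction of
# only those trees that never saw that row during bootstrapping.
oob_pred = rf.oob_prediction_
rmse = np.sqrt(mean_squared_error(y, oob_pred))
mae = mean_absolute_error(y, oob_pred)
print(f"OOB RMSE: {rmse:.3f}, OOB MAE: {mae:.3f}")
```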
