Is it possible to oversample just one class out of 13?

I was wondering if it is possible to apply SMOTE or a similar technique to only one minority class. I have a text classification problem where all the minority classes have good accuracy (they contain unique words that differentiate them), except for one class whose words all overlap with the other classes, giving a very low prediction accuracy (31%).
I'm trying to increase the number of samples for only this class!

Yes, you can use synthetic oversampling on a single class. If you just want to reinforce the existing distribution of the minority class, SMOTE can help; if you're more worried about decision surfaces, a combination of techniques, such as oversampling with ADASYN plus undersampling with majority Tomek link removal, might be worth trying.
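For instance, with imbalanced-learn you can pass a dict as `sampling_strategy` so that only the weak class is targeted. A minimal sketch (the class label `7`, the target count `2000`, and `load_corpus()` are placeholders, not from the question):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from imblearn.over_sampling import SMOTE

# Hypothetical loader: returns raw documents and integer class labels.
X_text, y = load_corpus()

# SMOTE needs numeric feature vectors, so vectorize the text first.
X = TfidfVectorizer().fit_transform(X_text)

# Passing a dict restricts oversampling to the listed classes only:
# here the weak class (label 7, a placeholder) is synthesized up to 2000 samples,
# while every other class is left untouched.
smote = SMOTE(sampling_strategy={7: 2000}, random_state=42)
X_resampled, y_resampled = smote.fit_resample(X, y)
```

The same dict-based `sampling_strategy` also works for `ADASYN`, and `TomekLinks` from `imblearn.under_sampling` can be chained afterwards for the cleanup step mentioned above.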

Impact of labels' distribution in deep learning for a regression problem

I am trying to train a CNN model for a regression problem. Afterwards, I categorize the predicted labels into 4 classes and check some accuracy metrics. In the confusion matrix, the accuracy of classes 2 and 3 is around 54%, while the accuracy of classes 1 and 4 is above 90%. Labels are between 0 and 100 and the classes are 1: 0-45, 2: 45-60, 3: 60-70, 4: 70-100. I do not know where the problem comes from. Is it because of the distribution of labels in the training set, and what is the solution? Regards...
I attached the plot in the following link.
Training set target distribution
It's not a good idea to create classes that way. By giving some classes a smaller window of values (e.g. class 2 covers only 15 values while class 1 covers 45), you make it intrinsically harder for your model to predict class 2, and the best thing the model will learn during training is to avoid class 2 as much as possible.
You can confirm this by looking at the false negatives for classes 2 and 3: if there are too many of them, it might be due to this.
The best thing to do would be to divide your output space into equal portions and trust your model to learn which classes are less frequent, without trying to force that proportion yourself.
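As a toy illustration of equal-width binning (the [0, 100] range and four classes come from the question; the prediction values are made up):

```python
import numpy as np

# Hypothetical predicted values from the regression model.
preds = np.array([12.0, 47.5, 63.0, 88.2, 99.9])

edges = np.linspace(0, 100, num=5)              # 4 equal-width bins -> 5 edges
classes = np.digitize(preds, edges[1:-1]) + 1   # map to classes 1..4

print(classes)  # [1 2 3 4 4]
```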
If you don't get good results, it means you have to improve your model in other ways; for example, data augmentation to obtain a more uniform distribution of training samples may help.
If this doesn't sound convincing to you, have a look at this paper:
https://papers.nips.cc/paper/95-alvinn-an-autonomous-land-vehicle-in-a-neural-network.pdf
In end-to-end models for autonomous driving, neural networks have to predict classes indicating the steering angle. The distribution of these values is highly imbalanced as most of the time the car is going straight. Despite this, the best models do not discriminate against some classes to adapt to data distribution.
Good luck!

Multilabel classification with class imbalance in Pytorch

I have a multilabel classification problem, which I am trying to solve with CNNs in Pytorch. I have 80,000 training examples and 7,900 classes; every example can belong to multiple classes at the same time, and the mean number of classes per example is 130.
The problem is that my dataset is very imbalanced. For some classes, I have only ~900 examples, which is around 1%. For "overrepresented" classes I have ~12,000 examples (15%). When I train the model I use BCEWithLogitsLoss from pytorch with the positive weights parameter. I calculate the weights the same way as described in the documentation: the number of negative examples divided by the number of positives.
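For reference, a toy version of that weight calculation (the shapes and the `targets` name are placeholders, not my actual data):

```python
import torch
import torch.nn as nn

# Toy multi-hot target matrix: (num_examples, num_classes), 1 where a label applies.
targets = torch.randint(0, 2, (1000, 50)).float()

pos_counts = targets.sum(dim=0)                    # positives per class
neg_counts = targets.shape[0] - pos_counts         # negatives per class
pos_weight = neg_counts / pos_counts.clamp(min=1)  # docs: negatives / positives

criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
```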
As a result, my model overestimates almost every class: for both minor and major classes I get almost twice as many predictions as true labels. And my AUPRC is just 0.18, even though that is much better than no weighting at all, since in that case the model predicts everything as zero.
So my question is: how do I improve the performance? Is there anything else I can do? I tried different batch sampling techniques (to oversample the minority classes), but they don't seem to work.
I would suggest one of these strategies:
Focal Loss
A very interesting approach for dealing with unbalanced training data through tweaking of the loss function was introduced in
Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He and Piotr Dollár, Focal Loss for Dense Object Detection (ICCV 2017).
They propose to modify the binary cross-entropy loss in a way that decreases the loss and gradient of easily classified examples while "focusing the effort" on examples where the model makes gross errors.
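A minimal PyTorch sketch of the idea for multi-label logits (the standard binary variant, not the authors' reference implementation; the `gamma` and `alpha` defaults follow the paper):

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Focal loss (Lin et al., 2017) applied per label for multi-label outputs."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = targets * p + (1 - targets) * (1 - p)              # probability of the true label
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)  # class-balancing factor
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```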
Hard Negative Mining
Another popular approach is "hard negative mining": propagating gradients only for a subset of the training examples, the "hard" ones.
See, e.g.:
Abhinav Shrivastava, Abhinav Gupta and Ross Girshick Training Region-based Object Detectors with Online Hard Example Mining (CVPR 2016)
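A rough sketch of the same idea for the multi-label setting above (the 25% keep ratio is an arbitrary placeholder):

```python
import torch
import torch.nn.functional as F

def hard_example_loss(logits, targets, keep_ratio=0.25):
    """Back-propagate only through the examples with the largest per-example loss."""
    per_example = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none"
    ).mean(dim=1)                                    # one loss value per example
    k = max(1, int(keep_ratio * per_example.numel()))
    hard_losses, _ = per_example.topk(k)             # keep only the k "hardest" examples
    return hard_losses.mean()
```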
@Shai has provided two strategies developed in the deep learning era. I would like to offer some additional traditional machine learning options: over-sampling and under-sampling.
The main idea of both is to produce a more balanced dataset by resampling before you start training. Note that you will probably face some problems, such as losing data diversity (under-sampling) or overfitting the training data (over-sampling), but it can be a good starting point.
See the wiki link for more information.
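A minimal single-label sketch with imbalanced-learn (for a multi-label problem like the one above you would have to adapt this, e.g. by sampling at the batch level; `load_features()` is a placeholder):

```python
from collections import Counter
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler

X, y = load_features()   # hypothetical (n_samples, n_features) matrix and label vector

# Over-sampling: duplicate minority samples until classes are balanced.
X_over, y_over = RandomOverSampler(random_state=0).fit_resample(X, y)

# Under-sampling: drop majority samples instead (risks losing data diversity).
X_under, y_under = RandomUnderSampler(random_state=0).fit_resample(X, y)

print(Counter(y_over), Counter(y_under))
```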

I am getting average_precision_score(y_test, y_predict) = 1. What is the intuition behind it?

I am working on an imbalanced binary classification problem where the data is 97% in favour of one class. I am using a naive Bayes classifier and I am getting a test CV score of 1; average_precision_score() also returns 1. What is the intuition behind this result, and how can I classify this problem better?
General things you need to do:
1. A CV approach that accounts for class imbalance (something like StratifiedKFold). This way you can be sure that you always have the minority class in your test set
2. Another metric (possibly even a custom one that uses different weights for different error types). For example, take a look at the focal loss
3. Oversampling/undersampling techniques (imblearn in Python); see the sketch after this list for a combination of (1) and (3)
Further steps
4. Visualization (t-SNE). Can give you some ideas about the general pattern
5. Feature importance, and feature engineering based on the important features (can make classification easier)
6. Other models (depending on (4)), boosting
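A sketch combining points (1) and (3), assuming scikit-learn and imbalanced-learn (`load_data()` and the naive Bayes choice are placeholders for your actual setup):

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

X, y = load_data()   # hypothetical features and the 97/3 binary labels

# Putting SMOTE inside an imblearn Pipeline ensures it only ever sees training folds.
pipe = Pipeline([("smote", SMOTE(random_state=0)), ("clf", GaussianNB())])
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Average precision is far more informative than plain accuracy at this imbalance.
scores = cross_val_score(pipe, X, y, cv=cv, scoring="average_precision")
print(scores.mean())
```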
To classify this problem better you need to deal with the class imbalance issue. Try reading articles on how to handle class imbalance, like this one:
https://www.analyticsvidhya.com/blog/2017/03/imbalanced-classification-problem/

Latent Class Analysis Model Selection

When conducting Latent Class Analysis, the information criteria (i.e., AIC, BIC, aBIC) sometimes don't select the same model. Such is the case in a study of substance use patterns that I am conducting among 774 men who have sex with men. Figure 1 shows the fit criteria plotted for each number of latent classes. BIC and CAIC select the three-class model (see Figure 2). However, the aBIC selects a five-class model (see Figure 2).
How do you select a model solution under these circumstances? Is there a way to select variables or collapse variables down in order to optimize results?
It is never easy to select the number of classes for LCA, but there are some rules of thumb that I follow:
Based on Nylund, Asparouhov & Muthén (2007) you want to follow BIC and bootstrap likelihood ratio test (BLRT). Even then, they seldom agree – BLRT will tell you to pick a model with more classes, BIC will be more conservative and suggest fewer classes. But this is as close as you can get by using statistical tests.
Rely on the available theory underlying your model. Look for potential discrepancies with your theoretical knowledge and try to deduce from the theory how many classes are to be expected. There is no golden rule, LCA is a good method, but without theory it is quite meaningless. If you have little theory, what you can do to double check your findings is to relate your latent variable to a distal outcome (covariate) about which you might have some theory and see if it works out. For example, you suspect that one of your latent classes will be dominated by one gender: associate your latent variable with gender and see.
Parsimony rule: simple models are preferred to complex ones (Collins & Lanza, 2010). If a simpler model does all the work, why choose a complex one?
In your case, I would start with a 3 class model, since it is suggested by BIC and parsimony. After finishing the analysis and interpreting the findings, I would re-run the model with 4/5 classes and see if I would reach substantially different findings - something that is worth reporting on, any important or contradicting findings to what I have found with a 3 class model. If it just adds complexity, but does not contradict or improve what I have already known, I'd stick to a 3 class model.
Looking at the results, I think that the 5 class model does not provide anything beyond the 3 classes. In the 3 class model, you have one class of extensive drug users (16%), moderate drug users dominated by cannabis, popper, hallucinogens and cocaine (40%), and finally a class of light users dominated by alcohol and cannabis (44%). The 5 class model split the first two groups into specific smaller sub-groups, but you have to decide whether these splits are important for your research - whether they make sense for your research question.
I would also recommend checking bivariate residuals. It is possible that the model misfit that is suggesting more classes is generated by a residual association between your indicators. If you can justify it theoretically (for example by finding some similarity between the indicators beyond the latent class), you can add the residual association and obtain a similarly good fit with the 3 class model.
One last point, avoid using AIC for LCA altogether - it is a very poorly performing index! Use cAIC, BIC and aBIC instead. AIC does not correct for the sample size, which can be quite problematic with larger samples.
Sources:
Collins, L. M., & Lanza, S. T. (2010). Latent class and latent transition analysis: With applications in the social, behavioral, and health sciences. New York: Wiley.
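Nylund, K. L., Asparouhov, T., & Muthén, B. O. (2007). Deciding on the number of classes in latent class analysis and growth mixture modeling: A Monte Carlo simulation study. Structural Equation Modeling, 14(4), 535-569.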
