Naive bayes classifiers are bad estimators - scikit-learn

I read this in the Scikit-learn guide (https://scikit-learn.org/stable/modules/naive_bayes.html):
[...] although naive Bayes is known as a decent classifier,
it is known to be a bad estimator, so the probability outputs
from predict_proba are not to be taken too seriously
Bad and decent at the same time?

OK, let's add some context with a quote from the 2nd edition of Francois Chollet's book on deep learning, which I believe will shed light on your question. The point is that Naive Bayes is one of the earliest so-called machine-learning classifiers; it assumes that the features in the input data are all independent (the "naive" assumption) and still delivers a decent result, but compared to more recent neural networks, its performance on classification tasks is often much lower.
Naive Bayes is a type of machine-learning classifier based on applying Bayes' theorem while assuming that the features in the input data are all independent (a strong, or “naive” assumption, which is where the name comes from). This form of data analysis predates computers and was applied by hand decades before its first computer implementation (most likely dating back to the 1950s). Bayes' theorem and the foundations of statistics date back to the eighteenth century, and these are all you need to start using Naive Bayes classifiers.
A closely related model is the logistic regression (logreg for short),
which is sometimes considered to be the “hello world” of modern
machine learning. Don’t be misled by its name—logreg is a
classification algorithm rather than a regression algorithm. Much like
Naive Bayes, logreg predates computing by a long time, yet it’s still
useful to this day, thanks to its simple and versatile nature. It’s
often the first thing a data scientist will try on a dataset to get a
feel for the classification task at hand.
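To make the "decent classifier, bad estimator" point concrete, here is a minimal sketch (on synthetic data, with illustrative settings) comparing the raw predict_proba output of a GaussianNB with a probability-calibrated wrapper of the same model: accuracy barely changes, while the quality of the probabilities (measured by the Brier score) often improves.

    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.datasets import make_classification
    from sklearn.metrics import accuracy_score, brier_score_loss
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Plain Naive Bayes: the class ranking is often fine, the probabilities less so.
    nb = GaussianNB().fit(X_train, y_train)
    proba_nb = nb.predict_proba(X_test)[:, 1]

    # The same model wrapped in a calibrator: predict_proba can change a lot,
    # while accuracy usually moves very little.
    calibrated = CalibratedClassifierCV(GaussianNB(), method="isotonic", cv=5)
    calibrated.fit(X_train, y_train)
    proba_cal = calibrated.predict_proba(X_test)[:, 1]

    print("accuracy, raw NB:       ", accuracy_score(y_test, nb.predict(X_test)))
    print("accuracy, calibrated NB:", accuracy_score(y_test, calibrated.predict(X_test)))
    print("Brier score, raw NB:       ", brier_score_loss(y_test, proba_nb))
    print("Brier score, calibrated NB:", brier_score_loss(y_test, proba_cal))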

Related

Multilabel classification with class imbalance in Pytorch

I have a multilabel classification problem, which I am trying to solve with CNNs in PyTorch. I have 80,000 training examples and 7,900 classes; every example can belong to multiple classes at the same time, and the mean number of classes per example is 130.
The problem is that my dataset is very imbalanced. For some classes, I have only ~900 examples, which is around 1%. For "overrepresented" classes I have ~12,000 examples (15%). When I train the model I use BCEWithLogitsLoss from PyTorch with a positive-weights parameter. I calculate the weights the same way as described in the documentation: the number of negative examples divided by the number of positives.
As a result, my model overestimates almost every class... For both minor and major classes I get almost twice as many predictions as true labels. And my AUPRC is just 0.18, even though that's much better than no weighting at all, since in that case the model predicts everything as zero.
So my question is, how do I improve the performance? Is there anything else I can do? I tried different batch sampling techniques (to oversample the minority classes), but they don't seem to work.
I would suggest one of these two strategies:
Focal Loss
A very interesting approach for dealing with unbalanced training data through tweaking of the loss function was introduced in
Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He and Piotr Dollar Focal Loss for Dense Object Detection (ICCV 2017).
They propose to modify the binary cross-entropy loss in a way that decreases the loss and the gradient for easily classified examples while "focusing the effort" on examples where the model makes gross errors.
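For illustration, here is a rough PyTorch sketch of a binary focal loss written directly from the formula in the paper; the gamma and alpha defaults and the tensor shapes are illustrative assumptions, not code from the question.

    import torch
    import torch.nn.functional as F

    def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
        # Per-element BCE, then down-weight easy examples by (1 - p_t) ** gamma
        # so the gradient concentrates on examples the model gets badly wrong.
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p = torch.sigmoid(logits)
        p_t = p * targets + (1 - p) * (1 - targets)          # probability of the true label
        alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
        return (alpha_t * (1 - p_t) ** gamma * bce).mean()

    # Usage: same shapes as BCEWithLogitsLoss, e.g. (batch_size, n_classes)
    logits = torch.randn(8, 7900, requires_grad=True)
    targets = torch.randint(0, 2, (8, 7900)).float()
    loss = binary_focal_loss(logits, targets)
    loss.backward()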
Hard Negative Mining
Another popular approach is to do "hard negative mining"; that is, propagate gradients only for part of the training examples - the "hard" ones.
see, e.g.:
Abhinav Shrivastava, Abhinav Gupta and Ross Girshick Training Region-based Object Detectors with Online Hard Example Mining (CVPR 2016)
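As a rough sketch (not the OHEM implementation from the paper), hard negative mining on top of an element-wise BCE loss can be as simple as keeping only the largest per-element losses in each batch; the keep_ratio below is an illustrative choice.

    import torch
    import torch.nn.functional as F

    def ohem_bce_loss(logits, targets, keep_ratio=0.25):
        # Compute the per-element loss, keep only the hardest fraction,
        # and average over those, so easy examples contribute no gradient.
        per_element = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        flat = per_element.flatten()
        k = max(1, int(keep_ratio * flat.numel()))
        hard_losses, _ = torch.topk(flat, k)
        return hard_losses.mean()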
@Shai has provided two strategies developed in the deep learning era. I would like to offer some additional, traditional machine-learning options: over-sampling and under-sampling.
The main idea of both is to produce a more balanced dataset by resampling before you start training. Note that you will probably face some problems, such as losing data diversity (under-sampling) or overfitting the training data (over-sampling), but it might be a good starting point.
See the wiki link for more information.
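For a single-label case (the multilabel setting needs more care), a minimal resampling sketch with scikit-learn's resample utility could look like this; the data here is synthetic and only for illustration.

    import numpy as np
    from sklearn.utils import resample

    # Toy single-label data: class 1 is the minority class.
    X = np.random.randn(1000, 5)
    y = np.array([0] * 950 + [1] * 50)
    X_maj, y_maj = X[y == 0], y[y == 0]
    X_min, y_min = X[y == 1], y[y == 1]

    # Over-sampling: draw minority examples with replacement until balanced.
    X_min_up, y_min_up = resample(X_min, y_min, replace=True,
                                  n_samples=len(y_maj), random_state=0)
    X_over = np.vstack([X_maj, X_min_up])
    y_over = np.concatenate([y_maj, y_min_up])

    # Under-sampling: drop majority examples down to the minority size.
    X_maj_dn, y_maj_dn = resample(X_maj, y_maj, replace=False,
                                  n_samples=len(y_min), random_state=0)
    X_under = np.vstack([X_maj_dn, X_min])
    y_under = np.concatenate([y_maj_dn, y_min])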

When and why would you want to use a Probability Density Function?

A wannabe data scientist here, trying to understand: as a data scientist, when and why would you use a probability density function (PDF)?
Sharing a scenario and a few pointers to learn about this and other such functions, like the CDF and PMF, would be really helpful. Do you know of any book that covers these functions from a practical standpoint?
Why?
Probability theory is very important for modern data-science and machine-learning applications, because (in a lot of cases) it allows one to "open up the black box", shed some light on a model's inner workings, and with luck find the ingredients needed to turn a poor model into a great one. Without it, a data scientist is very much restricted in what they are able to do.
A PDF is a fundamental building block of probability theory, absolutely necessary for any sort of probabilistic reasoning, along with expectation, variance, priors and posteriors, and so on.
Some examples here on StackOverflow, from my own experience, where a practical issue boils down to understanding data distribution:
Which loss-function is better than MSE in temperature prediction?
Binary Image Classification with CNN - best practices for choosing “negative” dataset?
How do neural networks account for outliers?
When?
The questions above provide some examples; here are a few more if you're interested, and the list is by no means complete:
What is the 'fundamental' idea of machine learning for estimating parameters?
Role of Bias in Neural Networks
How to find probability distribution and parameters for real data? (Python 3)
I personally try to find probabilistic interpretation whenever possible (choice of loss function, parameters, regularization, architecture, etc), because this way I can move from blind guessing to making reasonable decisions.
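As a concrete example of that kind of reasoning, a small SciPy sketch might fit a distribution to observed data and then use its PDF and CDF to answer questions about it; the normal assumption and the numbers here are purely illustrative.

    import numpy as np
    from scipy import stats

    # Pretend these are observed measurements (synthetic here).
    rng = np.random.default_rng(0)
    data = rng.normal(loc=5.0, scale=2.0, size=1000)

    # Fit a normal distribution (maximum-likelihood estimates of mu and sigma).
    mu, sigma = stats.norm.fit(data)

    # PDF: relative likelihood of a value near x = 7.
    pdf_at_7 = stats.norm.pdf(7.0, loc=mu, scale=sigma)

    # CDF: probability of observing a value <= 7.
    cdf_at_7 = stats.norm.cdf(7.0, loc=mu, scale=sigma)

    print(f"fitted mu={mu:.2f}, sigma={sigma:.2f}")
    print(f"P(X <= 7) ~ {cdf_at_7:.3f}, density at 7 ~ {pdf_at_7:.3f}")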
Reading
This is very opinion-based, but at least a few books are really worth mentioning: The Elements of Statistical Learning, An Introduction to Statistical Learning: with Applications in R, or Pattern Recognition and Machine Learning (if your primary interest is machine learning). That's just a start; there are dozens of books on more specific topics, like computer vision, natural language processing, and reinforcement learning.

Adaboost vs. Gaussian Naive Bayes

I'm new to Adaboost, but have been reading about it, and it seemed like the perfect solution for a problem I've been working on.
I have a data set where the classes are 'UP' and 'DOWN'. The Gaussian Naive Bayes classifier classifies both classes with ~55% accuracy (weakly accurate). I thought that using AdaBoost with Gaussian Naive Bayes as my base estimator would let me reach greater accuracy; however, when I do this, my accuracy drops to around 45-50%.
Why is this? I find it very unusual that AdaBoost would underperform its base estimator. Additionally, any tips for getting AdaBoost to work better would be appreciated. I have tried it with many different estimators, with similarly poor results.
The reason could be the diversity dilemma of ensemble methods, which particularly concerns the AdaBoost algorithm.
Diversity describes how differently the component classifiers of the AdaBoost ensemble err; we prefer their errors to be uncorrelated. Otherwise, combining the component classifiers performs no better, and often worse, than a single one. On the other hand, if we use weak base classifiers that still achieve reasonable accuracy but make different mistakes, the final ensemble will achieve higher accuracy.
This is well explained in this paper.
From which we can retrieve this explanation:
[Figure: scatter plot illustrating the accuracy and diversity dilemma of AdaBoost]
This diagram is a scatter-plot where each point corresponds to a component classifier. The x coordinate value of a point is the diversity value of the corresponding component classifier while the y coordinate value is the accuracy value of the corresponding component classifier. From this figure, it can be observed that, if the component classifiers are too accurate, it is difficult to find very diverse ones, and combining these accurate but non-diverse classifiers often leads to very limited improvement (Windeatt, 2005). On the other hand, if the component classifiers are too inaccurate, although we can find diverse ones, the combination result may be worse than that of combining both more accurate and diverse component classifiers. This is because if the combination result is dominated by too many inaccurate component classifiers, it will be wrong most of the time, leading to a poor classification result.
To directly answer your question: it may be that using Gaussian Naive Bayes as the base estimator creates classifiers that do not disagree with each other enough (do not diversify the error), hence AdaBoost generalizes even worse than a single Gaussian Naive Bayes.
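To reproduce the comparison from the question, a minimal scikit-learn sketch could look like the following; the synthetic dataset and the number of estimators are illustrative, and note that the base-estimator keyword is `estimator` in recent scikit-learn versions (older ones used `base_estimator`).

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    # Synthetic stand-in for the 'UP'/'DOWN' data set.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    gnb = GaussianNB()
    boosted = AdaBoostClassifier(estimator=GaussianNB(), n_estimators=50, random_state=0)

    print("GaussianNB alone:     ", cross_val_score(gnb, X, y, cv=5).mean())
    print("AdaBoost + GaussianNB:", cross_val_score(boosted, X, y, cv=5).mean())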

text classification using svm

I read this article: "A hybrid classification method of k nearest neighbor, Bayesian methods and genetic algorithm".
It proposes to use a genetic algorithm in order to improve text classification.
I want to replace the genetic algorithm with an SVM, but I don't know if it works or not;
I mean, I don't know if the new idea and its results will be better than this article's.
I read somewhere that GA is better than SVM, but I don't know if that's right or not.
SVM and genetic algorithms are in fact completely different methods. SVM is basically a classification tool, while genetic algorithms are a meta-optimisation heuristic. Unfortunately I do not have access to the cited paper, but I can hardly imagine how putting an SVM in the place of the GA could work.
I read somewhere that GA is better than SVM, but I don't know if that's right or not.
No, it is not true. These methods are not comparable, as they are completely different tools.
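If the goal is simply text classification with an SVM (as opposed to swapping it into the paper's hybrid pipeline), a minimal scikit-learn sketch is shown below; the toy corpus and labels are invented for illustration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Toy corpus; replace with your own documents and labels.
    texts = ["cheap pills buy now", "meeting agenda for monday",
             "win a free prize today", "quarterly report attached"]
    labels = ["spam", "ham", "spam", "ham"]

    # TF-IDF features fed into a linear SVM, a common baseline for text.
    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(texts, labels)

    print(model.predict(["free pills, buy today"]))  # expected: ['spam']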

Which classifier to choose in NLTK

I want to classify text messages into several categories like, "relation building", "coordination", "information sharing", "knowledge sharing" & "conflict resolution". I am using NLTK library to process these data. I would like to know which classifier, in nltk, is better for this particular multi-class classification problem.
I am planning to use Naive Bayes Classification, is it advisable?
Naive Bayes is the simplest and easiest classifier to understand, and for that reason it's nice to use. Decision trees with a beam search to find the best classification are not significantly harder to understand and are usually a bit better. MaxEnt and SVM tend to be more complex, and SVM requires some tuning to get right.
Most important is the choice of features + the amount/quality of data you provide!
With your problem, I would focus first on ensuring you have a good training/testing dataset and on choosing good features. Since you are asking this question, you probably haven't had much experience with machine learning for NLP, so I'd say start off easy with Naive Bayes, as it doesn't need complex features: you can just tokenize and count word occurrences (see the sketch after this answer).
EDIT:
The question How do you find the subject of a sentence? and my answer are also worth looking at.
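As a concrete starting point for the "tokenize and count word occurrences" suggestion above, a minimal NLTK Naive Bayes sketch might look like this; the toy messages, labels, and the bag_of_words helper are made up for illustration.

    import nltk

    # Toy labelled messages; replace with your own annotated data.
    train_data = [
        ("let's schedule a call to align on the plan", "coordination"),
        ("here is the report you asked for", "information sharing"),
        ("great working with you, thanks for the help", "relation building"),
        ("I disagree, let's discuss and find a compromise", "conflict resolution"),
    ]

    def bag_of_words(text):
        # Simple word-presence features: {"word": True, ...}
        return {token.lower(): True for token in text.split()}

    train_set = [(bag_of_words(text), label) for text, label in train_data]
    classifier = nltk.NaiveBayesClassifier.train(train_set)

    print(classifier.classify(bag_of_words("can we schedule a meeting")))
    classifier.show_most_informative_features(5)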
Yes, training a Naive Bayes classifier for each category and then labeling each message with the class whose classifier gives the highest score is a standard first approach to problems like this. There are more sophisticated single-class classifier algorithms which you could substitute for Naive Bayes if you find performance inadequate, such as a Support Vector Machine (which I believe is available in NLTK via a Weka plug-in, but I'm not positive). Unless you can think of anything specific in this problem domain that would make Naive Bayes especially unsuitable, it's often the go-to "first try" for a lot of projects.
The other NLTK classifier I would consider trying is MaxEnt, as I believe it natively handles multi-class classification. (Though the multiple-binary-classifier approach is very standard and common as well.) In any case, the most important thing is to collect a very large corpus of properly tagged text messages.
If by "Text Messages" you are referring to actual cell phone text messages these tend to be very short and the language is very informal and varied, I think feature selection may end up being a larger factor in determining accuracy than classifier choice for you. For example, using a Stemmer or Lemmatizer that understands common abbreviations and idioms used, tagging part of speech or chunking , entity extraction, extracting probably relationships between terms may provide more bang than using more complex classifiers.
This paper talks about classifying Facebook status messages based on sentiment, which has some of the same issues, and may provide some insights. The link is to a Google cache because I'm having problems with the original site:
http://docs.google.com/viewer?a=v&q=cache:_AeBYp6i1ooJ:nlp.stanford.edu/courses/cs224n/2010/reports/ssoriajr-kanej.pdf+maxent+classifier+multiple+classes&hl=en&gl=us&pid=bl&srcid=ADGEESi-eZHTZCQPo7AlcnaFdUws9nSN1P6X0BVmHjtlpKYGQnj7dtyHmXLSONa9Q9ziAQjliJnR8yD1Z-0WIpOjcmYbWO2zcB6z4RzkIhYI_Dfzx2WqU4jy2Le4wrEQv0yZp_QZyHQN&sig=AHIEtbQN4J_XciVhVI60oyrPb4164u681w&pli=1
