Text classification using SVM

I read this article: "A hybrid classification method of k nearest neighbor, Bayesian methods
and genetic algorithm".
It proposes using a genetic algorithm to improve text classification.
I want to replace the genetic algorithm with an SVM, but I don't know whether that would work,
i.e. whether the new idea would give better results than the article's.
I read somewhere that GA is better than SVM, but I don't know if that's right or not.

SVM and genetic algorithms are in fact completely different methods. SVM is basically a classification tool, while genetic algorithms are a meta-optimization heuristic. Unfortunately I do not have access to the cited paper, but I can hardly imagine how putting an SVM in the place of the GA could work.
I read somewhere that GA is better than SVM, but I don't know if that's right or not.
No, it is not true. These methods are not comparable, as they are completely different tools.
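For what it's worth, the usual way the two show up together is that a GA wraps a classifier rather than replacing it. A minimal sketch of that division of labor, using scikit-learn's SVC on synthetic data (everything here is illustrative, not from the paper):

```python
# Illustrative only (not from the cited paper): the GA searches, the SVM classifies.
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def fitness(log_c):
    # The GA's objective: cross-validated accuracy of an SVM with C = 10**log_c.
    return cross_val_score(SVC(C=10 ** log_c), X, y, cv=3).mean()

population = [random.uniform(-3, 3) for _ in range(10)]  # candidate log10(C) values
for generation in range(5):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:4]                                       # selection
    children = [random.choice(parents) + random.gauss(0, 0.3)  # mutation
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"best C = {10 ** best:.3f}, CV accuracy = {fitness(best):.3f}")
```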

Related

Naive Bayes classifiers are bad estimators

I read this in the Scikit-learn guide (https://scikit-learn.org/stable/modules/naive_bayes.html):
[...] although naive Bayes is known as a decent classifier,
it is known to be a bad estimator, so the probability outputs
from predict_proba are not to be taken too seriously.
Bad and decent at the same time?
OK, let's add some context with a quote from the 2nd edition of Francois Chollet's book on deep learning, which I believe will shed light on your question. The point is that Naive Bayes is one of the first so-called machine learning classifiers. It assumes that the features in the input data are all independent (the "naive" assumption) and still gives a decent result; but, compared to more recent neural networks, its performance on classification tasks is often much lower.
Naive Bayes is a type of machine-learning classifier based on applying Bayes' theorem while assuming that the features in the input data are all independent (a strong, or “naive” assumption, which is where the name comes from). This form of data analysis predates computers and was applied by hand decades before its first computer implementation (most likely dating back to the 1950s). Bayes' theorem and the foundations of statistics date back to the eighteenth century, and these are all you need to start using Naive Bayes classifiers.
A closely related model is the logistic regression (logreg for short),
which is sometimes considered to be the “hello world” of modern
machine learning. Don’t be misled by its name—logreg is a
classification algorithm rather than a regression algorithm. Much like
Naive Bayes, logreg predates computing by a long time, yet it’s still
useful to this day, thanks to its simple and versatile nature. It’s
often the first thing a data scientist will try on a dataset to get a
feel for the classification task at hand.
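Back to the scikit-learn quote: to see "decent classifier, bad estimator" concretely, here is a small sketch of my own on synthetic data. The accuracy is respectable, but the raw predict_proba outputs are pushed toward 0/1; wrapping the model in scikit-learn's CalibratedClassifierCV gives probabilities you can take more seriously:

```python
# Synthetic demo: decent argmax accuracy, overconfident probabilities.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

nb = GaussianNB().fit(X_tr, y_tr)
print("accuracy:", nb.score(X_te, y_te))                             # usually fine
print("mean max proba:", nb.predict_proba(X_te).max(axis=1).mean())  # near 1.0

# Calibration remaps the scores into more trustworthy probabilities.
cal = CalibratedClassifierCV(GaussianNB(), method="isotonic").fit(X_tr, y_tr)
print("calibrated mean max proba:",
      cal.predict_proba(X_te).max(axis=1).mean())
```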

Multilingual free-text-item classification for improving a recommender system

To improve the recommender system for Buyer Material Groups (BMGs), our company wants to train a model using customers' historical spend data. The model should be trained on historical short text descriptions to predict the appropriate BMG. The dataset has more than 500,000 rows, and the text descriptions are multilingual (up to 40 characters).
Question 1: Can I use supervised learning given that the descriptions are in multiple languages? If yes, are classic approaches like multinomial Naive Bayes or SVM suitable?
Question 2: If the first model does not perform well and I want to improve it by using unsupervised multilingual embeddings to build a classifier, how can I train this classifier on the numerical labels later?
If you have other ideas or approaches, please feel free to share :). (It is a simple text classification problem.)
Can I use supervised learning given that the descriptions are in multiple languages?
Yes, this is not a problem, except that it makes your data more sparse. If you really only have 40 characters per item (is that not 40 words?), you may not have enough data. The main challenge for supervised learning will be whether you have labels for the data.
If yes, are classic approaches like multinomial Naive Bayes or SVM suitable?
They will work as well as they always have, though these days building a vector representation is probably a better choice.
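For the classic route, a minimal sketch (with made-up descriptions and BMG labels): character n-gram TF-IDF plus a linear SVM. Character n-grams avoid language-specific tokenization, which helps with mixed languages:

```python
# A sketch of the classic baseline: character n-gram TF-IDF + linear SVM.
# The texts and labels below are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["stainless steel bolt M8", "Schraube Edelstahl M8", "vis acier M8",
         "copper wire 2mm", "Kupferdraht 2mm", "fil de cuivre 2mm"]
labels = ["fasteners", "fasteners", "fasteners", "wire", "wire", "wire"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # char n-grams
    LinearSVC(),
)
model.fit(texts, labels)
print(model.predict(["boulon acier inoxydable M8"]))
```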
If the first model does not perform well and I want to improve it by using unsupervised multilingual embeddings to build a classifier, how can I train this classifier on the numerical labels later?
Assuming the numerical labels are labels on the original data, you can add them as tokens (e.g. LABEL001), and the model can learn representations of them if you want to make an unsupervised recommender.
Honestly, these days I wouldn't start with Naive Bayes or classical models; I'd go straight to word vectors as a first test for clustering. Using fastText or word2vec is pretty straightforward, as sketched below. The main problem is that if you really only have 40 characters per item, that just might not be enough data to cluster usefully.
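As a sketch of that word-vector route (gensim assumed available; the data and label tokens are made up), train word2vec on the tokenized descriptions with the numeric label appended as a LABEL001-style token:

```python
# Toy data; LABEL001/LABEL002 stand in for numeric BMG labels turned into tokens.
from gensim.models import Word2Vec

rows = [("stainless steel bolt m8", "LABEL001"),
        ("schraube edelstahl m8", "LABEL001"),
        ("copper wire 2mm", "LABEL002"),
        ("kupferdraht 2mm", "LABEL002")]
sentences = [desc.split() + [label] for desc, label in rows]

model = Word2Vec(sentences, vector_size=50, min_count=1, epochs=200, seed=0)
# Nearest neighbors of a word now include label tokens, i.e. a crude recommender.
print(model.wv.most_similar("wire", topn=3))
```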

Feature vectors to build a classifier to detect subjectivity

I am trying to build a classifier to detect subjectivity. I have text files tagged as subjective and objective. I am a little lost with the concept of feature creation from this data. I have found a lexicon of subjective and objective words. One thing I can do is create a feature for words present in the respective dictionary, perhaps the count of words present in the subjective and objective dictionaries. After that I intend to use Naive Bayes or SVM to develop the model.
My questions are as follows:
Is my approach correct?
Can I create more features? If possible, suggest some or point me to a paper or link.
Can I do a test like chi-squared to identify effective words from the dictionary?
You are basically on the right track. I would try applying a classifier with the features you already have (sketched below) and see how well it works before doing anything else.
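For concreteness, the lexicon-count features you describe might look like this (the word lists are placeholders for a real subjectivity lexicon):

```python
# A minimal sketch of the dictionary-count features described in the question.
subjective_words = {"awful", "wonderful", "hate", "love"}     # placeholder lexicon
objective_words = {"year", "percent", "reported", "company"}  # placeholder lexicon

def features(text):
    tokens = text.lower().split()
    return [sum(t in subjective_words for t in tokens),
            sum(t in objective_words for t in tokens)]

print(features("I love this wonderful film"))    # [2, 0]
print(features("The company reported results"))  # [0, 2]
```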
Actually, the best way to improve your work is to google for subjectivity classification papers and read them (there are quite a number of them). For example, this one lists typical features for the task.
And yes, chi-squared can be used to construct dictionaries for text classification (other commonly used methods are TF-IDF, pointwise mutual information, and LDA).
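A small sketch of that chi-squared route with scikit-learn (toy sentences standing in for your tagged files): score each word against the subjective/objective labels and keep the top k:

```python
# Chi-squared feature selection over bag-of-words counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

texts = ["what a wonderful touching film", "I hate this awful movie",
         "the film was released in 1994", "the company reported earnings"]
labels = ["subjective", "subjective", "objective", "objective"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
selector = SelectKBest(chi2, k=4).fit(X, labels)
print(vectorizer.get_feature_names_out()[selector.get_support()])
```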
Also, new neural-network-based methods for text classification, such as paragraph vectors and dynamic convolutional neural networks with k-max pooling, have recently demonstrated state-of-the-art results on sentiment analysis, so they should probably be good for subjectivity classification as well.

Large scale naïve Bayes classifier with top-k output

I need a library for large-scale naïve Bayes, with millions of training examples and 100k+ binary features. It must be an online version (updatable after training). I also need top-k output, i.e. multiple classifications for a single instance. Accuracy is not very important.
The purpose is an automatic text categorization application.
Any suggestions for a good library are much appreciated.
EDIT: The library should preferably be in Java.
If a learning algorithm other than naïve Bayes is also acceptable, check out Vowpal Wabbit (C++), which has a reputation as one of the most scalable text classification tools (online stochastic gradient descent + LDA). I'm not sure whether it does top-k output.
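You asked for Java, so this is only a sketch of the pattern itself (online updates plus top-k output) using scikit-learn's out-of-core MultinomialNB; feature hashing keeps 100k+ binary features manageable:

```python
# Online naive Bayes with top-k output; class names and texts are illustrative.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

classes = np.array(["sports", "politics", "tech"])
# alternate_sign=False keeps counts non-negative, as MultinomialNB requires.
vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
nb = MultinomialNB()

# partial_fit makes the model updatable after the initial training pass.
batch = ["the match ended in a draw", "parliament passed the bill"]
nb.partial_fit(vectorizer.transform(batch), ["sports", "politics"], classes=classes)
nb.partial_fit(vectorizer.transform(["new gpu released"]), ["tech"])  # later update

# Top-k output: sort the class probabilities and take the k best.
k = 2
proba = nb.predict_proba(vectorizer.transform(["the bill on sports funding"]))[0]
print(classes[np.argsort(proba)[::-1][:k]])
```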

Which classifier to choose in NLTK

I want to classify text messages into several categories, such as "relation building", "coordination", "information sharing", "knowledge sharing" and "conflict resolution". I am using the NLTK library to process the data. I would like to know which classifier in NLTK is better for this particular multi-class classification problem.
I am planning to use the Naive Bayes classifier. Is that advisable?
Naive Bayes is the simplest and easiest classifier to understand, and for that reason it's nice to use. Decision trees with a beam search to find the best classification are not significantly harder to understand and are usually a bit better. MaxEnt and SVM tend to be more complex, and SVM requires some tuning to get right.
Most important is the choice of features plus the amount and quality of data you provide!
For your problem, I would focus first on ensuring you have a good training/testing dataset and on choosing good features. Since you are asking this question, you probably haven't had much experience with machine learning for NLP, so I'd say start off easy with Naive Bayes: it doesn't need complex features; you can just tokenize and count word occurrences, as in the sketch below.
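A minimal sketch of that easy starting point, with toy messages standing in for a real labeled corpus (NLTK's NaiveBayesClassifier handles the multi-class case directly):

```python
# Bag-of-words features fed to NLTK's Naive Bayes classifier; toy data only.
from nltk import NaiveBayesClassifier

def features(message):
    return {f"contains({word})": True for word in message.lower().split()}

train = [
    ("thanks for the update, will sync with the team", "coordination"),
    ("sharing the report from last week", "information sharing"),
    ("great catching up with you yesterday", "relation building"),
    ("let's resolve this disagreement over the schedule", "conflict resolution"),
]
classifier = NaiveBayesClassifier.train([(features(m), label) for m, label in train])
print(classifier.classify(features("syncing with the team on the plan")))
classifier.show_most_informative_features(5)
```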
EDIT:
The question How do you find the subject of a sentence? and my answer are also worth looking at.
Yes, training a Naive Bayes classifier for each category and then labeling each message with the class whose classifier gives the highest score is a standard first approach to problems like this. There are more sophisticated single-class classifier algorithms you could substitute for Naive Bayes if you find performance inadequate, such as a support vector machine (which I believe is available in NLTK via a Weka plugin, though I'm not positive). Unless you can think of anything specific in this problem domain that would make Naive Bayes especially unsuitable, it's often the go-to first try for a lot of projects.
The other NLTK classifier I would consider trying is MaxEnt, as I believe it natively handles multi-class classification. (Though the multiple-binary-classifier approach is very standard and common as well.) In any case, the most important thing is to collect a very large corpus of properly tagged text messages.
If by "text messages" you mean actual cell phone text messages, these tend to be very short, and the language is very informal and varied. In that case I think feature selection may end up being a larger factor in determining accuracy than classifier choice. For example, using a stemmer or lemmatizer that understands common abbreviations and idioms, tagging parts of speech, chunking, entity extraction, or extracting probable relationships between terms may provide more bang than using more complex classifiers.
This paper talks about classifying Facebook status messages based on sentiment, which has some of the same issues and may provide some insights. The link is to a Google cache because I'm having problems with the original site:
http://docs.google.com/viewer?a=v&q=cache:_AeBYp6i1ooJ:nlp.stanford.edu/courses/cs224n/2010/reports/ssoriajr-kanej.pdf+maxent+classifier+multiple+classes&hl=en&gl=us&pid=bl&srcid=ADGEESi-eZHTZCQPo7AlcnaFdUws9nSN1P6X0BVmHjtlpKYGQnj7dtyHmXLSONa9Q9ziAQjliJnR8yD1Z-0WIpOjcmYbWO2zcB6z4RzkIhYI_Dfzx2WqU4jy2Le4wrEQv0yZp_QZyHQN&sig=AHIEtbQN4J_XciVhVI60oyrPb4164u681w&pli=1
