Train and predict using SVM theory

I have implemented character recognition using a library,
but I still don't get how SVM theory works in the training and prediction process; I only understand that an SVM finds a hyperplane.
E.g., suppose I have a training image as follows
(image from Google: the digit zero)
How do we find a hyperplane for training data like the above?
How is the prediction process done?
How does the SVM classify data based on those hyperplanes?
Thank you very much if you can help me

You can use OpenCV with Python; OpenCV has an SVM implementation that you can call directly.
An SVM is a machine learning model for data classification, and we can use it to classify images. The steps are:
you must have a training dataset (a set of images whose labels are known);
extract features (color, shape, HOG, SURF, SIFT, etc.) from those images and store them, along with the associated labels;
then train the SVM on this data;
now you can use the SVM to predict the labels of unknown images. A rough sketch of these steps is shown below.
This link will help you.
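As a rough illustration of those steps, here is a minimal, hedged sketch using OpenCV's cv2.ml SVM with HOG features on 20x20 grayscale character images; the image size, HOG parameters, C value, and the synthetic stand-in data are assumptions for the example, not requirements:

```python
import cv2
import numpy as np

# Synthetic stand-in data: in practice, load your labelled 20x20
# grayscale character images here instead.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (20, 20), dtype=np.uint8) for _ in range(100)]
labels = rng.integers(0, 10, 100)

# HOG descriptor sized for 20x20 images (parameters are an assumption).
hog = cv2.HOGDescriptor((20, 20), (10, 10), (5, 5), (5, 5), 9)

def extract_features(imgs):
    # One flattened HOG vector per image, stacked row-wise.
    return np.array([hog.compute(img).ravel() for img in imgs], dtype=np.float32)

train_data = extract_features(images)
responses = np.array(labels, dtype=np.int32)

# Create and train the SVM (linear kernel for simplicity).
svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.setC(2.5)
svm.train(train_data, cv2.ml.ROW_SAMPLE, responses)

# Predict the label of a new image by extracting features the same way.
test_img = rng.integers(0, 256, (20, 20), dtype=np.uint8)
_, result = svm.predict(extract_features([test_img]))
print("predicted label:", int(result[0][0]))
```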

First, this is not a linearly separable problem, so you have to use a kernel SVM, which projects the data into a higher-dimensional space where it becomes linearly separable. You can use the sklearn library to achieve this; a minimal sketch follows.
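A minimal, hedged sketch of a kernel SVM in scikit-learn, using the built-in digits dataset as a stand-in for your own extracted features; the dataset choice and hyperparameters are assumptions for illustration:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Small 8x8 digit images, flattened into feature vectors.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The RBF kernel implicitly maps the data into a higher-dimensional space.
clf = SVC(kernel="rbf", gamma="scale", C=10.0)
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
print("predicted label for the first test image:", clf.predict(X_test[:1])[0])
```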

Related

What type of optimization to perform on my multi-label text classification LSTM model with Keras?

I'm using a Windows 10 machine. Libraries: Keras with TensorFlow 2.0. Embeddings: GloVe (100 dimensions).
I am trying to implement an LSTM architecture for multi-label text classification.
I am using different types of fine-tuning to achieve better results, but with no luck so far.
The main problem, I believe, is the difference in class distributions in my dataset, but after a lot of trial and error I couldn't implement a stratified k-fold split in Keras.
I am also experimenting with dropout layers, batch sizes, number of layers, learning rates, clip values, and validation splits, but I get at best a minimal boost and sometimes worse performance.
For metrics, I mainly use ROC and F1.
I also followed the suggestion of a Stack Overflow member who said to delete some of my examples so I can balance my dataset, but if I do that I will have a very low number of examples.
What would you suggest?
If someone can provide code based on my implementation for a
stratified k-fold split I would be grateful, because I have checked all the
online resources but can't implement it.
Any tips or suggestions will be really helpful.
Metrics Plots
Dataset form + Embeddings form + train-test-split form
Dataset's labels distribution
My LSTM implementation

Why does my LSTM for Multi-Label Text Classification underperform?

I'm using a Windows 10 machine.
Libraries: Keras with TensorFlow 2.0
Embeddings: GloVe (100 dimensions)
I am trying to implement an LSTM architecture for multi-label text classification.
My problem is that no matter how much fine-tuning I do, the results are really bad.
I am not experienced in practical DL implementations, which is why I am asking for your advice.
Below I will state basic information about my dataset and my model so far.
I can't embed images since I am a new member, so they appear as links.
Dataset form + Embeddings form + train-test-split form
Dataset's labels distribution
My Implementation of LSTM
Model's Summary
Model's Accuracy plot
Model's Loss plot
As you can see, my dataset is really small (~6,000 examples), and maybe that's one reason why I cannot achieve better results. Still, I chose it because it's unbiased.
I'd like to know if there is any fundamental mistake in my code regarding the dimensions, shapes, activation functions, or loss functions for multi-label text classification.
What would you recommend to achieve better results with my model? Any general advice regarding optimization methods, number of nodes, layers, dropout, etc. is also very welcome.
The model's best validation accuracy so far is ~0.54, and even when I try to raise it, it seems stuck there.
There are many ways to get this wrong, but the most common mistake is letting your model overfit the training data.
I suspect that 0.54 accuracy means that your model selects the most common label (offensive) for almost all cases.
So, consider one of these simple solutions:
create balanced training data, e.g. 400 samples from each class,
or sample balanced batches for training (exactly the same number of each label in every training batch).
In addition to tracking accuracy and loss, look at precision, recall and F1, or better yet plot the area under the precision-recall curve; different classes may need different activation thresholds (if you are using a sigmoid on the last layer, one class might perform better with a 0.2 threshold and another with 0.7). A sketch of per-class threshold selection is shown below.
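For the per-class thresholds mentioned above, here is a minimal, hedged sketch using scikit-learn's precision_recall_curve on held-out sigmoid scores; the synthetic y_val and val_scores arrays are placeholders for your real validation labels and model outputs:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Placeholder validation data: y_val is a (n_samples, n_labels) binary matrix,
# val_scores the matching sigmoid outputs from model.predict(X_val).
rng = np.random.default_rng(0)
y_val = rng.integers(0, 2, (200, 5))
val_scores = np.clip(y_val * 0.6 + rng.random((200, 5)) * 0.5, 0, 1)

best_thresholds = []
for k in range(y_val.shape[1]):
    precision, recall, thresholds = precision_recall_curve(y_val[:, k], val_scores[:, k])
    f1 = 2 * precision * recall / (precision + recall + 1e-9)
    # precision/recall have one more entry than thresholds; drop the last point.
    best_thresholds.append(thresholds[np.argmax(f1[:-1])])

print("per-class thresholds:", np.round(best_thresholds, 2))
```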
First try a simple model: an embedding layer, one LSTM layer, then classify.
Also check how you tokenize the text: is the vocabulary size large enough?
Try dice loss; a rough sketch is given below.
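Since dice loss was suggested, here is a minimal, hedged sketch of a soft dice loss for multi-label sigmoid outputs in Keras; the smoothing constant and the commented compile call are assumptions:

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1e-6):
    # y_true: binary multi-label targets, y_pred: sigmoid outputs, shape (batch, n_labels).
    y_true = tf.cast(y_true, y_pred.dtype)
    intersection = tf.reduce_sum(y_true * y_pred, axis=-1)
    denom = tf.reduce_sum(y_true + y_pred, axis=-1)
    dice = (2.0 * intersection + smooth) / (denom + smooth)
    return 1.0 - dice  # Keras averages the per-sample values over the batch.

# Example: compile an existing multi-label model with it.
# model.compile(optimizer="adam", loss=dice_loss, metrics=["binary_accuracy"])
```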

Can HSV images be used for CNN training

I am currently working on a fingers-count deep learning problem. When you look at the dataset, the images in the training and validation sets are very basic and almost all the same. The network can achieve high training and validation accuracy, but when it comes to prediction on real-life images it performs very badly (because the model has been trained on very basic images).
To overcome this, I converted the training and validation images to HSV (Hue-Saturation-Value) and trained the model on the new HSV images. An example of one such image from the new training set is:
I then convert my real-life image to HSV as well and pass it to the model for prediction. But still, the model is not able to predict correctly. I assumed that since the training images and the prediction image are almost the same after applying HSV, the model should predict well. Is there something I am thinking about incorrectly here? Can HSV images actually be used for training a CNN?
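For reference, a minimal sketch of the HSV conversion step with OpenCV; the file names are placeholders:

```python
import cv2

# Placeholder path: replace with one of your own training images.
bgr = cv2.imread("finger_sample.png")          # OpenCV loads images as BGR
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)     # convert to Hue-Saturation-Value

# The same conversion must be applied to every image at prediction time,
# otherwise training and inference inputs will not match.
cv2.imwrite("finger_sample_hsv.png", hsv)
```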
It seems you have an overfitting issue: your model only memorizes the simple samples of the training set and cannot generalize to more complex and diverse data.
In the context of deep learning there are various methods to avoid overfitting, and I don't think you necessarily need to transform your input to HSV. First of all, you can apply data augmentation methods such as random crops or rotations to create varied versions of your data; a minimal sketch is shown below. If that does not help, you can use a smaller model or apply techniques such as dropout or regularization.
Here is a good tutorial from TensorFlow.
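A minimal, hedged sketch of that augmentation idea using Keras preprocessing layers in front of a small CNN; the layer choices, image size, and class count are assumptions, and in older TensorFlow 2.x releases these layers live under tf.keras.layers.experimental.preprocessing:

```python
import tensorflow as tf

# Augmentation block: active during training, identity at inference time.
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

# Prepend it to a small example CNN for, say, 128x128 RGB finger images.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),
    data_augmentation,
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(6, activation="softmax"),  # e.g. 6 classes: 0-5 fingers
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```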

Anomaly detection in Text Classification

I have built a text classifier using OneClassSVM.
I have a training set that corresponds to only one label, i.e. "Yes", and I don't have data for the other ("No") label. My task is to build a classifier that labels a new unseen sentence (test data) as 1 if it is very similar to the training data, and otherwise as -1, i.e. an anomaly.
I used Word2Vec to build word embeddings for my training data. Then I am using word-vector averaging with a OneClassSVM to build an anomaly-detection classifier.
This classifier currently gives an accuracy of about 50-55%. I have to improve it further to build a robust classifier.
Any suggestions for this problem would be helpful.
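For reference, the pipeline described above (average Word2Vec vectors fed into a OneClassSVM) can be sketched roughly as follows; the toy sentences, the gensim 4.x API usage, and the nu value are assumptions for illustration:

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import OneClassSVM

# Toy "Yes"-only training sentences; replace with your real tokenized data.
sentences = [
    ["please", "reset", "my", "password"],
    ["i", "cannot", "log", "in", "to", "my", "account"],
    ["reset", "password", "for", "my", "account"],
] * 20

# Train word embeddings on the positive-class corpus (gensim 4.x API).
w2v = Word2Vec(sentences, vector_size=100, window=5, min_count=1, epochs=20)

def sentence_vector(tokens):
    # Average the vectors of the tokens that are in the vocabulary.
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.wv.vector_size)

X_train = np.array([sentence_vector(s) for s in sentences])

# One-class SVM: +1 for "similar to training data", -1 for anomaly.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_train)

test = ["how", "do", "i", "reset", "my", "password"]
print(clf.predict(sentence_vector(test).reshape(1, -1)))  # [1] or [-1]
```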
I'd suggest a very different approach, since you have no training examples for the negative class at all.
You could train a language model on your training data. At inference time, you score the input with the language model and classify it according to some threshold on the perplexity of the input sentence under the LM.
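A minimal, hedged sketch of the scoring step, using a pretrained GPT-2 from the transformers library as a stand-in language model (ideally you would first fine-tune an LM on the "Yes" sentences; the model choice and the threshold value are assumptions):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(sentence):
    # Cross-entropy of the sentence under the LM, exponentiated.
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

THRESHOLD = 100.0  # assumed; tune it on held-out "Yes" sentences

def classify(sentence):
    # 1 = looks like the training distribution, -1 = anomaly.
    return 1 if perplexity(sentence) < THRESHOLD else -1

print(classify("please reset my password"))
```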

Is there a better approach for personality detection from twitter data?

I have tried different approaches, such as MultinomialNB, SVM, MLPClassifier, a CNN, and an LSTM network, to train on a dataset that consists of tweets and labels (the Big Five classes: openness, conscientiousness, extraversion, agreeableness, neuroticism). But the accuracy is around 60% even after using word2vec, NRC features and MRC features. Is there something I can do to improve the accuracy?
Would you please add a few more details about the dataset you are using?
For example, I would add:
dataset size (number of samples)
class distribution (balanced or not)
whether you do any preprocessing
Without the above information I would just be guessing, but if I were you I would try the following:
clean the tweets of noise, e.g. usernames, garbage symbols, etc.
If the dataset is small:
try a random search over classical models (Naive Bayes, SVM, logistic regression) with various vectorization strategies, e.g. bag of words or tf-idf, and do a hyper-parameter search (a minimal sketch is shown below)
try applying transfer learning from a model trained on tweets, for example for sentiment analysis
If the dataset is large enough:
try a neural network approach
Embedding (GloVe, word2vec, fastText) + RNN (LSTM, GRU) + attention
try training your own embeddings
or use pretrained ones such as those
Embedding + CNN + RNN
Bag of words + FNN
If the classes are not balanced:
use a weighted loss
or try to balance them
Try stacking multiple models (ensemble).
Hope it helps!
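A minimal, hedged sketch of the random search mentioned above, using a TfidfVectorizer with a linear SVM in scikit-learn; the toy tweets, the parameter grid, and the single-label setup are assumptions (for all five traits at once you could wrap the classifier in OneVsRestClassifier):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Toy data: replace with your tweets and one of the Big Five labels.
tweets = ["I love meeting new people", "I prefer staying home alone"] * 50
labels = [1, 0] * 50

pipe = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LinearSVC()),
])

# Random search over vectorizer and classifier hyper-parameters.
param_distributions = {
    "tfidf__ngram_range": [(1, 1), (1, 2)],
    "tfidf__min_df": [1, 2, 5],
    "clf__C": [0.01, 0.1, 1.0, 10.0],
}
search = RandomizedSearchCV(pipe, param_distributions, n_iter=10, cv=3, random_state=0)
search.fit(tweets, labels)

print("best params:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```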
Is the main premise of your project to do personality detection? If not, I would recommend using the Google Sentiment API to calculate sentiment of Twitter data.
