I'm working on a face detection algorithm that extracts Haar-like features and then classifies faces versus non-faces using an SVM. I'll be implementing the whole algorithm, including the SVM, in C because I have to run the code on a Stretch SCP board.
I have a lot of doubts about which kernel is most suitable for the face-detection problem: is it linear, RBF, or something else?
I have already extracted the Haar features and tried to classify them with libsvm and liblinear, but didn't get satisfactory results.
Please suggest which kernel to use and what parameters to consider.
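Not part of the question itself, but for context: a common way to decide between a linear and an RBF kernel is a cross-validated grid search over C (and gamma for RBF) before porting the chosen model to C. Below is a minimal sketch in Python with scikit-learn; the feature/label file names are hypothetical and the parameter grid is only a starting-point assumption.

```python
# Minimal sketch: compare linear vs. RBF kernels on pre-extracted Haar features.
# Assumes the Haar feature vectors and labels are already stored in NumPy files
# (file names are hypothetical); y uses 0 = non-face, 1 = face.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.load("haar_features.npy")
y = np.load("labels.npy")

param_grid = [
    {"svc__kernel": ["linear"], "svc__C": [0.01, 0.1, 1, 10]},
    {"svc__kernel": ["rbf"], "svc__C": [0.1, 1, 10, 100],
     "svc__gamma": ["scale", 1e-3, 1e-2, 1e-1]},
]

# Scaling the features usually matters a lot for RBF kernels.
pipe = make_pipeline(StandardScaler(), SVC())
search = GridSearchCV(pipe, param_grid, cv=5, scoring="f1", n_jobs=-1)
search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Best cross-validated F1:", search.best_score_)
```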
I am working with a very large dataset (1.5 million rows) and thought about using an SVR.
Since there is so much data, I thought about switching to a linear SVM and using the Nyström method to build an approximate kernel feature map from uniformly sampled data.
However, I would rather construct the kernel via kernel k-means, but I haven't found an official implementation yet.
This link provides an unofficial implementation, but it results in a very large model since it is serialized.
https://tslearn.readthedocs.io/en/stable/gen_modules/clustering/tslearn.clustering.KernelKMeans.html
Maybe someone has a clue where to look for this, or how to implement it in code for an arbitrary dataset?
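This doesn't address the kernel k-means part, but as a point of reference, here is a minimal sketch of the Nyström + linear SVR route mentioned above, using scikit-learn. The sample data, kernel parameters, and number of landmark components are placeholder assumptions.

```python
# Minimal sketch: Nyström kernel approximation followed by a linear SVR,
# so the full 1.5M x 1.5M kernel matrix never has to be materialized.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVR

# Placeholder data standing in for the real 1.5M-row dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))
y = X[:, 0] * 2.0 + rng.normal(size=10_000)

model = make_pipeline(
    StandardScaler(),
    # n_components controls how many landmark points the approximation uses.
    Nystroem(kernel="rbf", gamma=0.1, n_components=500, random_state=0),
    LinearSVR(C=1.0, max_iter=10_000),
)
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))
```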
I'm trying to make an OpenCV program in Python 3 to detect the faces of my friends. I've seen that one can train a cascade classifier with OpenCV to detect a certain type of object. However, it isn't clear whether that could create a classifier refined enough to pick only my friends' faces out of a large sample, or whether this is something I could achieve without training my own cascade classifier. Can anyone help?
Cascade classifiers are usually built for face detection. You are trying to solve a different problem: face recognition.
Deep learning is a common approach nowadays, but other models do exist. http://www.face-rec.org/algorithms/ does a very good job of presenting the main algorithms.
This presents an interesting implementation in OpenCV.
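Not from the original answer, but as a concrete starting point, here is a minimal sketch of face recognition with OpenCV's LBPH recognizer. It requires the opencv-contrib-python package, and the directory layout and image file names are assumptions.

```python
# Minimal sketch: recognize known people with OpenCV's LBPH face recognizer.
# Assumes training images are grayscale face crops stored as
# faces/<person>/<image>.png (hypothetical layout).
import glob
import os
import cv2
import numpy as np

images, labels = [], []
for label, person_dir in enumerate(sorted(glob.glob("faces/*"))):
    for path in glob.glob(os.path.join(person_dir, "*.png")):
        images.append(cv2.imread(path, cv2.IMREAD_GRAYSCALE))
        labels.append(label)

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(images, np.array(labels))

# Predict the identity of a new face crop; lower confidence means a closer match.
test = cv2.imread("test_face.png", cv2.IMREAD_GRAYSCALE)
predicted_label, confidence = recognizer.predict(test)
print(predicted_label, confidence)
```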
The primary objective (my assigned work) is to do image segmentation of underwater images using a convolutional neural network. The camera shots taken of the underwater structure have poor image quality due to severe noise and bad light exposure. In order to achieve higher classification accuracy, I want to do automatic image enhancement of the images (see the attached file). So I want to know which CNN architecture would be best for both tasks. Please kindly suggest any possible solutions to achieve the objective.
What do you need to segment? It would be nice to see some labels for the segmentation.
You may not need to enhance the images: if your whole dataset has the same amount of noise, the network will generalize properly.
Regarding CNN architectures, it depends on the constraints you have on processing power and accuracy. If that is not a constraint, go with something like Mask R-CNN; its repository is a good starting point.
Be mindful that it's a fairly complex architecture, so inference times might be a bit high (but real time is doable depending on your GPU).
Other, simpler architectures are FCNs (Fully Convolutional Networks), which are basically your usual classification CNN, but with the fully connected layers replaced by convolutional layers.
The advantage of these FCNs is that they are really easy to implement and modify, since you can go from simple architectures (FCN-AlexNet) to more complex and more accurate ones (FCN-VGG, FCN-ResNet).
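To make the FCN idea concrete, here is a minimal sketch in PyTorch (not part of the original answer); the layer sizes and the number of classes are arbitrary assumptions.

```python
# Minimal sketch of an FCN-style segmentation network: a small convolutional
# backbone, a 1x1 "classifier" convolution instead of fully connected layers,
# and upsampling back to the input resolution for per-pixel class scores.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 1/4 resolution
        )
        # The 1x1 convolution plays the role of the fully connected classifier.
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[2:]
        x = self.backbone(x)
        x = self.classifier(x)
        # Upsample back to the input size so every pixel gets a class score.
        return F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)

model = TinyFCN(num_classes=2)
scores = model(torch.randn(1, 3, 128, 128))   # shape: (1, num_classes, 128, 128)
print(scores.shape)
```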
Also, you don't mention a framework. There are many to choose from, and it depends on your familiarity with languages; most of them can be used from Python:
TensorFlow
Pytorch
MXNet
But if you are a beginner, try starting with a GUI-based one. NVIDIA DIGITS is a great starting point and really easy to configure; it's based on Caffe, so it's fairly fast when deploying and can easily be integrated with accelerators like TensorRT.
I am studying SVMs and I am going to use Python's sklearn.svm.SVC.
As far as I know, the SVM training problem can be expressed as a QP (quadratic programming) problem.
So I was wondering which QP solver is used to solve the SVM QP problem in sklearn's SVM.
I think it may be SMO or a coordinate descent algorithm.
Please let me know exactly which algorithm is used in sklearn's SVM.
Off-the-shelf QP solvers have been used in the past, but for many years now dedicated code has been used (much faster and more robust). These solvers are not (general) QP solvers anymore and are built just for this one use case.
sklearn's SVC is a wrapper for libsvm (proof).
As the link says:
Since version 2.8, it implements an SMO-type algorithm proposed in this paper:
R.-E. Fan, P.-H. Chen, and C.-J. Lin. Working set selection using second order information for training SVM. Journal of Machine Learning Research 6, 1889-1918, 2005.
(link to paper)
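Not part of the original answer, but a small usage sketch contrasting the two scikit-learn entry points: SVC goes through libsvm (the SMO-type solver above), while LinearSVC goes through liblinear (a coordinate-descent solver). The toy dataset is only for illustration.

```python
# Small sketch: SVC is backed by libsvm (SMO-type solver), while LinearSVC
# is backed by liblinear (coordinate descent for the linear case).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

svc = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)                    # libsvm path
linear_svc = LinearSVC(C=1.0, max_iter=10_000).fit(X_train, y_train)    # liblinear path

print("SVC (libsvm) accuracy:        ", svc.score(X_test, y_test))
print("LinearSVC (liblinear) accuracy:", linear_svc.score(X_test, y_test))
```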
I have been looking for a maximum entropy classification implementation which can deal with an output size of 500 classes and 1000 features. My training data has around 30,000,000 lines.
I have tried MegaM, the 64-bit R maxent package, and the maxent tool from the University of Edinburgh, but as expected, none of them can handle the size of the data. However, the size of the dataset doesn't seem out of this world for NLP tasks of this nature.
Are there any techniques I should be employing? Or any suggestions for a toolkit I could use?
I am trying to run this on a 64-bit Windows machine with 8 GB of RAM, using Cygwin where required.
Vowpal Wabbit is currently regarded as the fastest large-scale learner. LibLinear is an alternative, but I'm not sure if it can handle matrices of 3e10 elements.
Note that the term "MaxEnt" is used almost exclusively by NLP people; machine learning folks call it logistic regression or logit, so if you search for that you might find many more tools than when you search for MaxEnt.
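Along the "logistic regression" route mentioned above, one option (not from the original answer) is scikit-learn's SGDClassifier trained out of core with partial_fit, so the 30M-row file never has to fit in 8 GB of RAM. A minimal sketch follows; the file format parsing and hyperparameters are placeholder assumptions, and the loss name is "log_loss" in recent scikit-learn versions (formerly "log").

```python
# Minimal sketch: out-of-core multinomial logistic regression ("MaxEnt")
# using SGDClassifier.partial_fit on minibatches read from disk.
import numpy as np
from sklearn.linear_model import SGDClassifier

N_CLASSES, BATCH = 500, 10_000
classes = np.arange(N_CLASSES)
clf = SGDClassifier(loss="log_loss", alpha=1e-6)

def batches(path):
    """Yield (X, y) minibatches from a whitespace-separated file: label f1 ... f1000."""
    rows, labels = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            labels.append(int(parts[0]))
            rows.append([float(v) for v in parts[1:]])
            if len(rows) == BATCH:
                yield np.array(rows), np.array(labels)
                rows, labels = [], []
    if rows:
        yield np.array(rows), np.array(labels)

# "train.txt" is a hypothetical file name for the 30M-line training set.
for X_batch, y_batch in batches("train.txt"):
    clf.partial_fit(X_batch, y_batch, classes=classes)
```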