RapidMiner has an SVM module based on LibSVM. How can I find out which version of LibSVM it uses?
I tested the SVM classifier on the same data set using both the LibSVM module in RapidMiner and LibSVM itself, and the resulting prediction scores are different even though they use the same parameter settings.
Just check the file CHANGES.txt in RapidMiner's root directory (e.g. RM_HOME/CHANGES.txt):
...
Completely revised tree, cluster model, and similarity visualization
Latest release of LibSVM integrated (2.84)
Latest release of xstream integrated (1.2.2)
...
I am trying to optimize my model using 'MvNCCompile', but it doesn't accept my frozen TF2 graph:
'Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.'.
Can I somehow convert my TF2 (keras) model to TF1 graph format so it can be used? Or is there another way to get the TF2 Keras model to be accepted by the Intel optimisation tool?
Hm. MvNCCompile is a deprecated tool for working with the NCS2. I suggest you try the latest version of OpenVINO: https://docs.openvinotoolkit.org/
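For reference, a minimal sketch of exporting a TF2 Keras model as a SavedModel, which the OpenVINO Model Optimizer can consume directly (so no TF1 frozen graph is needed). The model and paths below are placeholders, and the exact Model Optimizer command may differ between OpenVINO releases:

import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)  # stand-in for your own Keras model
model.save("my_saved_model")  # writes a TF2 SavedModel directory

# Then, from a shell with OpenVINO installed (check the docs linked above for your version):
#   mo --saved_model_dir my_saved_model --output_dir ir_model
# The resulting IR (.xml/.bin) can be deployed to the NCS2 via the MYRIAD plugin.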
I trained some data science models with scikit-learn v0.19.1. The models are stored in pickle files. After upgrading to the latest version (v0.23.1), I get the following error when I try to load them:
File "../../Utils/WebsiteContentSelector.py", line 100, in build_page_selector
page_selector = pickle.load(pkl_file)
AttributeError: Can't get attribute 'DeprecationDict' on <module 'sklearn.utils.deprecation' from '/usr/local/lib/python3.6/dist-packages/sklearn/utils/deprecation.py'>
Is there a way to upgrade without retraining all my models (which is very expensive)?
You used a new version of sklearn to load a model which was trained by an old version of sklearn.
So, the options are:
Retrain the model with the current version of sklearn, if you have the training script and data
Or fall back to the lower sklearn version reported in the warning message
Depending on the kind of sklearn model used: if it is a simple regression model, what you probably need is to extract the actual weights and bias (or intercept) values.
You can check these values in your model:
model.classes_
model.coef_
model.intercept_
They are NumPy arrays and can be pickled easily. You also need the same parameters that were passed to the model constructor, for example:
tol
max_iter
and so on. With this, in the upgraded version, you can create the same model with the same parameters and load the weights and intercept into it.
In this way, no retraining is needed and you can use the upgraded sklearn.
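A minimal sketch of this, assuming the old model is a LogisticRegression and that its constructor parameters still exist in the new version (adapt the attribute names to your own estimator):

import pickle
from sklearn.linear_model import LogisticRegression

# --- run with the OLD scikit-learn installed ---
with open("old_model.pkl", "rb") as f:  # hypothetical file name
    old_model = pickle.load(f)
weights = {
    "classes": old_model.classes_,
    "coef": old_model.coef_,
    "intercept": old_model.intercept_,
    "params": old_model.get_params(),  # tol, max_iter, C, ...
}
with open("weights.pkl", "wb") as f:
    pickle.dump(weights, f)

# --- run with the NEW scikit-learn installed ---
with open("weights.pkl", "rb") as f:
    weights = pickle.load(f)
new_model = LogisticRegression(**weights["params"])  # drop or rename any parameters that changed between versions
new_model.classes_ = weights["classes"]
new_model.coef_ = weights["coef"]
new_model.intercept_ = weights["intercept"]
# new_model.predict(X) now uses the transplanted weights, with no retraining.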
When lib versions are not backward compatible, you can do the following:
Downgrade sklearn back to the original version
Load each model, extract and store its coefficients (which are model-specific - check documentation)
Upgrade sklearn, load the coefficients, init models with them, and save the models (see the sketch below)
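A sketch of that round-trip for a LinearSVC (assumed as an example), using a plain NumPy .npz file as the version-independent intermediate:

import numpy as np
from sklearn.svm import LinearSVC

# Steps 1-2: run with the OLD scikit-learn installed
# old_model = pickle.load(open("old_model.pkl", "rb"))
# np.savez("svc_coefs.npz", coef=old_model.coef_,
#          intercept=old_model.intercept_, classes=old_model.classes_)

# Step 3: run after upgrading scikit-learn
data = np.load("svc_coefs.npz")
model = LinearSVC()  # use the same constructor arguments as the original model
model.coef_ = data["coef"]
model.intercept_ = data["intercept"]
model.classes_ = data["classes"]
# model can now be pickled again with the new scikit-learn.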
I want to extract features using Caffe and train an SVM on those features. I have gone through this link: http://caffe.berkeleyvision.org/gathered/examples/feature_extraction.html. This link shows how to extract features using CaffeNet, but I want to use the LeNet architecture here. I am unable to adapt this command line for LeNet:
./build/tools/extract_features.bin models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel examples/_temp/imagenet_val.prototxt fc7 examples/_temp/features 10 leveldb
Also, after extracting the features, how do I train an SVM on them? I want to use Python for this. For example, if I get features from this code:
features = net.blobs['pool2'].data.copy()
Then, how can I train an SVM on these features with my own class labels?
You have two questions here:
Extracting features using LeNet
Training an SVM
Extracting features using LeNet
To extract the features from LeNet using the extract_features.bin script, you need the model file (.caffemodel) and the model definition for testing (.prototxt).
The signature of extract_features.bin is:
Usage: extract_features pretrained_net_param feature_extraction_proto_file extract_feature_blob_name1[,name2,...] save_feature_dataset_name1[,name2,...] num_mini_batches db_type [CPU/GPU] [DEVICE_ID=0]
So if you take, for example, this train/val prototxt file (https://github.com/BVLC/caffe/blob/master/models/bvlc_alexnet/train_val.prototxt), you can change it to the LeNet architecture and point it at your LMDB / LevelDB. That should get you most of the way there. Once you've done that, if you get stuck, you can update your question or post a comment here so we can help.
Training SVM on top of features
I highly recommend using Python's scikit-learn for training an SVM from the features. It is super easy to get started, including reading in features saved from Caffe's format.
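A minimal sketch, with random arrays standing in as placeholders for the features you would collect from a Caffe blob (e.g. flattening net.blobs['pool2'].data per image with feat.ravel()) and for the class labels you define yourself:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# X: (n_samples, n_features) matrix of flattened feature vectors, y: your own class labels.
X = np.random.rand(200, 512).astype(np.float32)
y = np.random.randint(0, 3, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="linear", C=1.0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))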
Very late reply, but it should help.
Not 100% what you want, but I have used the VGG-16 net to extract face features using Caffe and performed an accuracy test on a small subset of the LFW dataset. Exactly what you need is in the code. The code creates classes for training and testing and pushes them into an SVM for classification.
https://github.com/wajihullahbaig/VGGFaceMatching
I have trained an SVM (SVC) using scikit-learn on half a terabyte of data. The model is working fine and I need to port it to C, but I don't want to retrain the SVM from scratch because it takes far too long. Is there a way to easily export the model generated by scikit-learn and import it into LibSVM? Internally, scikit-learn uses LibSVM, so in theory it should be possible, but I haven't been able to find anything in the documentation. Any suggestions?
Is there a way to easily export the model generated by scikit-learn and import it into LibSVM?
No. The scikit-learn version of LIBSVM has been hacked up severely to fit it into the Python environment and the model is stored as NumPy/SciPy data structures.
Your best shot is to study the SVM decision function and reimplement it in C. The support vectors can be obtained from the SVC object as NumPy arrays, which are easily translated to C arrays.
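As a reference for the C port, here is a sketch (toy data, binary SVC with an RBF kernel assumed) of reproducing the decision function directly from the fitted attributes; the same arithmetic translates line by line into C:

import numpy as np
from sklearn.svm import SVC

# Toy binary problem standing in for the real training data.
rng = np.random.RandomState(0)
X = rng.rand(100, 4)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)

sv = clf.support_vectors_   # (n_sv, n_features) -> plain C arrays
alpha = clf.dual_coef_[0]   # signed dual coefficients, one per support vector
b = clf.intercept_[0]
gamma = clf.gamma           # numeric here; resolve 'scale'/'auto' to a number first

def decision(x):
    # f(x) = sum_i alpha_i * exp(-gamma * ||x - sv_i||^2) + b
    k = np.exp(-gamma * np.sum((sv - x) ** 2, axis=1))
    return float(np.dot(alpha, k) + b)

assert np.isclose(decision(X[0]), clf.decision_function(X[:1])[0])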
I have a training dataset (text) for a particular category (say cancer). I want to train an SVM classifier for this class in Weka. But when I try to do this by creating a folder 'cancer', putting all the training files into that folder, and running the code, I get the following error:
weka.classifiers.functions.SMO: Cannot handle unary class!
What I want is this: if the classifier finds a document related to 'cancer', it should report the class name correctly, and when I feed it a non-cancer document, it should say something like 'unknown'.
What should I do to get this behavior?
The SMO algorithm in Weka only does binary classification between two classes. Sequential Minimal Optimization is a specific algorithm for solving an SVM, and Weka's is a basic implementation of this algorithm. If you have some examples that are cancer and some that are not, then that would be binary; perhaps you haven't labeled them correctly.
However, if you are using training data which is all examples of cancer and you want it to tell you whether a future example fits the pattern or not, then you are attempting to do one-class SVM, aka outlier detection.
LibSVM in Weka can handle one-class SVM. Unlike the Weka SMO implementation, LibSVM is a standalone program that has been interfaced into Weka and incorporates many different variants of SVM. This post on the Wekalist explains how to use LibSVM for this in Weka.