SVM with Dynamic Time Warping kernel - svm

I need to use a Support Vector Machine (SVM) classifier with a Dynamic Time Warping (DTW) kernel for an audio processing task. All the tools I know (Weka, LibSVM, scikit-learn) offer SVMs with standard kernels only (linear, polynomial, RBF). Where can I find an SVM tool/library with a DTW kernel?
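One workaround if no dedicated tool turns up: scikit-learn's SVC accepts a precomputed Gram matrix (kernel='precomputed'), so you can build a DTW-based kernel yourself. A minimal sketch, assuming tslearn is installed for the DTW distances; note that exp(-gamma * DTW) is not guaranteed to be positive semi-definite, so treat it as a heuristic:
import numpy as np
from sklearn.svm import SVC
from tslearn.metrics import cdist_dtw  # pairwise DTW distances

rng = np.random.default_rng(0)
X_train = rng.standard_normal((20, 50))  # 20 series of length 50
y_train = rng.integers(0, 2, 20)
X_test = rng.standard_normal((5, 50))

def dtw_kernel(A, B, gamma=0.1):
    # Heuristic similarity from DTW distances (not guaranteed PSD).
    return np.exp(-gamma * cdist_dtw(A, B))

clf = SVC(kernel="precomputed")
clf.fit(dtw_kernel(X_train, X_train), y_train)
# At prediction time the kernel is between test and *training* series.
pred = clf.predict(dtw_kernel(X_test, X_train))
tslearn also ships a TimeSeriesSVC with the Global Alignment Kernel (kernel='gak'), a positive semi-definite DTW-style alternative.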


Is there any way to speed up the predicting process for tensorflow lattice?

I built my own model with Keras premade models in TensorFlow Lattice using Python 3.7 and saved the trained model. However, when I use the trained model for prediction, each data point takes on the order of milliseconds, which seems very slow. Is there any way to speed up the prediction process for TFL?
There are multiple ways to improve speed, but they may involve a tradeoff with prediction accuracy. I think the three most promising options are:
Reduce the number of features.
Reduce the number of lattices per feature.
Use an ensemble of lattice models, where each lattice model only sees a subset of the features, and average the predictions of the different models (as described here; see the sketch after this list).
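For the ensemble option, a minimal sketch of how this might look with TFL's premade CalibratedLatticeEnsemble; the feature names, keypoints, and sizes here are made up for illustration, so check the TFL premade-models tutorial for the exact API of your version:
import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

# Hypothetical 4-feature setup with inputs in [0, 1].
feature_configs = [
    tfl.configs.FeatureConfig(
        name=f"x{i}",
        pwl_calibration_input_keypoints=np.linspace(0.0, 1.0, 5),
    )
    for i in range(4)
]

# An ensemble of small lattices instead of one big lattice:
# each lattice only sees lattice_rank of the features.
model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
    feature_configs=feature_configs,
    num_lattices=3,
    lattice_rank=2,
)
model = tfl.premade.CalibratedLatticeEnsemble(model_config)
model.compile(loss=tf.keras.losses.MeanSquaredError(),
              optimizer=tf.keras.optimizers.Adam(0.01))

# Premade TFL models take one input array per feature.
X = np.random.rand(100, 4).astype(np.float32)
y = np.random.rand(100, 1).astype(np.float32)
model.fit(np.split(X, 4, axis=1), y, epochs=1, verbose=0)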
As the lattice model is a standard Keras model, I recommend trying OpenVINO. It optimizes your model by converting it to Intermediate Representation (IR), performing graph pruning, and fusing some operations into others while preserving accuracy. Then it uses vectorization at runtime. OpenVINO is optimized for Intel hardware, but it should work with any CPU.
It's rather straightforward to convert the Keras model to OpenVINO. The full tutorial on how to do it can be found here. Some snippets are below.
Install OpenVINO
The easiest way is to use pip. Alternatively, you can use this tool to find the best way in your case.
pip install openvino-dev[tensorflow2]
Save your model as SavedModel
OpenVINO is not able to convert an HDF5 model directly, so you have to save it as a SavedModel first.
import tensorflow as tf
from custom_layer import CustomLayer  # only needed if your model uses custom layers

# Load the HDF5 model and re-save it in the SavedModel format
# that the Model Optimizer can consume.
model = tf.keras.models.load_model('model.h5', custom_objects={'CustomLayer': CustomLayer})
tf.saved_model.save(model, 'model')
Use Model Optimizer to convert the SavedModel
The Model Optimizer is a command-line tool that comes with the OpenVINO Development Package. It converts the TensorFlow model to IR, the default format for OpenVINO. You can also try FP16 precision, which should give you better performance without a significant accuracy drop (change the data_type flag). Run in the command line:
mo --saved_model_dir "model" --data_type FP32 --output_dir "model_ir"
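For example, the FP16 variant of the same command (the separate output directory here is just an assumption, to keep both versions side by side):
mo --saved_model_dir "model" --data_type FP16 --output_dir "model_ir_fp16"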
Run the inference
The converted model can be loaded by the runtime and compiled for a specific device, e.g., CPU or GPU (a GPU integrated with your CPU, like Intel HD Graphics). If you don't know what the best choice for you is, use AUTO. If you care about latency, I suggest adding a performance hint (as shown below) so that the device fulfilling your requirement is used. If you care about throughput, change the value to THROUGHPUT or CUMULATIVE_THROUGHPUT.
from openvino.runtime import Core

# Load the network
ie = Core()
model_ir = ie.read_model(model="model_ir/model.xml")
compiled_model_ir = ie.compile_model(model=model_ir, device_name="AUTO",
                                     config={"PERFORMANCE_HINT": "LATENCY"})
# Get the output layer
output_layer_ir = compiled_model_ir.output(0)
# Run inference on the input (input_image is your preprocessed input array)
result = compiled_model_ir([input_image])[output_layer_ir]
Disclaimer: I work on OpenVINO.

Different kernels for different features - scikit-learn SVM

I am trying to build a classifier using sklearn.svm.SVC but I would like to train the kernel separately on different subsets of features to better represent the feature space (as described here).
I have read the User Guide page and I understand that I can create kernels that are sums of individual kernels, or feed the SVC a precomputed kernel (kernel = 'precomputed'), but I do not understand how to apply different kernels to different features. Is there a way to implement this in sklearn?
I have found a way to calculate kernels in sklearn (https://scikit-learn.org/stable/modules/gaussian_process.html#gp-kernels), and so I could calculate the kernel on each set separately. However, once I output the distance matrix, I am not sure how I would use it to train the SVM.
Do I have to create a custom kernel like:
if feature == condition1:
    use kernel X
else:
    use kernel Y
and add it to the SVM?
Or are there any other Python libraries I could use for this?
You are referring to the problem of Multiple Kernel Learning (MKL), where you can train different kernels for different groups of features. I have used this in a multi-modal case, where I wanted different kernels for image and text features.
I am not sure whether you can actually do it via scikit-learn.
There are some libraries available on GitHub, for example this one: https://github.com/IvanoLauriola/MKLpy
Hopefully, it can help you to achieve your goal.
Multiple kernel learning is possible in sklearn: just specify kernel='precomputed' and then pass the kernel matrix you want to use to fit.
Suppose your kernel matrix is the sum of two other kernel matrices. You can compute K1 and K2 however you like (e.g., each on its own subset of the features) and call SVC.fit(X=K1 + K2, y=y). Note that at prediction time you must pass the kernel between the test points and the training points; see the sketch below.
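A minimal sketch of that idea with made-up feature splits (columns 0-1 get an RBF kernel, columns 2-3 a linear kernel):
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

rng = np.random.default_rng(0)
X_train = rng.standard_normal((40, 4))
X_test = rng.standard_normal((10, 4))
y_train = rng.integers(0, 2, 40)

def combined_kernel(A, B):
    # RBF on the first two features, linear on the last two, summed.
    return (rbf_kernel(A[:, :2], B[:, :2], gamma=0.5)
            + linear_kernel(A[:, 2:], B[:, 2:]))

clf = SVC(kernel="precomputed")
clf.fit(combined_kernel(X_train, X_train), y_train)
# Prediction needs the kernel between test and training points.
pred = clf.predict(combined_kernel(X_test, X_train))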

Is there any built-in linear kernel in scikit-learn, like RBF or SE?

I want to multiply a linear kernel with RBF for a dataset, but I can find no way to implement the linear kernel. How can I implement the linear kernel here?
For example, kernels like RBF and ExpSineSquared can be used in scikit-learn like:
k2 = 2.0**2 * RBF(length_scale=100.0)
k_exp = ExpSineSquared(length_scale=1.0, periodicity=1.0,
                       periodicity_bounds="fixed")
The linear kernel for use in Gaussian processes in scikit-learn is provided as the DotProduct kernel. According to the Gaussian Processes book by Rasmussen and Williams (Chapter 4.2.2), setting sigma_0=0 gives the homogeneous linear kernel, whereas any other value gives the inhomogeneous linear kernel. There's an example of using the DotProduct kernel in scikit-learn here. In the case of Gaussian processes you don't have to pass the vector inputs to DotProduct yourself; you supply them when you call the .fit(X, Y) function.
In case you want the dot product of two vectors to check their similarity, you can use sklearn.metrics.pairwise.linear_kernel, which gives the dot product of the two vectors (with checks for sparse inputs). Finally, note that that linear kernel doesn't have any optional noise term, whereas the DotProduct kernel does.
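So, to multiply a linear kernel by an RBF for a GP, a minimal sketch (the length scale, sigma_0, and data below are placeholders):
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, RBF

# Product kernel: (in)homogeneous linear kernel times a squared exponential.
k = DotProduct(sigma_0=1.0) * RBF(length_scale=100.0)

X = np.random.rand(30, 1)
y = np.sin(3 * X).ravel()
gpr = GaussianProcessRegressor(kernel=k).fit(X, y)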

Fast implementation of convolution for CNN inference

I am looking for advice on the fastest possible implementation of a convolution algorithm for CNN inference, not training.
These convolutional neural networks (AlexNet, MobileNet, ResNet, etc.) will run on an embedded ARM device (A72, A53, A35) and possibly on an embedded GPU as well.
I understand there are various implementations out there, and NN frameworks use several of them, such as direct convolution, unrolling-based convolution (im2col), FFT-based, or Winograd, but my primary focus is executing CNNs under the performance constraints of an embedded device.
If anybody has experience and can recommend a convolution implementation for CPU (and a parallel implementation as well), or can point to a research paper or open-source implementation, I would very much appreciate it.
If this is still relevant: I found a small framework for inference of pre-trained neural networks on the CPU. It uses the Simd Library to accelerate its work. The library has very fast (single-threaded) implementations of convolution, pooling, ReLU, and many other network layers for CPU (x86 and ARM). Its CNN convolution includes Winograd's method.
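For reference, the im2col approach mentioned in the question reduces convolution to a single matrix multiply, which is why it maps well onto the optimized GEMM routines such libraries rely on. A minimal NumPy sketch (no padding, single image):
import numpy as np

def im2col_conv2d(x, w, stride=1):
    # x: (C, H, W) input; w: (K, C, R, S) filters; no padding.
    C, H, W = x.shape
    K, _, R, S = w.shape
    out_h = (H - R) // stride + 1
    out_w = (W - S) // stride + 1
    # Unroll every RxS patch into one column: (C*R*S, out_h*out_w).
    cols = np.empty((C * R * S, out_h * out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            patch = x[:, i*stride:i*stride+R, j*stride:j*stride+S]
            cols[:, i * out_w + j] = patch.reshape(-1)
    # One GEMM replaces the whole convolution.
    return (w.reshape(K, -1) @ cols).reshape(K, out_h, out_w)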

Which kernel is to be used for Face detection using SVM?

I'm working on a face detection algorithm which extracts Haar-like features and then classifies faces vs. non-faces using an SVM. I'll be implementing the whole algorithm, including the SVM, in C, because I have to run the code on a Stretch SCP board.
I have a lot of doubts about which kernel is most suitable for the face-detection problem; is it linear, RBF, or something else?
I have already extracted the Haar features and tried to classify them using LibSVM and LIBLINEAR, but didn't get appropriate results.
Please suggest which kernel to use and which parameters to consider.
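Since no single kernel is guaranteed to win on Haar features, one standard way to decide empirically is a small grid search over kernels and their parameters before porting the winner to C. The grid below is just an illustration, with make_classification standing in for your Haar-feature matrix and labels:
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Stand-in data for the Haar-feature matrix and face/non-face labels.
X, y = make_classification(n_samples=200, n_features=50, random_state=0)

param_grid = [
    {"kernel": ["linear"], "C": [0.1, 1, 10]},
    {"kernel": ["rbf"], "C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.001]},
]
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)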
