Different kernels for different features - scikit-learn SVM - python-3.x

I am trying to build a classifier using sklearn.svm.SVC but I would like to train the kernel separately on different subsets of features to better represent the feature space (as described here).
I have read the User Guide page and I understand that I can create kernels that are sums of individual kernels, or feed the SVC a precomputed kernel (kernel = 'precomputed'), but I do not understand how to apply different kernels to different features. Is there a way to implement this in sklearn?
I have found a way to calculate kernels in sklearn (https://scikit-learn.org/stable/modules/gaussian_process.html#gp-kernels), so I could calculate the kernel on each feature subset separately. However, once I have the resulting kernel matrix, I am not sure how I would use it to train the SVM.
Do I have to create a custom kernel like:
if feature == condition1:
    use kernel X
else:
    use kernel Y
and add it to the SVM?
Or are there any other Python libraries I could use for this?

You are referring to the problem of Multiple Kernel Learning (MKL), where you can train different kernels for different groups of features. I have used this in a multi-modal setting, where I wanted different kernels for image and text features.
I am not sure you can do this directly in scikit-learn.
There are some libraries available on GitHub, for example this one: https://github.com/IvanoLauriola/MKLpy
Hopefully, it can help you to achieve your goal.

Multiple kernel learning is possible in sklearn. Just specify kernel='precomputed' and then pass the kernel matrix you want to use to fit.
Suppose your kernel matrix is the sum of two other kernel matrices. You can compute K1 and K2 however you like and use SVC.fit(X=K1 + K2, y=y).
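A minimal sketch of this precomputed-kernel approach, with one kernel per feature subset summed into a single Gram matrix (the data and the column split are made up for illustration):

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 6))
y_train = rng.integers(0, 2, size=100)
X_test = rng.normal(size=(10, 6))

# Split the features into two groups and give each group its own kernel.
A_tr, B_tr = X_train[:, :3], X_train[:, 3:]
K_train = rbf_kernel(A_tr) + linear_kernel(B_tr)  # shape (n_train, n_train)

clf = SVC(kernel="precomputed")
clf.fit(K_train, y_train)

# At predict time, the kernel matrix is between test and training samples.
K_test = rbf_kernel(X_test[:, :3], A_tr) + linear_kernel(X_test[:, 3:], B_tr)
pred = clf.predict(K_test)
```

Note that you must keep the training data (or at least the per-subset columns) around, since prediction needs the kernel between new samples and the training samples.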

Related

How to use soft labels in computer vision with PyTorch?

I have an image dataset with soft labels (i.e. the images don't belong to a single class; instead, I have a probability distribution saying there's a 66% chance an image belongs to one class and a 33% chance it belongs to another).
I am struggling to figure out how to set up my PyTorch code so the model can represent and output this correctly. The probabilities are saved in a CSV file. I have looked at the PyTorch docs and other resources that mention the cross-entropy loss function, but I am still unclear on how to import the data and make use of soft labels.
What you are trying to solve is a multi-label classification task, i.e. instances can be classified with more than one label at a time. You cannot use torch.nn.CrossEntropyLoss here, since it only allows single-label targets. So you have two options:
Either use a soft version of the nn.CrossEntropyLoss function; this can be done by implementing the loss by hand to allow for soft targets. You can find such an implementation in Soft Cross Entropy in PyTorch.
Or treat the task as multiple "independent" binary classification tasks; in that case, you would use nn.BCEWithLogitsLoss (this loss applies a sigmoid internally).
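For the first option, a hand-rolled soft cross-entropy might be sketched like this (an illustration, not the implementation linked above; the example targets are made up):

```python
import torch
import torch.nn.functional as F

def soft_cross_entropy(logits, target_probs):
    # target_probs: per-class probabilities, each row summing to 1.
    log_probs = F.log_softmax(logits, dim=1)
    # Cross-entropy between the target distribution and the prediction,
    # averaged over the batch.
    return -(target_probs * log_probs).sum(dim=1).mean()

logits = torch.randn(4, 3)
targets = torch.tensor([[0.66, 0.33, 0.01],
                        [0.10, 0.80, 0.10],
                        [1.00, 0.00, 0.00],
                        [0.25, 0.50, 0.25]])
loss = soft_cross_entropy(logits, targets)
```

With one-hot targets this reduces to the ordinary cross-entropy.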
PyTorch CrossEntropyLoss supports soft labels natively now
Thanks to the PyTorch team, this has been addressed in current versions of torch.nn.CrossEntropyLoss.
You can directly pass probabilities for each class as the target (see the docs).
Here is the forum discussion that pushed this enhancement.
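A short sketch of the native behaviour (available since PyTorch 1.10: the target is a floating-point tensor of class probabilities with the same shape as the logits):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

logits = torch.randn(2, 3)  # batch of 2, 3 classes
# Probabilities per class instead of integer class indices.
soft_targets = torch.tensor([[0.66, 0.33, 0.01],
                             [0.20, 0.30, 0.50]])
loss = criterion(logits, soft_targets)
```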

Using Kernel K-Means in Scikit

I am working with a very large dataset (1.5 million rows) and thought about using an SVR. Since there is so much data, I thought about switching to a linear SVM and using the Nyström method to construct a kernel from uniformly sampled data.
However, I would rather construct the kernel via kernel k-means, but I have not found an official implementation yet.
This link provides an unofficial method, but it results in a very large model once serialized:
https://tslearn.readthedocs.io/en/stable/gen_modules/clustering/tslearn.clustering.KernelKMeans.html
Maybe someone has a clue where to look for this, or how to implement it in code for an arbitrary dataset?
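For reference, the Nyström-plus-linear-SVM route mentioned above can be sketched in scikit-learn like this (the kernel choice, n_components, and the random data are placeholders):

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = rng.normal(size=1000)

# Nystroem maps the data into an approximate kernel feature space built
# from uniformly sampled landmark points; LinearSVR then trains in that
# space, which scales much better than a kernelized SVR.
model = make_pipeline(
    Nystroem(kernel="rbf", n_components=100, random_state=0),
    LinearSVR(),
)
model.fit(X, y)
pred = model.predict(X)
```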

Random Forest Regressor using a custom objective/ loss function (Python/ Sklearn)

I want to build a random forest regressor to model count data (Poisson distribution). The default 'mse' loss function is not suited to this problem. Is there a way to define a custom loss function and pass it to the random forest regressor in Python (sklearn, etc.)?
Is there any implementation for fitting count data in any Python package?
In sklearn this is currently not supported. See the discussion in the corresponding issue here, or this one for another class, where the reasons are discussed in a bit more detail (mainly the large computational overhead of calling a Python function).
So it could be done, as discussed in the issues, by forking sklearn, implementing the cost function in Cython, and then adding it to the list of available 'criterion' options.
If the problem is that the counts c_i arise from different exposure times t_i, then indeed one cannot fit the counts, but one can still fit the rates r_i = c_i/t_i using MSE loss function, where one should, however, use weights proportional to the exposures, w_i = t_i.
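The rate-fitting workaround can be sketched like this (the counts and exposure times are simulated for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
exposures = rng.uniform(0.5, 2.0, size=200)   # t_i
counts = rng.poisson(lam=3.0 * exposures)     # c_i

# Fit the rates r_i = c_i / t_i with MSE loss, weighting each sample by
# its exposure, w_i = t_i, as described above.
rates = counts / exposures
rf = RandomForestRegressor(random_state=0)
rf.fit(X, rates, sample_weight=exposures)
pred_rate = rf.predict(X)
```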
For a true random forest Poisson regression, I've seen that R has the rpart library for building a single CART tree, which has a Poisson regression option. I wish this kind of algorithm had been ported to scikit-learn.
In R, writing a custom objective function is fairly simple.
The randomForestSRC package in R has provision for writing your own custom split rule. The custom split rule, however, has to be written in pure C.
All you have to do is write your own custom split rule, register it, then compile and install the package.
The custom split rule has to be defined in the file splitCustom.c in the randomForestSRC source code.
You can find more info here. The file in which you define the split rule is this.

How to control feature subsetting in random forest in scikit-learn?

I am trying to change the way the random forest algorithm subsets features at every node. As implemented in scikit-learn, the subset is chosen randomly. I want to define which subset is used at every new node, from several candidate subsets. Is there a direct way in scikit-learn to control this? If not, is there any way to modify the scikit-learn code? If so, which function in the source code do you think should be changed?
Short version: This is all you.
I assume by "subsetting features for every node" you are referring to the random selection of a subset of samples and possibly features used to train individual trees in the forest. If that's what you mean, then you aren't building a random forest; you want to make a nonrandom forest of particular trees.
One way to do that is to build each DecisionTreeClassifier individually using your carefully specified subset of features, then use VotingClassifier to combine the trees into a forest. (That feature is only available in 0.17/dev, so you may have to build your own, but a voting classifier estimator class is super simple to build.)
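The tree-plus-voting construction might look like this (the dataset and the particular feature subsets are arbitrary stand-ins):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

def pick(cols):
    # Select a fixed, hand-chosen subset of feature columns.
    return FunctionTransformer(lambda X, cols=cols: X[:, cols])

# A "nonrandom forest": each tree sees exactly the features you assign it.
forest = VotingClassifier(
    estimators=[
        ("tree_a", make_pipeline(pick([0, 1]), DecisionTreeClassifier(random_state=0))),
        ("tree_b", make_pipeline(pick([2, 3]), DecisionTreeClassifier(random_state=0))),
    ],
    voting="soft",  # average the trees' predicted probabilities
)
forest.fit(X, y)
acc = forest.score(X, y)
```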

Is it possible to compare the classification ability of two sets of features by ROC?

I am learning about SVMs and ROC. As I understand it, people usually use an ROC (receiver operating characteristic) curve to show the classification ability of an SVM (support vector machine). I am wondering if I can use the same concept to compare two subsets of features.
Assume I have two subsets of features, subset A and subset B, chosen from the same training data by two different feature extraction methods, A and B. If I use these two subsets to train the same SVM with the LIBSVM svmtrain() function and plot ROC curves for both, can I compare their classification ability by their AUC values? That is, if subset A has a higher AUC value than subset B, can I conclude that method A is better than method B? Does that make sense?
Thank you very much,
Yes, you are on the right track. However, you need to keep a few things in mind.
Often using feature sets A and B together, with appropriate scaling/normalization, gives better performance than either set individually, so you might also consider combining them.
When training SVMs on features A and B, optimize each separately, i.e. compare the best performance obtained with A against the best obtained with B. The two feature sets may reach their best performance with different kernels and parameter settings.
There are other metrics besides AUC, such as F1-score and Mean Average Precision (MAP), that can be computed once you have evaluated on the test data; depending on the application you have in mind, they might be more suitable.
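The comparison itself can be sketched like this (using scikit-learn instead of LIBSVM's svmtrain, but the idea is identical; the dataset and the two column ranges are stand-ins for the two extraction methods):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
subset_A, subset_B = list(range(0, 10)), list(range(10, 20))  # stand-ins

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def auc_for(cols):
    # Same SVM setup for both subsets; decision_function scores are
    # enough to compute ROC/AUC.
    clf = make_pipeline(StandardScaler(), SVC())
    clf.fit(X_tr[:, cols], y_tr)
    return roc_auc_score(y_te, clf.decision_function(X_te[:, cols]))

auc_A, auc_B = auc_for(subset_A), auc_for(subset_B)
```

As noted above, for a fair comparison each subset's kernel and parameters should be tuned separately before comparing the AUCs.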
