Inconsistency in sklearn predict function for 'ovr' multi-class problems - scikit-learn

I have found an inconsistency in the predict function of the SVM model for multiclass problems. I trained a model with sklearn's svm.SVC for a multiclass prediction problem (see plot below).
On some occasions, the predict function gives different results than taking the argmax of the decision function. One can see that the inconsistencies lie close to the decision boundary.
The inconsistency vanishes when I use OneVsRestClassifier directly. Does the predict function of svm.SVC apply some correction, or why does it differ from the argmax prediction?
Here is the code to reproduce the result:
import numpy as np
from sklearn import svm
from sklearn.multiclass import OneVsRestClassifier

def create_data(n_samples, noise):
    # 4 Gaussian blobs with different means and covariances
    sample_per_cls = int(n_samples / 4)
    # put the remainder of the samples into the last class
    sample_per_cls_rest = sample_per_cls + n_samples - 4 * sample_per_cls
    x1 = np.random.multivariate_normal([20, 18], np.array([[2, 3], [3, 7]]) * 4 * noise, sample_per_cls, 'warn')
    x2 = np.random.multivariate_normal([13, 27], np.array([[10, 3], [3, 2]]) * 4 * noise, sample_per_cls, 'warn')
    x3 = np.random.multivariate_normal([9, 13], np.array([[6, 1], [1, 5]]) * 4 * noise, sample_per_cls, 'warn')
    x4 = np.random.multivariate_normal([14, 20], np.array([[4, 0.2], [0.2, 7]]) * 4 * noise, sample_per_cls_rest, 'warn')
    X = np.vstack([x1, x2, x3, x4])
    # define the labels for each class
    Y = np.empty([n_samples], dtype=int)
    Y[0:sample_per_cls] = 0
    Y[sample_per_cls:2*sample_per_cls] = 1
    Y[2*sample_per_cls:3*sample_per_cls] = 2
    Y[3*sample_per_cls:] = 3
    # shuffle the data set
    rand_int = np.arange(n_samples)
    np.random.shuffle(rand_int)
    X = X[rand_int]
    Y = Y[rand_int]
    return X, Y
X, Y = create_data(n_samples=800, noise=0.15)
clf = svm.SVC(C=0.5, kernel='rbf', gamma=0.1, decision_function_shape='ovr', cache_size=8000)
#the classifier below is consistent
#clf = OneVsRestClassifier(svm.SVC(C=0.5, kernel='rbf', gamma=0.1, decision_function_shape='ovr', cache_size=8000))
clf.fit(X,Y)
Xs = np.linspace(np.min(X[:,0] - 1), np.max(X[:,0] + 1), 150)
Ys = np.linspace(np.min(X[:,1] - 1), np.max(X[:,1] + 1), 150)
XX, YY = np.meshgrid(Xs, Ys)
test_set = np.stack([XX, YY], axis=2).reshape(-1,2)
#prediction via argmax of the decision function
pred = np.argmax(clf.decision_function(test_set), axis=1)
#prediction with sklearn function
pred_1 = clf.predict(test_set)
diff = np.equal(pred, pred_1)
error = np.where(~diff)[0]
print(error)
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [16, 10]
plt.contourf(XX, YY, pred_1.reshape(XX.shape), alpha=0.5, cmap='seismic')
plt.colorbar()
plt.scatter(X[:,0], X[:,1], c=Y, s=20, marker='o', edgecolors='k')
plt.scatter(test_set[error, 0], test_set[error, 1], c=pred_1[error], s=120, marker='^', edgecolors='k')
plt.show()
Triangles mark the inconsistent points:
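For reference: SVC always trains one-vs-one classifiers internally, and decision_function_shape='ovr' only derives an ovr-shaped matrix from the pairwise values, while predict appears to be based on the pairwise votes rather than on the argmax of that derived matrix. A minimal sketch of such a voting scheme (assuming the columns of the 'ovo' decision function are ordered (0,1), (0,2), ..., with positive values favoring the first class of each pair), which should reproduce clf.predict up to tie-breaking:

clf_ovo = svm.SVC(C=0.5, kernel='rbf', gamma=0.1, decision_function_shape='ovo', cache_size=8000)
clf_ovo.fit(X, Y)
dec = clf_ovo.decision_function(test_set)  # shape (n_samples, n_classes*(n_classes-1)/2)
n_classes = 4
votes = np.zeros((test_set.shape[0], n_classes))
k = 0
for i in range(n_classes):
    for j in range(i + 1, n_classes):
        votes[dec[:, k] >= 0, i] += 1  # positive value: vote for class i
        votes[dec[:, k] < 0, j] += 1   # negative value: vote for class j
        k += 1
pred_votes = np.argmax(votes, axis=1)  # compare against clf.predict(test_set)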

Related

How can I change the activation function of the hidden-layer nodes using neurolab?

Hello dear users of neurolab, I want to change the activation function of the hidden-layer nodes to ReLU, while keeping the linear function in the output nodes.
import numpy as np
import neurolab as nl
# Create train samples
input = np.random.uniform(-1, 1, (5, 2))
target = (input[:, 0] + input[:, 1]).reshape(5, 1)
net = nl.net.newff([[-1, 1]]*2, [4, 1])
# What I try to do
import numpy as np
import neurolab as nl
# Create train samples
input = np.random.uniform(-1, 1, (5, 2))
target = (input[:, 0] + input[:, 1]).reshape(5, 1)
net = nl.net.newff([[-1, 1]]*2, [4, 1],[nl.trans.PoseLin(), nl.trans.PureLin()])

PyTorch: how to calculate the loss with a for-loop

I want to compute a custom loss. I want to predict N points by deep learning, so the output of the network is N points (N*3).
The numpy calculation should be:
import numpy as np

point1 = np.random.random(size=[10, 30, 3])
point2 = np.random.random(size=[10, 30, 3])
losses = []
for s in range(10):
    loss = 0
    for p in range(30):
        p1 = point1[s, p, :]
        dis = p1 - point2[s, :, :]
        dis = np.linalg.norm(dis, axis=1)
        loss += dis.min()
    losses.append(loss)
print(losses)
In PyTorch, the points would be:
import torch

point1 = np.random.random(size=[10, 30, 3])
point2 = np.random.random(size=[10, 30, 3])
point1 = torch.from_numpy(point1)
point2 = torch.from_numpy(point2)
How can I calculate the loss in pytorch?
Any suggestion is appreciated!
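One vectorized way to express the same loss is with torch.cdist (a sketch mirroring the numpy loop above; reducing the per-sample losses with sum() is my assumption, use mean() or keep the vector as needed):

dists = torch.cdist(point1, point2)          # pairwise distances, shape (10, 30, 30)
losses = dists.min(dim=2).values.sum(dim=1)  # per-sample loss, same as the loop
loss = losses.sum()                          # scalar, e.g. for loss.backward()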

PyPlot Change Scatter Label When Points Overlap

I am graphing the predicted and actual results of an ML project using pyplot. I have a scatter plot of each dataset as a subplot, and the Y values are elements of [-1, 0, 1]. I would like to change the color of the points when both datasets have the same X and Y value, but I am not sure how to implement this. Here is my code so far:
import matplotlib.pyplot as plt
Y = [1, 0, -1, 0, 1]
Z = [1, 1, 1, 1, 1]
plt.subplots()
plt.title('Title')
plt.xlabel('Timestep')
plt.ylabel('Score')
plt.scatter(x = [i for i in range(len(Y))], y = Y, label = 'Actual')
plt.scatter(x = [i for i in range(len(Y))], y = Z, label = 'Predicted')
plt.legend()
I would simply make use of NumPy indexing in this case. Specifically, first plot all the data points, and then additionally highlight only those points which fulfill the conditions X==Y and X==Z:
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
Y = np.array([1, 0, -1, 0, 1])
Z = np.array([1, 1, 1, 1, 1])
X = np.arange(len(Y))
# Labels and titles here
plt.scatter(X, Y, label = 'Actual')
plt.scatter(X, Z, label = 'Predicted')
plt.scatter(X[X==Y], Y[X==Y], color='black', s=500)
plt.scatter(X[X==Z], Z[X==Z], color='red', s=500)
plt.xticks(X)
plt.legend()
plt.show()
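Note that the X==Y and X==Z conditions above compare the timestep index with the score values. If the intent is rather to highlight timesteps where the prediction matches the actual value, comparing the two series directly may be closer to what you want (a sketch, reusing the arrays from above):

match = Y == Z
plt.scatter(X[match], Y[match], color='red', s=500)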

LinearSVC() differs from SVC(kernel='linear')

When the data is offset (not centered at zero), LinearSVC() and SVC(kernel='linear') give awfully different results. (EDIT: the problem might be that it does not handle non-normalized data.)
import matplotlib.pyplot as plot
plot.ioff()
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import LinearSVC, SVC

def plot_hyperplane(m, X):
    # draw the separating line w0*x + w1*y + b = 0
    w = m.coef_[0]
    a = -w[0] / w[1]
    xx = np.linspace(np.min(X[:, 0]), np.max(X[:, 0]))
    yy = a * xx - m.intercept_[0] / w[1]
    plot.plot(xx, yy, 'k-')

X, y = make_blobs(n_samples=100, centers=2, n_features=2,
                  center_box=(0, 1))
# offset the data away from the origin
X[y == 0] = X[y == 0] + 100
X[y == 1] = X[y == 1] + 110

for i, m in enumerate((LinearSVC(), SVC(kernel='linear'))):
    m.fit(X, y)
    plot.subplot(1, 2, i + 1)
    plot_hyperplane(m, X)
    plot.plot(X[y == 0, 0], X[y == 0, 1], 'r.')
    plot.plot(X[y == 1, 0], X[y == 1, 1], 'b.')
    # predictions on a grid, to visualize each decision region
    xv, yv = np.meshgrid(np.linspace(98, 114, 10), np.linspace(98, 114, 10))
    _X = np.c_[xv.reshape((xv.size, 1)), yv.reshape((yv.size, 1))]
    _y = m.predict(_X)
    plot.plot(_X[_y == 0, 0], _X[_y == 0, 1], 'r.', alpha=0.4)
    plot.plot(_X[_y == 1, 0], _X[_y == 1, 1], 'b.', alpha=0.4)
plot.show()
This is the result I get (left: LinearSVC(), right: SVC(kernel='linear')):
sklearn.__version__ is 0.17, but I also tested on Ubuntu 14.04, which ships with 0.15.
I thought about reporting the bug, but it seems too evident to be a bug. What am I missing?
Reading the documentation, they use different underlying implementations: LinearSVC uses liblinear, whereas SVC uses libsvm.
Looking closely at the coefficients and the intercept, it seems LinearSVC applies regularization to the intercept, whereas SVC does not.
By adding intercept_scaling, I was able to obtain the same results for both:
LinearSVC(loss='hinge', intercept_scaling=1000)
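As a quick check (a sketch reusing X and y from the snippet above; the idea is that with a large intercept_scaling the penalty on the scaled intercept becomes negligible, so liblinear's solution approaches libsvm's):

m1 = LinearSVC(loss='hinge', intercept_scaling=1000).fit(X, y)
m2 = SVC(kernel='linear').fit(X, y)
print(m1.coef_, m1.intercept_)  # should now be close to each other
print(m2.coef_, m2.intercept_)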

How to calculate F1-micro score using lasagne

import theano.tensor as T
import numpy as np
from nolearn.lasagne import NeuralNet

def multilabel_objective(predictions, targets):
    epsilon = np.float32(1.0e-6)
    one = np.float32(1.0)
    pred = T.clip(predictions, epsilon, one - epsilon)
    return -T.sum(targets * T.log(pred) + (one - targets) * T.log(one - pred), axis=1)

net = NeuralNet(
    # your other parameters here (layers, update, max_epochs...)
    # here are the ones you're interested in:
    objective_loss_function=multilabel_objective,
    custom_score=("validation score", lambda x, y: np.mean(np.abs(x - y)))
)
I found this code online and wanted to test it. It did work; the results include training loss, test loss, validation score, duration, and so on.
But how can I get the F1-micro score? Also, I tried to use scikit-learn to calculate F1 by adding the following code:
from sklearn import cross_validation

data = data.astype(np.float32)
classes = classes.astype(np.float32)
net.fit(data, classes)
score = cross_validation.cross_val_score(net, data, classes, scoring='f1', cv=10)
print(score)
I got this error:
ValueError: Can't handle mix of multilabel-indicator and continuous-multioutput
How can I implement the F1-micro calculation based on the code above?
Suppose your true labels on the test set are y_true (shape: (n_samples, n_classes), composed only of 0s and 1s), and your test observations are X_test (shape: (n_samples, n_features)).
Then you get your net's predicted values on the test set with y_pred = net.predict(X_test).
If you are doing multiclass classification:
Since in your network you have set regression to False, this should be composed of 0s and 1s only, too.
You can compute the micro-averaged F1 score with:
from sklearn.metrics import f1_score
f1_score(y_true, y_pred, average='micro')
Small code sample to illustrate this (with dummy data; use your actual y_pred and y_true):
from sklearn.metrics import f1_score
import numpy as np
y_true = np.array([[0, 0, 1], [0, 1, 0], [0, 0, 1], [0, 0, 1], [0, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1], [0, 0, 1]])
t = f1_score(y_true, y_pred, average='micro')
If you are doing multilabel classification:
You are not outputting a matrix of 0s and 1s, but a matrix of probabilities: y_pred[i, j] is the probability that observation i belongs to class j.
You need to define a threshold value, above which you will say an observation belongs to a given class. Then you can attribute labels accordingly and proceed just the same as in the previous case.
thresh = 0.8  # choose your own value
y_pred_binary = np.where(y_pred > thresh, 1, 0)
# creates an array with 1 where y_pred > thresh, 0 elsewhere
f1_score(y_true, y_pred_binary, average='micro')
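If you also want the score reported during training, the same computation can be plugged into nolearn's custom_score hook from the snippet at the top (a sketch; I'm assuming the score function receives the validation targets and the network outputs, like the lambda above, and the 0.5 threshold is arbitrary):

def f1_micro_score(y_true, y_proba):
    # threshold the network outputs, then micro-average F1
    y_pred = np.where(y_proba > 0.5, 1, 0)
    return f1_score(y_true, y_pred, average='micro')

# then pass: custom_score=("f1 micro", f1_micro_score) to NeuralNet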
