I wonder how to compute precision and recall using a confusion matrix for a multi-class classification problem. Specifically, an observation can only be assigned to its most probable class / label. I would like to compute:
Precision = TP / (TP+FP)
Recall = TP / (TP+FN)
for each class, and then compute the micro-averaged F-measure.
In a 2-hypothesis case, the confusion matrix is usually:
|       | Declare H1 | Declare H0 |
|-------|------------|------------|
| Is H1 | TP         | FN         |
| Is H0 | FP         | TN         |
where I've used something similar to your notation:
TP = true positive (declare H1 when, in truth, H1),
FN = false negative (declare H0 when, in truth, H1),
FP = false positive (declare H1 when, in truth, H0),
TN = true negative (declare H0 when, in truth, H0).
From the raw data, the values in the table would typically be the counts for each occurrence over the test data. From this, you should be able to compute the quantities you need.
Edit
The generalization to multi-class problems is to sum over rows / columns of the confusion matrix. Given that the matrix is oriented as above, i.e., that a given row of the matrix corresponds to a specific value of the "truth", we have:
$\text{Precision}_i = \cfrac{M_{ii}}{\sum_j M_{ji}}$
$\text{Recall}_i = \cfrac{M_{ii}}{\sum_j M_{ij}}$
That is, precision is the fraction of events where we correctly declared $i$
out of all instances where the algorithm declared $i$. Conversely, recall is the fraction of events where we correctly declared $i$ out of all of the cases where the true state of the world is $i$.
Good summary paper, looking at these metrics for multi-class problems:
Sokolova, M., & Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks. Information Processing and Management, 45, 427–437.
The abstract reads:
This paper presents a systematic analysis of twenty four performance measures used in the complete spectrum of Machine Learning classification tasks, i.e., binary, multi-class, multi-labelled, and hierarchical. For each classification task, the study relates a set of changes in a confusion matrix to specific characteristics of data. Then the analysis concentrates on the type of changes to a confusion matrix that do not change a measure, therefore, preserve a classifier’s evaluation (measure invariance). The result is the measure invariance taxonomy with respect to all relevant label distribution changes in a classification problem. This formal analysis is supported by examples of applications where invariance properties of measures lead to a more reliable evaluation of classifiers. Text classification supplements the discussion with several case studies.
Using sklearn or tensorflow and numpy:
from sklearn.metrics import confusion_matrix
# or:
# from tensorflow.math import confusion_matrix
import numpy as np
labels = ...
predictions = ...
cm = confusion_matrix(labels, predictions)
recall = np.diag(cm) / np.sum(cm, axis = 1)
precision = np.diag(cm) / np.sum(cm, axis = 0)
To get overall (macro-averaged) measures of precision and recall, then use
np.mean(recall)
np.mean(precision)
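As a side note (my own addition, not part of the original answer): the question also asks for the micro-averaged F-measure. Micro-averaging pools the counts over all classes first, and for single-label multi-class data the micro-averaged precision, recall, and F-measure all reduce to overall accuracy, which can be read off the same confusion matrix:
micro_precision = np.trace(cm) / np.sum(cm)  # == micro recall == micro F1 for single-label data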
Cristian Garcia's code can be reduced by using sklearn's precision_score directly:
>>> from sklearn.metrics import precision_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> precision_score(y_true, y_pred, average='micro')
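For this toy example the call returns about 0.333: two of the six predictions are correct, and with average='micro' the pooled counts make the precision equal to the overall accuracy.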
Here is a different view from the other answers that I think will be helpful to others. The goal here is to allow you to compute these metrics using basic laws of probability.
First, it helps to understand what a confusion matrix is telling us in general. Let $Y$ represent a class label and $\hat Y$ represent a class prediction. In the binary case, let the two possible values for $Y$ and $\hat Y$ be $0$ and $1$, which represent the classes. Next, suppose that the confusion matrix for $Y$ and $\hat Y$ is:
|         | $\hat Y = 0$ | $\hat Y = 1$ |
|---------|--------------|--------------|
| $Y = 0$ | 10           | 20           |
| $Y = 1$ | 30           | 40           |
Next, let us normalize this confusion matrix such that the sum of all of its elements is $1$. Currently, the sum of all elements of the confusion matrix is $10 + 20 + 30 + 40 = 100$, which is our normalization factor. After dividing the elements of the confusion matrix by the normalization factor, we get the following normalized confusion matrix:
|         | $\hat Y = 0$   | $\hat Y = 1$   |
|---------|----------------|----------------|
| $Y = 0$ | $\frac{1}{10}$ | $\frac{2}{10}$ |
| $Y = 1$ | $\frac{3}{10}$ | $\frac{4}{10}$ |
With this formulation of the confusion matrix, we can interpret $Y$ and $\hat Y$ slightly differently. We can interpret them as jointly Bernoulli (binary) random variables, where their normalized confusion matrix represents their joint probability mass function. When we interpret $Y$ and $\hat Y$ this way, the definitions of precision and recall are much easier to remember using Bayes' rule and the law of total probability:
\begin{align}
\text{Precision} &= P(Y = 1 \mid \hat Y = 1) = \frac{P(Y = 1 , \hat Y = 1)}{P(Y = 1 , \hat Y = 1) + P(Y = 0 , \hat Y = 1)} \\
\text{Recall} &= P(\hat Y = 1 \mid Y = 1) = \frac{P(Y = 1 , \hat Y = 1)}{P(Y = 1 , \hat Y = 1) + P(Y = 1 , \hat Y = 0)}
\end{align}
How do we determine these probabilities? We can estimate them using the normalized confusion matrix. From the table above, we see that
\begin{align}
P(Y = 0 , \hat Y = 0) &\approx \frac{1}{10} \\
P(Y = 0 , \hat Y = 1) &\approx \frac{2}{10} \\
P(Y = 1 , \hat Y = 0) &\approx \frac{3}{10} \\
P(Y = 1 , \hat Y = 1) &\approx \frac{4}{10}
\end{align}
Therefore, the precision and recall for this specific example are
\begin{align}
\text{Precision} &= P(Y = 1 \mid \hat Y = 1) = \frac{\frac{4}{10}}{\frac{4}{10} + \frac{2}{10}} = \frac{4}{4 + 2} = \frac{2}{3} \\
\text{Recall} &= P(\hat Y = 1 \mid Y = 1) = \frac{\frac{4}{10}}{\frac{4}{10} + \frac{3}{10}} = \frac{4}{4 + 3} = \frac{4}{7}
\end{align}
Note that, from the calculations above, we didn't really need to normalize the confusion matrix before computing the precision and recall. The reason for this is that, because of Bayes' rule, we end up dividing one value that is normalized by another value that is normalized, which means that the normalization factor can be cancelled out.
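As an illustrative sketch (my own, not part of the original answer), the same computation can be written directly in numpy, and it gives identical results whether or not the matrix is normalized first:
import numpy as np

cm = np.array([[10, 20],
               [30, 40]])        # rows: true Y, columns: predicted Y-hat

joint = cm / cm.sum()            # normalized confusion matrix, i.e. the estimated joint pmf

precision = joint[1, 1] / joint[:, 1].sum()   # P(Y = 1 | Y-hat = 1) = 2/3
recall    = joint[1, 1] / joint[1, :].sum()   # P(Y-hat = 1 | Y = 1) = 4/7

# The normalization factor cancels, so the raw counts give the same numbers:
assert np.isclose(precision, cm[1, 1] / cm[:, 1].sum())
assert np.isclose(recall,    cm[1, 1] / cm[1, :].sum())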
A nice thing about this interpretation is that it can be generalized to confusion matrices of any size. In the case where there are more than 2 classes, $Y$ and $\hat Y$ are no longer considered to be jointly Bernoulli, but rather jointly categorical. Moreover, we would need to specify which class we are computing the precision and recall for. In fact, the definitions above may be interpreted as the precision and recall for class $1$. We can also compute the precision and recall for class $0$, but these have different names in the literature (namely, the negative predictive value and the specificity, respectively).
I am trying to implement a custom MLE for the binomial distribution (for learning purposes) and am stuck at the implementation of the binomial coefficient in Google JAX: there is no analog of scipy.special.binom() implemented.
What shall I use instead?
The binomial coefficient for general real-valued inputs can be computed in terms of the gamma function, which is available in JAX via jax.scipy.special.gammaln. Here's one way you could define it:
import jax.numpy as jnp
from jax.scipy.special import gammaln

def binom(x, y):
    return jnp.exp(gammaln(x + 1) - gammaln(y + 1) - gammaln(x - y + 1))
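As a quick sanity check (my own addition; it assumes double precision is enabled, e.g. via jax.config.update("jax_enable_x64", True)), this can be compared against scipy for a few small integer arguments:
import numpy as np
from scipy import special

# gamma-based binom should reproduce the exact values up to floating-point error
for n, k in [(5, 2), (10, 3), (20, 4)]:
    assert np.isclose(float(binom(n, k)), special.binom(n, k))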
Here is a (sequential) integer implementation using JAX.
import jax
import jax.numpy as jnp

def binom_int_seq(x: int, y: int):
    def scan_body(carry, values):
        n, d = values
        # multiply by the next numerator factor and divide by the next denominator
        # factor; the running quotient is an exact integer at every step
        carry = (carry * n) // d
        return carry, None
    y = max(y, x - y)
    nd = jnp.concatenate(
        (jnp.arange(y + 2, x + 1, dtype='u8')[:, None],
         jnp.arange(2, x - y + 1, dtype='u8')[:, None],),
        axis=1
    )
    bc, *_ = jax.lax.scan(scan_body, jnp.array(y + 1, dtype='u8'), nd)
    return bc

binom_int_seq_jit = jax.jit(binom_int_seq, static_argnums=(0, 1))
which gives
import scipy as sp
import scipy.special  # make sp.special available

x, y = 60, 31
bc_ref = sp.special.comb(x, y, exact=True)
# 114449595062769120
binom_int_seq(x, y)-bc_ref
# DeviceArray(0, dtype=uint64)
# Using above logarithmic gamma function based implementation
binom(x, y)-bc_ref
# DeviceArray(496., dtype=float64, weak_type=True)
Keep in mind the binom_int_seq implementation is only correct if
(x-max(x-y, y))*sp.special.comb(x, y, exact=True) < jnp.iinfo(jnp.uint64).max
Unlike the real-valued version, the error will be sudden and catastrophic if this condition is not satisfied.
There may be other ways to relax this constraint, such as performing cancellations based on prime factorisation, without resorting to larger unsigned integers (/arbitrary precision).
A monoidal version could be implemented which computes the binomial coefficient numerator and denominator reductions then integer divides, but this places stricter constraints on the maximum arguments.
Do you know a well-known Python library for interpolating randomly located 2d points based on regular-grid data points?
Note that the data points used to create the interpolator are on a regular grid, but the evaluation points are not.
Context
Let me explain the context. In my application, the data points used to create the interpolator are on a regular grid. However, at evaluation time, the points to be evaluated are at random locations (say np.random.rand(100, 2)).
As far as I know, the most commonly used library for 2d interpolation is scipy's interp2d. But at evaluation time interp2d takes grid coordinates X and Y instead of a list of points, as the documentation describes.
Of course, it is possible to do something like
values = []
for p in np.random.rand(100, 2):
    value = itp([p[0]], [p[1]])
    values.append(value)
or to avoid for-loop
pts = np.random.rand(100, 2)
tmp = itp(pts[:, 0], pts[:, 1])
value = tmp.diagonal()
But both methods are too inefficient. The first one will be slow because of the Python for-loop (we want to run as much of the code as possible on the C side), and the second one is wasteful because it evaluates N^2 points to get results for only N points.
scipy.interpolate.RegularGridInterpolator does exactly this. With it, one can create an interpolator from gridded data points, and at evaluation time it takes a 2-dim numpy array with shape (n_points, n_dim).
For example:
import numpy as np
from scipy.interpolate import RegularGridInterpolator
x = np.linspace(0, 1, 20)
y = np.linspace(0, 1, 20)
f = np.random.randn(20, 20)
itp = RegularGridInterpolator((x, y), f)
pts = np.random.rand(100, 2)
f_interped = itp(pts)
I need to solve a nonlinear optimization problem in Python. I found out that scipy solves optimization problems, but I don't know what I am doing wrong, since with some example input it can't find the correct solution that I get from the Knitro solver (via AMPL) on the NEOS server.
My problem is: given a set of points, find the biggest inscribed ellipse that at most touches those points, with no point ever lying inside the ellipse.
Theory
Formulating the optimization problem, I have a and b the semi-axes, phi the rotation, xc and yc the coordinates of the centre, and points the list of points, each element of the form [x, y] (indices 0 and 1).
On paper, the problem and the constraints are as follows; a, b, phi, xc, yc are real, and the points are integers:
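(The original formulation image is not reproduced here; the following is a reconstruction from the constraints encoded in the Python code below.) The objective is to maximize the ellipse area $\pi a b$ subject to every point $(x_p, y_p)$ lying on or outside the ellipse:
\begin{align}
(x_p - x_c)^2\left(\tfrac{\cos^2\phi}{a^2} + \tfrac{\sin^2\phi}{b^2}\right) + (y_p - y_c)^2\left(\tfrac{\sin^2\phi}{a^2} + \tfrac{\cos^2\phi}{b^2}\right) + 2(x_p - x_c)(y_p - y_c)\cos\phi\sin\phi\left(\tfrac{1}{b^2} - \tfrac{1}{a^2}\right) \ge 1,
\end{align}
with $a \ge 1$, $b \ge 1$, and $\phi \ge 0$.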
NEOS
The files I used in NEOS are these:
mod
dat
run
With successful results (complete):
xc = 143.012
yc = 262.634
a = 181.489
b = 140.429
phi = 1.43575
Python
So, my Python code is this; it is my first time using scipy for optimization, so I don't exclude errors in my understanding of how it works from the documentation.
from typing import List
import numpy as np
from scipy.optimize import *


def ellipse_calc(
        points: List[List[int]],
        verbose: bool = False
):
    centre = [0, 0]
    for i in range(len(points)):
        centre[0] += points[i][0]
        centre[1] += points[i][1]
    centre[0] /= len(points)
    centre[1] /= len(points)
    if verbose:
        print(f'centre: {centre[0]:.2f}, {centre[1]:.2f}')
    max_x = max([p[0] for p in points])
    max_y = max([p[1] for p in points])
    min_x = min([p[0] for p in points])
    min_y = min([p[1] for p in points])
    initial_axis = 0.25 * (max_x - min_x + max_y - min_y)
    if verbose:
        print(initial_axis)
    constraints = [
        NonlinearConstraint(lambda x: x[0], 1, np.inf),
        NonlinearConstraint(lambda x: x[1], 1, np.inf),
        NonlinearConstraint(lambda x: x[2], 0, np.inf),
    ]
    for i in range(len(points)):
        constraints += [NonlinearConstraint(
            lambda x:
            (points[i][0] - x[3]) ** 2 * (np.cos(x[2]) ** 2 / x[0]**2 + np.sin(x[2]) ** 2 / x[1]**2) +
            (points[i][1] - x[4]) ** 2 * (np.sin(x[2]) ** 2 / x[0]**2 + np.cos(x[2]) ** 2 / x[1]**2) +
            2 * (points[i][0] - x[3]) * (points[i][1] - x[4]) *
            np.cos(x[2]) * np.sin(x[2]) * (1 / x[1]**2 - 1 / x[0]**2), 1, np.inf)]
    result = minimize(
        lambda x: -np.pi * x[0] * x[1],
        [initial_axis, initial_axis, 0, centre[0], centre[1]],
        constraints=constraints
    )
    print(result)


if __name__ == '__main__':
    points = [[50,44],[91,44],[161,44],[177,44],[44,88],[189,88],[239,88],[259,88],[2,132],[250,132],[2,176],[329,176],[2,220],[289,220],[2,264],[288,264],[2,308],[277,308],[2,352],[285,352],[2,396],[25,396],[35,396],[231,396],[284,396],[298,396],[36,440],[76,440],[106,440],[173,440]]
    ellipse_calc(points, True)
This attempt, with the same data I tried on NEOS, gives the following output:
fun: -8.992626773255127e+40
jac: array([-5.68832805e+20, -4.96651566e+20, -0.00000000e+00, -0.00000000e+00,
-0.00000000e+00])
message: 'Inequality constraints incompatible'
nfev: 54
nit: 10
njev: 9
status: 4
success: False
x: array([ 1.58089104e+20, 1.81065104e+20, -1.24564497e+15, -1.55647883e+10,
-2.76654483e+10])
Does anyone know what I am doing wrong and how to fix it? Also, I don't really know if it is possible to solve this problem with scipy; in that case, I am looking for a free library to solve it, or even for alternative methods of finding that ellipse equation.
This isn't a complete answer, but it should help you to get started. Here are two hints:
Pass simple box constraints on the variables as bounds, not as constraints. That is, use
bounds = [(1, None), (1, None), (0, None), (None, None), (None, None)]
and pass it to minimize via the bounds parameter.
You need to be really careful when defining constraints through lambda expressions inside a loop, see here. You need to capture the loop variable i by lambda x, i=i: your_fun. Otherwise, each of your constraints uses i=29 and thus evaluates the last point. This can easily be observed by evaluating all constraints for a specific value.
Then you should at least get a feasible solution with an objective value of 79384. Note also that you can shorten your code significantly by using numpy functions instead of loops.
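For illustration, here is a minimal sketch (untested, using the same variable names as in the question) of the constraint loop with both changes applied:
bounds = [(1, None), (1, None), (0, None), (None, None), (None, None)]

constraints = []
for i in range(len(points)):
    constraints.append(NonlinearConstraint(
        # i=i captures the current loop value; otherwise every lambda would see the last i
        lambda x, i=i:
        (points[i][0] - x[3]) ** 2 * (np.cos(x[2]) ** 2 / x[0]**2 + np.sin(x[2]) ** 2 / x[1]**2) +
        (points[i][1] - x[4]) ** 2 * (np.sin(x[2]) ** 2 / x[0]**2 + np.cos(x[2]) ** 2 / x[1]**2) +
        2 * (points[i][0] - x[3]) * (points[i][1] - x[4]) *
        np.cos(x[2]) * np.sin(x[2]) * (1 / x[1]**2 - 1 / x[0]**2), 1, np.inf))

result = minimize(
    lambda x: -np.pi * x[0] * x[1],
    [initial_axis, initial_axis, 0, centre[0], centre[1]],
    bounds=bounds,
    constraints=constraints
)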
I would like to implement the field-aware factorization model (FFM) in a vectorized way. In FFM, a prediction is made by the following equation
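(The original equation image is not reproduced here; the following is a reconstruction of the FFM pairwise interaction term from equation (4) of the FFM paper, matching the loop below.)
$\phi_{\text{FFM}}(\mathbf{w}, \mathbf{x}) = \sum_{j_1 = 1}^{n} \sum_{j_2 = j_1 + 1}^{n} \left( \mathbf{w}_{j_1, f_{j_2}} \cdot \mathbf{w}_{j_2, f_{j_1}} \right) x_{j_1} x_{j_2}$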
where w are the embeddings that depend on the feature and the field of the other feature. For more info, see equation (4) in FFM.
To do so, I have defined the following parameter:
import torch
W = torch.nn.Parameter(torch.Tensor(n_features, n_fields, n_factors), requires_grad=True)
Now, given an input x of size (batch_size, n_features), I want to be able to compute the previous equation. Here is my current (non-vectorized) implementation:
total_inter = torch.zeros(x.shape[0])
for i in range(n_features):
    for j in range(i + 1, n_features):
        temp1 = torch.mm(
            x[:, i].unsqueeze(1),
            W[i, feature2field[j], :].unsqueeze(0))
        temp2 = torch.mm(
            x[:, j].unsqueeze(1),
            W[j, feature2field[i], :].unsqueeze(0))
        total_inter += torch.sum(temp1 * temp2, dim=1)
Unsurprisingly, this implementation is horribly slow since n_features can easily be as large as 1000! Note however that most of the entries of x are 0. All inputs are appreciated!
Edit:
If it can help in any ways, here are some implementations of this model in PyTorch:
pytorch-fm
ctr_model_zoo
Unfortunately, I cannot figure out exactly how they have done it.
Additional update:
I can now obtain the product of x and W in a more efficient way by doing:
temp = torch.einsum('ij, jkl -> ijkl', x, W)
Thus, my loop is now:
total_inter = torch.zeros(x.shape[0])
for i in range(n_features):
    for j in range(i + 1, n_features):
        temp1 = temp[:, i, feature2field[j], :]
        temp2 = temp[:, j, feature2field[i], :]
        total_inter += 0.5 * torch.sum(temp1 * temp2, dim=1)
It is, however, still too slow, since this loop runs for about 500,000 iterations.
Something that could potentially help you speed up the multiplication is using pytorch sparse tensors.
Also something that might work would be the following:
Create n arrays, one for each feature i, that would hold its corresponding field factors in each row, e.g. for feature i = 0:
[ W[0, feature2field[0], :],
  W[0, feature2field[1], :],
  ...
  W[0, feature2field[n], :] ]
Then calculate the multiplication of those arrays, let's call them F, with X:
R[i] = F[i] * X
So each element in R would hold the result of the multiplication, an array, of the F[i] with X.
Next you would multiply each R[i] with its transpose
R[i] = R[i] * R[i].T
Now you can do the summation in a loop like before
for i in range(n_features):
    total_inter += torch.sum(R[i], dim=1)
Please take this with a grain of salt, as I haven't tested it. In any case, I think it will point you in the right direction.
One problem that might occur is in the transpose multiplication, in which each element will also be multiplied with itself and then added to the sum. I don't think it will affect the classifier, but in any case you can set the elements on and above the diagonal of the transpose product to 0 (including the diagonal).
Also, although minor, please move the first unsqueeze operation outside of the nested for loop.
I hope it helps.
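For what it's worth, here is a separate, fully vectorized sketch of the same pairwise sum (my own, not the R-matrix construction above, and untested on real data); field_idx stands in for feature2field as a LongTensor. It trades memory (an n_features x n_features x n_factors intermediate) for the removal of the Python loop:
import torch

# Hypothetical sizes, for illustration only
batch_size, n_features, n_fields, n_factors = 32, 100, 10, 8
x = torch.rand(batch_size, n_features)
W = torch.nn.Parameter(torch.randn(n_features, n_fields, n_factors))
field_idx = torch.randint(n_fields, (n_features,))   # stands in for feature2field

# Wf[i, j, :] = W[i, field_of_j, :]
Wf = W[:, field_idx, :]                               # (n_features, n_features, n_factors)
# S[i, j] = <W[i, field_of_j], W[j, field_of_i]>, a symmetric matrix
S = torch.einsum('ijk,jik->ij', Wf, Wf)
# Sum over all ordered pairs (i, j), subtract the i == j terms, and halve to keep only i < j
quad = torch.einsum('bi,ij,bj->b', x, S, x)
diag = (x * x) @ torch.diagonal(S)
total_inter = 0.5 * (quad - diag)                     # shape (batch_size,)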