pROC multiclass.roc - univariate case. How is AUC calculated in this instance?

It is clear how the Hand and Till method works (from class probabilities) in the multivariate case, and I have checked this against some code I wrote: I used a 3-class problem and I get the same result as pROC. When I convert the 3-class problem into a 2-class problem (by merging 2 of the classes), I also get the same results as pROC (still using multiclass.roc). However, when I pass multiclass.roc the response with 3 classes and the 3 levels, but only one class's probability vector, I get a different result.
I know this case is handled differently; https://rdrr.io/cran/pROC/man/multiclass.html says "The multiclass.roc function can handle two types of datasets: uni- and multi-variate. In the univariate case, a single predictor vector is passed and all the combinations of responses are assessed." However, I haven't been able to find an explanation of what is happening here. Obviously it has something to do with the number of classes in the response (since this is the only other data), but I would be very interested to know what pROC is doing in such a case.
Here are my results (I wrote AUC_mc):
pROC: Class 3 univariate: 0.8494, Class 3 vs the rest (2 class): 0.9118
AUC_mc: Class 3 vs the rest (2 class): 0.9118
pROC: Class 1 univariate: 0.9721, Class 1 vs the rest (2 class): 0.9693
AUC_mc: Class 1 vs the rest (2 class): 0.9692823
#
# Here the probabilities for only 1 class are passed to pROC.
# When there are 2 classes in the domain that's fine
# When there are 3 classes in the domain then I get a different result
#
roc = multiclass.roc(test.df$response, probabilities[,n], levels=levels(test.df$response))
pROC: 3 class: 0.9568
AUC_mc: 3 class: 0.9567698
roc = multiclass.roc(test.df$response, probabilities[,], levels=levels(test.df$response))
Many thanks

In the univariate case, pROC tests all the 1 vs 1 comparisons. There are no 1-vs-the-rest comparisons. So for 3 classes you have 3 ROC curves: 1 vs 2, 1 vs 3, and 2 vs 3.
You can find the source code in multiclass.R:
rocs <- utils::combn(levels, 2, function(X, response, predictor, percent, ...) {
  roc(response, predictor, levels=X, percent=percent, auc=FALSE, ci=FALSE, ...)
[...]
The utils::combn function generates all combinations of the elements of levels taken 2 at a time.
I will try to improve the documentation to reflect this better.
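To make the pairwise behaviour concrete, here is a rough Python sketch (not pROC's actual code, which also handles the direction and percent options): with a single predictor vector, every unordered pair of classes gets its own binary ROC curve, and the multiclass AUC is then an average of those pairwise AUCs, in the spirit of the Hand and Till formulation mentioned in the question. scikit-learn's roc_auc_score stands in for pROC's roc() here.
from itertools import combinations
import numpy as np
from sklearn.metrics import roc_auc_score

def pairwise_mean_auc(response, predictor):
    """Average AUC over all 1-vs-1 class pairs, using a single predictor vector."""
    response = np.asarray(response)
    predictor = np.asarray(predictor)
    aucs = []
    for a, b in combinations(np.unique(response), 2):  # analogous to utils::combn(levels, 2, ...)
        mask = np.isin(response, [a, b])
        auc = roc_auc_score(response[mask] == b, predictor[mask])
        aucs.append(max(auc, 1 - auc))                 # crude stand-in for pROC's direction="auto"
    return np.mean(aucs)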

Related

How to add output dimensions to `nn.Linear` while freezing the original dimensions?

I am implementing an incremental learning task with PyTorch. Let's say, in a simple scenario, the number of base classes is 5 and the number of incremental classes is 2. That is, I want the model to incrementally learn 2 new classes each time.
Simply, suppose the model is composed of a feature extractor (resnet18) and a classifier, a 1-layer mlp = nn.Linear(126,5). For classifying the novel classes, I must add 2 extra output neurons responsible for the 2 incremental classes. That is to say, I want a new classifier mlp_inc = nn.Linear(126,7). But, importantly, I want to freeze the trained weights (126 * 5) for the base classes while only updating the parameters (126 * 2) for the incremental classes.
A straightforward way is to concatenate the outputs of a base classifier and an incremental classifier:
self.mlp_base = nn.Linear(126, 5)
self.mlp_inc = nn.Linear(126, 2)

def forward(self, x):
    x_base = self.mlp_base(x)
    x_inc = self.mlp_inc(x)
    output = torch.cat((x_base, x_inc), 1)
    return output
But this approach adds a new module self.mlp_inc to the original model. Denote mlp_base1 and mlp_inc1 as the trained classifiers for incremental task 1.
When adapting to a newer incremental task (another 2 novel classes, task 2), I cannot directly merge mlp_base1 and mlp_inc1 into mlp_base and load the state_dict of mlp_base1 and mlp_inc1 into mlp_base. This means I would have to keep adding mlp_inc2, mlp_inc3, ... for later tasks, which is not easily maintainable.
So a simple approach like the code below would be a better choice.
# for task 1
self.mlp = nn.Linear(126,5)
# for task 2
self.mlp = nn.Linear(126,5+2)
# self.mlp.load_state_dict(), load the partial parameters for 5 base classes
# self.mlp.requires_grad(False), freeze the partial parameters for 5 base classes
But this does not seem achievable.
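For what it's worth, a minimal sketch (not from the original post) of one way this could be done: create the larger nn.Linear, copy the trained slice into it, and zero the gradients of the base-class rows with a hook so that only the new rows are updated. The sizes 126, 5 and 2 are taken from the question; everything else is illustrative.
import torch
import torch.nn as nn

old_mlp = nn.Linear(126, 5)      # stands in for the trained base classifier
new_mlp = nn.Linear(126, 5 + 2)  # classifier for base + incremental classes

# Copy the trained base-class parameters into the first 5 rows of the new layer.
with torch.no_grad():
    new_mlp.weight[:5] = old_mlp.weight
    new_mlp.bias[:5] = old_mlp.bias

def zero_base_grad(grad):
    # Called during backward: drop the gradient of the 5 base-class rows.
    grad = grad.clone()
    grad[:5] = 0
    return grad

new_mlp.weight.register_hook(zero_base_grad)
new_mlp.bias.register_hook(zero_base_grad)
Note that optimizers with weight decay or momentum may still nudge the "frozen" rows, so per-parameter-group optimizer settings (or restoring the copied slice after each step) may be needed in practice.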

Custom Metrics Keras [duplicate]

This question already has answers here: how to implement custom metric in keras? (3 answers). Closed 3 years ago.
I need help creating a custom metric in Keras. I need to count how many times my error is equal to zero (y_pred - y_true = 0).
I tried this:
n_train = 1147  # Number of samples on training set
c = 0           # Variable to count

def our_metric(y_true, y_pred):
    if y_true - y_pred == 0:
        c += 1
    return c / n_train
But I'm getting this error:
OperatorNotAllowedInGraphError: using a tf.Tensor as a Python bool is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
EDIT: Using the solution proposed here:
Creating custom conditional metric with Keras
I solved my problem like this:
c = tf.constant(0)

def our_metric(y_true, y_pred):
    mask = K.equal(y_pred, y_true)  # TRUE if y_pred = y_true
    mask = K.cast(mask, K.floatx())
    s = K.sum(mask)
    return s / n_train
You can't run a Python comparison in plain TensorFlow (using static graphs).
You have to enable eager mode, a wrapper which lets you use some Python control statements (like if or for loops). Just decorate your function as the error suggests, or issue tf.enable_eager_execution() at the beginning of your script.
You may also want to update your code to TF 2.0; it's more intuitive and has eager mode on by default.
There are numerous ways to use Keras backend functions to count the number of times a value is equal to zero. You just have to think a bit outside of the box. Here is an example:
diff = y_true - y_pred
count = K.sum(K.cast(K.equal(diff, K.zeros_like(diff)), 'int8'))
There's also a tf.count_nonzero operation that could be used, but mixing Keras and explicit TensorFlow operations can cause issues.
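For completeness, a minimal sketch (not from the original answers) of how a backend-based metric like the one above could be wired into a model; the toy model, input shape and n_train value are placeholders:
import tensorflow as tf
from tensorflow.keras import backend as K

n_train = 1147  # placeholder: number of training samples

def zero_error_fraction(y_true, y_pred):
    # Fraction of predictions whose error is exactly zero, relative to n_train.
    diff = y_true - y_pred
    zeros = K.cast(K.equal(diff, K.zeros_like(diff)), K.floatx())
    return K.sum(zeros) / n_train

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
model.compile(optimizer='adam', loss='mse', metrics=[zero_error_fraction])
Keep in mind that Keras averages metric values over batches, so dividing by n_train inside the metric only approximates the intended global fraction; a stateful tf.keras.metrics.Metric subclass would be needed for an exact count.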

Can you give a simple example of adaptive stepsize scipy.integrate.LSODA function?

I need to understand the mechanism of the scipy.integrate.LSODA function.
I have written a test script that integrates a simple function. According to the LSODA documentation page, the inputs can be the RHS function, t_min, the initial y, and t_max. On the other hand, when I run the code, I get nothing. What should I do?
import scipy.integrate as integ
import numpy as np
def func(t, y):
    return t
y0=np.array([1])
t_min=1
t_max=10
N_max=100
t_min2=np.linspace(t_min,t_max,N_max)
first_step=0.01
solution=integ.LSODA(func,t_min,y0,t_max)
solution2=integ.odeint(func,y0,t_min2)
print(solution.t,solution.y,solution.nfev,'\n')
print(solution2)
The solution gives
1 [ 1.] 0
[[ 1.00000000e+00]
[ 9.48773662e+00]
[ 9.00171421e+01]
[ 8.54058901e+02]
[ 8.10308559e+03]]
1.) You only instantiate the LSODA solver class; no computation occurs, just initialization of the arrays with the initial data. To get an odeint-like interface, use solve_ivp with the option method='LSODA'.
2.) Without the option tfirst=True, the LSODA solver will solve y'(t)=t, while odeint will solve y'(t)=y(t).
To get comparable results, one should also equalize the tolerances, as the defaults can be different. One can thus call the methods like
print "LSODA"
solution=integ.solve_ivp(func,[t_min, t_max],y0,method='LSODA', atol=1e-4, rtol=1e-6)
print "odeint"
solution2=integ.odeint(func,y0,t_min2, tfirst=True, atol=1e-4, rtol=1e-6)
Even then you get no information on the internal steps of odeint; even though the FORTRAN code has an option for that, the Python wrapper does not expose it. You could add a print statement to the ODE function func (see the sketch below) so that you see at what points this function is actually called; this should average to about 2 calls with close-by arguments per internal step.
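A minimal sketch of such an instrumented right-hand side (the extra print is the only change to the question's func):
def func(t, y):
    print(t, y)  # log every point where the solver evaluates the RHS
    return t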
This reports
LSODA
1.0 [1.]
1.00995134265 [1.00995134]
1.00995134265 [1.01005037]
1.01990268529 [1.02010074]
1.01990268529 [1.02019977]
10.0 [50.50009903]
10.0 [50.50009903]
odeint
1.0 [1.]
1.00109084546 [1.00109085]
1.00109084546 [1.00109204]
1.00218169092 [1.00218407]
1.00218169092 [1.00218526]
11.9106363102 [71.43162985]
where the reported steps in the output of LSODA are
[ 1. 1.00995134 1.01990269 10. ] [[ 1. 1.01005037 1.02019977 50.50009903]] 7
Of course, a high-order method will integrate the linear polynomial y'=t to the quadratic polynomial y(t)=0.5*(t^2+1) with essentially no error independent of the step size.
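For reference, the exact solution follows from a one-line integration, and it matches the reported final value of about 50.5 at t = 10:
y'(t) = t, y(1) = 1  =>  y(t) = 1 + integral from 1 to t of s ds = 0.5*(t^2 + 1), so y(10) = 50.5.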

Class attributes in Python objects "erasing" themselves

I'm working with some old Python 2 code from a differential power analysis (DPA) contest from way back (http://www.dpacontest.org/home/index.html). The code I'm modifying can be found here: https://svn.comelec.enst.fr/dpacontest/code/reference/
Current issue that is driving me nuts:
Essentially, some Python objects in this code will "erase" themselves or revert to their default values, and I have no idea why. I cannot find any code in the logic flow that does this, and I don't think it's a scope issue.
I've tried rewriting parts of the code to use numpy instead of mapping lambda functions. I've also tried a myriad of different orderings of the code, and pulling methods out of their classes to run them locally/inline.
main.py:
loc_subkeys = brk.get_subkeys()
des_breaker.py
def get_subkeys(self):
    """
    Returns a vector of currently best sboxes subkeys.
    This is an array of 8 integers.
    """
    sk = np.array([])
    for i in range(8):
        sk = np.append(sk, self.__sbox_breakers[i].get_key())
    return sk
sbox_breaker.py
def get_key(self):
    "Gives the current best key"
    if self.__best_key is None:
        marks = np.array([])
        print("p0: ", len(list(self.__key_estimators[0]._key_estimator__p0)))
        print("p1: ", len(list(self.__key_estimators[0]._key_estimator__p1)))
        print("p0: ", len(list(self.__key_estimators[0]._key_estimator__p0)))
        print("p1: ", len(list(self.__key_estimators[0]._key_estimator__p1)))
        for i in range(64):
            ke = self.__key_estimators[i]
            marks = np.append(marks, ke.get_mark())
        self.__best_key = np.argmax(marks)
    return self.__best_key
key_estimator.py - attributes
class key_estimator:
    """
    Provides methods to give a mark to the key relative to the probability of
    the correctness of the key.
    """
    __sbox = None
    __key = None
    __cnt0 = 0             # The accumulated traces count in partition 0
    __cnt1 = 0             # The accumulated traces count in partition 1
    __p0 = None            # The bit=0 estimated partition
    __p1 = None            # The bit=1 estimated partition
    __diff = np.array([])  # The differential trace
Print statements in sbox_breaker are mine. Their output is the only clue I have right now:
p0: 5003 (Good)
p1: 5003 (Good)
p0: 0 (???)
p1: 0
What gives? The second time around, the attributes of the key_estimator class seem to have erased themselves. This happens to all the attributes, not just p0 and p1.
The first loop through this program works, but on the second iteration (starting from main) it fails because the attributes have erased themselves. I can "erase" them manually just by printing the object's attributes.
So I seem to have fixed the problem after sleeping on it. The class attributes were being created by map, which returns a list in Python 2 but a lazy iterator in Python 3. Making them into lists with list() solves the persistence issue. I couldn't tell you why printing a map attribute causes it to clear itself, though.
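For what it's worth, a short illustration (not from the original post) of why the printing itself empties the attribute: in Python 3, map returns a one-shot iterator, so consuming it once (for example via list() inside a print) leaves nothing for the next consumer.
p0 = map(lambda x: x * 2, [1, 2, 3])
print(len(list(p0)))  # 3 -- list() consumes the iterator
print(len(list(p0)))  # 0 -- the iterator is now exhausted

p0 = list(map(lambda x: x * 2, [1, 2, 3]))  # materialize once
print(len(p0), len(p0))  # 3 3 -- a list can be iterated repeatedly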

Finding the top three relevant category and its corresponding probabilities

From the script below, I find the highest probability and its corresponding category in a multi-class text classification problem. How do I find the top 3 predicted probabilities and their corresponding categories in an efficient way, without using loops?
probabilities = classifier.predict_proba(X_test)
max_probabilities = probabilities.max(axis=1)
order = np.argsort(probabilities, axis=1)
classification = classifier.classes_[order[:, -1:]]
print(accuracy_score(classification,y_test))
Thanks in advance.
(I have around 50 categories; I want to extract the 3 most relevant categories among the 50 for each of my narrations and display them in a dataframe.)
You've done most of the hard work here; you're just missing a bit of numpy know-how to finish it off. Your line
order = np.argsort(probabilities, axis=1)
contains the indices of the sorted probabilities, i.e. [[lowest_prob_class_1, ..., highest_prob_class_1], ...] for each of your samples, which you have used to produce your classification with order[:, -1:], i.e. the index of the highest-probability class. So to get the top three classes we can just make a simple change:
top_3_classes = classifier.classes_[order[:, -3:]]
Then to get the corresponding probabilities we can use
top_3_probabilities = probabilities[np.repeat(np.arange(order.shape[0]), 3),
                                    order[:, -3:].flatten()].reshape(order.shape[0], 3)
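Since the question also asks to display the result in a dataframe, here is a possible sketch (not from the original answer) that packs the top 3 classes and probabilities into a pandas DataFrame; it uses np.take_along_axis as an equivalent to the fancy indexing above, and classifier and X_test are the question's own names:
import numpy as np
import pandas as pd

probabilities = classifier.predict_proba(X_test)
order = np.argsort(probabilities, axis=1)

top_3_classes = classifier.classes_[order[:, -3:]][:, ::-1]                              # best class first
top_3_probabilities = np.take_along_axis(probabilities, order[:, -3:], axis=1)[:, ::-1]  # matching probabilities

df = pd.DataFrame({
    'class_1': top_3_classes[:, 0], 'prob_1': top_3_probabilities[:, 0],
    'class_2': top_3_classes[:, 1], 'prob_2': top_3_probabilities[:, 1],
    'class_3': top_3_classes[:, 2], 'prob_3': top_3_probabilities[:, 2],
})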
