Spark: regression model threshold and precision - apache-spark

I have a logistic regression model where I explicitly set the threshold to 0.5.
model.setThreshold(0.5)
I train the model and then I want to get basic stats -- precision, recall etc.
This is what I do when I evaluate the model:
val metrics = new BinaryClassificationMetrics(predictionAndLabels)
val precision = metrics.precisionByThreshold
precision.foreach { case (t, p) =>
println(s"Threshold is: $t, Precision is: $p")
}
I get results with only 0.0 and 1.0 as threshold values; 0.5 is completely ignored.
Here is the output of the above loop:
Threshold is: 1.0, Precision is: 0.8571428571428571
Threshold is: 0.0, Precision is: 0.3005181347150259
When I call metrics.thresholds() it also returns only two values, 0.0 and 1.0.
How do I get the precision and recall values with threshold as 0.5?

You need to clear the model threshold before you make predictions. Clearing the threshold makes your predictions return a score rather than the classified label. Otherwise you will only have two thresholds, i.e. your labels 0.0 and 1.0.
model.clearThreshold()
A tuple from predictionAndLabels should then look like (0.6753421, 1.0) and not (1.0, 1.0).
Take a look at https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/mllib/BinaryClassificationMetricsExample.scala
You probably still want to set numBins to control the number of points if the input is large.
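For illustration, here is a rough PySpark equivalent of that flow (the question's code is Scala; the training and test RDDs here are assumptions, not from the question). The key point is that after clearThreshold() the model's predict() returns a score, so the (score, label) pairs fed to BinaryClassificationMetrics carry many distinct thresholds rather than just 0.0 and 1.0.
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.evaluation import BinaryClassificationMetrics

# 'training' and 'test' are assumed RDDs of LabeledPoint.
model = LogisticRegressionWithLBFGS.train(training)
model.clearThreshold()  # predict() now returns a score instead of a 0.0/1.0 label

# Each pair should look like (0.6753421, 1.0), not (1.0, 1.0).
scoreAndLabels = test.map(lambda p: (float(model.predict(p.features)), p.label))

metrics = BinaryClassificationMetrics(scoreAndLabels)
print(metrics.areaUnderPR)   # the Python wrapper exposes the AUC summaries;
print(metrics.areaUnderROC)  # the by-threshold curves used in the question are in the Scala API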

I think what happens is that all the predictions are 0.0 or 1.0. Then the intermediate threshold values make no difference.
Consider the numBins argument of BinaryClassificationMetrics:
numBins:
if greater than 0, then the curves (ROC curve, PR curve) computed internally will be down-sampled to this many "bins". If 0, no down-sampling will occur. This is useful because the curve contains a point for each distinct score in the input, and this could be as large as the input itself -- millions of points or more, when thousands may be entirely sufficient to summarize the curve. After down-sampling, the curves will instead be made of approximately numBins points instead. Points are made from bins of equal numbers of consecutive points. The size of each bin is floor(scoreAndLabels.count() / numBins), which means the resulting number of bins may not exactly equal numBins. The last bin in each partition may be smaller as a result, meaning there may be an extra sample at partition boundaries.
So if you don't set numBins, then precision will be calculated at all the different prediction values. In your case this seems to be just 0.0 and 1.0.

First, try adding more bins like this (here numBins is 10):
val metrics = new BinaryClassificationMetrics(probabilitiesAndLabels, 10)
If you still only have the two thresholds 0 and 1, then check how you have defined your predictionAndLabels. You may be having this problem if you have accidentally provided (label, prediction) instead of (prediction, label).

Related

What to pass as threshold for Naive Bayes Classifier in Pyspark?

I'm trying to make a ROC curve for my model while using a Naive Bayes Classifier. To do this, I need to change the value of the threshold for my classifier. The way I interpreted it, a list must be passed with the value of the threshold for each category. So if I had two categories, and t is the threshold I want to set (0 <= t <= 1), then I would have to pass a list like this: [1-t, t].
Anyway, when I tried plotting the ROC curve, the result suggested that the idea I had for the threshold might have been wrong, so I went to check the documentation for the Naive Bayes Classifier. But when I finally found an example, I didn't get what the criteria for the parameter were:
nb = nb.setThresholds([0.01, 10.00])
Does anyone know what must be passed as the threshold? Suppose I want the threshold to be set at 0.7 (if the probability is over 0.7 I want the prediction to be 1); what should I pass to the thresholds parameter?
As it says in pyspark.ml's documentation for NaiveBayes under the thresholds parameter:
The class with largest value p/t is predicted, where p is the original
probability of that class and t is the class's threshold.
Therefore, the thresholds can be thought of as handicaps on the probabilities. To keep it simple, in the case of binary classification you can set the thresholds to values in the range [0, 1] that sum to 1. This will get you the desired rule of "classify as True if the probability is over threshold T, otherwise classify as False".
For your specific ask of a 0.7 probability threshold, this would look like:
nb = nb.setThresholds([0.3, 0.7])
assuming that the first entry is the threshold for False and the second value is the threshold for True. Using these thresholds, the model would classify an observation with False and True probabilities p_false and p_true by taking the greater value out of [p_false/0.3, p_true/0.7].
You can technically set the thresholds to any value. Just remember that the probability for class X will be divided by its respective threshold and compared against the other adjusted probabilities for the other classes.
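To make this concrete, here is a minimal sketch (the DataFrame names train_df and test_df are assumptions, not from the question); the comment works through why [0.3, 0.7] reproduces the 0.7 rule.
from pyspark.ml.classification import NaiveBayes

nb = NaiveBayes(featuresCol="features", labelCol="label")
# With thresholds [0.3, 0.7] the model predicts class 1 only when
# p1/0.7 > p0/0.3, i.e. 0.3*p1 > 0.7*(1 - p1), i.e. p1 > 0.7.
nb = nb.setThresholds([0.3, 0.7])

model = nb.fit(train_df)               # train_df: assumed DataFrame with 'features' and 'label'
predictions = model.transform(test_df)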

How do I prevent minimize (via SCIPY) from outputting "optimized" parameters that I have input as guesses?

I am trying to use the minimize function from the scipy module. The full code is too lengthy to post, but the main idea is that there are multiple defined distributions that should be fittable against datasets. The observations per bin are easily calculated from the datasets, whereas the expectations per bin are calculated by a function that uses one argument to specify which distribution should be integrated over bin bounds (where the bin bounds are identical to the histogram bins). There are three functions chisqI where I = 1,2,3 (one for each distribution), each of which inputs specified observations per bin and expectations per bin to output the chi square. Then there are three functions, each of which inputs a chisqI and args to output the minimized function result and optimized parameters. Here, the args are parameters mu and sigma that will be optimized to produce the smallest chi-square. I was able to pass arguments through a chain of functions for one distribution, and am wondering if I need to pass through another arg that specifies which distribution is being dealt with from one function down the chain.
There are different methods that the minimize function can use, like Nelder-Mead or CG. I've been trying to compare results from the different methods to find the one that provides the best fit, where the best fit is defined as the one that produces the smallest chi-square, or equivalently the largest p-value, when compared to the actual dataset. Interestingly enough, the Nelder-Mead and Powell methods produce the lowest chi-square relative to the other methods, but the plotted fit against the histogram of the actual data looks better with other methods. For the code outputs below, the function value is the negative of the p-value associated with the chi-square value; this is the quantity being minimized. CHISQ_RED is the reduced chi-square value computed from CHISQ_TOT and the degrees of freedom, while the first and second elements of the x: array are the optimized parameters mu and sigma for the distribution, respectively.
Running the Nelder-Mead minimization method produces the output below.
final_simplex: (array([[ 6.00002802, 0.60020636],
[ 5.99995429, 0.60018798],
[ 6.0000716 , 0.60011127]]), array([ -5.16845821e-21, -5.16838926e-21, -5.16815050e-21]))
fun: -5.1684582072826815e-21
message: 'Optimization terminated successfully.'
nfev: 47
nit: 24
status: 0
success: True
x: array([ 6.00002802, 0.60020636])
CHISQ_TOT = 259.042420419 CHISQ_RED = 3.36418727816
Running the CG minimization method produces the output below.
fun: -4.0964504680695594e-97
jac: array([ 8.72867710e-94, -3.96555507e-93])
message: 'Optimization terminated successfully.'
nfev: 4
nit: 0
njev: 1
status: 0
success: True
x: array([ 6.01921293, 0.54436257])
CHISQ_TOT = 683.781671477 CHISQ_RED = 8.88028144776
Yet, the fit with a higher chi square value looks like a better fit (same dataset in the histogram).
The problem is that every minimization method outputs my guess parameters (mu and sigma) as the optimized parameters. The Nelder-Mead method (smaller chi-square, worse-looking fit) has 47 function evaluations and 24 iterations, whereas the CG method (larger chi-square, better-looking fit) has 4 function evaluations and 0 iterations. I tried to change this by adding extra options to the minimization call (where chisq3 is the pre-defined function of mu and sigma being minimized, and parameterguess is [mu_guess, sigma_guess]):
minimize( chisq3 , parameterguess , method = 'CG', options={'gtol':1e-50, 'maxiter': 100})
If I change my guess value of mu and sigma by adding 2 to each, then the fits become drastically worse (as the guess value for the optimized parameters is rather decent). I'm not sure if it's relevant, but the data shown in the plots are adapted from a lognormal distribution by taking the logarithm of each value in my dataset to create a "pseudo-" Gaussian shape/distribution (over logarithmic x axes).
I am guessing that the minimize function via scipy is supposed to do many iterations to be truly successful. So I think adding more iterations should decrease the sensitivity of the minimize function to my initial guess of parameters.
Most importantly, is this a common error using the minimize function via scipy? If so, what are some common fixes for this? Also, why would the minimize function do many iterations and function evaluations only to produce the same result as the input?
The problem was that the chi-square is calculated as the sum over bins of the squared difference between expected and observed values, divided by the expected value. Each term was a small number divided by a large number, squared, and these were summed thousands of times, contributing to division-by-zero problems and round-off errors. By minimizing a simpler function, such as the chi-square without the denominator term, the source of the bug goes away, and one can then calculate a chi-square from the obtained parameter fit.
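A minimal sketch of that idea, using assumed toy data (log-transformed values forming a roughly Gaussian shape, as the asker describes): minimize the plain sum of squared residuals, then evaluate the chi-square once at the fitted parameters.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy stand-in for the asker's binned data (all values here are assumptions).
rng = np.random.default_rng(0)
logged_data = rng.normal(6.0, 0.6, size=5000)     # "pseudo-Gaussian" after taking logs
observed, edges = np.histogram(logged_data, bins=40)

def expected_counts(mu, sigma):
    # Expected counts per bin: CDF differences across bin edges times sample size.
    return len(logged_data) * np.diff(norm.cdf(edges, loc=mu, scale=sigma))

def sum_sq(params):
    mu, sigma = params
    # Chi-square without the denominator term: just squared residuals.
    return np.sum((observed - expected_counts(mu, sigma)) ** 2)

result = minimize(sum_sq, x0=[6.0, 0.6], method='Nelder-Mead')
mu_fit, sigma_fit = result.x

# Compute the chi-square once, after the fit, guarding against empty bins.
exp_fit = expected_counts(mu_fit, sigma_fit)
mask = exp_fit > 0
chisq = np.sum((observed[mask] - exp_fit[mask]) ** 2 / exp_fit[mask])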

spark ml 2.0 - Naive Bayes - how to determine threshold values for each class

I am using NB for document classification and am trying to understand the thresholds parameter to see how it can help optimize the algorithm.
Spark ML 2.0 thresholds doc says:
Param for Thresholds in multi-class classification to adjust the probability of predicting each class. Array must have length equal to the number of classes, with values >= 0. The class with largest value p/t is predicted, where p is the original probability of that class and t is the class' threshold.
0) Can someone explain this better? What goal can it achieve? My general idea is: if you have a threshold of 0.7, then at least one class's prediction probability should be more than 0.7; if not, the prediction should be returned empty, meaning it is classified as 'uncertain' or the prediction column is just left empty. How is the p/t rule going to achieve that when you still pick the category with the maximum probability?
1) Which probability does it adjust? The default column 'probability' is actually the conditional probability and 'rawPrediction' is the confidence, according to the documentation. I believe the threshold will adjust 'rawPrediction', not the 'probability' column. Am I right?
2) Here's what some of my probability and rawPrediction vectors look like. How do I set threshold values based on this so I can remove uncertain classifications? The probabilities are between 0 and 1, but rawPrediction seems to be on a log scale here.
Probability:
[2.233368649314982E-15,1.6429456680945863E-9,1.4377313514127723E-15,7.858651849363202E-15]
rawPrediction:
[-496.9606736723107,-483.452183395287,-497.40111830218746]
Basically I want the classifier to leave the prediction column empty if it doesn't have any class probability that is more than 0.7.
Also, how do I classify something as uncertain when more than one category has very close scores, e.g. 0.812, 0.800, 0.799? Picking the max is something I may not want here; instead I would classify it as "uncertain" or leave it empty, and I can do further analysis and treatment of those documents, or train another model for them.
I haven't played with it, but the intent is to supply different threshold values for each class. I've extracted this example from the docstring:
>>> model = nb.fit(df)
>>> result = model.transform(test0).head()
>>> result.prediction
1.0
>>> result.probability
DenseVector([0.42..., 0.57...])
>>> result.rawPrediction
DenseVector([-1.60..., -1.32...])
>>> nb = nb.setThresholds([0.01, 10.00])
>>> model3 = nb.fit(df)
>>> result = model3.transform(test0).head()
>>> result.prediction
0.0
If I understand correctly, the effect was to transform [0.42, 0.58] into [.42/.01, .58/10] = [42, 0.058], switching the prediction ("largest p/t") from column 1 (third row above) to column 0 (last row above). However, I couldn't find the logic in the source. Anyone?
Stepping back: I do not see a built-in way to do what you want: be agnostic if no class dominates. You will have to add that with something like:
import numpy as np

def weak(probs, threshold=.7, epsilon=.01):
    probs = np.asarray(probs)
    return np.all(probs < threshold) or np.max(np.diff(probs)) < epsilon

>>> cases = [[.5, .5], [.5, .7], [.7, .705], [.6, .1]]
>>> for case in cases:
...     print('{} - {}'.format(case, weak(case)))
[0.5, 0.5] - True
[0.5, 0.7] - False
[0.7, 0.705] - True
[0.6, 0.1] - True
(Notice I haven't checked whether probs is a legal probability distribution.)
Alternatively, if you are not actually making a hard decision, use the predicted probabilities and a metric like Brier score, log loss, or info gain that accounts for the calibration as well as the accuracy.
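If you do go the hard-decision route, here is a hypothetical sketch of wiring this kind of check into the Spark output (the column names "probability" and "prediction" are the pyspark.ml defaults; the predictions DataFrame and the cutoff values are assumptions):
from pyspark.sql import functions as F
from pyspark.sql.types import BooleanType

@F.udf(returnType=BooleanType())
def is_uncertain(probability, threshold=0.7, epsilon=0.01):
    probs = sorted(probability.toArray())           # works for dense and sparse ML vectors
    no_dominant = probs[-1] < threshold             # no class reaches the threshold
    too_close = (probs[-1] - probs[-2]) < epsilon   # top two classes nearly tied
    return bool(no_dominant or too_close)

flagged = predictions.withColumn("uncertain", is_uncertain("probability"))
# Null out the prediction for uncertain rows instead of keeping the argmax class.
flagged = flagged.withColumn(
    "prediction",
    F.when(F.col("uncertain"), F.lit(None)).otherwise(F.col("prediction")))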

Expectation Maximization algorithm (Gaussian Mixture Model): ValueError: the input matrix must be positive semidefinite

I am trying to implement the Expectation Maximization algorithm (Gaussian Mixture Model) on a data set data=[[x,y],...]. I am using the mv_norm.pdf(data, mean, cov) function to calculate cluster responsibilities. But after calculating new values of the covariance (cov matrix), after 6-7 iterations the cov matrix becomes singular, i.e. the determinant of cov is 0 (a very small value), and hence it gives the errors
ValueError: the input matrix must be positive semidefinite
and
raise np.linalg.LinAlgError('singular matrix')
Can someone suggest any solution for this?
import copy
from scipy.stats import multivariate_normal as mv_norm  # assumed imports used by this snippet

#E-step: Compute cluster responsibilities, given cluster parameters
def calculate_cluster_responsibility(data, centroids, cov_m):
    pdfmain = [[] for i in range(0, len(data))]
    for i in range(0, len(data)):
        sum1 = 0
        pdfeach = [[] for m in range(0, len(centroids))]
        pdfeach[0] = 1/3. * mv_norm.pdf(data[i], mean=centroids[0], cov=[[cov_m[0][0][0], cov_m[0][0][1]], [cov_m[0][1][0], cov_m[0][1][1]]])
        pdfeach[1] = 1/3. * mv_norm.pdf(data[i], mean=centroids[1], cov=[[cov_m[1][0][0], cov_m[1][0][1]], [cov_m[1][1][0], cov_m[0][1][1]]])
        pdfeach[2] = 1/3. * mv_norm.pdf(data[i], mean=centroids[2], cov=[[cov_m[2][0][0], cov_m[2][0][1]], [cov_m[2][1][0], cov_m[2][1][1]]])
        sum1 += pdfeach[0] + pdfeach[1] + pdfeach[2]
        pdfeach[:] = [x / sum1 for x in pdfeach]
        pdfmain[i] = pdfeach
    global old_pdfmain
    if old_pdfmain == pdfmain:
        return
    old_pdfmain = copy.deepcopy(pdfmain)
    softcounts = [sum(i) for i in zip(*pdfmain)]
    calculate_cluster_weights(data, centroids, pdfmain, softcounts)
Initially, I've passed [[3,0],[0,3]] for each cluster covariance since expected number of clusters is 3.
Can someone suggest any solution for this?
The problem is that your data lies on some manifold of dimension strictly smaller than the dimension of the input data. For example, your data might lie on a circle while you have 3-dimensional data. As a consequence, when your method tries to estimate a 3-dimensional ellipsoid (covariance matrix) that fits your data, it fails, since the optimal one is a 2-dimensional ellipse (the third dimension is 0).
How to fix it? You will need some regularization of your covariance estimator. There are many possible solutions, all in the M step, not the E step, since the problem is with computing the covariance (a sketch of both options follows):
Simple solution: instead of doing something like cov = np.cov(X), add a regularizing term, like cov = np.cov(X) + eps * np.identity(X.shape[1]) with a small eps.
Use a nicer estimator, like the LedoitWolf estimator from scikit-learn.
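A minimal sketch of both options (the eps value and data shape are assumptions; note that np.cov takes rowvar=False when rows are samples):
import numpy as np
from sklearn.covariance import LedoitWolf

X = np.random.randn(200, 2)   # stand-in for the points (rows) assigned to one cluster

# Option 1: diagonal loading keeps the estimate positive definite.
eps = 1e-6
cov_reg = np.cov(X, rowvar=False) + eps * np.identity(X.shape[1])

# Option 2: a shrinkage estimator.
cov_lw = LedoitWolf().fit(X).covariance_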
Initially, I've passed [[3,0],[0,3]] for each cluster covariance since expected number of clusters is 3.
This makes no sense; the covariance matrix values have nothing to do with the number of clusters. You can initialize it with anything more or less reasonable.

What does `sample_weight` do to the way a `DecisionTreeClassifier` works in sklearn?

I've read from the relevant documentation that :
Class balancing can be done by sampling an equal number of samples from each class, or preferably by normalizing the sum of the sample weights (sample_weight) for each class to the same value.
But, it is still unclear to me how this works. If I set sample_weight with an array of only two possible values, 1's and 2's, does this mean that the samples with 2's will get sampled twice as often as the samples with 1's when doing the bagging? I cannot think of a practical example for this.
Some quick preliminaries:
Let's say we have a classification problem with K classes. In a region of feature space represented by the node of a decision tree, recall that the "impurity" of the region is measured by quantifying the inhomogeneity, using the probability of the class in that region. Normally, we estimate:
Pr(Class=k) = #(examples of class k in region) / #(total examples in region)
The impurity measure takes as input, the array of class probabilities:
[Pr(Class=1), Pr(Class=2), ..., Pr(Class=K)]
and spits out a number, which tells you how "impure" or how inhomogeneous-by-class the region of feature space is. For example, the gini measure for a two class problem is 2*p*(1-p), where p = Pr(Class=1) and 1-p=Pr(Class=2).
Now, basically the short answer to your question is:
sample_weight augments the probability estimates in the probability array ... which augments the impurity measure ... which augments how nodes are split ... which augments how the tree is built ... which augments how feature space is diced up for classification.
I believe this is best illustrated through example.
First consider the following 2-class problem where the inputs are 1 dimensional:
from sklearn.tree import DecisionTreeClassifier as DTC
X = [[0],[1],[2]] # 3 simple training examples
Y = [ 1, 2, 1 ] # class labels
dtc = DTC(max_depth=1)
So, we'll look at trees with just a root node and two children. Note that the default impurity measure is the gini measure.
Case 1: no sample_weight
dtc.fit(X,Y)
print(dtc.tree_.threshold)
# [0.5, -2, -2]
print(dtc.tree_.impurity)
# [0.44444444, 0, 0.5]
The first value in the threshold array tells us that the 1st training example is sent to the left child node, and the 2nd and 3rd training examples are sent to the right child node. The last two values in threshold are placeholders and are to be ignored. The impurity array tells us the computed impurity values in the parent, left, and right nodes respectively.
In the parent node, p = Pr(Class=1) = 2./3., so that gini = 2*(2.0/3.0)*(1.0/3.0) = 0.444.... You can confirm the child node impurities as well.
Case 2: with sample_weight
Now, let's try:
dtc.fit(X,Y,sample_weight=[1,2,3])
print(dtc.tree_.threshold)
# [1.5, -2, -2]
print(dtc.tree_.impurity)
# [0.44444444, 0.44444444, 0.]
You can see the feature threshold is different. sample_weight also affects the impurity measure in each node. Specifically, in the probability estimates, the first training example is counted the same, the second is counted double, and the third is counted triple, due to the sample weights we've provided.
The impurity in the parent node region is the same. This is just a coincidence. We can compute it directly:
p = Pr(Class=1) = (1+3) / (1+2+3) = 2.0/3.0
The gini measure of 4/9 follows.
Now, you can see from the chosen threshold that the first and second training examples are sent to the left child node, while the third is sent to the right. We see that impurity is calculated to be 4/9 also in the left child node because:
p = Pr(Class=1) = 1 / (1+2) = 1/3.
The impurity of zero in the right child is due to only one training example lying in that region.
You can extend this to non-integer sample weights similarly. I recommend trying something like sample_weight = [1,2,2.5] and confirming the computed impurities.
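For instance, a small check along those lines (the numbers in the comments are hand-computed and worth verifying against the actual output):
from sklearn.tree import DecisionTreeClassifier as DTC

X = [[0], [1], [2]]
Y = [1, 2, 1]

dtc = DTC(max_depth=1)
dtc.fit(X, Y, sample_weight=[1, 2, 2.5])

# With these weights, class 1 carries weight 1 + 2.5 = 3.5 and class 2 carries 2,
# out of a total weight of 5.5.
print(dtc.tree_.threshold)  # the split should again fall at 1.5, as in Case 2
print(dtc.tree_.impurity)   # parent:      2*(3.5/5.5)*(2/5.5) ~= 0.463
                            # left child:  2*(1/3)*(2/3)       ~= 0.444 (weights 1 and 2)
                            # right child: 0.0 (only class 1 in that region)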
