I am having trouble finding the documentation I need on this. To summarize the issue: I have trained a tf.keras model on two classes of images, labeled '0' or '1'. I now want to use this model to predict whether new images are a '0' or a '1'. My question is as follows: model.predict() returns a number between 0 and 1, but I can't seem to find what exactly this is. Is it correct to say that this is its prediction (i.e., closer to 1 means the image is likely a 1, and closer to 0 means the image is likely a 0)? Or is there something else going on here? I have included the code, and some output, below. In this case, is pred the probability the image is a 1, and 1 - pred the probability the image is a 0?
Thanks for any and all help.
import tensorflow as tf

for img_path in test_filenames:
    img = tf.keras.preprocessing.image.load_img(img_path, target_size=(IMAGE_SIZE, IMAGE_SIZE))
    img_array = tf.keras.preprocessing.image.img_to_array(img)
    img_array = tf.expand_dims(img_array, 0)  # add a batch dimension
    pred = model.predict(img_array)
    print(pred)
Returns
[[0.8361757]]
[[0.26765466]]
[[0.2722953]]
[[0.81938094]]
[[0.24995388]]
[[0.45974937]]
is pred the probability the image is a 1, and 1 - pred the probability the image is a 0?
Yes, that is correct. If you want the hard class label (i.e., 0 or 1), you can threshold the output. 0.5 is a common threshold, but I have also seen 0.3 used; this is something you can tune.
pred = model.predict(img_array)
classes = pred > 0.5
The predictions are between 0 and 1 most likely because the last activation of the model is a sigmoid function.
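As a minimal sketch of that interpretation, reusing model and img_array from the question (and assuming the model ends in a single sigmoid output unit):
pred = model.predict(img_array)   # shape (1, 1)
prob_1 = float(pred[0][0])        # probability the image is a '1'
prob_0 = 1.0 - prob_1             # probability the image is a '0'
hard_label = int(prob_1 > 0.5)    # 0 or 1 at a 0.5 threshold
print(prob_1, prob_0, hard_label)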
Related
I have a binary classification problem for detecting AO/Non-AO images, using PyTorch for this purpose.
First, I load the data using the ImageFolder utility.
The class-to-index mapping in dataset.class_to_idx is {'AO': 0, 'Non-AO': 1}.
So, my 'positive class' AO is assigned the label 0, and my 'negative class' Non-AO is assigned the label 1.
Then I train and validate the model without any issues.
When I come to testing, I need to calculate some metrics on the test data.
Here is where I am confused.
[Method A]
from sklearn.metrics import roc_curve, auc

fpr, tpr, thresholds = roc_curve(y_true, y_score)
roc_auc = auc(fpr, tpr)
[Method B]
# because 0 is my actual 'positive' class for this problem
fpr, tpr, thresholds = roc_curve(y_true, y_score, pos_label=0)
roc_auc = auc(fpr, tpr)
Now, this second curve is basically the mirror of the first one along the diagonal, right?
And I think it can't be the correct curve, because I checked the accuracy of the model by directly comparing y_true and y_pred and got the following accuracies.
Apart from this, here is what my confusion matrix looks like.
So, my first question is: am I doing something wrong? What is the significance of the curve from Method B? Can I say that Method A gives me the correct ROC curve for my classification task? If not, then how do I proceed to get the correct curve?
What do true positive, true negative, or any of the other terms signify for my confusion matrix? Does the matrix consider 0 : AO as negative and 1 : Non-AO as positive (I think so, yes), or vice versa?
If 0 is indeed being considered as negative, when I actually want 0 to be considered as positive, how can I make changes to reflect that in the matrix (because I am using the matrix later to calculate other metrics like specificity, sensitivity, etc.)?
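For reference, a hedged sketch of how scikit-learn's pos_label and confusion-matrix ordering interact; y_true, y_pred, and y_score are assumed to be the arrays from the question, with y_score the predicted score of class 1:
import numpy as np
from sklearn.metrics import roc_curve, auc, confusion_matrix

# If class 0 ('AO') is the positive class, pass pos_label=0 together with the
# score of class 0 (e.g. 1 - score of class 1); passing the class-1 score with
# pos_label=0 is what produces the mirrored curve.
y_score_ao = 1 - np.asarray(y_score)
fpr, tpr, thresholds = roc_curve(y_true, y_score_ao, pos_label=0)
roc_auc = auc(fpr, tpr)

# By default confusion_matrix sorts the labels as [0, 1], so the top-left cell
# counts samples that are truly 0 and predicted 0. To treat 0 ('AO') as the
# positive class, reorder the labels explicitly:
cm = confusion_matrix(y_true, y_pred, labels=[1, 0])
tn, fp, fn, tp = cm.ravel()  # tp now counts correctly predicted 'AO' images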
I trained my model in Keras for binary classification. I used a ResNet pretrained on ImageNet and I got 95% accuracy. In my dataset, I have 9004 images for training, divided into the two classes, and 2250 images for test, divided into the two classes. But the confusion matrix gives me
4502 0
4502 0
Can someone help me understand the meaning of this result?
The interpretation of your result is the following (please note that for simplicity the indices start from 1 and not 0):
To calculate the number of correct predictions on your test dataset, you sum the main diagonal of the matrix.
For the first class (class 1), all your predictions are correct.
This can be inferred from your confusion matrix, as the element at position [1,1] (first_row, first_col) is 4502. Since the element at position [1,2] is 0, every sample of class 1 is predicted correctly.
However, the second class has the value 0 at position [2,2], which means that none of your predictions for this class are correct.
Indeed, we can verify this: the 4502 at position [2,1] means that every sample of class 2 is predicted as class 1.
Notes:
You may have calculated the accuracy/confusion matrix on the wrong dataset. According to your description, 4502 * 2 = 9004, which means that the confusion matrix you are showing here is for the training set, not the test set.
Whenever you see a number off the main diagonal of the confusion matrix, it represents a case of FP (false positive) or FN (false negative).
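As a small sketch of the arithmetic above, using the matrix from the question (NumPy assumed, just to illustrate reading the matrix):
import numpy as np

cm = np.array([[4502, 0],
               [4502, 0]])

correct = np.trace(cm)      # sum of the main diagonal = 4502
total = cm.sum()            # 9004
accuracy = correct / total  # 0.5: every sample is predicted as class 1
print(accuracy)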
I have a loan data set, with shape (116058, 29), which is highly imbalanced. How can I improve the precision and recall scores? The target column is m13:
Counter({1: 636, 0: 115422})
I used the following to split the data into train and test sets:
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=100, stratify=y)
and then used an SVM for classification:
from sklearn.svm import SVC

svc = SVC(class_weight={1: 0.95, 0: 0.05}, kernel='rbf')
svc.fit(X_train, y_train)
y_pred = svc.predict(X_test)
I got a precision of 0.54 and a recall of 0.55.
I tried grid search as well, with different values of C and gamma, but the above code gave the best result.
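(For reference, a rough sketch of that kind of grid search; the parameter values below are illustrative, not the ones actually tried.)
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {'C': [0.1, 1, 10, 100], 'gamma': ['scale', 0.01, 0.1, 1]}
grid = GridSearchCV(
    SVC(class_weight={1: 0.95, 0: 0.05}, kernel='rbf'),
    param_grid,
    scoring='f1',  # score on the minority class rather than plain accuracy
    cv=5,
)
grid.fit(X_train, y_train)
y_pred = grid.best_estimator_.predict(X_test)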
Is there any way to improve both the precision and the recall score?
First of all, let me comment on the baseline of your prediction. If I understand you correctly, you have 636 samples of class 1 and 115422 of class 0.
Imagine you built a prediction model that always predicts class 0. If class 0 is your positive class, your precision would be:
115422 / (115422 + 636) = 0.9945
and your recall (if class 0 is your positive class) would be:
1
If class 1 is your positive class, your recall would be 0, and your precision would be undefined, because the model never predicts class 1 (scikit-learn reports it as 0).
As you can see, it is quite a task to tune this; there are whole books about the topic, and it will be very hard. But your target should be to predict class 1 correctly! The goal should be to identify every class 1 sample with your algorithm. For example, you could try to target your sensitivity (the recall of class 1); here are some goals to target: https://en.wikipedia.org/wiki/Precision_and_recall
What you definitely should do is make sure that your train and test sets both contain samples of class 1.
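A small sketch of the baseline arithmetic above, assuming a dummy model that always predicts class 0 (the arrays here are constructed just for illustration):
import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([0] * 115422 + [1] * 636)
y_pred = np.zeros_like(y_true)  # always predict class 0

# Treating class 0 as the positive class:
print(precision_score(y_true, y_pred, pos_label=0))  # 115422/116058 ≈ 0.9945
print(recall_score(y_true, y_pred, pos_label=0))     # 1.0

# Treating class 1 (the minority class you actually care about) as positive:
print(recall_score(y_true, y_pred, pos_label=1))     # 0.0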
I was using KNN from sklearn and predicted the labels using predict_proba. I was expecting values in the range 0 to 1, since it gives the probability for a particular class. But I am only getting 0 & 1.
I have tried large k values as well, but to no avail. I have only 1000 samples, with around 200 features, and the matrix is largely sparse.
Can anybody tell me what could be the solution here?
sklearn.neighbors.KNeighborsClassifier(n_neighbors=k)
The reason why you're getting only 0 & 1 is the n_neighbors=k parameter. If k is set to 1, you will only get 0 or 1. If it's set to 2, you will get 0, 0.5, or 1. And if it's set to 3, the probability outputs will be 0, 0.333, 0.666, or 1.
Also note that probability values are essentially meaningless in KNN. The algorithm is based on similarity and distance.
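A minimal sketch (synthetic data, illustrative names) showing that predict_proba from KNeighborsClassifier is just the fraction of the k neighbors that fall in each class, so with the default uniform weights and k=3 the only possible values are 0, 1/3, 2/3, and 1:
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X = rng.rand(100, 5)             # 100 samples, 5 features
y = (X[:, 0] > 0.5).astype(int)  # binary labels

knn = KNeighborsClassifier(n_neighbors=3)  # uniform weights by default
knn.fit(X, y)

print(knn.predict_proba(X[:5]))  # rows contain only 0.0, 0.333..., 0.666..., 1.0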
The reason might be a lack of variety in the training and test sets.
If the features of a sample exist only in one particular class, and do not appear in any sample of the other classes in the training set, then that sample will be predicted to belong to that class with a probability of 100% (1), and 0% (0) for the other classes.
Otherwise, say you have 2 classes and test a sample with knn.predict_proba(sample), expecting a result like [[0.47, 0.53]]; either way, the result sums to 1 in total.
If that's the case, try generating your own test sample that has features from objects of more than one class in the training set.
I am using NB for document classification and trying to understand the thresholds parameter, to see how it can help optimize the algorithm.
The Spark ML 2.0 thresholds doc says:
Param for Thresholds in multi-class classification to adjust the probability of predicting each class. Array must have length equal to the number of classes, with values >= 0. The class with largest value p/t is predicted, where p is the original probability of that class and t is the class' threshold.
0) Can someone explain this better? What goal can it achieve? My general idea is: if you have a threshold of 0.7, then at least one class's prediction probability should be more than 0.7; if not, the prediction should return empty, i.e., classify it as 'uncertain' or just leave the prediction column empty. How can the p/t rule achieve that when you still pick the category with the max value?
1) What probability does it adjust? The default column 'probability' is actually the conditional probability, and 'rawPrediction' is the confidence, according to the documentation. I believe the threshold will adjust 'rawPrediction' and not the 'probability' column. Am I right?
2) Here's what some of my probability and rawPrediction vectors look like. How do I set threshold values based on this so I can drop uncertain classifications? The probability is between 0 and 1, but rawPrediction seems to be on a log scale here.
Probability:
[2.233368649314982E-15,1.6429456680945863E-9,1.4377313514127723E-15,7.858651849363202E-15]
rawPrediction:
[-496.9606736723107,-483.452183395287,-497.40111830218746]
Basically, I want the classifier to leave the prediction column empty if it doesn't have any probability that is more than 0.7.
Also, how do I classify something as uncertain when more than one category has very close scores, e.g. 0.812, 0.800, 0.799? Picking the max is something I may not want here; instead I would classify it as 'uncertain' or leave it empty, so I can do further analysis and treatment on those documents, or train another model for them.
I haven't played with it, but the intent is to supply a different threshold value for each class. I've extracted this example from the docstring:
>>> model = nb.fit(df)
>>> result = model.transform(test0).head()
>>> result.prediction
1.0
>>> result.probability
DenseVector([0.42..., 0.57...])
>>> result.rawPrediction
DenseVector([-1.60..., -1.32...])
>>> nb = nb.setThresholds([0.01, 10.00])
>>> model3 = nb.fit(df)
>>> result = model3.transform(test0).head()
>>> result.prediction
0.0
If I understand correctly, the effect was to transform [0.42, 0.58] into [0.42/0.01, 0.58/10] = [42, 0.058], switching the prediction ("largest p/t") from column 1 (third row above) to column 0 (last row above). However, I couldn't find the logic in the source. Anyone?
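A quick numeric check of that "largest p/t" rule, using the probabilities and thresholds from the docstring example above:
p = [0.42, 0.58]                  # class probabilities
t = [0.01, 10.00]                 # per-class thresholds
ratios = [pi / ti for pi, ti in zip(p, t)]
print(ratios)                     # [42.0, 0.058]
print(ratios.index(max(ratios)))  # 0 -> the prediction switches to class 0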
Stepping back: I do not see a built-in way to do what you want: be agnostic if no class dominates. You will have to add that with something like:
import numpy as np

def weak(probs, threshold=.7, epsilon=.01):
    probs = np.asarray(probs)
    # "weak" if no class reaches the threshold, or the leading scores are too close
    return np.all(probs < threshold) or np.max(np.diff(probs)) < epsilon

>>> cases = [[.5, .5], [.5, .7], [.7, .705], [.6, .1]]
>>> for case in cases:
...     print('{!s:15s} - {}'.format(case, weak(case)))
[0.5, 0.5] - True
[0.5, 0.7] - False
[0.7, 0.705] - True
[0.6, 0.1] - True
(Notice I haven't checked whether probs is a legal probability distribution.)
Alternatively, if you are not actually making a hard decision, use the predicted probabilities and a metric like the Brier score, log loss, or information gain that accounts for calibration as well as accuracy.
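A small scikit-learn sketch of scoring predicted probabilities directly (the y_true and probs values here are illustrative placeholders, not data from the question):
from sklearn.metrics import brier_score_loss, log_loss

y_true = [1, 0, 1, 1]
probs = [0.9, 0.2, 0.65, 0.55]          # predicted P(class == 1) per document

print(brier_score_loss(y_true, probs))  # mean squared error of the probabilities
print(log_loss(y_true, probs))          # heavily penalizes confident wrong predictions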