Confusion matrix value not diagonal - conv-neural-network

I trained my model and got a test accuracy of 0.9311.
Training and Testing Accuracy Plot image
Training and Testing Loss Plot image
my confusion matrix is
Confusion Matrix image
My test set contains 2 classes with 225 images in each class. Although the accuracy is high, most of the confusion-matrix counts fall off the diagonal. Can anyone advise whether this is correct and, if not, how I can fix it?
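A common cause of a high reported accuracy paired with an off-diagonal confusion matrix is mixing encodings, e.g. comparing one-hot labels against integer predictions, or evaluating against a shuffled data generator. A hedged sketch of the usual sanity check, with synthetic scores standing in for the real model output (in practice `probs` would come from `model.predict(test_images)`):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# 450 one-hot test labels: 225 per class, matching the setup in the question.
test_labels = np.eye(2)[np.repeat([0, 1], 225)]
# Fake softmax-like scores; in practice: probs = model.predict(test_images)
probs = test_labels + rng.normal(0.0, 0.3, test_labels.shape)

true_classes = np.argmax(test_labels, axis=1)   # one-hot -> integer labels
pred_classes = np.argmax(probs, axis=1)         # scores  -> integer labels

cm = confusion_matrix(true_classes, pred_classes)
print(cm)  # a ~93%-accurate model puts most of the 450 counts on the diagonal
```

If both arguments are converted to the same integer encoding like this and the matrix is still off-diagonal, the next thing to check is whether the test images and test labels are aligned in the same order.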

Related

My confusion matrix shows 16x16 instead of 8x8

from sklearn.metrics import confusion_matrix
import seaborn as sns

cm = confusion_matrix(test_labels, prediction_RF)  # rows: true labels, columns: predictions
print(cm)
sns.heatmap(cm, annot=True)
I'm using a CNN as a feature extractor and then feed the features into a Random Forest. Previously I used the same procedure on a dummy CNN model, and the output confusion matrix was 8x8 (since I have 8 classes). When I try to see the confusion matrix on the VGG16 model, I get a 16x16 matrix, and the reported accuracy on VGG16 is 0.0, even though the predictions themselves look decent. The matrix I get on VGG16 is given below.
Matrix on VGG16
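One plausible explanation (an assumption, not confirmed by the post): scikit-learn's confusion_matrix builds the matrix over the union of all distinct label values in both arguments. If the ground truth and the Random Forest predictions use two disjoint encodings of the same 8 classes, that union contains 8 + 8 = 16 labels, giving a 16x16 matrix with an empty diagonal and 0.0 accuracy. A minimal synthetic illustration:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.tile(np.arange(8), 5)   # ground truth encoded as 0..7
y_pred = y_true + 8                 # same classes, hypothetically encoded 8..15

cm = confusion_matrix(y_true, y_pred)
print(cm.shape)          # (16, 16): union of the two encodings, empty diagonal

# Fix: map both arrays onto a single shared encoding before comparing.
cm_fixed = confusion_matrix(y_true, y_pred - 8)
print(cm_fixed.shape)    # (8, 8), perfectly diagonal in this toy example
```

In the VGG16 pipeline the equivalent check is to print the distinct values of `test_labels` and `prediction_RF` and confirm both come from the same label encoder.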

Multi-class segmentation in Keras

I'm trying to implement multi-class segmentation in Keras:
the input image is grayscale (i.e. 1 channel)
the ground truth image has 3 channels; each pixel is a one-hot vector of length 3
the prediction is a standard U-Net trained with categorical_crossentropy, outputting 3 (softmax-ed) channels
What is wrong with this setup? The training loss shows some weird behaviour:
in my lucky cases it behaves as expected (decreases)
90% of the time it is stuck at ~0.9
My implementation can be found here.
I don't think there is anything wrong with the code: if my ground truth is 1-channel (i.e. 0s everywhere and 1s somewhere) and I use binary_crossentropy + sigmoid as the final activation, I see no weird behaviour.
I'll answer my own question: the solution is to weight each class, i.e. to use a weighted cross-entropy loss.
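As a sketch of that idea, here is the per-pixel weighted cross-entropy written in NumPy. The class weights are hypothetical placeholders (in practice, set them roughly inversely proportional to class frequency), and the same arithmetic expressed with tf ops can be passed to model.compile as a custom Keras loss:

```python
import numpy as np

CLASS_WEIGHTS = np.array([0.2, 1.0, 3.0])  # hypothetical: up-weight rare classes

def weighted_categorical_crossentropy(y_true, y_pred, eps=1e-7):
    """y_true: one-hot, shape (..., 3); y_pred: softmax probabilities, shape (..., 3)."""
    y_pred = np.clip(y_pred, eps, 1.0)              # avoid log(0)
    per_pixel = -np.sum(y_true * CLASS_WEIGHTS * np.log(y_pred), axis=-1)
    return per_pixel.mean()

# Getting the heavily weighted class wrong costs more than the common one:
common = weighted_categorical_crossentropy(np.array([[1., 0., 0.]]),
                                           np.array([[0.5, 0.25, 0.25]]))
rare = weighted_categorical_crossentropy(np.array([[0., 0., 1.]]),
                                         np.array([[0.25, 0.25, 0.5]]))
print(common, rare)
```

The weighting is what breaks the ~0.9 plateau: with unweighted categorical_crossentropy the network can sit in the local minimum of predicting the dominant class everywhere.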

Training loss curve decrease sharply

I am new to deep learning.
I have a dataset of 1001 samples of human upper-body poses. The model I trained has 4 Conv layers and 2 fully connected layers with ReLU and Dropout. This is the result I got after 200 iterations. Does anyone have any idea why the training loss curve decreases so sharply?
I think I probably need more data. Since my dataset consists of numerical values, what do you think is the best data augmentation method to use here?
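One common option for numeric pose data (the function and parameters below are illustrative, not a prescription) is to add small Gaussian jitter to the keypoint coordinates; mirroring left/right joints and applying small rotations are other frequent choices. A minimal sketch:

```python
import numpy as np

def jitter(poses, sigma=0.01, copies=4, seed=0):
    """poses: (N, D) keypoint coordinates -> (N * (copies + 1), D)."""
    rng = np.random.default_rng(seed)
    augmented = [poses] + [poses + rng.normal(0.0, sigma, poses.shape)
                           for _ in range(copies)]
    return np.concatenate(augmented, axis=0)

data = np.random.rand(1001, 16)   # stand-in for the 1001 pose samples
print(jitter(data).shape)         # five times the original training data
```

The value of sigma should be tuned so that augmented poses still look physically plausible.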

ResUNet Segmentation output is bad although precision and recall values are higher on training and validation

I recently implemented ResUNet for parasite segmentation on blood-sample images. The model is described in this paper, https://arxiv.org/pdf/1711.10684.pdf, and here is the code: https://github.com/DuFanXin/deep_residual_unet/blob/master/res_unet.py. The segmentation output is a binary image. I trained the model with a weighted binary cross-entropy loss, giving more weight to the parasite class since there is a class imbalance in my images. The last output layer has a sigmoid activation.
I calculate precision, recall, and the Dice coefficient to verify how good the segmentation is during training. On training and validation I got good numerical results:
Training
dice_coeff: 0.6895, f2: 0.8611, precision: 0.6320, recall: 0.9563
Validation
val_dice_coeff: 0.6433, val_f2: 0.7752, val_precision: 0.6052, val_recall: 0.8499
However, when I visually inspect the segmentations of the validation set, my algorithm outputs all black. After analyzing the predictions returned by the model, almost all values are close to zero, so it cannot correctly differentiate between background and foreground. The problem is: why do my metrics show good numerical values while the segmentation output does not?
I mean, are the metrics not giving me good information? Why is the recall value high even though the output is all black?
I trained for about 50 epochs, and my training curves show constant learning. Is this due to the vanishing gradient problem?
No, you do not have a vanishing gradient issue.
I am almost 100% sure that the problem is related to the way you test.
The numbers in your training/validation do not lie.
Ensure that you use exactly the same preprocessing on your test dataset as is applied during training.
E.g.: if you use the rescale=1/255.0 parameter in Keras's ImageDataGenerator(), ensure that when you load a test image you divide it by 255.0 before predicting on it.
Note that the above is just an example; the inconsistency between your train and test preprocessing may stem from other causes.
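A minimal sketch of that advice, assuming the training generator used rescale=1/255.0; `load_image` and `model` below are placeholders for the real loading code and trained network:

```python
import numpy as np

def preprocess(img):
    """Mirror the training-time ImageDataGenerator(rescale=1/255.0) step."""
    img = img.astype("float32") / 255.0   # identical rescale as in training
    return img[np.newaxis, ...]           # add the batch dimension

# Placeholders for the real pipeline:
# mask = model.predict(preprocess(load_image("sample.png")))[0]
# binary = (mask > 0.5).astype(np.uint8)  # threshold the sigmoid output
```

A related pitfall worth ruling out: if the raw sigmoid floats (mostly near 0 or 1 scaled into [0, 1]) are displayed directly instead of a thresholded mask, the image can also render as nearly black.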

Performance evaluation of image segmentation - Keras?

I am currently using a model (e.g. U-Net or SegNet) implemented by Keras to segment high resolution images.
Below is the code for performance evaluation:
score = model.evaluate(test_data, test_label, verbose=1)
The trained model produced very high scores on my test dataset (loss: 0.4232, acc: 0.9789).
Then I showed the segmented test images by the following code:
k = 7
output = model.predict_classes(test_data[k:k+1])  # predicted class map for image k
visualize(np.squeeze(output, axis=0))
I do not understand why the actual outputs were totally different from the expected outputs (i.e. the ground truths), even though the accuracy was very high. Here, I have 2 kinds of objects: red denotes object 1 and green denotes object 2.
Any help or suggestions would be greatly appreciated!
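A likely explanation (hedged, since the class balance of these images isn't stated): with a dominant background class, pixel accuracy rewards predicting background everywhere, so a high accuracy can coexist with useless masks. Per-class IoU exposes this. A synthetic illustration:

```python
import numpy as np

gt = np.zeros((100, 100), dtype=int)
gt[:5, :5] = 1             # object 1: 25 of 10000 pixels
gt[-5:, -5:] = 2           # object 2: 25 of 10000 pixels
pred = np.zeros_like(gt)   # a model that predicts only background

accuracy = (pred == gt).mean()
print(accuracy)            # 0.995: looks excellent, yet both objects are missed

def iou(gt, pred, cls):
    inter = np.logical_and(gt == cls, pred == cls).sum()
    union = np.logical_or(gt == cls, pred == cls).sum()
    return inter / union if union else float("nan")

print([iou(gt, pred, c) for c in (1, 2)])   # [0.0, 0.0]
```

Reporting mean per-class IoU (or Dice) instead of raw pixel accuracy gives a score that actually tracks segmentation quality on imbalanced images.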