My confusion matrix shows 16x16 instead of 8x8 - conv-neural-network

from sklearn.metrics import confusion_matrix
import seaborn as sns

cm = confusion_matrix(test_labels, prediction_RF)
print(cm)
sns.heatmap(cm, annot=True)
I'm using a CNN as a feature extractor and then feeding the extracted features into a Random Forest. Previously I used the same procedure on a dummy CNN model, and it produced an 8x8 confusion matrix (since I have 8 classes). When I look at the confusion matrix for the VGG16 model, however, I get a 16x16 matrix, and I also get 0.0 accuracy on VGG16, even though the predictions themselves still look decent. The matrix I get on VGG16 is given below.
Matrix on VGG16
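A 16x16 matrix with only 8 classes, combined with 0.0 accuracy, often means that test_labels and prediction_RF use two different label encodings (for example 0-7 in one array and 8-15, or the original class ids, in the other), so confusion_matrix builds one row/column per distinct value it sees in either array. A minimal check, assuming both are plain 1D NumPy arrays and reusing the variable names from the snippet above:

import numpy as np
from sklearn.metrics import confusion_matrix

# If the two sets of values do not overlap, confusion_matrix allocates a
# row/column for each distinct value, hence 16x16 and 0.0 accuracy.
print(np.unique(test_labels))      # label values used by the test set
print(np.unique(prediction_RF))    # label values produced by the Random Forest

# Once both use the same encoding, the matrix is 8x8 again; passing `labels`
# explicitly also fixes the row/column order.
class_ids = np.unique(test_labels)
cm = confusion_matrix(test_labels, prediction_RF, labels=class_ids)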

Related

Train a CNN model with a 2D matrix input and a scalar output value

I would like to clear up some doubts. I have a labelled dataset consisting of a number of 2D matrices as inputs and one scalar value as the output. I am thinking of applying a convolutional neural network architecture to build a regression model for prediction.
My question is: is it possible to train a CNN model with 2D matrix inputs and a scalar target?
My expected output is also a scalar value.
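This is a fairly standard CNN regression setup: treat each 2D matrix as a single-channel image and end the network with a one-unit linear layer trained with a regression loss such as mean squared error. A minimal sketch in Keras, where the 64x64 input size, layer widths and random data are placeholders rather than anything from the question:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Dummy data: N single-channel 2D matrices, each paired with one scalar target
X = np.random.rand(256, 64, 64, 1).astype("float32")
y = np.random.rand(256).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1),                      # single linear unit -> scalar output
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)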

Confusion matrix values are not on the diagonal

I trained my model and I got Test accuracy: 0.9311110973358154
Training and Testing Accuracy Plot image
Training and Testing Loss Plot image
My confusion matrix is:
Confusion Matrix image
My test data contains 2 classes with 225 images in each class. Can anyone advise whether this is correct, and if not, how I can fix it?
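One common reason for a high reported test accuracy alongside a confusion matrix whose large values sit off the diagonal is an ordering or thresholding mismatch: the test generator shuffles the data, or the raw sigmoid outputs are never converted to 0/1 class labels. A hedged sketch of the usual check, assuming a Keras ImageDataGenerator test setup (directory, image size and variable names are placeholders):

import numpy as np
from sklearn.metrics import confusion_matrix

# shuffle=False keeps predictions in the same order as test_generator.classes
test_generator = test_datagen.flow_from_directory(
    test_dir, target_size=(150, 150), batch_size=32,
    class_mode='binary', shuffle=False)

probs = model.predict(test_generator)            # raw sigmoid outputs
preds = (probs.ravel() > 0.5).astype(int)        # convert to 0/1 labels
true = test_generator.classes                    # true 0/1 labels, file order

print(confusion_matrix(true, preds))             # each row should sum to 225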

How can I visualize incorrect predictions from a binary image classifier and print out the classification report?

I have been trying to follow François Chollet's example of a binary image classifier for cats and dogs. I have attempted to follow his example on another, similar dataset on Kaggle (https://www.kaggle.com/playlist/men-women-classification) and I want to achieve the following:
Visualise the predictions that are wrong
Produce a classification report
I already have a model with around 85% accuracy on the validation set, but I want to know roughly what kind of images my model gets wrong, as well as produce a classification report with sklearn.metrics' classification_report.
However, I do not know how the image generator works, and I am having a big problem figuring out how to pair the predictions with the labels of the test images.
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt

new_test_generator = test_datagen.flow_from_directory(
    directory=test_dir,
    target_size=(150, 150),
    batch_size=1,
    class_mode='binary',
    seed=42,
)

# Show one image and its label from the generator
image_batch, label_batch = next(new_test_generator)
plt.imshow(image_batch[0].reshape(150, 150, -1))
print(label_batch)
# I want to output images but I am not sure if this is the most efficient way of doing it

predictions = model.predict(test_generator)
print(predictions.shape)
# predictions is a numpy array of length 476, but I do not know what the 'correct'
# labels in my test set are, to validate against this output.

model.evaluate(test_generator)
# [0.3109202980995178, 0.8886554837226868]
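One way to pair the predictions with the correct labels is to rebuild the test generator with shuffle=False, so that model.predict() returns one row per file in the same order as generator.classes and generator.filenames. A sketch along those lines, reusing the names from the snippet above (none of this is guaranteed to match the original notebook):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report

eval_generator = test_datagen.flow_from_directory(
    directory=test_dir, target_size=(150, 150), batch_size=1,
    class_mode='binary', shuffle=False)

probs = model.predict(eval_generator).ravel()    # sigmoid outputs in [0, 1]
preds = (probs > 0.5).astype(int)                # predicted 0/1 classes
true = eval_generator.classes                    # true 0/1 classes, file order

print(classification_report(true, preds,
                            target_names=list(eval_generator.class_indices)))

# Visualise a few wrongly classified images via their filenames
wrong = np.flatnonzero(preds != true)
for i in wrong[:5]:
    image_batch, _ = eval_generator[i]           # batch_size=1 -> one image
    plt.imshow(image_batch[0])
    plt.title(f"{eval_generator.filenames[i]}  pred={preds[i]}  true={true[i]}")
    plt.show()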

Multiclass segmentation using PyTorch and U-Net

I am doing land-use classification with 4 classes. The softmax output of my U-Net model has shape [8, 4, 128, 128], while my mask images have shape [8, 1, 128, 128], so to calculate the loss I used the nn.CrossEntropyLoss function. Should I make any modifications to get good results? I also find that the output for a test image has 4 channels, which does not look like the mask image.
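Two details usually matter with this setup: nn.CrossEntropyLoss expects raw logits of shape [N, C, H, W] (no softmax applied) together with a class-index target of shape [N, H, W], so the mask's channel dimension has to be squeezed; and the 4-channel network output is collapsed back into a single-channel mask with an argmax over the class dimension. A minimal sketch, assuming the mask stores class ids 0-3:

import torch
import torch.nn as nn

logits = torch.randn(8, 4, 128, 128)               # raw U-Net output, no softmax
masks = torch.randint(0, 4, (8, 1, 128, 128))      # mask with class ids 0-3

criterion = nn.CrossEntropyLoss()
loss = criterion(logits, masks.squeeze(1).long())  # target shape [8, 128, 128]

# For evaluation/visualisation: collapse 4 channels into one predicted mask
pred_mask = logits.argmax(dim=1, keepdim=True)     # shape [8, 1, 128, 128]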

predict() returns image similarities with SVM in scikit learn

A silly question: after I train my SVM in scikit-learn, do I have to use the predict function, predict(X), to predict which class a sample belongs to? (http://scikit-learn.org/dev/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC.predict)
Is the X parameter the image feature vector?
If I give it an image it was not trained on (not trained on because the SVM asks for at least 3 samples per class), what does it return?
First remark: "predict() returns image similarities with SVM in scikit learn" is not a question. Please put a question in the header of Stack Overflow entries.
Second remark: the predict method of the SVC class in sklearn does not return "image similarities" but a class assignment prediction. Read the http://scikit-learn.org documentation and tutorials to understand what we mean by classification and prediction in machine learning.
Is the X parameter the image feature vector?
No, X is not "the image" feature vector: it is a set of image feature vectors with shape (n_samples, n_features), as explained in the documentation you refer to. In your case a sample is an image, hence the expected shape would be (n_images, n_features). The predict API was designed to compute many predictions at once for efficiency reasons. If you want to compute a single prediction, you will have to wrap your single feature vector in an array with shape (1, n_features).
For instance if you have a single feature vector (1D) called my_single_image_features with shape (n_features,) you can call predict with:
predictions = clf.predict([my_single_image_features])
my_single_prediction = predictions[0]
Please note the [] signs around the my_single_image_features variable to turn it into a 2D array.
my_single_prediction will be an integer whose meaning depends on the integer values provided by you when calling the clf.fit(X_train, y_train) method in the first place.
If I give it an image it was not trained on (not trained on because the SVM asks for at least 3 samples per class), what does it return?
An image is not "trained". Only the model is trained. Of course you can pass samples / images that are not part of the training set to the predict method. This is the whole purpose of machine learning: making predictions on new unseen data based on what you learn from the statistical regularities seen in the past training data.
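For completeness, a tiny end-to-end sketch of the fit/predict flow described above, using synthetic feature vectors and integer labels (purely illustrative):

import numpy as np
from sklearn.svm import SVC

# 30 synthetic "image" feature vectors with 16 features each, 3 classes (0, 1, 2)
rng = np.random.RandomState(42)
X_train = rng.rand(30, 16)
y_train = np.repeat([0, 1, 2], 10)

clf = SVC()
clf.fit(X_train, y_train)

# Predict the class of a single previously unseen feature vector
my_single_image_features = rng.rand(16)
predictions = clf.predict([my_single_image_features])
print(predictions[0])   # an integer class label: 0, 1 or 2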
