How to write horizontal labels properly in a confusion matrix? - scikit-learn

I am using the following code snippet to plot a confusion matrix with the sklearn library:
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
cm = confusion_matrix(y_test, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=['anger', 'bordome', 'disgust', 'fear', 'happiness', 'sadness', 'neutral'])
and the resulting plot is shown below, where the horizontal x-axis labels overlap and are hard to read.

Try adding this to your code:
disp.plot(xticks_rotation='vertical')
By default the tick labels are drawn horizontally, but this behavior can be changed.
You can find more details in the official documentation
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.ConfusionMatrixDisplay.html#sklearn.metrics.ConfusionMatrixDisplay
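For reference, a minimal end-to-end sketch (reusing y_test, y_pred and the labels from the question); xticks_rotation also accepts an angle in degrees:

from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import matplotlib.pyplot as plt

labels = ['anger', 'bordome', 'disgust', 'fear', 'happiness', 'sadness', 'neutral']
cm = confusion_matrix(y_test, y_pred, normalize='true')
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels)

# Rotate the x-axis tick labels so the long class names do not overlap;
# an angle such as 45 works as well as 'vertical'.
disp.plot(xticks_rotation='vertical')
plt.tight_layout()
plt.show()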

Related

pytorch make_grid (from torchvision.utils import make_grid) behaves differently than I expect

Trying to run the visualization utils tutorial from PyTorch, I used some images of dogs found on the internet, since the images used in the tutorial are not distributed for use. Making the grid and showing the result behaves oddly: it shows each channel as a separate image (I guess that is what I am seeing).
This is the output from the tutorial, but here is what I get from my own images:
I was expecting to see the two images in their original colors in a grid.
Another step I tried following Ivan's comment:
tutorial: https://pytorch.org/vision/master/auto_examples/plot_visualization_utils.html
I would like to know how to fix this (and use make_grid correctly)
Judging by the output you got, I would assume your image tensors are shaped (height, width, channels) instead of the (channels, height, width) layout that make_grid expects. You can correct this with torch.permute. The following should provide the desired result:
>>> grid = make_grid(torch.stack([transformed_dog1, transformed_dog2]).permute(0, 3, 1, 2))
>>> show(grid)
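A slightly fuller sketch, assuming transformed_dog1 and transformed_dog2 are (H, W, C) uint8 tensors as above, and using matplotlib instead of the tutorial's show() helper:

import torch
import matplotlib.pyplot as plt
from torchvision.utils import make_grid

# Assumption: the two tensors are (H, W, C) uint8 images, e.g. created from
# numpy arrays with torch.from_numpy(...)
batch = torch.stack([transformed_dog1, transformed_dog2])  # (N, H, W, C)
batch = batch.permute(0, 3, 1, 2)                          # -> (N, C, H, W), what make_grid expects
grid = make_grid(batch)                                    # single (C, H', W') image

# matplotlib wants (H, W, C), so permute back for display
plt.imshow(grid.permute(1, 2, 0).numpy())
plt.axis('off')
plt.show()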

Unexpected plots on matplotlib histograms

I am quite a beginner with matplotlib so apologies if this seems like a dumb question.
I have a csv file with weight values for individual neurons in the different layers of my deep learning model. As I have four layers in my model, the file structure looks like this:
weight_1,weight_2......weight_n
weight_1,weight_2......weight_n
weight_1,weight_2......weight_n
weight_1,weight_2......weight_n
I want to extract the weights from each layer and generate distributions from them. I already have code for this and it works, but for some epochs the histograms contain some strange colors that look like additional histograms. I am attaching a sample image with the question.
As you can see, there is a pinkish part which is masked by the blue bulk of the histogram. Can someone please help me understand what that is?
My code currently looks like this (assume that my file is loaded in the reader):
for row in csv_reader:
    a = np.array(row)
    a_float = a.astype(float)  # np.float is deprecated/removed in recent NumPy versions
    plt.hist(a_float, bins=20)
    plt.xlabel("weight_range")
    plt.ylabel("frequency")
Please note that FOUR different plots (images) are generated after finishing the loop as the csv file has four rows. I have only posted the sample image for one of them. I didn't try to plot all the rows in one graph.
EDIT
I reduced the number of bins and now it's more prominent. I am attaching another sample image.
SOLVED
Adding plt.figure() inside the loop solved it. Please check the comments and answer below for the details. The updated loop should be as follows:
for row in csv_reader:
    a = np.array(row)
    a_float = a.astype(float)
    plt.figure()  # start a new figure for each row/layer
    plt.hist(a_float, bins=20)
    plt.xlabel("weight_range")
    plt.ylabel("frequency")
    # show or save the figure here (e.g. plt.savefig(...)) before closing it
    plt.close()
I was trying to reproduce your error, and most likely you are plotting several histograms in one plot:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
arrays = np.array([np.random.random() for i in range(200)]).reshape(2, 100)
fig = plt.figure()
ax = fig.add_subplot(111)
for array in arrays:
    ax.hist(array, bins=20)
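A minimal sketch of the fix, along the same lines as the SOLVED loop above: give each row its own figure so the histograms are not drawn on top of each other (the file names here are just placeholders):

import numpy as np
import matplotlib.pyplot as plt

arrays = np.random.random((4, 100))  # stand-in for the four CSV rows

for i, array in enumerate(arrays):
    fig, ax = plt.subplots()         # a fresh figure per layer
    ax.hist(array, bins=20)
    ax.set_xlabel("weight_range")
    ax.set_ylabel("frequency")
    fig.savefig(f"layer_{i}_weights.png")
    plt.close(fig)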

Problems Converting Numpy/OpenCV Array Image into a Wand Image

I'm currently trying to perform a Polar to Cartesian Coordinate Image transformation, to display a raw sonar image into a 'fan-display'.
Initially I have a Numpy Array image of type np.float64, that can be seen below:
After doing some searching, I came across this StackOverflow post Inverse transform an image from Polar to Cartesian in OpenCV with a very similar problem, in which the poster seemed to have solved his/her issue by using the Python Wand library (http://docs.wand-py.org/en/0.5.9/index.html), specifically using their set of Distortion functions.
However, when I tried to read the image in with Wand, I ended up with the image below, which seems to be smaller than the original one. The weird thing is that img.size still reports the same size as the original image's shape.
The code for this transformation can be seen below:
print(raw_img.shape)  #=> (369, 256)
wand_img = Image.from_array(raw_img.astype(np.uint8), channel_map="I")
display(wand_img)
print("Current image size", wand_img.size)  #=> "Current image size (369, 256)"
This is definitely quite problematic as Wand will automatically give the wrong 'fan image'. Is anybody familiar with this kind of problem with the Wand library previously, and if yes, may I ask what is the recommended solution to fix this issue?
If this issue isn't resolved soon, my backup plan is to use OpenCV's cv::remap function (https://docs.opencv.org/4.1.2/da/d54/group__imgproc__transform.html#ga5bb5a1fea74ea38e1a5445ca803ff121). However, the problem with this is that I'm not sure which mapping arrays (i.e. map_x and map_y) to use to perform the Polar->Cartesian transformation, as a mapping matrix that implements the transformation equations below:
r = polar_distances(raw_img)
x = r * cos(theta)
y = r * sin(theta)
didn't seem to work and instead threw out errors from OpenCV as well.
Any kind of help and insight into this issue is greatly appreciated. Thank you!
- NickS
EDIT: I've tried another image example as well, and it still shows a similar problem. First, I imported the image into Python using OpenCV with these lines of code:
import matplotlib.pyplot as plt
from wand.image import Image
from wand.display import display
import cv2
img = cv2.imread("Test_Img.jpg")
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.figure()
plt.imshow(img_rgb)
plt.show()
which showed the following display as a result:
However, as I continued and tried to open the img_rgb object with Wand, using the code below:
wand_img = Image.from_array(img_rgb)
display(img_rgb)
I'm getting the following result instead.
I tried opening the file directly with wand.image.Image(), which displays the image correctly with the display() function, so I believe there isn't anything wrong with the Wand installation on the system.
Is there a step that I'm missing to convert the NumPy array into a Wand Image? If so, what would it be, and what is the suggested way to do it?
Please keep in mind that the NumPy-to-Wand conversion is crucial here: the raw sonar images are stored as binary data, so NumPy is needed to turn them into proper images.
Is there a step that I'm missing to convert the NumPy array into a Wand Image?
No, but there is a bug in Wand's Numpy implementation in Wand 0.5.x. The shape of OpenCV's ndarray is (ROWS, COLUMNS, CHANNELS), but Wand's ndarray is (WIDTH, HEIGHT, CHANNELS). I believe this has been fixed for the future 0.6.x releases.
If so, what would it be, and what is the suggested way to do it?
Swap the values in img_rgb.shape before passing to Wand.
img_rgb.shape = (img_rgb.shape[1], img_rgb.shape[0], img_rgb.shape[2])
with Image.from_array(img_rgb) as img:
    display(img)
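As for the cv2.remap fallback mentioned in the question, here is a rough sketch of how the map arrays could be built. It assumes the polar image stores range along the rows and beam angle along the columns, and a hypothetical 120-degree field of view; these are assumptions, not values from the original post:

import numpy as np
import cv2

rows, cols = raw_img.shape            # e.g. (369, 256): range bins x beams (assumed layout)
fov = np.deg2rad(120)                 # assumed sonar field of view

out_h, out_w = rows, 2 * rows         # output fan size in pixels (arbitrary choice)
xs = np.arange(out_w) - out_w / 2.0   # Cartesian x, centred on the fan apex
ys = np.arange(out_h)                 # Cartesian y, 0 at the apex
X, Y = np.meshgrid(xs, ys)

r = np.sqrt(X**2 + Y**2)              # range of each output pixel
theta = np.arctan2(X, Y)              # angle measured from the fan axis

# map_x / map_y tell cv2.remap where to sample in the *source* polar image
map_x = ((theta + fov / 2) / fov * (cols - 1)).astype(np.float32)
map_y = r.astype(np.float32)

fan_img = cv2.remap(raw_img.astype(np.float32), map_x, map_y,
                    interpolation=cv2.INTER_LINEAR, borderValue=0)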

Photutils DAOPhot Not Fitting stars well?

I recently ran across the PhotUtils package and am trying to use it to perform PSF Photometry on some images I have. However, when I try to run the code, I get very strange results. When I plot the image generated by get_residual_image(), the stars are not removed well. Some sample images are shown below.
The first image has sigma set to 2.05, as it is in one of the sample programs in the PhotUtils documentation:
However, the stars only appear to be removed at their centers.
The second image has sigma set to 5.0. This one is especially strange: some stars are way over-removed, some are under-removed, some black squares are added to the image, etc.
Here is my code:
import photutils
from photutils.psf import DAOPhotPSFPhotometry as DAOP
from photutils.psf import IntegratedGaussianPRF as PRF
from photutils.background import MMMBackground
bkg = MMMBackground()
background = 2.5 * bkg(img)
gaussian_prf = PRF(sigma=5.0)
gaussian_prf.sigma.fixed = False
photTester = DAOP(8, background, 5, gaussian_prf, 31)
photResults = photTester(imgStars)
finalImg = photTester.get_residual_image()
After this, I simply plot the original and final image in MatPlotLib. I use a greyscale colormap. The reason that the left images appear slightly darker is that they use a different color scaling.
Perhaps I have set one of the parameters incorrectly?
Could someone help me out with this? Thank you!
Looking at the residual image instantly told me that the background subtraction might be wrong. I could reproduce the result and wondered whether MMMBackground was not doing the job correctly.
After taking a closer look at the documentation, Getting started with Photutils finally gave the essential hint:
image -= np.median(image)
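Applied to the code in the question, a minimal sketch of that hint might look like the following; the noise-based threshold is an assumption on my part, not something from the original post:

import numpy as np

# Subtract the (median) background from the image itself before fitting,
# as the documentation hint above suggests.
img_sub = imgStars - np.median(imgStars)

# With the background removed, pass a detection threshold above the
# (now roughly zero) background; a few times the residual scatter is a
# common starting point (assumption).
threshold = 5.0 * np.std(img_sub)

photTester = DAOP(8, threshold, 5, gaussian_prf, 31)
photResults = photTester(img_sub)
finalImg = photTester.get_residual_image()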

Networkx - exporting graphml with edge labels, height and width attributes, custom images

I want to automate a network topology diagram using Python. I'm new to Python, so please bear with me. After doing some research I found out that I can use Python to create GraphML files which can be read by yEd.
I'm learning how to use NetworkX to create the GraphML files. So far I'm able to create nodes, connect them, and add labels to the nodes (these labels would be the hostnames). Now I need to know how I can add labels to the edges (these labels would be the interface names), as in the topology example image.
If possible I would like to know how to add a custom image for every node (by default the shape is a square but I would like to use a router png file).
If that is not possible, it would be helpful to know how to edit the height and width of the shape and also how to disable arrows.
I've reviewed the docs on the NetworkX website but I haven't found how to make these changes directly on the graph object. The only way I've seen it done is when drawing the graph, for example with nx.draw_networkx_labels(G, pos, labels, font_size=15, arrows=False), but that is not what I need because it is not saved to the GraphML file.
If someone can guide me through this it would be really helpful, I'm attaching my code:
import networkx as nx
import matplotlib
import matplotlib.pyplot as plt
g = nx.DiGraph()
g.add_node('Hostname_A')
g.add_node('Hostname_B')
g.add_node('Hostname_C')
g.add_node('Hostname_D')
g.add_edge('Hostname_A','Hostname_B')
g.add_edge('Hostname_A','Hostname_C')
g.add_edge('Hostname_B','Hostname_D')
g.add_edge('Hostname_B','Hostname_C')
for node in g.nodes():
    g.nodes[node]['label'] = node  # g.node[...] was removed in newer NetworkX releases
nx.readwrite.write_graphml(g, "graph.graphml")
This is the solution:
for edge in g.edges():
    g.edges[edge]['source'] = 'int gi0/0/0'
    g.edges[edge]['destination'] = 'int gi0/0/1'
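To sanity-check that the attributes actually end up in the file, a quick round-trip sketch (the 'source'/'destination' keys above are arbitrary attribute names, not a NetworkX convention; yEd can map such GraphML attributes onto edge labels via its Properties Mapper, as far as I know):

import networkx as nx

# Read the exported file back and print the stored edge data
h = nx.readwrite.read_graphml("graph.graphml")
for u, v, data in h.edges(data=True):
    print(u, "->", v, data)  # e.g. Hostname_A -> Hostname_B {'source': 'int gi0/0/0', ...}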
