I am trying to save a png image using the following commands:
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(14, 8))
ax1 = fig.add_subplot(221)
subplt1=(usub1_sfc-usub2_sfc).plot(vmin=-2.5e-2,vmax=2.5e-2,add_colorbar=False)
cb=plt.colorbar(subplt1,extend='both')
cb.ax.set_title('m/s', size=14)
cb.ax.tick_params(labelsize=12)
ax1.tick_params(labelsize=12)
ax1.set_xticks(np.arange(0,3500,500))
ax1.set_yticks(np.arange(0,2500,500))
#plt.xticks(fontsize=10)
#fig.colorbar(subplt1)
plt.title(r'USUBM$_{\mathrm{1km}}$ - USUBM$_{\mathrm{5km}}$')
plt.xlabel('nlon',fontsize=16)
plt.ylabel('nlat',fontsize=16)
ax2 = fig.add_subplot(222)
subplt2=(usub3_sfc-usub2_sfc).plot(vmin=-2.5e-2,vmax=2.5e-2,add_colorbar=False)
cb=plt.colorbar(subplt2,extend='both')
cb.ax.set_title(label='m/s', size=14)
cb.ax.tick_params(labelsize=12)
ax2.tick_params(labelsize=12)
ax2.set_xticks(np.arange(0,3500,500))
ax2.set_yticks(np.arange(0,2500,500))
plt.title(r'USUBM$_{\mathrm{200m}}$ - USUBM$_{\mathrm{5km}}$')
plt.xlabel('nlon',fontsize=16)
plt.ylabel('nlat',fontsize=16)
fig.savefig('./test.png',dpi=130)
My PNG file ends up with a checkerboard pattern everywhere around the bounding boxes of the plots. Inside the boxes I can see the fields, but everywhere around them the checkerboard pattern covers the axis ticks, axis labels, plot titles, etc.
The file I create looks very much like the third image at this link; the only difference is that there the checkerboard covers the entire image.
Question: how can I save the PNG image without this checkerboard pattern?
Here is the answer to my original question (based on the other thread I linked to):
fig = plt.figure(facecolor="w")
This removed the checkerboard pattern surrounding the plotted area.
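A variant with the same effect (not from the linked thread, just standard savefig keyword arguments) is to force an opaque background at save time rather than at figure creation:
# Write an opaque PNG regardless of the figure's facecolor.
fig.savefig('./test.png', dpi=130, facecolor='white', transparent=False)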
I am overlaying a scatter plot of points on a 128 x 128 pixel imshow image. If you look closely here:
the objects do not always fall exactly on the centers of the corresponding pixels. I tried different interpolations for imshow and origins for scatter, but nothing changed. So I thought I could overlay a grid to see how big this offset actually is:
and I noticed that the grid also falls exactly on the objects and not on the centers of the imshow pixels. The script for the above plot is:
import matplotlib.ticker
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(15,8))
plt.imshow(counts_pre[:,:,slice_z],cmap='viridis',interpolation=None)
plt.scatter(j_index,i_index, s = 0.1, c = 'red', marker = 'o')
myInterval=1.
loc = matplotlib.ticker.MultipleLocator(base=myInterval)
plt.gca().xaxis.set_minor_locator(loc)
plt.gca().yaxis.set_minor_locator(loc)
plt.grid(which="both", linewidth=0.72,color="white",alpha=0.1)
plt.tick_params(which="minor", length=0)
plt.show()
Any ideas on why this offset exists and how I can fix it? Notice also that the grid is not very homogeneous, i.e. some cells are rectangular rather than square.
Edit:
Upgrading to the newest matplotlib version did not resolve the issue.
I created objects whose entries are non-zero exactly where I know the points should fall, so they should be perfectly aligned, but they still don't match up.
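Not an answer, but a minimal diagnostic sketch (my own assumptions: the default imshow extent, where pixel centers sit at integer coordinates and pixel edges at half-integers, and random data standing in for counts_pre) that puts the gridlines on the pixel edges, so you can check whether the offset is exactly half a pixel:
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-in for counts_pre[:, :, slice_z]
data = np.random.poisson(1.0, size=(128, 128))

fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(data, cmap='viridis')

# With the default extent, pixel centers are at 0, 1, 2, ... and pixel
# edges at -0.5, 0.5, 1.5, ...; place the minor ticks on the edges.
ax.set_xticks(np.arange(-0.5, data.shape[1], 1.0), minor=True)
ax.set_yticks(np.arange(-0.5, data.shape[0], 1.0), minor=True)
ax.grid(which='minor', color='white', linewidth=0.72, alpha=0.3)
ax.tick_params(which='minor', length=0)
plt.show()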
I am trying to straighten images that are not horizontal, i.e. that may be slanted.
When testing two images, this photo, which is horizontal, and this one, which is not, I get good results with the horizontal photo; however, when trying to correct the second, tilted photo, it does not do what I expected.
The first image works fine, as shown below, with a theta of 1.6406095. For now the result looks rough because I'm still trying to get both photos to come out horizontal.
For the second image, theta is just 1.9198622.
I think the error is at this line:
lines= cv2.HoughLines(edges, 1, np.pi/90.0, 60, np.array([]))
I have put together a little simulation in Colab at this link.
Any help is welcome.
So far this is what I got.
import cv2
import numpy as np
img=cv2.imread('test.jpg',1)
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
imgBlur=cv2.GaussianBlur(imgGray,(5,5),0)
imgCanny=cv2.Canny(imgBlur,90,200)
contours,hierarchy =cv2.findContours(imgCanny,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
rectCon=[]
for cont in contours:
    area=cv2.contourArea(cont)
    if area >100:
        #print(area) #prints the area of each contour
        peri=cv2.arcLength(cont,True)
        approx=cv2.approxPolyDP(cont,0.01*peri,True)
        #print(len(approx)) #prints how many corner points the contour has
        if len(approx)==4:
            rectCon.append(cont)
#print(len(rectCon))
rectCon=sorted(rectCon,key=cv2.contourArea,reverse=True) # Sort the contours from largest area to smallest
bigPeri=cv2.arcLength(rectCon[0],True)
cornerPoints=cv2.approxPolyDP(rectCon[0],0.01*bigPeri,True) # use the big contour's perimeter, not the leftover loop value
# Reorder cornerPoints so I can prepare them for the warp transform (bird's-eye view)
cornerPoints=cornerPoints.reshape((4,2))
mynewpoints=np.zeros((4,1,2),np.int32)
add=cornerPoints.sum(1)
mynewpoints[0]=cornerPoints[np.argmin(add)]
mynewpoints[3]=cornerPoints[np.argmax(add)]
diff=np.diff(cornerPoints,axis=1)
mynewpoints[1]=cornerPoints[np.argmin(diff)]
mynewpoints[2]=cornerPoints[np.argmax(diff)]
# Draw my corner points
#cv2.drawContours(img,mynewpoints,-1,(0,0,255),10)
##cv2.imshow('Corner Points in Red',img)
##print(mynewpoints)
# Bird Eye view of your region of interest
pt1=np.float32(mynewpoints) #What are your corner points
pt2=np.float32([[0,0],[300,0],[0,200],[300,200]])
matrix=cv2.getPerspectiveTransform(pt1,pt2)
imgWarpPers=cv2.warpPerspective(img,matrix,(300,200))
cv2.imshow('Result',imgWarpPers)
Now you just have to fix the tilt (you can deskew with OpenCV), then apply some thresholding to detect the letters and recognise each one.
As a general rule, I think the images need to be normalised first so that the edges can be detected more easily.
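For the tilt itself, here is a minimal deskew sketch (my addition, not part of the original answer; it assumes the page is the dominant dark-on-light content in the photo). It estimates the angle with cv2.minAreaRect over the thresholded foreground and rotates the image back with cv2.warpAffine:
import cv2
import numpy as np

img = cv2.imread('test.jpg', 1)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Otsu threshold so the dark text/edges become the foreground.
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Angle of the smallest rotated rectangle enclosing all foreground pixels.
ys, xs = np.where(thresh > 0)
coords = np.column_stack((xs, ys)).astype(np.float32)
angle = cv2.minAreaRect(coords)[-1]
# minAreaRect's angle convention differs between OpenCV versions;
# normalise to a small tilt and be prepared to flip the sign.
if angle > 45:
    angle -= 90

# Rotate around the image centre to undo the tilt.
(h, w) = img.shape[:2]
M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
deskewed = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_CUBIC,
                          borderMode=cv2.BORDER_REPLICATE)
cv2.imshow('Deskewed', deskewed)
cv2.waitKey(0)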
I am trying to plot an image after some processing, and I get three different results using the three options below. The image is the result of applying the Sobel filter twice to a road-lane image.
sample_image.jpg
The three plotting methods are shown in the Python code below.
import cv2
import matplotlib.pyplot as plt

img = cv2.imread('sample_image.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gaussian = cv2.GaussianBlur(gray,(3,3),0)
sobely = cv2.Sobel(gaussian,cv2.CV_64F,1,0,ksize=5) # first Sobel pass (dx=1, dy=0)
sobelyy = cv2.Sobel(sobely,cv2.CV_64F,1,0,ksize=5) # second Sobel pass (dx=1, dy=0)
# method 1
cv2.imshow('sobelyy', sobelyy)
# method 2
cv2.imwrite('filtered_img1.JPG', sobelyy)
s_img = cv2.imread('filtered_img1.JPG')
cv2.imshow('s_img', s_img)
# method 3
plt.figure()
plt.imshow(sobelyy, cmap='gray')
plt.title('Filtered sobelyy image, B(x,y)'), plt.xticks([]), plt.yticks([])
plt.show()
The images I get are:
method 1
method 2
method 3
The image I want to get is the one obtained in method 3.
Why are the images shown in different ways?
How can I save the output image so that it looks like the result of method 3?
Thank you in advance!
Why are the images shown in different ways?
OpenCV and Matplotlib handle colours and value scaling differently when displaying images - that's why they can look different even when the underlying data are the same.
As for your first two methods: those should actually look the same, and they do when I try out your code.
How can I save the output image so that it looks like the result of method 3?
Matplotlib has a built-in function to write the plotted figure to disk; just use:
plt.savefig('your_filename.png')
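A small sketch of how that could slot into the code from the question (my arrangement and file names, assuming sobelyy from above is in scope): call savefig before plt.show(), or write the array directly with plt.imsave, which applies the same colormap scaling as plt.imshow:
plt.figure()
plt.imshow(sobelyy, cmap='gray')
plt.title('Filtered sobelyy image, B(x,y)'), plt.xticks([]), plt.yticks([])
plt.savefig('filtered_img3.png')  # save the rendered figure before plt.show()
plt.show()

# Alternatively, write the array itself; imsave normalises the float data
# and applies the colormap just like imshow does.
plt.imsave('filtered_img3_direct.png', sobelyy, cmap='gray')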
I am using functions such as ml.dis.top.plot(). I would like to rotate these figures and delete the titles. How can I do that?
plt.title('') seems to work for the titles, but I cannot rotate the figures. Here is the relevant part of the script:
fig = plt.figure(figsize=(75, 75))
plt.subplot(1,1,1,aspect='equal')
mf.dis.top.plot(contour=True, colorbar=True)
plt.title('')
plt.savefig('top_plot.png')
So there are different ways that you can make model plots in flopy. You are using the quick and easy way to plot one of our arrays. What you probably want to do is use the ModelMap capability, which is described in https://github.com/modflowpy/flopy/blob/develop/examples/Notebooks/flopy3_MapExample.ipynb. This will give you full control over your figure, including rotation and offset, and it will allow you to customize the title and anything else you'll need to do. The code might look something like the following:
import flopy
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(75, 75))
ax = plt.subplot(1, 1, 1, aspect='equal')
modelmap = flopy.plot.ModelMap(model=mf, rotation=14)
modelmap.contour_array(mf.dis.top.array)
plt.savefig('top_plot.png')
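If the flopy call still adds a title you don't want, clearing it right before the savefig line (plain matplotlib, nothing flopy-specific) should work here just as it did in the question:
ax.set_title('')  # clear any auto-generated title before plt.savefig(...)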
I have been going around in circles on this problem for quite a long time, but I'm not able to find an answer.
So, I have a list of matrices that I want to plot (for the sake of this question I'm just using two random matrices):
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

mats = [np.random.random((500, 500)), np.random.random((500, 500))]  # renamed from "list" to avoid shadowing the builtin
I then want to plot each element of the list with matshow, on a separate page of a PDF file:
with PdfPages('file.pdf') as pdf:
    plt.rc('figure', figsize=(3,3), dpi=40)
    for elem in mats:
        plt.matshow(elem, fignum=1)
        plt.title("title")
        plt.colorbar()
        plt.text(0,640,"Caption")
        pdf.savefig()  # saves the current figure into a pdf page
        plt.close()
The result is the following:
My problem is with the caption. You can see I put "Caption" at the edge of the page on purpose. This is because sometimes the actual captions I want to insert are too big to fit on a single PDF page.
So, how can I make each PDF page adjust to its caption's content (which might vary from page to page)? For example, would it be possible to set each page size to A4 or A3 and then plot/write everything on each page?
I've already tried setting plt.figure(figsize=(X, X)) with a variable size X, but as far as I can tell it just changes the resolution of the PDF.
You may want to use the bbox_inches="tight" option when saving the file. This adapts the saved figure size to its content. It then suffices to place some text at position (0,0) in figure coordinates and align it to the top. The text will then extend towards the bottom, outside the figure (so the figure shown on screen would not contain it), but with the bbox_inches="tight" option of savefig, the saved figure becomes large enough to contain it.
Using the textwrap module then also allows you to limit the text in the horizontal direction.
import numpy as np; np.random.seed(1)
import textwrap
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
p = np.ones(12); p[0] = 7
text2 = "".join(np.random.choice(list(" abcdefghijk"),p=p/p.sum(), size=1000))
text2 = textwrap.fill(text2, width=80)
texts = ["Caption: Example", "Caption 2: " + text2 ]
lis = [np.random.random((500, 500)), np.random.random((500, 500))]
with PdfPages('file.pdf') as pdf:
    for elem,text in zip(lis,texts):
        fig = plt.figure(dpi=100)
        grid_size = (3,1)
        plt.imshow(elem)
        plt.title("title")
        plt.colorbar()
        fig.text(0,0, text, va="top")
        plt.tight_layout()
        pdf.savefig(bbox_inches="tight")
        plt.close()
I think I have come up with an answer to this question myself, which solves the problem of having enough space for my text.
However, a perfect answer would make each page's size dynamic, according to how much caption text I add.
Anyway, my answer is the following (I essentially divided each page into a grid with 3 rows, using the upper 2 rows for the plot and the last one just for the caption):
with PdfPages('file.pdf') as pdf:
    for elem in mats:
        fig = plt.figure(figsize=(8.27, 11.69), dpi=100)
        grid_size = (3,1)
        plt.subplot2grid(grid_size, (0, 0), rowspan=2, colspan=1)
        plt.imshow(elem)
        plt.title("title")
        plt.colorbar()
        plt.subplot2grid(grid_size, (2, 0), rowspan=2, colspan=1)
        plt.axis('off')
        plt.text(0,1,"Caption")
        plt.tight_layout()
        pdf.savefig()
        plt.close()
This produces the following on each page:
Could someone find a better solution? :)