I want to create an inset axes by specifying its width and height in data coordinates. However, when converting these values to inches (as required by inset_axes) using transData.transform, the inset axes doesn't respect the given width and height. Any idea why? Here is my code:
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from matplotlib.projections import get_projection_class, get_projection_names

fig, ax_main = plt.subplots()

width = 0.5
height = 0.5
x_center = 0
y_center = 0

new_width, _ = (ax_main.transData.transform([width, 0]) - ax_main.transData.transform([0, 0])) / 72
_, new_height = (ax_main.transData.transform([0, height]) - ax_main.transData.transform([0, 0])) / 72
print(new_width, new_height)

### Process
ax_val = inset_axes(ax_main, width=new_width, height=new_height, loc=3,
                    bbox_to_anchor=(x_center, y_center),
                    bbox_transform=ax_main.transData,
                    borderpad=0.0)
Though I don't know to what extent the result is unexpected, the problem might be the wrong conversion being used. transData transforms from data coordinates to pixel space. You divide the result by 72. That may or may not give inches, depending on whether the figure dpi is 72 or not. By default the dpi is set to the value of the rcParam "figure.dpi", which is 100 for a fresh matplotlib install, in case you haven't changed your rc params.
To be on the safe side,
- either set your figure dpi to 72, plt.subplots(dpi=72),
- or divide by the figure dpi, (ax_main.... ) / fig.dpi, as shown below.
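For example, a dpi-safe version of the question's conversion (the same computation, just divided by fig.dpi instead of 72):
new_width, _ = (ax_main.transData.transform([width, 0]) - ax_main.transData.transform([0, 0])) / fig.dpi
_, new_height = (ax_main.transData.transform([0, height]) - ax_main.transData.transform([0, 0])) / fig.dpi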
However, more generally, it seems you want to set the width and height of the inset_axes in data coordinates. So best not to specify the size in inches at all; rather, use the bounding box directly.
ax_val = inset_axes(ax_main, width="100%", height="100%", loc=3,
                    bbox_to_anchor=(x_center, y_center, width, height),
                    bbox_transform=ax_main.transData,
                    borderpad=0.0)
I updated the inset_axes documentation as well as the example a couple of months ago, so hopefully this case is also well covered. However, feel free to give feedback in case some information is still missing.
Even more interesting here might be the new option in matplotlib 3.0: instead of mpl_toolkits.axes_grid1.inset_locator, use the Axes.inset_axes method. It's still marked "experimental" but should work fine.
ax_val = ax_main.inset_axes((x_center, y_center, width, height), transform=ax_main.transData)
I'm using Python 3.10.5, and here's my code:
import pyautogui

target = pyautogui.locateCenterOnScreen('target.png')
print(target)
pyautogui.moveTo(target)
But for some reason it just prints None and doesn't move the mouse to the image's coordinates.
The fact that it prints None means it didn't find the image. Check the image you are trying to find (is it properly cropped?) or try setting the confidence parameter to make a match more likely (note that confidence requires the opencv-python package to be installed):
pyautogui.locateCenterOnScreen('target.png', confidence=x)
# x can be anywhere between 0 and 1; the lower it is, the more likely a match
import pyautogui

# Start at 0.5 and scale as needed.
target = pyautogui.locateCenterOnScreen('target.png', confidence=0.5)
print(target.x, target.y)
pyautogui.moveTo(target.x, target.y)
When locateCenterOnScreen returns a location, moveTo needs the coordinates from it. I use this pattern extensively: locateOnScreen first to check that a valid match exists, then locateCenterOnScreen for the actual coordinates. I also use time.sleep() extensively, as the target applications usually don't respond at computer speed.
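A minimal sketch of that check-then-act pattern (the filename, confidence value, and sleep duration are just examples):
import time
import pyautogui

# Check validity first; locateOnScreen returns None when no match is found.
if pyautogui.locateOnScreen('target.png', confidence=0.5) is not None:
    target = pyautogui.locateCenterOnScreen('target.png', confidence=0.5)
    pyautogui.moveTo(target.x, target.y)
    time.sleep(1)  # give the target application time to respond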
I have pictures containing ArUco markers, but I am unable to detect all of them with the detectMarkers function. Actually, I have many pictures: in some of them I can detect all the markers, in others I cannot, and I don't really understand why.
I thought it was because of the quality of the photo, but it seems not to be that simple. Here's an example of my code:
import cv2
import matplotlib.pyplot as plt
from cv2 import aruco

aruco_dict = aruco.Dictionary_get(aruco.DICT_4X4_1000)

inputfile = 'EOS-1D-X-Mark-II_1201-ConvertImage.jpg'
frame = cv2.imread(inputfile)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

parameters = aruco.DetectorParameters_create()
corners, ids, rejectedImgPoints = aruco.detectMarkers(frame, aruco_dict, parameters=parameters)
frame_markers = aruco.drawDetectedMarkers(frame.copy(), rejectedImgPoints)

plt.figure(figsize=(20, 10))
plt.imshow(frame_markers)
for i in range(len(ids)):
    c = corners[i][0]
    plt.plot([c[:, 0].mean()], [c[:, 1].mean()], "o", label="id={0}".format(ids[i]))
plt.legend()
plt.show()
In this picture, one marker is not detected, and I don't understand why.
I tried to tune the parameters of the detectMarkers function manually, with an interactive method in a Jupyter notebook. There are many parameters, and I found nothing that really helped, except that in some photos reducing polygonalApproxAccuracyRate did, as sketched below.
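For example (a sketch; the value below is just an illustration, not a known-good setting for these photos):
parameters = aruco.DetectorParameters_create()
parameters.polygonalApproxAccuracyRate = 0.01  # lower than the default, i.e. a stricter polygon fit
corners, ids, rejectedImgPoints = aruco.detectMarkers(frame, aruco_dict, parameters=parameters)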
The photo is originally 5472 x 3648 pixels, but the one I posted here is 2189 x 1459 pixels. Note that it doesn't work at the higher resolution either. Actually, I found that in some photos reducing the resolution helps to detect the markers... That seems contradictory, but I think it is because the default parameters of the function are not adapted to my pictures; still, I found no solution when tuning the parameters.
Another idea is to use the refineDetectedMarkers function after calling detectMarkers. It takes the candidates that were found by detectMarkers but failed to be identified, and tries to refine their identification. However, as far as I understand, I need to know where my markers should be in the picture and pass that to refineDetectedMarkers (as a board). In my situation, I don't know where the markers should be; otherwise I wouldn't be taking photos. The photos are used to observe precisely the evolution of the markers' positions.
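For reference, a sketch of how it would be called if a board layout were known (the GridBoard geometry below is entirely hypothetical; in my case I don't have one):
# Hypothetical board: a 5 x 7 grid of 4 cm markers with 1 cm separation.
board = aruco.GridBoard_create(5, 7, 0.04, 0.01, aruco_dict)
corners, ids, rejectedImgPoints, recovered = aruco.refineDetectedMarkers(
    frame, board, corners, ids, rejectedImgPoints)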
I am interested in any ideas you may have. Thanks for reading!
I recently ran across the Photutils package and am trying to use it to perform PSF photometry on some images I have. However, when I run the code, I get very strange results: when I plot the image generated by get_residual_image(), the stars are not removed well. Some sample images are shown below.
The first image has sigma set to 2.05, as in one of the sample programs in the Photutils documentation.
However, the stars appear to be removed only at their centers.
The second image has sigma set to 5.0. This one is especially strange: some stars are way over-removed, some are under-removed, some black squares are added to the image, etc.
Here is my code:
import photutils
from photutils.psf import DAOPhotPSFPhotometry as DAOP
from photutils.psf import IntegratedGaussianPRF as PRF
from photutils.background import MMMBackground

bkg = MMMBackground()
background = 2.5 * bkg(img)

gaussian_prf = PRF(sigma=5.0)
gaussian_prf.sigma.fixed = False

# DAOPhotPSFPhotometry(crit_separation, threshold, fwhm, psf_model, fitshape)
photTester = DAOP(8, background, 5, gaussian_prf, 31)
photResults = photTester(imgStars)
finalImg = photTester.get_residual_image()
After this, I simply plot the original and final images in Matplotlib, using a grayscale colormap. The reason the left images appear slightly darker is that they use a different color scaling.
Perhaps I have set one of the parameters incorrectly? Could someone help me out with this? Thank you!
Looking at the residual image instantly told me that the background subtraction might be wrong. I could reproduce the result and wondered whether MMMBackground was doing its job correctly.
After taking a closer look at the documentation, "Getting started with Photutils" finally gave the essential hint:
image -= np.median(image)
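In other words (a sketch using the question's variable names): subtract the median of the image, as a background estimate, before running the photometry.
import numpy as np

# Subtract the median background from the image before the PSF photometry.
imgStars = imgStars - np.median(imgStars)
photResults = photTester(imgStars)
finalImg = photTester.get_residual_image()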
The question of how to maximize a window before saving has been asked several times and has several answers (none of them portable, though): How to maximize a plt.show() window using Python
and How do you change the size of figures drawn with matplotlib?
I created a small function to maximize a figure window before saving the plots. It works with the Qt5Agg backend.
import matplotlib.pyplot as plt

def maximize_figure_window(figures=None, tight=True):
    """
    Maximize all the open figure windows, or only those given in figures.

    Parameters
    ----------
    figures : list, optional
        Figure numbers; if None, all open figures are maximized.
    tight : bool
        If True, apply the tight_layout option.
    """
    if figures is None:
        figures = plt.get_fignums()
    for fig in figures:
        plt.figure(fig)
        manager = plt.get_current_fig_manager()
        manager.window.showMaximized()
        if tight is True:
            plt.tight_layout()
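A usage sketch (assuming the Qt5Agg backend is active):
fig, ax = plt.subplots()
ax.plot(range(10))
maximize_figure_window()
plt.show()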
Problems:

1. I have to wait for the windows to actually be maximized before using the plt.savefig() command; otherwise the figure is saved as if it were not maximized. This is a problem if I simply want to use the above function in a script.

(minor problems:)

2. I have to use the above function twice to get the tight_layout option working, i.e. the first time tight=True has no effect.
3. The solution is not portable. Of course I can add all the possible backends I might use, but that's kind of ugly.
Questions:

1. How can I make the script wait for the windows to be maximized? I don't want to use time.sleep(tot_seconds), because tot_seconds would be kind of arbitrary and would make the function even less portable.
2. How can I solve problem 2? I guess it is related to problem 1.
3. Is there a portable solution to the "maximize all the open windows" problem?
-- Edit --
For problem 3, @DavidG's suggestion sounds good. I use tkinter to automatically get the screen width and height, convert them to inches, and use them in fig.set_size_inches or directly at figure creation via fig = plt.figure(figsize=(width, height)).
So a more portable solution is, for example:
import tkinter as tk
import matplotlib.pyplot as plt

def maximize_figure(figure=None):
    root = tk.Tk()
    # Screen size in millimetres, converted to inches.
    width = root.winfo_screenmmwidth() / 25.4
    height = root.winfo_screenmmheight() / 25.4
    if figure is not None:
        plt.figure(figure).set_size_inches(width, height)
    return width, height
where I allow figure to be None, so that I can use the function just to retrieve width and height and use them later.
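For example, a sketch of that second use:
w, h = maximize_figure()  # just retrieve the screen size in inches
fig = plt.figure(figsize=(w, h))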
Problem 1 is still there, though.
I use maximize_figure() in a plot function I created (let's say my_plot_func()), but the saved figure still doesn't have the right dimensions when saved to file.
I also tried time.sleep(5) in my_plot_func() right after the figure creation. Not working.
It works only if I manually run maximize_figure() in the console and then run my_plot_func(figure=maximized_figure) with the figure already maximized. Which means the dimension calculation and the saving parameters are correct.
It does not work if I run maximize_figure() and my_plot_func(figure=maximized_figure) together in the console, i.e. with one single call to the console! I really don't get why.
I also tried a non-interactive backend like 'Agg', so that the figure doesn't actually get created on screen. Not working (wrong dimensions), no matter whether I call the functions together or one after the other.
To summarize and clarify (problem 1):
By running these two pieces of code in the console, the figure gets saved correctly:
plt.close('all')
plt.switch_backend('Qt5Agg')
fig = plt.figure()
w, h = maximize_figure(fig.number)
followed by:
my_plot_func(out_file='filename.eps', figure=fig.number)
By running them together (as they would be in a script), the figure is not saved correctly:
plt.close('all')
plt.switch_backend('Qt5Agg')
fig = plt.figure()
w, h = maximize_figure(fig.number)
my_plot_func(out_file='filename.eps', figure=fig.number)
Using
plt.switch_backend('Agg')
instead of
plt.switch_backend('Qt5Agg')
it does not work in either case.
I would like to rotate an image to follow my mouse, and my school computers don't have PIL.
Bryan's answer is technically correct in that the PhotoImage class doesn't contain rotation methods, nor does tk.Canvas. But that doesn't mean we can't fix that.
def copy_img(img):
    newimg = tk.PhotoImage(width=img.width(), height=img.height())
    for x in range(img.width()):
        for y in range(img.height()):
            rgb = '#%02x%02x%02x' % img.get(x, y)
            newimg.put(rgb, (x, y))
    return newimg
The above function creates a blank PhotoImage object, then fills each pixel of a new PhotoImage with data from the image passed into it, making a perfect copy, pixel by pixel... Which is a useless thing to do.
But! Let's say you wanted a copy of the image that was upside-down. Change the last line in the function to:
newimg.put(rgb, (x, img.height()-1 - y))
And voilà! The function still reads from the top down, but it writes from the bottom up, resulting in a vertically flipped image. Want it rotated 90 degrees to the right instead?
newimg.put(rgb, (img.height() - 1 - y, x))
Substituting the y for the x makes it write columns as rows, effectively rotating the image. (Note the - 1: without it, the first column would be written one pixel out of bounds.)
How deep you go into image processing with PhotoImage objects is up to you. If you can get access to PIL (the Python Imaging Library)... someone has basically already done this work, refined it to be optimal for speed and memory consumption, and packaged it into convenient classes and functions for you. But if you can't use, or don't want, PIL, you absolutely CAN rotate PhotoImages. You'll just have to write the methods yourself.
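For instance, a minimal sketch of an arbitrary-angle rotation in the same pixel-by-pixel style (nearest-neighbour sampling on a fixed-size canvas, so corners that rotate out of frame are clipped; slow on large images):
import math
import tkinter as tk

def rotate_img(img, angle_deg):
    w, h = img.width(), img.height()
    newimg = tk.PhotoImage(width=w, height=h)
    a = math.radians(angle_deg)
    cx, cy = w / 2, h / 2
    for x in range(w):
        for y in range(h):
            # For each destination pixel, find the source pixel that rotates onto it.
            sx = int(cx + (x - cx) * math.cos(a) + (y - cy) * math.sin(a))
            sy = int(cy - (x - cx) * math.sin(a) + (y - cy) * math.cos(a))
            if 0 <= sx < w and 0 <= sy < h:
                newimg.put('#%02x%02x%02x' % img.get(sx, sy), (x, y))
    return newimg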
Thanks to acw1668's post that hipped me to the basics of PhotoImage manipulation here:
https://stackoverflow.com/a/41254261/9710971
You can't. The canvas doesn't support the ability to rotate images, and neither does the built-in PhotoImage class.
From the official Canvas documentation:
Individual items may be moved or scaled using widget commands described below, but they may not be rotated.