Draw2d Figure Rotate - transform

I have a RectangleFigure and I want to rotate it by an angle of 45 degrees or so. Please give me sample code. I tried the Transform class, but in vain.

Related

OpenCV - ArUco: detectMarkers fails to identify some markers in a photo

I have pictures containing ArUco markers, but I am unable to detect all of them with the detectMarkers function. Actually, I have many pictures: in some of them I can detect all the markers, while in others I cannot, and I don't really understand why.
I thought it was because of the quality of the photo, but it seems it is not so simple. Here's an example of my code:
import cv2
import matplotlib.pyplot as plt
from cv2 import aruco
aruco_dict = aruco.Dictionary_get(aruco.DICT_4X4_1000)
inputfile = 'EOS-1D-X-Mark-II_1201-ConvertImage.jpg'
frame = cv2.imread(inputfile)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
parameters = aruco.DetectorParameters_create()
# Detect markers on the greyscale image
corners, ids, rejectedImgPoints = aruco.detectMarkers(gray, aruco_dict, parameters=parameters)
# Draw the detected markers onto a copy of the frame
frame_markers = aruco.drawDetectedMarkers(frame.copy(), corners, ids)
plt.figure(figsize=(20, 10))
plt.imshow(frame_markers)
for i in range(len(ids)):
    c = corners[i][0]
    # Mark the centre of each detected marker with its id
    plt.plot([c[:, 0].mean()], [c[:, 1].mean()], "o", label="id={0}".format(ids[i]))
plt.legend()
plt.show()
In this picture, 1 marker is not detected and I don't understand why.
I tried to tune the parameters of the detectMarkers function manually with an interactive method in a Jupyter notebook. There are many parameters, and I found nothing that really helped, except that in some photos reducing polygonalApproxAccuracyRate made a difference.
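For illustration, a minimal sketch of that kind of tuning; the specific values here are only examples, not a known fix:
import cv2
from cv2 import aruco
frame = cv2.imread('EOS-1D-X-Mark-II_1201-ConvertImage.jpg')
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
aruco_dict = aruco.Dictionary_get(aruco.DICT_4X4_1000)
parameters = aruco.DetectorParameters_create()
# Lower the polygon-approximation tolerance so slightly distorted squares still qualify
parameters.polygonalApproxAccuracyRate = 0.02  # default is 0.03
# Allow larger adaptive-threshold windows for big, unevenly lit photos
parameters.adaptiveThreshWinSizeMax = 53       # default is 23
corners, ids, rejected = aruco.detectMarkers(gray, aruco_dict, parameters=parameters)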
The photo is originally 5472 x 3648 pixels, but the one I posted here is 2189 x 1459 pixels. Note that it doesn't work at the higher resolution either. Actually, I found that in some photos reducing the resolution helps to detect the markers... It seems contradictory, but I think the default parameters of the function are not adapted to my pictures; still, tuning the parameters gave me no solution.
Another idea is to use the refineDetectedMarkers function after calling detectMarkers. It takes the candidates that were found by detectMarkers but failed to be identified, and tries to refine their identification. However, as far as I understand, I need to know where my markers should be in the picture and pass that to refineDetectedMarkers (as a board). In my situation I don't know where the markers should be; otherwise I wouldn't take photos. The photos are used to observe precisely how their positions evolve.
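For reference, a rough sketch of what that call would look like; board here stands for the hypothetical layout object that is precisely what's missing in this situation:
# Hypothetical: 'board' would describe where the markers are expected to be
corners, ids, rejectedImgPoints, recovered = aruco.refineDetectedMarkers(
    gray, board, corners, ids, rejectedImgPoints)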
I am interested in any ideas you may have. Thanks for reading!

Photutils DAOPhot Not Fitting stars well?

I recently ran across the PhotUtils package and am trying to use it to perform PSF Photometry on some images I have. However, when I try to run the code, I get very strange results. When I plot the image generated by get_residual_image(), the stars are not removed well. Some sample images are shown below.
The first image has sigma set to 2.05, as in one of the sample programs in the Photutils documentation.
However, the stars only appear to be removed in their center.
The second image has sigma set to 5.0. This one is especially strange: some stars are way over-removed, some are under-removed, some black squares are added to the image, etc.
Here is my code:
import photutils
from photutils.psf import DAOPhotPSFPhotometry as DAOP
from photutils.psf import IntegratedGaussianPRF as PRF
from photutils.background import MMMBackground
bkg = MMMBackground()
background = 2.5 * bkg(img)
gaussian_prf = PRF(sigma=5.0)
gaussian_prf.sigma.fixed = False  # let the fitter adjust sigma
# DAOPhotPSFPhotometry(crit_separation, threshold, fwhm, psf_model, fitshape)
photTester = DAOP(8, background, 5, gaussian_prf, 31)
photResults = photTester(imgStars)
finalImg = photTester.get_residual_image()
After this, I simply plot the original and final images in Matplotlib, using a greyscale colormap. The reason the left images appear slightly darker is that they use a different color scaling.
Perhaps I have set one of the parameters incorrectly?
Could someone help me out with this? Thank you!
Looking at the residual image instantly told me that the background subtraction might be wrong. I could reproduce the result and wondered whether MMMBackground was doing its job correctly.
After taking a closer look at the documentation, Getting Started with Photutils finally gave the essential hint:
image -= np.median(image)
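A minimal sketch of how that hint slots into the code from the question (reusing imgStars and photTester from above):
import numpy as np
# Subtract the global sky level from the star image before fitting
imgStars = imgStars - np.median(imgStars)
photResults = photTester(imgStars)
finalImg = photTester.get_residual_image()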

Wrong width and height when using inset_axes and transData

I want to plot an inset by specifying its width and height in data coordinates. However, when converting these values to inches (as required by inset_axes) using transData.transform, the inset axes don't respect the given width and height. Any idea why? Here is my code:
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from matplotlib.projections import get_projection_class, get_projection_names
fig, ax_main = plt.subplots()
width = 0.5
height = 0.5
x_center = 0
y_center = 0
new_width, _ = (ax_main.transData.transform([width, 0]) - ax_main.transData.transform([0, 0])) / 72
_, new_height = (ax_main.transData.transform([0, height]) - ax_main.transData.transform([0, 0])) / 72
print(new_width, new_height)
### Process
ax_val = inset_axes(ax_main, width=new_width, height=new_height, loc=3,
                    bbox_to_anchor=(x_center, y_center),
                    bbox_transform=ax_main.transData,
                    borderpad=0.0)
Though I don't know in what way the result is unexpected, the problem might be due to a wrong conversion. transData transforms from data coordinates to pixel space. You divide the result by 72, but that only yields inches if the figure dpi is actually 72. By default the dpi is taken from the rc param "figure.dpi", which is 100 for a fresh matplotlib install, assuming you haven't changed your rc params.
To be on the safe side,
either set your figure dpi to 72, plt.subplots(dpi=72)
or divide by the figure dpi, (ax_main.... ) / fig.dpi
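For example, the second option applied to the conversion from the question (width and height in data units, as above):
import matplotlib.pyplot as plt
fig, ax_main = plt.subplots()
width = height = 0.5
# transData maps data coordinates to pixels; dividing by the figure dpi gives inches
new_width, _ = (ax_main.transData.transform([width, 0])
                - ax_main.transData.transform([0, 0])) / fig.dpi
_, new_height = (ax_main.transData.transform([0, height])
                 - ax_main.transData.transform([0, 0])) / fig.dpi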
However, much more generally, it seems you want to set the width and height of the inset_axes in data coordinates. So best don't specify the size in inches at all; rather, use the bounding box directly.
ax_val = inset_axes(ax_main, width="100%", height="100%", loc=3,
                    bbox_to_anchor=(x_center, y_center, width, height),
                    bbox_transform=ax_main.transData,
                    borderpad=0.0)
I updated the inset_axes documentation as well as the example a couple of months ago, so hopefully this case is also well covered. However, feel free to give feedback in case some information is still missing.
Even more interesting here might be the new option in matplotlib 3.0 to not use mpl_toolkits.axes_grid1.inset_locator, but the Axes.inset_axes method. It's still noted to be "experimental" but should work fine.
ax_val = ax_main.inset_axes((x_center, y_center, width, height), transform=ax_main.transData)

Get_transform not rotating image

I have been making a game in pyglet for quite a while, and I encountered an error that I cannot fix. I am trying to rotate an image using image.get_transform(rotate=deg), but I get the error
AssertionError: Only 90 degree rotations are supported.
I do not know how to fix this, and the degrees are always between -90 and 90, so I do not see why this is not working. The part of the code that does not work is here:
deg=round(game.getAngle(x,y))
print(deg)
self.sprite=self.sprite.get_transform(rotate=deg)
The getAngle function is defined as:
def getAngle(self, x, y):
    return math.degrees(math.atan(y / x))
Any help would be appreciated.
I am not sure which package you are using, but an alternative might be using rotate from scipy.ndimage.
Have a look at the docs here.
The code would essentially boil down to using:
from scipy import ndimage
rotated_image = ndimage.rotate(old_image, angle)
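As a self-contained sketch, with a toy array standing in for the game image:
import numpy as np
from scipy import ndimage
image = np.zeros((60, 60))
image[20:40, 25:35] = 1.0  # toy image: a bright rectangle on black
rotated = ndimage.rotate(image, 45, reshape=True)  # arbitrary angles work here
print(rotated.shape)  # larger than (60, 60) because reshape=True expands the canvas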

Basic importing coordinates into R and setting projection

OK, I am trying to upload a .csv file, get it into a SpatialPointsDataFrame, and set the projection system to WGS 84. I then want to determine the distance between each point. This is what I have come up with so far:
library(sp)         # coordinates(), CRS()
library(geosphere)  # dist2Line()
cluster <- read.csv(file = "cluster.csv", stringsAsFactors = FALSE)
coordinates(cluster) <- ~Latitude+Longitude
cluster <- CRS("+proj=longlat +datum=WGS84")
d <- dist2Line(cluster)
This returns an error that says
Error in .pointsToMatrix(p) :
points should be vectors of length 2, matrices with 2 columns, or inheriting from a SpatialPoints* object
But this isn't working, and I will be honest that I don't fully comprehend importing and manipulating spatial data in R. Any help would be great. Thanks.
I was able to determine the issue I was running into. With WGS 84, the longitude comes before the latitude. This is just backwards from how all the GPS data I download is formatted (e.g. lat-long). Hope this helps anyone else who runs into this issue!
Thus the code should have been:
cluster <- read.csv(file = "cluster.csv", stringsAsFactors = FALSE)
coordinates(cluster) <- ~Longitude+Latitude
# Assign the projection to the object instead of overwriting it
proj4string(cluster) <- CRS("+proj=longlat +datum=WGS84")
