I have been making a game in pyglet for quite a while, and I have run into an error that I cannot fix. I am trying to rotate an image using image.get_transform(rotate=deg), but I get the error
AssertionError: Only 90 degree rotations are supported.
I do not know how to fix this, and the degrees are always between -90 and 90, so I do not see why this is not working. The part of the code that does not work is here:
deg=round(game.getAngle(x,y))
print(deg)
self.sprite=self.sprite.get_transform(rotate=deg)
The getAngle function is as follows:
def getAngle(self,x,y):
    return math.degrees(math.atan(y/x))
Any help would be appreciated.
I am not sure which package you are using, but an alternative might be to use rotate from scipy.ndimage.
Have a look at the docs here
The code would essentially boil down to:
from scipy import ndimage
rotated_image = ndimage.rotate(old_image, angle)
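A small self-contained sketch of that call (the array and the angle here are only placeholders):
import numpy as np
from scipy import ndimage
old_image = np.zeros((64, 64), dtype=np.float32)  # placeholder image data
angle = 30  # rotation angle in degrees
# reshape=False keeps the output the same shape as the input instead of
# enlarging it to fit the rotated corners.
rotated_image = ndimage.rotate(old_image, angle, reshape=False)
print(rotated_image.shape)  # (64, 64)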
import turtle
t = turtle.Turtle()
t.right(100)
t.done()
That is my code, but nothing happens when I run it. Why is this happening, and what can I do about it?
t.right(100)
rotates the turtle by 100 degrees.
t.done()
is not a Turtle method; the function you are after is the module-level turtle.done(), which just keeps the window open. All your code essentially does is create a canvas, turn the turtle, and draw nothing on it. From here you will need functions such as t.forward() and t.pendown() to make it draw something; a minimal example is below, and this link will be helpful in getting started with the Python turtle.
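For example, a minimal script that actually draws something might look like this (the angle and distance are arbitrary):
import turtle
t = turtle.Turtle()
t.pendown()       # the pen is down by default; shown here for clarity
t.right(100)      # rotate the turtle 100 degrees clockwise
t.forward(150)    # move forward so a line is actually drawn
turtle.done()     # module-level call that keeps the window open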
I have pictures containing ArUco markers, but I am unable to detect all of them with the detectMarkers function. I have many pictures: in some of them I can detect all the markers, in others I cannot, and I don't really understand why.
I thought it was because of the quality of the photo, but it seems not to be that simple. Here's an example of my code:
import cv2
import matplotlib.pyplot as plt
from cv2 import aruco
aruco_dict = aruco.Dictionary_get(aruco.DICT_4X4_1000)
inputfile = 'EOS-1D-X-Mark-II_1201-ConvertImage.jpg'
frame = cv2.imread(inputfile)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
parameters = aruco.DetectorParameters_create()
corners, ids, rejectedImgPoints = aruco.detectMarkers(gray, aruco_dict, parameters=parameters)
# Draw the detected markers and label each one with its id at its centre.
frame_markers = aruco.drawDetectedMarkers(frame.copy(), corners, ids)
plt.figure(figsize=(20, 10))
plt.imshow(frame_markers)
for i in range(len(ids)):
    c = corners[i][0]
    plt.plot([c[:, 0].mean()], [c[:, 1].mean()], "o", label="id={0}".format(ids[i]))
plt.legend()
plt.show()
In this picture, 1 marker is not detected and I don't understand why.
I tried to tune the parameters of the detectMarkers function manually, interactively in a Jupyter notebook. There are many parameters, and I found nothing that really helped, except that in some photos reducing polygonalApproxAccuracyRate made a difference.
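For illustration, the kind of tuning I mean looks roughly like this (the value is only an example, not one that reliably fixed the detection):
parameters = aruco.DetectorParameters_create()
parameters.polygonalApproxAccuracyRate = 0.01  # lower than the default
corners, ids, rejectedImgPoints = aruco.detectMarkers(gray, aruco_dict, parameters=parameters)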
The photo is originally 5472 x 3648 pixels, but the one I posted here is 2189 x 1459 pixels. Note that it doesn't work at the higher resolution either. In fact, for some photos I found that reducing the resolution helps to detect the markers... It seems contradictory, but I think this is because the default parameters of the function are not suited to my pictures; still, I found no solution when tuning the parameters.
Another idea is to use the refineDetectedMarkers function after calling detectMarkers. It takes the candidates that were found by detectMarkers but could not be identified and tries to refine their identification. However, as far as I understand, I need to know where my markers should be in the picture and pass that to refineDetectedMarkers (as a board). In my situation, I don't know where the markers should be; otherwise I wouldn't be taking photos. The photos are used precisely to observe how their positions evolve.
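As far as I understand, the call would need something like the following, with a hypothetical GridBoard describing where the markers are expected to be, which is exactly what I don't have:
board = aruco.GridBoard_create(5, 7, 0.04, 0.01, aruco_dict)  # made-up layout
corners, ids, rejectedImgPoints, recoveredIds = aruco.refineDetectedMarkers(gray, board, corners, ids, rejectedImgPoints)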
I am interested in any ideas you may have, thanks for reading!
I'm currently trying to perform a Polar to Cartesian Coordinate Image transformation, to display a raw sonar image into a 'fan-display'.
Initially I have a NumPy array image of type np.float64, which can be seen below:
After doing some searching, I came across this StackOverflow post Inverse transform an image from Polar to Cartesian in OpenCV with a very similar problem, in which the poster seemed to have solved his/her issue by using the Python Wand library (http://docs.wand-py.org/en/0.5.9/index.html), specifically using their set of Distortion functions.
However, when I tried to use Wand and read the image in, I instead ended up with Wand producing the image below, which seems to be smaller than the original one. The weird thing, however, is that img.size still reports the same dimensions as the original image's shape.
The code for this transformation can be seen below:
print(raw_img.shape)  #=> (369, 256)
wand_img = Image.from_array(raw_img.astype(np.uint8), channel_map="I")
display(wand_img)
print("Current image size", wand_img.size)  #=> "Current image size (369, 256)"
This is definitely problematic, as Wand will then produce the wrong 'fan image'. Has anybody run into this kind of problem with the Wand library before, and if so, what is the recommended way to fix it?
If this issue isn't resolved soon, I have a backup alternative: using OpenCV's cv::remap function (https://docs.opencv.org/4.1.2/da/d54/group__imgproc__transform.html#ga5bb5a1fea74ea38e1a5445ca803ff121). The problem with this, however, is that I'm not sure which mapping arrays (i.e. map_x and map_y) to use to perform the Polar->Cartesian transformation, as using mapping matrices that implement the transformation equations below:
r = polar_distances(raw_img)
x = r * cos(theta)
y = r * sin(theta)
didn't seem to work and instead threw out errors from OpenCV as well.
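For reference, my understanding is that cv::remap needs the inverse mapping, i.e. for every output (Cartesian) pixel, which polar pixel to sample. A rough sketch of that idea is below; the layout (range along rows, beam angle along columns, range bin 0 at the fan's apex) and the field of view are assumptions about my data:
import numpy as np
import cv2
n_ranges, n_beams = raw_img.shape[:2]   # assuming rows = range bins, cols = beams
fov_deg = 90.0                           # assumed sonar field of view
out_h, out_w = n_ranges, 2 * n_ranges    # output fan canvas
# Coordinates of every output pixel relative to the fan apex (bottom centre).
xs = np.arange(out_w, dtype=np.float32) - out_w / 2.0
ys = (out_h - 1) - np.arange(out_h, dtype=np.float32)
xx, yy = np.meshgrid(xs, ys)
r = np.sqrt(xx ** 2 + yy ** 2)           # range of each output pixel
theta = np.degrees(np.arctan2(xx, yy))   # angle from the centre line
# Inverse maps: for each output pixel, where to sample in the polar image.
map_x = ((theta + fov_deg / 2.0) / fov_deg * (n_beams - 1)).astype(np.float32)
map_y = r.astype(np.float32)
fan = cv2.remap(raw_img.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT, borderValue=0)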
Any kind of help and insight into this issue is greatly appreciated. Thank you!
- NickS
EDIT: I've tried another example image as well, and it shows a similar problem. First, I imported the image into Python using OpenCV with these lines of code:
import matplotlib.pyplot as plt
from wand.image import Image
from wand.display import display
import cv2
img = cv2.imread("Test_Img.jpg")
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.figure()
plt.imshow(img_rgb)
plt.show()
which showed the following display as a result:
However, when I then tried to open the img_rgb array with Wand, using the code below:
wand_img = Image.from_array(img_rgb)
display(wand_img)
I'm getting the following result instead.
I tried opening the file directly with wand.image.Image(), which displays the image correctly with the display() function, so I believe there isn't anything wrong with the Wand installation on the system.
Is there a step I'm missing to convert the NumPy array into a Wand Image? If so, what would it be, and what is the suggested way to do it?
Please keep in mind that the conversion from NumPy to a Wand Image is crucial here: the raw sonar images are stored as binary data, so NumPy is needed to turn them into proper images.
Is there a step I'm missing to convert the NumPy array into a Wand Image?
No, but there is a bug in Wand's NumPy handling in Wand 0.5.x. The shape of OpenCV's ndarray is (ROWS, COLUMNS, CHANNELS), but Wand expects the ndarray as (WIDTH, HEIGHT, CHANNELS). I believe this has been fixed for the future 0.6.x releases.
If so, what would it be, and what is the suggested way to do it?
Swap the values in img_rgb.shape before passing to Wand.
# Reinterpret the array's shape to match the (WIDTH, HEIGHT, CHANNELS) layout
# that Wand 0.5.x expects; the underlying buffer is left untouched.
img_rgb.shape = (img_rgb.shape[1], img_rgb.shape[0], img_rgb.shape[2],)
with Image.from_array(img_rgb) as img:
    display(img)
I recently ran across the PhotUtils package and am trying to use it to perform PSF Photometry on some images I have. However, when I try to run the code, I get very strange results. When I plot the image generated by get_residual_image(), the stars are not removed well. Some sample images are shown below.
The first image has sigma set to 2.05, as it is in one of the sample programs in the PhotUtils documentation:
However, the stars only appear to be removed at their centers.
The second image has sigma set to 5.0. This one is especially strange: some stars are way over-removed, some are under-removed, some black squares are added to the image, etc.
Here is my code:
import photutils
from photutils.psf import DAOPhotPSFPhotometry as DAOP
from photutils.psf import IntegratedGaussianPRF as PRF
from photutils.background import MMMBackground
bkg = MMMBackground()
background = 2.5 * bkg(img)          # scalar background level estimated from the image
gaussian_prf = PRF(sigma=5.0)        # Gaussian PSF model
gaussian_prf.sigma.fixed = False     # let the fit adjust sigma
# Positional arguments: crit_separation, threshold, fwhm, psf_model, fitshape
photTester = DAOP(8, background, 5, gaussian_prf, 31)
photResults = photTester(imgStars)
finalImg = photTester.get_residual_image()
After this, I simply plot the original and final images with Matplotlib, using a greyscale colormap. The reason the left images appear slightly darker is that they use a different color scaling.
Perhaps I have set one of the parameters incorrectly?
Could someone help me out with this? Thank you!
Looking at the residual image immediately told me that the background subtraction might be wrong. I could reproduce the result and wondered whether MMMBackground was not doing the job correctly.
After taking a closer look at the documentation, Getting started with Photutils finally gave the essential hint:
image -= np.median(image)
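Applied to the code in the question, that amounts to something like the following sketch, reusing the imgStars array and the photTester object defined there:
import numpy as np
# Subtract a rough estimate of the sky level before fitting, as in the
# Photutils getting-started example.
imgStars = imgStars - np.median(imgStars)
photResults = photTester(imgStars)
finalImg = photTester.get_residual_image()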
I have a RectangleFigure and I want to rotate it by an angle of 45 degrees or so.
Please give me some sample code. I tried with the Transform class, but in vain.