In PyQt 5, is there a way to obtain all pixel positions that would be modified by a call to QPainter.drawPolygon for a QPainter object constructed with some QImage as an argument without actually drawing the polygon? Ideally I would like to obtain separate sets of pixel positions for the polygon's border and for all pixels inside the polygon.
As @ekhumoro said, QPolygon is a subclass of QVector (that is, a QList). However, in PyQt this is exposed as a Python sequence rather than a QList, and I got runtime errors when trying to iterate over it because the list is held inside the QPolygon object and there is no getter for it. In this case the PyQt solution is not very efficient: you need to iterate over every pixel of the image, create a QPoint with the pixel coordinates, and check whether the QPolygon contains that point via the containsPoint method. There aren't many implementation details, but consider the following code snippet.
from PyQt5.QtCore import QPoint, Qt
from PyQt5.QtGui import QPolygon

array_qpoints = []  # this list will hold all the QPoints inside the polygon
polygon = QPolygon([
    QPoint(140, 234),
    QPoint(126, 362),
    QPoint(282, 409),
    QPoint(307, 273),
    QPoint(307, 233),
])
# let's consider a 640x480 image
for x in range(640):
    for y in range(480):
        point = QPoint(x, y)
        if polygon.containsPoint(point, Qt.FillRule.OddEvenFill):
            array_qpoints.append(point)
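If scanning the whole image is too slow, one possible refinement (my own sketch, not part of the original answer) is to restrict the loop to the polygon's bounding rectangle via QPolygon.boundingRect():

# Sketch: only test the pixels inside the polygon's bounding rectangle.
rect = polygon.boundingRect()
for x in range(rect.left(), rect.right() + 1):
    for y in range(rect.top(), rect.bottom() + 1):
        point = QPoint(x, y)
        if polygon.containsPoint(point, Qt.FillRule.OddEvenFill):
            array_qpoints.append(point)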
You can get the coordinates of each pixel by calling the x() and y() methods for each element in array_qpoints.
for point in array_qpoints:
    x = point.x()
    y = point.y()
    # do what you want with the information
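The question also asks for the border pixels as a separate set, which the code above doesn't distinguish. One possible extension of the same pixel-testing idea (an untested sketch, and only an approximation of the outline a QPainter pen would actually touch) is to classify a filled pixel as border when at least one of its 4-neighbours lies outside the polygon:

# Sketch: split the filled pixels into border and interior sets.
filled = {(p.x(), p.y()) for p in array_qpoints}
border = set()
interior = set()
for x, y in filled:
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    if all(n in filled for n in neighbours):
        interior.add((x, y))
    else:
        border.add((x, y))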
I'm posting this answer for others who visit this question and are looking for a solution by code. Since it's been several years, if you've found a better solution, please post :)
Suppose we have an Axes object and a graph plotted in the coordinate system defined by these axes.
Is it possible to "zoom out" along one or both axes, so that we can see more of the graph, while the on-screen dimensions of the coordinate system are kept constant?
For example, I've tried using a ValueTracker for the x_range of both the Axes and the graph, but it gives strange and unexpected results.
class Test(Scene):
    def construct(self):
        x_max_tracker = ValueTracker(0.0)
        axes = always_redraw(lambda: Axes(
            (-np.pi, x_max_tracker.get_value(), 0.5), (-5., 5.),
            width=8, height=10
        ))
        xsin_graph = always_redraw(
            lambda: axes.get_graph(
                lambda x: 0.5*x*np.sin(x)-1, color=BLUE,
                x_range=[-np.pi, x_max_tracker.get_value()]
            )
        )
        self.play(
            Write(axes, lag_ratio=0.01, run_time=1), ShowCreation(xsin_graph)
        )
        self.wait(2)
        self.play(x_max_tracker.animate.set_value(4*np.pi), run_time=2)
An additional but connected question: is it possible to specify the position of the coordinate system (Axes) at initialization?
UPDATE
I have defined a method generate_axes() which 1) generates the Axes object and 2) places it at specified coordinates in the Scene.
Now, if I call always_redraw on this generate_axes() method (keeping the x_tracker from the code above to control the x_range), I get a nice "zoom-out/in" animation by calling play(x_tracker.animate.set_value(X)).
However, this doesn't change the axes variable, which apparently still points to the initial Axes object with the unmodified x_range. I thought always_redraw() creates a new mobject each frame? Somehow the updated object is passed to the Scene to be displayed, but it can't be accessed: for example, if I print axes.x_range after the animation ends, I get the initial x_range value.
P.S.: I am using the manimgl package, so the always_redraw method is probably not from the standard manim package; it is essentially add_updater combined with become.
Currently, Axes unfortunately do not support the sort of rescaling you would like to use. The easiest way to achieve this sort of behavior is probably to implement a custom animation that repeatedly updates the axes, and any curves within them, with become.
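For illustration, a minimal and untested sketch of that approach in manimgl, reusing the tracker and ranges from the question (the class and helper names are made up):

# Sketch only: rebuild the Axes each frame and copy them onto the live mobject with become.
class ZoomOut(Scene):
    def construct(self):
        x_max_tracker = ValueTracker(0.0)

        def make_axes():
            return Axes((-np.pi, x_max_tracker.get_value(), 0.5), (-5., 5.),
                        width=8, height=10)

        axes = make_axes()
        axes.add_updater(lambda m: m.become(make_axes()))
        self.add(axes)
        self.play(x_max_tracker.animate.set_value(4*np.pi), run_time=2)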
And as for your second question: Axes are always drawn in a way such that the center of the mobject is in the scene origin. You can move them to where you would like to show them, and only add them after moving.
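For example (a small sketch; the shift is arbitrary):

axes = Axes((-np.pi, np.pi, 0.5), (-5., 5.), width=8, height=10)
axes.move_to(2*RIGHT + UP)  # position the whole coordinate system before adding it
self.add(axes)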
Update
.become creates a new mobject, yes, but then only transfers some of that new mobject's properties and attributes to the original mobject. If there are attributes that you need updated, it is best to simply update them yourself in your method -- which is also why using a general updater function is more flexible than always_redraw.
And for future reference: make sure to say right away whether you are working with manim or manimgl, they are substantially different in some aspects.
I'm working on a program in Godot using GDScript, and I ran into a problem when trying to use the Transform.translated(Vector3) function. My code is supposed to move a bone to (0,0,0) by translating it by its current coordinates with a negative sign. Example: (1,2,3) would be translated by (-1,-2,-3) so it ends up at (0,0,0). For some reason when I do this, the end position of the bone is not (0,0,0), but some other coordinate. The Godot documentation says the translated function works "relative to the transform's basis vectors", so maybe that's why? Also, if there is a better way to change a bone's position than using the Transform.translated(Vector3) function, that would be helpful too. Thanks!
My Code:
bonePose = skel.get_bone_global_pose(bone)
var globalBonePose = skel.to_global(bonePose.origin)
translateVector = -globalBonePose
var newPose = bonePose.translated(translateVector)
skel.set_bone_pose(bone, newPose)
Code Output / Results:
bonePose (the original position of the bone) is around (-0.82,0.49,0.50)
translateVector (the amount the bone will be translated) is around (0.82,-0.49,-0.50)
newPose (the final position of the bone -- should be [0,0,0]) is around (0.82,-0.66,-0.46). Even when I call skel.to_global(newPose.origin) to see the global coordinates, it's (-0.76,0.44,0.42), which is not (0,0,0)
In Godot a Transform is composed of a basis (a Basis) and an origin (a Vector3). The origin handles the translation part of the transform, and the Basis handles the rest.
A Basis is the set of vectors that define the coordinate system: one vector defines the x axis, another the y axis, and another the z axis. This is how Godot encodes rotation and scaling transformations.
When the documentation says "relative to the transform's basis vectors", it means the Basis will be applied to the vector you pass in. Thus, in your case, you are getting a translation in the local space of the bone, which implies that if the bone is rotated or scaled, that will affect the translation.
If you don't want to deal with rotation, scaling, and so on, I suggest you work with the origin of the Transform instead.
If you have a Transform and you want another that is otherwise equal but located at (0, 0, 0), you do this:
var new_transform = Transform(transform.basis, Vector3.ZERO)
Or replace Vector3.ZERO with whatever origin you want to give the new transform.
I also need to remind you that get_bone_global_pose and set_bone_pose do not operate in the same space: set_bone_pose is relative to the parent bone, while get_bone_global_pose is relative to the Skeleton. Thus, I suggest you use set_bone_global_pose_override instead.
The final piece you need is the opposite of Spatial.to_global, because setting the pose as follows…
bonePose = skel.get_bone_global_pose(bone)
var newPose = Transform(bonePose.basis, Vector3.ZERO)
skel.set_bone_global_pose_override(bone, newPose, 1.0)
…would place it at the origin of the Skeleton.
Well, the opposite of Spatial.to_global is Spatial.to_local, and you would use it like this:
bonePose = skel.get_bone_global_pose(bone)
var newPose = Transform(bonePose.basis, skel.to_local(Vector3.ZERO))
skel.set_bone_global_pose_override(bone, newPose, 1.0)
Here skel.to_local(Vector3.ZERO) should give the origin of the world relative to the Skeleton. And given that set_bone_global_pose_override wants a Transform relative to the Skeleton, the result should be that the bone is placed at the origin of the world, with its rotation and scaling preserved.
I'm starting with cocos2d for Python and would like to flip a sprite along its x (or y) axis. From what I gather this should be possible with the underlying pyglet library, but I couldn't figure out how. I tried it like this:
import cocos

class Ninja(cocos.sprite.Sprite):
    def __init__(self):
        super(Ninja, self).__init__("Idle__000.png")
        self.flip_x = True
I think there should be a flip() or transform() function somewhere, but I couldn't find anything while going through the cocos2d-python and pyglet sources.
How can I flip a sprite after instantiation?
Alternative approach: if I can't flip a sprite programmatically, I'd try to just swap out the picture with an already flipped version. How would I do this then?
If there is no flip method on Sprite, try setting the scale_x or scale_y property to -1, or create the Ninja sprite with a scale parameter. There is a list of parameters for sprite initialization here:
http://python.cocos2d.org/doc/api/cocos.sprite.html?highlight=cocos.sprite.sprite#cocos.sprite.Sprite
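A minimal sketch of that suggestion (untested; it assumes "Idle__000.png" from the question is available and that your cocos2d version exposes scale_x/scale_y):

import cocos

class Ninja(cocos.sprite.Sprite):
    def __init__(self):
        super(Ninja, self).__init__("Idle__000.png")
        self.scale_x = -1  # mirror horizontally; use scale_y = -1 to flip vertically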
I already know how to darken a CCSprite object by:
sprite.color = ccc3(x, x, x); // x is a value less than 255
(As far as I know, this should be a direct mapping of OpenGL functions, so it's easy to achieve.)
But when it comes to lightening, my current solution is to add another mask sprite (same shape but all in white), change its blendFunc to { GL_SRC_ALPHA, GL_ONE }, and overlay it onto the target. Besides all the extra code, a mask image is needed for every sprite that has to be lightened.
Is there a way to lighten a sprite as easily as darkening it?
It is not as easy as setColor, but in Cocos2d 2.x, with OpenGL ES 2.0 support, you can achieve this by using custom shaders. You can get started here:
http://www.raywenderlich.com/10862/how-to-create-cool-effects-with-custom-shaders-in-opengl-es-2-0-and-cocos2d-2-x
You may also try inverting the sprite's darker color to get a lighter one.
How do I draw a bitmap to a DC, while rotating it by a specified angle?
I agree with Al - he deserves the answer, but this (admittedly untested) code fragment should do what you asked for:
import wx

def write_bmp_to_dc_rotated(dc, bitmap, angle):
    '''
    Rotate a bitmap and write it to the supplied device context.
    '''
    img = bitmap.ConvertToImage()
    img_centre = wx.Point(img.GetWidth() // 2, img.GetHeight() // 2)
    img = img.Rotate(angle, img_centre)  # angle is in radians
    dc.DrawBitmap(img.ConvertToBitmap(), 0, 0)
One thing to note though from the docs:
...using wxImage is the preferred way to load images in wxWidgets, with the exception of resources...
Was there a particular reason to load it as a bitmap rather than a wx.Image?
I'm not sure that this is the best way of doing it, but one option would be to convert it to a wx.Image with ConvertToImage (wxWidgets help) and then use the rotate function (wxWidgets help). You could then (if necessary) convert it back with ConvertToBitmap (wxWidgets help).
I couldn't see an obvious function that could be used to apply a coordinate transform to the drawing context (DC), but there may be one in there somewhere...
Hope that helps.
A better way would be to use a graphics context if you want a generic rotation, e.g. to rotate a bitmap, text, or any other drawing path:
gc = wx.GraphicsContext.Create(dc)
gc.Rotate(angle)  # angle is in radians
gc.DrawText("anurag", 100, 100)
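Along the same lines, a hedged, untested sketch for drawing a rotated bitmap through a graphics context (draw_rotated_bitmap is a made-up helper name; here the angle is taken in degrees):

import math
import wx

def draw_rotated_bitmap(dc, bitmap, angle_degrees, x, y):
    # Sketch: rotate the coordinate system, then draw the bitmap at the new origin.
    gc = wx.GraphicsContext.Create(dc)
    gc.Translate(x, y)                      # move the origin to the target position
    gc.Rotate(math.radians(angle_degrees))  # GraphicsContext.Rotate expects radians
    gc.DrawBitmap(bitmap, 0, 0, bitmap.GetWidth(), bitmap.GetHeight())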