I am working on a simple game in Python 3 and I need to draw a rectangle rotated by a given angle and around a specific centre of rotation.
Is there a simple way of doing this?
If you are using the pygame library, the pygame.transform.rotate(Surface, angle) function may be worth a try; note that it returns a new surface rotated around its own center.
Without a code sample it is hard to tell exactly what context you are trying to apply this in.
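For what it's worth, here is a minimal sketch, assuming the rectangle lives on its own Surface and the pivot is the rectangle's center (the function name and numbers are just illustrative). rotate() returns a new, larger surface, so re-centering its bounding rect on the pivot keeps the rotation around that point.

```python
import pygame

def draw_rotated_rect(screen, color, size, pivot, angle):
    """Draw a `size` rectangle rotated `angle` degrees around `pivot`.

    Assumes the pivot is the rectangle's own center: rotate() returns a
    new, larger surface, so re-centering its bounding rect on the pivot
    keeps the rotation around that point.
    """
    rect_surf = pygame.Surface(size, pygame.SRCALPHA)     # transparent backing surface
    rect_surf.fill(color)
    rotated = pygame.transform.rotate(rect_surf, angle)   # degrees, counter-clockwise
    screen.blit(rotated, rotated.get_rect(center=pivot))

pygame.init()
screen = pygame.display.set_mode((400, 300))
screen.fill((30, 30, 30))
draw_rotated_rect(screen, (200, 80, 80), (120, 60), pivot=(200, 150), angle=30)
pygame.display.flip()
pygame.time.wait(2000)   # keep the window up briefly
```

If the pivot is not the rectangle's center, you would additionally rotate the offset from the pivot to the center (e.g. with pygame.math.Vector2.rotate()) before re-centering.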
I'm trying to draw a circle with filled, randomly generated polygons drawn on top of it, but I can't work out how to make it so that the polygons are only drawn on top of the circle.
Here's a mockup as an example:
I have achieved the random polygons drawn on a circle, using the love.graphics.polygon() function with a set of randomly generated points, but I'm looking for a way of clipping them when they're drawn so that they're only filled in on top of the circle.
Here's what I've actually got so far:
So, my question is: is there a function that I can call in the love.draw function that clips parts of the polygon drawn outside of a range, or is it going to be harder to fix than that?
Thanks in advance!
It turns out that I could have just spent a minute looking at the love.graphics documentation. Anyway, the love.graphics.stencil() function and its counterpart love.graphics.setStencilTest() are just what I needed.
You can pass the draw function for the circle to love.graphics.stencil(), and then, using setStencilTest(), stop pixels from being drawn outside that circle. The documentation has some good examples.
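For reference, a minimal sketch of how the two calls fit together, assuming LÖVE 11.x; the circle position/radius and the `polygons` table of vertex lists stand in for your own data:

```lua
-- Clip randomly generated polygons to a circle using the stencil buffer.
local cx, cy, radius = 200, 200, 150

local function circleStencil()
    love.graphics.circle("fill", cx, cy, radius)
end

function love.draw()
    -- draw the base circle normally
    love.graphics.setColor(0.9, 0.9, 0.9)
    love.graphics.circle("fill", cx, cy, radius)

    -- write the circle's shape into the stencil buffer
    love.graphics.stencil(circleStencil, "replace", 1)
    -- only allow pixels where the stencil value is greater than 0
    love.graphics.setStencilTest("greater", 0)

    love.graphics.setColor(0.2, 0.4, 0.8)
    for _, points in ipairs(polygons or {}) do   -- `polygons` generated elsewhere
        love.graphics.polygon("fill", points)
    end

    -- disable the stencil test for the rest of the frame
    love.graphics.setStencilTest()
end
```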
[image: the nine black doublets arranged in a circle]
My goal is to take the image above and "open" it along the center so that the 9 black doublets lie in a straight line rather than in a circle. I have tried using the cv2.warpPolar() function in OpenCV, but the result is quite distorted, as can be seen below:
[image: the distorted result of the polar unwrap]
I am now trying a different approach. Starting from the center, I would like to access each of the doublets individually, like a pizza slice, and place them side by side.
Initially I was thinking of slicing each doublet using two lines from the center of the image to the mid point between the doublets on either side.
My question is: how can I draw contours from the center of the image to its edge, passing through the midpoint between any two doublets? If I can draw one, I know that the angle between any two consecutive contours is 40 degrees.
Any help is greatly appreciated!
I noted a few problems here:
1. The warpPolar() conversion may have been centered on the image file, but that is not the center of the object, which causes part of the distortion. If you share your code, I could try playing with it and improving it.
2. The object is somewhat elliptical, not circular, so you will still have a wave after correcting the first problem.

If you don't mind a semi-automatic solution, you could use OpenCV mouse events to specify the first line and let the program use the 40-degree angle to calculate the rest.
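Since no code was posted, here is a rough sketch of the first fix only, assuming the ring can be isolated by a simple threshold (the file name, output size and radius are placeholders): compute the object's centroid from image moments and pass that center, rather than the image center, to cv2.warpPolar().

```python
import cv2

# Hypothetical input file; replace with your own image path.
img = cv2.imread("doublets.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Rough segmentation of the dark doublets/ring on a light background
# (assumption: the object is darker than the background; adjust as needed).
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Centroid of the object from image moments: this is the center the polar
# unwrap should use, not the center of the image file.
m = cv2.moments(mask)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

# Unwrap around the object's center. maxRadius is a guess; tune it so the
# whole ring fits in the output.
max_radius = 0.5 * min(img.shape[:2])
unwrapped = cv2.warpPolar(
    img,
    (512, 512),                              # output size (x = radius axis, y = angle axis)
    (cx, cy),                                # center of the transform
    max_radius,
    cv2.WARP_POLAR_LINEAR + cv2.INTER_LINEAR,
)

cv2.imwrite("unwrapped.png", unwrapped)
```

Correcting the slight ellipticity (the second point) would need an extra step, for example fitting an ellipse to the ring and warping it back to a circle before the polar unwrap.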
I want to make a game such that you have a circle moving around and several other circles chasing it. In order to destroy the enemies you must hit spacebar which draws a circle with a gradient that destroys nearby enemies.
I was wondering whether checking the colour at the top, bottom, left and right is more efficient than checking the collision of the circles, or whether there is a better way altogether to do this more efficiently.
To be completely honest, if you are using pygame 1.8.1 or later, then since you are working with circles I would try pygame.sprite.collide_circle().
Here's where you can find the documentation for it https://www.pygame.org/docs/ref/sprite.html#pygame.sprite.collide_circle
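As a rough sketch of how that might look (the class and variable names here are made up), collide_circle plugs into pygame.sprite.spritecollide() as the collided callback and uses each sprite's radius attribute if you set one, falling back to a radius derived from the rect otherwise:

```python
import pygame

pygame.init()

class Ball(pygame.sprite.Sprite):
    """Minimal circular sprite; pygame's collide_circle uses .radius if present."""
    def __init__(self, pos, radius, color):
        super().__init__()
        self.image = pygame.Surface((radius * 2, radius * 2), pygame.SRCALPHA)
        pygame.draw.circle(self.image, color, (radius, radius), radius)
        self.rect = self.image.get_rect(center=pos)
        self.radius = radius   # used by pygame.sprite.collide_circle

player = Ball((100, 100), 20, (80, 200, 80))
enemies = pygame.sprite.Group(
    Ball((110, 105), 15, (200, 80, 80)),   # overlaps the player
    Ball((300, 300), 15, (200, 80, 80)),   # far away
)

# Circle-vs-circle test instead of the default rect test; dokill=True removes hits.
hit = pygame.sprite.spritecollide(player, enemies, True,
                                  collided=pygame.sprite.collide_circle)
print(len(hit), "enemy destroyed")   # -> 1 enemy destroyed
```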
I'm starting to develop a proof of concept with the main features of a turn-based RPG similar to Breath of Fire 4: a mixture of a 3D environment with characters and items drawn as billboards.
I'm using an orthographic camera angled 30 degrees on the X axis, and I set up my sprite to act as a billboard with its pivot in the center. The problem occurs when the sprite gets close to a 3D object such as a wall.
Check out the image:
I tried the solution of leaving the billboard's rotation matrix "upright", and it worked well, but of course, depending on the height and angle of the camera toward the billboard, it gets somewhat flattened. I also changed the pivot to the bottom of the sprite, but the problem appears with objects in front of the sprite too. I was thinking the solution would be a fragment shader that relies on the depth texture from a previous pass, but I could not figure out how to do it with shaders. Could you point me to an article or anything that puts me in the right direction? Thank you.
See what I am trying to achieve in this video.
You had the right approach. Keep the upright matrix, and scale the billboards up along Z to compensate for the flattening introduced by your camera. The Z scale factor should be about 1.1547, i.e. 1 / cos(30°), which makes the billboards look their original size from a camera tilted 30 degrees. It seems like a trick, but the developers of BoF4 in the video may well have used the same solution.
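To make the arithmetic concrete, here is a small, engine-agnostic sketch (plain Python/NumPy; the function name and axis convention are illustrative) that derives the compensation from the camera's tilt angle instead of hard-coding 1.1547:

```python
import math
import numpy as np

def billboard_scale_matrix(camera_tilt_deg, up_axis="z"):
    """Scale matrix that stretches an upright billboard along its up axis
    by 1 / cos(tilt), compensating for the foreshortening introduced by a
    camera tilted `camera_tilt_deg` degrees toward the ground.
    """
    s = 1.0 / math.cos(math.radians(camera_tilt_deg))
    scale = np.identity(4)
    axis_index = {"x": 0, "y": 1, "z": 2}[up_axis]
    scale[axis_index, axis_index] = s
    return scale

# For the 30-degree camera in the question the factor is ~1.1547.
print(billboard_scale_matrix(30.0)[2, 2])   # prints roughly 1.1547
```

The resulting matrix would be multiplied into the upright billboard's model matrix, or you could simply scale the sprite's height by the same factor.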
So, I'm currently developing a puzzle game of sorts, and I came upon something I'm not sure how to approach.
As you can see from the screenshot below, the text on the sides next to the main square is distorted along the diagonal of the quadrilateral. This is because it is not a screenshot of a 3D environment, but of a 2D environment where the squares have been stretched so that they look 3D.
I have tried using a 3D perspective and changing depths, and while that solves the issue of the distorted sides, I was wondering whether it's possible to fix this without going 3D. Mainly because the current mesh transformation scheme took a while to get right, and converting it to something that works in 3D space is extra effort that might be avoidable.
I have a feeling this is unavoidable, but I'm curious if anyone knows a solution. I'm currently using OpenGL ES 1.
Probably not the answer you wanted, but I'd go with the 3D transformation because it will not only spare you this distortion, but will also simplify many other things down the road and give you opportunities for nice effects.
What you are lacking in this scene is "perspective-correct interpolation", which is slightly non-linear, and is done automatically when you provide coordinates with depth information.
It may be possible to emulate it another way (though your options are limited, since you do not have shaders available), but any alternative will likely be less efficient than using the dedicated functionality of your GPU. I recommend switching to 3D coordinates.
Actually, I just found the answer. Turns out there's a Q coordinate which you can use to play around with trapezoidal texture distortion:
texture mapping a trapezoid with a square texture in OpenGL
http://www.xyzw.us/~cass/qcoord/
http://hacksoflife.blogspot.com.au/2008/08/perspective-correct-texturing-in-opengl.html
It looks like it won't be as correct as doing it in 3D, but I suppose it will be easier for my use right now.
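To make the trick from those links concrete, here is a small, self-contained sketch (plain Python, no GL calls) that computes the 4-component (s, t, r, q) texture coordinates for a symmetric trapezoid: q is chosen per vertex so that the fixed-function per-fragment division by q reproduces the perspective-correct interpolation the flat geometry is missing.

```python
def trapezoid_texcoords(top_width, bottom_width):
    """4-component (s, t, r, q) texture coordinates for mapping a square
    texture onto a symmetric trapezoid without the diagonal "fold".

    q is proportional to 1 / (edge width), so the narrower edge gets the
    larger q; the per-fragment division by q then reproduces the
    perspective-correct interpolation the 2D geometry is missing.
    Vertex order: bottom-left, bottom-right, top-right, top-left.
    """
    q_bottom = 1.0
    q_top = bottom_width / top_width   # > 1 when the top edge is narrower
    return [
        (0.0 * q_bottom, 0.0 * q_bottom, 0.0, q_bottom),  # bottom-left  (s=0, t=0)
        (1.0 * q_bottom, 0.0 * q_bottom, 0.0, q_bottom),  # bottom-right (s=1, t=0)
        (1.0 * q_top,    1.0 * q_top,    0.0, q_top),     # top-right    (s=1, t=1)
        (0.0 * q_top,    1.0 * q_top,    0.0, q_top),     # top-left     (s=0, t=1)
    ]

# Example: a face whose far edge appears half as wide as its near edge.
for coord in trapezoid_texcoords(top_width=1.0, bottom_width=2.0):
    print(coord)
```

On ES 1 these 4-component coordinates would be supplied through glTexCoordPointer(4, ...) in place of the usual 2-component ones; as the links note, this is an approximation compared to a real 3D projection.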