Slowly scaling Rect (pygame) - python-3.x

I am building a Star Fox-like game. The player controls a ship and needs to move through gaps in the walls. Here are my problems:
I need to somehow detect collision with a wall (if any)
How do I make the wall (Rect) slowly get bigger until it reaches a point?
Full code
If the solution can be done with classes, that would be great!

Here's the documentation for the pygame.Rect class: https://www.pygame.org/docs/ref/rect.html#pygame.Rect.inflate
To detect collisions, the pygame.Rect class has several methods for testing collision between Rects. For example, you could use collidelist() to check whether the player's ship Rect collides with any Rect in a list of wall Rects.
The class also has two methods, inflate() and inflate_ip(), which can be used to increase the size of a Rect.
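A minimal sketch of both ideas, assuming the player's ship and the walls are plain Rects (all names and sizes here are placeholders):

```python
import pygame

# Placeholder Rects standing in for your ship and walls.
ship = pygame.Rect(100, 100, 40, 40)
walls = [pygame.Rect(0, 0, 200, 20), pygame.Rect(300, 50, 20, 200)]

# collidelist() returns the index of the first wall the ship overlaps,
# or -1 if there is no collision at all.
hit = ship.collidelist(walls)
if hit != -1:
    print("Ship crashed into wall", hit)

# inflate_ip() grows (or shrinks) a Rect in place around its center.
# Growing each wall a little every frame until it reaches a target size
# gives the "slowly scaling" effect.
MAX_W, MAX_H = 400, 300  # assumed target size
for wall in walls:
    if wall.width < MAX_W and wall.height < MAX_H:
        wall.inflate_ip(2, 2)
```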

Related

How do I make the character draw correctly relative to objects when it is on different sides of them in Godot?

Using tile maps is pretty convenient, but there is one drawback: all tiles are on the same layer. This rules out certain drawing behaviour, as in my case:
I need the character's sprite to be drawn in front of a tile (a wall) when he is in front of it, and behind it when he is behind it.
This can be achieved by changing the position of the tilemap layer, but then only one side is drawn correctly; the tiles on the other side of the character are still drawn at the same level. How can the problem be solved?
Add a YSort node to your scene and place your player inside of it. A YSort node arranges its children so that the lower they are on the screen, the closer to the viewer they are drawn.
For example, if my player were below a fence, he would stand in front. If he were above the fence, he would be drawn behind it.
This video displays the effect you're going for, using autotile and YSort together https://www.youtube.com/watch?v=RPgTlxb7Bno.

"Inverting" a concave polygon

I'm building a 2D game where the player can only see things that are not blocked by other objects. Consider this example of how it looks now:
I've implemented a raytracing algorithm for this and it seems to work just fine (I've reduced the boundaries for the demo to make all edges visible).
As you can see, the lighter area is built from a bunch of triangles, each of which has a common point at the player's position. Each pair of neighbouring triangles shares two points.
However, I want to calculate the bounds of the external part of the polygon so I can fill it with black triangles, "hiding" what the player cannot see.
One way to do it is to "mask" a black rectangle with the current polygon, but I'm afraid that's very inefficient.
Any ideas about an effective algorithm to achieve this?
Thanks!
A non-analytical, rough solution (a code sketch follows the caveats).
Cast rays with gradually increasing polar angle
Record when a ray first hits an object (and the point where it hits)
Keep going until it no longer hits the same object (and record where it previously hit)
Using the two recorded points, construct a trapezoid that extends to infinity (or wherever)
Caveats:
Doesn't work too well with concavities - need to include all points in-between as well. May need Delaunay triangulation etc... messy!
May need extra states to account for objects tucked in behind each other.
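A rough Python sketch of that sweep, purely illustrative: cast_ray(origin, angle) is a hypothetical helper that returns (object_hit, hit_point), or (None, None) when the ray hits nothing.

```python
import math

def extend(origin, point, far):
    """Push `point` away from `origin` so it lies at distance `far`."""
    dx, dy = point[0] - origin[0], point[1] - origin[1]
    length = math.hypot(dx, dy) or 1.0
    return (origin[0] + dx * far / length, origin[1] + dy * far / length)

def occluded_quads(origin, cast_ray, step=math.radians(1), far=10_000):
    """Sweep rays around `origin`; for each continuous run of hits on the
    same object, build one trapezoid from the first and last hit points,
    pushed out to `far` (our stand-in for infinity)."""
    quads = []
    current, first_hit, last_hit = None, None, None
    angle = 0.0
    while angle < 2 * math.pi:
        obj, point = cast_ray(origin, angle)  # hypothetical helper
        if obj is not current and current is not None:
            # The previous object's run just ended: close its trapezoid.
            quads.append((first_hit, last_hit,
                          extend(origin, last_hit, far),
                          extend(origin, first_hit, far)))
        if obj is not current:
            current, first_hit = obj, point
        if obj is not None:
            last_hit = point
        angle += step
    return quads
```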

Erase Pixels From Sprite Cocos2d-JS

I'm getting the feeling this won't be possible, but worth asking anyway I guess.
I have a background sprite and a foreground sprite, both are the same size as the window/view.
As the player sprite moves across the screen I want to delete the pixels it touches to reveal the background sprite.
This is not just for display purposes, I want the gaps the player has drawn or "dug" out of the foreground layer to allow enemies to travel through, or objects to fall into. So hit detection will be needed with the foreground layer.
This is quite complex, and maybe Cocos2D-JS is not the best platform to use. If it isn't possible, could you recommend another platform that would make this effect easier to achieve?
I believe it's possible, but I'm not capable of giving you a proper answer.
All I can say is that you'll most likely have two choices:
a. Make a physics polygonal shape and deform it, then use it as a "filter" to display your terrain image (here's a proof of concept example in another language using box2d).
b. Directly manipulate pixels and use a mask for collision detection (here's pixel-perfect collision detection in cocos2d-js, sadly I've got no info in modifying pixels).
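Just to make option b concrete, here is the same idea sketched in Python/pygame rather than Cocos2d-JS (asset names are placeholders): erase pixels by writing fully transparent ones into the foreground surface, then rebuild a collision mask from whatever is left.

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))

background = pygame.image.load("background.png").convert()       # placeholder asset
foreground = pygame.image.load("foreground.png").convert_alpha() # placeholder asset

def dig(pos, radius=12):
    # pygame.draw writes the RGBA value directly (no blending), so drawing a
    # fully transparent circle punches a hole in the per-pixel-alpha surface.
    pygame.draw.circle(foreground, (0, 0, 0, 0), pos, radius)

def rebuild_mask():
    # The mask covers only the remaining opaque pixels, so enemies/objects
    # can do pixel-accurate hit tests against the "undug" terrain.
    return pygame.mask.from_surface(foreground)

# Typical frame: dig where the player is, rebuild the mask, redraw.
dig((320, 240))
terrain_mask = rebuild_mask()
screen.blit(background, (0, 0))
screen.blit(foreground, (0, 0))
pygame.display.flip()
```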

Is checking a pixel colour more efficient than using pygame's collide function

I want to make a game such that you have a circle moving around and several other circles chasing it. In order to destroy the enemies you must hit spacebar which draws a circle with a gradient that destroys nearby enemies.
I was wondering whether checking the colour at the top, bottom, left, and right of the circle is more efficient than checking for collision between the circles. Or is there a better way altogether to do this more efficiently?
To be completely honest, if you are using pygame 1.8.1 or later, and since you are using circles, I would try pygame.sprite.collide_circle().
Here's where you can find the documentation for it https://www.pygame.org/docs/ref/sprite.html#pygame.sprite.collide_circle
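Roughly how that looks in practice (the sprite class and values below are made up for the example; collide_circle uses a sprite's radius attribute if it has one, otherwise it derives a circle from the rect):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))

class Ball(pygame.sprite.Sprite):
    def __init__(self, pos, radius):
        super().__init__()
        self.image = pygame.Surface((radius * 2, radius * 2), pygame.SRCALPHA)
        pygame.draw.circle(self.image, (255, 255, 255), (radius, radius), radius)
        self.rect = self.image.get_rect(center=pos)
        self.radius = radius  # collide_circle uses this if present

player = Ball((200, 200), 20)
enemies = pygame.sprite.Group(Ball((210, 205), 15), Ball((400, 400), 15))

# Kill every enemy whose circle overlaps the player's circle.
hit = pygame.sprite.spritecollide(player, enemies, dokill=True,
                                  collided=pygame.sprite.collide_circle)
```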

2d tile based game design, how do I draw a map with a viewport?

I've been struggling with this for a while.
Presently, I have a grid of 100 by 100 tiles, which belong to a Map.
The Map implements IDrawable. I call Draw() and it draws itself at 0,0 which is fine.
However, I want to expand this to draw essentially a viewport. The player will be drawn on the screen in the middle, and thus I want to display say, 10 tiles in each direction (rather than the entire map).
I'm having trouble thinking up the architecture for this one. I'm in the mindset that things should draw themselves, ie I say player1.Draw() and it draws itself. This would have worked before, where it drew the player at x,y on the screen, but with a viewport it will no longer know where to draw itself.
So should the viewport be told to draw, and examine every object in the game and draw those which are visible? Should the map tiles be objects that are subjected to this? Or should the viewport intelligently draw the map by coupling both together?
I'd love to know how typical scrolling tile games accomplish this.
If it matters, I'm using XNA
Edit to add: Can you do graphics manipulation such as trying the HTML rendering approach, where you tell things to draw, and they return a graphic of themselves, and then the parent places the graphic in the correct location? I'm thinking, if I had 2 viewports side by side for splitscreen, how would I stop them drawing outside the edges?
Possible design:
There's a 2D "world" that contains object instances.
"Object instance" is a sprite reference + its coordinates in the world.
When you draw the scene, you request the list of visible objects that exist in the given 2D area, then you draw them.
With such a design the world can be very large.
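A rough sketch of that design (in Python pseudocode, since the question is XNA/C#; names are illustrative):

```python
class ObjectInstance:
    def __init__(self, sprite, x, y, width, height):
        self.sprite = sprite                  # reference to shared sprite data
        self.x, self.y = x, y                 # world coordinates
        self.width, self.height = width, height

class World:
    def __init__(self):
        self.objects = []

    def visible_objects(self, view_x, view_y, view_w, view_h):
        """Return only the instances that overlap the viewport rectangle."""
        return [o for o in self.objects
                if o.x < view_x + view_w and o.x + o.width > view_x
                and o.y < view_y + view_h and o.y + o.height > view_y]

# Drawing loop: ask the world what's visible, then draw just those.
# for obj in world.visible_objects(cam_x, cam_y, screen_w, screen_h):
#     draw(obj.sprite, obj.x - cam_x, obj.y - cam_y)
```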
I'm in the mindset that things should draw themselves, ie I say player1.Draw() and it draws itself.
Visible things should draw themselves. Objects outside of the viewport are not visible.
, how would I stop them drawing outside the edges?
Not sure about XNA, but OpenGL has the scissor test / glViewport, and Direct3D 9 has a SetViewport method that allows you to use part of the screen/window for rendering. There are also clip planes and the stencil buffer (using the stencil for 2D clipping is overkill, though). You could also render to a texture and then render that texture. There are many ways to deal with this.
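For comparison, in pygame the same restriction can be done with Surface.set_clip(), which cuts off anything drawn outside the given rectangle, much like a scissor test (illustrative only, not XNA):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))

left_view = pygame.Rect(0, 0, 400, 600)
right_view = pygame.Rect(400, 0, 400, 600)

# Anything drawn while a clip is set is cut off at that viewport's edges.
screen.set_clip(left_view)
# ... draw player 1's view here ...
screen.set_clip(right_view)
# ... draw player 2's view here ...
screen.set_clip(None)  # restore full-screen drawing
pygame.display.flip()
```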
So should the viewport be told to draw, and examine every object in the game and draw those which are visible?
For a large world, you shouldn't examine every object, because it will be slow. You should be able to find the visible objects without testing every one of them. For that you'll need some kind of space partitioning - quadtrees (because we are in 2D), k-d trees, etc. This way you should be able to handle a few thousand (or even hundreds of thousands of) objects, as long as you don't see them all at once.
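One of the simplest forms of space partitioning is a uniform grid (spatial hash); it's cruder than a quadtree but already avoids touching every object (again a Python sketch, not XNA code):

```python
from collections import defaultdict

class SpatialHash:
    """Uniform grid: objects are bucketed by cell, and a viewport query
    only visits the cells that the viewport rectangle overlaps."""
    def __init__(self, cell_size=256):
        self.cell_size = cell_size
        self.cells = defaultdict(list)

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def insert(self, obj, x, y):
        self.cells[self._cell(x, y)].append(obj)

    def query(self, x, y, w, h):
        """Objects in all cells overlapping the rectangle (x, y, w, h)."""
        x0, y0 = self._cell(x, y)
        x1, y1 = self._cell(x + w, y + h)
        found = []
        for cx in range(x0, x1 + 1):
            for cy in range(y0, y1 + 1):
                found.extend(self.cells[(cx, cy)])
        return found
```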
Should the map tiles be objects that are subjected to this?
If you keep drawing invisible things, FPS will drop.
and they return a graphic of themselves
For a 2D game this may be very slow. Remember the KISS principle.
Some basic ideas, not specifically for XNA:
objects draw themselves to a "virtual screen" in world coordinates; they don't draw themselves to the screen directly
drawable objects get a "graphics context" object which offers a drawing API. The graphics context knows the current viewport bounds and performs the coordinate transformation from world coordinates to screen coordinates for every drawing operation. It also does the actual drawing to the screen (or to a background buffer, if you need double buffering).
when you have many objects outside the visible bounds of your viewport, then as a performance optimization, your drawing loop can do a beforehand bounds check on each object and test whether it lies completely outside the visible area. If so, there is no need to let it draw itself.
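A small Python sketch of that "graphics context" idea, with the beforehand bounds check folded in (names are illustrative, not an XNA API):

```python
class GraphicsContext:
    """Knows the viewport in world coordinates and turns world-space draw
    calls into screen-space ones."""
    def __init__(self, backend, view_x, view_y, view_w, view_h):
        self.backend = backend                     # whatever actually blits
        self.view_x, self.view_y = view_x, view_y
        self.view_w, self.view_h = view_w, view_h

    def is_visible(self, x, y, w, h):
        return (x < self.view_x + self.view_w and x + w > self.view_x and
                y < self.view_y + self.view_h and y + h > self.view_y)

    def draw_sprite(self, sprite, x, y, w, h):
        # Cheap bounds check first, then world -> screen transform.
        if self.is_visible(x, y, w, h):
            self.backend.blit(sprite, (x - self.view_x, y - self.view_y))

class Player:
    def __init__(self, sprite, x, y):
        self.sprite, self.x, self.y = sprite, x, y

    def draw(self, ctx):
        # The player still "draws itself", but only in world coordinates;
        # the context decides where (and whether) that lands on screen.
        ctx.draw_sprite(self.sprite, self.x, self.y, 32, 32)
```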
