Mario plays multiple games. Normally, while playing a game, when he jumps [underneath] a question block, he gets a free surprise. When he encounters a Goomba or another dangerous hooligan, he simply jumps on its head. After moving to a different game, Mario jumps under a question box as usual and finds that nothing happens. Mario jumps on top of a Goomba and dies. Mario is very confused; what is wrong here?
This new game has collision detection (hence the death by Goomba) but doesn't know how to tell which side has been collided with. Assume only one side can collide at a time, and that Mario's left side can only collide with an object's right side (right->left, top->bottom, etc.).
How can I do collision testing that also returns which side of poor Mario was hit (to ensure that jumping [underneath] a box gives him a surprise but jumping [on] a box doesn't give him anything)?
Pseudo-code would be appreciated.
You could use the current velocity, if your engine/game doesn't have lag issues.
For instance:
touching a box: if vertical speed isn't 'positive', no gift
touching a foe: if vertical speed isn't 'negative', death
I'm using the vertical axis as is usual in geometry, with up pointing toward the top of the screen; this is not the usual screen-space convention, where pixel coordinates start at the top and y increases downward.
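A minimal sketch of that velocity test in Python (the Player class and the "kind" strings are hypothetical stand-ins for whatever your engine uses; positive y means up, per the convention above):

```python
# Minimal sketch of the velocity test, assuming a math-style y axis
# (positive y = moving up). Player and the "kind" strings are
# hypothetical stand-ins for your engine's own types.

class Player:
    def __init__(self):
        self.vy = 0.0       # vertical velocity; > 0 means moving up
        self.alive = True

def on_collision(player, kind):
    """React to a collision based on the player's vertical velocity."""
    if kind == "question_block" and player.vy > 0:
        return "surprise"   # hit from below while moving up
    if kind == "goomba":
        if player.vy < 0:
            return "stomp"  # falling onto the enemy's head
        player.alive = False
        return "death"      # touched it from the side or from below
    return "nothing"

mario = Player()
mario.vy = 5.0              # jumping upward
print(on_collision(mario, "question_block"))   # -> "surprise"
```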
I'm new to programming, and I wanted to start by making a snake game like snake.io. So, as you can see by clicking the links below, I created the snake and all its movements, then I made a red border and coded the "health" of the snake, representing it with some upside-down triangles. What is supposed to happen is that each time the snake touches the red border, it should lose half a heart and its health should go down. However, what actually happens is that I can't move anymore as soon as I touch the border, and my health goes down (I can tell because it is printed to the output), but I don't lose any hearts visually.
movement code
main part of the problem
I tried changing the entire code, but I was not too fond of the result, so I switched back. Can you help me? Thanks.
I'm writing an OpenGL application on Linux using Xlib and GLX. I would like to use the mouse pointer to draw and to drag objects in the window. But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or by reading directly from /dev/input/event*.
So my question is: what code does the XOrg server use to actually draw the mouse pointer on the screen? Then I could use the same kind of code to place my graphics on the screen and keep the mouse pointer and the drawn objects perfectly aligned.
Even a pointer to the relevant source file(s) of XOrg would be great.
So my question is: what code does the XOrg server use to actually draw the mouse pointer on the screen?
If everything goes right, no code at all draws the mouse pointer. So-called "hardware cursor" support has been around for decades. Essentially it's what's known as a "sprite engine" in the hardware: it takes a small picture and a pair of values (x, y) saying where that picture shall appear on the screen. In every frame the graphics hardware sends to the display, the cursor image is overlaid at that position.
The graphics system is constantly updating the position values based on the input device movements.
Of course there is also graphics hardware that does not have this kind of "sprite engine". The trick then is to update often, to update fast, and to update late.
But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or by reading directly from /dev/input/event*.
Yes, that happens if you read it and integrate it into your image at the wrong time. The key ingredient to minimizing latency is to draw as late as possible, integrating as much input as you can before you absolutely have to draw things to meet the V-Sync deadline. And the most important trick is not to draw what happened in the past, but to draw what the state of affairs will be at the moment the picture appears on screen. That is, you have to predict the input for the next couple of frames and use that.
The Kalman filter has become the de-facto standard method for this.
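A minimal sketch of that idea in Python: a constant-velocity Kalman filter per axis that smooths the raw pointer samples and extrapolates where the cursor will be when the frame actually reaches the screen. The class name and the noise constants are illustrative; a real implementation would tune them against the device:

```python
# Per-axis constant-velocity Kalman filter. State is [position, velocity],
# F = [[1, dt], [0, 1]], H = [1, 0]. Noise values are made-up tuning knobs.

class Kalman1D:
    def __init__(self, q_pos=1e-3, q_vel=1e-2, r=1.0):
        self.x = [0.0, 0.0]                 # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.q = (q_pos, q_vel)             # process noise (diagonal)
        self.r = r                          # measurement noise

    def step(self, z, dt):
        """Feed one raw position sample z taken dt seconds after the last."""
        # Predict: x = F x, P = F P F^T + Q.
        pos = self.x[0] + dt * self.x[1]
        vel = self.x[1]
        p00 = (self.P[0][0] + dt * (self.P[1][0] + self.P[0][1])
               + dt * dt * self.P[1][1] + self.q[0])
        p01 = self.P[0][1] + dt * self.P[1][1]
        p10 = self.P[1][0] + dt * self.P[1][1]
        p11 = self.P[1][1] + self.q[1]
        # Update with measurement z (only position is observed).
        s = p00 + self.r                    # innovation covariance
        k0, k1 = p00 / s, p10 / s           # Kalman gain
        y = z - pos                         # innovation
        self.x = [pos + k0 * y, vel + k1 * y]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]

    def predict_ahead(self, lead):
        """Extrapolate the position `lead` seconds into the future."""
        return self.x[0] + lead * self.x[1]
```

Run one filter for x and one for y; each frame, feed in the newest samples and render at predict_ahead(lead), where lead is your estimate of the time between issuing the draw calls and the image actually appearing on screen.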
I don't really want to move the whole world in my game but I do want the screen to follow my character.
So, for example, pygame normally renders with the origin at (0, 0), and the window width and height determine how much of that area you see. But I want to move the starting position so that I can view something at coordinates 1000x1000 even if my screen is only 500x500.
Is it possible to make camera tracking without moving the whole world?
No.
By relativity, moving the player within the world is the same as moving the world around the player. Since your camera is fixed on the player, by definition you will see the world moving when the player moves within it. Therefore, you must draw your world in a different place.
It is more explicit in 3D graphics; we represent the scene's motion as one matrix, and the camera's as another. The renderer uses their product. It doesn't care which contributed to the motion.
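In code, the "camera" is just an offset subtracted at draw time; the world data itself never changes. A minimal sketch, with illustrative names (camera_on, world_to_screen):

```python
SCREEN_W, SCREEN_H = 500, 500   # window size from the question

def camera_on(player_pos):
    """Top-left corner of a 500x500 view centered on the player."""
    return (player_pos[0] - SCREEN_W // 2, player_pos[1] - SCREEN_H // 2)

def world_to_screen(world_pos, camera):
    """Where a world coordinate lands on screen for the current camera."""
    return (world_pos[0] - camera[0], world_pos[1] - camera[1])

# The player stands at world (1000, 1000); something at that same world
# spot appears at the screen center (250, 250).
camera = camera_on((1000, 1000))
print(world_to_screen((1000, 1000), camera))   # -> (250, 250)
```

In pygame this means blitting every sprite at world_to_screen(sprite_world_pos, camera) instead of at its raw world position; that subtraction is exactly the "drawing your world in a different place" the answer describes.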
I'm struggling to implement a good chaser. I have a hockey player who needs to chase a puck, and I can predict both the next player and puck positions. I tried steering behaviors, but failed to find a good predictor for situations where the puck is close. Imagine, for example, that the puck heads almost straight toward the player at high speed: the player makes only small turns while the puck is some distance away, but when the puck comes closer and just barely misses him, in the last two or three ticks the player needs to turn through much bigger angles to keep facing the puck. With a limit on the turning angle, the puck escapes and the player can't do anything. If he started turning earlier it would be fine, but when I predict more steps ahead, the player tends to start turning toward a puck position far behind him. Then I tried A* search. It works great while the puck is ahead and the puck's speed is lower than the player's. But when the puck is faster, it becomes an escaping target: every time A* expands a new state, it looks back and finds that in previous states the puck was closer to the player (the puck escapes!), so it prefers the earlier states and degenerates into BFS.
So I guess there's a well-known solution to this, but I have failed to google anything on it, so maybe the community can help me. Thanks in advance!
UPDATE: so basically I reinvented the wheel, I guess. What I'm doing now is iterating through the predicted puck positions; when I hit the first position that the player can reach in the same number of ticks, I declare victory. This is VERY resource-expensive, but I couldn't come up with anything better.
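What the update describes is essentially the classic interception scan. A sketch under simplifying assumptions: puck_position_at is a hypothetical predictor supplied by the caller, and the reachability test uses straight-line distance, ignoring the turn-rate limit.

```python
import math

def first_interception(player_pos, player_speed, puck_position_at,
                       max_ticks=600):
    """Return (tick, position) for the earliest predicted puck position
    the player could also reach by that tick, or None if there is none."""
    for t in range(1, max_ticks + 1):
        target = puck_position_at(t)          # predicted puck position
        ticks_needed = math.dist(player_pos, target) / player_speed
        if ticks_needed <= t:                 # we can be there in time
            return t, target
    return None

# e.g. a puck gliding at constant velocity away from the origin:
# first_interception((50, 10), 2.0, lambda t: (3.0 * t, 0.0))
```

If the gap between ticks_needed and t shrinks monotonically along the trajectory (as it does for a decelerating puck), a binary search over t can replace the linear scan and cut the cost from O(n) to O(log n).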
The part about the steering behavior is pretty hard to understand at the moment, but regarding the A* problem: I think the issue is that your agent (the player) is operating in a dynamic environment, so you have to re-compute the heuristic at every expansion step, because the h values of the states in the frontier become obsolete as the puck moves. Am I anywhere close to understanding your problem?
Out of curiosity, what kind of heuristic are you using?
In my application I want to draw polygons using the Windows CreateGraphics method, and later let the user edit a polygon by selecting its points and re-positioning them.
I use the mouse move event to get the new coordinates of the point being moved, and the Paint event to re-draw the polygon. The application works, but when a point is moved the movement is not smooth.
I don't know whether the mouse move or the paint event is the performance hindrance.
Can anyone make a suggestion as to how to improve this?
Make sure that you don't repaint for every mouse move. The proper way to do this is to handle all your input events, modifying the polygon data and setting a flag that a repaint needs to occur (on Windows, possibly just calling InvalidateRect() without calling UpdateWindow()).
You might not have a real performance problem - it could be that you just need to draw to an off-screen DC and then copy that to your window, which will reduce flicker and make the movement seem much smoother.
If you're coding using the Win32 api, look at this for reference.
...and of course, make sure you only invalidate the area that needs to be repainted. Since you're keeping track of the polygons, invalidate only the polygon area (the rectangular union of the before and after states).
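The same pattern translated to pygame, since the other sketches here are Python (point picking is simplified to dragging one fixed vertex): coalesce all pending mouse input first, track the dirty rectangle as the union of the polygon's before and after bounds, and repaint at most once per frame.

```python
import pygame

def poly_bounds(pts, pad=4):
    """Bounding rectangle of a polygon, padded for the line width."""
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    r = pygame.Rect(min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))
    return r.inflate(pad * 2, pad * 2)

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
points = [(100, 100), (300, 80), (250, 250)]
dragged = 0                         # index of the vertex being dragged
dirty = screen.get_rect()           # force a full first paint

running = True
while running:
    # 1) Drain ALL pending input; only record what changed.
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.MOUSEMOTION and event.buttons[0]:
            before = poly_bounds(points)
            points[dragged] = event.pos
            changed = before.union(poly_bounds(points))
            dirty = changed if dirty is None else dirty.union(changed)

    # 2) Repaint at most once per frame, and only the dirty region.
    if dirty is not None:
        screen.fill((0, 0, 0), dirty)
        pygame.draw.polygon(screen, (0, 200, 0), points, 2)
        pygame.display.update(dirty)    # push only the changed rectangle
        dirty = None
    clock.tick(60)

pygame.quit()
```

In Win32 terms the shape is the same: mutate the polygon data in the mouse handler and call InvalidateRect() with the union rectangle, then do all painting in the WM_PAINT handler.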