I'm moving a character (ellipsoid) around in my physics engine. The movement must be constrained by the static geometry, but should slide along edges so the character won't get stuck.
My current approach is to move it a little and then push it back out of the geometry. It seems to work, but I think it's mostly by luck. I fear there must be some corner cases where this method will go haywire. For example, a sharp corner where each of two walls keeps pushing the character into the other.
How would a "state of the art" game engine solve this?
Consider using a 3rd party physics library such as Chipmunk-physics or Box2D. When it comes to game physics, anything beyond the most basic stuff can be quite complex, and there's no need to reinvent the wheel.
Usually the problem you mention is solved by determining the amount of overlap, the contact points and the surface normals (e.g., by using the separating-axis theorem). Then impulses are calculated and applied, which change the object velocities, so that in the next iteration the objects are moved apart in a physically realistic way.
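As a rough sketch of that resolution step, assuming your collision test already yields a unit contact normal pointing out of the wall and a penetration depth (all names here are illustrative, not any particular engine's API):
using System.Numerics;

// Push the character back out of the geometry along the contact normal.
static Vector3 ResolvePosition(Vector3 position, Vector3 normal, float depth)
{
    return position + normal * depth;
}

// Cancel the velocity component pointing into the surface and keep the
// tangential part, so the character slides along walls instead of sticking.
static Vector3 ResolveVelocity(Vector3 velocity, Vector3 normal)
{
    float vn = Vector3.Dot(velocity, normal);
    return vn < 0f ? velocity - normal * vn : velocity;
}
Projecting the velocity like this is what produces the sliding: each contact removes only the inward component, so the remaining motion runs along the wall.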
I have not developed a state-of-the-art game engine, but I once wrote a racing game where collision was handled simply by reversing the simulation time and calculating where the edge was crossed. Then the car was allowed to bounce back into the game field. The penalty was that the controls were disabled until the car stopped.
So my suggestion is that you run your physics engine to calculate exactly where the edge is hit (this might need a non-linear equation-solving approach, such as bisection), then change your velocity vector to either bounce off the edge or follow it.
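For example, a bisection search over one time step might look like this (a sketch only; Overlaps(p) stands in for your own ellipsoid-vs-geometry test):
using System.Numerics;

// Assumes 'start' is collision-free and 'start + velocity * dt' is not.
static float FindTimeOfImpact(Vector3 start, Vector3 velocity, float dt)
{
    float lo = 0f, hi = dt;
    for (int i = 0; i < 16; i++) // 16 halvings are plenty for one step
    {
        float mid = 0.5f * (lo + hi);
        if (Overlaps(start + velocity * mid)) hi = mid; // hit before mid
        else lo = mid;                                  // still free at mid
    }
    return lo; // latest time known to be collision-free
}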
To protect against corner cases, you could always keep a history of the last valid position within the game, together with the state of the physics engine. If the game gets stuck, the simulation can be restarted from that point but with a different condition (say, by adding some randomization to the internal parameters).
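In outline (everything here, State, Snapshot, IsStuck and friends, is a hypothetical stand-in for your own engine code):
State lastValid = Snapshot(); // 'State' would capture position, velocity, etc.

void Step(float dt)
{
    Simulate(dt);
    if (IsStuck())
    {
        Restore(lastValid);        // roll back to the last known-good state
        ApplyRandomPerturbation(); // nudge parameters so the retry diverges
    }
    else
    {
        lastValid = Snapshot();    // record a new known-good state
    }
}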
So when my player falls off the map, I want the level to reload. I have used an Area2D with a CollisionShape2D to create an area that will call a function when the player collides with this area. However, when the game is run with this code included, the player animates through a few frames, then the game completely freezes before I can even move the player.
func _on_Area2D_body_entered(body):
    get_tree().reload_current_scene()
If I delete this code, or set monitoring to off, and re-run the game it will not freeze.
Below is a screenshot of my level design.
[screenshot: level design]
Any help would be greatly appreciated :) - Is this a bug or am I doing something stupid?
When I set a breakpoint on the get_tree().reload_current_scene() line, the following report is shown:
[screenshot: debugger report]
Does this mean the player is colliding with a tile? If that is the case, I don't see how, as the program freezes before the player touches the ground.
As I said in the comments, this line:
get_tree().reload_current_scene()
returns a value.
Now, you have said that 0 is "continuously outputted". In this context 0 means OK, in other words: it was able to reload the scene. The problem is the "continuously" part. It means that the scene reloads and then this code is triggered, and then it reloads again, and then this code is triggered again, and so on.
Now, apparently the Area2D is colliding with the TileMap. That makes sense. If it is a collision between the Area2D and a tile upon loading the scene, you would get the observed behavior. And the way the Area2D and TileMap are positioned in the scene supports the idea.
And about fixing it. I'll give you three solutions; any of these will work, with their drawbacks and caveats:
1. Don't have the Area2D positioned in a way that intersects non-passable tiles. This is easy to do by moving the Area2D further down, or by removing any tiles that overlap it.
The drawback with this approach is that it is fragile. You may forget in the future and move the Area2D, or add tiles, or do something else that makes the problem return. Also, it might not work well with your intended scenario design.
2. Change the collision_mask and collision_layer in such a way that the tiles and the Area2D do not collide. As long as the bits from the mask of one do not overlap the bits from the layer of the other, and vice versa, Godot will not even check for a collision between them.
The main drawback with this approach is that you have a limited number of layers.
There is also the fact that it is less intuitive than simply placing things in such a way that they don't collide.
To make it easier to work with, assign layers to different kinds of things. Go to your Project Settings, on the General tab, under Layer Names and 2D Physics, and give them names (e.g. "environment", "enemies", "enemy bullets", "player", "player bullets", "items", "others").
Then you can set on each object's collision_layer what it is, and on its collision_mask everything it MUST collide with, with the caveat that Godot will check both ways.
In this case you would set the collision_layer of the player character's physics object (the KinematicBody2D) to "player" (or similar), and put the same layer in the collision_mask of the Area2D, so they collide. Have the collision_layer of the TileMap set to something else (e.g. "environment") that is not in the collision_mask of the Area2D, so the Area2D and the TileMap do not collide. And set the collision_mask of the player character to something that includes the layer you set on the TileMap, so the player character also collides with it. I hope that makes sense.
3. And, of course, you can filter on the Area2D with a little code. It can check the class, or node group, or the name of the physics body. For example, you can insert at the start of the method something like this: if body.name != "player": return. That way it exits the method before it reaches reload_current_scene unless it is the correct physics body; a sketch follows below.
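If you are using Godot's C# support, the filtered callback could look like the sketch below (the node name "Player" is an assumption; match it to your own scene tree). In GDScript it is just the one-liner above placed at the top of _on_Area2D_body_entered.
public void OnArea2DBodyEntered(Node body)
{
    if (body.Name != "Player")
        return; // ignore the TileMap and any other body
    GetTree().ReloadCurrentScene();
}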
The drawback with this approach is that it still checks and registers the collision, so it has worse performance than using collision_mask and collision_layer. But it will work, and it will perform OK for a small-to-mid-sized game.
For more complex situations, you may employ a combination of these approaches (because, as I said, there is a limited number of layers, so you may need to add filtering on top of them). I have a more detailed explanation of how to set up physics objects, including the techniques mentioned here, in another answer.
Is there any body of evidence that we could reference to help determine whether a person is using a device (smartphone/tablet) with their left hand or right hand?
My hunch is that you may be able to use accelerometer data to detect a slight tilt, perhaps only while the user is manipulating some sort of on screen input.
The answer I'm looking for would state something like, "research shows that 90% of right handed users that utilize an input mechanism tilt their phone an average of 5° while inputting data, while 90% of left handed users utilizing an input mechanism have their phone tilted an average of -5°".
Having this data, one would be able to read accelerometer data and be able to make informed decisions regarding placement of on screen items that might otherwise be in the way for left handed users or right handed users.
You can definitely do this, but if it were me, I'd try a less complicated approach. First, recognize that no specific approach will yield 100% accurate results; they will be guesses, but hopefully highly probable ones. With that said, I'd explore the simple-to-capture data points of basic touch events. You can leverage these data points and pull the x/y coordinates on touch start/end:
touchStart: triggers when the user makes contact with the touch surface and creates a touch point inside the element the event is bound to.
touchEnd: triggers when the user removes a touch point from the surface.
Here's one way to do it: it could be reasoned that if a user is left-handed, they will use their left thumb to scroll up/down on the page. Based on the way the thumb rotates, swiping up will naturally cause the arc of the swipe to move outwards. In terms of touch events, if the touchStart X is greater than the touchEnd X, you could deduce they are left-handed. The opposite holds for a right-handed person: for a swipe up, if the touchStart X is less than the touchEnd X, you could deduce they are right-handed. A sketch of this check follows below.
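For illustration, here is that comparison as a small self-contained routine (written in C# purely to show the logic; in a web page you would read the coordinates from the touchstart/touchend events, and the 10-pixel threshold is an assumed tuning value):
enum Handedness { Unknown, Left, Right }

static Handedness ClassifyUpwardSwipe(float touchStartX, float touchEndX)
{
    const float threshold = 10f; // minimum sideways drift to call it an arc
    float dx = touchEndX - touchStartX;
    if (System.Math.Abs(dx) < threshold)
        return Handedness.Unknown; // swipe too straight to tell
    // Left thumb: the arc drifts left, so startX > endX.
    // Right thumb: the arc drifts right, so startX < endX.
    return dx < 0f ? Handedness.Left : Handedness.Right;
}
Averaging the result over many swipes, rather than trusting a single one, would make the guess considerably more reliable.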
Here's one reference on getting started with touch events. Good luck!
http://www.javascriptkit.com/javatutors/touchevents.shtml
There are multiple approaches and papers discussing this topic. However, most of them were written between 2012 and 2016. After doing some research myself, I came across a fairly new article that makes use of deep learning.
What sparked my interest is the fact that they do not rely on a swipe direction, speed or position but rather on the capacitive image each finger creates during a touch.
Highly recommend reading the full paper: http://huyle.de/wp-content/papercite-data/pdf/le2019investigating.pdf
What's even better, the data set, together with Python 3.6 scripts to preprocess the data and to train and test the model described in the paper, is released under the MIT license. They also provide the trained models and the software to run the models on Android.
Git repo: https://github.com/interactionlab/CapFingerId
I am currently working on a 2D project that generates random black terrain over a loaded background. I have a sprite that is loaded and controlled, and I am trying to find the best method for identifying the color behind the sprite, to code some color-based collision. I have tried a bunch of tutorials on per-pixel and color collision, but they all seem dependent on a collision map being used, or on bounding boxes between two preloaded images (i.e., sprite and colliding object).
If anyone could point me in the right direction it would be greatly appreciated.
Querying textures is a relatively expensive operation; I would strongly recommend that you avoid doing so in real time. Since you're generating your terrain information procedurally at runtime, why not just store it in an array and reference that?
If you need to composite textures or perform other rendering operations in order to create your terrain data, you can copy the resulting render target's data into an array in system memory using the following code:
var data = new Color[width * height];
texture.GetData(data);
Just try to avoid doing it any more often than is necessary.
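Once the data is in system memory, a lookup is plain row-major indexing. For example (a sketch; it assumes the question's convention that terrain is drawn in black):
// The pixel at (x, y) sits at index y * width + x in the array.
Color pixel = data[y * width + x];
bool hitTerrain = pixel == Color.Black;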
I think the right direction would be away from pixel-perfect collisions. Most people assume it's necessary, but the fact is, 99% of games don't use pixel-perfect collisions because they are slow, difficult to implement properly, and overkill for most practical games. Most games use AABBs, circles, or spheres. They are simple to detect collisions between, and are "good enough" for most games. The only game I can name that uses pixel-perfect collisions is the original Worms.
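For comparison, the core AABB overlap test is tiny; a minimal sketch (in XNA, Rectangle.Intersects does the equivalent for you):
static bool Overlaps(Rectangle a, Rectangle b)
{
    // Two boxes overlap exactly when they overlap on both axes.
    return a.Left < b.Right && b.Left < a.Right
        && a.Top < b.Bottom && b.Top < a.Bottom;
}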
This video also does a good job of covering collision detection: http://pyvideo.org/video/615/introduction-to-game-development (Collision Detection #1:13:20)
I am using Direct3D to display a number of I-sections used in steel construction. There could be hundreds of instances of these I-sections all over my scene.
I could do this two ways:
A: model each plate (flange or web) as a single flat rectangle, with no thickness; or
B: model each plate as a thin closed box, with separate surfaces on each side.
Using method A, I have fewer surfaces. However, with backface culling turned on, the surfaces will be visible from only one side. If backface culling is turned off, then the flanges (horizontal plates) and web (vertical plate) may be rendered in the wrong order.
Method B seems correct (and I could keep backface culling turned on), but in my model the thickness of plates in the I-section is of no importance and I would like to avoid having to create a separate triangle strip for each side of the plates.
Is there a better solution? Is there a way to switch off backface culling for only certain calls of DrawIndexedPrimitives? I would also like a platform-neutral answer to this, if there is one.
First off, backface culling doesn't have anything to do with the order in which objects are rendered. Other than that, I'd go for approach B, for no particular reason other than that it'll probably look better. Also, this object probably isn't more than a handful of triangles; having hundreds in a scene shouldn't be an issue. If it is, try looking into hardware instancing.
In OpenGL you can switch backface culling on or off around each draw call:
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT); // cull front faces
// or
glCullFace(GL_BACK);  // cull back faces (the default)
// or switch culling off entirely for certain draw calls:
glDisable(GL_CULL_FACE);
Something similar is possible in Direct3D: in D3D9 you can set the D3DRS_CULLMODE render state to D3DCULL_NONE before the draw calls that need double-sided geometry, and restore it afterwards.
If your I-sections don't change that often, load all the sections into one big vertex/index buffer and draw them with a single call. That's the most performant way to draw things, and the graphics card will do a fast job even if you push half a million triangles at it.
Yes, this requires that you duplicate the vertex data for all sections, but that's how D3D9 is intended to be used.
I would go with A: at the distance you would typically be viewing these sections from, drawing all of B's extra, nearly degenerate triangles would be a waste of processing power.
Also, I would simply render them with a z-buffer and let that sort out the visibility.
If it gets too slow then I would start looking at optimizing, but even consumer graphics cards can draw millions of polygons per second.
For instance: an approach to efficiently compute the first intersection between a viewing ray and a set of three objects: one sphere, one cone and one cylinder (or other 3D primitives).
What you're looking for is a spatial partitioning scheme. There are a lot of options for dealing with this, and lots of research spent in this area as well. A good read would be Christer Ericson's Real-Time Collision Detection.
One easy approach covered in that book would be to define a grid, assign each object to every cell it intersects, and walk along the grid cells intersected by the line, front to back, testing the ray against each object associated with the current cell. Keep in mind that an object might be associated with multiple grid cells, so an intersection point you compute might not actually lie in the current cell but in a later one; a sketch follows below.
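In code, the walk could look something like this (all types here, Ray, Grid, Cell, SceneObject and Hit, are assumed stand-ins for your own scene code):
static Hit? FirstHit(Ray ray, Grid grid)
{
    foreach (Cell cell in grid.CellsAlongRay(ray)) // visited front to back
    {
        Hit? best = null;
        foreach (SceneObject obj in cell.Objects)
        {
            Hit? hit = obj.Intersect(ray);
            // An object can span several cells, so only accept a hit
            // whose point actually lies inside the current cell.
            if (hit.HasValue && cell.Contains(hit.Value.Point)
                && (!best.HasValue || hit.Value.T < best.Value.T))
                best = hit;
        }
        if (best.HasValue)
            return best; // earliest hit in the nearest cell that has one
    }
    return null;
}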
The next question would be how you define that grid. Unfortunately, there's no one good answer, and you need to consider what approach might fit your scenario best.
Other partitioning schemes of interest are tree structures, such as kd-trees, octrees and BSP trees. You could even consider combining trees with a grid.
EDIT
As pointed out, if your set really is just these three objects, you're definitely better off just intersecting the ray with each one and picking the earliest hit. If you're looking for ray-sphere, ray-cylinder, etc. intersection tests, these are not really hard, and a quick google should supply all the math you might possibly need. :)
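For instance, the ray-sphere test reduces to a quadratic in the ray parameter t; a minimal sketch with ray origin o, direction d, sphere centre c and radius r:
using System.Numerics;

// Solve |o + t*d - c|^2 = r^2 for t; returns the nearest non-negative root.
static float? RaySphere(Vector3 o, Vector3 d, Vector3 c, float r)
{
    Vector3 m = o - c;
    float a = Vector3.Dot(d, d);
    float b = 2f * Vector3.Dot(m, d);
    float k = Vector3.Dot(m, m) - r * r;
    float disc = b * b - 4f * a * k;
    if (disc < 0f) return null;               // ray misses the sphere
    float s = (float)System.Math.Sqrt(disc);
    float t0 = (-b - s) / (2f * a);
    float t1 = (-b + s) / (2f * a);
    if (t0 >= 0f) return t0;                  // hit in front of the ray origin
    if (t1 >= 0f) return t1;                  // origin is inside the sphere
    return null;                              // sphere is behind the ray
}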
"computationally efficient" depends on how large the set is.
For a trivial set of three, just test each of them in turn, it's really not worth trying to optimise.
For larger sets, look at data structures which divide space (e.g. KD-trees). Whole chapters (and indeed whole books) are dedicated to this problem. My favourite reference book is An Introduction to Ray Tracing (ed. Andrew S. Glassner).
Alternatively, if I've misread your question and you're actually asking for algorithms for ray-object intersections for specific types of object, see the same book!
Well, it depends on what you're really trying to do. If you'd like to produce a solution that is correct for almost every pixel in a simple scene, an extremely quick method is to pre-calculate "what's in front" for each pixel by pre-rendering all of the objects, each in a unique identifying color, into an offscreen buffer using scan conversion and a z-buffer. This is sometimes referred to as an item buffer.
Using that pre-computation, you then know what will be visible for almost all rays that you'll be shooting into the scene. As a result, your ray-environment intersection problem is greatly simplified: each ray hits one specific object.
When I was doing this many years ago, I was producing real-time raytraced images of admittedly simple scenes. I haven't revisited that code in quite a while but I suspect that with modern compilers and graphics hardware, performance would be orders of magnitude better than I was seeing then.
PS: I first read about the item buffer idea when I was doing my literature search in the early 90s. I originally found it mentioned in (I believe) an ACM paper from the late 70s. Sadly, I don't have the source reference available but, in short, it's a very old idea and one that works really well on scan conversion hardware.
I assume you have a ray d = (dx,dy,dz), starting at o = (ox,oy,oz) and you are finding the parameter t such that the point of intersection p = o+d*t. (Like this page, which describes ray-plane intersection using P2-P1 for d, P1 for o and u for t)
The first question I would ask is "Do these objects intersect"?
If not, then you can cheat a little and check for ray collisions in order. Since you have three objects that may or may not move per frame, it pays to pre-calculate their distance from the camera (e.g. from their centre points). Test against each object in turn, by distance from the camera, from smallest to largest. Although the empty space is now the most expensive part of the render, this is more effective than just testing against all three and taking the minimum value. If your image is high-res then this is especially efficient, since you amortise the sorting cost across the number of pixels.
Otherwise, test against all three and take a minimum value...
In other situations you may want to make a hybrid of the two methods. If you can test two of the objects in order then do so (e.g. a sphere and a cube moving down a cylindrical tunnel), but test the third and take a minimum value to find the final object.
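Putting both branches together, a rough sketch of the dispatch (SceneObject, Intersect and DistanceToCamera are assumed stand-ins for your own types; note the nearest-first branch is the "cheat" above and relies on the objects not overlapping along the ray):
using System.Collections.Generic;
using System.Linq;

static float? FirstHit(Ray ray, List<SceneObject> objects, bool objectsAreDisjoint)
{
    if (objectsAreDisjoint)
    {
        // The cheat: nearest-first, stopping at the first object that hits.
        foreach (SceneObject o in objects.OrderBy(s => s.DistanceToCamera))
        {
            float? t = o.Intersect(ray);
            if (t.HasValue) return t;
        }
        return null;
    }
    // Overlapping objects: test all of them and keep the smallest t.
    float? best = null;
    foreach (SceneObject o in objects)
    {
        float? t = o.Intersect(ray);
        if (t.HasValue && (!best.HasValue || t.Value < best.Value))
            best = t;
    }
    return best;
}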