I have 3 scenes. One is named "KinematicBody2D.tscn" and has a KinematicBody2D node. This scene is a player which moves from left to right across the screen. I also have a scene named "mob.tscn" which is a RigidBody2D node. This scene only has the sprite and a tiny piece of code that makes the mob delete itself once it leaves the screen, using the visibility notifier (I also turned off the mask squares so there would be no physics). Finally, I have the main scene, which has the player scene inside it and instances the mob scene every so often to spawn mobs at the top of the screen.
I want to detect when the mob touches the player and give an output.
Please explain everything very thoroughly, as I have been trying to figure this out for the past couple of days, but in most of the places I have looked I didn't understand what to do, and when I copied the code it didn't work. Some examples of things I would like to be clearer on are:
Where and how do I add a CollisionShape2D, Area2D, or other nodes along those lines?
Where and how to connect signals and write the code?
Thank you in advance.
For this answer I will start by explaining collision layers and masks, then move on to detecting the collision, then filtering physics bodies and communication. And I'll mention some things about raycasts, areas, and other queries at the end. No, you don't need to know all of that, but you asked for "everything very thoroughly".
While the question is about 2D, for 3D it is mostly the same provided you use the 3D versions of the nodes. I'll mention where it differs. On that note, just in case, know that 3D objects can't collide with 2D objects.
If I say Class(2D), I mean that what I'm saying applies for both Class and Class2D.
Also, since we are talking about physics, in general you will be working in _physics_process. I'll mention any case when you need to write code somewhere else.
Collision Layers and Collision Mask
To begin with, the default values of the collision_layer and collision_mask properties are set so that everything collides with everything.
Beyond that, it can be useful to define some layers. You can go to Project Settings -> General -> Layer Names -> 2D Physics (or 3D Physics if you are working in 3D) and give names to the layers there.
For example, you may have layers for these (depending on the kind of game you are making):
The player character
The enemy characters
Player projectiles
Enemy projectiles
Collectible objects
Ground and walls
Something like that. Then you give each physics object its own collision_layer depending on what it is.
In the collision_mask you set what they can collide with※. Sometimes it is easier to think about what they can't collide with. For example, an enemy projectile should not collide with other enemy projectiles (and telling Godot not to check for those collisions can help with performance). The player character probably won't interact with player projectiles, and enemy characters won't interact with enemy projectiles. Similarly, the enemy characters probably won't interact with collectibles. And everything collides with ground and walls.
※: Actually, an object will collide with whatever it specifies in its collision mask, and with any object that specifies it in its own collision mask. That is, collisions are checked both ways. This is changing in Godot 4.0.
You have a total of 32 layers to work with. For some people that is not enough, and they have to resort to other ways of filtering, which we will see later. Regardless, try to be efficient with your layers: use them as broad categories.
If you want to set the collision_layer and collision_mask properties from code, you need to keep in mind that they are sets of binary flags. I have explained that elsewhere.
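As a hedged sketch (the layer numbers here are only for illustration), putting a body on layer 3 and masking layers 1 and 6 from code could look like this:

func _ready() -> void:
    # Layer numbers are 1-based in the editor, but bits are 0-based in code.
    collision_layer = 1 << 2                 # this body lives on layer 3 only
    collision_mask = (1 << 0) | (1 << 5)     # it scans layers 1 and 6
    # Or use the helpers that toggle a single 0-based bit:
    set_collision_layer_bit(2, true)
    set_collision_mask_bit(0, true)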
Setting up colliders
Kinematic, static, and rigid bodies, and also areas, need a CollisionShape(2D) or CollisionPolygon(2D) as a child (a direct node child). This is what defines their size and shape as far as physics is concerned. A sprite or other graphic node only affects rendering and has no effect on physics.
If you use a CollisionShape(2D), make sure to set its shape property. Once you have picked the type of shape you want, the editor will let you modify it visually, or you can set its parameters in the Inspector panel.
Similarly, if you use a CollisionPolygon(2D), you need to edit its polygon property (an array of points). The editor will let you draw the polygon, or you can modify the coordinates of each point in the Inspector panel.
By the way, you can have multiple of these nodes. That is, if a single CollisionShape(2D) or CollisionPolygon(2D) is not enough to specify the shape and size of your object, you can add more. On that note, be aware that the simpler the shapes, the better the performance.
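If you prefer to build the collider from code instead of the editor, a minimal sketch (the circle radius is an arbitrary choice) could be:

func _ready() -> void:
    var collider := CollisionShape2D.new()
    var circle := CircleShape2D.new()
    circle.radius = 16.0        # arbitrary size, for illustration only
    collider.shape = circle
    add_child(collider)         # must be a direct child of the body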
If you have a graphic node (e.g. a sprite) selected, look for a tool menu that appears at the top (to the right of "View"). There you can find options to generate a CollisionShape(2D) or CollisionPolygon(2D) from your graphic node. I find this particularly useful with MeshInstance in 3D.
A simple typical setup may look like this:
KinematicBody2D
├ CollisionShape2D
└ Sprite
Or like this (3D):
KinematicBody
├ CollisionShape
└ MeshInstance
Detecting the collision
We have two objects colliding. Either of them can detect the collision.
Detecting on a Kinematic Body
The kinematic body can only detect objects it runs into as a result of its own motion. That is, if another object hits it, it might not be able to detect it.
We have two approaches depending on whether you use move_and_collide or move_and_slide. We could also take advantage of an area for better detection; I'll come back to that.
move_and_collide
When you move the kinematic body using move_and_collide, it returns a KinematicCollision(2D) object with information about what it collided with. You can get the object it collided with by checking its collider property:
var collision := move_and_collide(direction * delta)
if collision != null:
    var body := collision.collider
    print("Collided with: ", body.name)
move_and_slide
More commonly, you move the kinematic body with move_and_slide (or move_and_slide_with_snap). In that case you should call get_slide_collision, which also gives us a KinematicCollision(2D) object. Here is an example:
velocity = move_and_slide(velocity)
for index in get_slide_count():
    var collision := get_slide_collision(index)
    var body := collision.collider
    print("Collided with: ", body.name)
As you can see, we use get_slide_count to find out how many objects the kinematic body collided with (sliding included) during its motion, and then we get each one with get_slide_collision.
Detecting on a Rigid Body
To react to collisions on a rigid body, you need to set its contact_monitor property to true and increase its contacts_reported property, which limits the number of contacts the rigid body will track. Thus, even though you are probably only interested in the collision with the kinematic body, you need to keep room for walls, the floor, or other contacts that might be happening at the same time.
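For example, in the mob's RigidBody2D script you could enable this from code instead of the Inspector (the value 4 is an arbitrary guess; tune it to your scene):

func _ready() -> void:
    contact_monitor = true    # without this, no contact signals are emitted
    contacts_reported = 4     # room for the player plus a few other contacts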
Next you are going to use the "body_entered" and "body_exited" signals. You can connect them to a script in the same rigid body (see "About connecting signals" for how to do that). The handlers would look like this:
func _on_body_entered(body: Node):
    print(body, " entered")

func _on_body_exited(body: Node):
    print(body, " exited")
Even though they are called "body_entered" and "body_exited", you can think of them as beginning and ending contact. If you only care about the instant of collision, then you want "body_entered":
func _on_body_entered(body: Node):
    print("Collided with: ", body.name)
Detecting on an Area
The area node is a collision object, but not a physics body. It does not push things around and things don't push it around; instead, they pass through it.
Areas monitor collisions by default (controlled by the monitoring property), and they also have "body_entered" and "body_exited" signals that you can use the same way as the ones on a rigid body. You can set the collision_mask to control what they detect.
Furthermore, areas have "area_entered" and "area_exited" signals. That is, they can detect other areas. This is where their collision_layer and monitorable properties come in.
I'll come back to uses of area.
Pickable
You can also make collision objects (static, kinematic, rigid bodies or area) pickable with the mouse or pointing device by setting input_pickable (or input_ray_pickable in 3D) to true.
Then connect the input_event signal (or override the _input_event method) of the body or area to find out when a player clicked it.
This is how the _input_event method would look like for a 2D node:
func _input_event(viewport: Object, event: InputEvent, shape_idx: int) -> void:
    pass
And this is how the _input_event method would look like for a 3D node:
func _input_event(camera: Object, event: InputEvent, position: Vector3, normal: Vector3, shape_idx: int) -> void:
    pass
Do not confuse these with _input.
About connecting signals
You can connect signals from the editor: in the Node panel -> Signals tab you will find the signals of the selected node. From there you can connect them to any node in the same scene that has a script attached. Thus, you should attach a script beforehand to the node you want to connect the signal to.
Once you tell Godot to connect a signal, it will ask you to select the node to connect it to and let you specify the name of the method that will handle it (a name is generated by default). Under "Advanced" you can also add extra parameters to be passed to the method, choose whether the signal will wait for the next frame ("deferred"), and choose whether it will disconnect itself once triggered ("oneshot").
By pressing the Connect button, Godot will connect the signal accordingly, creating a method with the provided name in the script of the target node if it does not exist.
It is also possible to connect and disconnect signals from code. To do that, use the connect, disconnect, and is_connected methods. So you can, for example, instance a scene from code and then use connect to hook up signals to and from the instance.
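For instance, here is a sketch of how your main scene could connect the mob's "body_entered" signal when spawning it. The path, method name, and spawn function are assumptions, not taken from your project:

const MobScene := preload("res://mob.tscn")    # assumed path

func _spawn_mob() -> void:
    var mob := MobScene.instance()
    # Remember: "body_entered" only fires if the mob has contact_monitor on.
    mob.connect("body_entered", self, "_on_mob_body_entered")
    add_child(mob)

func _on_mob_body_entered(body: Node) -> void:
    print("A mob touched: ", body.name)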
Filtering physics bodies and communication
We have already been over your first tool to filter collision: collision layers and masks.
Now, by whatever means you detect collisions, you get the node the collision happened with. But you still need to distinguish what kind of node it is.
To do that we commonly use three types of filters:
Filter by class.
Filter by group.
Filter by property.
Filter by class
To filter by class, we can use the is operator. For example:
if body is KinematicBody2D:
    print("Collided with a KinematicBody2D")
Keep in mind this also works with user defined classes. So we can do this:
if body is PlayerCharacter:
    print("Collided with a PlayerCharacter")
Provided that we added a const PlayerCharacter := preload("player_character.gd") in the script. Or we added class_name PlayerCharacter in our player character script.
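For reference, a minimal player_character.gd using the second option could start like this:

class_name PlayerCharacter
extends KinematicBody2D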
Another way to do this is with the as operator:
var player := body as PlayerCharacter
if player != null:
    print("Collided with a PlayerCharacter")
Which also gives us type safety. We can then easily access its properties:
var player := body as PlayerCharacter
if player == null:
    return
print(player.some_custom_property)
Filter by group
Nodes also have node groups. You can set them from the editor in the Node panel -> Groups tab, or manipulate them from code with add_to_group and remove_from_group. And, of course, we can check if an object is in a group with is_in_group:
if body.is_in_group("player"):
    print("Collided with a player")
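Adding the player to that group from its own script is one line, e.g.:

func _ready() -> void:
    add_to_group("player")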
Filter by property
You can, of course, filter by some property of the node. For starters, its name:
if body.name == "Player":
    print("Collided with a player")
Or you could check if the collision_layer has a flag you are interested in:
if body.collision_layer & layer_flag != 0:
    print("Collided with a player")
It is not straightforward to get the flag from the layer name, yet it is possible. I found an example elsewhere.
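Here is a hedged sketch of one way to do it, assuming the layer was named in Project Settings under 2D Physics (the helper name is mine):

func layer_flag_from_name(layer_name: String) -> int:
    # 2D physics layer names are stored as project settings.
    for i in range(1, 33):
        var setting := "layer_names/2d_physics/layer_%d" % i
        if ProjectSettings.has_setting(setting) and ProjectSettings.get_setting(setting) == layer_name:
            return 1 << (i - 1)
    return 0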
Communication
Once you have identified the object, you probably want to interact with it. Sometimes it is a good idea to add a method (func) to the player character explicitly meant to be called by other objects that collide with it.
For example, in your player character:
func on_collision(body: Node) -> void:
    print("Collided with ", body.name)
And in your rigid body:
func _on_body_entered(body: Node):
    var player := body as PlayerCharacter
    if player == null:
        return
    player.on_collision(self)
You may also be interested in a signal bus, which I have explained elsewhere.
Uses of Area
You can improve the detection of a kinematic or static body by adding an area node as a child. Give it the same collision shape as its parent, and connect its signals to the parent. That way you get "body_entered" and "body_exited" on your kinematic or static body.
The setup looks something like this:
KinematicBody2D
├ CollisionShape2D
├ Sprite
└ Area2D
└ CollisionShape2D
With signals connected from Area2D to KinematicBody2D.
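The handler on the KinematicBody2D could then look like this (assuming you connected the Area2D's "body_entered" signal with the default generated name):

func _on_Area2D_body_entered(body: Node) -> void:
    print("Touched by: ", body.name)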
You can add an area to an enemy and make it larger. This is useful to define a "view cone" where enemies can detect the player. You can also combine this with raycasts to make sure the enemy has line of vision. I recommend the video The Easy Way to Make Enemies See in Godot by GDQuest.
Areas also allow you to override gravity (as used by rigid bodies) locally. To do that, use the properties under "Physics Overrides" in the Inspector panel. They allow you to have areas where gravity has a different direction or strength, or even make gravity pull toward a point instead of along a direction.
It is also a good idea to use areas for collectible objects, which should not cause any physics reaction (no bouncing, pushing, etc.), but where you still need to detect when the player touches them.
And, of course, what I would argue is the prime use of areas: you can define areas in your map that trigger some event when the player steps on them (e.g. a door closes behind the player).
RayCast
To detect collisions with a RayCast(2D), make sure its enabled property is set to true. You can also specify its collision_mask. And make sure to set cast_to to something sensible.
The raycast lets you query what physics object is in the segment from its position to where cast_to points (the cast_to vector with the raycast's transform applied; i.e. cast_to is relative to the raycast).
Note: no, an infinite cast_to will not work. It is not simply a performance problem; the issue is that infinite vectors don't behave well when transformed (notably when rotated).
You can call is_colliding() to find out if the raycast is detecting something. And then get_collider() to get it. This is updated by Godot once per physics frame.
Example:
if $RayCast.is_colliding():
    print("Detected: ", $RayCast.get_collider())
If you need to move the raycast and check more often than once per physics frame, you need to call force_update_transform and force_raycast_update on it.
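A sketch of that; new_origin here is a hypothetical position computed elsewhere:

$RayCast2D.global_position = new_origin
$RayCast2D.force_update_transform()
$RayCast2D.force_raycast_update()
if $RayCast2D.is_colliding():
    print("Detected: ", $RayCast2D.get_collider())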
If you want to have enemies avoid falling from platforms, you can use a raycast to detect whether there is ground ahead (example).
3D games may also use raycasts to detect what the player is looking at, or what the player clicked on. In 2D, you may want the method intersect_point I mention below.
Physics Queries
Sometimes we want to ask the Godot physics engine about things without any actual collisions or extra nodes (such as areas and raycasts).
First of all, move_and_collide has a test_only parameter, which, if set to true, will give you the collision information, but not actually move the kinematic body.
Second, your rigid bodies have a test_motion method that will tell you if the rigid body would collide or not given a motion vector.
But, third… we don't need a node dedicated to raycasting. We can do this:
3D: get_world().direct_space_state.intersect_ray(start, end). See PhysicsDirectSpaceState.
2D: get_world_2d().direct_space_state.intersect_ray(start, end). See Physics2DDirectSpaceState.
The 2D version of direct_space_state also gives you intersect_point, which allows you to check what physics object is at a specific point. There is also intersect_point_on_canvas, which lets you specify a canvas id, intended to match a CanvasLayer.
The other methods you find in direct_space_state are shape casts. That is, they do not check just a segment (like the raycast) or a point (like intersect_point), but a whole shape.
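For example, here is a sketch of a 2D ray query from a node's script; the target point is arbitrary, and the query is done in _physics_process, where the space state is safe to access:

func _physics_process(_delta: float) -> void:
    var space_state := get_world_2d().direct_space_state
    var target := global_position + Vector2(0, 100)   # arbitrary point below us
    var result := space_state.intersect_ray(global_position, target)
    if result:
        # The result dictionary includes "collider", "position" and "normal".
        print("Ray hit: ", result.collider.name)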
Tutorials and resources
See the article "Tutorials and resources" on Godot's official documentation.
A note on debugging
Everything I said above is a list of things to check when something goes wrong. Beyond that, there are some debugging tools and techniques to be aware of.
First of all, you can set breakpoints (with F9 or with the breakpoint keyword) and step through the code (F10 steps over and F11 steps into).
For debugging physics in particular you want to turn on "Visible Collision Shapes" from the debug menu.
Also, while Godot is running you can go to the Scene panel, and select the Remote tab, which will allow you to see and edit the nodes as they are on the game currently running.
You can also use the Project Camera Override option (the camera icon on the toolbar, which is disabled while the game is not running) to control the camera of the running game from the editor.
And finally, you might be familiar with using print as a debugging tool. It allows you to log when an event happened and the values of variables. The visual equivalent would be to spawn visual objects (sprites or mesh instances) that show you when and where an event was triggered (perhaps using color to convey extra information). Since Godot 3.5 includes Label3D, you could use it to display some values too.
A note on copying code
Reasons why the code you copied didn't work may include that the scene tree was set up differently, or that some signal wasn't connected. However, it may also be whitespace. In GDScript, whitespace is significant (you can review GDScript basics). You may have to adjust the indentation level to make copied code work with yours. Also, don't mix tabs and spaces, particularly in older versions of Godot.
A note on explaining problems
So you copied code and it didn't work. What does that mean? Did it do the wrong thing, or did it do nothing? Was there any error or warning? "It didn't work" isn't a good way to describe a problem.
These Q&A sites work better if you have something specific you are trying to solve, such as some code that does not work.
I want to encourage pointing out the specific problem, either as a new question on this or a similar site, or as a comment to the author of the answer or tutorial that is giving you trouble. So, yes, if there is something here that is not working, tell me in the comments and I'll try to improve the answer. But also go bother whoever provided the code you copied that didn't work (even if that is me again). Put some pressure on them to improve.
Question
Is it possible to tell OpenSceneGraph to use the Z-buffer but not update it when drawing semi-transparent objects?
Motivation
When drawing semitransparent objects, the order in which they are drawn is important, as surfaces that should be visible might be occluded if they are drawn in the wrong order. In some cases, OpenSceneGraph's own intuition about the order in which the objects should be drawn fails—semitransparent surfaces become occluded by other semitransparent surfaces, and "popping" (if that word can be used in this way) may occur, when OSG thinks the order of the object centers' distance to the camera has changed and decides to change the render order. It therefore becomes necessary to manually control the render order of semitransparent objects by manually specifying the render bin for each object using the setRenderBinDetails method on the state set.
However, this might still not always work, as in the general case it is impossible to choose a render order for the objects (even if the individual triangles in the scene were ordered) such that all fragments are drawn correctly (see e.g. the painter's problem), and one might still get occlusion. An alternative is to use depth peeling or some other order-independent transparency method, but, frankly, I don't know how difficult this would be to implement in OpenSceneGraph or how much it would slow the application down.
In my case, as a trade-off between aesthetics and algorithmic complexity and speed, I would ideally always want to draw a fragment of a semi-transparent surface, even if another fragment of another semi-transparent surface that (at that pixel) is closer to the camera has already been drawn. This would prevent both popping and occlusion of semi-transparent surfaces by other semi-transparent surfaces, and would effectively be achieved if, for every semi-transparent object rendered, the Z-buffer was used to test visibility but wasn't updated when the fragment was drawn.
You're totally on the right track.
Yes, it's possible to leave Z-test enabled but turn off Z-writes with setWriteMask() during drawing:
// Disable Z-writes
osg::ref_ptr<osg::Depth> depth = new osg::Depth;
depth->setWriteMask(false);
myNode->getOrCreateStateSet()->setAttributeAndModes(depth, osg::StateAttribute::ON);
// Enable Z-test (needs to be done after Z-writes are disabled, since the latter
// also seems to disable the Z-test)
myNode->getOrCreateStateSet()->setMode(GL_DEPTH_TEST, osg::StateAttribute::ON);
https://www.mail-archive.com/osg-users@openscenegraph.net/msg01119.html
http://public.vrac.iastate.edu/vancegroup/docs/OpenSceneGraphReferenceDocs-2.8/a00206.html#a2cef930c042c5d8cda32803e5e832dae
You may wish to check out the osgTransparencyTool nodekit we wrote for a CAD client a few years ago: https://github.com/XenonofArcticus/OSG-Transparency-Tool
It includes several transparency methods that you can test with your scenes and examine the source implementation of, including an Order Independent Transparency Depth Peeling implementation and a Delayed Blend method inspired by Open Inventor. Delayed Blend is a high performance single pass unsorted approximation that probably checks all the boxes you want if absolute transparency accuracy is not the most important criteria.
Here's a paper discussing the various approaches in excruciating detail, if you haven't read it:
http://lips.informatik.uni-leipzig.de/files/bathesis_cbluemel_digital_0.pdf
I'm currently implementing a basic deferred renderer with multithreading in Vulkan. Since my G-Buffer should have the same resolution as the final image I want to do it in a single render-pass with multiple sub-passes, according to this presentation, on slide 44 (page 138). It says:
vkCmdBeginCommandBuffer
vkCmdBeginRenderPass
vkCmdExecuteCommands
vkCmdNextSubpass
vkCmdExecuteCommands
vkCmdEndRenderPass
vkCmdEndCommandBuffer
I get that in the first sub-pass, you iterate the scene graph and record one secondary command buffer for each entity/mesh. What I don't get is how you are supposed to do the shading pass with secondary command buffers. Do you somehow split the screen into parts and render each part in a separate thread, or just record one secondary command buffer for the entire second sub-pass?
As you said, you may need to multithread your command buffer recording for the G-buffer-building subpass. For the shading pass, though, it depends on how you are doing things; in my opinion you do not need to multithread your shading subpasses. However, you must take into consideration that you can have a "by region" dependency.
So, I encourage you to proceed this way:
Before beginning your render pass, use a compute shader to splat all your lights onto the screen (this gives you a kind of array of "quads").
By splatting I mean this kind of thing: you have a point light (for example), and the idea is to compute the quad in screen space affected by the light. That gives you 4 vertices (representing a quad) that you put into an SSBO and can use as a vertex buffer in the shading subpass.
Now you begin the render pass.
Multithread the scene graph rendering if needed, and do your vkCmdExecuteCommands().
NextSubpass
Use the "array of quads" you created with the earlier compute shader (do not forget a VK_SUBPASS_EXTERNAL dependency).
NextSubpass and so on
However, you said
you iterate the scene graph and record one secondary commandbuffer for each entity/mesh.
I am not sure I really understand what you meant, but if you intend to have one secondary command buffer per mesh, I really advise you to change your approach: you should use batching. Say you have 64,000 different meshes to draw. You could, for example, create 64 command buffers (dispatched across 4 threads), with each command buffer drawing 1,000 meshes. (The numbers are arbitrary; profile your application.)
So, to answer your question: for the shading subpass I would use no secondary command buffers, or only very few (one per kind of light: punctual, directional).
What I don't get is how you are supposed to do the shading pass with secondary command buffers.
The shading pass (presumably the second subpass) would take the G-buffers created by the first subpass as input attachments. Then it would draw a screen-sized quad, using data from the G-buffers plus a set of lights (or whatever your deferred shader tries to defer).
The presentation you linked hints at this structure starting at page 13 (marked "Page 107").
The first step would be to make it work, using e.g. this SW example. The next step, optimizing it into a single render pass, should then be easier.
Very new to graphics in general, starting off with Metal for immediate needs, will try out OpenGL soon enough.
Was wondering what the question means in layman's terms. Also, what is the extent of 'n'? I have just used it as 0 in the 2D triangle I made.
In general, the color attachment(s) are where rendered images are stored (at least temporarily) during a render pass. It is common to only use one color attachment at index 0, so what you're doing is fine. It is also possible to render to multiple color attachments simultaneously, which is why there's an array. It's an advanced technique that you don't have to worry about until you see the need, at which point it should be straightforward how to do it.
There are two places where colorAttachments[n] appears in Metal. First is in MTLRenderPipelineDescriptor. The other is in MTLRenderPassDescriptor.
In both cases, the extent is given in the Metal Implementation Limits table, in the "Render targets" section, in the row labelled "Maximum number of color render targets per render pass descriptor".
For MTLRenderPipelineDescriptor, colorAttachments[n] is a reference to a MTLRenderPipelineColorAttachmentDescriptor. Here, you configure the pixel format, write mask, and color blending operation.
For MTLRenderPassDescriptor, colorAttachments[n] is a reference to a MTLRenderPassColorAttachmentDescriptor. This is a subclass of MTLRenderPassAttachmentDescriptor, which is where most of its properties are defined. Here, you configure which part of which texture you will render to, what should happen to that texture's data when the render pass starts and ends, and, if it's to be cleared, what color it should be cleared to.
Information about color attachments is split across those two objects based on how expensive it is to change. The render pipeline state object is fairly expensive to create from the pipeline descriptor. You would typically create all of your pipeline state objects once in a run and reuse them for the rest of your app's lifetime.
By contrast, you will create render command encoders from render pass descriptors fairly often; at least once per frame. They are relatively inexpensive to create, so you can change the descriptor and create a new one to render elsewhere.
The render pipeline state's color attachment pixel format has to match the pixel format of the render command encoder's color attachment texture.
Something like that which holds FlxBasic or FlxObject members, plus a position and total size, so I could group them and set their position as a group with ease.
I don't need an alternative class from another library; I can write a simple class like that myself. I'm just asking so as to not waste my time writing one.
There's nothing like that for FlxBasic or FlxObject, just for FlxSprite (which is one further down the inheritance chain). There are two options:
FlxSpriteGroup: part of core Flixel; basically a FlxSprite with a FlxGroup that can hold additional sprites and manipulates the properties of those sprites as well. This means stuff like rotation and scaling works on each individual sprite, as opposed to the group acting as a single rotated / scaled sprite.
FlxNestedSprite: part of flixel-addons (and perhaps not as well maintained as a result), pretty much an imitation of a DisplayObjectContainer.