How to set and get the position of an object in Godot (3D)

I have a question - how do I get and set the position of an object in Godot? I couldn't find any tutorials, so I need help.

So we have three kinds of Nodes by their positioning rules: Control, Node2D, and Spatial. Since you mention 3D, I'll talk about Spatial in this answer.
You can find the differences between Node2D and Control in Why do some nodes use "position" and others use "rect_position"?. I also talk a little bit about Spatial there.
Spatial
Spatial nodes are placed by their Transform. This is similar to how Node2D nodes are placed by their Transform2D.
You might expect Spatial to have a "position" property, the way Node2D has one. However, they don't. Instead they have a translation property, so you would get or set that instead, for example:
translation = Vector3(1.0, 2.0, 3.0)
However, there is no "global_translation". I'll get to how to work around that.
Godot 4 note: in Godot 4, Spatial has been renamed to Node3D, and it has a position property instead of translation (although no "global_position").
I was saying that Spatial nodes are placed by their Transform. A Transform has two parts:
A Basis called basis, which you can conceptualize as a set of three Vector3 that define the axes of the coordinate system of the Spatial, or as a 3 by 3 matrix.
A Vector3 called origin, which you can conceptualize as the origin position (aha!) of the coordinate system of the Spatial, or as a vector that augments the 3 by 3 matrix to add a translation to the transformation.
Thus, you can place Spatial nodes by setting the origin of their Transform (which is equivalent to setting translation). For example:
transform.origin = Vector3(1.0, 2.0, 3.0)
It is worth noting that a Spatial has a transform property, which is relative to its parent, and a global_transform, which we will say is relative to the world. If you compose (multiply, as in "matrix multiplication") all the transforms from the root of the scene tree down to your Spatial, you will have computed its global_transform.
You can directly get or set the origin of the global_transform. For example:
global_transform.origin = Vector3(1.0, 2.0, 3.0)
And this is the equivalent of the missing "global_translation"/"global_position".
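For illustration, here is a minimal sketch of that composition in plain Python/numpy rather than Godot code (the transforms and values are made up), treating each Transform as a 4 by 4 homogeneous matrix:
import numpy as np

# Hypothetical local (parent-relative) transforms for a chain of nodes,
# each written as a 4x4 homogeneous matrix (basis in the upper 3x3,
# origin in the last column).
def translation(x, y, z):
    t = np.eye(4)
    t[:3, 3] = (x, y, z)
    return t

root   = translation(10.0, 0.0, 0.0)
parent = translation(0.0, 5.0, 0.0)
child  = translation(1.0, 2.0, 3.0)   # the Spatial's own local transform

# Composing every local transform from the root down to the node gives
# its global transform; the last column is the global origin, which is
# what global_transform.origin returns in Godot.
global_transform = root @ parent @ child
print(global_transform[:3, 3])        # [11.  7.  3.]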
RigidBody
I also want to point out that directly modifying the transform of a RigidBody might be undesirable. To be more precise, the physics engine that Godot uses might want to move the RigidBody, and then you end up fighting the physics engine.
With that said, if you need to teleport a RigidBody, you can request it from the physics engine, like this:
PhysicsServer.body_set_state(
    get_rid(),
    PhysicsServer.BODY_STATE_TRANSFORM,
    Transform.IDENTITY.translated(Vector3(1.0, 2.0, 3.0))
)
Yes, this works regardless of its mode. In fact, in my experience, this approach is better than switching modes.

Related

How to do cube mapping of a static environment onto a complex model with DirectX 11 and HLSL?

I am very new to shaders and to programming in DirectX 11 (C++) and HLSL. However, I have been given a task to:
Implement cube mapping of a static environment onto a complex model (not a cube). Cube mapping allows an object to reflect the scene around it.
There aren't many resources online, so can anyone please tell me the steps to follow to achieve correct cube mapping? I'm more concerned about the calculations to do on the HLSL side.
For a very basic environment mapping, all you need to do is:
Compute the position and surface normal of the current pixel (in the pixel shader) in world space
Compute the (normalised) view direction (world space pixel position - world space camera position)
Compute the reflection vector from the view direction and surface normal (there is a built-in HLSL function to do that, if you don't want to do the math yourself)
Sample the cube map with that reflection vector, and return that color.
This then works like a mirror: the reflection vector is the direction in which your line of sight would be reflected if the surface of your mesh were a perfect mirror, and then you ask the cube map what color lies in that direction (i.e. what reflection you're seeing). How simple or complex your mesh shape is doesn't matter here, because you're only ever looking at one pixel of that (rasterized) mesh at a time, using that pixel's surface normal as a guide.
More advanced environment mapping techniques will then blur the reflection based on the surface roughness (usually by sampling different mip map levels of your cube map), merge the color with other light/color computations of that pixel, add indirect environment mapping coloring (which requires sampling a different cube map, pre-computed in a special way, directly with the direction of the surface normal), etc. That's where all the papers and such come into play, but the very basic concept of environment mapping is just a few lines of code and is very straightforward.
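If it helps, here is the same math sketched in Python with numpy rather than HLSL (the positions are made-up values; in a real pixel shader they would come interpolated from the vertex shader):
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Hypothetical world-space inputs for one pixel.
pixel_pos_ws  = np.array([1.0, 0.5, 2.0])
normal_ws     = normalize(np.array([0.0, 1.0, 0.2]))
camera_pos_ws = np.array([0.0, 1.0, -4.0])

# View direction: from the camera towards the pixel.
view_dir = normalize(pixel_pos_ws - camera_pos_ws)

# Reflect the view direction about the surface normal; this is what
# HLSL's reflect(view_dir, normal_ws) computes.
reflection = view_dir - 2.0 * np.dot(view_dir, normal_ws) * normal_ws

# A cube map would then be sampled with this direction
# (e.g. envCube.Sample(samplerState, reflection) in HLSL).
print(reflection)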

Determining the direction of face normals consistently?

I'm a newbie to computer graphics so I apologize if some of my language is inexact or the question misses something basic.
Is it possible to calculate face normals correctly, given a list of vertices, and a list of faces like this:
v1: x_1, y_1, z_1
v2: x_2, y_2, z_2
...
v_n: x_n, y_n, z_n
f1: v1,v2,v3
f2: v4,v2,v5
...
f_m: v_j, v_k, v_l
Each x_i, y_i, z_i specifies the vertex's position in 3D space (but isn't necessarily a vector)
Each f_i contains the indices of the three vertices specifying it.
I understand that you can use the cross product of two sides of a face to get a normal, but the direction of that normal depends on the order and choice of sides (from what I understand).
Given this is the only data I have, is it possible to correctly determine the direction of the normals? Or is it possible to determine them consistently at least? (All normals might be pointing in the wrong direction?)
In general there is no way to assign normals "consistently" all over a set of 3D faces... consider as an example the famous Möbius strip.
You will notice that if you start walking on it, after one loop you get to the same point but on the opposite side. In other words, this strip doesn't have two faces, but only one. If you build such a shape with a strip of triangles, of course there's no way to assign normals in a consistent way, and you'll necessarily end up having two adjacent triangles with normals pointing in opposite directions.
That said, if your collection of triangles is indeed orientable (i.e. there actually exists a consistent normal assignment), a solution is to start from one triangle and then propagate to neighbors as in a flood-fill algorithm. For example, in Python it would look something like:
active = [triangles[0]]
oriented = set([triangles[0]])
while active:
    next_active = []
    for tri in active:
        for other in neighbors(tri):       # neighbors(): triangles sharing an edge with tri
            if other not in oriented:
                if not agree(tri, other):  # agree(): do the windings match across the shared edge?
                    flip(other)            # flip(): reverse the triangle's vertex order
                oriented.add(other)
                next_active.append(other)
    active = next_active
In CG this is done by the polygon winding rule. That means all the faces are defined so that their points are in CW (or CCW) order when looking at the face directly. Then using the cross product will lead to consistent normals.
However, many meshes out there do not comply with the winding rule (some faces are CW, others CCW, not all the same), and for those it's a problem. There are two approaches I know of:
for simple shapes (not too concave)
The sign of the dot product of your face_normal and face_center-cube_center will tell you whether the normal points inside or outside of the object:
if ( dot( face_normal , face_center-cube_center ) >= 0.0 ) normal_points_out
You can even use any point of the face instead of the face center. Anyway, for more complex concave shapes this will not work correctly.
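A minimal Python sketch of this test (made-up numpy vertices; object_center stands in for cube_center above):
import numpy as np

def face_normal(a, b, c):
    # Unnormalized normal from the cross product of two edges.
    return np.cross(b - a, c - a)

def outward_normal(a, b, c, object_center):
    # Flip the cross-product normal so it points away from object_center.
    # Only reliable for roughly convex shapes, as noted above.
    n = face_normal(a, b, c)
    face_center = (a + b + c) / 3.0
    if np.dot(n, face_center - object_center) < 0.0:
        n = -n
    return n / np.linalg.norm(n)

# Hypothetical triangle on a unit cube centered at the origin.
a = np.array([0.5, -0.5, -0.5])
b = np.array([0.5,  0.5, -0.5])
c = np.array([0.5,  0.5,  0.5])
print(outward_normal(a, b, c, np.zeros(3)))   # -> [1. 0. 0.]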
test if point above face is inside or not
Simply displace the center of the face by some small distance (not too big) in the normal direction, and then test whether that point is inside the polygonal mesh or not:
if ( !inside( face_center+0.001*face_normal ) ) normal_points_out
To check whether a point is inside or not you can use a hit test.
However, if the normal is used just for lighting computations, then it is usually used inside a dot product, so we can take its absolute value instead, and that will solve all lighting problems regardless of which side the normal points to. For example:
output_color = face_color * abs(dot(face_normal,light_direction))
Some graphics APIs have this implemented already (look for double-sided materials or normals; turning them on usually uses the abs value). For example, in OpenGL:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

Create a polygon from a texture

Let's say I've got an RGBA texture, and a polygon class whose constructor takes a vector array of vertex coordinates.
Is there some way to create a polygon from this texture, for example, using the alpha channel of the texture?
in 2d
Absolutely, yes it can be done. Is it easy? No. I haven't seen any game/geometry engines that would help you out too much either. Doing it yourself, the biggest problem you're going to have is generating a simplified mesh: one quad per pixel is going to generate a lot of geometry very quickly. Holes in the geometry may be an issue if you're tracing the edges and triangulating afterwards. Then there's the issue of determining what's in and what's out. Alpha is the obvious candidate, but unless you're looking at either full-on or full-off, you may be thinking about nice smooth edges. That's going to be hard to get right and would probably involve some kind of marching squares over the interpolated alpha. So while it's not impossible, it's a lot of work.
Edit: As pointed out below, Unity does provide a method of generating a polygon from the alpha of a sprite: a PolygonCollider2D. The script reference for it mentions the pathCount variable, which describes the number of polygons it contains, which in turn describes which indices are valid for the GetPath method. So this method could be used to generate polygons from alpha. It does rely on using Unity, however. But with the combination of the sprite alpha controlling what is drawn and the collider controlling intersections with other objects, it covers a lot of use cases. This doesn't mean it's appropriate for your application.
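For what it's worth, here is a rough sketch of the marching-squares route outside Unity, assuming Pillow and scikit-image are available (the file name is made up):
import numpy as np
from PIL import Image
from skimage import measure

# Hypothetical input file; any RGBA image works.
alpha = np.asarray(Image.open("sprite.png").getchannel("A"), dtype=float) / 255.0

# Marching squares over the interpolated alpha, as mentioned above:
# trace iso-contours where alpha crosses 0.5.
contours = measure.find_contours(alpha, 0.5)

# Simplify each traced outline so we don't end up with one vertex per pixel.
polygons = [measure.approximate_polygon(c, tolerance=2.0) for c in contours]

for poly in polygons:
    # (row, col) -> (x, y); these vertex lists could feed a polygon constructor.
    print([(float(x), float(y)) for y, x in poly])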

How to create holes in objects without modifying the mesh structure in WebGL?

I'm new to WebGL, and for an assignment I'm trying to write a function which takes as argument an object, let's say "objectA". ObjectA will not be rendered, but if it overlaps with another object in the scene, let's say "objectB", the part of objectB which is inside objectA will disappear. So the effect is that there is a hole in objectB without modifying its mesh structure.
I've managed to get this working in my own render engine, which is based on ray tracing, and which gives the following effect:
Image: initial scene
Image: with objectA removed
In the first image, the green sphere is "objectA" and the blue cube is "objectB".
So now I'm trying to program it in WebGL, but I'm a bit stuck. Because WebGL is based on rasterization rather than ray tracing, it has to be calculated in another way. A possibility could be to modify the Z-buffer algorithm, where the fragments with a z-value lying inside objectA will be ignored.
The algorithm that I have in mind works as follows: normally, only the fragment with the smallest z-value is stored at a particular pixel, holding the colour and z-value. A first modification is that, at a particular pixel, a list of all fragments belonging to that pixel is maintained; no fragments are discarded. Secondly, an extra parameter is stored per fragment identifying the object it belongs to. Next, the fragments are sorted in increasing order according to their z-value.
Then, if the first fragment belongs to objectA, it will be ignored. If the next one belongs to objectB, it will be ignored as well. If the third one belongs to objectA and the fourth one to objectB, the fourth one will be chosen because it lies outside objectA.
So the first fragment belonging to objectB will be chosen, with the constraint that the number of preceding fragments belonging to objectA is even. If it is odd, the fragment lies inside objectA and will be ignored.
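To make the selection rule concrete, here is a small CPU-side sketch in Python (the per-pixel fragment list is made up; it only illustrates the parity test):
# Hypothetical fragments for one pixel: (depth, object) pairs.
fragments = [(0.2, "A"), (0.4, "B"), (0.6, "A"), (0.8, "B")]

def visible_fragment(fragments):
    # Pick the nearest objectB fragment preceded by an even number of
    # objectA fragments (i.e. one that lies outside objectA).
    count_a = 0
    for depth, obj in sorted(fragments):
        if obj == "A":
            count_a += 1
        elif count_a % 2 == 0:   # objectB fragment outside objectA
            return depth, obj
    return None                  # nothing visible: the pixel becomes a hole

print(visible_fragment(fragments))   # -> (0.8, 'B')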
Is this somehow possible in WebGL? I've also tried to implement it via a stencil buffer, based on this blog:
WebGL : How do make part of an object transparent?
But this is written for OpenGL. I translated the code to WebGL instructions, but it didn't work at all. I'm also not sure whether it will work with a 3D object instead of a 2D triangle.
Thanks a lot in advance!
Why wouldn't you write a raytracer inside the fragment shader (aka pixel shader)?
So you would need to render a fullscreen quad (two triangles) and then the fragment shader would be responsible for the raytracing. There are plenty of resources to read/learn from.
These links might be useful:
Distance functions - by iq
How shadertoy works
Simple webgl raytracer
EDIT:
Raytracing and SDFs (signed distance functions, commonly used for constructive solid geometry (CSG)) are a good way to handle what you need, and are how intersecting objects is generally achieved. Intersections, and boolean operators in general, for mesh geometry (i.e. made of polygons) are not done during rendering; rather, special algorithms do all the processing ahead of rendering, so the resulting mesh actually exists in memory and its topology is actually calculated and then just rendered.
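As a tiny illustration of the SDF/CSG idea (standard distance formulas, not code from the linked resources), subtracting a sphere from a cube is just a max of two distances; this is CPU Python rather than GLSL, only to show the math:
import numpy as np

def sd_sphere(p, center, radius):
    return np.linalg.norm(p - center) - radius

def sd_box(p, center, half_extents):
    q = np.abs(p - center) - half_extents
    return np.linalg.norm(np.maximum(q, 0.0)) + min(max(q[0], max(q[1], q[2])), 0.0)

# CSG subtraction "cube minus sphere": negative values are inside the result.
def sd_cube_minus_sphere(p):
    return max(sd_box(p, np.zeros(3), np.array([1.0, 1.0, 1.0])),
               -sd_sphere(p, np.array([1.0, 0.0, 0.0]), 0.8))

# A raymarcher (in a fragment shader, say) would step along each ray
# until this distance gets close to zero.
print(sd_cube_minus_sphere(np.array([0.0, 0.0, 0.0])))   # inside the cube, outside the carved hole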
Depending on the specific scenario that you have, you might be able to achieve the effect under some requirements and restrictions.
There are a few important things to take into account: depth peeling (i.e. storing depth values of multiple fragments per single pixel), triangle orientation (CW or CCW), and polygon face orientation (front-facing or back-facing).
Say, for example, that both of your objects are convex; then rendering back-facing polygons of objectA, then of objectB, then front-facing polygons of A, then of B, might achieve the desired effect (I'm not including full calculations for all cases of overlaps that can exist).
Under some other sets of restrictions you might be able to achieve the effect.
In your specific example in the question, you have shown the front-facing faces of the cube, and in the second image you can see the back face of the cube. That already implies that you have at least two depth values per pixel stored somehow.
There is also a distinction between intersecting in screen space, with volumes, or with faces. Your example works with faces and is the hardest: there are two cases, the one you've shown, where mesh A's pixels that are inside mesh B are simply discarded (i.e. you drilled a hole in its surface), and the case where you do a boolean operation, where you never put a hole in the surface but in the volume. That is usually done with an algorithm that computes an output mesh. SDFs are great for volumes. Screen space is handled by simply using the depth test to discard some fragments.
Again, there are too many scenarios, and it depends on what you're trying to achieve and what constraints you're working with.

How to compute a 3d miniature model from a large set of 3d geometric models

I want to import a set of 3D geometries into the current scene; the imported geometry contains tons of basic components which may represent an
entire building. The Product Manager wants the entire building to be displayed
as a 3D miniature (colors and textures must correspond to the original building).
The problem: is there any algorithm which can handle this large amount of data in reasonable time and at reasonable memory cost?
//worst case: there may be a billion triangle surfaces in the imported data
And, by the way, I am considering another solution: using a type of texture mapping:
1. Take enough snapshots of the imported objects with the software renderer.
2. Apply the images to a surface.
3. Use some shader tricks to perform effects like bump mapping: when the view position changes, the texture will alter and make the viewer feel as if he were looking at a 3D scene.
My modeller and renderer are ACIS and HOOPS. Any ideas?
An option is to generate side views of the building at a suitable resolution using the rendering engine, and map them as textures onto a parallelepiped.
The next level of refinement is to obtain a bump or elevation map that you can use for embossing. Not the easiest to do.
If the modeler allows it, you can slice the volume using a 2D grid of "voxels" (actually prisms). You can do that by repeatedly cutting the model in two with a plane. Then, in every prism, find the vertex closest to the observer. This will give you a 2D map of elevations, at the desired resolution.
Alternatively, intersect parallel "rays" (linear objects) with the solid and keep the first endpoint.
It can also be that your modeler includes a true voxel model, or that rendering can be done with a Z-buffer that you can access.
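A rough Python/numpy sketch of that elevation-map idea (the vertex array is hypothetical, and a top-down observer is assumed):
import numpy as np

def elevation_map(vertices, grid=64):
    # Bin the model's vertices into a 2D grid (the "prisms" above) and keep,
    # in each cell, the height of the vertex closest to an observer looking
    # straight down the z axis. vertices: (N, 3) numpy array.
    xy_min = vertices[:, :2].min(axis=0)
    xy_max = vertices[:, :2].max(axis=0)
    cells = np.floor((vertices[:, :2] - xy_min) / (xy_max - xy_min) * (grid - 1)).astype(int)

    heights = np.full((grid, grid), -np.inf)   # empty cells stay at -inf
    for (ix, iy), z in zip(cells, vertices[:, 2]):
        heights[ix, iy] = max(heights[ix, iy], z)   # nearest vertex to a top-down observer
    return heights

# Example: random vertices stand in for the imported building.
print(elevation_map(np.random.rand(100_000, 3)).shape)   # -> (64, 64)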
