Get facemesh position and rotation in world space

I am trying to develop a filter where you have to eat elements on screen. My problem is that I cannot find a way to get the facemesh position and rotation in world space, so that I can compare the facemesh coordinates with the coordinates of the elements to eat. I tried with worldTransform, but it always returns 0 for my mesh. Is there any way to do that?
thanks so much

I don't know your scene tree configuration, but assuming you have everything inside the Focal Distance object:
const FaceTracking = require('FaceTracking');
const Scene = require('Scene');
const R = require('Reactive');

const face = FaceTracking.face(0);
// Distance from the camera to the Focal Distance object (its z is negative, hence the minus)
const focalDistance = -Scene.root.find('Focal Distance').transform.z.pinLastValue();
// Mouth center in camera space, shifted into Focal Distance space
const mouthFocal = face.cameraTransform.applyTo(face.mouth.center).add(R.point(0, 0, focalDistance));
Adding focalDistance to the Z coordinate converts the point from camera space into the Focal Distance object's space.
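Once both positions are expressed in the same space, the "eat" check itself is just a distance comparison. A minimal language-agnostic sketch of that last step (the names and the eat_radius threshold are illustrative, not part of the Spark AR API):

```python
import math

def distance(a, b):
    """Euclidean distance between two 3D points given as (x, y, z) tuples."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def is_eaten(mouth_pos, element_pos, eat_radius=0.05):
    """An element counts as eaten when it is within eat_radius of the mouth."""
    return distance(mouth_pos, element_pos) < eat_radius

print(is_eaten((0.0, 0.0, 0.5), (0.01, 0.0, 0.5)))  # True: right at the mouth
print(is_eaten((0.0, 0.0, 0.5), (0.5, 0.0, 0.5)))   # False: too far away
```

In Spark AR itself you would express this reactively with signals rather than plain numbers, but the geometry is the same.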

Related

Edit SurfaceTool vertices based on coordinates

Godot Version: v4.0-alpha15
I have a terrain being generated using the SurfaceTool and a MeshInstance3D. I am also moving a purple decal across the surface based on the 3D mouse position. Below is a screenshot of what this looks like.
I want to take the 3D mouse position and raise/lower the terrain on an action press. I found the MeshDataTool, but I am not quite sure whether it allows for that, and I'm also not completely sure how to convert the 3D mouse position to the corresponding vertices.
At this point I am sort of completely stuck as there's not a whole lot of documentation that I could find that helps.
I appreciate the help in advance!
I was actually able to figure this out using the MeshDataTool as I mentioned in the original post.
I use the Vector3.distance_to method to get every vertex within a distance of 3 from the target position, then I use MeshDataTool.set_vertex in combination with Vector3 methods to add to and subtract from each of those vertex positions.
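The get_vertex_id helper isn't shown in the post; based on the description (collect every vertex within a distance of 3 of the target), its logic could look something like this sketch, written here in plain Python just to show the idea:

```python
import math

def vertices_within_radius(vertices, target, radius=3.0):
    """Indices of all vertices within `radius` of `target`; this is the
    Vector3.distance_to test from the post, written out by hand."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return [i for i, v in enumerate(vertices) if dist(v, target) <= radius]

verts = [(0, 0, 0), (1, 0, 0), (10, 0, 0), (0, 2, 2)]
print(vertices_within_radius(verts, (0, 0, 0)))  # [0, 1, 3]
```

In GDScript the same loop would iterate over mdt.get_vertex_count() and test mdt.get_vertex(i).distance_to(target_position).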
var target_position = Vector3(0, 0, 0)
var mdt = MeshDataTool.new()
var mesh = terrain_mesh_instance.mesh
mdt.create_from_surface(mesh, 0)
var points = get_vertex_id(mdt, target_position)
for i in points:
    if Input.is_action_pressed("shift"):
        mdt.set_vertex(i, mdt.get_vertex(i) + Vector3(0, 1 * delta, 0))
    elif Input.is_key_pressed(KEY_ALT):
        var coords = mdt.get_vertex(i)
        coords.y = 0
        mdt.set_vertex(i, coords)
    else:
        mdt.set_vertex(i, mdt.get_vertex(i) + Vector3(0, -1 * delta, 0))
# This fixes the normals so the shadows work properly
for face in mdt.get_face_count():
    var vertex = mdt.get_face_vertex(face, 0)
    var normal = mdt.get_face_normal(face)
    mdt.set_vertex_normal(vertex, normal)
mesh.clear_surfaces()
mdt.commit_to_surface(mesh)

How to draw a border outline on a group of Goldberg polyhedron faces?

I have a Goldberg polyhedron that I have procedurally generated. I would like to draw an outline effect around a group of "faces" (let's call them tiles), similar to the image below, preferably without generating two meshes, by doing the scaling in the vertex shader instead. Can anyone help?
My assumption is to use a scaled version of the tiles to write into a stencil buffer, then redraw those tiles comparing the stencil to draw the outline (as usual for this kind of effect), but I can't come up with an elegant solution to scale the tiles.
My best idea so far is to get the center point of the neighbouring tiles (green below) for each edge vertex (blue) and move the vertex towards them weighted by how many there are, which would leave the interior ones unmodified and the exterior ones moved inward. I think this works in principle, but I would need to generate two meshes as I couldn't do scaling this way in the vertex shader (as far as I know).
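One way to read that idea (illustrative only; neighbor_centers would be the green center points adjacent to one blue edge vertex): average the neighboring centers and pull the vertex some fraction of the way toward that average. Symmetric neighbors cancel out, so interior vertices barely move, while boundary vertices, whose neighbors all lie to one side, are pulled inward.

```python
def moved_vertex(vertex, neighbor_centers, strength=0.2):
    """Pull a vertex toward the average of its neighboring tile centers.
    Interior vertices with neighbors on all sides stay roughly in place;
    exterior vertices move inward."""
    if not neighbor_centers:
        return vertex
    n = len(neighbor_centers)
    avg = tuple(sum(c[k] for c in neighbor_centers) / n for k in range(3))
    return tuple(v + (a - v) * strength for v, a in zip(vertex, avg))

# Symmetric neighbors cancel out, so an interior vertex stays put:
print(moved_vertex((0, 0, 0), [(1, 0, 0), (-1, 0, 0)]))  # (0.0, 0.0, 0.0)
# A boundary vertex with all neighbors on one side moves toward them:
print(moved_vertex((0, 0, 0), [(1, 0, 0)], strength=0.5))  # (0.5, 0.0, 0.0)
```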
If it's relevant, this is how the polyhedron is constructed: each tile is a separate object, the surface is triangulated with a central point, and there is another point at the polyhedron's origin (also the tile object's origin). This is just so the tiles can be scaled uniformly and protrude from the polyhedron without creating gaps or overlaps.
Thanks in advance for any help!
EDIT:
jsb's answer was a simple and elegant solution to this problem. I just wanted to add some extra information in case someone else has the same problem.
First, here is the C# code I used to calculate these UVs:
// Use duplicate vertex count (over 4); requires "using System.Linq;"
var vertices = mesh.vertices;
var uvs = new Vector2[vertices.Length];
for (int i = 0; i < vertices.Length; i++)
{
    var duplicateCount = vertices.Count(s => s == vertices[i]);
    var isInterior = duplicateCount > 4;
    uvs[i] = isInterior ? Vector2.zero : Vector2.one;
}
Note that this works because I have not welded any vertices in my original mesh so I can count the adjoining triangles by just looking for duplicate vertices.
You can also do it by counting triangles like this (this would work with merged vertices, at least with how Unity's mesh data is laid out):
// Use triangle count using this vertex (over 4); note that uvs must be
// indexed by the vertex index triangles[i], not the loop index i
var triangles = mesh.triangles;
var verts = mesh.vertices;  // cache the array instead of re-fetching it in the lambda
var uvs = new Vector2[verts.Length];
for (int i = 0; i < triangles.Length; i++)
{
    var triCount = triangles.Count(s => verts[s] == verts[triangles[i]]);
    var isInterior = triCount > 4;
    uvs[triangles[i]] = isInterior ? Vector2.zero : Vector2.one;
}
Now on to the following problem. In my use case I also need to generate outlines for irregular tile patterns like this:
I neglected to mention this in the original post. Jsb's answer is still valid, but the above code will not work as-is for this. As you can see, when we have a tile that is only connected by one edge, the connecting vertices only "share" 2 interior triangles, so we get an "exterior" edge. As a solution, I created extra vertices along the exterior edges of the tiles like so:
I did this by calculating the halfway point along the vector between the original exterior tile vertices (a + (b - a) * 0.5) and inserting a point there. But, as you can see, the simple "duplicate vertices > 4" test no longer works for determining which vertices are on the exterior.
My solution was to wind the vertices in a specific order so I know that every 3rd vertex is one I inserted along the edge like this:
Vector3 a = vertex;
Vector3 b = nextVertex;
Vector3 c = (vertex + (nextVertex - vertex) * 0.5f);
Vector3 d = tileCenter;
CreateTriangle(c, d, a);
CreateTriangle(c, b, d);
Then modify the UV code to test duplicates > 2 for these vertices (every third vertex starting at 0):
// Use duplicate vertex count
var vertices = mesh.vertices;
var uvs = new Vector2[vertices.Length];
for(int i = 0; i < vertices.Length; i++)
{
var duplicateCount = vertices.Count(s => s == vertices[i]);
var isMidPoint = i % 3 == 0;
var isInterior = duplicateCount > (isMidPoint ? 2 : 4);
uvs[i] = isInterior ? Vector2.zero : Vector2.one;
}
And here is the final result:
Thanks jsb!
One option that avoids a second mesh would be texturing:
Let's say you define 1D texture coordinates on the triangle vertices like this:
When rendering the mesh, use these coordinates to look up in a 1D texture which defines the interior and border color:
Of course, instead of using a texture, you can just as well implement this behavior in a fragment shader by thresholding the texture coordinate, conceptually:
if (u > 0.9)
    fragColor = white;
else
    fragColor = gray;
To update the outline, you would only need to upload a new set of tex coords, which are just 1 for vertices on the outline and 0 everywhere else.
Depending on whether you want the outlines to extend only into the interior of the selected region or symmetrically to both sides of the boundary, you would need to specify the tex coords either per-corner or per-vertex, respectively.
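A minimal sketch of the whole scheme, with plain Python standing in for the tex-coord upload and the fragment shader (boundary is assumed to be the set of vertex indices on the outline):

```python
def outline_tex_coords(vertex_count, boundary):
    """One 1D texture coordinate per vertex: 1 on the outline, 0 elsewhere."""
    return [1.0 if i in boundary else 0.0 for i in range(vertex_count)]

def frag_color(u, threshold=0.9):
    """The fragment-shader thresholding from the answer, as plain code;
    u is the interpolated coordinate, rising toward 1 near outline vertices."""
    return "white" if u > threshold else "gray"

print(outline_tex_coords(5, {0, 4}))  # [1.0, 0.0, 0.0, 0.0, 1.0]
print(frag_color(0.95))               # white
print(frag_color(0.3))                # gray
```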

Find original (x,y) new coordinates in fabric canvas

I am building a 2D warehouse map with fabricjs, where I display the racking systems as a series of rectangles.
As a first "layer/group", I add all bays as groups containing a rectangle and a text, both positioned at the same (x,y). They also both have an angle set to fit their orientation in the space.
As a second "layer/group", I add groups containing a circle and a text, representing the bay's number of issues. The (x,y) also fits the bays. This way, all my issues are always on top of the bays and the center of the group fits the rotated corner of the bay.
On the first paint, all is well aligned. Once they're shown on the page, the user can create new issues, so I am trying to position the new issue group at the original (x,y); but since the canvas can be panned and zoomed, I am having a hard time positioning it where it should be.
I've been looking at their explanations about transforms, but I can't figure out which transform should take precedence, and I suspect the nested groups are also why I am all mixed up.
By doing:
const gMatrix = matchingBay.group.calcTransformMatrix(false);
const targetToCanvas = fabric.util.transformPoint(matchingBay.aCoords.tl, gMatrix);
That puts me on the bay, but at the "group" corner, which is not what I am looking for: (x,y) should be one of the corners of the rectangle in the group, which may have been rotated.
Specifying the original (x,y) in this code gets me way off the actual painting zone.
So, my question is, how do I get the transformed (x,y) so I can add my issue group at those coordinates?
[EDIT]
Following spring's suggestion, I realized I can use the rotated rectangle's transforms to find its coordinates, so I tried:
const rect = matchingBay.getObjects()[BAY_RECTANGLE_INX];
const gMatrix = rect.calcTransformMatrix();
const targetToCanvas = fabric.util.transformPoint(rect.aCoords.bl, gMatrix);
The bottom-left corner is where I wish to add the new circle. After rotation, the bottom left is now the bottom right. But I am still off: as shown, the red circle should be on the corner. It looks like the rotation has not been applied.
qrDecompose gives me something that seems right:
{angle: -90, scaleX: 1, scaleY: -1, skewX: 0, skewY: 0, translateX: 6099.626027314769, translateY: 4785.016008065199 }
I realized that I was not thinking about it the right way. Since I already have the rectangle in hand, I just had to get its own transformation and resolve the corner myself. The following fixed my issue:
{
    [...]
    const rect = matchingBay.getObjects()[BAY_RECTANGLE_INX];
    const gMatrix = rect.calcTransformMatrix();
    const decomposed = fabric.util.qrDecompose(gMatrix);
    const trans = fabric.util.transformPoint(
        new fabric.Point((decomposed.scaleX * -rect.width) / 2,
                         (decomposed.scaleY * rect.height) / 2),
        gMatrix);
    const top = trans.y;
    const left = trans.x;
    [...]
}
Since the matrix maps the rectangle's center point, I can get the corner by subtracting half its width and height, then transforming those coordinates to find out where the matrix puts them.
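The corner computation is just an affine transform applied to an offset from the rectangle's center. A sketch using fabric's [a, b, c, d, e, f] matrix layout, with made-up numbers:

```python
def transform_point(point, m):
    """Apply a fabric.js-style affine matrix [a, b, c, d, e, f]:
    x' = a*x + c*y + e,  y' = b*x + d*y + f."""
    x, y = point
    a, b, c, d, e, f = m
    return (a * x + c * y + e, b * x + d * y + f)

# A 90-degree rotation whose center ends up at canvas point (100, 50):
m = (0, 1, -1, 0, 100, 50)
# Offset of one corner from the center of a 40 x 20 rectangle:
corner_offset = (-20, 10)
print(transform_point(corner_offset, m))  # (90, 30)
```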

distorted Mesh with threejs

Hi, I am trying to draw a model in three.js.
I tried drawing it with lines, which resulted in a correct representation of the BIM model.
Drawing it with a mesh results in a distorted model.
My approach to drawing it with a mesh is:
let vertices = [];
for (let part of this.BUILDINGINFORMATION) {
    for (let point of part.Polyhedron.Points) {
        vertices.push(point);
    }
}
let holes = [];
this.GEOMETRY = new THREE.Geometry();
this.GEOMETRY.vertices = vertices;
// triangulateShape is a plain function, not a constructor (no `new`)
let triangles = THREE.ShapeUtils.triangulateShape(vertices, holes);
for (let i = 0; i < triangles.length; i++) {
    this.GEOMETRY.faces.push(new THREE.Face3(triangles[i][0], triangles[i][1], triangles[i][2]));
}
let material = new THREE.MeshBasicMaterial();
let mesh = new THREE.Mesh(this.GEOMETRY, material);
As a disclaimer, I don't have any experience with three.js or JS in general.
First of all, you should not combine all vertices from all parts into one vertex array.
Make separate arrays for each part, or maybe even each polyhedron.
With that being said, from the documentation, it seems that ShapeUtils.triangulateShape only works on 2D collections of points. And if I'm interpreting your first picture correctly, your points are actually 3D.
https://threejs.org/docs/#api/extras/ShapeUtils.triangulateShape
Triangulation of 3D points with K vertices per face
This thread might give you some ideas on how you could reduce each face to 2D.
However, did you try pushing the points you have directly as faces, creating a mesh for each part?

What does face list represent?

I know in mesh representation it is common to use three lists:
Vertex list, all vertices, this is easy to understand
Normal list: normals for each surface, I guess?
And the face list: I have no idea what it represents or how to calculate it.
For example, this is a mesh describing a triangular prism I found online.
double vertices[][] = {
    {0, 1, -1},
    {-0.5, 0, -1},
    {0.5, 0, -1},
    {0, 1, -3},
    {-0.5, 0, -3},
    {0.5, 0, -3},
};
int faces[][] = {
    {0, 1, 2},    // front
    {3, 5, 4},    // back
    {1, 4, 5, 2}, // base
    {0, 3, 4, 1}, // left side
    {0, 2, 5, 3}  // right side
};
double normals[][] = {
    {0, 0, 1},  // front face
    {0, 0, -1}, // back face
    {0, -1, 0}, // base
    {-2.0 / Math.sqrt(5), 1.0 / Math.sqrt(5), 0}, // left
    {2.0 / Math.sqrt(5), 1.0 / Math.sqrt(5), 0}   // right
};
Why are there 4 elements in the base, left, and right faces, but only 3 in the front and back? How do I calculate them manually?
Usually, faces stores indices into the vertices array. So the first face, {0,1,2}, is a triangle consisting of vertices[0], vertices[1], vertices[2]; the second, {3,5,4}, consists of vertices[3], vertices[5], vertices[4], and so on.
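Using the prism data from the question, resolving a face's index list into actual coordinates looks like this (a sketch added for illustration, not from the original answer):

```python
vertices = [
    (0, 1, -1), (-0.5, 0, -1), (0.5, 0, -1),
    (0, 1, -3), (-0.5, 0, -3), (0.5, 0, -3),
]
faces = [
    (0, 1, 2),     # front (triangle)
    (3, 5, 4),     # back (triangle)
    (1, 4, 5, 2),  # base (quad: four indices)
]

def resolve_face(face, vertices):
    """Turn a face's index list into the vertex coordinates it references."""
    return [vertices[i] for i in face]

print(resolve_face(faces[0], vertices))
# [(0, 1, -1), (-0.5, 0, -1), (0.5, 0, -1)]
```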
For triangular meshes, a face is a triangle defined by 3 vertices. Normally, a mesh is composed by a list of n vertices and m faces. For example:
Vertices:
Point_0 = {0,0,0}
Point_1 = {2,0,0}
Point_2 = {0,2,0}
Point_3 = {0,3,0}
...
Point_n = {30,0,0}
Faces:
Face_0 = {Point_1, Point_4, Point_5}
Face_1 = {Point_2, Point_4, Point_7}
...
Face_m = {Point_1, Point_2, Point_n}
For the sake of brevity, you can define Face_0 as a set of indices: {1,4,5}.
In addition, the normal vector is computed as the cross product of two edge vectors of a face. By convention, the normal vector points outward from the mesh. For example:
normal_face_0 = CrossProduct( (Point_4 - Point_1), (Point_5 - Point_4) )
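That formula can be written out directly; a small sketch, with the result normalized to unit length:

```python
import math

def face_normal(p1, p4, p5):
    """Unit normal via the cross product of two edge vectors, matching
    normal_face_0 = CrossProduct((Point_4 - Point_1), (Point_5 - Point_4))."""
    u = tuple(b - a for a, b in zip(p1, p4))  # Point_4 - Point_1
    v = tuple(b - a for a, b in zip(p4, p5))  # Point_5 - Point_4
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# A face lying in the XY plane, wound counter-clockwise, points along +Z:
print(face_normal((0, 0, 0), (1, 0, 0), (1, 1, 0)))  # (0.0, 0.0, 1.0)
```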
In your case, the faces with four indices are not triangles but quads: the base and the side faces of the prism are rectangles described by 4 vertices each, and a renderer typically splits each of them into two triangles.
