Distorted mesh with three.js

Hi, I am trying to draw a model in three.js.
I tried drawing it with lines, which results in a correct representation of the BIM model.
Drawing it with a mesh results in a distorted model.
My approach to drawing it with a mesh is:
let vertices = [];
for (let part of this.BUILDINGINFORMATION) {
    for (let point of part.Polyhedron.Points) {
        vertices.push(point);
    }
}
let holes = [];
let triangles;
this.GEOMETRY = new THREE.Geometry();
this.GEOMETRY.vertices = vertices;
triangles = THREE.ShapeUtils.triangulateShape(vertices, holes);
for (let i = 0; i < triangles.length; i++) {
    this.GEOMETRY.faces.push(new THREE.Face3(triangles[i][0], triangles[i][1], triangles[i][2]));
}
let material = new THREE.MeshBasicMaterial();
let mesh = new THREE.Mesh(this.GEOMETRY, material);

As a disclaimer, I don't have any experience with three.js or JS in general.
First of all, you should not combine all vertices from all parts into one vertex array.
Make separate arrays for each part, or maybe even each polyhedron.
With that being said, from the documentation, it seems that ShapeUtils.triangulateShape only works on collections of 2D points. And if I'm interpreting your first picture correctly, your points are actually 3D.
https://threejs.org/docs/#api/extras/ShapeUtils.triangulateShape
Triangulation of 3D points with K vertices per face
This thread might give you some ideas on how you could reduce each face to 2D.
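For instance, here is a minimal sketch of that reduction (my own illustration, not from the thread), assuming each face is planar:
// Project the 3D points of a planar face into 2D so they can be
// passed to THREE.ShapeUtils.triangulateShape.
function projectFaceTo2D(points) { // points: array of THREE.Vector3
    // Face normal from the first three points.
    const normal = new THREE.Vector3()
        .crossVectors(
            new THREE.Vector3().subVectors(points[1], points[0]),
            new THREE.Vector3().subVectors(points[2], points[0]))
        .normalize();
    // Pick an axis not parallel to the normal, then build two tangent
    // axes spanning the face's plane.
    const helper = Math.abs(normal.x) < 0.9
        ? new THREE.Vector3(1, 0, 0)
        : new THREE.Vector3(0, 1, 0);
    const u = new THREE.Vector3().crossVectors(normal, helper).normalize();
    const v = new THREE.Vector3().crossVectors(normal, u);
    // The 2D coordinates are the projections onto the tangent axes.
    return points.map(p => new THREE.Vector2(p.dot(u), p.dot(v)));
}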
However, did you try pushing the points you have directly as faces, creating a mesh for each part?
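A minimal sketch of that idea, assuming (this is a guess about your data) that each part's Polyhedron.Points array holds THREE.Vector3 instances in consecutive triples, one triple per triangle:
for (let part of this.BUILDINGINFORMATION) {
    let geometry = new THREE.Geometry();
    geometry.vertices = part.Polyhedron.Points.slice();
    // Treat every three consecutive points as one triangle face.
    for (let i = 0; i + 2 < geometry.vertices.length; i += 3) {
        geometry.faces.push(new THREE.Face3(i, i + 1, i + 2));
    }
    geometry.computeFaceNormals();
    // One mesh per part instead of one combined mesh;
    // 'scene' stands in for wherever you add your objects.
    scene.add(new THREE.Mesh(geometry, new THREE.MeshBasicMaterial()));
}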

Related

How to draw a border outline on a group of Goldberg polyhedron faces?

I have a Goldberg polyhedron that I have procedurally generated. I would like to draw an outline effect around a group of “faces” (let's call them tiles) similar to the image below, preferably without generating two meshes, by doing the scaling in the vertex shader. Can anyone help?
My assumption is to use a scaled version of the tiles to write into a stencil buffer, then redraw those tiles comparing the stencil to draw the outline (as usual for this kind of effect), but I can't come up with an elegant solution to scale the tiles.
My best idea so far is to get the center point of the neighbouring tiles (green below) for each edge vertex (blue) and move the vertex towards them weighted by how many there are, which would leave the interior ones unmodified and the exterior ones moved inward. I think this works in principle, but I would need to generate two meshes as I couldn't do scaling this way in the vertex shader (as far as I know).
If it’s relevant this is how the polyhedron is constructed. Each tile is a separate object, the surface is triangulated with a central point and there is another point at the polyhedron’s origin (also the tile object’s origin). This is just so the tiles can be scaled uniformly and protrude from the polyhedron without creating gaps or overlaps.
Thanks in advance for any help!
EDIT:
jsb's answer was a simple and elegant solution to this problem. I just wanted to add some extra information in case someone else has the same problem.
First, here is the C# code I used to calculate these UVs:
// Use duplicate vertex count (over 4)
var vertices = mesh.vertices;
var uvs = new Vector2[vertices.Length];
for (int i = 0; i < vertices.Length; i++)
{
    var duplicateCount = vertices.Count(s => s == vertices[i]);
    var isInterior = duplicateCount > 4;
    uvs[i] = isInterior ? Vector2.zero : Vector2.one;
}
Note that this works because I have not welded any vertices in my original mesh so I can count the adjoining triangles by just looking for duplicate vertices.
You can also do it by counting triangles like this (this would work with merged vertices, at least with how Unity's mesh data is laid out):
// Use triangle count using this vertex (over 4)
var triangles = mesh.triangles;
var vertices = mesh.vertices;
var uvs = new Vector2[vertices.Length];
for (int i = 0; i < triangles.Length; i++)
{
    var triCount = triangles.Count(s => vertices[s] == vertices[triangles[i]]);
    var isInterior = triCount > 4;
    // Index into uvs by the vertex index, not the triangle-array index.
    uvs[triangles[i]] = isInterior ? Vector2.zero : Vector2.one;
}
Now on to the following problem. In my use case I also need to generate outlines for irregular tile patterns like this:
I neglected to mention this in the original post. jsb's answer is still valid, but the above code will not work as-is for this. As you can see, when we have a tile that is only connected by one edge, the connecting vertices only "share" 2 interior triangles, so we get an "exterior" edge. As a solution, I created extra vertices along the exterior edges of the tiles like so:
I did this by calculating the halfway point along the vector between the original exterior tile vertices (a + (b - a) * 0.5) and inserting a point there. But, as you can see, the simple "duplicate vertices > 4" check no longer works for determining which tiles are on the exterior.
My solution was to wind the vertices in a specific order so I know that every 3rd vertex is one I inserted along the edge like this:
Vector3 a = vertex;
Vector3 b = nextVertex;
Vector3 c = (vertex + (nextVertex - vertex) * 0.5f);
Vector3 d = tileCenter;
CreateTriangle(c, d, a);
CreateTriangle(c, b, d);
Then modify the UV code to test duplicates > 2 for these vertices (every third vertex starting at 0):
// Use duplicate vertex count
var vertices = mesh.vertices;
var uvs = new Vector2[vertices.Length];
for (int i = 0; i < vertices.Length; i++)
{
    var duplicateCount = vertices.Count(s => s == vertices[i]);
    var isMidPoint = i % 3 == 0;
    var isInterior = duplicateCount > (isMidPoint ? 2 : 4);
    uvs[i] = isInterior ? Vector2.zero : Vector2.one;
}
And here is the final result:
Thanks jsb!
One option that avoids a second mesh would be texturing:
Let's say you define 1D texture coordinates on the triangle vertices like this:
When rendering the mesh, use these coordinates to look up in a 1D texture which defines the interior and border color:
Of course, instead of using a texture, you can just as well implement this behavior in a fragment shader by thresholding the texture coordinate, conceptually:
if (u > 0.9)
    fragColor = white;
else
    fragColor = gray;
To update the outline, you would only need to upload a new set of tex coords, which are just 1 for vertices on the outline and 0 everywhere else.
Depending on whether you want the outlines to extend only into the interior of the selected region or symmetrically to both sides of the boundary, you would need to specify the tex coords either per-corner or per-vertex, respectively.

Finding closest triangle from a point using octree

I have a list of triangles in 3D space and a point described with (x,y,z) coordinates. I am writing a method for returning the closest triangle to that point.
The naive implementation I wrote initially was to loop through all the triangles, check the distance from that point and then return the one with the minimum distance. In most cases though the list of triangles I am working with consists of thousands or tens of thousands of elements, so I am looking at ways of optimising it.
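For reference, a minimal sketch of that naive scan (in plain JavaScript rather than the Java used below; distanceToTriangle is a hypothetical point-to-triangle distance helper):
function closestTriangle(point, triangles, distanceToTriangle) {
    let best = null;
    let bestDist = Infinity;
    // O(n): test every triangle and keep the nearest one.
    for (const tri of triangles) {
        const d = distanceToTriangle(point, tri);
        if (d < bestDist) {
            bestDist = d;
            best = tri;
        }
    }
    return best;
}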
I have been trying to make it work using an octree structure, so I have created an octree that stores all the triangles. I thought that a possible approach would be to find the closest cell of the octree from that point by calculating the distance between the point and the center of each cell, and then just comparing with the triangles inside that cell.
I am not sure though of how to retrieve the closest cell from the octree (it's the first time I'm using an octree). This is the method I have written so far:
public Octree getClosestCell(final Vec3D point) {
    if (children != null) {
        // Find the child whose centroid is closest to the query point...
        float minDist = Float.MAX_VALUE;
        Octree closestCell = null;
        for (int i = 0; i < 8; i++) {
            final float dist = point.distanceTo(children[i].getCentroid());
            if (dist < minDist) {
                minDist = dist;
                closestCell = children[i];
            }
        }
        // ...and recurse into it until a leaf is reached.
        return closestCell.getClosestCell(point);
    } else {
        return this;
    }
}
So to sum up, I have 2 questions:
Does the suggested approach sound like a good solution for optimising this problem?
Does the method above seem correct, or is there a better way of retrieving the closest cell?

Get facemesh position and rotation in world

I am trying to develop a filter where you have to eat elements on screen. My problem is that I cannot find a way to get the facemesh position and rotation in the world, so that I can compare the coordinates of the facemesh with the coordinates of the elements to eat. I tried worldTransform, but it always returns 0 for my mesh. Is there any way to do that?
Thanks so much!
I don't know your scene tree configuration. Assuming you have everything inside Focal Distance:
const FaceTracking = require('FaceTracking');
const Scene = require('Scene');
const R = require('Reactive');
const face = FaceTracking.face(0);
const focalDistance = -Scene.root.find('Focal Distance').transform.z.pinLastValue();
const mouthFocal = face.cameraTransform.applyTo(face.mouth.center).add(R.point(0, 0, focalDistance));
Adding focalDistance to Z is for the transformation from camera space.

What does face list represent?

I know that in mesh representation it is common to use three lists:
Vertex list: all vertices; this is easy to understand.
Normal list: normals for each surface, I guess?
And the face list: I have no idea what it does, and I don't know how to calculate it.
For example, this is a mesh describing a triangular prism I found online.
double vertices[][] = {{0,1,-1},
                       {-0.5,0,-1},
                       {0.5,0,-1},
                       {0,1,-3},
                       {-0.5,0,-3},
                       {0.5,0,-3},
};
int faces[][] = {{0,1,2},   // front
                 {3,5,4},   // back
                 {1,4,5,2}, // base
                 {0,3,4,1}, // left side
                 {0,2,5,3}  // right side
};
double normals[][] = {{0,0,1},  // front face
                      {0,0,-1}, // back face
                      {0,-1,0}, // base
                      {-2.0/Math.sqrt(5),1.0/Math.sqrt(5),0}, // left
                      {2.0/Math.sqrt(5),1.0/Math.sqrt(5),0}   // right
};
Why are there 4 elements in the base, left and right faces but only 3 at the front and back? How do I calculate them manually?
Usually, faces stores the indices of each triangle into the vertices array. So the first face is a triangle consisting of vertices[0], vertices[1], vertices[2]. The second one consists of vertices[3], vertices[5], vertices[4], and so on.
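As a quick sketch (plain JavaScript), resolving the first face of the arrays above into actual coordinates:
// faces[0] is [0, 1, 2]: three indices into the vertices array.
const firstFace = faces[0].map(i => vertices[i]);
// firstFace is now [[0,1,-1], [-0.5,0,-1], [0.5,0,-1]], the front triangle.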
For triangular meshes, a face is a triangle defined by 3 vertices. Normally, a mesh is composed by a list of n vertices and m faces. For example:
Vertices:
Point_0 = {0,0,0}
Point_1 = {2,0,0}
Point_2 = {0,2,0}
Point_3 = {0,3,0}
...
Point_n = {30,0,0}
Faces:
Face_0 = {Point_1, Point_4, Point_5}
Face_1 = {Point_2, Point_4, Point_7}
...
Face_m = {Point_1, Point_2, Point_n}
For the sake of brevity, you can define Face_0 as a set of indices: {1,4,5}.
In addition, the normal vector is computed as a cross product of two edge vectors of the face. By convention, the direction of the normal vector points outside the mesh. For example:
normal_face_0 = CrossProduct((Point_4 - Point_1), (Point_5 - Point_4))
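A minimal sketch of that computation (plain JavaScript, with vectors as [x, y, z] arrays; the helper names are made up for illustration):
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function cross(a, b) {
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]];
}
function normalize(v) {
    const len = Math.hypot(v[0], v[1], v[2]);
    return [v[0] / len, v[1] / len, v[2] / len];
}
// Face normal from two edge vectors; counter-clockwise winding
// (seen from outside) makes the normal point outward.
function faceNormal(p0, p1, p2) {
    return normalize(cross(sub(p1, p0), sub(p2, p0)));
}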
In your case, it is quite weird to see four indices in a face definition. Normally, there should be only 3 items in the array. Are you sure this is not a mistake?

Check whether the mouse is over a model or not?

In XNA 4.0 (3D), I want to drag and drop a 3D model, so I must check whether the mouse is over a model or not. My problem is that I don't know how to convert this model's position from 3D to 2D. Is it related to the camera's view and projection matrices?
This is my code:
http://www.mediafire.com/?3835txmw3amj7pe
Check out this article on msdn: Selecting an Object with a Mouse.
From that article:
MouseState mouseState = Mouse.GetState();
int mouseX = mouseState.X;
int mouseY = mouseState.Y;
Vector3 nearsource = new Vector3((float)mouseX, (float)mouseY, 0f);
Vector3 farsource = new Vector3((float)mouseX, (float)mouseY, 1f);
Matrix world = Matrix.CreateTranslation(0, 0, 0);
Vector3 nearPoint = GraphicsDevice.Viewport.Unproject(nearsource, proj, view, world);
Vector3 farPoint = GraphicsDevice.Viewport.Unproject(farsource, proj, view, world);
// Create a ray from the near clip plane to the far clip plane.
Vector3 direction = farPoint - nearPoint;
direction.Normalize();
Ray pickRay = new Ray(nearPoint, direction);
For proj and view use your own projection and view matrices accordingly.
Now when you have your Ray, you need to have a BoundingBox or a BoundingSphere (or multiple) that are roughly encompassing your model.
A simple solution is to use BoundingSphere properties of ModelMesh for each mesh in your Model.Meshes.
foreach (ModelMesh mesh in model.Meshes)
{
    // Intersects returns the distance along the ray, or null if there is no hit.
    if (pickRay.Intersects(mesh.BoundingSphere) != null)
    {
        // The mouse is over the model!
        break;
    }
}
Since BoundingSphere of each ModelMesh is going to encompass all vertices in that mesh, it might not be the most precise representation of the mesh if it is not roughly round (i.e. if it is very long). This means that the above code could be saying that the mouse intersects the object, when visually it is way off.
The alternative is to create your bounding volumes manually. You make instances of BoundingBox or BoundingSphere objects as suits your need, and manually change their dimensions and positions based on runtime requirements. This requires slightly more work, but it isn't hard.
