What does face list represent? - graphics

I know that in mesh representation it is common to use three lists:
Vertex list: all the vertices, this is easy to understand.
Normal list: normals for each surface, I guess?
And the face list: I have no idea what it does or how to calculate it.
For example, this is a mesh describing a triangular prism I found online.
double vertices[][] = {
    {0, 1, -1},
    {-0.5, 0, -1},
    {0.5, 0, -1},
    {0, 1, -3},
    {-0.5, 0, -3},
    {0.5, 0, -3},
};

int faces[][] = {
    {0, 1, 2},    // front
    {3, 5, 4},    // back
    {1, 4, 5, 2}, // base
    {0, 3, 4, 1}, // left side
    {0, 2, 5, 3}  // right side
};

double normals[][] = {
    {0, 0, 1},  // front face
    {0, 0, -1}, // back face
    {0, -1, 0}, // base
    {-2.0/Math.sqrt(5), 1.0/Math.sqrt(5), 0}, // left
    {2.0/Math.sqrt(5), 1.0/Math.sqrt(5), 0}   // right
};
Why are there 4 elements in the base, left and right faces but only 3 at the front and back? How do I calculate them manually?

Usually, the faces array stores indices into the vertices array. So the first face is a triangle consisting of vertices[0], vertices[1] and vertices[2]; the second one consists of vertices[3], vertices[5] and vertices[4]; and so on. The faces with four indices (the base and the two rectangular sides of the prism) are quads rather than triangles, which is why they have four entries.
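For illustration, here is a minimal sketch (the question's arrays transcribed to C#) showing that resolving a face is nothing more than index lookups into the vertex list:

// Sketch: each number in a face is an index into the vertex list, so drawing a
// face means looking up the positions of its corners.
double[][] vertices =
{
    new[] {  0.0, 1, -1 }, new[] { -0.5, 0, -1 }, new[] { 0.5, 0, -1 },
    new[] {  0.0, 1, -3 }, new[] { -0.5, 0, -3 }, new[] { 0.5, 0, -3 }
};
int[][] faces =
{
    new[] { 0, 1, 2 }, new[] { 3, 5, 4 }, new[] { 1, 4, 5, 2 },
    new[] { 0, 3, 4, 1 }, new[] { 0, 2, 5, 3 }
};

foreach (int[] face in faces)
{
    foreach (int index in face)
    {
        double[] v = vertices[index];
        System.Console.Write($"({v[0]}, {v[1]}, {v[2]}) ");
    }
    System.Console.WriteLine(); // one polygon (triangle or quad) per line
}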

For triangular meshes, a face is a triangle defined by 3 vertices. Normally, a mesh is composed of a list of n vertices and m faces. For example:
Vertices:
Point_0 = {0,0,0}
Point_1 = {2,0,0}
Point_2 = {0,2,0}
Point_3 = {0,3,0}
...
Point_n = {30,0,0}
Faces:
Face_0 = {Point_1, Point_4, Point_5}
Face_1 = {Point_2, Point_4, Point_7}
...
Face_m = {Point_1, Point_2, Point_n}
For the sake of brevity, you can define Face_0 as a set of indices: {1,4,5}.
In addition, the normal vector is computed as a cross product of two edges of the face. By convention, the direction of the normal vector is directed outside the mesh. For example:
normal_face_0 = CrossProduct( (Point_4 - Point_1) , (Point_5 - Point_4) )
In your case, it is quite weird to see four indices in a face definition. Normally, there should be only 3 items in the array. Are you sure this is not a mistake?
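To make the cross-product step concrete, here is a small sketch (C#, System.Numerics; the function name is illustrative) that computes a face normal from three consecutive vertices of a face, along the lines described above:

using System.Numerics;

// Winding order decides which way the normal points; flip the operands (or negate
// the result) if it ends up pointing into the mesh instead of out of it.
static Vector3 FaceNormal(Vector3 p1, Vector3 p2, Vector3 p3)
{
    return Vector3.Normalize(Vector3.Cross(p2 - p1, p3 - p2));
}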

Related

How to draw a border outline on a group of Goldberg polyhedron faces?

I have a Goldberg polyhedron that I have procedurally generated. I would like to draw an outline effect around a group of "faces" (let's call them tiles) similar to the image below, preferably without generating a second mesh, by doing the scaling in the vertex shader. Can anyone help?
My assumption is to use a scaled version of the tiles to write into a stencil buffer, then redraw those tiles comparing the stencil to draw the outline (as usual for this kind of effect), but I can't come up with an elegant solution to scale the tiles.
My best idea so far is, for each edge vertex (blue), to get the center points of the neighbouring tiles (green below) and move the vertex towards them, weighted by how many there are; this would leave the interior vertices unmodified and move the exterior ones inward. I think this works in principle, but I would need to generate two meshes, as I couldn't do the scaling this way in the vertex shader (as far as I know).
If it’s relevant this is how the polyhedron is constructed. Each tile is a separate object, the surface is triangulated with a central point and there is another point at the polyhedron’s origin (also the tile object’s origin). This is just so the tiles can be scaled uniformly and protrude from the polyhedron without creating gaps or overlaps.
Thanks in advance for any help!
EDIT:
jsb's answer was a simple and elegant solution to this problem. I just wanted to add some extra information in case someone else has the same problem.
First, here is the C# code I used to calculate these UVs:
// Use duplicate vertex count (over 4)
var vertices = mesh.vertices;
var uvs = new Vector2[vertices.Length];
for (int i = 0; i < vertices.Length; i++)
{
    var duplicateCount = vertices.Count(s => s == vertices[i]);
    var isInterior = duplicateCount > 4;
    uvs[i] = isInterior ? Vector2.zero : Vector2.one;
}
Note that this works because I have not welded any vertices in my original mesh so I can count the adjoining triangles by just looking for duplicate vertices.
You can also do it by counting triangles like this (this would work with merged vertices, at least with how Unity's mesh data is laid out):
// Use triangle count using this vertex (over 4)
var triangles = mesh.triangles;
var vertices = mesh.vertices;
var uvs = new Vector2[vertices.Length];
for (int i = 0; i < triangles.Length; i++)
{
    // Count how many triangle corners reference this vertex position.
    var triCount = triangles.Count(s => vertices[s] == vertices[triangles[i]]);
    var isInterior = triCount > 4;
    uvs[triangles[i]] = isInterior ? Vector2.zero : Vector2.one;
}
Now on to the following problem. In my use case I also need to generate outlines for irregular tile patterns like this:
I neglected to mention this in the original post. Jsb's answer is still valid, but the above code will not work as is for this. As you can see, when we have a tile that is only connected by one edge, the connecting vertices only "share" 2 interior triangles, so we get an "exterior" edge. As a solution to this I created extra vertices along the exterior edges of the tiles like so:
I did this by calculating the halfway point along the vector between the original exterior tile vertices (a + (b - a) * 0.5) and inserting a point there. But, as you can see, the simple "duplicate vertices > 4" check no longer works for determining which vertices are on the exterior.
My solution was to wind the vertices in a specific order so I know that every 3rd vertex is one I inserted along the edge like this:
Vector3 a = vertex;
Vector3 b = nextVertex;
Vector3 c = (vertex + (nextVertex - vertex) * 0.5f);
Vector3 d = tileCenter;
CreateTriangle(c, d, a);
CreateTriangle(c, b, d);
Then modify the UV code to test duplicates > 2 for these vertices (every third vertex starting at 0):
// Use duplicate vertex count
var vertices = mesh.vertices;
var uvs = new Vector2[vertices.Length];
for (int i = 0; i < vertices.Length; i++)
{
    var duplicateCount = vertices.Count(s => s == vertices[i]);
    var isMidPoint = i % 3 == 0;
    var isInterior = duplicateCount > (isMidPoint ? 2 : 4);
    uvs[i] = isInterior ? Vector2.zero : Vector2.one;
}
And here is the final result:
Thanks jsb!
One option that avoids a second mesh would be texturing:
Let's say you define 1D texture coordinates on the triangle vertices like this:
When rendering the mesh, use these coordinates to look up in a 1D texture which defines the interior and border color:
Of course, instead of using a texture, you can just as well implement this behavior in a fragment shader by thresholding the texture coordinate, conceptually:
if (u > 0.9)
fragColor = white;
else
fragColor = gray;
To update the outline, you would only need to upload a new set of tex coords, which are just 1 for vertices on the outline and 0 everywhere else.
Depending on whether you want the outlines to extend only into the interior of the selected region or symmetrically to both sides of the boundary, you would need to specify the tex coords either per-corner or per-vertex, respectively.
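To make that last point concrete, here is a minimal Unity-flavoured sketch (mesh is the rendered mesh and outlineVertexIndices is a hypothetical HashSet<int> of the currently outlined vertices):

// Updating the outline is just uploading a new UV array: 1 for outline vertices,
// 0 everywhere else.
var uvs = new Vector2[mesh.vertexCount];
for (int i = 0; i < uvs.Length; i++)
    uvs[i] = outlineVertexIndices.Contains(i) ? Vector2.one : Vector2.zero;
mesh.uv = uvs;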

How to calculate correct plane-frustum intersection?

Question:
I need to calculate the intersection shape (purple) of a plane defined by Ax + By + Cz + D = 0 and a frustum defined by 4 rays emitting from the corners of a rectangle (red arrows). The result should be a quadrilateral (4 points), and an important requirement is that the result shape must be in the plane's local space. The plane is created with a transformation matrix T (the plane's normal is vec3(0, 0, 1) in T's space).
Explanation:
This is the perspective form of my rectangle projection into another space (transformation / matrix / node). I am able to calculate the intersection shape of any rectangle without perspective rays (all rays parallel) with a plane-line intersection algorithm (pseudocode):
Definitions:
// Plane defined by normal (A, B, C) and D
struct Plane { vec3 n; float d; };
// Line defined by 2 points
struct Line { vec3 a, b; };
Intersection:
vec3 PlaneLineIntersection(Plane plane, Line line) {
    vec3 ba = normalize(line.b - line.a);
    float dotA = dot(plane.n, line.a);
    float dotBA = dot(plane.n, ba);
    float t = (plane.d - dotA) / dotBA;
    return line.a + ba * t;
}
The perspective form comes with some problems, because some of the rays can be parallel to the plane (the intersection point is at infinity) or the final shape is self-intersecting. It works in some cases, but it's not enough for an arbitrary transformation. How do I get the correct intersection part of the plane with perspective?
Simply put, I need to get the visible part of an arbitrary plane as seen by an arbitrary perspective "camera".
Thank you for suggestions.
The intersection between a plane (one Ax + By + Cz + D = 0 equation) and a line (two plane equations) is a matter of solving the 3x3 linear system for x, y, z.
Doing all calculations in T-space (the origin is at the top of the pyramid) is easier, as some of A, B, C are 0.
What I don't know if you are aware of is that perspective is a kind of projection that distorts z ("depth", distance from the origin). So if the plane that contains the rectangle is not perpendicular to the axis of the frustum (the z-axis), then it is not a rectangle when projected onto the plane, but a trapezoid.
Anyhow, using the perspective projection matrix you can get the projected coordinates of the four rectangle corners.
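For illustration, a small sketch of that projection step (C#, System.Numerics; that library treats vectors as row vectors, so adjust to your own matrix convention):

using System.Numerics;

// Transform a corner by a projection matrix, then do the homogeneous divide
// to obtain its projected coordinates.
static Vector3 ProjectPoint(Matrix4x4 projection, Vector3 corner)
{
    Vector4 clip = Vector4.Transform(new Vector4(corner, 1f), projection);
    return new Vector3(clip.X, clip.Y, clip.Z) / clip.W; // perspective divide
}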
To tell which side of a plane a point is on, just put the point's coordinates into the plane equation and check the sign, as shown here.
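And a minimal sketch of that sign test (C#, System.Numerics; names are illustrative):

using System.Numerics;

// Evaluate the plane equation n·x + d at the point. Positive and negative results
// are the two sides; (near) zero means the point lies on the plane.
static float PlaneSide(Vector3 n, float d, Vector3 point)
{
    return Vector3.Dot(n, point) + d;
}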
Your question seems inherently mathematical, so excuse my mathematical solution on StackOverflow. If your four arrows emit from a single point and the formed side planes share a common angle, then you are looking for a solution to the frustum projection problem. Your requirements simplify the problem quite a bit because you define the plane with a normal, not two bounded vectors, so if you agree to the definitions...
then I can provide you with the mathematical solution here (an Internet Explorer .mht file, possibly requiring a modern Windows OS). If you are thinking about an actual implementation, then I can only direct you to a very similar frustum projection implementation that I have implemented/uploaded here (Lua): https://github.com/quiret/mta_lua_3d_math
The roadmap for the implementation could be as follows: create condition container classes for all sub-problems (0 < k1*a1 + k2, etc.) plus the and/or chains, write algorithms for the comparisons across and-chains as well as normal-form creation, then optimize object construction/memory allocation. Since each check for frustum intersection requires just a fixed amount of algebraic objects, you can implement an efficient cache.

Given a list of points of a polygon how do find which ones are part of a concave angle?

I have a list of consecutive points and I need to find the coordinates of a polygon that is some amount larger. I can calculate each of the points in the new polygon when the angles are convex, but I'm not sure how to adjust for angles that are concave.
Concave angles can be treated in exactly the same way as convex ones: For each vertex you generate lines that are parallel to the two original segments but shifted by your offset value. Then the vertex is replaced with the intersection of these two lines.
The difficulty is that the resulting polygon can have intersections if the original one has one or more concave angles. There are different ways to handle these intersections. Generally they can produce inner contours (holes in the polygon) but maybe you are only interested in the outer contour.
In any case you have to find the intersection points first. If you don't find any, you are finished.
Otherwise find a start point of which you can be sure that it is on the outer contour. In many cases you can take the one with smallest X coordinate for that. Then trace the polygon contour until you get to the first intersection. Add the intersection to the polygon. If you are only interested in the outer contour, then skip all following vertices until you get back to the intersection point. Then continue to add the vertexes to the resulting polygon until you get to the next intersection and so on.
If you also need the inner contours (holes) it gets a bit more complicated, but I guess you can figure this out.
I should also add that you should be prepared for special cases like (almost) duplicate edges that cause numerical problems. Generally this is not a trivial task, so if possible, try to find a suitable polygon library.
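For illustration, a minimal sketch of the shift-and-intersect idea from the first paragraph (C#, System.Numerics; it assumes counter-clockwise winding in a y-up coordinate system, and the names are mine):

using System.Numerics;

// a, b, c are consecutive vertices. The corner at b is replaced by the intersection
// of the two adjacent edges shifted outward by "offset"; this handles convex and
// concave corners alike.
static Vector2 OffsetVertex(Vector2 a, Vector2 b, Vector2 c, float offset)
{
    Vector2 dir1 = Vector2.Normalize(b - a);                 // incoming edge direction
    Vector2 dir2 = Vector2.Normalize(c - b);                 // outgoing edge direction
    Vector2 p1 = b + new Vector2(dir1.Y, -dir1.X) * offset;  // point on shifted incoming edge
    Vector2 p2 = b + new Vector2(dir2.Y, -dir2.X) * offset;  // point on shifted outgoing edge

    // Intersect p1 + t*dir1 with p2 + s*dir2 (standard 2D line-line intersection).
    float denom = dir1.X * dir2.Y - dir1.Y * dir2.X;
    if (System.Math.Abs(denom) < 1e-6f)
        return p1; // edges are (nearly) parallel: the shifted point itself is fine
    float t = ((p2.X - p1.X) * dir2.Y - (p2.Y - p1.Y) * dir2.X) / denom;
    return p1 + dir1 * t;
}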
For this problem I found a relatively simple solution for figuring out whether the calculated point was inside or outside the original polygon: check whether the newly formed line intersects the original polygon's lines. A formula can be found here: http://www.geeksforgeeks.org/orientation-3-ordered-points/.
Suppose your polygon is given in counter-clockwise order. Let P1=(x1,y1), P2=(x2,y2) and P3=(x3,y3) be consecutive vertices. You want to know if the angle at P2 is "concave", i.e. more than 180 degrees. Let V1=(x4,y4)=P2-P1 and V2=(x5,y5)=P3-P2. Compute the "cross product" V1 × V2 = x4*y5 - x5*y4. This is negative iff the angle is concave.
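A minimal sketch of that test in C# (the function name is illustrative):

// For a counter-clockwise polygon, the corner at P2 is concave (interior angle
// over 180 degrees) iff the z-component of V1 x V2 is negative.
static bool IsConcave(float x1, float y1, float x2, float y2, float x3, float y3)
{
    float v1x = x2 - x1, v1y = y2 - y1; // V1 = P2 - P1
    float v2x = x3 - x2, v2y = y3 - y2; // V2 = P3 - P2
    return v1x * v2y - v1y * v2x < 0f;  // z-component of V1 x V2
}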
Here is C# code that receives a list of Vector2 representing the ordered points of a polygon and returns a list with the angle at each vertex. It first checks whether the points are in clockwise or counterclockwise order, then loops through the points computing the sign of the cross product (z) for each triple of vertices, and compares that sign with the result of the clockwise check to decide whether the computed angle should be kept or adjusted to 360 - angle. The IsClockwise function was taken from this discussion: How to determine if a list of polygon points are in clockwise order?
public bool IsClockwise(List<Vector2> vertices)
{
    double sum = 0.0;
    for (int i = 0; i < vertices.Count; i++)
    {
        Vector2 v1 = vertices[i];
        Vector2 v2 = vertices[(i + 1) % vertices.Count];
        sum += (v2.x - v1.x) * (v2.y + v1.y);
    }
    return sum > 0.0;
}

List<float> estimatePolygonAngles(List<Vector2> vertices)
{
    if (vertices.Count < 3)
        return null;

    //1. check if the points are clockwise or counterclockwise:
    int clockwise = (IsClockwise(vertices) ? 1 : -1);

    List<float> angles = new List<float>();
    List<float> crossProductsSigns = new List<float>();
    Vector2 v1, v2;

    //2. calculate the angles between each triple of vertices (the first and last
    //   angles are computed separately because the array indices wrap around):
    v1 = vertices[vertices.Count - 1] - vertices[0];
    v2 = vertices[1] - vertices[0];
    angles.Add(Vector2.Angle(v1, v2));
    crossProductsSigns.Add(Vector3.Cross(v1, v2).z > 0 ? 1 : -1);
    for (int i = 1; i < vertices.Count - 1; i++)
    {
        v1 = vertices[i - 1] - vertices[i];
        v2 = vertices[i + 1] - vertices[i];
        angles.Add(Vector2.Angle(v1, v2));
        crossProductsSigns.Add(Vector3.Cross(v1, v2).z > 0 ? 1 : -1);
    }
    v1 = vertices[vertices.Count - 2] - vertices[vertices.Count - 1];
    v2 = vertices[0] - vertices[vertices.Count - 1];
    angles.Add(Vector2.Angle(v1, v2));
    crossProductsSigns.Add(Vector3.Cross(v1, v2).z > 0 ? 1 : -1);

    //3. for each computed angle, check if the sign of the cross product matches the
    //   direction returned by the clockwise check; if it doesn't, the angle must be
    //   adjusted to 360 - angle:
    for (int i = 0; i < vertices.Count; i++)
    {
        if (crossProductsSigns[i] != clockwise)
            angles[i] = 360.0f - angles[i];
    }
    return angles;
}
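A quick usage sketch of the code above (same Unity types; the L-shaped polygon is a made-up example with exactly one reflex corner):

// L-shape listed counter-clockwise; the corner at (1, 1) is the concave one.
var poly = new List<Vector2>
{
    new Vector2(0, 0), new Vector2(2, 0), new Vector2(2, 1),
    new Vector2(1, 1), new Vector2(1, 2), new Vector2(0, 2)
};
List<float> angles = estimatePolygonAngles(poly);
for (int i = 0; i < angles.Count; i++)
    Debug.Log("vertex " + i + ": " + angles[i] + " degrees"); // (1, 1) should report 270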

GLSL cube signed distance field implementation explanation?

I've been looking at and trying to understand the following bit of code
float sdBox( vec3 p, vec3 b )
{
    vec3 d = abs(p) - b;
    return min(max(d.x, max(d.y, d.z)), 0.0) +
           length(max(d, 0.0));
}
I understand that length(d) handles the SDF case where the point is off to the 'corner' (ie. all components of d are positive) and that max(d.x, d.y, d.z) gives us the proper distance in all other cases. What I don't understand is how these two are combined here without the use of an if statement to check the signs of d's components.
When all of the d components are positive, the return expression can be reduced to length(d) because of the way min/max will evaluate - and when all of the d components are negative, we get max(d.x, d.y, d.z). But how am I supposed to understand the in-between cases? The ones where the components of d have mixed signs?
I've been trying to graph it out to no avail. I would really appreciate it if someone could explain this to me in geometrical/mathematical terms. Thanks.
If you want to know how it works, it's best to take the following steps:
1. First of all, you should know the definitions of the shapes.
2. It's always better to consider the 2D version of each shape first, because three dimensions can be harder to picture.
So let me explain some shapes:
Circle
A circle is a simple closed shape. It is the set of all points in a plane that are at a given distance from a given point, the center.
You can use distance(), length() or sqrt() to calculate the distance to the center of the billboard.
The book of shaders - Chapter 7
Square
In geometry, a square is a regular quadrilateral, which means that it has four equal sides and four equal angles (90-degree angles).
I described the 2D shapes in the previous section; now let me describe the 3D definitions.
Sphere
A sphere is a perfectly round geometrical object in three-dimensional space that is the surface of a completely round ball.
Like a circle, which geometrically is an object in two-dimensional space, a sphere is defined mathematically as the set of points that are all at the same distance r from a given point, but in three-dimensional space.
Reference: Wikipedia
Cube
In geometry, a cube is a three-dimensional solid object bounded by six square faces, facets or sides, with three meeting at each vertex.
Reference: Wikipedia
Modeling with distance functions
Now it's time to understand modeling with distance functions.
Sphere
As mentioned in the previous sections, in the code below length() is used to calculate the distance to the center of the billboard, and you can scale this shape with the s parameter.
// Sphere - signed - exact
/// <param name="p">Position.</param>
/// <param name="s">Scale.</param>
float sdSphere( vec3 p, float s )
{
    return length(p) - s;
}
Box
// Box - unsigned - exact
/// <param name="p">Position.</param>
/// <param name="b">Bound (Scale).</param>
float udBox( vec3 p, vec3 b )
{
    return length(max(abs(p) - b, 0.0));
}
length() is used as in the previous example.
Next we have max(x, 0); this is called the positive part of x (see "positive and negative parts").
This means the code below is equivalent:
float udBox( vec3 p, vec3 b )
{
    vec3 value = abs(p) - b;
    if (value.x < 0.) {
        value.x = 0.;
    }
    if (value.y < 0.) {
        value.y = 0.;
    }
    if (value.z < 0.) {
        value.z = 0.;
    }
    return length(value);
}
(In the original answer, each of these clamping steps, first x, then y, then z, and finally the length, was illustrated with a figure.)
Next we have the abs() function. It mirrors the negative half of each axis onto the positive one (removing the "extra" parts), so only one octant of the box has to be considered. Component by component it is equivalent to:
if (p.x < 0.) {
    p.x = -p.x;
}
if (p.y < 0.) {
    p.y = -p.y;
}
if (p.z < 0.) {
    p.z = -p.z;
}
(Again, the original answer illustrated each component with a figure.)
Also you can make any shape by using Constructive solid geometry.
CSG is built on 3 primitive operations: intersection ( ∩ ), union ( ∪ ), and difference ( - ).
It turns out these operations are all concisely expressible when combining two surfaces expressed as SDFs.
float intersectSDF(float distA, float distB) {
    return max(distA, distB);
}

float unionSDF(float distA, float distB) {
    return min(distA, distB);
}

float differenceSDF(float distA, float distB) {
    return max(distA, -distB);
}
I figured it out a while ago and wrote about this extensively in a blog post here: http://fabricecastel.github.io/blog/2016-02-11/main.html
Here's an excerpt (see the full post for a full explanation):
Consider the four points, A, B, C and D. Let's crudely reduce the distance function to try and get rid of the min/max functions in order to understand their effect (since that's what's puzzling about this function). The notation below is a little sloppy, I'm using square brackets to denote 2D vectors.
// 2D version of the function
d(p) = min(max(p.x, p.y), 0)
+ length(max(p, 0))
---
d(A) = min(max(-1, -1), 0)
+ length(max([-1, -1], 0))
d(A) = -1 + length[0, 0]
---
d(B) = min(max(1, 1), 0)
+ length(max([1, 1], 0))
d(B) = 0 + length[1, 1]
Ok, so far nothing special. When A is inside the square, we essentially get our first distance function based on planes/lines and when B is in the area where our first distance function is inaccurate, it gets zeroed out and we get the second distance function (the length). The trick lies in the other two cases C and D. Let's work them out.
d(C) = min(max(-1, 1), 0)
+ length(max([-1, 1], 0))
d(C) = 0 + length[0, 1]
---
d(D) = min(max(1, -1), 0)
+ length(max([1, -1], 0))
d(D) = 0 + length[1, 0]
If you look back to the graph above, you'll note C' and D'. Those points have coordinates [0,1] and [1,0], respectively. This method uses the fact that both distance fields intersect on the axes - that D and D' lie at the same distance from the square.
If we zero out all negative components of a vector and take its length, we get the proper distance between the point and the square (for points outside of the square only). This is what max(d, 0.0) does; it is a component-wise max operation. As long as the vector has at least one positive component, min(max(d.x, d.y), 0.0) will resolve to 0, leaving us with only the second part of the equation. In the event that the point is inside the square, we want to return the first part of the equation (since it represents our first distance function). If all components of the vector are negative, it's easy to see our condition will be met.
This understanding should translate back into 3D seamlessly once you wrap your head around it. You may or may not have to draw a few graphs by hand to really "get" it - I know I did and would encourage you to do so if you're dissatisfied with my explanation.
Working this implementation into our own code, we get this:
float distanceToNearestSurface(vec3 p){
    float s = 1.0;
    vec3 d = abs(p) - vec3(s);
    return min(max(d.x, max(d.y, d.z)), 0.0)
         + length(max(d, 0.0));
}
And there you have it.

shade border of 2D polygon differently

We are programming a 2D game in XNA. We have polygons which define our level elements; they are triangulated so that we can easily render them. Now I would like to write a shader which renders the polygons as outlined textures: in the middle of the polygon one would see the texture, and the border should somehow glow.
My first idea was to walk along the polygon and draw a quad on each line segment with a specific texture. This works but looks strange for small corners where the textures are forced to overlap.
My second approach was to mark all border vertices with some kind of normal pointing out of the polygon. Passing this to the shader would interpolate the normals across edges of the triangulation and I could use the interpolated "normal" as a value for shading. I could not test it yet but would that work? A special property of the triangulation is that all vertices are on the border so there are no vertices inside the polygon.
Do you guys have a better idea for what I want to achieve?
Here A picture of what it looks right now with the quad solution:
You could render your object twice: a bigger, stretched version behind the first one. Not ideal, since a complex object cannot be stretched uniformly to create a border.
If you have access to your screen buffer you can render your glow components into a rendertarget and align a full-screen quad to your viewport and add a fullscreen 2D silhouette filter to it.
This way you gain perfect control over the edge by defining its radius, colour, blur. With additional output values such as the RGB values from the object render pass you can even have different advanced glows.
I think RenderMonkey had some examples in its shader editor. It's definitely a good starting point to work with and try things out.
Probably you want to calculate a new border vertex list (easy to fill, for example, with a triangle strip together with the originals). If you use a constant border width and a convex polygon, it's just:
B_new = B - (BtoA.normalised() + BtoC.normalised()).normalised() * width;
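For illustration, a direct transcription of that formula into C# (System.Numerics; the function name is mine):

using System.Numerics;

// B_new = B - normalize(normalize(A - B) + normalize(C - B)) * width
static Vector2 BorderVertex(Vector2 a, Vector2 b, Vector2 c, float width)
{
    Vector2 toA = Vector2.Normalize(a - b);
    Vector2 toC = Vector2.Normalize(c - b);
    return b - Vector2.Normalize(toA + toC) * width;
}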
If not, then it gets more complicated; here is my old but pretty universal solution:
// Helper function. To work right, v1 must come before v2 in the vertex list, and the
// vertices must be ordered consistently (all clockwise or all anti-clockwise)!
float vectorAngle(Vector2 v1, Vector2 v2){
    float alfa;
    if (!v1.isNormalised())
        v1.normalise();
    if (!v2.isNormalised())
        v2.normalise();
    alfa = v1.dotProduct(v2);
    // Rotate v1 by 90 degrees to recover the sign of the angle.
    float help = v1.x;
    v1.x = v1.y;
    v1.y = -help;
    float angle = Math::ACos(alfa);
    if (v1.dotProduct(v2) < 0){
        angle = -angle;
    }
    return angle;
}

// Normally don't use this directly!
Vector2 calculateBorderPoint(Vector2 vec1, Vector2 vec2, float width1, float width2){
    vec1.normalise();
    vec2.normalise();
    float cos = vec1.dotProduct(vec2); // Cosine of the angle between the two (normalised) vectors
    float csc = 1.0f / Math::sqrt(1.0f - cos*cos); // Cosecant of the angle. This returns NaN if the angle is 180!!!
    // And the rest of the magic
    Vector2 difference = (vec1 * csc * width2) + (vec2 * csc * width1);

    // If you use only convex polygons (all angles < 180; exactly 180 not allowed in this case)
    // just return the value; if not, you need some more magic.
    // Both of the next things need ordered vertex lists!

    // The output vector always points to the inner side of the angle, so if the angle is reflex...
    // Note that this kind of function can only know that the angle is over 180 if you use
    // ordered vertices (all vertices always going clockwise, or always anti-clockwise)!
    if (Math::vectorAngle(vec1, vec2) > 180.0f)
        difference = -difference;

    // Ok, and if the angle was 180...
    // Note that this can fix the situation ONLY if you use ordered vertices!
    if (difference.isNaN()){
        float width = (width1 + width2) / 2.0; // If the angle is 180 and the border widths differ, you cannot get a perfect answer ;)
        difference = vec1 * width;
        // Just turn the vector -90 degrees
        float swapHelp = difference.y;
        difference.y = -difference.x;
        difference.x = swapHelp;
    }
    // If you don't want the output to be inside the old polygon but outside, just "return -difference;"
    return difference;
}

// Use this =)
Vector2 calculateBorderPoint(Vector2 A, Vector2 B, Vector2 C, float widthA, float widthB){
    return B + calculateBorderPoint(A-B, C-B, widthA, widthB);
}
Your second approach can work:
mark the outer vertices (on the border) with 1 and the inner vertices (inside) with 0.
In the pixel shader you can then choose to highlight the fragments whose interpolated value is greater than 0.9 or 0.8.
It should work.
