Define a non-cartesian mesh in fipy - python-3.x

I am trying to simulate the elementary unit of a 2D system with P6mm symmetry in FiPy, and I would like to define a non-Cartesian mesh that describes this system. Yet,
mesh = fipy.Grid2D(nx = 10, ny = 10, dx = 1., dy = 1.)
only returns uniform meshes. I thought of changing the FaceVariable, but it seems it only accepts Boolean variables. I could also simulate a Cartesian system equivalent to this one, but there would be redundant data. Would someone have a better approach?
Alternatively, I could define my system this way. Is there any objection to doing so?

FiPy does not provide a mesh class for what you want. UniformGrid2D could be used as the starting point for an equilateral triangular mesh (or a hexahedral mesh, if really required). StackOverflow isn't the right venue for that, though. Please open an issue if you'd like to pursue that.
Gmsh will produce structured, triangular meshes, but I'm not sure it can be forced to produce regular, equilateral triangles.
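In the meantime, here is a hedged sketch of the Gmsh route mentioned above: FiPy's Gmsh2D accepts a Gmsh .geo script, so an unstructured triangular mesh over, say, one hexagonal unit cell can be generated directly (the triangles are generally good quality but not guaranteed equilateral; radius and cellSize below are illustrative values, not from the question).

from fipy import Gmsh2D, CellVariable

cellSize = 0.1   # target edge length of the triangles (illustrative)
radius = 1.0     # circumradius of the hexagonal cell (illustrative)

# Gmsh geometry script: a regular hexagon, meshed with triangles
mesh = Gmsh2D('''
    cellSize = %(cellSize)g;
    radius = %(radius)g;
    Point(1) = {radius, 0, 0, cellSize};
    Point(2) = {radius*Cos(Pi/3), radius*Sin(Pi/3), 0, cellSize};
    Point(3) = {radius*Cos(2*Pi/3), radius*Sin(2*Pi/3), 0, cellSize};
    Point(4) = {-radius, 0, 0, cellSize};
    Point(5) = {radius*Cos(4*Pi/3), radius*Sin(4*Pi/3), 0, cellSize};
    Point(6) = {radius*Cos(5*Pi/3), radius*Sin(5*Pi/3), 0, cellSize};
    Line(1) = {1, 2};
    Line(2) = {2, 3};
    Line(3) = {3, 4};
    Line(4) = {4, 5};
    Line(5) = {5, 6};
    Line(6) = {6, 1};
    Line Loop(7) = {1, 2, 3, 4, 5, 6};
    Plane Surface(8) = {7};
''' % locals())

phi = CellVariable(name="phi", mesh=mesh, value=0.)  # variables work as on any FiPy mesh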

Related

Physically Based Shading, IBL, Half Vector, and NDotR vs NDotV

I'm trying to figure out some simple concepts about image based lighting for PBR. In many documents and code, I've seen the light direction (FragToLightDir) being set to the reflection vector (reflect(EyeToFragDir,Normal)). Then they set the half vector to the mid-way point between the light and view direction: HalfVecDir = normalize(FragToLightDir+FragToEyeDir); But doesn't this just result in the half vector being identical to the surface normal? If so, this would mean that terms like NDotH are always 1.0. Is this correct?
Here is another source of confusion for me. I'm trying to implement specular cube maps from the app Lys, using their algorithm for generating the correct roughness value to use for mip-level sampling based on roughness (here: https://docs.knaldtech.com/doku.php?id=specular_lys#pre-convolved_cube_maps_vs_path_tracers in the section Pre-convolved Cube Maps vs Path Tracers). In this document, they ask us to use NDotR as a scalar. But what is this NDotR in respect to IBL? If it means dot(Normal,ReflectDir), then isn't that exactly equivalent to dot(Normal,FragToEyeDir)? If I use either of these dot product results, the final result is too glossy at grazing angles (when compared to their more simplistic conversion using BurleyToMipSimple()), which makes me think I'm misunderstanding something about this process. I've tested the algorithm using NDotH, and it looks correct, but isn't this simply canceling out the rest of the math, since NDotH==1.0? Here is my very simple function to extract the mip level using their suggested logic:
float computeSpecularCubeMipTest(float perc_ruf)
{
    //float n_dot_r = dot( Normal, Reflect );
    float specular_power = ( 2.0 / max( EPSILON, perc_ruf*perc_ruf*perc_ruf*perc_ruf ) ) - 2.0;
    specular_power /= ( 4.0 * max( NDotR, EPSILON ) );
    return sqrt( max( EPSILON, sqrt( 2.0 / ( specular_power + 2.0 ) ) ) ) * MipScaler;
}
I realize this is an esoteric subject. Since everyone is using popular game engines these days, no one is forced to understand this madness! But I appreciate any advice on how to go about this.
Edit: Just to make sure I'm clear, I'm referring to pure image based lighting, with no directional lights, no spot lights, etc. Just a cube map that lights the whole scene, similar to the lighting in apps like Substance Painter and Blender's Viewport shading mode.
I'm not familiar with this particular app, but it looks like you're on the right track here. Part of the advantage of pre-convolving the cube maps is to customize each pixel to be the light source for a particular reflection vector, so indeed NdotV is identical to NdotR, as you've noticed. The R still needs to be calculated for the texture lookup, so it doesn't matter much which one you use for the dot product. There is no such thing as H or NdotH used for IBL lookups; those are only for point lights.
If the grazing angles look wrong, perhaps there's a Fresnel term missing somewhere? Reflections start to work differently at those angles.
For what it's worth, the glTF Sample Viewer is using NdotV for its specular IBL lookup.
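To make the "half vector collapses to the normal" point concrete, here is a small NumPy check (not from the thread; the vectors are made up): when FragToLightDir is set to the reflection of EyeToFragDir about the normal, the half vector equals the normal, NdotH is 1, and NdotR equals NdotV.

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

N = normalize(np.array([0.0, 0.0, 1.0]))    # surface normal
V = normalize(np.array([0.3, -0.2, 0.9]))   # FragToEyeDir
I = -V                                      # EyeToFragDir
L = normalize(I - 2.0 * np.dot(N, I) * N)   # reflect(EyeToFragDir, Normal)
H = normalize(L + V)                        # half vector

print(np.allclose(H, N))                    # True -> NdotH == 1
print(np.dot(N, L), np.dot(N, V))           # equal -> NdotR == NdotV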

Computer graphics: polygon mesh

So a polygon mesh is defined as follows:
class Triangle {
    int vertices[3];    // vertex indices
    float nx, ny, nz;   // face-plane normal
};
1. Is this a convenient way to represent a mesh used with flat shading? Explain.
2. Suggest an object for which this is a good mesh format when used with Gouraud shading. Explain.
3. Suggest an object for which this is a bad mesh format when used with Gouraud shading. Explain.
So for 1, I said yes because the face plane normal can be easily converted to a point in the middle of the face. I read somewhere that normals don't have positions?
For 2 I said a ball; more gentle angles
And 3 a box; steeper angles.
I don't know, I don't think I really understand what the normal vector is.
1. Mostly yes.
From the point of view of geometric computations this is fine. From a rendering standpoint, however, having triangles defined only by vertex indices can sometimes be problematic (it depends on the rendering engine, hardware, etc.). It is usually faster to store the triangle points directly as vectors instead of just indices; sometimes a triangle stores both, but that wastes space.
2. Depends on how you classify what is OK and what is not.
Smooth objects like a sphere will show visible faceting, while flat-sided meshes like a cube will render without visible distortion of shape (but with flat-shaded colors only, so the lighting will still be off). So the answer depends on what you want to achieve: less lighting error, better shape recognition, or something else. Basically, using one normal per face turns Gouraud shading into flat shading.
Lighting can be improved by dividing big flat surfaces into more triangles.
3. Unanswerable, for exactly the same reasons as #2.
So if you want to answer #2 and #3, you need to clarify what "good" and "bad" mean ...
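As a small NumPy sketch of the one-normal-per-face point (not part of the original answer): flat shading can use the stored face normal directly, while Gouraud shading needs per-vertex normals, which this format can only approximate by averaging the normals of the faces around each vertex.

import numpy as np

# Toy mesh: four vertices, two triangles sharing an edge (made-up data).
vertices = np.array([[0., 0., 0.],
                     [1., 0., 0.],
                     [1., 1., 0.5],
                     [0., 1., 0.5]])
triangles = np.array([[0, 1, 2],
                      [0, 2, 3]])            # vertex indices, as in the Triangle struct

def face_normal(tri):
    a, b, c = vertices[tri]
    n = np.cross(b - a, c - a)               # normal of the face plane (a direction, no position)
    return n / np.linalg.norm(n)

# Flat shading: one normal per face (what nx, ny, nz stores).
face_normals = np.array([face_normal(t) for t in triangles])

# Gouraud shading: per-vertex normals, reconstructed by averaging adjacent face normals.
vertex_normals = np.zeros_like(vertices)
for tri, n in zip(triangles, face_normals):
    vertex_normals[tri] += n
vertex_normals /= np.linalg.norm(vertex_normals, axis=1, keepdims=True)

print(face_normals)
print(vertex_normals)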

Triangulate camera position and orientation in regards to known objects

I made an object tracker that calculates the position of an object recorded in a live camera feed using stereoscopic cameras. The math was simple once you know the camera distance and orientation. However, now I thought it would be nice to be able to quickly extract all these parameters, so that when I change my setup or cameras I can quickly calibrate again.
To calculate the object position I made some simplifications/assumptions, which made the math easier: the cameras are in the same YZ plane, so there is only a distance in x between them. Their tilt is also just in the XY plane.
To reverse the triangulation, I thought a test pattern (a square) of 4 points with known distances to each other would suffice. Ideally I would like to get the cameras' positions (distances to the test pattern and to each other), their rotation about X (and maybe Y and Z if applicable/possible), as well as their view angle (to translate pixel positions to real-world distances; that should be a camera constant, but in case I change cameras it is quite a bit of work to determine accurately).
I started with the same trigonometric calculations, but I always end up missing parameters. I am wondering if there is an existing solution or a solid approach. If I need to add parameters (like distances; they are easy enough to measure), that's no problem (my calculations didn't give me any simple equations with that possibility, though).
I also read about homography in OpenCV, but it seems it applies to 2D space only, or does it?
Any help is appreciated!
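As a sketch of one common approach (not from this thread), OpenCV's solvePnP can recover a camera's rotation and translation relative to a known pattern like the 4-point square described above, given (at least approximate) camera intrinsics; running it once per camera also gives the cameras' poses relative to each other. All numbers below are made up for illustration.

import numpy as np
import cv2

# The test pattern: a 10 cm square on the Z = 0 plane (hypothetical size).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0],
                          [0.0, 0.1, 0.0]])

# Pixel coordinates of those corners as seen by one camera (made-up values).
image_points = np.array([[320.0, 240.0],
                         [420.0, 238.0],
                         [424.0, 330.0],
                         [318.0, 334.0]])

# Intrinsics (focal length in pixels, principal point); these encode the "view angle"
# and would normally come from a one-off cv2.calibrateCamera run.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                            # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)                    # camera rotation w.r.t. the pattern
cam_pos = (-R.T @ tvec).ravel()               # camera position in pattern coordinates
print(ok, cam_pos)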

Algorithm for cutting a mesh using another mesh

I am looking for an algorithm that, given two meshes, can clip one using the other.
The simplest form of this is clipping a mesh using a plane. I've already implemented that by following something similar to what is described here.
What it does is basically inspect all mesh vertices and triangles with respect to the plane (the plane's normal and a point on it are given). If a triangle is completely above the plane, it is left untouched. If it falls completely below the plane, it is discarded. If some of the triangle's edges intersect the plane, the intersection points with the plane are calculated and added as new vertices. Finally, a cap is generated for the hole where the mesh was cut.
The problem is that the algorithm assumes that the plane is unlimited, therefore whatever is in its path is clipped. In the simplest form, I need an extension of this without the assumption of a plane of "infinite" size.
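A minimal sketch (not the poster's code) of the plane-clipping step just described: the sign of each vertex's signed distance to the plane decides whether a triangle is kept, discarded, or split, and the split points come from edge/plane intersections.

import numpy as np

def clip_triangle_against_plane(tri, plane_point, plane_normal, eps=1e-9):
    # Classify one triangle (3x3 array of vertices) against an unbounded plane.
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(tri - plane_point, n)          # signed distance of each vertex

    if np.all(d >= -eps):
        return 'above', []                    # keep untouched
    if np.all(d <= eps):
        return 'below', []                    # discard

    cuts = []                                 # mixed signs: the triangle is split
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        da, db = d[i], d[(i + 1) % 3]
        if da * db < 0.0:                     # this edge crosses the plane
            t = da / (da - db)
            cuts.append(a + t * (b - a))      # new vertex exactly on the plane
    return 'split', cuts

The 'split' case then has to rebuild the kept part as one or two new triangles and collect the cut points for the cap, as described above.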
To clarify, imagine that we have a 3D model of a desk with 2 boxes on it. The boxes are adjacent (but not touching or stacked). The user will define a cutting plane of a limited width and height underneath the first box and performs the cut. We end up with a desk model (mesh) with a box on it and another box (mesh) that can be freely moved around/manipulated.
In the general form, I'd like the user to be able to define a bounding box for the box he/she wants to separate from the desk model and perform the cut using that bounding box.
If I could extend the algorithm I already have to an algorithm with limited-sized planes, that would be great for now.
What you're looking for are constructive solid geometry/boolean algorithms with arbitrary meshes. It's considerably more complex than slicing meshes by an infinite plane.
Among the earliest and simplest research in this area, and a good starting point, is Constructive Solid Geometry for Polyhedral Objects by Laidlaw, Trumbore, and Hughes.
http://cs.brown.edu/~jfh/papers/Laidlaw-CSG-1986/main.htm
More elaborate solutions extend upon this subject with a variety of data structures.
The real complexity of the operation lies in the slicing algorithm to slice one triangle against another. The nightmare of implementing robust CSG lies in numerical precision. It's easy when you involve objects far more complex than a cube to run into cases where a slice is made just barely next to a vertex (at which point you have the tough decision of merging the new split vertex or not prior to carrying out more splits), where polygons are coplanar (or almost), etc.
So I suggest initially erring on the side of using very high-precision floating point numbers, possibly even higher than double precision to focus on getting something working correctly and robustly. You can optimize later (first pass should be to use an accelerator like an octree/kd-tree/bvh), but you'll avoid many headaches this way in your first iteration.
This is vastly simpler to implement at render time if you're working on a raytracer rather than modeling software, for example. With raytracers, all you have to do for this kind of arbitrary clipping is treat an object used to subtract from another as having its polygons flipped during the culling process. It's easy to solve robustly at the ray level, but quite a bit harder to do robustly at the geometric level.
Another thing you can do to make your life so much easier if you can afford it is to voxelize your object, find subtractions/additions/unions of voxels, and then translate the voxels back into a mesh. This is so much easier to make robust, but harder to do efficiently and the voxel->polygon conversion can get quite involved if you want better results than what marching cubes provide.
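As a toy illustration of the voxel route (assuming both shapes have already been voxelized onto a common grid, which is the hard part), the boolean operations themselves reduce to elementwise logic on occupancy arrays; all shapes below are made up.

import numpy as np

# Hypothetical occupancy grids on a shared 64^3 lattice (True = inside the solid).
x, y, z = np.indices((64, 64, 64))
desk_with_box = (z < 20) | ((abs(x - 20) < 6) & (abs(y - 32) < 6) & (z < 40))
cutting_box   = (abs(x - 20) < 8) & (abs(y - 32) < 8) & (z >= 18) & (z < 44)

subtraction  = desk_with_box & ~cutting_box   # desk with the boxed region removed
intersection = desk_with_box & cutting_box    # the separated piece itself
union        = desk_with_box | cutting_box

# Each result then goes back through a surface extractor
# (e.g. marching cubes, as mentioned above) to get a mesh again.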
It's a really tough area to do extremely well and requires perseverance, and thus the reason for the existence of things like this: http://carve-csg.com/about.
If someone is interested, there is currently a solution for this problem in the CGAL library. It allows clipping one triangular mesh using another mesh as the bounding volume. A usage example can be found here.

Point Cloud - Principal Axes - Use of Inertia

I have point clouds of different primitive objects (cone, plane, torus, cylinder, sphere, ellipsoid). They all vary in orientation, position and scaling. Furthermore, all of them are initialized with a unique set of parameters (e.g. height, radius) so that their shapes can be quite different (some cones are tall, others are small and fat).
Now to my question:
I am trying to find the objects' "principal components". Using PCA doesn't lead to good results, since rotated primitives can have their main variation in any direction (which doesn't necessarily have to be along the length of the object).
The only chance I see is to somehow use the symmetry of my primitives. Isn't there a method based on inertia? Maybe some way to find the main symmetry axis and two others perpendicular to it?
Can you give me some advice or point me to papers or implementations (maybe even python)?
Thanks a lot, Merlin.
PS: If I only apply a PCA, especially for cones it doesn't really work. Only cones that are almost identical in shape share the same orientation, but I need them all to point in one direction (e.g. up).
So you have cones and just need to rotate them all in the same direction?
If so, you can fit a triangle to them and point the peak (e.g. with the perpendicular bisectors of the sides) toward your main axis.
You have an interesting problem. Commonly used shape descriptors (e.g. VFH) that are invariant to shape but not pose (which is what you would want, really) would not be invariant to stretching of the shape.
I think to succeed at this you need to be clearer about the invariants that you are trying to maintain when a shape changes. Is it a topological invariant? If so, then here is a good starting point: https://www.google.com.tr/search?q=topologically+invariant+shape+descriptor
I decided to just stick to simple PCA since it's the only method that is totally generic and doesn't depend on prior (expert) knowledge about the data.
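For reference, here is a hedged NumPy sketch (not from the thread) of the inertia-tensor idea raised in the question: treat every point as a unit mass, build the inertia tensor about the centroid, and take its eigenvectors as candidate axes. Note that for a plain point cloud these eigenvectors coincide with the PCA axes of the centered points, which is consistent with falling back to simple PCA.

import numpy as np

def inertia_axes(points):
    # Principal axes of inertia of a point cloud, each point treated as unit mass.
    r = points - points.mean(axis=0)                  # center on the centroid
    # Inertia tensor: I = sum_i (|r_i|^2 * E - r_i r_i^T)
    I = np.einsum('ij,ij->', r, r) * np.eye(3) - r.T @ r
    eigvals, eigvecs = np.linalg.eigh(I)              # ascending eigenvalues
    return eigvals, eigvecs                           # columns of eigvecs are the axes

# Example: a synthetic cone-like cloud along z (made-up data).
rng = np.random.default_rng(0)
z = rng.uniform(0.0, 2.0, 5000)
theta = rng.uniform(0.0, 2.0 * np.pi, 5000)
rad = (2.0 - z) * 0.3 * rng.uniform(0.0, 1.0, 5000)
pts = np.column_stack([rad * np.cos(theta), rad * np.sin(theta), z])

vals, axes = inertia_axes(pts)
print(axes[:, 0])   # axis with the smallest moment of inertia: aligned with z here (up to sign)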
