I am studying texturing in computer graphics. Can anyone explain what a parametric and a non-parametric surface are, and what the difference is between the two?
Thanks...
A parametric surface is defined by equations that generate vertex coordinates as a function of one or more free variables.
In the one-dimensional case it is customary to define parametric curves (e.g. Bézier, Lissajous, or several other types) using a free variable t, often defined on the interval [0, 1], which can be thought of as a sort of fractional arc length. An equation is specified that generates each coordinate value as a function of t. As a result, the curve can be rendered to arbitrary precision by evaluating as many vertex points as desired along the defined interval of t values.
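For example, a cubic Bézier curve can be evaluated at any t in exactly this way. Here is a minimal sketch in Python (the control points are made-up values for illustration):

import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    # Bernstein form: B(t) = (1-t)^3 P0 + 3(1-t)^2 t P1 + 3(1-t) t^2 P2 + t^3 P3
    u = 1.0 - t
    return (u**3)*p0 + 3*(u**2)*t*p1 + 3*u*(t**2)*p2 + (t**3)*p3

# Arbitrary control points; sample as densely as the rendering requires.
p0, p1, p2, p3 = (np.array([0.0, 0.0]), np.array([1.0, 2.0]),
                  np.array([3.0, 2.0]), np.array([4.0, 0.0]))
points = [cubic_bezier(p0, p1, p2, p3, t) for t in np.linspace(0.0, 1.0, 50)]

Increasing the sample count from 50 to any larger number refines the curve without changing its definition, which is exactly the "arbitrary precision" property described above.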
The alternative is a nonparametric curve, which is simply defined as a specific set of vertices that are generally connected with straight lines. Curves defined nonparametrically don't hold up well to scaling and zooming, as eventually the limitations of the defining geometry become apparent.
Parametric surfaces are the higher-dimensional equivalents of parametric curves, where two or more free variables and corresponding functions define the vertices of a mesh.
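For example, a unit sphere can be generated from two free variables u (longitude) and v (latitude). A minimal sketch in Python, with arbitrary grid resolutions:

import numpy as np

def sphere_point(u, v, radius=1.0):
    # u in [0, 2*pi) is longitude, v in [0, pi] is latitude from the pole.
    x = radius * np.sin(v) * np.cos(u)
    y = radius * np.sin(v) * np.sin(u)
    z = radius * np.cos(v)
    return np.array([x, y, z])

# Build a grid of mesh vertices; higher counts give a finer mesh.
vertices = [sphere_point(u, v)
            for v in np.linspace(0.0, np.pi, 17)
            for u in np.linspace(0.0, 2.0 * np.pi, 33)]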
I don't know much about geometric algorithms, but I just wanted to know: given a mesh, is there an algorithm that outputs a boolean which is true when the mesh is actually a hollow cylinder (like a straight pipe) and false otherwise? Any reference to the algorithm, if it exists, would be helpful.
EDIT: I am relatively sure that the body has a surface mesh; basically, only the surfaces of the object are meshed, and the meshing uses triangles.
A key problem is to find the axis of the cylinder. If it is known, you are done. If it is unknown but the cylinder is known to be without holes and delimited by two perpendicular bases, the main axis of inertia can be used. Otherwise, it can be constructed from the coordinates of five points (A Cylinder of Revolution on Five Points, http://www.cim.mcgill.ca/~paul/12ICGG6Aw.pdf).
When you know the axis, it suffices to check that the distance of every point to the axis is constant. In case the mesh describes a solid (i.e. a surface with thickness), make sure to pick the points on the outer surface, for example using the orientation of the normals; there will be two distinct distances (inner and outer radii), and the flat end faces must be ignored.
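As a rough sketch of that distance test (assuming the axis is already known as a point plus a direction, and a tolerance suited to the mesh's precision):

import numpy as np

def distances_to_axis(points, axis_point, axis_dir):
    # Distance of each point p to the line: |(p - a) - ((p - a) . d) d|
    d = axis_dir / np.linalg.norm(axis_dir)
    rel = points - axis_point
    along = rel @ d                       # scalar projection onto the axis
    radial = rel - np.outer(along, d)     # component perpendicular to the axis
    return np.linalg.norm(radial, axis=1)

def is_cylindrical(points, axis_point, axis_dir, tol=1e-4):
    r = distances_to_axis(np.asarray(points, dtype=float),
                          np.asarray(axis_point, dtype=float),
                          np.asarray(axis_dir, dtype=float))
    return np.ptp(r) < tol  # true when all radii are (nearly) equal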
I intuitively thought that if you linearly interpolated from two orthogonal vectors to another two orthogonal vectors, the resulting two vectors would also be orthogonal to each other. I assumed that if you have basis axes X, Y, Z, and these are unit vectors orthogonal to each other, you can interpolate through a rotation and the result will still be three orthogonal vectors. Apparently that's not the case (see the question I asked on Mathematics Stack Exchange).
So my plan to average tangents and normals to create a smooth-looking surface was not possible. However, I just thought: when the vertex shader sends values to the fragment/pixel shader, they are given linearly interpolated values, right? Meaning if I have normal and tangent vectors which are orthogonal, by the time they get to the fragment/pixel shader they are no longer orthogonal to each other, right? Isn't this a problem?
normals and tangents might not be orthogonal after interpolation
That is because interpolation interpolates position, not angle. If you linearly interpolate position, the angle usually changes non-linearly, so in most cases your interpolated vectors no longer correspond to each other (except at the starting and ending values).
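A quick numeric check makes this concrete. In the Python sketch below, the two pairs are orthonormal frames related by a 120-degree rotation about (1, 1, 1), yet the halfway interpolation is not orthogonal:

import numpy as np

# A 120-degree rotation about (1,1,1) maps x->y and y->z, so both
# pairs below are orthonormal (the second is a rotation of the first).
n0, t0 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
n1, t1 = np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])

s = 0.5  # halfway through the interpolation
n = (1 - s) * n0 + s * n1
t = (1 - s) * t0 + s * t1

print(np.dot(n, t))  # 0.25 -- no longer orthogonal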
smooth TBN
That is possible, but you need to orthogonalise the vectors again, like:

t = cross(b, n);   // rebuild the tangent perpendicular to bitangent and normal
b = cross(n, t);   // rebuild the bitangent perpendicular to normal and new tangent
t = normalize(t);
b = normalize(b);
n = normalize(n);

However, in most cases the loss of orthogonality is not that bad, and things work even without this step.
I'm reading this famous article and I can't seem to understand the BRDF concept, especially the bold part:
The surface's response to light is quantified by a function called the BRDF (Bidirectional Reflectance Distribution Function), which we will denote as f(l, v). Each direction (incoming and outgoing) can be parameterized with two numbers (e.g. polar coordinates), so the overall dimensionality of the BRDF is four.
To which directions is the author referring? Also, if this is 3D, then how can a direction be parameterized with two numbers and not three?
A BRDF describes the light-reflectance characteristics of a surface. For every pair of incoming (l) and outgoing (v) directions, the BRDF tells you how much light will be reflected along v. Since we are in surface space and directions are unit vectors, two polar coordinates (an elevation angle from the normal and an azimuth angle around it) are sufficient to specify any direction in the hemisphere over the reflection point; the length of the vector carries no information, which is why a third number is not needed. The hemisphere illustration from anu.edu.au shows this parameterization.
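To make the four-dimensional parameterization concrete, here is a minimal Python sketch that evaluates a simple Phong-style lobe as f(l, v); the lobe shape and exponent are illustrative assumptions, not the article's model:

import numpy as np

def spherical_to_dir(theta, phi):
    # theta: angle from the surface normal, phi: azimuth around it.
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def brdf(theta_l, phi_l, theta_v, phi_v, shininess=32.0):
    # f(l, v): four scalar inputs, one reflectance value out.
    n = np.array([0.0, 0.0, 1.0])          # surface normal in surface space
    l = spherical_to_dir(theta_l, phi_l)   # incoming light direction
    v = spherical_to_dir(theta_v, phi_v)   # outgoing view direction
    r = 2.0 * np.dot(n, l) * n - l         # mirror reflection of l about n
    return max(np.dot(r, v), 0.0) ** shininess

print(brdf(0.3, 0.0, 0.3, np.pi))  # near-mirror configuration -> 1.0

The point is the signature: two angles for l and two for v, hence a four-dimensional function.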
I understand that to create a shape (let's say a 3D sphere, for example) I have to first find the vertex locations of the shape and second, use the parametric equation to create the x, y, z points of the triangle meshes. I am currently looking at sample code for creating shapes, and it appears that after using the parametric equation to find the vertices of the triangle meshes, unit normals to the sphere at those vertices are computed.
I understand why the vertex positions in the first step are used to create the 3D shape, and that a normal vector is perpendicular to the surface, but I don't understand why the unit normal vectors at the vertices are needed to create the shape. What is the purpose of computing the normals?
I am not sure I totally understand your question, but one very important use for normals in computer graphics is calculating reflections. For instance, if you're writing a simple raytracer, Lambertian reflectance is quite easy to compute if you know the normal vector where your camera ray intersects a surface. Normals are similarly required for (off the top of my head) the majority of calculations involved in more complex rendering techniques.
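For example, Lambertian reflectance reduces to a dot product between the unit surface normal and the unit direction toward the light. A minimal sketch with made-up vectors:

import numpy as np

def lambert(normal, to_light, albedo=0.8):
    # Both vectors must be unit length; the clamp ignores light from behind.
    return albedo * max(np.dot(normal, to_light), 0.0)

n = np.array([0.0, 1.0, 0.0])                 # surface normal at the hit point
l = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)  # direction to the light
print(lambert(n, l))  # 0.8 * cos(45 deg) ~= 0.566

Without per-vertex normals there is nothing to plug into that dot product, which is why mesh-generation code computes them up front.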
I have a list of cubic Bézier curves in 3D, such that the curves are connected to each other and form a closed loop.
I am looking for a way to create a surface from the Bézier curves. Eventually I want to triangulate the surface and present it in a graphics application.
Is there an algorithm for surfacing a closed path of cubic Bézier segments?
It looks like you only know part of the surface (the part given by the Bézier curves) and you have to extrapolate the rest of the surface from it. As a simple example, imagine a bunch of circles in 3D, with centers and radii such that they can be reconstructed into a sphere.
If this is the case, you can use level sets. With level sets, you define a set of input parameters that control the force exerted by external factors on your surface and the 'tension' of the surface itself.
Crudely put, level sets define the behaviour of a surface as it expands (or contracts) over time. As it expands, it tries to maintain its smoothness while meeting other boundary conditions, like 'sticking' to the circles in this case. So if you want a sphere from a bunch of circles, the circles will exert a strong force, while the surface will also be very tense.
PhysBAM has an open-source implementation of level sets.
CGAL and PCL also provide a host of methods that generate surfaces from inputs such as point sets and implicit surfaces; you may be able to adapt one of them for your use. You can look into the algorithms they use if you want to implement one on your own. I think at least one of them uses the Poisson Surface Reconstruction algorithm.
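If a full reconstruction library is overkill, the crudest possible fallback is to sample the Bézier loop and fan-triangulate around its centroid. This Python sketch is only sensible for roughly planar, star-shaped loops, and is not the level-set approach described above:

import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    u = 1.0 - t
    return (u**3)*p0 + 3*(u**2)*t*p1 + 3*u*(t**2)*p2 + (t**3)*p3

def triangulate_loop(segments, samples_per_segment=16):
    # segments: list of (p0, p1, p2, p3) control-point tuples forming a loop.
    # Drop each segment's endpoint (endpoint=False) to avoid duplicate vertices.
    ring = [cubic_bezier(*seg, t)
            for seg in segments
            for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False)]
    ring = np.array(ring)
    centroid = ring.mean(axis=0)
    verts = np.vstack([ring, centroid])
    c = len(ring)  # index of the centroid vertex
    tris = [(i, (i + 1) % c, c) for i in range(c)]  # fan around the centroid
    return verts, tris

For anything non-planar you really do want one of the reconstruction methods above.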