Questions about Catmull-Clark - graphics

I have a few questions regarding the algorithm
1) Can the input mesh be anything (e.g., a triangle mesh or a hexagonal mesh)? According to this website, Catmull-Clark works for arbitrary polygon meshes:
The paper describes the mathematics behind the rules for meshes with quads only, and then goes on to generalize this without proof for meshes with arbitrary polygons.
However, I have only seen it used in the context of quads. For example, edge points are calculated using two face points and the original edge endpoints. I can envisage scenarios where, in a non-quadrilateral mesh, there will be more than two face points involved.
2) Assuming (1) is true: according to Wikipedia,
The new mesh will consist only of quadrilaterals, which in general will not be planar. The new mesh will generally look smoother than the old mesh.
Does this mean that if I put in a triangle or hexagonal mesh, the result will be a quadrilateral mesh? If so, why?

Sure. Why didn't you believe the website?
Yes. Every face produced by Catmull-Clark will have one corner at a (repositioned) original vertex, two corners at edge points, and one corner at the face point, and will hence be a quadrilateral.
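A minimal sketch of how the rules apply to an arbitrary n-gon (the Vec3 type, function names, and mesh handling are my own illustration, not taken from the paper or the answer above):

#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

// Face point: the average of the face's vertices (works for any n, not just 4).
Vec3 facePoint(const std::vector<Vec3>& faceVerts) {
    Vec3 sum{0, 0, 0};
    for (const Vec3& v : faceVerts) sum = sum + v;
    return sum * (1.0f / static_cast<float>(faceVerts.size()));
}

// Edge point: the average of the edge's two endpoints and the face points of
// the two faces sharing the edge. On a closed manifold mesh an edge always has
// exactly two adjacent faces, no matter how many sides those faces have.
Vec3 edgePoint(const Vec3& a, const Vec3& b, const Vec3& fpA, const Vec3& fpB) {
    return (a + b + fpA + fpB) * 0.25f;
}

// Each original n-gon is then split into n quads of the form
// (repositioned original vertex, edge point, face point, edge point),
// which is why the output consists only of quads, whatever the input was.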

Computer graphics: polygon mesh

So a polygon mesh is defined as the following:
class Triangle {
    int vertices[3];   // vertex indices
    float nx, ny, nz;  // face-plane normal
};
1) Is this a convenient way to represent a mesh used with flat shading? Explain.
2) Suggest an object for which this is a good mesh format when used with Gouraud shading. Explain.
3) Suggest an object for which this is a bad mesh format when used with Gouraud shading. Explain.
So for 1, I said yes because the face-plane normal can easily be converted to a point in the middle of the face. But I read somewhere that normals don't have positions?
For 2 I said a ball; more gentle angles.
And for 3, a box; steeper angles.
I don't know, I don't think I really understand what the normal vector is.
1) Mostly yes. From the geometry-computation side this is OK; from the rendering side, however, having triangles in index-only form can sometimes be problematic (it depends on the rendering engine, hardware, etc.). It is usually faster to store the triangle points directly as vectors instead of just indices; sometimes a triangle contains both, but that wastes space.
2) Depends on how you classify what is OK and what is not. Smooth objects like a sphere will show visible faceting, while flat-sided meshes like a cube will be rendered without visible distortion in shape (but with flat-shaded colours only, so the lighting will still be wrong). So the answer depends on what you want to achieve: less lighting error, better shape recognition, or something else. Basically, using one normal per face turns Gouraud shading into flat shading. Lighting can be improved by dividing big flat surfaces into more triangles.
3) Unanswerable, for exactly the same reasons as #2.
So if you want to answer #2 and #3, you need to clarify what "good" and "bad" mean ...
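For concreteness, a hedged sketch (not from the original exchange) of the difference: the face-normal Triangle above is enough for flat shading, while Gouraud shading wants one normal per vertex so the lighting can be interpolated across the face.

// One normal for the whole face: fine for flat shading, but it degrades
// Gouraud shading to flat shading. A normal is a direction only; it has no
// position of its own.
class TriangleFlat {
    int vertices[3];     // vertex indices
    float nx, ny, nz;    // face-plane normal
};

// One normal per corner: what Gouraud shading needs in order to interpolate
// the lighting across the triangle.
class TriangleSmooth {
    int vertices[3];        // vertex indices
    float normals[3][3];    // per-vertex normals (x, y, z for each corner)
};

// In practice the per-vertex normals usually live in a shared vertex array
// (position + normal per vertex) rather than inside each triangle.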

What formula or algorithm can I use to draw a 3D Sphere without using OpenGL-like libs?

I know that there are 4 techniques to draw 3D objects:
(1) Wireframe Modeling and rendering, (2) Additive Modeling, (3) Subtractive Modeling, (4) Splines and curves.
Then, those models go through a hidden surface removal algorithm.
Am I correct?
Be that as it may, what formula or algorithm can I use to draw a 3D sphere?
I am using a low-level library named WinBGIm from the University of Colorado.
there are 4 techniques to draw 3D objects:
(1) Wireframe Modeling and rendering, (2) Additive Modeling, (3) Subtractive Modeling, (4) Splines and curves.
These are modelling techniques, not rendering techniques. They allow you to mathematically define your mesh's geometry. How you render this data onto a 2D canvas is another story.
There are two fundamental approaches to rendering 3D models on a 2D canvas.
Ray Tracing
The basic idea of ray tracing is to shoot a ray from the camera's origin through the point on the canvas whose colour needs to be determined, find which models it hits, pick the closest one, and then determine how that point is lit to compute the colour there. The lighting is computed by tracing further rays from the hit point to all the light sources in the scene. Notice that this approach eliminates the need for hidden surface determination algorithms like back-face culling, the z-buffer, etc., since the basic idea is itself rooted in a hidden surface algorithm (ray tracing).
There are packages, libraries, etc. that help you do this, but ray tracers are also commonly written from scratch as college-level projects. This approach takes more time to render (not to code), but the results are generally more pleasing than those of the approach below. It is the more popular choice when you want to render non-interactive visuals like movies.
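For the sphere specifically, the core ray tracing step can be done analytically. A hedged sketch (assumed names, normalised ray direction) of the ray/sphere intersection:

#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Ray: origin + t * dir, with dir assumed to be normalised.
// Returns the nearest positive t at which the ray hits the sphere, if any.
std::optional<double> intersectSphere(const Vec3& origin, const Vec3& dir,
                                      const Vec3& center, double radius) {
    Vec3 oc = sub(origin, center);
    double b = dot(dir, oc);                        // half the linear coefficient
    double c = dot(oc, oc) - radius * radius;
    double discriminant = b * b - c;
    if (discriminant < 0.0) return std::nullopt;    // ray misses the sphere
    double t = -b - std::sqrt(discriminant);        // nearer of the two roots
    if (t < 0.0) t = -b + std::sqrt(discriminant);  // origin is inside the sphere
    if (t < 0.0) return std::nullopt;               // sphere is behind the ray
    return t;
}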
Rasterization
This approach takes the primitives (triangles and quads) that define the models in the scene, samples them at regular intervals (the screen pixels they cover), and writes the results into a colour buffer. Here hidden surfaces are usually eliminated using the Z-buffer: a buffer that stores the depth of each fragment, and the closer fragment wins when writing to the colour buffer.
Rasterization is the more popular approach, with cheap hardware support for it available on most modern computers thanks to the years of research and money that have gone into it. Libraries like OpenGL and Direct3D are readily available to facilitate development. Although the results are less pleasing than ray tracing, it is faster to render and is thus widely used in interactive, real-time rendering like games.
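A hedged sketch of the depth test just described (the buffer layout and names are my own illustration):

#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// The fragment closest to the camera (smallest depth here) wins the pixel.
struct FrameBuffers {
    int width, height;
    std::vector<float> depth;          // initialised to +infinity
    std::vector<std::uint32_t> color;
    FrameBuffers(int w, int h)
        : width(w), height(h),
          depth(static_cast<std::size_t>(w) * h,
                std::numeric_limits<float>::infinity()),
          color(static_cast<std::size_t>(w) * h, 0) {}
};

void writeFragment(FrameBuffers& fb, int x, int y, float z, std::uint32_t rgba) {
    std::size_t i = static_cast<std::size_t>(y) * fb.width + x;
    if (z < fb.depth[i]) {   // closer fragment wins
        fb.depth[i] = z;
        fb.color[i] = rgba;
    }
}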
If you do not want to use those libraries, then you have to do what is commonly known as software rendering, i.e. you will end up implementing yourself what these libraries do.
What formula or algorithm can I use to draw a 3D Sphere?
Depends on which one of the above you choose. If you simply rasterize a 3D sphere in 2D with an orthographic projection, all you have to do is draw a circle on the canvas.
If you are looking for hidden line removal (drawing the edges rather than the inside of the faces), the solution is easy: back-face culling.
Every edge of your model belongs to two faces. For every face you can compute the normal vector and check whether it faces the observer (by the sign of the dot product of the normal and the direction of the projection line); in other words, whether the observer is located in the outer half-space defined by the plane of the face. An edge is then wholly visible if and only if it belongs to at least one front face.
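A minimal sketch of that test (the winding convention, the direction of viewDir, and the names are assumptions on my part):

#include <array>

struct Vec3 { float x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// For a projection along viewDir (pointing away from the observer), a face with
// counter-clockwise winding faces the observer when dot(normal, viewDir) < 0.
bool isFrontFace(const std::array<Vec3, 3>& face, const Vec3& viewDir) {
    Vec3 normal = cross(sub(face[1], face[0]), sub(face[2], face[0]));
    return dot(normal, viewDir) < 0.0f;
}
// An edge is then drawn if at least one of its two adjacent faces is front-facing.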
The usual discretization of the sphere is made by drawing equidistant parallels and meridians. It may be advantageous to adjust the spacing of the parallels so that all tiles have about the same area.
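A sketch of that parallels/meridians (UV sphere) discretization; the stack/slice counts and the Vertex layout are illustrative assumptions:

#include <cmath>
#include <vector>

struct Vertex { float x, y, z; };

std::vector<Vertex> makeUvSphere(float radius, int stacks, int slices) {
    std::vector<Vertex> verts;
    const float PI = 3.14159265358979f;
    for (int i = 0; i <= stacks; ++i) {            // parallels, pole to pole
        float phi = PI * i / stacks;               // 0 .. pi
        for (int j = 0; j <= slices; ++j) {        // meridians around the axis
            float theta = 2.0f * PI * j / slices;  // 0 .. 2*pi
            verts.push_back({radius * std::sin(phi) * std::cos(theta),
                             radius * std::cos(phi),
                             radius * std::sin(phi) * std::sin(theta)});
        }
    }
    // Connect (i, j), (i+1, j), (i+1, j+1), (i, j+1) into quads or triangles.
    return verts;
}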

Algorithm for cutting a mesh using another mesh

I am looking for an algorithm that given two meshes could clip one using another.
The simplest form of this is clipping a mesh using a plane. I've already implemented that by following something similar to what is described here.
What it does is basically inspect all mesh vertices and triangles with respect to the plane (the plane's normal and a point on it are given). If a triangle is completely above the plane, it is left untouched. If it falls completely below the plane, it is discarded. If some of the triangle's edges intersect the plane, the intersection points with the plane are calculated and added as new vertices. Finally, a cap is generated for the hole at the place where the mesh was cut.
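For reference, a rough sketch (with assumed names) of the per-triangle classification that this algorithm starts with:

#include <array>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Signed distance of p to the plane through planePoint with unit normal n.
float signedDistance(const Vec3& p, const Vec3& planePoint, const Vec3& n) {
    return dot(sub(p, planePoint), n);
}

enum class TriClass { Above, Below, Straddling };

TriClass classify(const std::array<Vec3, 3>& tri,
                  const Vec3& planePoint, const Vec3& n) {
    int above = 0, below = 0;
    for (const Vec3& v : tri) {
        if (signedDistance(v, planePoint, n) >= 0.0f) ++above; else ++below;
    }
    if (below == 0) return TriClass::Above;   // keep untouched
    if (above == 0) return TriClass::Below;   // discard
    return TriClass::Straddling;              // split at the edge/plane intersections
}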
The problem is that the algorithm assumes that the plane is unlimited, therefore whatever is in its path is clipped. In the simplest form, I need an extension of this without the assumption of a plane of "infinite" size.
To clarify, imagine that we have a 3D model of a desk with 2 boxes on it. The boxes are adjacent (but not touching or stacked). The user will define a cutting plane of a limited width and height underneath the first box and performs the cut. We end up with a desk model (mesh) with a box on it and another box (mesh) that can be freely moved around/manipulated.
In the general form, I'd like the user to be able to define a bounding box for the box he/she wants to separate from the desk model and perform the cut using that bounding box.
If I could extend the algorithm I already have to an algorithm with limited-sized planes, that would be great for now.
What you're looking for are constructive solid geometry (CSG) / boolean algorithms on arbitrary meshes. This is considerably more complex than slicing meshes with an infinite plane.
Among the earliest and simplest research in this area, and a good starting point, is Constructive Solid Geometry for Polyhedral Objects by Laidlaw, Trumbore, and Hughes:
http://cs.brown.edu/~jfh/papers/Laidlaw-CSG-1986/main.htm
More elaborate solutions extend upon this subject with a variety of data structures.
The real complexity of the operation lies in the slicing algorithm that slices one triangle against another. The nightmare of implementing robust CSG lies in numerical precision. With objects far more complex than a cube, it's easy to run into cases where a slice is made just barely next to a vertex (at which point you face the tough decision of whether to merge the new split vertex before carrying out more splits), where polygons are coplanar (or almost coplanar), etc.
So I suggest initially erring on the side of very high-precision floating point numbers, possibly even higher than double precision, and focusing on getting something working correctly and robustly. You can optimize later (the first pass should be to use an accelerator like an octree, kd-tree, or BVH), but you'll avoid many headaches this way in your first iteration.
This is vastly simpler to implement at render time if you're working in a ray tracer rather than modeling software, for example. With ray tracers, all you have to do for this kind of arbitrary clipping is pretend that the object being subtracted from another has its polygons flipped during the culling process. It's easy to solve robustly at the ray level, but quite a bit harder to do robustly at the geometric level.
Another thing you can do to make your life much easier, if you can afford it, is to voxelize your objects, compute the subtractions/additions/unions on the voxels, and then translate the voxels back into a mesh. This is much easier to make robust, but harder to do efficiently, and the voxel-to-polygon conversion can get quite involved if you want better results than what marching cubes provide.
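As an illustration only (the voxelisation step and the voxel-to-mesh step, e.g. marching cubes, are assumed to exist elsewhere), the boolean part of that approach is trivial once both solids live on the same grid:

#include <cstddef>
#include <cstdint>
#include <vector>

struct VoxelGrid {
    int nx = 0, ny = 0, nz = 0;
    std::vector<std::uint8_t> cell;  // 1 = solid, 0 = empty; size nx * ny * nz
    std::uint8_t& at(int x, int y, int z) { return cell[(z * ny + y) * nx + x]; }
};

// A - B: a cell stays solid only if it is solid in A and empty in B.
// The two grids are assumed to be aligned and of identical resolution.
void subtract(VoxelGrid& a, const VoxelGrid& b) {
    for (std::size_t i = 0; i < a.cell.size(); ++i)
        a.cell[i] = a.cell[i] && !b.cell[i];
}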
It's a really tough area to do extremely well and requires perseverance, hence the existence of libraries like Carve: http://carve-csg.com/about.
If someone is interested, there is currently a solution for this problem in the CGAL library. It allows clipping one triangular mesh using another mesh as the bounding volume. A usage example can be found here.

Sphere segment differences

Huh, you can build a sphere from squares, triangles, hexagons and so on and so forth, but I was wondering... which option is the most viable one?
Well, once again that's a question that differs a lot by preference and so on, but I was thinking more of what is easier to process for a computer.
Like, there will be a different number of segments when the sphere is built from triangles, squares or hexagons.
The idea behind this is to find the shape that uses the fewest segments to form a sphere.
Optional: which shape would provide the best connectivity? Like, with squares, you end up forming the topmost points with triangles, all connecting to one point. But probably there are shapes that can provide seamless results, so that the whole sphere consists of only one shape.
Triangles are used for tessellations of this type. You can form anything from them without any gaps or overlaps.

Graphics-Related Question: Mesh and Geometry

What's the difference between mesh and geometry? Aren't they the same? i.e. collection of vertices that form triangles?
A point is geometry, but it is not a mesh. A curve is geometry, but it is not a mesh. An iso-surface is geometry, but it is not... well, you get the point by now.
Meshes are geometry, not the other way around.
Geometry in the context of computing is far more limited than geometry as a branch of mathematics. There are only a few types of geometry typically used in computer graphics. Sprites are used when rendering points (particles), line segments are used when rendering curves, and meshes are used when rendering surface-like geometry.
A mesh is typically a collection of polygons/geometric objects. For instance triangles, quads or a mixture of various polygons. A mesh is simply a more complex shape.
From Wikipedia:
"Geometry is a part of mathematics concerned with questions of size, shape, and relative position of figures and with properties of space."
IMO a mesh falls under that definition.
In the context implied by your question:
A mesh is a collection of polygons arranged in such a way that each polygon shares at least one vertex with another polygon in that collection. You can reach any polygon in a mesh from any other polygon in that mesh by traversing the edges and vertices that define those polygons.
Geometry refers to any object in space whose properties may be described according to the principles of the branch of mathematics known as geometry.
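A hedged sketch of what such a collection often looks like in code (an indexed triangle mesh; the names are illustrative, not taken from the answers above):

#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh {
    std::vector<Vec3> positions;      // shared vertex pool
    std::vector<unsigned> indices;    // three indices per triangle
};
// Two triangles that share an edge simply reuse two of the same indices,
// which is what connects the polygons into a single mesh.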
Note that the term "geometry" has different meanings in mathematics and in rendering. In rendering it usually denotes what is static in a scene (walls, etc.). What is widely called a "mesh" is a group of geometrical objects (basically triangles) that describe or form an "object" in the scene - pretty much as envalid said, but usually a mesh forms a single object or entity in a scene. Very often that is how rendering engines use the term: the geometrical data of each scene element (object, entity) composes that element's mesh.
Although this is tagged "graphics", I think the answer connects with the interpretation from computational physics. There, we usually think of the geometry as an abstraction of the system that is to be represented/simulated, while the mesh is an approximation of the geometry - a compromise we usually have to make in order to represent the spatial domain within the finite memory of the machine.
You can think of them basically as regular or unstructured sets of points "sprayed" on a surface or within a volume in space.
To be able to do visualization/simulation, it is also necessary to determine the neighbors of each point - for example using Delaunay triangulation which allows you to group sets of points into elements (for which you can solve algebraic versions of the equations describing your system).
In the context of surface representation in computer graphics, I think all major APIs (e.g. OpenGL) have functions which can display these primitives (which can be triangles as given by Delaunay, quads or maybe some other elements).
