PLY file specifications with texture coordinates

I need to read PLY files (Stanford Triangle Format) with embedded texture for some purpose. I saw several specifications of PLY files, but could not find a single source specifying the syntax for texture mapping. There seem to be many libraries which read PLY files, but most of them do not seem to support textures (they just crash; I tried 2-3 of them).
Following is in the header for a ply file with texture:
ply
format binary_little_endian 1.0
comment TextureFile Parameterization.png
element vertex 50383
property float x
property float y
property float z
property float nx
property float ny
property float nz
element face 99994
property list uint8 int32 vertex_index
property list uint8 float texcoord
end_header
What I don't understand is the line property list uint8 float texcoord. Also the list corresponding to a face is
3 1247 1257 1279 6 0.09163 0.565323 0.109197 0.565733 0.10888 0.602539 6 9 0.992157 0.992157 0.992157 0.992157 0.992157 0.992157 0.992157 0.992157 0.992157
What is this list; what is the format? I understand that PLY lets you define your own properties for the elements, but handling textures seems to be pretty much standard, and quite a few applications (like the popular Meshlab) seem to open textured PLY files using the above syntax.
I want to know the standard syntax for textured PLY files and, if possible, the source where this information is documented.

In PLY files, faces often contain lists of values, and these lists can vary in size. If it's a triangular face, expect three values; a quad gives four, and so on up to any arbitrary n-gon. A list is declared in a line like this:
property list uint8 int32 vertex_index
This is a list called 'vertex_index'. It will always consist of an 8-bit unsigned integer (that's the uint8) that is the size N, followed by N 32-bit integers (that's the int32).
In the example line this shows up right away:
3 1247 1257 1279
This says three values are coming, and then it gives you those three vertex indices.
Now the second list is where the texture coordinates should be:
property list uint8 float texcoord
It's just like the first list in that the size comes first (as an unsigned byte) but this time it will be followed by a series of 32-bit floats instead of integers (makes sense for texture coordinates). The straightforward interpretation is that there will be a texture coordinate for each of the vertices listed in vertex_index. If we assume these are just 2d texture coordinates (a pretty safe assumption) we should expect to see the number 6 followed by 6 floating point values ... and we do:
6 0.09163 0.565323 0.109197 0.565733 0.10888 0.602539
These are the texture coordinates that correspond with the three vertices already listed.
Now, for a face, that should be it. I don't know what the rest of the line is. According to your header the rest of the file should be binary, so I don't know how you got it as a line of ASCII text, but the extra data on that line shouldn't be there (also according to the header, which fully defines a face).
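If you want to read such a face record yourself, a minimal sketch (assuming a little-endian machine, and assuming the face really only contains the two declared lists) could look like this; Face and readFace are just illustrative names:

#include <cstdint>
#include <cstdio>
#include <vector>

// Illustrative sketch: read one face record exactly as the header above
// declares it. Assumes the host is little-endian, matching
// "format binary_little_endian 1.0", and that no extra data follows.
struct Face {
    std::vector<int32_t> vertexIndex; // indices into the vertex element
    std::vector<float>   texcoord;    // u,v pairs, one pair per face corner
};

bool readFace(std::FILE* f, Face& out) {
    uint8_t n = 0;                    // list size of vertex_index (e.g. 3)
    if (std::fread(&n, 1, 1, f) != 1) return false;
    out.vertexIndex.resize(n);
    if (std::fread(out.vertexIndex.data(), sizeof(int32_t), n, f) != n) return false;

    uint8_t m = 0;                    // list size of texcoord (2 * n for 2D UVs, e.g. 6)
    if (std::fread(&m, 1, 1, f) != 1) return false;
    out.texcoord.resize(m);
    if (std::fread(out.texcoord.data(), sizeof(float), m, f) != m) return false;
    return true;
}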

Let me add to @OllieBrown's answer, as further info for anyone coming across this, that the format above uses per-face texture coordinates, also called wedge UVs. What this means is that if you are sharing vertices, a shared vertex (basically a vertex index used by multiple adjacent triangles) might have different UVs depending on the triangle it takes part in. That usually happens when a vertex lies on a UV seam or where the UVs meet the texture borders. Typically that means duplicating vertices, since GPUs require per-vertex attributes. So a shared vertex ends up as X vertices overlapping in space (where X is the number of triangles it is shared by), each with different UVs based on the triangle it takes part in. One advantage of keeping data like that on disk is that, for a text format, it reduces the amount of text you need and therefore the file size. OBJ has that as well, although it keeps a flat UV array and uses indexing into that array instead, regardless of whether the UVs are per-vertex or per-face.
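If you need to feed such data to a GPU, a rough sketch of the duplication step (the type and function names are just illustrative) could look like this:

#include <array>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Illustrative sketch: convert per-face (wedge) UVs into per-vertex data by
// duplicating any shared vertex that is used with more than one UV.
struct MeshIn {
    std::vector<std::array<float, 3>>   positions;    // one entry per original vertex
    std::vector<std::array<int32_t, 3>> triangles;    // vertex indices per triangle
    std::vector<std::array<float, 6>>   triangleUVs;  // u,v for each of the 3 corners
};

struct MeshOut {
    std::vector<std::array<float, 3>>   positions;    // possibly duplicated vertices
    std::vector<std::array<float, 2>>   uvs;           // one UV per output vertex
    std::vector<std::array<int32_t, 3>> triangles;    // remapped indices
};

MeshOut splitWedges(const MeshIn& in) {
    MeshOut out;
    std::map<std::pair<int32_t, std::pair<float, float>>, int32_t> cache;
    for (size_t t = 0; t < in.triangles.size(); ++t) {
        std::array<int32_t, 3> tri;
        for (int c = 0; c < 3; ++c) {
            int32_t v = in.triangles[t][c];
            float u = in.triangleUVs[t][2 * c];
            float w = in.triangleUVs[t][2 * c + 1];
            auto key = std::make_pair(v, std::make_pair(u, w));
            auto it = cache.find(key);
            if (it == cache.end()) {   // first time this (vertex, UV) pair is seen
                int32_t idx = static_cast<int32_t>(out.positions.size());
                out.positions.push_back(in.positions[v]);
                out.uvs.push_back({u, w});
                it = cache.emplace(key, idx).first;
            }
            tri[c] = it->second;
        }
        out.triangles.push_back(tri);
    }
    return out;
}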
I also can't figure out what the 6 9 <9*0.992157> part is (although the 9 part seems like 3 vector3s which have the same value for all 3 axes), but Paul Bourke's code here has this description of the setup_other_props function:
/******************************************************************************
Make ready for "other" properties of an element-- those properties that
the user has not explicitly asked for, but that are to be stashed away
in a special structure to be carried along with the element's other
information.
Entry:
plyfile - file identifier
elem - element for which we want to save away other properties
******************************************************************************/
void setup_other_props(PlyFile *plyfile, PlyElement *elem)
From what I understand, it's possible to keep extra data per element that the reading application has not explicitly asked for. These data are supposed to be kept and stored, but not interpreted for use in every application. Bourke's description of the format talks about backwards compatibility with older software, so this might be a case of a custom extension that only some applications understand, but the extra info shouldn't prevent an older application that doesn't need it from understanding and/or rendering the content.

Relative risk estimation in spatstat

I am running into problems when computing the relative risk estimation (relrisk.ppp) of two point patterns: One with four marks in a rectangular region and the other with two marks in a circular region.
For the first pattern with four marks, I am able to get the relative risk, and the resulting object is a large imlist with 4 elements corresponding to each mark.
However, for the second pattern, it gives a list of 10 elements, of which the first matrix v is empty with NA entries. I am racking my brain over what could possibly be wrong when the created point pattern objects seem to be identical. Any help will be appreciated. Thanks.
For your first dataset, the result is a list of image objects (a list of four objects of class im). For your second dataset, the result of relrisk.ppp is a single image (object of class im). This is the default behaviour when there are only two possible types of points (two possible mark values). See help(relrisk.ppp).
In all cases, you should just be able to plot and print the resulting object. You don't need to examine the internal data of the image.
More explanation: when there are only two possible types of points, the default behaviour of relrisk.ppp is to treat them as case-control data, where the points belonging to the first type are treated as controls (e.g. non-infected people), and the points of the second type are treated as cases (e.g. infected people). The ratio of intensities (cases divided by controls) is estimated as an image.
If you don't want this to happen, set the argument casecontrol=FALSE and then relrisk.ppp will always return a list of images, with one image for each possible mark. Each image gives the spatially-varying probability of that type of point.
It's all explained in help(relrisk.ppp) or in the book.

Why is a normal vector necessary for STL files?

STL is the most popular 3d model file format for 3d printing. It records the triangular surfaces that make up a 3d shape.
I read the specification of the STL file format. It is a rather simple format. Each triangle is represented by 12 floating point numbers. The first 3 define the normal vector, and the next 9 define the three vertices. But here's one question. Three vertices are sufficient to define a triangle. The normal vector can be computed by taking the cross product of two vectors (each pointing from one vertex to another).
I know that a normal vector can be useful in rendering, and by including a normal vector, the program doesn't have to compute the normal vectors every time it loads the same model. But I wonder what would happen if the creation software included wrong normal vectors on purpose? Would it produce wrong results in the rendering software?
On the other hand, the 3 vertices say everything about a triangle. Including normal vectors allows logical conflicts in the information and increases the file size by 33%. Normal vectors can be computed by the rendering software in a reasonable amount of time if necessary. So why should the format include them? The format was created in 1987 for stereolithographic 3D printing. Was computing normal vectors too costly for computers back then?
I read in a thread that Autodesk Meshmixer would disregard the normal vector and build triangles from the vertices alone. Providing a wrong normal vector doesn't seem to change the result.
Why do Stereolithography (.STL) files require each triangle to have a normal vector?
At least when using Cura to slice a model, the direction of the surface normal can make a difference. I have regularly run into STL files that look just fine when rendered as solid objects in any viewer, but because some faces have the wrong direction of the surface normal, the slicer "thinks" that a region (typically concave) which should be empty is part of the interior, and the slicer creates a "top layer" covering up the details of the concave region. (And this was with an STL exported from a Meshmixer file that was imported from some SketchUp source.)
FWIW, Meshmixer has a FlipSurfaceNormals tool to help deal with this.
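For reference, here is a sketch of one binary STL triangle record and of recomputing its normal from the vertices (assuming counter-clockwise winding gives the outward direction). The struct mirrors the 50-byte record on disk, but a compiler may pad it, so read the fields individually:

#include <cmath>
#include <cstdint>

// One triangle record as laid out in a binary STL file: 12 floats plus a
// 16-bit attribute byte count (50 bytes on disk).
struct StlTriangle {
    float normal[3];
    float v[3][3];               // three vertices, x/y/z each
    uint16_t attributeByteCount;
};

// Overwrite the stored normal with one computed from the vertices.
void recomputeNormal(StlTriangle& t) {
    float a[3], b[3];
    for (int i = 0; i < 3; ++i) {
        a[i] = t.v[1][i] - t.v[0][i];   // edge v0 -> v1
        b[i] = t.v[2][i] - t.v[0][i];   // edge v0 -> v2
    }
    float n[3] = {
        a[1] * b[2] - a[2] * b[1],      // cross product a x b
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    };
    float len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    if (len > 0.0f)
        for (int i = 0; i < 3; ++i) t.normal[i] = n[i] / len;
}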

Extracting relevant information from .obj 3d file

I have generated a .obj file from a scan of a 3d scanner. However, I am not sure how to interpret all this data. I have looked on Wikipedia and understood the general structure of the .obj file. My goal is to extract some information about the colour and I am not sure how to do that. What do the numbers in the vt line represent and how can I use those to come up with a colour? My end objective is to scan a foot and cancel out the floor "portion" of the scan. When scanning the foot, the floor is also part of the scan and I would like to disregard the floor and concentrate on the foot. Here is a small part of the .obj file:
Looks like the Wavefront OBJ ASCII file format ... so google a bit and you will find tons of descriptions. In your example:
v x y z means point coordinate (vertex) [x,y,z]
vn nx,ny,nz means normal vector (nx,ny,nz) of last point [x,y,z]
vt tx,ty means texture coordinate [tx,ty]
Vertices are the points of the polygonal mesh. Normals are used for lighting computations (shading), so if you do not use them you can skip them. The color is stored in some texture image and you pick it as the pixel at [tx,ty]; the range is tx,ty = <-1,+1> or <0,+1>, so you need to rescale to the image resolution.
So you need to read all this data to some table and then find section with faces (starts with f):
f v1,v2,v3 means render a polygon with 3 vertices, where v1,v2,v3 are indices of vertices from the table. Beware that the indexing starts from 1, so for C++-style arrays you need to decrement the indices by 1.
There are a lot of deviations, so without an example it is hard to elaborate further (your example shows only the start of the vertex table).
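A rough sketch of reading the tables (simplified to plain "f v1 v2 v3" faces; the "f v/vt/vn" variants are left out for brevity, and scan.obj is just a placeholder name):

#include <array>
#include <cstdio>
#include <vector>

int main() {
    std::vector<std::array<float, 3>> vertices;   // v lines
    std::vector<std::array<float, 2>> texcoords;  // vt lines
    std::vector<std::array<int, 3>>   faces;      // f lines (triangles only here)

    std::FILE* f = std::fopen("scan.obj", "r");
    if (!f) return 1;
    char line[512];
    while (std::fgets(line, sizeof(line), f)) {
        float x, y, z;
        int a, b, c;
        if (std::sscanf(line, "v %f %f %f", &x, &y, &z) == 3)
            vertices.push_back({x, y, z});
        else if (std::sscanf(line, "vt %f %f", &x, &y) == 2)
            texcoords.push_back({x, y});            // scale by image width/height to get a pixel
        else if (std::sscanf(line, "f %d %d %d", &a, &b, &c) == 3)
            faces.push_back({a - 1, b - 1, c - 1}); // OBJ indices are 1-based
    }
    std::fclose(f);
    return 0;
}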

How to create holes in objects without modifying the mesh structure in WebGL?

I'm new to WebGL and for an assignment I'm trying to write a function which takes as argument an object, let's say "objectA". ObjectA will not be rendered but if it overlaps with another object in the scene, let’s say “objectB”, the part of objectB which is inside objectA will disappear. So the effect is that there is a hole in ObjectB without modifying its mesh structure.
I've managed to let it work on my own render engine, based on ray tracing, which gives the following effect:
Image: initial scene.
Image: the same scene with objectA removed.
In the first image, the green sphere is "objectA" and the blue cube is "objectB".
So now I'm trying to program it in WebGL, but I'm a bit stuck. Because WebGL is based on rasterization rather than ray tracing, it has to be calculated in another way. A possibility could be to modify the Z-buffer algorithm, where the fragments with a z-value lying inside objectA will be ignored.
The algorithm that I have in mind works as follows: normally only the fragment with the smallest z-value will be stored at a particular pixel containing the colour and z-value. A first modification is that at a particular pixel, a list of all fragments belonging to that pixel is maintained. No fragments will be discarded. Secondly per fragment an extra parameter is stored containing the object where it belongs to. Next the fragments are sorted in increasing order according to their z-value.
Then, if the first fragment belongs to objectA, it will be ignored. If the next one belongs to objectB, it will be ignored as well. If the third one belongs to objectA and the fourth one to objectB, the fourth one will be chosen because it lies outside objectA.
So the first fragment belonging to objectB will be chosen with the constraint that the amount of previous fragments belonging to objectA is even. If it is uneven, the fragment will lie inside objectA and will be ignored.
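To make the rule concrete, here is the per-pixel selection I have in mind, sketched on the CPU in plain C++ (the names are only illustrative, this is not WebGL code):

#include <algorithm>
#include <vector>

// Given every fragment that landed on one pixel, pick the first objectB
// fragment that is preceded by an even number of objectA fragments.
enum class Owner { ObjectA, ObjectB };

struct Fragment {
    float z;      // depth value
    Owner owner;  // which object produced this fragment
};

// Returns the index of the visible objectB fragment, or -1 if the pixel
// shows only the hole.
int pickVisibleFragment(std::vector<Fragment> frags) {
    std::sort(frags.begin(), frags.end(),
              [](const Fragment& a, const Fragment& b) { return a.z < b.z; });
    int aCount = 0;                      // objectA fragments seen so far
    for (size_t i = 0; i < frags.size(); ++i) {
        if (frags[i].owner == Owner::ObjectA)
            ++aCount;
        else if (aCount % 2 == 0)        // even count: we are outside objectA
            return static_cast<int>(i);
    }
    return -1;
}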
Is this somehow possible in WebGL? I've also tried to implement it via a stencil buffer, based on this blog:
WebGL : How do make part of an object transparent?
But this is written for OpenGL. I translated the code to WebGL, but it didn't work at all. I'm also not sure whether it will work with a 3D object instead of a 2D triangle.
Thanks a lot in advance!
Why not write a raytracer inside the fragment shader (aka pixel shader)?
So you would need to render a fullscreen quad (two triangles) and then the fragment shader would be responsible for raytracing. There are plenty of resources to read/learn from.
These links might be useful:
Distance functions - by iq
How shadertoy works
Simple webgl raytracer
EDIT:
Raytracing and SDFs (signed distance functions, the usual basis for constructive solid geometry, CSG) are a good way to handle what you need, and this is how intersecting objects is generally achieved. Intersections, and boolean operators in general, for mesh geometry (i.e. made of polygons) are not done during rendering; rather, special algorithms do all the processing ahead of rendering, so the resulting mesh actually exists in memory and its topology is actually calculated and then just rendered.
Depending on the specific scenario that you have, you might be able to achieve the effect under some requirements and restrictions.
There are a few important things to take into account: depth peeling (i.e. storing depth values of multiple fragments per single pixel), triangle winding order (CW or CCW) and polygon face orientation (front-facing or back-facing).
Say, for example, that both of your objects are convex. Then rendering the back-facing polygons of ObjectA, then of ObjectB, then the front-facing polygons of A, then of B, might achieve the desired effect (I'm not including full calculations for all cases of overlaps that can exist).
Under some other sets of restrictions you might be able to achieve the effect.
In the specific example in your question, the first image shows the front-facing faces of the cube, and in the second image you can see the back face of the cube. That already implies that you have at least two depth values per pixel stored somehow.
There is also a distinction between intersecting in screen space, with volumes, or with faces. Your example works with faces and is the hardest. There are two cases: the one you've shown, where mesh A's pixels that are inside mesh B are simply discarded (i.e. you drilled a hole in its surface), and the case where you do a boolean operation and never put a hole in the surface, only in the volume. This is usually done with an algorithm that computes an output mesh. SDFs are great for volumes. Screen space is handled by simply using the depth test to discard some fragments.
Again, too many scenarios and depends on what you're trying to achieve and what are the constraints that you're working with.
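If you go the SDF route, the carve-out itself is just a min/max trick. Here is a small C++ sketch of the distance functions (a GLSL fragment shader version is a direct transliteration; the shapes and sizes are made-up example values):

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Signed distance to a sphere with centre c and radius r.
float sdSphere(Vec3 p, Vec3 c, float r) {
    return length({p.x - c.x, p.y - c.y, p.z - c.z}) - r;
}

// Signed distance to an axis-aligned box at the origin with half-extents b.
float sdBox(Vec3 p, Vec3 b) {
    Vec3 q  = {std::fabs(p.x) - b.x, std::fabs(p.y) - b.y, std::fabs(p.z) - b.z};
    Vec3 qc = {std::max(q.x, 0.0f), std::max(q.y, 0.0f), std::max(q.z, 0.0f)};
    return length(qc) + std::min(std::max(q.x, std::max(q.y, q.z)), 0.0f);
}

// CSG difference: the cube (objectB) with the sphere (objectA) carved out.
float sdCubeMinusSphere(Vec3 p) {
    float dCube   = sdBox(p, {1.0f, 1.0f, 1.0f});
    float dSphere = sdSphere(p, {1.0f, 0.0f, 0.0f}, 0.75f);
    return std::max(dCube, -dSphere);   // subtraction: max(dB, -dA)
}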

C/C++ Delaunay lightweight library that preserves input order

Unfortunately I cannot find a C++ (or C or C#) library for performing Delaunay triangulations on a set of points (2D or 2.5D) which is able to deliver the output in an input-aware manner.
That is, given a set of points P_1, P_2, .. P_N, the output should consist of a set of triplets (a triangle soup) (i_a, i_b, i_c), where i_a, i_b and i_c are the indices of the P_i points (hence numbers between 1 and N). I've tried Fade2D, but I've found it very wasteful in terms of how it handles input (one has to pack vertices in its own point2d structure), and the output disregards whatever indexing the input had, delivering a set of coordinates together with another ordering of these vertices.
I'm the author of Fade2D, and this is a late answer; I was not aware of your question. You do not need to pack your coordinates into the Point2 class before you insert them. There is also an insert method that takes an array of coordinates:
void Fade2D::insert(int numPoints,double * aCoordinates,Point2 ** aHandles);
This method takes an array of coordinates (x0,y0,x1,y1,...,xn,yn) and returns a vector of Point2* pointers that has exactly the same order. That's virtually no overhead. For your convenience, you can use
Point2::setCustomIndex() and
Point2::getCustomIndex()
to store and retrieve your own indices.
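A usage sketch based only on the calls named above (the header name, parameter types and exact class spelling may differ slightly in the actual Fade2D headers):

#include <vector>
#include "Fade2D.h"   // assumption: adjust to the actual Fade2D header name

// xy holds the coordinates as x0,y0,x1,y1,...
void triangulateKeepingInputOrder(std::vector<double>& xy) {
    Fade2D dt;
    int numPoints = static_cast<int>(xy.size() / 2);
    std::vector<Point2*> handles(numPoints);
    dt.insert(numPoints, xy.data(), handles.data());

    // handles[i] corresponds to the i-th input point, so the original index
    // can be attached to each vertex and read back later from any triangle
    // the library reports.
    for (int i = 0; i < numPoints; ++i)
        handles[i]->setCustomIndex(i);
}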
