texture coordinates for point in polygon - graphics

I have a polygon, where each vertex has texture coordinates. How can I calculate the texture coordinates for any point lying inside the polygon? (I know the point's coordinates.)
Thanks
p.s. sorry for my english...

If you don't care about perspective then it's simple enough: find the barycentric coordinates of the point and do a linear interpolation of the vertices' texture coordinates. Here's a nice tutorial about this
If, however, you're trying to do this in a scene with a perspective projection, you'll probably want to worry about perspective correction as well. Here's some reading about this issue
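For a concrete picture, here is a minimal sketch of the barycentric approach (plain C++, no graphics API; the struct and function names are just illustrative). It assumes the polygon has already been split into triangles and that the point p lies inside triangle (a, b, c); perspective correction is left out.

    #include <cstdio>

    struct Vec2 { float x, y; };

    // Signed doubled area of triangle (a, b, c).
    static float cross(Vec2 a, Vec2 b, Vec2 c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    // Interpolated texture coordinate of p inside triangle abc,
    // given per-vertex texture coordinates ta, tb, tc.
    Vec2 interpolateUV(Vec2 p, Vec2 a, Vec2 b, Vec2 c, Vec2 ta, Vec2 tb, Vec2 tc) {
        float area = cross(a, b, c);
        float wA = cross(b, c, p) / area;   // barycentric weight of vertex a
        float wB = cross(c, a, p) / area;   // barycentric weight of vertex b
        float wC = 1.0f - wA - wB;          // weights sum to 1
        return { wA * ta.x + wB * tb.x + wC * tc.x,
                 wA * ta.y + wB * tb.y + wC * tc.y };
    }

    int main() {
        Vec2 uv = interpolateUV({0.25f, 0.25f},
                                {0, 0}, {1, 0}, {0, 1},   // triangle vertices
                                {0, 0}, {1, 0}, {0, 1});  // their texture coordinates
        std::printf("u=%f v=%f\n", uv.x, uv.y);           // prints u=0.250000 v=0.250000
    }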

Related

Circle triangulation

I am currently experimenting with OpenGL, and I'm drawing a lot of circles that I have to break down into triangles (triangulate a circle).
I've been calculating the vertices of the triangles by incrementing an angle and using cos() and sin() to get the x and y values for each vertex.
I searched a bit on the internet for the best and most efficient way of doing this, and even though there's not much information available, I realized that long, thin triangles (my approach) are not very good. A better approach is to start with an equilateral triangle and then repeatedly add triangles that cover the largest possible area that's not yet covered.
(Image: left - my method; right - new method)
I am wondering if this is the most efficient way of doing this, and if so, how it would be implemented in actual code.
The website where I found the method: link
Both triangulations have their pros and cons.
The triangle fan has equally sized triangles, which sometimes looks better with textures (and other interpolated values), and the code to generate it is a simple for loop using the parametric circle equation.
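A small sketch of that loop (plain C++, producing a vertex list suitable for a GL_TRIANGLE_FAN; the names are illustrative):

    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };

    // n = number of segments around the circle (n >= 3).
    std::vector<Vec2> circleFan(float cx, float cy, float r, int n) {
        std::vector<Vec2> v;
        v.push_back({cx, cy});                      // fan center
        for (int i = 0; i <= n; ++i) {              // <= n closes the fan
            float t = 2.0f * 3.14159265f * i / n;
            v.push_back({cx + r * std::cos(t), cy + r * std::sin(t)});
        }
        return v;                                   // n + 2 vertices in total
    }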
The increasing-detail mesh has fewer triangles and can easily support LOD, which might be faster. However, the number of points is not arbitrary (3, 6, 12, 24, 48, ...). The code is slightly more complicated; you could:
1. Start with an equilateral triangle, remembering the circumference edges: add triangle (p0,p1,p2) to the mesh and edges (p0,p1),(p1,p2),(p2,p0) to the circumference.
2. For each edge (p0,p1) of the circumference compute:
p2 = 0.5*(p0+p1); // mid point
p2 = r*p2/|p2|; // normalize it to the circle circumference, assuming (0,0) is the center
then add triangle (p0,p1,p2) to the mesh and replace the (p0,p1) edge with the (p0,p2),(p2,p1) edges. Note that r/|p2| is the same for all edges at the current detail level, so there is no need to compute the expensive |p2| over and over again.
3. Go to step #2 until the triangulation is dense enough.
So: two for loops and a few dynamic lists (points, triangles, circumference edges, and some temporaries if not doing this in place). Also, this method does not need goniometrics at all (and can be modified to generate the triangle fan too); a code sketch of these steps follows below.
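A minimal sketch of those steps (plain C++; it assumes the circle is centered at (0,0) with radius r, and the names are illustrative):

    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };
    struct Tri  { int a, b, c; };   // indices into the point list
    struct Edge { int a, b; };      // circumference (rim) edge

    void triangulateCircle(float r, int levels,
                           std::vector<Vec2>& pts, std::vector<Tri>& tris) {
        // step 1: equilateral triangle inscribed in the circle (no trig needed)
        float h = 0.5f * std::sqrt(3.0f) * r;
        pts.push_back({ r, 0.0f });
        pts.push_back({ -0.5f * r,  h });
        pts.push_back({ -0.5f * r, -h });
        tris.push_back({0, 1, 2});
        std::vector<Edge> rim = {{0, 1}, {1, 2}, {2, 0}};

        // steps 2-3: split every rim edge, pushing the midpoint out to the circle
        for (int lvl = 0; lvl < levels; ++lvl) {
            std::vector<Edge> nextRim;
            for (const Edge& e : rim) {
                Vec2 m = {0.5f * (pts[e.a].x + pts[e.b].x),
                          0.5f * (pts[e.a].y + pts[e.b].y)};
                // r/|m| is the same for every edge at one detail level and could be
                // hoisted out of this loop; kept inline here for clarity
                float s = r / std::sqrt(m.x * m.x + m.y * m.y);
                int p2 = (int)pts.size();
                pts.push_back({m.x * s, m.y * s});
                tris.push_back({e.a, e.b, p2});
                nextRim.push_back({e.a, p2});
                nextRim.push_back({p2, e.b});
            }
            rim = nextRim;
        }
    }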
Here's similar stuff:
sphere triangulation using a similar technique

Sphere and nonuniform object intersection

I have two objects: a sphere and an object. It's an object that I created using surface reconstruction, so we do not know its equation. I want to know the intersection points on the sphere when the object and the sphere intersect. If we had a sphere and a cylinder, we could solve the equations and figure out the area and all that, but the problem here is that the object is not uniform.
Is there a way to find out the intersecting points or area on the sphere?
I'd start by finding the intersection of triangles with the sphere. First find the intersection of each triangle's plane and the sphere, which gives a circle. Then find the circle's intersection/s with the triangle edges in 2D using line/circle tests. The result will be many arcs which I guess you could approximate with lines. I'm not really sure where to go from here without knowing the end goal.
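For the first step, a small sketch of the plane/sphere circle (plain C++; it assumes the sphere is given by center c and radius r, and the triangle's plane by a unit normal n and a point q on it; names are illustrative):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Returns false if the plane misses the sphere; n must be unit length.
    bool planeSphereCircle(Vec3 n, Vec3 q, Vec3 c, float r,
                           Vec3& circleCenter, float& circleRadius) {
        float h = dot(n, {c.x - q.x, c.y - q.y, c.z - q.z}); // signed distance, center to plane
        if (std::fabs(h) > r) return false;                  // no intersection
        circleCenter = {c.x - h * n.x, c.y - h * n.y, c.z - h * n.z};
        circleRadius = std::sqrt(r * r - h * h);
        return true;
    }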
If it's surface area you're after, maybe a numerical approach would be better. I'd cover the sphere in points and count the number inside the non-uniform object. To find if a point is inside, maybe trace outwards and count the intersections with the surface (if it's odd, the point is inside). You could use the stencil buffer for this if you wanted (similar to stencil shadows).
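A sketch of that numerical idea (plain C++; it assumes the reconstructed object is a closed triangle mesh, and all names are illustrative): scatter points uniformly over the sphere, test each one against the mesh by counting ray crossings (an odd count means inside), and scale the inside fraction by the sphere's surface area.

    #include <array>
    #include <cmath>
    #include <random>
    #include <vector>

    struct Vec3 { float x, y, z; };
    static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  cross(Vec3 a, Vec3 b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }

    // Moller-Trumbore: does the ray (o, d) hit triangle (v0, v1, v2) in front of o?
    static bool rayHitsTriangle(Vec3 o, Vec3 d, Vec3 v0, Vec3 v1, Vec3 v2) {
        const float EPS = 1e-7f;
        Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0), p = cross(d, e2);
        float det = dot(e1, p);
        if (std::fabs(det) < EPS) return false;     // ray parallel to triangle
        float inv = 1.0f / det;
        Vec3 s = sub(o, v0);
        float u = dot(s, p) * inv;
        if (u < 0.0f || u > 1.0f) return false;
        Vec3 q = cross(s, e1);
        float v = dot(d, q) * inv;
        if (v < 0.0f || u + v > 1.0f) return false;
        return dot(e2, q) * inv > EPS;              // hit must lie in front of o
    }

    // Estimate the sphere surface area lying inside the mesh.
    float sphereAreaInsideMesh(const std::vector<std::array<Vec3, 3>>& mesh,
                               Vec3 center, float radius, int samples) {
        std::mt19937 rng(12345);
        std::normal_distribution<float> g(0.0f, 1.0f);
        int inside = 0;
        for (int i = 0; i < samples; ++i) {
            Vec3 d = {g(rng), g(rng), g(rng)};                 // uniform random direction
            float len = std::sqrt(dot(d, d));
            d = {d.x / len, d.y / len, d.z / len};
            Vec3 p = {center.x + radius * d.x,
                      center.y + radius * d.y,
                      center.z + radius * d.z};                // sample point on the sphere
            int crossings = 0;
            for (const auto& t : mesh)
                if (rayHitsTriangle(p, d, t[0], t[1], t[2])) ++crossings;
            if (crossings % 2 == 1) ++inside;                  // odd => inside the mesh
        }
        return 4.0f * 3.14159265f * radius * radius * inside / (float)samples;
    }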
If you want the volume of the intersection, a quick Google search gives "carve", a mesh-based CSG library.
Starting with triangles versus the sphere will give you the points of intersection.
You can take the arcs of intersection with each surface and combine them to make fences around the sphere. Ideally your reconstructed object will be in winged-edge format so you could just step from one fence segment to the next, but with reconstructed surfaces I guess you might need to apply some slightly fuzzy logic.
You can determine which side of each fence is inside the reconstructed object and which side is out by factoring in the surface normals along the fence.
You can then cut the sphere along the fences and add the internal bits to the display.
For the other side of things you could remove any triangle completely inside the sphere and cut those that intersect.

How to draw a cylinder geometry using webgl?

I need to draw a cylinder geometry using WebGL, but I don't know how to do it. The parameters would be the radius, the number of subdivisions, and the two central points of the end faces. Any ideas will be appreciated, thanks~
Fundamentally, you will build it with triangles. It would be easiest to think of it more as an "n-sided" prism. The top and bottom faces will need to be made of triangle "fans", where each triangle shares one point in the center.
You will need to use simple math (including trigonometry) to calculate the locations of the points for each triangle.
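As a sketch of that math (shown in C++ for brevity; the same arithmetic ports directly to JavaScript for WebGL, and the names are illustrative), here is an n-sided prism built between two cap centers, assumed here to lie on the y axis. Side faces are quads split into two triangles and the caps are triangle fans; normals and consistent winding are left out.

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    void buildCylinder(float radius, float y0, float y1, int n,
                       std::vector<Vec3>& verts, std::vector<int>& indices) {
        // ring vertices: i-th bottom vertex = 2*i, i-th top vertex = 2*i + 1
        for (int i = 0; i < n; ++i) {
            float t = 2.0f * 3.14159265f * i / n;
            float x = radius * std::cos(t), z = radius * std::sin(t);
            verts.push_back({x, y0, z});
            verts.push_back({x, y1, z});
        }
        int bottomCenter = (int)verts.size(); verts.push_back({0.0f, y0, 0.0f});
        int topCenter    = (int)verts.size(); verts.push_back({0.0f, y1, 0.0f});

        for (int i = 0; i < n; ++i) {
            int j = (i + 1) % n;
            int b0 = 2 * i, t0 = 2 * i + 1, b1 = 2 * j, t1 = 2 * j + 1;
            indices.insert(indices.end(), {b0, t0, b1,  b1, t0, t1}); // side quad
            indices.insert(indices.end(), {bottomCenter, b0, b1});    // bottom cap slice
            indices.insert(indices.end(), {topCenter, t1, t0});       // top cap slice
        }
    }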
If you don't know how to draw triangles with WebGL, check out NeHe's excellent WebGL guide at learningwebgl.com.

How to draw the heightmap onto the screen?

I'm using DirectX 10 to simulate a water surface, and I now have a height map, which is a 2D array of the heights (y) at the points (x, z). But to draw it on the screen, I must turn it into a mesh or have an index buffer to draw a triangle topology.
But the data is too large to do this manually. Are there any methods for drawing it on the screen? I hope it's easy to implement. If there is a function included in DirectX 10 that can do it, that would be best for me.
Create a mesh that forms a grid of squares (each made of two triangles) and set all vertices' y to 0. In the vertex shader, sample the heightmap and add the stored value to the vertex's y.
This might help you.
P.S.: If the area you want it to cover is too big, you should take a look at terrain LOD techniques (they should work the same for water).
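A minimal sketch of the flat grid described in the answer above (plain C++; illustrative names). It builds the w-by-h vertices at y = 0 and an index buffer with two triangles per grid cell; the heightmap sampling then happens in the vertex shader as described.

    #include <vector>

    struct Vertex { float x, y, z; };

    void buildGrid(int w, int h, float spacing,
                   std::vector<Vertex>& verts, std::vector<unsigned>& indices) {
        for (int z = 0; z < h; ++z)
            for (int x = 0; x < w; ++x)
                verts.push_back({x * spacing, 0.0f, z * spacing});   // flat grid, y = 0

        for (int z = 0; z < h - 1; ++z)
            for (int x = 0; x < w - 1; ++x) {
                unsigned i = z * w + x;                              // corner of this cell
                indices.insert(indices.end(), {i, i + w, i + 1});         // triangle 1
                indices.insert(indices.end(), {i + 1, i + w, i + w + 1}); // triangle 2
            }
    }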
I'm sure you can make a mesh out of it. I doubt you can generate the heightmap for a water surface that is too large to "meshify".
Why are you looking at Diamond Square? For a 512x512 heightmap, all you need to do is define a set of points and then generate the triangles for it. It's really very simple.

Minimize Polygon Vertices

What is a good algorithm for reducing the number of vertices in a polygon without changing the way it looks very much?
Input: A polygon, represented as a list of points, with way too many vertices: raw input from the mouse, for example.
Output: A polygon with far fewer vertices that still looks a lot like the original: something usable for collision detection, for example (not necessarily convex).
Edit: The solution to this would be similar to finding a multi-segmented line of best fit on a graph. It's called Segmented Least Squares in my algorithms book.
Edit2: The Douglas-Peucker algorithm is what I really want.
Edit: Oh look, Simplifying Polygons
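A compact sketch of the Douglas-Peucker simplification mentioned in the edits (plain C++; illustrative names). Keep the two endpoints, find the point farthest from the segment joining them, and recurse on both halves only if that distance exceeds a tolerance:

    #include <cmath>
    #include <vector>

    struct Pt { double x, y; };

    static double distToSegment(Pt p, Pt a, Pt b) {
        double dx = b.x - a.x, dy = b.y - a.y;
        double len2 = dx * dx + dy * dy;
        double t = len2 > 0 ? ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2 : 0;
        t = t < 0 ? 0 : (t > 1 ? 1 : t);
        double px = a.x + t * dx - p.x, py = a.y + t * dy - p.y;
        return std::sqrt(px * px + py * py);
    }

    static void dpRecurse(const std::vector<Pt>& pts, size_t lo, size_t hi,
                          double tol, std::vector<Pt>& out) {
        double maxDist = 0; size_t idx = lo;
        for (size_t i = lo + 1; i < hi; ++i) {
            double d = distToSegment(pts[i], pts[lo], pts[hi]);
            if (d > maxDist) { maxDist = d; idx = i; }
        }
        if (maxDist > tol) {               // keep the farthest point, split there
            dpRecurse(pts, lo, idx, tol, out);
            dpRecurse(pts, idx, hi, tol, out);
        } else {
            out.push_back(pts[hi]);        // drop everything between lo and hi
        }
    }

    // Simplify an open polyline; for a closed polygon, split it at two far-apart
    // vertices and simplify each half.
    std::vector<Pt> simplify(const std::vector<Pt>& pts, double tol) {
        std::vector<Pt> out = {pts.front()};
        dpRecurse(pts, 0, pts.size() - 1, tol, out);
        return out;
    }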
You mentioned collision detection. You could go really simple and calculate a bounding convex hull around it.
If you care about the concave areas, you can calculate a concave hull by taking the centroid of your polygon and choosing a point to start. From the starting point, rotate around the centroid, finding each vertex you want to keep and assigning it as the next vertex of the bounding hull. The complexity of the algorithm comes from how you determine which vertices to keep, but I'm sure you've thought of that already. You can throw all your vertices into buckets based on their location relative to the centroid. When a bucket gets more than an arbitrary number of vertices, you can split it. Then take the mean of the vertices in that bucket as the vertex to use in your bounding hull. Or, forget the buckets, and when you're moving around the centroid, only choose a point if it's more than a given distance from the last point.
Actually, you could probably just use all the vertices in your polygon as "cloud of points" and calculate the concave hull around that. I'll look for an algorithm link. Worst case on this would be a completely convex polygon.
Another alternative is to start with a bounding rectangle. For each vertex on the rectangle, find the distance from the point to the polygon. For the farthest vertex, split it into two vertices and move them inward a bit. Repeat until some threshold on either the number of vertices or the area is met. I'd have to think about the details of this one a little more.
If you care about the polygon actually looking similar, even in the case of a self-intersecting polygon, then another approach would be required, but it doesn't sound like that's necessary since you asked about collision detection.
This post has some details about the convex hull part.
There's a lot of material out there. Just google for things like "mesh reduction", "mesh simplification", "mesh optimization", etc.
