Simple 3D interpolation, like sponge deformation or heat conduction - graphics

I am facing a problem for which I have no clue even what keyword to search for, so I am asking here in the hope of getting at least a keyword or tag.
The background is quite complex, but the result I want to achieve can be described with a simple scene.
Suppose I have a cube made of glass. The cube is full of sponge, and there is a person inside the sponge. Now the person does some movement or action, and of course the sponge is deformed. The person is described as a geometry: I know the person's original pose, which means I know the original geometry, and I also know the deformed geometry. I prefer to describe the sponge as points or a grid inside the cube. I know the finite element method can do this accurately, but is there any interpolation method to calculate where the sponge's points will end up?
I do not expect an accurate deformation. I just want some falloff that shows the pinch or stretch.
Any keywords are welcome. Thanks so much.

Because the structure of my scene is fixed, I chose simple KNN to implement this feature. Since the structure is fixed, I build a kd-tree once at the very beginning, then deform the other points based on their K nearest neighbors, roughly as sketched below.
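A minimal sketch of that idea, assuming SciPy's `cKDTree` and an inverse-distance falloff (both my own choices, not necessarily what was actually used):

```python
# Sketch of the KNN/kd-tree deformation described above (my own example).
# The kd-tree is built once over the person's rest-pose vertices; each sponge
# point is then displaced by an inverse-distance-weighted blend of its K
# nearest vertex displacements, giving a soft falloff rather than a physically
# accurate deformation.
import numpy as np
from scipy.spatial import cKDTree

def deform_sponge(sponge_pts, rest_verts, deformed_verts, k=8, eps=1e-8):
    """sponge_pts: (M,3) grid points; rest_verts/deformed_verts: (N,3) person mesh."""
    tree = cKDTree(rest_verts)                   # built once; the structure is fixed
    displacements = deformed_verts - rest_verts  # per-vertex motion of the person

    dists, idx = tree.query(sponge_pts, k=k)     # K nearest rest-pose vertices
    weights = 1.0 / (dists + eps)                # inverse-distance falloff
    weights /= weights.sum(axis=1, keepdims=True)

    # Weighted average of the neighbors' displacements, applied to each point.
    offsets = (weights[..., None] * displacements[idx]).sum(axis=1)
    return sponge_pts + offsets

# Tiny usage example with random data:
rest = np.random.rand(100, 3)
deformed = rest + 0.1 * np.random.rand(100, 3)
grid = np.random.rand(500, 3)
new_grid = deform_sponge(grid, rest, deformed)
```

Inverse-distance weighting is just one possible falloff here; any kernel that decays with distance would give a similar pinch/stretch look.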

Related

Coarsening a 2.5D triangulation

I have a 2D Delaunay triangulation where each vertex is labeled with an elevation. I now want to remove vertices from the triangulation without changing the shape much (analogous to Douglas-Peucker for polylines).
There are a lot of mesh-coarsening algorithms for 3D meshes, but isn't there something simpler for my task?
Do not remove points from your existing model. Instead, construct a second one: start with a few convex hull points and then refine the new model in a divide-and-conquer style until comparison with the original model shows that the specified error bound is met. I have implemented it like that in the Fade library and it works well. You can try my 2.5D Douglas-Peucker implementation if you want; the student license is free.
But the best possible output quality also requires that feature lines are detected, simplified, and preserved. This is more involved; I am working on that topic and hope to provide results soon.
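A rough sketch of that refine-until-within-tolerance idea, using SciPy for the triangulation and the linear interpolation (this is my own illustration, not the Fade implementation, and it naively rebuilds the interpolant each round instead of doing true divide and conquer):

```python
# Greedy refinement: start from the 2D convex hull, then repeatedly insert the
# original vertex with the largest vertical error until every vertex is
# approximated within the tolerance.
import numpy as np
from scipy.spatial import ConvexHull, Delaunay
from scipy.interpolate import LinearNDInterpolator

def simplify_25d(points_xy, elevations, tol):
    """points_xy: (N,2) array, elevations: (N,) array, tol: max vertical error."""
    selected = set(ConvexHull(points_xy).vertices)  # hull keeps all points inside
    while True:
        sel = sorted(selected)
        interp = LinearNDInterpolator(points_xy[sel], elevations[sel])
        errors = np.abs(interp(points_xy) - elevations)
        errors[sel] = 0.0                        # selected vertices are exact
        worst = int(np.nanargmax(errors))
        if errors[worst] <= tol:
            break
        selected.add(worst)                      # refine where the error is largest
    sel = sorted(selected)
    return points_xy[sel], elevations[sel], Delaunay(points_xy[sel])
```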

How to calculate a pixel's world space position on an image plane formed by a virtual camera?

First, the question Calculating camera ray direction to 3d world pixel helped me a bit in understanding what the virtual camera setup is like. I don't understand how the vectors work in this setup, and I thought normalized device coordinates had to be used, which led me to this page: http://www.scratchapixel.com/lessons/3d-basic-lessons/lesson-6-rays-cameras-and-images/building-primary-rays-and-rendering-an-image/. What I am trying to do is build a ray tracer and, as the question states, find a pixel's position in order to shoot out a ray. What I would really, really like is an actual example showing a virtual camera setup, a screen resolution, how to calculate a pixel's position, and then how to transform it to world space coordinates. Experts, thank you for your help! :D
Multiply a matrix by the coordinates. What matrix? There are lots of choices. For example, XNA uses a projection matrix, a view matrix, and a world matrix; applying all of them transforms pixel coordinates into world coordinates, or vice versa. Breaking it down this way helps you understand the different transformations that are going on, so you can construct the matrices more easily.
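A hedged, minimal example of that pixel-to-world transformation, written with explicit camera basis vectors instead of packed matrices (the names, resolution, and field of view below are my own, not XNA's API):

```python
# Build a simple look-at camera, convert a pixel index to normalized device
# coordinates, place it on an image plane one unit in front of the camera,
# and transform that point to world space to get a primary ray.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def pixel_to_world_ray(px, py, width, height, eye, target, up, fov_y_deg=60.0):
    # Camera basis (camera looks along 'forward'; image plane sits at distance 1).
    forward = normalize(np.asarray(target, float) - eye)
    right = normalize(np.cross(forward, up))
    true_up = np.cross(right, forward)

    # Pixel center -> NDC in [-1, 1], with y flipped so +y points up on screen.
    ndc_x = 2.0 * (px + 0.5) / width - 1.0
    ndc_y = 1.0 - 2.0 * (py + 0.5) / height

    # Half-extents of the image plane, from the vertical field of view.
    aspect = width / height
    half_h = np.tan(np.radians(fov_y_deg) / 2.0)
    half_w = half_h * aspect

    # World-space point on the image plane and the ray through it.
    point_on_plane = eye + forward + ndc_x * half_w * right + ndc_y * half_h * true_up
    direction = normalize(point_on_plane - eye)
    return point_on_plane, direction

eye = np.array([0.0, 0.0, 5.0])
plane_pt, ray_dir = pixel_to_world_ray(320, 240, 640, 480,
                                       eye, target=[0, 0, 0], up=[0, 1, 0])
print(plane_pt, ray_dir)
```

The same result falls out of multiplying NDC coordinates by the inverse of the projection and view matrices; the explicit basis form just makes the individual steps easier to see.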
Isn't that webpage already providing you with four pages of explanation on how these rays are built? It seems like you haven't made the effort to read the content of the link you are referring to. I would suggest you read it first, try to understand it, maybe look at the source code they provide, and come back with a real question about whatever you still don't understand.
It's all there, and I am not going to rewrite what these people have clearly put a lot of energy into explaining (nor should anybody else, really...).

Emulating a perspective rectangle in 2D

So, I'm currently developing a puzzle game of sorts, and I came upon something I'm not sure how to approach.
As you can see from the screenshot below, the text on the sides next to the main square is distorted along the diagonal of the quadrilateral. That is because this is not a screenshot of a 3D environment, but of a 2D environment where the squares have been stretched in such a way that it looks 3D.
I have tried using 3D perspective and changing depths, and while that solves the issue of the distorted sides, I was wondering whether it's possible to fix this without going to a 3D perspective, mainly because the current mesh transformation scheme took a while to get right, and converting it to something that works in 3D space is extra effort that might be avoidable.
I have a feeling this is unavoidable, but I'm curious if anyone knows a solution. I'm currently using OpenGL ES 1.
Probably not the answer you wanted, but I'd go with the 3D transformation, because it will save you not only this distortion but will also simplify many other things down the road and give you opportunities to do nice effects.
What you are lacking in this scene is "perspective-correct interpolation", which is slightly non-linear, and is done automatically when you provide coordinates with depth information.
It may be possible to emulate it another way (though your options are limited, since you do not have shaders available), but the alternatives will all likely be less efficient than using the dedicated functionality of your GPU. I recommend that you switch to using 3D coordinates.
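For intuition, here is a tiny, self-contained comparison (my own example) of plain linear interpolation against perspective-correct interpolation of a texture coordinate between two vertices whose clip-space w values differ:

```python
# Interpolating a texture coordinate u between two vertices with w values
# w0 and w1. Flat 2D geometry gives the affine result; with depth available,
# the GPU interpolates u/w and 1/w linearly and then divides.
def affine_interp(u0, u1, t):
    return (1.0 - t) * u0 + t * u1

def perspective_interp(u0, w0, u1, w1, t):
    num = (1.0 - t) * (u0 / w0) + t * (u1 / w1)
    den = (1.0 - t) * (1.0 / w0) + t * (1.0 / w1)
    return num / den

# Halfway along an edge whose far end is twice as distant (w1 = 2 * w0):
print(affine_interp(0.0, 1.0, 0.5))                  # 0.5   -> the "slidy" distortion
print(perspective_interp(0.0, 1.0, 1.0, 2.0, 0.5))   # 0.333 -> correct foreshortening
```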
Actually, I just found the answer. Turns out there's a Q coordinate which you can use to play around with trapezoidal texture distortion:
texture mapping a trapezoid with a square texture in OpenGL
http://www.xyzw.us/~cass/qcoord/
http://hacksoflife.blogspot.com.au/2008/08/perspective-correct-texturing-in-opengl.html
It looks like it won't be as correct as doing it in 3D, but I suppose it will be easier for my use case right now.
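As a hedged sketch of the q-coordinate trick from those links (the helper and numbers below are my own, not code from the links): scale each vertex's texture coordinates by a per-vertex q and submit them as 4-component texcoords, so the division by q after interpolation reintroduces the non-linear foreshortening.

```python
# For a trapezoid whose top edge is narrower than its bottom edge, vertices on
# the wide edge get q = 1 and vertices on the narrow edge get q = wide/narrow,
# mimicking the larger clip-space w of a farther edge. Texcoords are submitted
# as (s*q, t*q, 0, q), e.g. via glTexCoordPointer(4, ...) on OpenGL ES 1.
def trapezoid_texcoords(wide_width, narrow_width):
    q_wide = 1.0
    q_narrow = wide_width / narrow_width

    # (s, t, q) at the four corners; the bottom edge is the wide one here.
    corners = [
        (0.0, 0.0, q_wide),    # bottom-left
        (1.0, 0.0, q_wide),    # bottom-right
        (1.0, 1.0, q_narrow),  # top-right
        (0.0, 1.0, q_narrow),  # top-left
    ]
    return [(s * q, t * q, 0.0, q) for s, t, q in corners]

print(trapezoid_texcoords(wide_width=2.0, narrow_width=1.0))
```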

What's the opposite of tessellation?

From what I understand, taking a polygon and breaking it up into composite triangles is called "tessellation". What's the opposite process called, and can anyone link me to an algorithm for it?
Essentially, I have a list of 2D triangles and I need an algorithm to recombine them into a polygon.
Thanks!
I think you need to convert your triangles into a half-edge data structure; then you should be able to easily find the half-edges which have no opposite, as in the sketch below.
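A small sketch of that boundary-extraction idea (my own code, assuming the triangles share vertex indices, have consistent winding, and bound a single simply connected region):

```python
# Every interior edge appears twice with opposite direction, so the directed
# edges whose reverse never appears are the boundary; chaining them in order
# recovers the polygon outline.
def triangles_to_polygon(triangles):
    """triangles: list of (a, b, c) vertex-index tuples with consistent winding."""
    directed = set()
    for a, b, c in triangles:
        directed.update([(a, b), (b, c), (c, a)])

    # Boundary (half) edges are those with no opposite twin.
    boundary = {u: v for (u, v) in directed if (v, u) not in directed}

    # Walk the boundary loop, following each edge to the next vertex.
    start = next(iter(boundary))
    polygon = [start]
    nxt = boundary[start]
    while nxt != start:
        polygon.append(nxt)
        nxt = boundary[nxt]
    return polygon

# Two triangles forming a square (0-1-2-3) split along the diagonal 0-2:
print(triangles_to_polygon([(0, 1, 2), (0, 2, 3)]))  # a cyclic ordering of [0, 1, 2, 3]
```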
It's called mesh decimation. Here is some code I wrote to do this for a class. Tibur is correct that the half-edge data structure makes this much more efficient.
http://www.cs.virginia.edu/~mjh7v/advgfx/proj1/
The thing that you are calling tessellation is actually called triangulation. The thing you are searching for is tessellation (you may have heard it referred to as tiling).
If you are more specific about the problem you are trying to solve (e.g. do you know the shape of the final polygon?) I can try to recommend some more specific algorithms.

How do I arbitrarily distort a textured polygon?

I'd like to write a program that lets me arbitrarily distort a textured polygon by dragging its vertices. I want the texture to distort fluidly and without overlap, assuming the new polygon doesn't intersect itself. I should also be able to repeat the process with the new shape, and with a minimum amount of loss.
Are there any algorithms for doing this?
It sounds like you might want a variation on the Schwarz-Christoffel mapping. This is a type of conformal mapping that can be used to warp a polygon to and from a simpler region, like a disk; although I have not implemented it, apparently it is computationally tractable.
For your application, you would set up a map from the original polygon to the simpler region, and compute the inverse map to the modified polygon; combining the two should give you a nice conformal mapping from the original to the modified polygon.
Conformal mappings are nice and smooth, but they can sometimes behave in unintuitive ways; I can imagine that an animated version might yield some entertaining "slidy" effects. The conformal mapping will preserve local angles in the interior of the polygon; this means that the size distortion very near a modified vertex can be severe.
People have been working on solutions to this problem for the past decade or two, and the state of the art keeps getting better and better (but the math gets harder as well). A good place to start (and roughly where I stopped following it) is the work at http://www.cs.technion.ac.il/~weber/Publications/Complex-Coordinates/
Read the paper there, and look up the papers in the references. One of them should give you an algorithm that you're willing to implement.
The simplest method I can think of is to triangulate the input polygon (using an ear-clipping method, or something similarly good) and then move the points. You can then use a barycentric mapping from the original polygon to the new space (see the sketch below).
If you're looking for something more robust, you might look at mean value coordinates.
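A minimal sketch of that triangulate-and-remap idea (my own example; it handles a single triangle, and a full implementation would first locate each point's containing triangle in the original triangulation):

```python
# Compute a point's barycentric coordinates in its original (undeformed)
# triangle, then re-evaluate those coordinates in the corresponding moved
# triangle to get the warped position.
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p with respect to triangle (a, b, c)."""
    m = np.column_stack([b - a, c - a])      # 2x2 edge matrix
    v, w = np.linalg.solve(m, p - a)         # p = a + v*(b - a) + w*(c - a)
    return 1.0 - v - w, v, w

def warp_point(p, tri_old, tri_new):
    """Map p from the original triangle to the dragged (deformed) triangle."""
    u, v, w = barycentric(np.asarray(p, float), *map(np.asarray, tri_old))
    a, b, c = map(np.asarray, tri_new)
    return u * a + v * b + w * c

# Drag one corner of a triangle and see where an interior point lands:
old_tri = ([0.0, 0.0], [1.0, 0.0], [0.0, 1.0])
new_tri = ([0.0, 0.0], [1.0, 0.0], [0.5, 1.5])   # third vertex moved
print(warp_point([0.25, 0.25], old_tri, new_tri))  # -> [0.375 0.375]
```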
