How To Texture A Quadtree

I am attempting to apply one solid texture to a quadtree, but I am having a problem. My quadtree works by creating a new mesh each time there is a subdivision: the tree starts as one mesh, then when it splits it becomes 4 meshes, and so on.
Now I am trying to apply a consistent texture to the quadtree so that each split still draws the same texture fully. The pictures below give a good example.
Before Split:
After Split:
What I want is the texture to look like the before split picture even after the split. I can't seem to figure out the UV-mapping for it though. Is there a simple way to do this?
I have tried taking the location and modifying its value based on the scale of the new mesh. This has proven unfruitful, though, and I'm really not sure what to do.
Any help or advice is greatly appreciated, thanks.

Stumbled on this...so it might be too late to help you. But if you are still thinking about this:
I think your problem is that you are getting a little confused about what a quadtree is. A quadtree is a spatial partition of a space; think of it as a two-dimensional B-tree. You don't texture a quadtree, you just use it to quickly figure out what lies within an arbitrary bounded region.
I suppose that you could use it to determine texture offsets for texture alignment, but that sounds like an odd use of a quadtree, and I suspect that there is probably a much easier way to solve your problem. (Perhaps use the world-space coords % texture size to get the offset needed to seamlessly render the texture across multiple triangles?)
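To make that concrete, here's a minimal sketch of the world-space-UV idea: every patch derives its UVs from its position within the root quad rather than from its own local 0..1 range, so the texture stays whole after a split. The names (root_origin, root_size, patch_origin) are illustrative, not from the original post.

    def world_to_uv(x, y, root_origin, root_size):
        """Map a world-space point to a UV in [0, 1] across the root quad."""
        u = (x - root_origin[0]) / root_size
        v = (y - root_origin[1]) / root_size
        return u, v

    def patch_uvs(patch_origin, patch_size, root_origin, root_size):
        """UVs for the four corners of one subdivided patch. Because the UVs
        come from world space rather than the patch's local 0..1 space, a
        child covering a quarter of the root gets a quarter of the texture,
        so the image stays whole across splits."""
        x0, y0 = patch_origin
        corners = [(x0, y0), (x0 + patch_size, y0),
                   (x0 + patch_size, y0 + patch_size), (x0, y0 + patch_size)]
        return [world_to_uv(x, y, root_origin, root_size) for x, y in corners]

    # Root quad from (0, 0) to (64, 64); its top-left child after one split:
    print(patch_uvs((0, 0), 32, (0, 0), 64))
    # -> [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5), (0.0, 0.5)]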

Related

Create a polygon from a texture

Let's say I've got an RGBA texture and a polygon class whose constructor takes a vector of vertex coordinates.
Is there some way to create a polygon from this texture in 2D, for example using the alpha channel of the texture?
Absolutely, yes it can be done. Is it easy? No. I haven't seen any game/geometry engines that would help you out too much either. Doing it yourself, the biggest problem you're going to have is generating a simplified mesh: one quad per pixel is going to generate a lot of geometry very quickly. Holes in the geometry may be an issue if you're tracing the edges and triangulating afterwards. Then there's the issue of determining what's in and what's out. Alpha is the obvious candidate, but unless you're looking at either full-on or full-off, you may be thinking about nice smooth edges; that's going to be hard to get right and would probably involve some kind of marching squares over the interpolated alpha. So while it's not impossible, it's a lot of work.
Edit: As pointed out below, Unity does provide a method of generating a polygon from the alpha of a sprite - a PolygonCollider2D. The script reference for it mentions the pathCount variable, which gives the number of polygons it contains and in turn which indexes are valid for the GetPath method. So this method could be used to generate polygons from alpha. It does rely on using Unity, however. But with the combination of the sprite alpha controlling what is drawn and the collider controlling intersections with other objects, it covers a lot of use cases. That doesn't mean it's appropriate for your application.
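For the hard-edged ("full-on or full-off") case described above, here's a rough engine-agnostic sketch: threshold the alpha into a boolean mask, emit the outline edges of each solid pixel, cancel the shared ones, and chain what survives into boundary loops. It assumes no two loops share a corner vertex; smooth edges would need marching squares instead, as the answer notes. All names are illustrative.

    def alpha_to_polygons(mask):
        """mask: 2D list of booleans (True = opaque). Returns a list of
        closed loops, each an ordered list of (x, y) pixel-grid corners."""
        edges = set()
        for y, row in enumerate(mask):
            for x, solid in enumerate(row):
                if not solid:
                    continue
                # Outline of this pixel; an edge shared with a solid
                # neighbour appears once in each direction and cancels.
                for a, b in [((x, y), (x + 1, y)),
                             ((x + 1, y), (x + 1, y + 1)),
                             ((x + 1, y + 1), (x, y + 1)),
                             ((x, y + 1), (x, y))]:
                    if (b, a) in edges:
                        edges.remove((b, a))
                    else:
                        edges.add((a, b))
        # Chain the surviving edges into closed loops.
        nxt = dict(edges)
        loops = []
        while nxt:
            start, cur = nxt.popitem()
            loop = [start]
            while cur != start:
                loop.append(cur)
                cur = nxt.pop(cur)
            loops.append(loop)
        return loops

    # A 2x2 opaque block yields one square outline of 8 grid corners:
    print(alpha_to_polygons([[True, True], [True, True]]))

The resulting loops are one-quad-per-pixel resolution, so you'd still want a simplification pass (collapsing collinear runs is the easy first step) before triangulating.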

How to calculate a pixel's world-space position on an image plane formed by a virtual camera?

First, this Calculating camera ray direction to 3d world pixel helped me a bit in understanding the virtual camera setup. I don't understand how the vectors work in this setup, and I thought normalized device coordinates had to be used, which led me to this page http://www.scratchapixel.com/lessons/3d-basic-lessons/lesson-6-rays-cameras-and-images/building-primary-rays-and-rendering-an-image/. What I am trying to do is build a ray tracer, and, as the question states, find out a pixel's position in order to shoot out a ray. What I would really, really like is an actual example showing a virtual camera setup, a screen resolution, and how to calculate a pixel's position, then transform it to world-space coordinates. Experts, thank you for your help! :D
Multiply a matrix by the coordinates. What matrix? There are lots of choices. For example XNA uses a projection matrix, view matrix and world matrix. Applying all of them transforms pixel coordinates into world coordinates or vice versa. Breaking it down this way helps to understand the different transformations going on so you can more easily construct the matrices.
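As a worked example of that pipeline (and of the scratchapixel approach linked in the question), here's a sketch that takes a pixel to a ray. The camera-to-world transform is left as the identity for brevity, so camera space coincides with world space; plug in your own view matrix for a moving camera. The function name and conventions (y-down pixel rows, camera looking down -z) are assumptions, not from either post.

    import math

    def primary_ray(px, py, width, height, fov_deg):
        """Return (origin, direction) of the ray through pixel (px, py)."""
        aspect = width / height
        scale = math.tan(math.radians(fov_deg) / 2.0)
        # Pixel centre -> NDC in [0, 1] -> screen space in [-1, 1],
        # flipping y so that py = 0 is the top row of the image.
        x = (2.0 * (px + 0.5) / width - 1.0) * aspect * scale
        y = (1.0 - 2.0 * (py + 0.5) / height) * scale
        # The pixel sits on an image plane one unit in front of the camera;
        # with an identity camera-to-world matrix this is already world space.
        d = (x, y, -1.0)
        length = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
        return (0.0, 0.0, 0.0), (d[0] / length, d[1] / length, d[2] / length)

    # The centre of a 640x480 image looks straight down -z:
    print(primary_ray(319.5, 239.5, 640, 480, 90))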
Isn't this webpage already providing you with 4 pages of explanation on how these rays are built? It seems like you haven't made the effort to read the content of the link you are referring to. I would suggest you read it first, try to understand it, maybe look at the source code they provide, and come back with a real question regarding what you potentially don't understand.
It's all there, and I am not going to re-write what these people have clearly put a lot of energy into explaining! (nor should anybody else, really...)

Emulating a perspective rectangle in 2D

So, I'm currently developing a puzzle game of sorts, and I came upon something I'm not sure how to approach.
As you can see from the screenshot below, the text on the sides next to the main square is distorted along the diagonal of the quadrilateral. This is because it is not a screenshot of a 3D environment, but rather a 2D environment where the squares have been stretched in such a way that it looks 3D.
I have tried using 3D perspective and changing depths, and while that solves the issue of the distorted sides, I was wondering if it's possible to fix this without going to a 3D perspective, mainly because the current mesh transformation scheme took a while to get right, and converting it to something that works in 3D space is extra effort that might be avoidable.
I have a feeling this is unavoidable, but I'm curious if anyone knows a solution. I'm currently using OpenGL ES 1.
Probably not the answer you wanted, but I'd go with the 3D transformation, because it will save you not only this distortion but will also simplify many other things down the road and give you opportunities to do nice effects.
What you are lacking in this scene is "perspective-correct interpolation", which is slightly non-linear and is done automatically when you provide coordinates with depth information.
It may be possible to emulate it another way (though your options are limited since you do not have shaders available), but any workaround will likely be less efficient than using the dedicated functionality of your GPU. I recommend that you switch to using 3D coordinates.
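For a feel of what that non-linearity looks like, here's a small numeric sketch (values made up): affine interpolation blends the texture coordinate directly, while perspective-correct interpolation blends u/w and 1/w and then divides.

    def affine(a, b, t):
        return a + (b - a) * t

    def perspective_correct(u0, w0, u1, w1, t):
        # Interpolate u/w and 1/w linearly in screen space, then recover u.
        return affine(u0 / w0, u1 / w1, t) / affine(1.0 / w0, 1.0 / w1, t)

    # An edge from u=0 at depth w=1 to u=1 at depth w=4, sampled halfway:
    print(affine(0.0, 1.0, 0.5))                         # 0.5 (the flat 2D result)
    print(perspective_correct(0.0, 1.0, 1.0, 4.0, 0.5))  # 0.2 (the correct 3D result)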
Actually, I just found the answer. Turns out there's a Q coordinate which you can use to play around with trapezoidal texture distortion:
texture mapping a trapezoid with a square texture in OpenGL
http://www.xyzw.us/~cass/qcoord/
http://hacksoflife.blogspot.com.au/2008/08/perspective-correct-texturing-in-opengl.html
It looks like it won't be as correct as doing it in 3D, but I suppose it will be easier for my use right now.
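Based on those links, here's a sketch of the q-coordinate computation for a trapezoid with parallel top and bottom edges; the function name and corner ordering are mine, and the width values are made up. The narrow edge gets the larger q, since it plays the role of the "far" edge of a perspective-projected rectangle.

    def trapezoid_texcoords(top_width, bottom_width):
        """4D texture coords for a trapezoid's corners, listed bottom-left,
        bottom-right, top-right, top-left, with the top edge the narrow one."""
        q_bottom = 1.0
        q_top = bottom_width / top_width  # the narrow edge acts "farther away"
        corners = [(0.0, 0.0, q_bottom), (1.0, 0.0, q_bottom),
                   (1.0, 1.0, q_top), (0.0, 1.0, q_top)]
        # Feed these through glTexCoordPointer(4, ...) in OpenGL ES 1.x;
        # the per-fragment divide by q restores the perspective look.
        return [(s * q, t * q, 0.0, q) for s, t, q in corners]

    for tc in trapezoid_texcoords(top_width=1.0, bottom_width=2.0):
        print(tc)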

What's the opposite of tessellation?

From what I understand, taking a polygon and breaking it up into composite triangles is called "tessellation". What's the opposite process called, and can anyone link me to an algorithm for it?
Essentially, I have a list of 2D triangles and I need an algorithm to recombine them into a polygon.
Thanks!
I think you need to convert your triangles into a half-edge data structure, and then you should be able to easily find the half-edges which have no opposite.
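Here's a rough sketch of that idea without building a full half-edge structure: with consistently wound triangles, an interior edge shows up in both directions and a boundary edge in only one, so collecting the unmatched directed edges and chaining them recovers the outline. It assumes a single simple polygon with no holes; the names are illustrative.

    def boundary_polygon(triangles):
        """triangles: (i, j, k) vertex-index triples, consistently wound.
        Returns the outline as an ordered list of vertex indices."""
        directed = set()
        for a, b, c in triangles:
            directed.update([(a, b), (b, c), (c, a)])
        # An interior edge is shared by two triangles, so it appears in both
        # directions; a boundary edge appears in exactly one.
        nxt = {a: b for (a, b) in directed if (b, a) not in directed}
        start = next(iter(nxt))
        loop = [start]
        while nxt[loop[-1]] != start:
            loop.append(nxt[loop[-1]])
        return loop

    # A unit square split into two triangles (vertices 0..3, wound CCW):
    print(boundary_polygon([(0, 1, 2), (0, 2, 3)]))  # -> [0, 1, 2, 3] or a rotation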
It's called mesh decimation. Here is some code I wrote to do this for a class. Tibur is correct that the half edge data structure makes this much more efficient.
http://www.cs.virginia.edu/~mjh7v/advgfx/proj1/
The thing that you are calling tessellation is actually called triangulation. The thing you are searching for is tessellation (you may have heard of it referred to as tiling).
If you are more specific about the problem you are trying to solve (e.g. do you know the shape of the final polygon?) I can try to recommend some more specific algorithms.

How do I arbitrarily distort a textured polygon?

I'd like to write a program that lets me arbitrarily distort a textured polygon by dragging its vertices. I want the texture to distort fluidly and without overlap, assuming the new polygon doesn't intersect itself. I should also be able to repeat the process with the new shape, and with a minimum amount of loss.
Are there any algorithms for doing this?
It sounds like you might want a variation on the Schwarz-Christoffel mapping. This is a type of conformal mapping that can be used to warp a polygon to and from a simpler region, like a disk; although I have not implemented it, apparently it is computationally tractable.
For your application, you would set up a map from the original polygon to the simpler region, and compute the inverse map to the modified polygon; combining the two should give you a nice conformal mapping from the original to the modified polygon.
Conformal mappings are nice and smooth, but they can sometimes behave in unintuitive ways; I can imagine that an animated version might yield some entertaining "slidy" effects. The conformal mapping will preserve local angles in the interior of the polygon; this means that the size distortion very near a modified vertex can be severe.
People have been working on solutions to this problem for the past decade or two, and the state of the art keeps getting better and better (but the math gets harder as well). A good place to start (and sort of where I stopped following it) is the work at http://www.cs.technion.ac.il/~weber/Publications/Complex-Coordinates/
Read the paper there, and look up the papers in the references. One of them should give you an algorithm that you're willing to implement.
The simplest method I can think of is to triangulate the input polygon (using an ear clipping method, or something similarly good) and then move the points. Then you can use a barycentric mapping from the original polygon to the new space.
If you're looking for something more robust, you might look at mean value coordinates.
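A minimal sketch of the triangulate-and-remap method from the paragraph before last (the ear-clipping step and the point-in-triangle lookup are omitted): compute the barycentric weights of a point in its original triangle, then evaluate the same weights against the dragged vertices.

    def barycentric(p, a, b, c):
        """Barycentric weights of point p with respect to triangle (a, b, c)."""
        det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
        w1 = ((b[0] - p[0]) * (c[1] - p[1]) - (c[0] - p[0]) * (b[1] - p[1])) / det
        w2 = ((c[0] - p[0]) * (a[1] - p[1]) - (a[0] - p[0]) * (c[1] - p[1])) / det
        return w1, w2, 1.0 - w1 - w2

    def remap(p, tri_before, tri_after):
        """Carry p from the original triangle to its deformed counterpart."""
        w = barycentric(p, *tri_before)
        return (sum(wi * v[0] for wi, v in zip(w, tri_after)),
                sum(wi * v[1] for wi, v in zip(w, tri_after)))

    before = [(0, 0), (1, 0), (0, 1)]
    after = [(0, 0), (2, 0), (0, 1)]   # drag one vertex to the right
    print(remap((0.25, 0.25), before, after))  # -> (0.5, 0.25)

The mapping is only piecewise-linear, so texture gradients kink at triangle edges; that's the loss the smoother schemes (mean value coordinates, the conformal mappings above) are designed to avoid.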
