How to arrange faces of a quad sphere to simplify neighbour lookup? - graphics

I'm working on a procedural planet generator that uses a quad sphere, or quadrilateralized spherical cube, to represent its surface.
Most authors seem to number and arrange the faces arbitrarily, and so did I. For example, this is the arrangement that Wikipedia shows for cube maps (apparently the Direct3D convention, although Wikipedia presents it as "the way" without mentioning the zillion alternatives):
But this leads to an issue when you want to know the neighbours of a given pixel, for example for normal mapping, or all sorts of simulations. Given a triple (face, u, v) that identifies a pixel (where u and v are integer indices, not texture coordinates), the task is to find the four triples that identify its four neighbours.
In the face interior, this is easy. But on the edges, you have to take 24 cases of wrapping into account: 6 cube faces × 4 edges per face. In pseudo-C:
Index neighbor(idx: Index, direction: Direction) -> Index {
    switch (direction) {
        case UP:
            if (idx.v < SIZE - 1) {
                return Index { face: idx.face, u: idx.u, v: idx.v + 1 };
            } else {
                switch (idx.face) {
                    case 0: return Index { face: 2, u: SIZE - 1, v: idx.u };
                    // And so on for the other five faces
                }
            }
        // And so on for the other three directions
    }
}
It's tedious and error-prone, and the branching makes it potentially slower than needed.
Then I found the 2007 SIGGRAPH sketch Creating Spherical Worlds (sap_0251) by Compton et al., which mentions:
Further, by choosing face mappings to be permutations of the corresponding axes, it is possible to formulate efficient algorithms for wrapping between faces, and projecting a 3D point into a chart.
This tantalizing sentence is all we get; there's no further explanation, and I can't find any follow-up articles by these authors either.
How can we choose the face mapping to allow for efficient wrapping?

UPDATE
Adding a separate answer because of the difference in approach.
(This answer assumes that the UV coordinates are per-face in the range 0..<SIZE, with SIZE being constant for all faces.)
I can't think of a good way of efficiently computing the neighbouring (face, u, v) across the boundaries for arbitrary cuboid mappings, but it should be relatively easy to just store a mapping for each face. For each face, store four mappings, one for each primary direction in UV space (i.e. +U, -U, +V, -V). Each of these mappings should contain a reference to the next face in that direction, along with mapping coefficients for the transform (u0, v0) -> (u1, v1).
For the example mapping above, face 2 (the top one) would have the following mappings:
up:
faceID: 5
u: SIZE-u
v: SIZE-1
down:
faceID: 4
u: u
v: SIZE-1
left:
faceID: 1
u: SIZE-v
v: SIZE-1
right:
faceID: 0
u: v
v: SIZE-1
When doing a neighbour lookup, check if the lookup falls outside the dimensions (0..<SIZE) and if it does, use the lookup structure defined above. So if you're looking up the next position along the U dimension on the boundary of face 2, just check the mapping for 'right': face 0, with a u value equal to the original v value, and a v value equal to SIZE-1.
You will need some method for precomputing this data when creating the geometry.
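A minimal sketch of that lookup structure in Python (illustrative only: SIZE, the face IDs and the coefficients below are placeholders; the exact remapping depends on the face arrangement you pick, and the formulas here are written to stay in the 0..SIZE-1 range):

SIZE = 16  # texels per face edge; illustrative

# For each (face, direction) pair that walks off an edge, store the target face
# and a small transform that remaps the original (u, v) into that face's coordinates.
# Only face 2 is filled in here, mirroring the example mapping above.
EDGE_WRAP = {
    (2, 'up'):    (5, lambda u, v: (SIZE - 1 - u, SIZE - 1)),
    (2, 'down'):  (4, lambda u, v: (u,            SIZE - 1)),
    (2, 'left'):  (1, lambda u, v: (SIZE - 1 - v, SIZE - 1)),
    (2, 'right'): (0, lambda u, v: (v,            SIZE - 1)),
}

STEP = {'up': (0, 1), 'down': (0, -1), 'left': (-1, 0), 'right': (1, 0)}

def neighbour(face, u, v, direction):
    du, dv = STEP[direction]
    nu, nv = u + du, v + dv
    if 0 <= nu < SIZE and 0 <= nv < SIZE:            # interior: no wrapping needed
        return face, nu, nv
    new_face, remap = EDGE_WRAP[(face, direction)]   # edge: one table lookup
    return (new_face, *remap(u, v))

For example, neighbour(2, SIZE - 1, 7, 'right') steps off face 2's right edge and lands on face 0 at (7, SIZE - 1), exactly as described above.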
OLD ANSWER
Assuming the following:
This is for doing texture lookups.
You have control over the way textures are generated.
Then I propose an alternate solution.
Instead of finding an efficient way of mapping the neighbour relationship across the discontinuity, simply make the texel value outside the UV map region the same as the value in its corresponding neighbour polygon.

After some quality time with pencil and paper, I figured out a reasonably elegant way to do this. We still need to distinguish between even and odd faces, but that can be done with a modulo operation (which compiles down to a bitwise AND), so it doesn't need branching.
The arrangement is as follows:
The mapping of global axes to local axes:
face    u    v    normal
  0    +X   +Y    +Z
  1    -X   +Z    +Y
  2    -Y   -Z    +X
  3    +Y   +X    -Z
  4    +Z   -X    -Y
  5    -Z   -Y    -X
Each global axis is used once in each column, so this is indeed a permutation. Whether it's the same one that the Spore developers designed remains an open question.
Certain rules now hold. For example, moving UP from face f:
if f is even (0, 2, 4), we always land on face (f + 1) % 6, i.e. an odd face (1, 3, 5);
if f is odd (1, 3, 5), we always land on face (f + 5) % 6, i.e. an even face (0, 2, 4).
So the neighbour lookup becomes much simpler:
Index neighbor(idx: Index, direction: Direction) -> Index {
    switch (direction) {
        case UP:
            if (idx.v < SIZE - 1) {
                return Index { face: idx.face, u: idx.u, v: idx.v + 1 };
            } else {
                return Index {
                    face: (idx.face + 1 + 4 * (idx.face % 2)) % 6,
                    u: SIZE - 1 - idx.u,
                    v: SIZE - 1
                };
            }
        // And similar for the other three directions
    }
}
There might still be room for improvement, so better answers are quite welcome!
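As a quick sanity check of the face term in that formula, a throwaway Python snippet:

faces = [(f + 1 + 4 * (f % 2)) % 6 for f in range(6)]
print(faces)   # [1, 0, 3, 2, 5, 4]

Even faces 0, 2, 4 land on 1, 3, 5 and the odd faces land straight back, matching the two rules above.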

Related

corners of angled rect in 3d

I've got 2 points in 3D space (with the same y coordinate). I'll call them c and m. I want to find the corner points (marked in the pic as p1-p4) of a square with width w. The important thing is that the square is not parallel to the x-axis. If it were, I could just do (for p1 as an example):
p1.x = m.x + w / 2
p1.y = m.y + w / 2
p1.z = m.z
How would I do the same with an angled square? These are all the given points:
m; c
and lengths:
w; d
There are multiple ways to do it, but here's one way.
If the two points are guaranteed to have the same y value, you should be able to do it as follows.
Take 'm - c' and call that u. Normalize u. Then take the cross product of u and the y axis to get v, a vector parallel to the xz plane that's perpendicular to u. (This can be optimized, but that's unlikely to be important.) Then take the cross product of u and v to get a third vector, w. Note that you can use 'm - c' or 'c - m', or use different orders for the cross-product arguments, and it'll still work, but the resulting vectors may point in different directions (but only opposite directions). You can also normalize at different points in the process and get the same results at the end.
Once you have m, v, and w, you can use some basic vector math to compute the corners.
[Edit: I see you have a variable named 'w', so I should clarify that the 'w' in my example is a different 'w' than yours. As for your 'w' and 'd', those would factor into the vector math I mentioned at the end.]
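A small sketch of that construction in Python with numpy (names are mine: u is the normalized m - c, v and n are the two perpendicular unit vectors from the cross products, and side is the square's width from the question):

import numpy as np

def square_corners(m, c, side):
    # Corners of a square of width `side`, centred at m, for m and c with equal y.
    m, c = np.asarray(m, float), np.asarray(c, float)
    u = m - c
    u /= np.linalg.norm(u)                    # direction from c to m (horizontal)
    v = np.cross(u, [0.0, 1.0, 0.0])          # horizontal, perpendicular to u
    v /= np.linalg.norm(v)
    n = np.cross(u, v)                        # vertical axis of the square
    h = side / 2.0
    return [m + h*v + h*n, m - h*v + h*n, m - h*v - h*n, m + h*v - h*n]

As noted above, swapping c and m or the cross-product arguments only flips some of the directions; the four corners come out as the same set of points.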

Find rightmost/leftmost point of SVG path

How do I find the leftmost/rightmost point of an SVG C (Bezier curve) path segment? I know there are getBoundingClientRect() and getBBox(), but neither applies, since they only return a single coordinate of the point.
Just to avoid the XY problem: I want to split a single path composed of Bezier curves into several paths, each going monotonically from left to right (or right to left). That means no single path should contain two points with the same X coordinate. I understand that the required split point may potentially be inside the bounding box of a segment, and thus not be the leftmost/rightmost point, but I'm almost sure that finding such a point uses the same techniques as finding a horizontally extreme point.
You would need to iterate through the path length with the .getPointAtLength() method and then find the limits. Seemed like a fun thing to do, so I made a quick and dirty implementation; this is the important part:
// `dimensions` is the SVG's width/height, defined elsewhere in the fiddle;
// it just seeds the minima with values any point on the path will beat.
function findLimits(path) {
  var boundingPoints = {
    minX: {x: dimensions.width, y: dimensions.height},
    minY: {x: dimensions.width, y: dimensions.height},
    maxX: {x: 0, y: 0},
    maxY: {x: 0, y: 0}
  };
  var l = path.getTotalLength();
  // Sample the path at 1-unit steps; approximate, but good enough here.
  for (var p = 0; p < l; p++) {
    var coords = path.getPointAtLength(p);
    if (coords.x < boundingPoints.minX.x) boundingPoints.minX = coords;
    if (coords.y < boundingPoints.minY.y) boundingPoints.minY = coords;
    if (coords.x > boundingPoints.maxX.x) boundingPoints.maxX = coords;
    if (coords.y > boundingPoints.maxY.y) boundingPoints.maxY = coords;
  }
  return boundingPoints;
}
You can find the implementation here: https://jsfiddle.net/4gus3hks/1/
Paul LeBeau's comment and the fancy animation on the wiki inspired this solution. It rests mostly on the following observations:
Values of the parameter t from [0, 1] correspond to points on the curve.
For any parameter value, the point on the curve can be constructed step by step by linearly combining pairs of adjacent control points into intermediate control points of higher "depth". This operation can be repeated until only a single point is left: the point on the curve itself.
The coordinates of the intermediate points are given by polynomials in t whose degree equals the point's "depth", and the coefficients of those polynomials ultimately depend only on the coordinates of the initial control points.
The penultimate construction step gives 2 points that define the tangent to the curve at the final point, and the coordinates of those points are controlled by quadratic polynomials.
Having the desired tangent direction as a vector lets you set up a quadratic equation in t for where the curve has that tangent.
So, in fact, finding the required points can be performed in constant O(1) time:
// getPolynoms, subtractPoly, multiplyScalar and solveQuadratic are helpers
// from the Polynom code linked below.
tangentPoints: function(tx, ty){
  var ends = this.getPolynoms(2);
  var tangent = [ends[1][0].subtractPoly(ends[0][0]),
                 ends[1][1].subtractPoly(ends[0][1])];
  var eq = tangent[0].multiplyScalar(ty).subtractPoly(tangent[1].multiplyScalar(tx));
  return solveQuadratic(...eq.values).filter(t => t >= 0 && t <= 1);
}
I placed the full code, with the assisting Polynom class and a visual demo, into this repo and fiddle.
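For the specific leftmost/rightmost question the required tangent is vertical, so the quadratic is simply x'(t) = 0. A self-contained sketch of that special case (plain Python, one cubic segment given by its four control points; not the author's Polynom code):

import math

def x_extreme_params(p0, p1, p2, p3):
    # t values in (0, 1) where the cubic Bezier's tangent is vertical, i.e. x'(t) = 0;
    # these are the candidate split points for left/right monotonicity.
    d0, d1, d2 = p1[0] - p0[0], p2[0] - p1[0], p3[0] - p2[0]
    a, b, c = d0 - 2*d1 + d2, 2*(d1 - d0), d0
    if abs(a) < 1e-12:                       # derivative degenerates to a line
        return [] if abs(b) < 1e-12 else [t for t in (-c / b,) if 0 < t < 1]
    disc = b*b - 4*a*c
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return [t for t in ((-b - r) / (2*a), (-b + r) / (2*a)) if 0 < t < 1]

def point_at(p0, p1, p2, p3, t):
    # Evaluate the cubic Bezier at t.
    s = 1 - t
    return tuple(s*s*s*a + 3*s*s*t*b + 3*s*t*t*c + t*t*t*d
                 for a, b, c, d in zip(p0, p1, p2, p3))

Evaluating point_at at each returned t gives the points where the path stops being monotone in X, which is where you would split it.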

GLSL cube signed distance field implementation explanation?

I've been looking at and trying to understand the following bit of code
float sdBox( vec3 p, vec3 b )
{
vec3 d = abs(p) - b;
return min(max(d.x,max(d.y,d.z)),0.0) +
length(max(d,0.0));
}
I understand that length(d) handles the SDF case where the point is off to the 'corner' (ie. all components of d are positive) and that max(d.x, d.y, d.z) gives us the proper distance in all other cases. What I don't understand is how these two are combined here without the use of an if statement to check the signs of d's components.
When all of the d components are positive, the return expression can be reduced to length(d) because of the way min/max will evaluate - and when all of the d components are negative, we get max(d.x, d.y, d.z). But how am I supposed to understand the in-between cases? The ones where the components of d have mixed signs?
I've been trying to graph it out to no avail. I would really appreciate it if someone could explain this to me in geometrical/mathematical terms. Thanks.
If you'd like to know how it works, it's best to go through the following steps:
1. First of all, you should know the definitions of the shapes.
2. It's always easier to consider their 2D versions first, because three dimensions can be harder to reason about.
So let me explain some shapes:
Circle
A circle is a simple closed shape. It is the set of all points in a plane that are at a given distance from a given point, the center.
You can use distance(), length() or sqrt() to calculate the distance to the center of the billboard.
The book of shaders - Chapter 7
Square
In geometry, a square is a regular quadrilateral, which means that it has four equal sides and four equal angles (90-degree angles).
I described the 2D shapes in the previous section; now let me give the 3D definitions.
Sphere
A sphere is a perfectly round geometrical object in three-dimensional space that is the surface of a completely round ball.
Like a circle, which geometrically is an object in two-dimensional space, a sphere is defined mathematically as the set of points that are all at the same distance r from a given point, but in three-dimensional space.
Reference: Wikipedia
Cube
In geometry, a cube is a three-dimensional solid object bounded by six square faces, facets or sides, with three meeting at each vertex.
Reference: Wikipedia
Modeling with distance functions
Now it's time to understand modeling with distance functions.
Sphere
As mentioned in the previous sections, in the code below length() is used to calculate the distance to the center of the billboard, and you can scale this shape with the s parameter.
//Sphere - signed - exact
/// <param name="p">Position.</param>
/// <param name="s">Scale.</param>
float sdSphere( vec3 p, float s )
{
return length(p)-s;
}
Box
// Box - unsigned - exact
/// <param name="p">Position.</param>
/// <param name="b">Bound(Scale).</param>
float udBox( vec3 p, vec3 b )
{
return length(max(abs(p)-b,0.0));
}
length() is used like in the previous example.
Next we have max(x, 0.0), which takes the positive part of each component (see "positive and negative parts").
This means the code below is equivalent:
float udBox( vec3 p, vec3 b )
{
vec3 value = abs(p)-b;
if(value.x<0.){
value.x = 0.;
}
if(value.y<0.){
value.y = 0.;
}
if(value.z<0.){
value.z = 0.;
}
return length(value);
}
Applied one component at a time: step 1 clamps value.x, step 2 clamps value.y, step 3 clamps value.z, and step 4 is the combined result.
Next we have the absolute value function, abs(p). It mirrors the negative side of each axis onto the positive side, so one computation covers all six faces of the box. Written out per component, it is equivalent to:
if(p.x < 0.){
p.x = -p.x;
}
if(p.y < 0.){
p.y = -p.y;
}
if(p.z < 0.){
p.z = -p.z;
}
You can also make any shape by using constructive solid geometry (CSG).
CSG is built on 3 primitive operations: intersection ( ∩ ), union ( ∪ ), and difference ( - ).
It turns out these operations are all concisely expressible when combining two surfaces expressed as SDFs.
float intersectSDF(float distA, float distB) {
return max(distA, distB);
}
float unionSDF(float distA, float distB) {
return min(distA, distB);
}
float differenceSDF(float distA, float distB) {
return max(distA, -distB);
}
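The same idea works outside a shader. A hedged Python sketch, where sd_sphere and sd_box mirror sdSphere above and the signed sdBox from the question, and the carved cube is just an example combination:

import math

def sd_sphere(p, s):
    return math.sqrt(sum(c*c for c in p)) - s

def sd_box(p, b):
    d = [abs(pc) - bc for pc, bc in zip(p, b)]
    outside = math.sqrt(sum(max(c, 0.0)**2 for c in d))
    inside = min(max(d[0], d[1], d[2]), 0.0)
    return inside + outside

def intersect(a, b):  return max(a, b)
def union(a, b):      return min(a, b)
def difference(a, b): return max(a, -b)

# Example: a unit cube with a sphere of radius 1.2 carved out of it.
def carved_cube(p):
    return difference(sd_box(p, (1.0, 1.0, 1.0)), sd_sphere(p, 1.2))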
I figured it out a while ago and wrote about this extensively in a blog post here: http://fabricecastel.github.io/blog/2016-02-11/main.html
Here's an excerpt (see the full post for a full explanation):
Consider the four points, A, B, C and D. Let's crudely reduce the distance function to try and get rid of the min/max functions in order to understand their effect (since that's what's puzzling about this function). The notation below is a little sloppy, I'm using square brackets to denote 2D vectors.
// 2D version of the function
d(p) = min(max(p.x, p.y), 0)
+ length(max(p, 0))
---
d(A) = min(max(-1, -1), 0)
+ length(max([-1, -1], 0))
d(A) = -1 + length[0, 0]
---
d(B) = min(max(1, 1), 0)
+ length(max([1, 1], 0))
d(B) = 0 + length[1, 1]
Ok, so far nothing special. When A is inside the square, we essentially get our first distance function based on planes/lines and when B is in the area where our first distance function is inaccurate, it gets zeroed out and we get the second distance function (the length). The trick lies in the other two cases C and D. Let's work them out.
d(C) = min(max(-1, 1), 0)
+ length(max([-1, 1], 0))
d(C) = 0 + length[0, 1]
---
d(D) = min(max(1, -1), 0)
+ length(max([-1, 1], 0))
d(D) = 0 + length[1, 0]
If you look back to the graph above, you'll note C' and D'. Those points have coordinates [0,1] and [1,0], respectively. This method uses the fact that both distance fields intersect on the axes - that D and D' lie at the same distance from the square.
If we zero out all negative components of a vector and take its length we will get the proper distance between the point and the square (for points outside of the square only). This is what max(d,0.0) does; it is a component-wise max operation. So long as the vector has at least one positive component, min(max(d.x,d.y),0.0) will resolve to 0, leaving us with only the second part of the equation. In the event that the point is inside the square, we want to return the first part of the equation (since it represents our first distance function). If all components of the vector are negative, it's easy to see our condition will be met.
This understanding should translate back into 3D seamlessly once you wrap your head around it. You may or may not have to draw a few graphs by hand to really "get" it - I know I did and would encourage you to do so if you're dissatisfied with my explanation.
Working this implementation into our own code, we get this:
float distanceToNearestSurface(vec3 p){
float s = 1.0;
vec3 d = abs(p) - vec3(s);
return min(max(d.x, max(d.y,d.z)), 0.0)
+ length(max(d,0.0));
}
And there you have it.
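The 2D breakdown from the excerpt is easy to check numerically; this throwaway Python snippet evaluates the reduced formula at the four sample vectors (in the excerpt's notation, where the vector is already abs(p) - b):

import math

def sd2(d):
    inside = min(max(d[0], d[1]), 0.0)
    outside = math.hypot(max(d[0], 0.0), max(d[1], 0.0))
    return inside + outside

for d in [(-1, -1), (1, 1), (-1, 1), (1, -1)]:
    print(d, sd2(d))   # A -> -1, B -> sqrt(2), C -> 1, D -> 1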

Is there a graph-drawing tool that will allow me to constrain x, and automatically lay out y?

I am looking for a tool similar to graphviz that can render graphs, but that will allow me to constrain just the x coordinate of each node. Then, the tool will automatically choose y coordinates to make the graph look neat.
Basically, I want to make a timeline.
Language / platform / rendering medium are not very important.
If you want a neat-looking graph, a force-directed algorithm is going to be your best bet. One of the best ones is SFDP (developed by AT&T, included in graphviz), though I can't seem to find pseudocode or an easy implementation. I don't think there are any algorithms this specialized. Thankfully, it's easy to code your own. I'll present some pseudocode mostly lifted from Wikipedia, but with suitably one-dimensional modifications. I'll assume you have n vertices and the vector of x-positions is x, subscripted by x.i.
set all vertex velocities to (0,0)
set all vertex positions to (x.i, random)
while (KE > epsilon)
KE = 0
for each vertex v
force = (0,0)
for each vertex u != v
force = force + (0, coulomb(u, v).y)
if u is incident to v
force = force + (0, hooke(u, v).y)
v.velocity = (v.velocity + timestep * force) * damping
v.position = v.position + timestep * v.velocity
KE = KE + |v.velocity| ^ 2
Here the .y denotes taking the y-component of the force. This ensures that the x-components of the positions of the vertices never change from what you set them to be. The epsilon parameter is to be set by you, and should be small compared to what you expect KE (the kinetic energy) to be. Also, |v| denotes the magnitude of the vector v (all computations above are on 2-vectors, except the KE). Note I set the mass of all the nodes to be 1, but you can change that if you want.
The hooke and coulomb functions calculate the respective forces between nodes: the spring force is linear in the distance between vertices and the repulsive force falls off with the square of the distance, so there is a guaranteed equilibrium. Both should return a vector along the line between the two nodes, something like
def hooke(u, v)
    # spring force on v from its neighbour u; pulls v toward u, linear in distance
    return -k * (v.position - u.position)
def coulomb(u, v)
    # repulsion on v from u; falls off with the square of the distance
    d = v.position - u.position
    return C * d / |d|^3
where again most computations are in vector form. C and k are constants; experiment to get the graph you want. This usually isn't necessary, because in two dimensions the scaling factors pretty much just expand or contract the whole graph, but here the x-distances are fixed, so to get a good-looking graph you will have to tune the values a bit.
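A compact Python sketch of this 1-D variant (the constants, iteration cap and random seeding are illustrative; only the y coordinates ever change):

import random

def layout_y(x, edges, k=0.05, C=1.0, timestep=0.05, damping=0.9, eps=1e-4):
    # x: fixed x positions, edges: list of (i, j) index pairs.
    n = len(x)
    y = [random.uniform(0.0, 1.0) for _ in range(n)]
    vel = [0.0] * n
    neighbours = [set() for _ in range(n)]
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    for _ in range(10000):                    # iteration cap instead of a pure KE test
        ke = 0.0
        for v in range(n):
            force = 0.0
            for u in range(n):
                if u == v:
                    continue
                dx, dy = x[v] - x[u], y[v] - y[u]
                dist2 = dx*dx + dy*dy + 1e-9
                force += C * dy / dist2**1.5  # repulsion, y-component only
                if u in neighbours[v]:
                    force += -k * dy          # spring along each edge, y-component only
            vel[v] = (vel[v] + timestep * force) * damping
            y[v] += timestep * vel[v]
            ke += vel[v] * vel[v]
        if ke < eps:
            break
    return y

For a tiny timeline, layout_y([0, 1, 2, 5], [(0, 1), (1, 2), (1, 3)]) keeps the given x values and spreads the y values so the nodes don't pile up.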

Projective transformation

Given two image buffers (assume it's an array of ints of size width * height, with each element a color value), how can I map an area defined by a quadrilateral from one image buffer into the other (always square) image buffer? I'm led to understand this is called "projective transformation".
I'm also looking for a general (not language- or library-specific) way of doing this, such that it could be reasonably applied in any language without relying on "magic function X that does all the work for me".
An example: I've written a short program in Java using the Processing library (processing.org) that captures video from a camera. During an initial "calibrating" step, the captured video is output directly into a window. The user then clicks on four points to define an area of the video that will be transformed, then mapped into the square window during subsequent operation of the program. If the user were to click on the four points defining the corners of a door visible at an angle in the camera's output, then this transformation would cause the subsequent video to map the transformed image of the door to the entire area of the window, albeit somewhat distorted.
Using linear algebra is much easier than all that geometry! Plus you won't need to use sine, cosine, etc, so you can store each number as a rational fraction and get the exact numerical result if you need it.
What you want is a mapping from your old (x,y) co-ordinates to your new (x',y') co-ordinates. You can do it with matrices. You need to find the 2-by-4 projection matrix P such that P times the old coordinates equals the new co-ordinates. We'll assume that you're mapping lines to lines (not, for instance, straight lines to parabolas). Because you have a projection (parallel lines don't stay parallel) and translation (sliding), you need a factor of (xy) and (1), too. Drawn as matrices:
            [x  ]
[a b c d] * [y  ] = [x']
[e f g h]   [x*y]   [y']
            [1  ]
You need to know a through h so solve these equations:
a*x_0 + b*y_0 + c*x_0*y_0 + d = i_0
a*x_1 + b*y_1 + c*x_1*y_1 + d = i_1
a*x_2 + b*y_2 + c*x_2*y_2 + d = i_2
a*x_3 + b*y_3 + c*x_3*y_3 + d = i_3
e*x_0 + f*y_0 + g*x_0*y_0 + h = j_0
e*x_1 + f*y_1 + g*x_1*y_1 + h = j_1
e*x_2 + f*y_2 + g*x_2*y_2 + h = j_2
e*x_3 + f*y_3 + g*x_3*y_3 + h = j_3
Again, you can use linear algebra:
[x_0 y_0 x_0*y_0 1] [a e] [i_0 j_0]
[x_1 y_1 x_1*y_1 1] * [b f] = [i_1 j_1]
[x_2 y_2 x_2*y_2 1] [c g] [i_2 j_2]
[x_3 y_3 x_3*y_3 1] [d h] [i_3 j_3]
Plug in your corners for x_n,y_n,i_n,j_n. (Corners work best because they are far apart to decrease the error if you're picking the points from, say, user-clicks.) Take the inverse of the 4x4 matrix and multiply it by the right side of the equation. The transpose of that matrix is P. You should be able to find functions to compute a matrix inverse and multiply online.
Where you'll probably have bugs:
When computing, remember to check for division by zero. That's a sign that your matrix is not invertible. That might happen if you try to map one (x,y) co-ordinate to two different points.
If you write your own matrix math, remember that matrices are usually specified row,column (vertical,horizontal) and screen graphics are x,y (horizontal,vertical). You're bound to get something wrong the first time.
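A sketch of the whole procedure in Python with numpy (src and dst are placeholders for the four clicked source corners and the four target corners of the square):

import numpy as np

def fit_map(src, dst):
    # Solve for the eight coefficients a..h of the [x, y, x*y, 1] mapping above.
    A = np.array([[x, y, x*y, 1.0] for x, y in src])   # the 4x4 matrix
    B = np.array(dst, dtype=float)                      # right-hand side, 4x2
    return np.linalg.solve(A, B)                        # columns are [a,b,c,d] and [e,f,g,h]

def apply_map(coeffs, x, y):
    return np.array([x, y, x*y, 1.0]) @ coeffs          # -> (x', y')

np.linalg.solve raises an error when the matrix is singular, which covers the division-by-zero warning above. apply_map(fit_map(src, dst), x, y) then maps any source point into the square under this four-corner fit.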
EDIT
The assumption below of the invariance of angle ratios is incorrect. Projective transformations instead preserve cross-ratios and incidence. A solution then is:
Find the point C' at the intersection of the lines defined by the segments AD and CP.
Find the point B' at the intersection of the lines defined by the segments AD and BP.
Determine the cross-ratio of B'DAC', i.e. r = (B'A * DC') / (DA * B'C').
Construct the projected line F'HEG'. The cross-ratio of these points is equal to r, i.e. r = (F'E * HG') / (HE * F'G').
F'F and G'G will intersect at the projected point Q so equating the cross-ratios and knowing the length of the side of the square you can determine the position of Q with some arithmetic gymnastics.
Hmmmm....I'll take a stab at this one. This solution relies on the assumption that ratios of angles are preserved in the transformation. See the image for guidance (sorry for the poor image quality...it's REALLY late). The algorithm only provides the mapping of a point in the quadrilateral to a point in the square. You would still need to implement dealing with multiple quad points being mapped to the same square point.
Let ABCD be a quadrilateral where A is the top-left vertex, B is the top-right vertex, C is the bottom-right vertex and D is the bottom-left vertex. The pair (xA, yA) represent the x and y coordinates of the vertex A. We are mapping points in this quadrilateral to the square EFGH whose side has length equal to m.
Compute the lengths AD, CD, AC, BD and BC:
AD = sqrt((xA-xD)^2 + (yA-yD)^2)
CD = sqrt((xC-xD)^2 + (yC-yD)^2)
AC = sqrt((xA-xC)^2 + (yA-yC)^2)
BD = sqrt((xB-xD)^2 + (yB-yD)^2)
BC = sqrt((xB-xC)^2 + (yB-yC)^2)
Let thetaD be the angle at the vertex D and thetaC be the angle at the vertex C. Compute these angles using the cosine law:
thetaD = arccos((AD^2 + CD^2 - AC^2) / (2*AD*CD))
thetaC = arccos((BC^2 + CD^2 - BD^2) / (2*BC*CD))
We map each point P in the quadrilateral to a point Q in the square. For each point P in the quadrilateral, do the following:
Find the distance DP:
DP = sqrt((xP-xD)^2 + (yP-yD)^2)
Find the distance CP:
CP = sqrt((xP-xC)^2 + (yP-yC)^2)
Find the angle thetaP1 between CD and DP:
thetaP1 = arccos((DP^2 + CD^2 - CP^2) / (2*DP*CD))
Find the angle thetaP2 between CD and CP:
thetaP2 = arccos((CP^2 + CD^2 - DP^2) / (2*CP*CD))
The ratio of thetaP1 to thetaD should be the ratio of thetaQ1 to 90. Therefore, calculate thetaQ1:
thetaQ1 = thetaP1 * 90 / thetaD
Similarly, calculate thetaQ2:
thetaQ2 = thetaP2 * 90 / thetaC
Find the distance HQ using the law of sines (angles here in degrees):
HQ = m * sin(thetaQ2) / sin(180 - thetaQ1 - thetaQ2)
Finally, the x and y position of Q relative to the bottom-left corner of EFGH is:
x = HQ * cos(thetaQ1)
y = HQ * sin(thetaQ1)
You would have to keep track of how many colour values get mapped to each point in the square so that you can calculate an average colour for each of those points.
I think what you're after is a planar homography, have a look at these lecture notes:
http://www.cs.utoronto.ca/~strider/vis-notes/tutHomography04.pdf
If you scroll down to the end you'll see an example of just what you're describing. I expect there's a function in the Intel OpenCV library which will do just this.
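If OpenCV is available, the homography can be fitted and applied directly. A minimal Python sketch (the corner values and frame below are placeholders standing in for the clicked points and a captured video frame):

import numpy as np
import cv2

# Four clicked corners in the camera image, and the corners of the output square.
src = np.float32([[120, 80], [520, 95], [540, 400], [100, 380]])
dst = np.float32([[0, 0], [480, 0], [480, 480], [0, 480]])

H = cv2.getPerspectiveTransform(src, dst)          # 3x3 homography from 4 point pairs
frame = np.zeros((480, 640, 3), np.uint8)          # stand-in for a captured frame
warped = cv2.warpPerspective(frame, H, (480, 480)) # resampled square output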
There is a C++ project on CodeProject that includes source for projective transformations of bitmaps. The maths are on Wikipedia here. Note that, so far as I know, a projective transformation will not map any arbitrary quadrilateral onto another, but will do so for triangles; you may also want to look up skewing transforms.
If this transformation has to look good (as opposed to the way a bitmap looks if you resize it in Paint), you can't just create a formula that maps destination pixels to source pixels. Values in the destination buffer have to be based on a complex averaging of nearby source pixels or else the results will be highly pixelated.
So unless you want to get into some complex coding, use someone else's magic function, as smacl and Ian have suggested.
Here's how I would do it in principle:
map the origin of A to the origin of B via a translation vector t.
take the unit vectors of A, (1,0) and (0,1), and calculate how they would be mapped onto the unit vectors of B.
this gives you a transformation matrix M so that every vector a in A maps to M a + t
invert the matrix and negate the translation vector, so that for every vector b in B you have the inverse mapping b -> M^-1 (b - t)
once you have this transformation, for each point in the target area in B, find the corresponding point in A and copy it.
The advantage of this mapping is that you only calculate the points you need, i.e. you loop on the target points, not the source points. It was a widely used technique in the "demo coding" scene a few years back.
