I'm reading Shaders for Game Programming and Artists. In Chapter 13, "Building Materials from Scratch", the author introduces some rendering techniques that simulate complex materials such as marble or wood using Perlin noise. But I'm puzzled by the wood rendering.
To simulate the wood, we need a function that gives a circular value along a specific plane so that we can create the rings in the wood. This is what the author says: "take the dot product of two axes along a plane, creating the circular value on that plane"
Circle = dot(noisetxr.xy, noisetxr.xy);
noisetxr is a float3; it's the texture coordinate used to sample the noise texture. I can't understand why the dot product gives a circular value.
Here is the complete code (pixel shader in HLSL):
float persistance;
float4 wood_color; // a predefined value
sampler Texture0;  // noise texture
float4 ps_main(float3 txr: TEXCOORD0) : COLOR
{
    // Determine two sets of coordinates, one for the noise
    // and one for the wood rings
    float3 noisetxr = txr;
    txr = txr/8;
    // Combine 3 octaves of noise together.
    float final_noise = 0;
    for(int i=0;i<2;i++)
        final_noise += ((1.0/pow(persistance,i))*
                        ((tex3D(Texture0, txr*pow(2,i))*2)-1));
    // The wood is defined by a set of concentric rings in the XY
    // plane. Those rings are perturbed by the computed noise.
    final_noise = abs(final_noise);
    float grain = cos(dot(noisetxr.xy,noisetxr.xy) + final_noise*4); // what is this ??
    return wood_color - pow(grain,8)/2; // raising the cosine to a higher power
}
I know that raising the cosine function to a higher power creates sharper rings, but what does the dot product mean? Why can it create a circular value?
The dot product of a vector with itself is simply the squared length of the vector. So for each point in the xy-plane, dot(noisetxr.xy, noisetxr.xy) returns the squared distance of the point from the origin. You then apply a cosine function to this distance, so all points on the plane that have the same distance to the origin produce the same output value => a circle of equal values around the origin.
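A quick way to see this is to evaluate cos(x*x + y*y) on a small grid: every point at the same distance from the origin gets the same value, so the pattern forms concentric rings. This isn't the book's code, just a throwaway C++ sketch to visualize the idea:

#include <cmath>
#include <cstdio>

int main() {
    // Sample cos(dot(p, p)) on a small grid around the origin.
    // Points at equal distance from the origin get equal values,
    // which is exactly what produces the concentric rings.
    for (float y = -3.0f; y <= 3.0f; y += 0.5f) {
        for (float x = -3.0f; x <= 3.0f; x += 0.5f) {
            float ring = std::cos(x * x + y * y); // same as cos(dot((x,y),(x,y)))
            std::putchar(ring > 0.0f ? '#' : '.'); // crude ASCII visualization
        }
        std::putchar('\n');
    }
    return 0;
}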
Question:
I need to calculate the intersection shape (purple) of a plane defined by Ax + By + Cz + D = 0 and a frustum defined by 4 rays emitted from the corners of a rectangle (red arrows). The result should be a quadrilateral (4 points), and an important requirement is that the result shape must be in the plane's local space. The plane is created with a transformation matrix T (the plane's normal is vec3(0, 0, 1) in T's space).
Explanation:
This is the perspective form of projecting my rectangle into another space (transformation / matrix / node). I am able to calculate the intersection shape of any rectangle without perspective rays (all rays parallel) using a plane-line intersection algorithm (pseudocode):
Definitions:
// Plane defined by normal (A, B, C) and D
struct Plane { vec3 n; float d; };
// Line defined by 2 points
struct Line { vec3 a, b; };
Intersection:
vec3 PlaneLineIntersection(Plane plane, Line line) {
    vec3 ba = normalize(line.b - line.a);
    float dotA = dot(plane.n, line.a);
    float dotBA = dot(plane.n, ba);
    float t = (plane.d - dotA) / dotBA;
    return line.a + ba * t;
}
The perspective form comes with some problems, because some of the rays could be parallel to the plane (the intersection point is at infinity) or the final shape is self-intersecting. It works in some cases, but it's not enough for an arbitrary transformation. How do I get the correct intersection part of the plane with perspective?
Simply put, I need to get the visible part of an arbitrary plane as seen by an arbitrary perspective "camera".
Thank you for suggestions.
Intersection between a plane (one Ax+By+Cz+D=0 equation) and a line (two plane equations) is a matter of solving the 3x3 system for x, y, z.
Doing all calculations in T-space (the origin is at the apex of the pyramid) is easier, as some of the A, B, C coefficients are 0.
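As a concrete illustration of that 3x3 solve: treating the line as the intersection of two planes gives three equations of the form A_i x + B_i y + C_i z = -D_i, which a small Cramer's-rule solver handles. The names and types below are mine, not from the answer:

#include <cmath>

// Determinant of a 3x3 matrix (row-major).
double det3(const double m[3][3])
{
    return m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
         - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
         + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
}

// Solve the 3x3 system M*p = rhs with Cramer's rule.
// Each row of M holds the (A,B,C) of one plane, rhs holds the matching -D,
// so p comes out as the point where the three planes meet.
// Returns false if the determinant is ~0 (no unique intersection point).
bool intersect3Planes(const double M[3][3], const double rhs[3], double p[3])
{
    double d = det3(M);
    if (std::fabs(d) < 1e-12) return false;
    for (int col = 0; col < 3; col++)
    {
        double Mc[3][3];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                Mc[r][c] = (c == col) ? rhs[r] : M[r][c];
        p[col] = det3(Mc) / d;
    }
    return true;
}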
What I don't know if you are aware of is that perspective is a kind of projection that distorts the z ("depth", far from the origin). So if the plane that contains the rectangle is not perpendicular to the axis of the frustum (z-axis), then it's not a rectangle when projected onto the plane, but a trapezoid.
Anyhow, using the projection perspective matrix you can get projected coordinates for the four rectangle corners.
To tell whether a point is on one side of a plane or the other, just plug the point's coordinates into the plane equation and check the sign, as shown here
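A minimal sketch of that sign test (the helper name is mine):

// Plug the point into Ax + By + Cz + D:
// result > 0  -> point is on the side the normal (A,B,C) points to
// result < 0  -> point is on the other side
// result ~ 0  -> point lies on the plane
double sideOfPlane(double A, double B, double C, double D,
                   double x, double y, double z)
{
    return A*x + B*y + C*z + D;
}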
Your question seems inherently mathematical, so excuse my mathematical solution on StackOverflow. If your four arrows emit from a single point and the formed side planes share a common angle, then you are looking for a solution to the frustum projection problem. Your requirements simplify the problem quite a bit because you define the plane with a normal, not two bounded vectors, thus if you agree to the definitions...
then I can provide you with the mathematical solution here (Internet Explorer .mht file, possibly requiring modern Windows OS). If you are thinking about an actual implementation then I can only direct you to a very similar frustum projection implementation that I have implemented/uploaded here (Lua): https://github.com/quiret/mta_lua_3d_math
The roadmap for the implementation could be as follows: creation of condition container classes for all sub-problems (0 < k1*a1 + k2, etc.) plus the and/or chains, writing algorithms for the comparisons across and-chains as well as normal-form creation, and optimization of object construction/memory allocation. Since each check for frustum intersection requires just a fixed amount of algebraic objects, you can implement an efficient cache.
I'm trying to find the best way to calculate this. On a 2D plane I have fixed points, all with an instantaneous measurement value. The coordinates of these points are known. I want to predict the value of a movable point between these fixed points. The movable point's coordinates will be known, so the distance between the points is known as well.
This could be comparable to temperature readings or elevation in topography. In this case I want to predict the ionospheric TEC of the mobile point from the fixed-point measurements. The fixed-point measurements are smoothed over time; however, I do not want to have to store previous values of the mobile point estimate in RAM.
Would some sort of gradient function be the way to go here?
This is the same algorithm as for interpolating the height of a point inside a triangle.
In your case you don't have z values for heights but some other float value for each triangle vertex; it's the same concept, still 3D points.
Where you have 3D triangle points p, q, r and a test point pt, the pseudocode for the math above is something like this:
Vector3 v1 = q - p;
Vector3 v2 = r - p;
Vector3 n = v1.CrossProduct(v2);
if (n.z != 0)
    return ((n.x * (pt.x - p.x) + n.y * (pt.y - p.y)) / -n.z) + p.z;
As you indicate in your comment to #Phpdevpad, you do have 3 fixed points so this will work.
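If it helps, here is a self-contained C++ version of the same idea (my own types and names, not the poster's code): the value stored in the third component is interpolated by intersecting the vertical line through the query point with the plane of the triangle.

#include <cmath>

struct Vec3 { double x, y, z; };

// Interpolate the value stored in p.z, q.z, r.z at the 2D location (px, py)
// by intersecting the vertical line through (px, py) with the plane of the triangle.
// Returns false if the triangle is degenerate when projected onto the xy plane.
bool interpolateOnTriangle(Vec3 p, Vec3 q, Vec3 r, double px, double py, double& value)
{
    // Plane normal n = (q - p) x (r - p)
    double v1x = q.x - p.x, v1y = q.y - p.y, v1z = q.z - p.z;
    double v2x = r.x - p.x, v2y = r.y - p.y, v2z = r.z - p.z;
    double nx = v1y * v2z - v1z * v2y;
    double ny = v1z * v2x - v1x * v2z;
    double nz = v1x * v2y - v1y * v2x;
    if (std::fabs(nz) < 1e-12) return false; // triangle seen edge-on in xy
    value = (nx * (px - p.x) + ny * (py - p.y)) / -nz + p.z;
    return true;
}

Calling it with the three fixed measurement points and the mobile point's (x, y) gives the interpolated estimate at that location.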
You can try contour plots, especially contour lines. Simply use a Delaunay triangulation of the points and a linear interpolation along the edges. You can try my PHP implementation https://contourplot.codeplex.com for geographic maps. Another algorithm is the CONREC algorithm from Paul Bourke.
I have a problem with creating 3D cylinders (without OpenGL). I understand that a mesh is used to create the cylinder surface and triangle fans are used to create the top and bottom caps. I have already implemented the mesh but not the planar triangle fans, so currently my 3D object looks like a cylinder without the bottom and top cap.
I believe this is what I need to do in order to create the bottom and top caps. First, find the center point of the cylinder mesh. Second, find the vertices of the mesh. Third, using the center point and the 2 vertex points, create the triangle. Fourth, repeat the steps until a planar circle is created.
Are the above steps a sufficient way of creating the caps or is there a better way? And how do I find the vertices of the mesh so I can create the triangle fans?
First some notes:
you did not specify your platform
gfx interface
language
not enough info about your cylinder either
is it axis aligned?
what coordinate system (Cartesian/orthogonal/orthonormal)?
need additional dimensions like color or texture coordinates?
So I can provide just generic info then
Axis aligned cylinder
choose the granularity N
number of points along your cap's circle
usually 20-36 is OK but if you need higher precision then sometimes you need even 1000 points or more
all depends on the purpose, zoom, angle and distance of view ...
and performance issues
for now let N=32
you need BR (boundary representation)
you did not specify gfx interface but your text implies BR model (surface polygons)
also no pivot point position so I will choose middle point of cylinder to be (0,0,0)
z axis will be the height of cylinder
and the caps will be coplanar with xy plane
so for a cylinder a set of 2 rings (caps) is enough
so the points can be defined in C++ like this:
const int N=32;              // mesh complexity (points per cap ring)
double p0[N][3],p1[N][3];    // rings (needs <cmath> for M_PI, cos, sin)
double a,da,c,s,r,h2;        // some temp variables
int i;
r =50.0;                     // cylinder radius
h2=100.0*0.5;                // half height of cylinder
da=2.0*M_PI/double(N-1);     // full circle so the ring closes (p0[0]==p0[N-1])
for (a=0.0,i=0;i<N;i++,a+=da)
{
    c=r*cos(a);
    s=r*sin(a);
    p0[i][0]=c;
    p0[i][1]=s;
    p0[i][2]=+h2;
    p1[i][0]=c;
    p1[i][1]=s;
    p1[i][2]=-h2;
}
the ring points form a closed loop (p0[0]==p0[N-1])
so you do not need additional lines to handle it...
now how to draw
I can't write the code for an unknown API, but
'mesh' is something like QUAD_STRIP I assume
so just add points to it in this order:
QUAD_STRIP = { p0[0],p1[0],p0[1],p1[1],...p0[N-1],p1[N-1] };
if you have inverse normal problem then swap p0/p1
now for the fans
you do not need the middle point (unless you have interpolation aliasing issues)
so similar:
TRIANGLE_FAN0 = { p0[0],p0[1],...p0[N-1] };
TRIANGLE_FAN1 = { p1[0],p1[1],...p1[N-1] };
if you still want the middle point then:
TRIANGLE_FAN0 = { (0.0,0.0,+h2),p0[0],p0[1],...p0[N-1] };
TRIANGLE_FAN1 = { (0.0,0.0,-h2),p1[0],p1[1],...p1[N-1] };
if you have inverse normal problem then reverse the points order (middle point stays where it is)
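as an illustration only (you did not name an API), this is how the strip and fans above would be fed to legacy immediate-mode OpenGL; for any other API just emit the same point order:

// body: one quad strip around the side of the cylinder
glBegin(GL_QUAD_STRIP);
for (int i=0;i<N;i++) { glVertex3dv(p0[i]); glVertex3dv(p1[i]); }
glEnd();
// top cap (variant with the middle point)
glBegin(GL_TRIANGLE_FAN);
glVertex3d(0.0,0.0,+h2);
for (int i=0;i<N;i++) glVertex3dv(p0[i]);
glEnd();
// bottom cap (reversed order so its normal faces the other way)
glBegin(GL_TRIANGLE_FAN);
glVertex3d(0.0,0.0,-h2);
for (int i=N-1;i>=0;i--) glVertex3dv(p1[i]);
glEnd();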
Not axis aligned cylinder?
just use transform matrix on your p0[],p1[] point lists to translate/rotate to desired position
the rest stays the same
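a tiny sketch of that (my own helper, not part of the original answer): rotate one ring around the x axis and translate it, calling it once for p0 and once for p1 before drawing:

#include <cmath>

// Rotate a ring of points around the x axis by ang (radians),
// then translate it by (tx,ty,tz). pts is one of the p0/p1 arrays above.
void transformRing(double pts[][3], int n, double ang, double tx, double ty, double tz)
{
    double ca=std::cos(ang), sa=std::sin(ang);
    for (int i=0;i<n;i++)
    {
        double y=pts[i][1], z=pts[i][2];
        pts[i][0]+=tx;
        pts[i][1]=ca*y-sa*z+ty;
        pts[i][2]=sa*y+ca*z+tz;
    }
}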
We are programming a 2D game in XNA. We have polygons which define our level elements. They are triangulated so that we can easily render them. Now I would like to write a shader which renders the polygons as outlined textures. So in the middle of the polygon one would see the texture, and on the border it should somehow glow.
My first idea was to walk along the polygon and draw a quad on each line segment with a specific texture. This works but looks strange for small corners where the textures are forced to overlap.
My second approach was to mark all border vertices with some kind of normal pointing out of the polygon. Passing this to the shader would interpolate the normals across the edges of the triangulation, and I could use the interpolated "normal" as a value for shading. I haven't been able to test it yet, but would that work? A special property of the triangulation is that all vertices are on the border, so there are no vertices inside the polygon.
Do you guys have a better idea for what I want to achieve?
Here is a picture of what it looks like right now with the quad solution:
You could render your object twice: a bigger, stretched version behind the first one. Not that ideal, since a complex object cannot be stretched uniformly to create a border.
If you have access to your screen buffer you can render your glow components into a rendertarget and align a full-screen quad to your viewport and add a fullscreen 2D silhouette filter to it.
This way you gain perfect control over the edge by defining its radius, colour, blur. With additional output values such as the RGB values from the object render pass you can even have different advanced glows.
I think RenderMonkey had some examples in its shader editor. It's definitely a good starting point to work with and try things out.
Probably you want to calculate a new border vertex list (easy to fill, for example, with a triangle strip together with the originals). If you use a constant border width and a convex polygon it's just:
B_new = B - (BtoA.normalised() + BtoC.normalised()).normalised() * width;
If not, then it gets more complicated; here is my old but pretty universal solution:
//Helper function. To work right, v1 must come before v2 in the vertex list and the vertices must be ordered (anti?)clockwise!
float vectorAngle(Vector2 v1, Vector2 v2){
float alfa;
if (!v1.isNormalised())
v1.normalise();
if (!v2.isNormalised())
v2.normalise();
alfa = v1.dotProduct(v2);
float help = v1.x;
v1.x = v1.y;
v1.y = -help;
float angle = Math::ACos(alfa);
if (v1.dotProduct(v2) < 0){
angle = -angle;
}
return angle;
}
//Normally don't use this directly!
Vector2 calculateBorderPoint(Vector2 vec1, Vector2 vec2, float width1, float width2){
    vec1.normalise();
    vec2.normalise();
    float cos = vec1.dotProduct(vec2); //Actually the cosine of the angle between the two (normalised) vectors (remember math lessons)
    float csc = 1.0f / Math::sqrt(1.0f-cos*cos); //Cosecant of the angle. This returns NaN if the angle is 180!!!
    //And the rest of the magic
    Vector2 difference = (vec1 * csc * width2) + (vec2 * csc * width1);
    //If you use just convex polygons (all angles < 180, = 180 not allowed in this case) just return the value; if not, you need some more magic.
    //Both of the next things need ordered vertex lists!
    //The output vector always points to the inside of the angle, so:
    if (Math::vectorAngle(vec1, vec2) > 180.0f) //Note that this check can only know the angle is over 180 if you use ordered vertices (all vertices always go (anti?)clockwise!)
        difference = -difference;
    //OK, and if the angle was 180...
    //Note that this can fix your situation ONLY if you use ordered vertices (all vertices always go (anti?)clockwise!)
    if (difference.isNaN()){
        float width = (width1 + width2) / 2.0f; //If the angle is 180 and the border widths are different, you cannot get a perfect answer ;)
        difference = vec1 * width;
        //Just turn the vector -90 degrees
        float swapHelp = difference.y;
        difference.y = -difference.x;
        difference.x = swapHelp;
    }
    //If you don't want the output to be inside the old polygon but outside, just: "return -difference;"
    return difference;
}
//Use this =)
Vector2 calculateBorderPoint(Vector2 A, Vector2 B, Vector2 C, float widthA, float widthB){
return B + calculateBorderPoint(A-B, C-B, widthA, widthB);
}
Your second approach should be possible...
mark the outer vertices (on the border) with 1 and the inner vertices (inside) with 0.
in the pixel shader you can choose to highlight those whose value is greater than 0.9f or 0.8f.
it should work.
I am trying to design an ASIC graphics processor. I have done extensive research on the topic, but I am still kind of fuzzy on how to translate and rotate points. I am using orthographic projection to rasterize the transformed points.
I have been using the following lecture regarding the matrix multiplication (homogeneous coordinates):
http://www.cs.kent.edu/~zhao/gpu/lectures/Transformation.pdf
Could someone please explain this a little more in depth to me? I am still somewhat shaky on the algorithm. I am passing a camera position (x,y,z) and a camera vector (x,y,z) representing the camera angle, along with a point (x,y,z). What should go where within the matrices to transform the point to the new appropriate location?
Here's the complete transformation algorithm in pseudocode:
void project(Vec3d objPos, Matrix4d modelViewMatrix,
             Matrix4d projMatrix, Rect viewport, Vec3d& winCoords)
{
    Vec4d in(objPos.x, objPos.y, objPos.z, 1.0);
    in = projMatrix * modelViewMatrix * in;
    in /= in.w; // perspective division
    // "in" is now in normalized device coordinates, which are in the range [-1, 1].
    // Map coordinates to range [0, 1]
    in.x = in.x / 2 + 0.5;
    in.y = in.y / 2 + 0.5;
    in.z = in.z / 2 + 0.5;
    // Map to viewport
    winCoords.x = in.x * viewport.w + viewport.x;
    winCoords.y = in.y * viewport.h + viewport.y;
    winCoords.z = in.z;
}
Then rasterize using winCoords.x and winCoords.y.
For an explanation of the stages of this algorithm, see question 9.011 from the OpenGL FAQ.
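Since you said you are using orthographic projection, projMatrix can be a simple box mapping. A hedged sketch (it mirrors glOrtho; row-major m[row][col] with column vectors, so it plugs into project() above; the helper name is mine):

// Maps x in [l,r], y in [b,t] and depth in [n,f] to the [-1,1] cube.
void orthoMatrix(double m[4][4],
                 double l, double r, double b, double t, double n, double f)
{
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            m[i][j] = (i == j) ? 1.0 : 0.0;
    m[0][0] =  2.0 / (r - l);  m[0][3] = -(r + l) / (r - l);
    m[1][1] =  2.0 / (t - b);  m[1][3] = -(t + b) / (t - b);
    m[2][2] = -2.0 / (f - n);  m[2][3] = -(f + n) / (f - n);
}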
For the first few years they were for sale, mass-market graphics processors for PC didn't translate or rotate points at all. Are you required to implement this feature? If not, you may wish to let software do it. Depending on your circumstances, software may be the more sensible route.
If you are required to implement the feature, I'll tell you how they did it in the early days.
The hardware has sixteen floating point registers that represent a 4x4 matrix. The application developer loads these registers with the ModelViewProjection matrix just before rendering a mesh of triangles. The ModelViewProjection matrix is:
Model * View * Projection
Where "Model" is a matrix that brings vertices from "model" coordinates into "world" coordinates, "View" is a matrix that brings vertices from "world" coordinates into "camera" coordinates, and "Projection" is a matrix that brings vertices from "camera" coordinates to "screen" coordinates. Together they bring vertices from "model" coordinates - coordinates relative to the 3D model they belong to - into "screen" coordinates, where you intend to rasterize them as triangles.
Those are three different matrices, but they're multiplied together and the 4x4 result is written to hardware registers.
When a buffer of vertices is to be rendered as triangles, the hardware reads in vertices as [x,y,z] vectors from memory, and treats them as if they were [x,y,z,w] where w is always 1. It then multiplies each vector by the 4x4 ModelViewProjection matrix to get [x',y',z',w']. If there is perspective (you said there wasn't) then we divide by w' to get perspective [x'/w',y'/w',z'/w',w'/w'].
Then triangles are rasterized with the newly computed vertices. This enables a model's vertices to be in read-only memory if desired, though the model and camera may be in motion.
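To make the "what goes where" part concrete: given the camera position and the camera direction vector you mentioned, the View matrix is usually built as a look-at matrix. A hedged sketch (column-vector convention, so a point is transformed as View * p; the types and helper names are mine):

#include <cmath>

struct V3 { double x, y, z; };

static V3 cross(V3 a, V3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static V3 norm(V3 a) { double l = std::sqrt(dot(a, a)); return { a.x/l, a.y/l, a.z/l }; }

// Build a View matrix (row-major m[row][col]) from the camera position "eye",
// the direction the camera looks along "dir" and a world "up" vector.
// It rotates the world so the camera looks down -z, then translates by -eye.
void lookAt(double m[4][4], V3 eye, V3 dir, V3 up)
{
    V3 f = norm(dir);           // forward
    V3 s = norm(cross(f, up));  // right
    V3 u = cross(s, f);         // corrected up
    double r[4][4] = {
        {  s.x,  s.y,  s.z, -dot(s, eye) },
        {  u.x,  u.y,  u.z, -dot(u, eye) },
        { -f.x, -f.y, -f.z,  dot(f, eye) },
        {  0.0,  0.0,  0.0,          1.0 }
    };
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            m[i][j] = r[i][j];
}

The Model matrix places the object (translation/rotation/scale), and the Projection matrix (orthographic in your case) maps the camera-space volume to the unit cube; multiplied together they form the single 4x4 matrix loaded into the registers described above.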