About image-space derivatives of the barycentrics - graphics

I found some geometry shader code that calculates the derivatives of the barycentrics with respect to screen-space coordinates (dudX, dudY, dvdX, dvdY).
Here is the code:
void main()
{
    // Plane equations for bary differentials.
    float w0 = gl_in[0].gl_Position.w;
    float w1 = gl_in[1].gl_Position.w;
    float w2 = gl_in[2].gl_Position.w;
    vec2 p0 = gl_in[0].gl_Position.xy / w0;
    vec2 p1 = gl_in[1].gl_Position.xy / w1;
    vec2 p2 = gl_in[2].gl_Position.xy / w2;
    vec2 e0 = p0 - p2;
    vec2 e1 = p1 - p2;
    float a = e0.x*e1.y - e0.y*e1.x;

    // Clamp area to an epsilon to avoid arbitrarily high bary differentials.
    float eps = 1e-6f;                                        // ~1 pixel in 1k x 1k image.
    float ca = (abs(a) >= eps) ? a : (a < 0.f) ? -eps : eps;  // Clamp with sign.
    float ia = 1.f / ca;                                      // Inverse area.

    vec2 ascl = ia * vp_scale;
    float dudx =  e1.y * ascl.x;
    float dudy = -e1.x * ascl.y;
    float dvdx = -e0.y * ascl.x;
    float dvdy =  e0.x * ascl.y;

    float duwdx = dudx / w0;
    float dvwdx = dvdx / w1;
    float duvdx = (dudx + dvdx) / w2;
    float duwdy = dudy / w0;
    float dvwdy = dvdy / w1;
    float duvdy = (dudy + dvdy) / w2;

    vec4 db0 = vec4(duvdx - dvwdx, duvdy - dvwdy, dvwdx, dvwdy);
    vec4 db1 = vec4(duwdx, duwdy, duvdx - duwdx, duvdy - duwdy);
    vec4 db2 = vec4(duwdx, duwdy, dvwdx, dvwdy);

    int layer_id = v_layer[0];
    int prim_id = gl_PrimitiveIDIn + v_offset[0];

    gl_Layer = layer_id;
    gl_PrimitiveID = prim_id;
    gl_Position = vec4(gl_in[0].gl_Position.x, gl_in[0].gl_Position.y, gl_in[0].gl_Position.z, gl_in[0].gl_Position.w);
    var_uvzw = vec4(1.f, 0.f, gl_in[0].gl_Position.z, gl_in[0].gl_Position.w);
    var_db = db0;
    EmitVertex();

    gl_Layer = layer_id;
    gl_PrimitiveID = prim_id;
    gl_Position = vec4(gl_in[1].gl_Position.x, gl_in[1].gl_Position.y, gl_in[1].gl_Position.z, gl_in[1].gl_Position.w);
    var_uvzw = vec4(0.f, 1.f, gl_in[1].gl_Position.z, gl_in[1].gl_Position.w);
    var_db = db1;
    EmitVertex();

    gl_Layer = layer_id;
    gl_PrimitiveID = prim_id;
    gl_Position = vec4(gl_in[2].gl_Position.x, gl_in[2].gl_Position.y, gl_in[2].gl_Position.z, gl_in[2].gl_Position.w);
    var_uvzw = vec4(0.f, 0.f, gl_in[2].gl_Position.z, gl_in[2].gl_Position.w);
    var_db = db2;
    EmitVertex();
}
db0, db1, and db2 are the output derivatives for the three vertices of the triangle.
vp_scale is a vec2 containing the (width, height) of the display viewport.
I can follow the code up to dudx, dudy, dvdx, dvdy.
The most confusing part for me is db0, db1, and db2. I also don't know what duvdx and duvdy represent.
I think it may be related to perspective correction in the rasterizer's interpolation of vertex attributes, but I can't find a good way to get to the answer.
Does anyone have an idea about it?
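For reference, here is my rough mental model of how such per-pixel derivatives get used downstream, chain-ruling them into attribute derivatives in a fragment shader. The var_db layout (du/dX, du/dY, dv/dX, dv/dY) and the attr0..attr2 names are my assumptions, not something taken from the original source:

// Hypothetical fragment-shader usage: turn barycentric screen derivatives into
// attribute screen derivatives via the chain rule. Vertex order matches
// var_uvzw above (u = 1 at vertex 0, v = 1 at vertex 1, both 0 at vertex 2).
in vec4 var_db;                    // assumed layout: (du/dX, du/dY, dv/dX, dv/dY)
uniform vec3 attr0, attr1, attr2;  // placeholder per-triangle attribute values

void main()
{
    // A(u, v) = attr2 + u * (attr0 - attr2) + v * (attr1 - attr2)
    vec3 dAdX = (attr0 - attr2) * var_db.x + (attr1 - attr2) * var_db.z;
    vec3 dAdY = (attr0 - attr2) * var_db.y + (attr1 - attr2) * var_db.w;
    // dAdX / dAdY could then drive e.g. texture LOD or filtering decisions.
}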

Related

Raytracer renders objects too large

I am following this course to learn computer graphics and write my first ray tracer.
I already have some visible results, but they seem to be too large.
The overall algorithm the course outlines is this:
Image Raytrace(Camera cam, Scene scene, int width, int height)
{
    Image image = new Image(width, height);
    for (int i = 0; i < height; i++)
        for (int j = 0; j < width; j++) {
            Ray ray = RayThruPixel(cam, i, j);
            Intersection hit = Intersect(ray, scene);
            image[i][j] = FindColor(hit);
        }
    return image;
}
I perform all calculations in camera space (where the camera is at (0, 0, 0)). Thus RayThruPixel returns a ray in camera coordinates, Intersect returns an intersection point also in camera coordinates, and the image pixel array is a direct mapping of the intersection results.
The image below is the rendering of a sphere at world coordinates (0, 0, -40000) with radius 0.15, with the camera at world coordinates (0, 0, 2) looking towards world coordinates (0, 0, 0). I would normally expect the sphere to be much smaller given its small radius and far-away Z coordinate.
The same thing happens when rendering triangles. In the image below I have two triangles that form a square, but it's far too zoomed in. The triangles have coordinates between -1 and 1, and the camera is looking from world coordinates (0, 0, 4).
This is what the square is expected to look like:
Here is the code snippet I use to determine the collision with the sphere. I'm not sure if I should divide the radius by the z coordinate here - without it, the circle is even larger:
Sphere* sphere = dynamic_cast<Sphere*>(object);
float t;
vec3 p0 = ray->origin;
vec3 p1 = ray->direction;
float a = glm::dot(p1, p1);
vec3 center2 = vec3(modelview * object->transform * glm::vec4(sphere->center, 1.0f)); // camera coords
float b = 2 * glm::dot(p1, (p0 - center2));
float radius = sphere->radius / center2.z;
float c = glm::dot((p0 - center2), (p0 - center2)) - radius * radius;
float D = b * b - 4 * a * c;
if (D > 0) {
    // two roots
    float sqrtD = glm::sqrt(D);
    float root1 = (-b + sqrtD) / (2 * a);
    float root2 = (-b - sqrtD) / (2 * a);
    if (root1 > 0 && root2 > 0) {
        t = glm::min(root1, root2);
        found = true;
    }
    else if (root2 < 0 && root1 >= 0) {
        t = root1;
        found = true;
    }
    else {
        // should not happen, implies that both roots are negative
    }
}
else if (D == 0) {
    // one root
    float root = -b / (2 * a);
    t = root;
    found = true;
}
else if (D < 0) {
    // no roots
    // continue;
}
if (found) {
    hitVector = p0 + p1 * t;
    hitNormal = glm::normalize(result->hitVector - center2);
}
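For comparison, my understanding of the textbook ray-sphere quadratic is that the radius is used as-is, in the same space as the center and the ray. A minimal sketch of that form, written in GLSL syntax for brevity (the GLM/C++ version is line-for-line the same):

// Standard ray-sphere intersection; returns the nearest positive t, or -1.0 on a miss.
float intersectSphere(vec3 origin, vec3 dir, vec3 center, float radius) {
    vec3 oc = origin - center;
    float a = dot(dir, dir);
    float b = 2.0 * dot(dir, oc);
    float c = dot(oc, oc) - radius * radius;   // radius used directly, no division by z
    float D = b * b - 4.0 * a * c;
    if (D < 0.0) return -1.0;                  // no real roots: the ray misses the sphere
    float sqrtD = sqrt(D);
    float t0 = (-b - sqrtD) / (2.0 * a);       // nearer root
    float t1 = (-b + sqrtD) / (2.0 * a);       // farther root
    if (t0 > 0.0) return t0;
    if (t1 > 0.0) return t1;                   // ray origin is inside the sphere
    return -1.0;                               // sphere is behind the ray
}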
Here I generate the ray going through the relevant pixel:
Ray* RayThruPixel(Camera* camera, int x, int y) {
    const vec3 a = eye - center;
    const vec3 b = up;
    const vec3 w = glm::normalize(a);
    const vec3 u = glm::normalize(glm::cross(b, w));
    const vec3 v = glm::cross(w, u);
    const float aspect = ((float)width) / height;
    float fovyrad = glm::radians(camera->fovy);
    const float fovx = 2 * atan(tan(fovyrad * 0.5) * aspect);
    const float alpha = tan(fovx * 0.5) * (x - (width * 0.5)) / (width * 0.5);
    const float beta = tan(fovyrad * 0.5) * ((height * 0.5) - y) / (height * 0.5);
    return new Ray(/* origin= */ vec3(modelview * vec4(eye, 1.0f)),
                   /* direction= */ glm::normalize(vec3(modelview * glm::normalize(vec4(alpha * u + beta * v - w, 1.0f)))));
}
And intersection with a triangle:
Triangle* triangle = dynamic_cast<Triangle*>(object);
// vertices in camera coords
vec3 vertex1 = vec3(modelview * object->transform * vec4(*vertices[triangle->index1], 1.0f));
vec3 vertex2 = vec3(modelview * object->transform * vec4(*vertices[triangle->index2], 1.0f));
vec3 vertex3 = vec3(modelview * object->transform * vec4(*vertices[triangle->index3], 1.0f));
vec3 N = glm::normalize(glm::cross(vertex2 - vertex1, vertex3 - vertex1));
float D = -glm::dot(N, vertex1);
float m = glm::dot(N, ray->direction);
if (m == 0) {
    // no intersection because the ray is parallel to the plane
}
else {
    float t = -(glm::dot(N, ray->origin) + D) / m;
    if (t < 0) {
        // no intersection because the ray points away from the triangle plane
    }
    vec3 Phit = ray->origin + t * ray->direction;
    vec3 edge1 = vertex2 - vertex1;
    vec3 edge2 = vertex3 - vertex2;
    vec3 edge3 = vertex1 - vertex3;
    vec3 c1 = Phit - vertex1;
    vec3 c2 = Phit - vertex2;
    vec3 c3 = Phit - vertex3;
    if (glm::dot(N, glm::cross(edge1, c1)) > 0
        && glm::dot(N, glm::cross(edge2, c2)) > 0
        && glm::dot(N, glm::cross(edge3, c3)) > 0) {
        found = true;
        hitVector = Phit;
        hitNormal = N;
    }
}
Given that the output image is a circle, and that the same problem happens with triangles as well, my guess is the problem isn't from the intersection logic itself, but rather something wrong with the coordinate spaces or transformations. Could calculating everything in camera space be causing this?
I eventually figured it out by myself. I first noticed the problem was here:
return new Ray(/* origin= */ vec3(modelview * vec4(eye, 1.0f)),
               /* direction= */ glm::normalize(vec3(modelview * glm::normalize(vec4(alpha * u + beta * v - w, 1.0f)))));
When I removed the direction vector transformation (leaving it at just glm::normalize(alpha * u + beta * v - w)) I noticed the problem disappeared - the square was rendered correctly. I was prepared to accept it as an answer, although I wasn't completely sure why.
Then I noticed that after doing transformations on the object, the camera wasn't positioned properly, which makes sense - we're not pointing the rays in the correct direction.
I realized that my entire approach of doing the calculations in camera space was wrong. If I still wanted to use this approach, the rays would have to be transformed, but in a different way that would involve some complex math I wasn't ready to deal with.
I instead changed my approach to do transformations and intersections in world space and only use camera space at the lighting stage. We have to use camera space at some point, since we want to actually look in the direction of the object we are rendering.
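To make that concrete, the fixed ray construction roughly boils down to the following sketch (GLSL syntax for brevity; u, v, w, alpha and beta are computed exactly as in RayThruPixel above, and the GLM code is the same):

// World-space ray construction after the fix: the origin is simply the eye
// position and the direction comes straight from the camera basis, with no
// modelview multiplication of the direction.
vec3 rayOrigin(vec3 eye) {
    return eye;
}
vec3 rayDirection(float alpha, float beta, vec3 u, vec3 v, vec3 w) {
    return normalize(alpha * u + beta * v - w);
}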

Upscaling using color interpolation for lighting?

I'm writing a lighting system for 2D games using a fairly common 2D radiosity method. The idea is to generate a JFA Voronoi diagram of the game scene (black, alpha = 1.0 for occluders; color, alpha = 1.0 for emitters) and then generate an SDF from the JFA. Next, you raymarch from every pixel on screen, casting N rays of at most M steps each on the SDF, with a random angle offset per pixel. You then sample the emitter/occluder surface at the end point of each ray, step back into empty space, and sample again for the light emitted into the nearest empty space. This gives a nice result, as seen below:
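(For readers who haven't seen the technique, here is a rough sketch of the per-pixel march described above, with hypothetical constants and texture names, and omitting the step-back/re-sample detail; the actual shader in question is the upscaler further down.)

#define N_RAYS  16      // hypothetical ray count per pixel
#define M_STEPS 32      // hypothetical max steps per ray
#define TAU     6.2831853071795864769252867665590
#define EPSILON 0.001
uniform sampler2D in_SDField;  // distance field, assumed to store distance in .r
uniform sampler2D in_Scene;    // emitters/occluders

vec3 gatherGI(vec2 uv) {
    vec3 light = vec3(0.0);
    for (int i = 0; i < N_RAYS; i++) {
        float angle = TAU * float(i) / float(N_RAYS);  // plus a per-pixel random offset in practice
        vec2 dir = vec2(cos(angle), sin(angle));
        vec2 p = uv;
        for (int j = 0; j < M_STEPS; j++) {
            float d = texture2D(in_SDField, p).r;      // distance to the nearest surface
            if (d < EPSILON) {                         // reached an emitter or occluder
                light += texture2D(in_Scene, p).rgb;
                break;
            }
            p += dir * d;                              // SDF "sphere trace" step
        }
    }
    return light / float(N_RAYS);
}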
The GI itself isn't the problem - it works great. The problem is efficiency. The idea behind fixing this is to render the GI at 1/N resolution (width/N, height/N) and then upscale it using interpolation, as I've done below:
This is where the problem lies. I've implemented the upscaling using weighted color interpolation, but it produces these nasty results near occluders:
Here's the full shader:
The uniforms passed are the downsampled GI texture (in_GIField), the scene texture containing only emitters/occluders (gm_BaseTexture), the signed distance field (in_SDField), the resolution (in_Screen), and the downsample ratio (in_Sample).
/*
    UPSCALING SHADER:
    Find the nearest 4 bounding samples to the current pixel (xyDelta & xyShift).
    Calculate each sample's weight based on whether it is a marchable or a source pixel.
    Finally, perform a composite weighted interpolation of the nearest 4 samples for the current pixel.
*/
varying vec2 in_Coord;
uniform float in_Sample;
uniform vec2 in_Screen;
uniform sampler2D in_GIField;
uniform sampler2D in_SDField;

#define TPI 9.4247779607693797153879301498385
#define PI 3.1415926535897932384626433832795
#define TAU 6.2831853071795864769252867665590
#define EPSILON 0.001 // floating point precision check
#define dot(f) dot(f,f) // shorthand dot of a single float

float ATAN2(float yy, float xx) { return mod(atan(yy, xx), TAU); }
float DIRECT(vec2 v1, vec2 v2) { vec2 v3 = v2 - v1; return ATAN2(-v3.y, v3.x); }
float DIFFERENCE(float src, float dst) { return mod(dst - src + TPI, TAU) - PI; }
float V2_F16(vec2 v) { return v.x + (v.y / 255.0); }
float VMAX(vec3 v) { return max(v.r, max(v.g, v.b)); }
vec2 SAMPLEXY(vec2 xycoord) { return (floor(xycoord / in_Sample) * in_Sample) + (in_Sample * 0.5); }
vec3 TONEMAP(vec3 color, float dist) { return color * (1.0 / (1.0 + dot(dist / min(in_Screen.x, in_Screen.y)))); }

float TESTMARCH(vec2 pix, vec2 end) {
    float aspect = in_Screen.x / in_Screen.y,
          dst = distance(pix, end);
    vec2 dir = normalize((end * in_Screen) - (pix * in_Screen)) / in_Screen;

    for (float i = 0.0; i < in_Sample; i += 1.0) {
        vec2 test = vec2(pix.x * aspect, pix.y) + (dir * (i / in_Screen));
        test.x /= aspect;
        vec4 sourceCol = texture2D(gm_BaseTexture, test);
        float source = max(sourceCol.r, max(sourceCol.g, sourceCol.b));
        if (source < EPSILON && sourceCol.a > 0.0) return 0.0;
    }
    return 1.0;
}

vec3 WCOMPOSITE(vec3 colors[4], float weights[4], vec2 uv) {
    // (uv * A * B) + (B * (1.0 - A)) //0, 2, 1, 3
    float weightA = (uv.y * weights[0] * weights[2]) + (weights[2] * (1.0 - weights[0])),
          weightB = (uv.y * weights[1] * weights[3]) + (weights[3] * (1.0 - weights[1]));
    vec3 colorA = mix(colors[0], colors[2], weightA),
         colorB = mix(colors[1], colors[3], weightB);
    return mix(colorA, colorB, uv.x);
}

void main() {
    vec2 xyCoord = in_Coord * in_Screen;
    vec2 xyLight = SAMPLEXY(xyCoord);
    vec2 xyDelta = sign(sign(xyCoord - xyLight) - 1.0);

    vec2 xyShift[4];
    xyShift[0] = vec2(0., 0.) + xyDelta;
    xyShift[1] = vec2(1., 0.) + xyDelta;
    xyShift[2] = vec2(0., 1.) + xyDelta;
    xyShift[3] = vec2(1., 1.) + xyDelta;

    vec2 xyField[4]; vec3 xyColor[4]; float notSource[4]; float xyWghts[4];
    for (int i = 0; i < 4; i++) {
        xyField[i] = (xyLight + (xyShift[i] * in_Sample)) * (1.0 / in_Screen);
        xyColor[i] = texture2D(in_GIField, xyField[i]).rgb;
        notSource[i] = 1.0 - sign(texture2D(gm_BaseTexture, xyField[i]).a);
        xyWghts[i] = TESTMARCH(in_Coord, xyField[i]) * sign(VMAX(xyColor[i])) * notSource[i];
    }

    vec2 uvCoord = mod(xyCoord - xyLight, in_Sample) * (1.0 / in_Sample);
    vec3 xyFinal = WCOMPOSITE(xyColor, xyWghts, uvCoord);
    vec4 xySource = texture2D(gm_BaseTexture, in_Coord);
    float isSource = sign(xySource.a);

    gl_FragColor = vec4((isSource * xySource.rgb) + ((1.0 - isSource) * xyFinal), 1.0);
}
EDIT: This DOES produce the intended result in empty space, but ends up with nasty artifacting near emitters and occluders. I tried to solve this in the for-loop in the main function by weighting out the emitter/occluder (source pixels in the scene texture) colors, but this isn't working.
See shader code attached (Shadertoy). I noticed that the weighting function will actually produce some colors with a weight of 0 (as expected, given how it's written). I currently don't have a solution for how to remove those colors from the interpolation process entirely.
Full Source Code
Full Color Shader Code
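For reference, one standard way to drop zero-weight samples from a bilinear blend is to fold the validity weights into ordinary bilinear weights and renormalise. A hypothetical replacement for WCOMPOSITE, using the same corner order as xyShift, might look like this (a sketch, not a tested fix):

// Hypothetical alternative to WCOMPOSITE: standard bilinear weights multiplied by
// the validity weights, then renormalised so rejected samples drop out entirely.
vec3 WNORMALIZED(vec3 colors[4], float weights[4], vec2 uv) {
    float bw[4];
    bw[0] = (1.0 - uv.x) * (1.0 - uv.y);   // corner (0,0)
    bw[1] = uv.x * (1.0 - uv.y);           // corner (1,0)
    bw[2] = (1.0 - uv.x) * uv.y;           // corner (0,1)
    bw[3] = uv.x * uv.y;                   // corner (1,1)
    vec3 sum = vec3(0.0);
    float total = 0.0;
    for (int i = 0; i < 4; i++) {
        float w = bw[i] * weights[i];      // validity weight zeroes out source/occluded samples
        sum += colors[i] * w;
        total += w;
    }
    return (total > 0.0) ? sum / total : colors[0];  // fall back if every sample was rejected
}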

Why use a shader if you can use a mesh property?

I am working with shaders in THREE.js, and the example I am following shows how to create a waving flag effect with a plane mesh. The result is a plane whose z coordinates wave, as shown in the picture.
I only have a basic understanding of shaders, but my question is: why use a shader to change modelPosition.z when we could just do the same with mesh.position.z in the main JavaScript file where the THREE.Mesh is instantiated? Are shaders just a way of creating custom materials?
uniform vec2 uFrequency;
uniform float uTime;
attribute float aRandom;
varying vec2 vUv;
varying float vElevation;

void main()
{
    //gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
    //gl_Position.x += 0.5;
    //gl_Position.y += 0.5;
    vec4 modelPosition = modelMatrix * vec4(position, 1.0);
    float elevation = sin(modelPosition.x * uFrequency.x - uTime) * 0.1;
    elevation += sin(modelPosition.y * uFrequency.y - uTime) * 0.1;
    modelPosition.z += elevation;

    vec4 viewPosition = viewMatrix * modelPosition;
    vec4 projectedPosition = projectionMatrix * viewPosition;
    gl_Position = projectedPosition;

    vUv = uv;
    vElevation = elevation;
}
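For comparison, moving the whole mesh with mesh.position.z roughly amounts to adding the same constant offset to every vertex, something like the hypothetical vertex shader below, whereas the elevation above is different for every vertex, which is what makes the surface wave:

// What a uniform mesh.position.z change is roughly equivalent to: every vertex
// gets the same offset, so the plane is translated but stays flat. uOffset is hypothetical.
uniform float uOffset;

void main()
{
    vec4 modelPosition = modelMatrix * vec4(position, 1.0);
    modelPosition.z += uOffset;   // same value for all vertices: a translation, not a wave
    gl_Position = projectionMatrix * viewMatrix * modelPosition;
}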

How to implement SLERP in GLSL/HLSL

I'm attempting to SLERP from GLSL (HLSL would also be okay as I'm targeting Unity3D)
I've found this page: http://www.geeks3d.com/20140205/glsl-simple-morph-target-animation-opengl-glslhacker-demo
It contains the following listing:
#version 150
in vec4 gxl3d_Position;
in vec4 gxl3d_Attrib0;
in vec4 gxl3d_Attrib1;
out vec4 Vertex_Color;
uniform mat4 gxl3d_ModelViewProjectionMatrix;
uniform float time;

vec4 Slerp(vec4 p0, vec4 p1, float t)
{
    float dotp = dot(normalize(p0), normalize(p1));
    if ((dotp > 0.9999) || (dotp < -0.9999))
    {
        if (t <= 0.5)
            return p0;
        return p1;
    }
    float theta = acos(dotp * 3.14159/180.0);
    vec4 P = ((p0*sin((1-t)*theta) + p1*sin(t*theta)) / sin(theta));
    P.w = 1;
    return P;
}

void main()
{
    vec4 P = Slerp(gxl3d_Position, gxl3d_Attrib1, time);
    gl_Position = gxl3d_ModelViewProjectionMatrix * P;
    Vertex_Color = gxl3d_Attrib0;
}
The maths can be found on the Wikipedia page for SLERP: http://en.wikipedia.org/wiki/Slerp
But I question the line
float theta = acos(dotp * 3.14159/180.0);
That number is 2π/360, i.e. DEG2RAD
And dotp, a.k.a. cos(theta), is not an angle, so it doesn't make sense to apply DEG2RAD to it.
Isn’t the bracketing wrong?
float DEG2RAD = 3.14159/180.0;
float theta_rad = acos(dotp) * DEG2RAD;
And even then I doubt acos() returns degrees.
Can anyone provide a correct implementation of SLERP in GLSL?
All that code seems fine. Just drop the " * 3.14159/180.0 " and let it be just:
float theta = acos(dotp);
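With that change, the full function reads:

vec4 Slerp(vec4 p0, vec4 p1, float t)
{
    float dotp = dot(normalize(p0), normalize(p1));
    if ((dotp > 0.9999) || (dotp < -0.9999))
    {
        // vectors are (nearly) parallel: just pick an endpoint
        if (t <= 0.5)
            return p0;
        return p1;
    }
    float theta = acos(dotp);   // acos already returns radians
    vec4 P = (p0 * sin((1.0 - t) * theta) + p1 * sin(t * theta)) / sin(theta);
    P.w = 1.0;
    return P;
}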

Per fragment lighting on heightmap and generating normals

I am trying to implement per-fragment lighting on a heightmap. I upload the heightmap to the shader as a texture and adjust the vertex heights according to the respective pixels. To generate the normals, I take the values of four neighbouring pixels, build vectors from them, and compute their cross product like so:
vec3 offset = vec3(-1.0/mapSize, 0, 1.0/mapSize);
float s1 = texture2D(sampler, texCoord + offset.xy).x;
float s2 = texture2D(sampler, texCoord + offset.zy).x;
float s3 = texture2D(sampler, texCoord + offset.yx).x;
float s4 = texture2D(sampler, texCoord + offset.yz).x;
vec3 va = normalize(vec3(1.0, 0.0, s2 - s1));
vec3 vb = normalize(vec3(0.0, 1.0, s3 - s4));
vec3 n = normalize(cross(va, vb));
and here's my lighting function:
vec4 directional(Light light) {
    vec4 ret = vec4(0.0);
    vec3 lPos = (V * vec4(light.position, 0.0)).xyz;
    vec3 normal = normalize(vNormal);
    vec3 lightDir = normalize(lPos);
    vec3 reflectDir = reflect(-lightDir, normal);
    vec3 viewDir = normalize(-vPosition);

    float lambertTerm = max(dot(lightDir, normal), 0.0);
    float specular = 0.0;
    if (lambertTerm > 0.0) {
        float specAngle = max(dot(reflectDir, viewDir), 0.0);
        specular = pow(specAngle, material.shininess);
    }
    ret = vec4(light.ambient * material.ambient
             + light.diffuse * material.diffuse * lambertTerm
             + light.specular * material.specular * specular, 1.0);
    return ret;
}
This kind of works, except that the y and z axes seem to be flipped: if I move the light along the y axis it looks like it's moving along the z axis, and vice versa.
I should also point out that the function works perfectly on regular 3D models, so I assume the problem is in the generation of normals.
If you're using a y-up coordinate system then you want to be doing your deltas on the y component, not the z-component.
vec3 va = normalize(vec3(1.0, s2 - s1, 0.0));
vec3 vb = normalize(vec3(0.0, s4 - s3, 1.0));
Also you should confirm whether it's s3 - s4 or s4 - s3.
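Putting the question's sampling code together with that fix, a y-up version might look like the sketch below. The cross-product order is my choice so that the normal points along +y for flat terrain; flip it (or the deltas) if your orientation differs:

vec3 heightmapNormal(sampler2D heightMap, vec2 texCoord, float mapSize) {
    vec3 offset = vec3(-1.0 / mapSize, 0.0, 1.0 / mapSize);
    float s1 = texture2D(heightMap, texCoord + offset.xy).x;  // left
    float s2 = texture2D(heightMap, texCoord + offset.zy).x;  // right
    float s3 = texture2D(heightMap, texCoord + offset.yx).x;  // below
    float s4 = texture2D(heightMap, texCoord + offset.yz).x;  // above
    vec3 va = normalize(vec3(1.0, s2 - s1, 0.0));  // tangent along x
    vec3 vb = normalize(vec3(0.0, s4 - s3, 1.0));  // tangent along z
    return normalize(cross(vb, va));               // ordered so the normal points up (+y)
}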
