Why use a shader if you can use a mesh property?

I am working with shaders in THREE.js, and the example I am following shows how to create a waving-flag effect with a plane mesh. The result is a plane whose z coordinates wave, as shown in the picture.
I only have a basic understanding of shaders, but my question is: why use a shader to change 'modelPosition.z' when we could do the same with mesh.position.z in the main JavaScript file where the THREE.Mesh is instantiated? Are shaders just a way of creating custom materials?
uniform vec2 uFrequency;
uniform float uTime;
attribute float aRandom;
varying vec2 vUv;
varying float vElevation;
void main()
{
    // gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
    // gl_Position.x += 0.5;
    // gl_Position.y += 0.5;

    vec4 modelPosition = modelMatrix * vec4(position, 1.0);

    float elevation = sin(modelPosition.x * uFrequency.x - uTime) * 0.1;
    elevation += sin(modelPosition.y * uFrequency.y - uTime) * 0.1;
    modelPosition.z += elevation;

    vec4 viewPosition = viewMatrix * modelPosition;
    vec4 projectedPosition = projectionMatrix * viewPosition;
    gl_Position = projectedPosition;

    vUv = uv;
    vElevation = elevation;
}
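To illustrate the contrast the question is getting at, here is a sketch of my own (the uOffsetZ uniform is hypothetical and not part of the original shader): adding a single offset in the vertex shader shifts every vertex by the same amount, which is effectively what mesh.position.z does, whereas the elevation above varies per vertex because it is computed from modelPosition.xy, and that per-vertex variation is what produces the wave.
uniform float uOffsetZ; // hypothetical uniform, not in the original shader

void main()
{
    vec4 modelPosition = modelMatrix * vec4(position, 1.0);
    // the same value is added to every vertex, so the whole plane just moves;
    // this is all that mesh.position.z can give you
    modelPosition.z += uOffsetZ;
    gl_Position = projectionMatrix * viewMatrix * modelPosition;
}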

Related

Hollow radial gradient shader with speedlines

I'm trying to create a shader which ideally produces this:
And so far I've come up with this shader:
shader_type canvas_item;
uniform vec4 main_color : hint_color = vec4(1.0);
uniform float outter_radius : hint_range(0.0, 1.0) = 1.0;
uniform float inner_radius : hint_range(0.0, 1.0) = 1.0;
uniform float blur_radius : hint_range(0.0, 1.0) = 1.0;
void fragment() {
    float dist_length = distance(UV, vec2(0.5));
    COLOR = main_color;
    if (dist_length < outter_radius && dist_length > inner_radius)
        COLOR.a = (dist_length - inner_radius) / (outter_radius - inner_radius);
    else if (dist_length < blur_radius && dist_length > outter_radius)
        COLOR.a = (blur_radius - dist_length) / (blur_radius - outter_radius);
    else
        COLOR = vec4(0.0);
}
Which produces this:
But I'm stuck trying to add the speed lines. I tried combining them into the shader like so:
shader_type canvas_item;
uniform vec4 main_color : hint_color = vec4(1.0);
uniform float outter_radius : hint_range(0.0, 1.0) = 1.0;
uniform float inner_radius : hint_range(0.0, 1.0) = 1.0;
uniform float blur_radius : hint_range(0.0, 1.0) = 1.0;
uniform sampler2D noise;
uniform float sample_radius: hint_range(0.0, 1.0) = 0.5;
uniform vec4 line_color: hint_color = vec4(1.0);
uniform float center_radius: hint_range(0.0, 1.0) = 0.5;
const float pi = 3.14159265359;
void fragment() {
    vec2 dist = UV - vec2(0.5);
    float dist_length = length(dist);
    float angle = atan(dist.y / dist.x);
    vec2 sample = vec2(sample_radius * cos(angle), sample_radius * sin(angle));
    float noise_value = texture(noise, sample).r;
    if (dist_length < outter_radius) {
        vec4 lines_color = mix(line_color, vec4(0.0), noise_value);
        lines_color = mix(lines_color, vec4(0.0), 1.0 - dist_length - center_radius);
        COLOR = lines_color;
        if (dist_length > inner_radius) {
            COLOR += main_color;
            COLOR.a = (dist_length - inner_radius) / (outter_radius - inner_radius);
        }
    }
    else {
        if (dist_length < blur_radius) {
            COLOR += main_color;
            COLOR.a = (blur_radius - dist_length) / (blur_radius - outter_radius);
        }
        else
            COLOR = vec4(0.0);
    }
}
but it results in this:
And even if I do get it to work, I'm still left with the problem that the center won't be hollow, as shown in the ideal image.
So is there any fix for this, or a completely different approach I should be taking?
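One possible direction (an untested sketch of my own, not from the post): compute a single ring mask from the radii and use it as the alpha for both the gradient and the lines, so the centre stays hollow, and use the two-argument atan to keep the full angle range. The uniform names mirror the question's shaders; smoothstep stands in for the linear ramps.
shader_type canvas_item;

uniform vec4 main_color : hint_color = vec4(1.0);
uniform vec4 line_color : hint_color = vec4(1.0);
uniform float inner_radius : hint_range(0.0, 1.0) = 0.25;
uniform float outter_radius : hint_range(0.0, 1.0) = 0.4;
uniform float blur_radius : hint_range(0.0, 1.0) = 0.5;
uniform sampler2D noise;
uniform float sample_radius : hint_range(0.0, 1.0) = 0.5;

void fragment() {
    vec2 dist = UV - vec2(0.5);
    float dist_length = length(dist);
    // atan(y, x) keeps the full angle range; atan(y / x) folds opposite quadrants together
    float angle = atan(dist.y, dist.x);
    vec2 noise_uv = vec2(cos(angle), sin(angle)) * sample_radius + vec2(0.5);
    float noise_value = texture(noise, noise_uv).r;

    // ring mask: 0 at the centre, ramps up between inner_radius and outter_radius,
    // then fades back to 0 between outter_radius and blur_radius
    float ring = smoothstep(inner_radius, outter_radius, dist_length)
               * (1.0 - smoothstep(outter_radius, blur_radius, dist_length));

    // blend the speed lines over the gradient colour, then mask everything by the ring,
    // so both the gradient and the lines vanish in the hollow centre
    vec4 col = mix(main_color, line_color, noise_value);
    COLOR = vec4(col.rgb, col.a * ring);
}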

Upscaling using color interpolation for lighting?

I'm writing a lighting system for 2D games using a rather common method of 2D radiosity. The idea is to generate a JFA voronoi of the game scene (black, alpha = 1.0 for occluders and color, alpha = 1.0 for emitters) and generate an SDF from the JFA. Next you raymarch every pixel on screen for N rays with M max steps on the SDF with random angle offsets for each pixel. You then sample the emitter/occluder surface at the end point of each ray, step back into empty space and sample again for light emitted in the nearest empty space. This gives you a nice result as seen below:
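Before moving on to the efficiency problem, here is a minimal sketch of the gather step just described, purely for illustration (the names u_scene, u_sdf, N_RAYS and MAX_STEPS are my own, and the "step back into empty space and sample again" refinement is omitted):
// Sketch of the per-pixel gather: march N_RAYS rays through the SDF and
// accumulate the emission found where each ray lands.
uniform sampler2D u_scene; // emitters/occluders (rgb = emission, a = occupancy)
uniform sampler2D u_sdf;   // distance to the nearest surface, stored in .r
const int N_RAYS = 16;
const int MAX_STEPS = 32;
const float TAU = 6.2831853;

vec3 gatherGI(vec2 uv, float jitter) {
    vec3 total = vec3(0.0);
    for (int i = 0; i < N_RAYS; i++) {
        float angle = TAU * (float(i) + jitter) / float(N_RAYS);
        vec2 dir = vec2(cos(angle), sin(angle));
        vec2 p = uv;
        for (int s = 0; s < MAX_STEPS; s++) {
            float d = texture2D(u_sdf, p).r; // distance to the nearest surface
            if (d < 0.001) {                 // hit: sample whatever we ran into
                total += texture2D(u_scene, p).rgb;
                break;
            }
            p += dir * d;                    // sphere-trace along the ray
            if (p.x < 0.0 || p.x > 1.0 || p.y < 0.0 || p.y > 1.0) break;
        }
    }
    return total / float(N_RAYS);
}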
That isn't the problem; it works great. The problem is efficiency. The idea behind fixing this is to render the GI at 1/N sample size (width/N, height/N) and then upscale it using interpolation, as I've done below:
This is the problem. I've accomplished the upscaling using weighted color interpolation, but it produces these nasty artifacts near occluders:
Here's the full shader:
The uniforms passed in are the downsampled GI texture (in_GIField), the scene texture containing only emitters and occluders (gm_BaseTexture), the signed distance field (in_SDField), the resolution (in_Screen), and the downsample ratio (in_Sample).
/*
    UPSCALING SHADER:
    Find the nearest 4 bounding samples to the current pixel (xyDelta & xyShift).
    Calculate each sample's weight based on whether it is a marchable or a source pixel.
    Finally, perform a composite weighted interpolation of the current pixel against the nearest 4 samples.
*/
varying vec2 in_Coord;
uniform float in_Sample;
uniform vec2 in_Screen;
uniform sampler2D in_GIField;
uniform sampler2D in_SDField;
#define TPI 9.4247779607693797153879301498385
#define PI 3.1415926535897932384626433832795
#define TAU 6.2831853071795864769252867665590
#define EPSILON 0.001 // floating point precision check
#define dot(f) dot(f,f) // shorthand dot of a single float
float ATAN2(float yy, float xx) { return mod(atan(yy, xx), TAU); }
float DIRECT(vec2 v1, vec2 v2) { vec2 v3 = v2 - v1; return ATAN2(-v3.y, v3.x); }
float DIFFERENCE(float src, float dst) { return mod(dst - src + TPI, TAU) - PI; }
float V2_F16(vec2 v) { return v.x + (v.y / 255.0); }
float VMAX(vec3 v) { return max(v.r, max(v.g, v.b)); }
vec2 SAMPLEXY(vec2 xycoord) { return (floor(xycoord / in_Sample) * in_Sample) + (in_Sample*0.5); }
vec3 TONEMAP(vec3 color, float dist) { return color * (1.0 / (1.0 + dot(dist / min(in_Screen.x, in_Screen.y)))); }
float TESTMARCH(vec2 pix, vec2 end) {
    float aspect = in_Screen.x / in_Screen.y,
          dst = distance(pix, end);
    vec2 dir = normalize((end*in_Screen) - (pix*in_Screen)) / in_Screen;
    for(float i = 0.0; i < in_Sample; i += 1.0) {
        vec2 test = vec2(pix.x * aspect, pix.y) + (dir * (i/in_Screen));
        test.x /= aspect;
        vec4 sourceCol = texture2D(gm_BaseTexture, test);
        float source = max(sourceCol.r, max(sourceCol.g, sourceCol.b));
        if (source < EPSILON && sourceCol.a > 0.0) return 0.0;
    }
    return 1.0;
}
vec3 WCOMPOSITE(vec3 colors[4], float weights[4], vec2 uv) {
    // (uv * A * B) + (B * (1.0 - A)) //0, 2, 1, 3
    float weightA = (uv.y * weights[0] * weights[2]) + (weights[2] * (1.0 - weights[0])),
          weightB = (uv.y * weights[1] * weights[3]) + (weights[3] * (1.0 - weights[1]));
    vec3 colorA = mix(colors[0], colors[2], weightA),
         colorB = mix(colors[1], colors[3], weightB);
    return mix(colorA, colorB, uv.x);
}
void main() {
    vec2 xyCoord = in_Coord * in_Screen;
    vec2 xyLight = SAMPLEXY(xyCoord);
    vec2 xyDelta = sign(sign(xyCoord - xyLight) - 1.0);

    vec2 xyShift[4];
    xyShift[0] = vec2(0.,0.) + xyDelta;
    xyShift[1] = vec2(1.,0.) + xyDelta;
    xyShift[2] = vec2(0.,1.) + xyDelta;
    xyShift[3] = vec2(1.,1.) + xyDelta;

    vec2 xyField[4]; vec3 xyColor[4]; float notSource[4]; float xyWghts[4];
    for(int i = 0; i < 4; i++) {
        xyField[i] = (xyLight + (xyShift[i] * in_Sample)) * (1.0/in_Screen);
        xyColor[i] = texture2D(in_GIField, xyField[i]).rgb;
        notSource[i] = 1.0 - sign(texture2D(gm_BaseTexture, xyField[i]).a);
        xyWghts[i] = TESTMARCH(in_Coord, xyField[i]) * sign(VMAX(xyColor[i])) * notSource[i];
    }

    vec2 uvCoord = mod(xyCoord-xyLight, in_Sample) * (1.0/in_Sample);
    vec3 xyFinal = WCOMPOSITE(xyColor, xyWghts, uvCoord);
    vec4 xySource = texture2D(gm_BaseTexture, in_Coord);
    float isSource = sign(xySource.a);
    gl_FragColor = vec4((isSource * xySource.rgb) + ((1.0-isSource) * xyFinal), 1.0);
}
EDIT: This DOES produce the intended result in empty space, but it ends up with nasty artifacts near emitters and occluders. I tried to solve this in the for-loop in the main function by weighting out the emitter/occluder colors (the source pixels in the scene texture), but this isn't working.
See the shader code attached (Shadertoy). I noticed that the weighting function will actually produce some colors with a weight of 0 (as expected, given how it was originally written). I currently don't have a solution for removing such colors from the interpolation process entirely.
Full Source Code
Full Color Shader Code

pixi.js: how to draw outline of container while keeping its content transparent

I have a container with several graphics containing circles. I would like to only render this container's outline, without the graphics themselves.
I managed to draw the outlines using OutlineFilter, and I managed to make the container transparent using AlphaFilter, but not both at the same time, no matter in which order I added the filters.
That is technically not possible the way you intend to do it. One shader (a pixi.js filter) doesn't know about the previous shader, for example where the outline was painted or what the original texture alpha was.
Alternatively you can create a new filter with a new shader that achieves that effect. I'm basing this on the OutlineFilter:
varying vec2 vTextureCoord;
uniform sampler2D uSampler;
uniform vec2 thickness;
uniform vec4 outlineColor;
uniform vec4 filterClamp;
const float DOUBLE_PI = 3.14159265358979323846264 * 2.;
void main(void) {
    vec4 ownColor = texture2D(uSampler, vTextureCoord);
    vec4 curColor;
    float maxAlpha = 0.;
    vec2 displaced;
    for (float angle = 0.; angle <= DOUBLE_PI; angle += 0.1) {
        displaced.x = vTextureCoord.x + thickness.x * cos(angle);
        displaced.y = vTextureCoord.y + thickness.y * sin(angle);
        curColor = texture2D(uSampler, clamp(displaced, filterClamp.xy, filterClamp.zw));
        maxAlpha = max(maxAlpha, curColor.a);
    }
    float resultAlpha = maxAlpha * step(ownColor.a, 0.0) > 0. ? 1. : 0.0;
    gl_FragColor = vec4(outlineColor.rgb * resultAlpha, resultAlpha);
}
Example result as in the pixi-filters demos:

Path tracing cosine hemisphere sampling and emissive objects

I'm building my own path tracer, teaching myself from online resources. But I find that my implementation has an issue with emissive objects in the scene, especially in a dark environment (no skybox).
For example, in the following environment:
The box in the middle is the only light source in the environment, with an emission value of (3.0, 3.0, 3.0); all other objects have an emission value of (0.0, 0.0, 0.0). I was expecting the light to scatter smoothly across the walls, but it looks biased towards one direction.
My cosine sampling function is (modified from lwjgl3-demos):
float3 SampleHemisphere3(float3 norm, float alpha = 0.0)
{
    float3 randomVec = rand3();
    float r = saturate(pow(randomVec.x, 1.0 / (1.0 + alpha)));
    float angle = randomVec.y * PI_TWO;
    float sr = saturate(sqrt(1.0 - r * r));
    float3 ph = float3(sr * cos(angle), sr * sin(angle), r);
    float3 tangent = normalize(randomVec * 2.0 - 1.0);
    float3 bitangent = cross(norm, tangent);
    tangent = cross(norm, bitangent);
    return mul(ph, float3x3(tangent, bitangent, norm));
}
This is how I compute the shading and next ray info:
float3 Shade(inout Ray ray, HitInfo hit)
{
    ray.origin = hit.pos + hit.norm * 1e-5;
    ray.dir = normalize(SampleHemisphere3(hit.norm, 0.0));
    ray.energy *= 2.0 * hit.colors.albedo * saturate(dot(hit.norm, ray.dir));
    return hit.colors.emission;
}
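A side note of mine, not from the original post: with alpha = 0, the mapping cos(theta) = pow(rand, 1.0 / (1.0 + alpha)) makes cos(theta) uniformly distributed, i.e. plain uniform hemisphere sampling with pdf 1/(2*pi). For a Lambertian BRDF of albedo/pi, the Monte Carlo estimator then works out to exactly the factor used in Shade:

$$\frac{f_r \cos\theta}{p(\omega)} = \frac{(\rho/\pi)\,\cos\theta}{1/(2\pi)} = 2\,\rho\,\cos\theta$$

where rho is the albedo.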
And the recursion happens here:
// generate ray from camera
Ray ray = CreateCameraRay(camera, PixelCenter);

// trace ray
float3 color = 0.0;
for (int i = 0; i < _TraceDepth; i++)
{
    // get nearest ray hit
    HitInfo hit = Trace(ray);
    // accumulate color
    color += ray.energy * Shade(ray, hit);
    // if ray has no energy, stop tracing
    if (!any(ray.energy))
        break;
}

// write to frame target
_FrameTarget[id.xy] = float4(color, 1.0);
I learned the last two functions from GPU Path Tracing in Unity.
Here is another example of a similar error:
I feel that the problem is caused by the cosine-weighted hemisphere sampling, but I have no idea how to fix it.
What should I do to get a distributed light effect from emissive objects on diffuse surfaces? Do I have to define explicit light sources with shapes and sample them directly, instead of relying on emissive objects?
Edit:
It is indeed the cosine weighted sampling that is causing the problem.
Instead of:
float3 tangent = normalize(randomVec * 2.0 - 1.0);
I should have another vector of independent random values:
float3 tangent = normalize(rand3() * 2.0 - 1.0);
Now it shows:
Still not perfect, because it clearly shows a cross shape (probably caused by the sparsity of floating-point values).
How can I further improve this?
Edit 2:
After some more debugging and experiments, I figured out the "solution", but I don't understand the reason behind it.
The random value generator is from this Shadertoy project; I chose it because I saw that GLSL-PathTracer also uses it.
Here is part of it:
void rng_initialize(float2 p, int frame)
{
    //white noise seed
    RandomSeed = uint4(p, frame, p.x + p.y);
}

void pcg4d(inout uint4 v)
{
    v = v * 1664525u + 1013904223u;
    v.x += v.y * v.w;
    v.y += v.z * v.x;
    v.z += v.x * v.y;
    v.w += v.y * v.z;
    v = v ^ (v >> 16u);
    v.x += v.y * v.w;
    v.y += v.z * v.x;
    v.z += v.x * v.y;
    v.w += v.y * v.z;
}

float3 rand3()
{
    pcg4d(RandomSeed);
    return float3(RandomSeed.xyz) / float(0xffffffffu);
}

float4 rand4()
{
    pcg4d(RandomSeed);
    return float4(RandomSeed) / float(0xffffffffu);
}
At initialization, I pass float2(id.xy) from SV_DispatchThreadID and the current frame counter to rng_initialize.
And here is my new cosine weighted hemisphere sampling function:
float3 SampleHemisphere3(float3 norm, float alpha = 0.0)
{
    float4 rand = rand4();
    float r = pow(rand.w, 1.0 / (1.0 + alpha));
    float angle = rand.y * PI_TWO;
    float sr = sqrt(1.0 - r * r);
    float3 ph = float3(sr * cos(angle), sr * sin(angle), r);
    float3 tangent = normalize(rand.zyx + rand3() - 1.0);
    float3 bitangent = cross(norm, tangent);
    tangent = cross(norm, bitangent);
    return mul(ph, float3x3(tangent, bitangent, norm));
}
And the results (which look much better):
My discoveries from the experiments are:
r in the sampling function has to depend on the w component of the random values.
angle can use any of x, y, z.
tangent has to depend on both the current xyz values and a new vector of random xyz values. The order doesn't matter, so I use zyx here. Dropping either the current xyz or the new xyz results in a cross shape on the wall.
I'm not sure if this is a correct solution, but as far as my eyes can tell, it solves the problem.
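For what it's worth, a common way to sidestep the tangent problem entirely is to build the basis deterministically from the normal, so the random numbers only drive the sampled direction. Here is a sketch of my own in GLSL (the question's code is HLSL, and the names are illustrative):
// Build a tangent frame from the normal alone; the random numbers then only
// drive the sampled direction, so correlated randoms cannot skew the basis.
mat3 orthonormalBasis(vec3 n)
{
    // pick a helper axis that cannot be parallel to n
    vec3 helper = abs(n.x) > 0.9 ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
    vec3 tangent = normalize(cross(helper, n));
    vec3 bitangent = cross(n, tangent);
    return mat3(tangent, bitangent, n);
}

// Cosine-weighted hemisphere sample around n, from two independent uniforms u.
vec3 sampleHemisphereCosine(vec3 n, vec2 u)
{
    float r = sqrt(u.x);
    float angle = 6.2831853 * u.y;
    vec3 local = vec3(r * cos(angle), r * sin(angle), sqrt(1.0 - u.x));
    return orthonormalBasis(n) * local;
}
The branch just picks a helper axis that cannot be parallel to the normal, so the cross products never degenerate.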

How to implement SLERP in GLSL/HLSL

I'm attempting to SLERP in GLSL (HLSL would also be okay, as I'm targeting Unity3D).
I've found this page: http://www.geeks3d.com/20140205/glsl-simple-morph-target-animation-opengl-glslhacker-demo
It contains the following listing:
#version 150
in vec4 gxl3d_Position;
in vec4 gxl3d_Attrib0;
in vec4 gxl3d_Attrib1;
out vec4 Vertex_Color;
uniform mat4 gxl3d_ModelViewProjectionMatrix;
uniform float time;

vec4 Slerp(vec4 p0, vec4 p1, float t)
{
    float dotp = dot(normalize(p0), normalize(p1));
    if ((dotp > 0.9999) || (dotp < -0.9999))
    {
        if (t <= 0.5)
            return p0;
        return p1;
    }
    float theta = acos(dotp * 3.14159/180.0);
    vec4 P = ((p0*sin((1-t)*theta) + p1*sin(t*theta)) / sin(theta));
    P.w = 1;
    return P;
}

void main()
{
    vec4 P = Slerp(gxl3d_Position, gxl3d_Attrib1, time);
    gl_Position = gxl3d_ModelViewProjectionMatrix * P;
    Vertex_Color = gxl3d_Attrib0;
}
The maths can be found on the Wikipedia page for SLERP: http://en.wikipedia.org/wiki/Slerp
But I question this line:
float theta = acos(dotp * 3.14159/180.0);
That number is 2π/360, i.e. DEG2RAD
And dotp, a.k.a. cos(theta), is not an angle, i.e. it doesn't make sense to apply DEG2RAD to it.
Isn't the bracketing wrong? Shouldn't it be:
float DEG2RAD = 3.14159/180.0;
float theta_rad = acos(dotp) * DEG2RAD;
And even then I doubt acos() returns degrees.
Can anyone provide a correct implementation of SLERP in GLSL?
All that code seems fine. Just drop the " * 3.14159/180.0 " and let it be just:
float theta = acos(dotp);
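Putting that fix back into the posted listing, the corrected function would read like this (my edit of the question's code; only the acos line changes, plus float literals for tidiness):
vec4 Slerp(vec4 p0, vec4 p1, float t)
{
    float dotp = dot(normalize(p0), normalize(p1));
    if ((dotp > 0.9999) || (dotp < -0.9999))
    {
        // endpoints are (nearly) parallel: just snap to the closer one
        if (t <= 0.5)
            return p0;
        return p1;
    }
    float theta = acos(dotp); // acos already returns radians, no DEG2RAD needed
    vec4 P = (p0 * sin((1.0 - t) * theta) + p1 * sin(t * theta)) / sin(theta);
    P.w = 1.0;
    return P;
}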
