I'm now working on deferred shading using WebGL 2.0. One primary problem I'm facing is that I can't read the depth value from the DEPTH_ATTACHMENT. I create my own G-buffer with a DEPTH24_STENCIL8 texture as the DEPTH_ATTACHMENT, then bind this texture to a sampler and try to read the value in the fragment shader of the deferred shading pass like this:
uniform sampler2D u_depthSampler;
vec4 depthValue = texture(u_depthSampler, v_uv);
Then I set the depthValue as output in my shading fragment shader:
layout(location = 0) out vec4 o_fragOut;
o_fragOut.xyz = depthValue.xyz;
o_fragOut.w = 1.0;
When doing this in Firefox, it didn't report any error, but the output color is just pure red (which means vec3(1.0, 0.0, 0.0), I think). This really confuses me. Can anyone provide some direction? Is there any problem with my GLSL code? Thanks!
The depth buffer is not linear. To linearize it use this formula:
float f = 1000.0; //far plane
float n = 1.0; //near plane
float z = (2.0 * n) / (f + n - texture2D( diffuse, texCoord ).x * (f - n));
gl_FragColor = vec4(z, z, z, 1.0);
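Adapted to the WebGL2 / GLSL ES 3.00 style used in the question, a minimal sketch might look like this (the sampler and varying names follow the question's code; the near/far values are placeholders you should replace with your camera's actual planes):
#version 300 es
precision highp float;
uniform sampler2D u_depthSampler; // the DEPTH24_STENCIL8 attachment of the G-buffer
in vec2 v_uv;
layout(location = 0) out vec4 o_fragOut;
void main() {
    float n = 1.0;    // near plane (assumed)
    float f = 1000.0; // far plane (assumed)
    // Only the red channel carries the depth value
    float d = texture(u_depthSampler, v_uv).x;
    // Linearize the hyperbolic depth into [0, 1]
    float z = (2.0 * n) / (f + n - d * (f - n));
    o_fragOut = vec4(vec3(z), 1.0);
}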
Sorry, I may have made a big mistake: the depth texture is actually nearly pure red, but not fully red. I tried this:
float depthPow = pow(depthValue.x, 10.0);
o_fragOut.xyz = vec3(depthPow);
And I get the right result I think.
I made a plane in Three.js using Mesh, PlaneGeometry and ShaderMaterial. It's a very simple/basic form.
I applied a simple formula to make the plane steeper. Now I'm trying to make the lower surface darker than the higher surface. Here is what I tried.
Vertex shader:
varying vec3 test;
void main(void) {
float amp = 2.5;
float z = amp * sin(position.x*0.2) * cos(position.y*0.5); //this makes the surface steeper
test = vec3(1, 1, -z); //this goes to the fragment shader
//test = vec3(698.0, 400.0, -z); I have tried this; the first coordinates here are meant to normalize the vector
gl_Position = projectionMatrix * modelViewMatrix * vec4(position.x, position.y, z, 1.0);
}
Fragment shader:
precision mediump float;
varying vec3 test;
void main(void) {
vec3 st = gl_FragCoord.xyz/test;
gl_FragColor = vec4(st.xyz, 1.0);
}
Result:
This result is not desirable, since the contrast between top and bottom is too aggressive, and I'd like the lower surface to be less white. What do I have to change to accomplish this?
If you want to create a brightness based on the height of the waves, then you'll need to use only the test.z value, since test.xy aren't really doing anything. The problem is that brightness needs a value in the [0, 1] range, but due to the amplitude multiplication you're getting values in the [-2.5, 2.5] range.
precision mediump float;
varying vec3 test;
void main(void) {
float amp = 2.5;
// Extract brightness from test.z
float brightness = test.z;
// Convert brightness from [-2.5, 2.5] to [0.0, 1.0] range
brightness = (brightness / amp) * 0.5 + 0.5;
vec3 yellow = vec3(1.0, 1.0, 0.0);
// Multiply final color by brightness (0 brightness = black)
vec3 st = yellow * brightness;
gl_FragColor = vec4(st.xyz, 1.0);
}
That should give you a smoother transition from full yellow to black.
As an aside, to help me visualize the values I'm getting from GLSL functions, I like to use the Graphtoy tool. I recommend you give it a shot to help you write shaders!
I'm trying to get into shaders and decided to init a project using Rust and Bevy; the objective is to reproduce a raymarching shader just to confirm that the environment is OK. I was able to reproduce the "fragCoord" by using:
var fragCoord: vec2<f32> = vec2<f32>((input.uv.x+1.0) * iResolution.res.x, (input.uv.y+1.0) * iResolution.res.y);
//iResolution.res is the screen res in pixels
Up to this point everything is OK. When trying to reproduce BigWings' example, I noticed a difference in the resulting image when using only the following line:
vec2 uv = (fragCoord-.5*iResolution.xy)/iResolution.y;
//where fragCoord is the pixel position of the frag and the iResolution is the screen size in pixels
shader result image
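For reference, the remap in that line centers the coordinates on the screen and normalizes them by the height, which is what gives shadertoy its aspect-independent coordinate system. Annotated in plain GLSL:
vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;
// (0, 0) is now the center of the screen
// uv.y spans [-0.5, +0.5] from bottom to top
// uv.x spans [-0.5 * aspect, +0.5 * aspect], where aspect = iResolution.x / iResolution.y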
I suspected fragCoord, but after a check it gives the same result as shadertoy's version. After checking iResolution, though, I noticed a big difference, so I did a test with fixed output color values and got this; as you can see, the color is not the same:
Result of using the same values for the shader
I tried different browsers too but got the same result :(. I now suspect my camera/mesh code:
//camera
fn spawn_camera(mut commands: Commands) {
let mut camera = OrthographicCameraBundle::new_2d();
camera.orthographic_projection.right = 0.0;
camera.orthographic_projection.left = 1.0 ;
camera.orthographic_projection.top = 0.0;
camera.orthographic_projection.bottom = 1.0;
camera.orthographic_projection.scaling_mode = ScalingMode::None;
commands.spawn_bundle(camera);
}
//mesh to display the frag shader
let ZOOM = 1.0;
let vertices = [
([-1.0,-1.0,0.0] /*pos*/, [0.0,0.0,0.0] /*normal*/, [1.0 / ZOOM, 1.0 / ZOOM] /*uv*/), //bottom left
([-1.0,1.0,0.0], [0.0, 0.0, 0.0], [1.0 / ZOOM, -1.0 / ZOOM]), //top left
([1.0,1.0,0.0], [0.0, 0.0, 0.0], [-1.0 / ZOOM, -1.0 / ZOOM]), //top right
([1.0,-1.0,0.0], [0.0, 0.0, 0.0], [-1.0 / ZOOM, 1.0 / ZOOM]), //bottom right
];
let indices = Indices::U32(vec![0, 3, 2, 0, 2, 1]);
My main question here is: how can I reproduce the exact environment of shadertoy using Rust and Bevy? If it's not possible, please show me an alternative.
I'm just trying to use the fragment shader; I don't need to show anything besides the actual fragment shader result.
Kevin Reid is correct. The default color space for the canvas is sRGB:
How to specify color space for canvas in JavaScript?
You can get the expected result by transforming your colors from linear color space to sRGB, as posted here:
https://www.shadertoy.com/view/Wd2yRt
Which will make your code look like this:
vec3 lin2srgb( vec3 cl )
{
vec3 c_lo = 12.92 * cl;
vec3 c_hi = 1.055 * pow(cl,vec3(0.41666)) - 0.055;
vec3 s = step( vec3(0.0031308), cl);
return mix( c_lo, c_hi, s );
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
vec2 uv = (fragCoord-.5*iResolution.xy)/iResolution.y;
vec3 c = vec3( lin2srgb( vec3(uv.xy, 0.0) ) );
fragColor = vec4(c,1.0);
}
Making you end up with:
So, to finally answer your question:
To reproduce shadertoy's environment, you need to use the sRGB color space in Rust.
Alternative: Just use the transformation to sRGB in shadertoy.
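Conversely, if you want the shadertoy look inside Bevy without touching the shadertoy code, you could apply the inverse transform in your Bevy shader, so that the pipeline's linear-to-sRGB conversion cancels out. A sketch of the standard sRGB-to-linear transfer function, mirroring lin2srgb above:
vec3 srgb2lin( vec3 cs )
{
    vec3 c_lo = cs / 12.92;
    vec3 c_hi = pow( (cs + 0.055) / 1.055, vec3(2.4) );
    vec3 s = step( vec3(0.04045), cs );
    return mix( c_lo, c_hi, s );
}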
I have a container with several graphics containing circles. I would like to only render this container's outline, without the graphics themselves.
I managed to draw the outlines using OutlineFilter, and I managed to make the container transparent using AlphaFilter, but not both at the same time, no matter in which order I added the filters.
That is technically not possible the way you intend to do it. One shader (a pixi.js filter) doesn't know about the previous shader, such as where the outline was painted or what the original texture alpha was.
Alternatively you can create a new filter with a new shader that achieves that effect. I'm basing this on the OutlineFilter:
varying vec2 vTextureCoord;
uniform sampler2D uSampler;
uniform vec2 thickness;
uniform vec4 outlineColor;
uniform vec4 filterClamp;
const float DOUBLE_PI = 3.14159265358979323846264 * 2.;
void main(void) {
    // Alpha of the current pixel itself
    vec4 ownColor = texture2D(uSampler, vTextureCoord);
    vec4 curColor;
    float maxAlpha = 0.;
    vec2 displaced;
    // Walk a circle of radius `thickness` around the pixel and keep
    // the highest alpha found among the sampled neighbours
    for (float angle = 0.; angle <= DOUBLE_PI; angle += 0.1) {
        displaced.x = vTextureCoord.x + thickness.x * cos(angle);
        displaced.y = vTextureCoord.y + thickness.y * sin(angle);
        curColor = texture2D(uSampler, clamp(displaced, filterClamp.xy, filterClamp.zw));
        maxAlpha = max(maxAlpha, curColor.a);
    }
    // Output the outline color only where the pixel itself is fully
    // transparent (step(ownColor.a, 0.0) is 1.0 only when ownColor.a == 0.0)
    // but at least one neighbour within the thickness has some alpha
    float resultAlpha = maxAlpha * step(ownColor.a, 0.0) > 0. ? 1. : 0.0;
    gl_FragColor = vec4(outlineColor.rgb * resultAlpha, resultAlpha);
}
Example result as in the pixi-filters demos:
I have a Three js scene that contains a 100x100 plane centred at the origin (ie. min coord: (-50,-50), max coord: (50,50)). I am trying to have the plane appear as a colour wheel by using the x and z coords in a custom glsl shader. Using this guide (see HSB in polar coordinates, towards the bottom of the page) I have gotten my
Shader Code with Three.js Scene
but it is not quite right.
I have played around tweaking all the variables that make sense to me, but as you can see in the screenshot, the colours change twice as often as they should. My math intuition says to just divide the angle by 2, but when I tried that it was completely incorrect.
I know the solution is very simple, but I have tried for a couple of hours and haven't got it.
How do I turn my shader that I currently have into one that makes exactly 1 full colour rotation in 2pi radians?
EDIT: here is the relevant shader code in plain text
varying vec3 vColor;
const float PI = 3.1415926535897932384626433832795;
uniform float delta;
uniform float scale;
uniform float size;
vec3 hsb2rgb( in vec3 c ){
vec3 rgb = clamp(abs(mod(c.x*6.0 + vec3(0.0,4.0,2.0), 6.0) - 3.0) - 1.0, 0.0, 1.0);
rgb = rgb*rgb*(3.0-2.0*rgb);
return c.z * mix( vec3(1.0), rgb, c.y);
}
void main()
{
vec4 worldPosition = modelMatrix * vec4(position, 1.0);
float r = 0.875;
float g = 0.875;
float b = 0.875;
if (worldPosition.y > 0.06 || worldPosition.y < -0.06) {
vec2 toCenter = vec2(0.5) - vec2((worldPosition.z+50.0)/100.0, (worldPosition.x+50.0)/100.0);
float angle = atan(worldPosition.z/worldPosition.x);
float radius = length(toCenter) * 2.0;
vColor = hsb2rgb(vec3((angle/(PI))+0.5,radius,1.0));
} else {
vColor = vec3(r,g,b);
}
vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
gl_PointSize = size * (scale/length(mvPosition.xyz));
gl_Position = projectionMatrix * mvPosition;
}
I have discovered that the guide I was following was incorrect. I wasn't thinking about my math properly, but now I know what the problem was.
atan has a range from -PI/2 to PI/2, which only accounts for half of a circle. When worldPosition.x is negative, atan will not return the correct angle, since the result falls outside the function's range. The angle needs to be adjusted based on which quadrant of the plane the point is in:
Q1: do nothing
Q2: add PI to the angle
Q3: add PI to the angle
Q4: add 2PI to the angle
After this, normalize the angle (divide by 2PI), then pass it to the hsb2rgb function, as in the sketch below.
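A minimal sketch of that adjustment, using the variable names from the vertex shader above (note that GLSL's built-in two-argument atan(y, x) performs this quadrant handling for you, returning an angle in (-PI, PI]):
float angle = atan(worldPosition.z / worldPosition.x); // range (-PI/2, PI/2)
if (worldPosition.x < 0.0) {
    angle += PI;                // Q2 and Q3
} else if (worldPosition.z < 0.0) {
    angle += 2.0 * PI;          // Q4
}
float hue = angle / (2.0 * PI); // normalize to [0, 1)
vColor = hsb2rgb(vec3(hue, radius, 1.0));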
Lately I implemented the FXAA algorithm in my OpenGL application. I haven't fully understood this algorithm yet, but I know that it uses contrast data from the final image to selectively apply blurring. As a post-processing effect that makes sense. But since I use deferred shading in my application, I already have a depth texture of the scene. Using that, it might be much easier and more precise to find the edges where blurring should be applied.
So is there a known antialiasing algorithm that uses the depth texture instead of the final image to find the edges? By fake antialiasing I mean an algorithm that works on a per-pixel basis instead of a per-vertex basis.
After some research I found out that my idea is widely used already in deferred renderers. I decided to post this answer because I came up with my own implementation which I want to share with the community.
Blurring is applied to each pixel based on the gradient changes of the depth and the angle changes of the normals.
// GLSL fragment shader
#version 330
in vec2 coord;
out vec4 image;
uniform sampler2D image_tex;
uniform sampler2D position_tex;
uniform sampler2D normal_tex;
uniform vec2 frameBufSize;
void depth(out float value, in vec2 offset)
{
value = texture(position_tex, coord + offset / frameBufSize).z / 1000.0f;
}
void normal(out vec3 value, in vec2 offset)
{
value = texture(normal_tex, coord + offset / frameBufSize).xyz;
}
void main()
{
// depth
float dc, dn, ds, de, dw;
depth(dc, vec2( 0, 0));
depth(dn, vec2( 0, +1));
depth(ds, vec2( 0, -1));
depth(de, vec2(+1, 0));
depth(dw, vec2(-1, 0));
float dvertical = abs(dc - ((dn + ds) / 2));
float dhorizontal = abs(dc - ((de + dw) / 2));
float damount = 1000 * (dvertical + dhorizontal);
// normals
vec3 nc, nn, ns, ne, nw;
normal(nc, vec2( 0, 0));
normal(nn, vec2( 0, +1));
normal(ns, vec2( 0, -1));
normal(ne, vec2(+1, 0));
normal(nw, vec2(-1, 0));
float nvertical = dot(vec3(1), abs(nc - ((nn + ns) / 2.0)));
float nhorizontal = dot(vec3(1), abs(nc - ((ne + nw) / 2.0)));
float namount = 50 * (nvertical + nhorizontal);
// blur
const int radius = 1;
vec3 blur = vec3(0);
int n = 0;
for(float u = -radius; u <= +radius; ++u)
for(float v = -radius; v <= +radius; ++v)
{
blur += texture(image_tex, coord + vec2(u, v) / frameBufSize).rgb;
n++;
}
blur /= n;
// result
float amount = mix(damount, namount, 0.5);
vec3 color = texture(image_tex, coord).rgb;
image = vec4(mix(color, blur, min(amount, 0.75)), 1.0);
}
For comparison, this is the scene without any anti-aliasing.
This is the result with anti-aliasing applied.
You may need to view the images at their full resolution to judge the effect. In my view the result is adequate for the simple implementation. The best thing is that there are nearly no jagged artifacts when the camera moves.