Using 2D metaballs to draw an outline with a constant thickness

I'm applying the concept of metaballs to a game I'm making in order to show that the player has selected a few ships, like so: http://prntscr.com/klgktf
However, my goal is to keep this outline at a constant thickness, and that's not what I'm getting with the current code.
I'm using a GLSL shader to do this, and I pass the fragment shader a uniform array of positions for the ships (u_metaballs).
Vertex shader:
#version 120
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment shader:
#version 120
uniform vec2 u_metaballs[128];
void main() {
    float intensity = 0.0;
    for (int i = 0; i < 128 && u_metaballs[i].x != 0.0; i++) {
        float r = length(u_metaballs[i] - gl_FragCoord.xy);
        intensity += 1.0 / r;
    }
    gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
    if (intensity > 0.2 && intensity < 0.21)
        gl_FragColor = vec4(0.5, 1.0, 0.7, 0.2);
}
I've tried playing around with the intensity range, and even changing 1 / r to 10000 / (r ^ 4), which (although it makes no sense) helps a bit, though it does not fix the problem.
Any help or suggestions would be greatly appreciated.

After some more thought, this is doable even in a single pass: you just compute the distance to the nearest metaball, render the fragment if that distance falls within the boundary thickness, and discard it otherwise. Here is an example (assuming a single quad over <-1,+1> is rendered covering the whole screen):
Vertex:
// Vertex
varying vec2 pos; // fragment position in world space
void main()
{
    pos = gl_Vertex.xy;
    gl_Position = ftransform();
}
Fragment:
// Fragment
#version 120
varying vec2 pos;
const float r = 0.3;  // metaball radius
const float w = 0.02; // border line thickness
uniform vec2 u_metaballs[5] = vec2[5]
    (
    vec2(-0.25, -0.25),
    vec2(+0.25, -0.25),
    vec2( 0.00, +0.05),
    vec2(+0.30, +0.35),
    vec2(-1000.1, -1000.1) // end of metaballs
    );
void main()
{
    int i;
    float d;
    // d = min distance to any metaball
    for (d = r + r + w + w, i = 0; u_metaballs[i].x > -1000.0; i++)
        d = min(d, length(pos - u_metaballs[i].xy));
    // if outside the border range, ignore the fragment
    if ((d < r) || (d > r + w)) discard;
    // otherwise render it
    gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
Preview:
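A note on the question's original intensity-field approach: the band test (intensity between .2 and .21) has a varying screen-space width because the field's gradient varies with distance from the balls. If you would rather keep the 1/r field than switch to a hard distance test, dividing the distance to the iso-value by fwidth(intensity) converts field units into pixels and keeps the outline a near-constant width. A minimal sketch under that assumption (fwidth is available in fragment shaders since GLSL 1.10):

// Fragment -- a sketch, not part of the answer above
#version 120
uniform vec2 u_metaballs[128];
void main()
{
    float intensity = 0.0;
    for (int i = 0; i < 128 && u_metaballs[i].x != 0.0; i++)
        intensity += 1.0 / length(u_metaballs[i] - gl_FragCoord.xy);
    // fwidth(intensity) approximates the field's change per pixel,
    // so this is the distance to the iso-line measured in pixels
    float pixelDist = abs(intensity - 0.2) / fwidth(intensity);
    if (pixelDist > 2.0) discard; // keep a band roughly 4 px wide
    gl_FragColor = vec4(0.5, 1.0, 0.7, 0.2);
}

The same quantity can also drive smoothstep to fade the band's alpha for an anti-aliased edge.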

Related

Screen-space shadows producing white result

I've been trying to learn screen-space techniques, specifically ray-marching ones, but I have been struggling to get a single working example to continue learning from and to solidify my knowledge. I'm implementing screen-space shadows following this article, but my result is just a white image and I cannot understand why. The code makes sense to me, yet the result does not seem right. I can't tell where I might have gone wrong while attempting this screen-space ray-marching technique and would appreciate any insight that will help me continue learning.
Using Vulkan + GLSL
Full shader: screen_space_shadows.glsl
// calculate screen space shadows
float computeScreenSpaceShadow()
{
    vec3 FragPos = texture(gPosition, uvCoords).rgb;
    vec4 ViewSpaceLightPosition = camera.view * light.LightPosition;
    vec3 LightDirection = ViewSpaceLightPosition.xyz - FragPos.xyz;

    // Ray position and direction in view-space.
    vec3 RayPos = texture(gPosition, uvCoords).xyz; // ray start position
    vec3 RayDirection = normalize(-LightDirection.xyz);

    // Save original depth of the position
    float DepthOriginal = RayPos.z;

    // Ray step
    vec3 RayStep = RayDirection * STEP_LENGTH;

    float occlusion = 0.0;
    for (uint i = 0; i < MAX_STEPS; i++)
    {
        RayPos += RayStep;
        vec2 Ray_UV = ViewToScreen(RayPos);

        // Make sure the UV is inside screen-space
        if (!ValidRay(Ray_UV)) {
            return 1.0;
        }

        // Compute difference between ray depth and camera depth
        float DepthZ = linearize_depth(texture(depthMap, Ray_UV).x);
        float DepthDelta = RayPos.z - DepthZ;

        // Check if the camera cannot see the ray. Ray depth must be
        // larger than camera depth = positive delta
        bool canCameraSeeRay = (DepthDelta > 0.0) && (DepthDelta < THICKNESS);
        bool occludedByOriginalPixel = abs(RayPos.z - DepthOriginal) < MAX_DELTA_FROM_ORIGINAL_DEPTH;
        if (canCameraSeeRay && occludedByOriginalPixel)
        {
            // Mark as occluded
            occlusion = 1.0;
            break;
        }
    }
    return 1.0 - occlusion;
}
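For context, ViewToScreen, ValidRay and linearize_depth are not shown in the excerpt (they live in the linked screen_space_shadows.glsl). A sketch of typical implementations, assuming a camera.projection matrix and NEAR/FAR uniforms that are not in the original code:

// Hypothetical helpers for illustration only
vec2 ViewToScreen(vec3 viewPos)
{
    vec4 clip = camera.projection * vec4(viewPos, 1.0); // view -> clip
    vec2 ndc = clip.xy / clip.w;                        // [-1, +1]
    return ndc * 0.5 + 0.5;                             // [0, 1] UV
}
bool ValidRay(vec2 uv)
{
    return uv.x >= 0.0 && uv.x <= 1.0 && uv.y >= 0.0 && uv.y <= 1.0;
}
float linearize_depth(float d)
{
    // one common form for a perspective projection with [0, 1] depth
    return (NEAR * FAR) / (FAR - d * (FAR - NEAR));
}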
Output

pixi.js: how to draw outline of container while keeping its content transparent

I have a container with several graphics containing circles. I would like to only render this container's outline, without the graphics themselves.
I managed to draw the outlines using OutlineFilter, and I managed to make the container transparent using AlphaFilter, but not both at the same time, no matter in which order I added the filters.
That is technically not possible the way you intend to do it: one shader (a pixi.js filter) doesn't know about the previous shader, such as where the outline was painted or what the original texture alpha was.
Alternatively, you can create a new filter with a new shader that achieves the effect in one pass: sample the alpha around each pixel in a circle, and output the outline color only where the pixel itself is transparent but something opaque is nearby. I'm basing this on the OutlineFilter:
varying vec2 vTextureCoord;
uniform sampler2D uSampler;
uniform vec2 thickness;
uniform vec4 outlineColor;
uniform vec4 filterClamp;

const float DOUBLE_PI = 3.14159265358979323846264 * 2.;

void main(void) {
    vec4 ownColor = texture2D(uSampler, vTextureCoord);
    vec4 curColor;
    float maxAlpha = 0.;
    vec2 displaced;
    for (float angle = 0.; angle <= DOUBLE_PI; angle += 0.1) {
        displaced.x = vTextureCoord.x + thickness.x * cos(angle);
        displaced.y = vTextureCoord.y + thickness.y * sin(angle);
        curColor = texture2D(uSampler, clamp(displaced, filterClamp.xy, filterClamp.zw));
        maxAlpha = max(maxAlpha, curColor.a);
    }
    float resultAlpha = maxAlpha * step(ownColor.a, 0.0) > 0. ? 1. : 0.0;
    gl_FragColor = vec4(outlineColor.rgb * resultAlpha, resultAlpha);
}
Example result as in the pixi-filters demos:

Compute Shader Corrupting Vertex Buffer

I'm making a tutorial for computing tangents and bitangents in a WGPU (Vulkan GLSL) compute shader. I'm creating the vertex buffer on the CPU from a .obj I made in Blender.
Here's the code for the compute shader.
#version 450
#define VERTICES_PER_TRIANGLE 3
layout(local_size_x = VERTICES_PER_TRIANGLE) in;

// Should match the struct in model.rs
struct ModelVertex {
    vec3 position;
    vec2 tex_coords;
    vec3 normal;
    vec3 tangent;
    vec3 bitangent;
};

layout(std140, set=0, binding=0) buffer SrcVertexBuffer {
    ModelVertex srcVertices[];
};
layout(std140, set=0, binding=1) buffer DstVertexBuffer {
    ModelVertex dstVertices[];
};
layout(std140, set=0, binding=2) buffer IndexBuffer {
    uint Indices[];
};

void main() {
    uint index = gl_GlobalInvocationID.x;

    // Grab the indices for the triangle
    uint i0 = Indices[index];
    uint i1 = Indices[index + 1];
    uint i2 = Indices[index + 2];

    // Grab the vertices for the triangle
    ModelVertex v0 = srcVertices[i0];
    ModelVertex v1 = srcVertices[i1];
    ModelVertex v2 = srcVertices[i2];

    // Grab the position and uv components of the vertices
    vec3 pos0 = v0.position;
    vec3 pos1 = v1.position;
    vec3 pos2 = v2.position;
    vec2 uv0 = v0.tex_coords;
    vec2 uv1 = v1.tex_coords;
    vec2 uv2 = v2.tex_coords;

    // Calculate the edges of the triangle
    vec3 delta_pos1 = pos1 - pos0;
    vec3 delta_pos2 = pos2 - pos0;

    // This will give us a direction to calculate the
    // tangent and bitangent
    vec2 delta_uv1 = uv1 - uv0;
    vec2 delta_uv2 = uv2 - uv0;

    // Solving the following system of equations will
    // give us the tangent and bitangent.
    //     delta_pos1 = delta_uv1.x * T + delta_uv1.y * B
    //     delta_pos2 = delta_uv2.x * T + delta_uv2.y * B
    // Luckily, the place I found this equation provided
    // the solution!
    float r = 1.0 / (delta_uv1.x * delta_uv2.y - delta_uv1.y * delta_uv2.x);
    vec3 tangent = (delta_pos1 * delta_uv2.y - delta_pos2 * delta_uv1.y) * r;
    vec3 bitangent = (delta_pos2 * delta_uv1.x - delta_pos1 * delta_uv2.x) * r;

    // We'll use the same tangent/bitangent for each vertex in the triangle
    dstVertices[i0].tangent = tangent;
    dstVertices[i1].tangent = tangent;
    dstVertices[i2].tangent = tangent;
    dstVertices[i0].bitangent = bitangent;
    dstVertices[i1].bitangent = bitangent;
    dstVertices[i2].bitangent = bitangent;
}
This leads to an image like the following.
The problem occurs in the last six lines.
dstVertices[i0].tangent = tangent;
dstVertices[i1].tangent = tangent;
dstVertices[i2].tangent = tangent;
dstVertices[i0].bitangent = bitangent;
dstVertices[i1].bitangent = bitangent;
dstVertices[i2].bitangent = bitangent;
If I delete these lines, the output is fine (albeit the lighting's all wrong due to the tangent and bitangent being zero vectors).
Why is modifying the tangent and bitangent messing with the position of the vertices?
Here's the rest of the code for context. https://github.com/sotrh/learn-wgpu/tree/compute/code/intermediate/tutorial14-compute
EDIT:
Here's the code where I'm calling the compute shader.
let src_vertex_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
    label: Some(&format!("{:?} Vertex Buffer", m.name)),
    contents: bytemuck::cast_slice(&vertices),
    // UPDATED!
    usage: wgpu::BufferUsage::STORAGE,
});
let dst_vertex_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
    label: Some(&format!("{:?} Vertex Buffer", m.name)),
    contents: bytemuck::cast_slice(&vertices),
    // UPDATED!
    usage: wgpu::BufferUsage::VERTEX | wgpu::BufferUsage::STORAGE,
});
let index_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
    label: Some(&format!("{:?} Index Buffer", m.name)),
    contents: bytemuck::cast_slice(&m.mesh.indices),
    // UPDATED!
    usage: wgpu::BufferUsage::INDEX | wgpu::BufferUsage::STORAGE,
});
let binding = BitangentComputeBinding {
    dst_vertex_buffer,
    src_vertex_buffer,
    index_buffer,
    num_elements: m.mesh.indices.len() as u32,
};
// Calculate the tangents and bitangents
let calc_bind_group = self.binder.create_bind_group(
    &binding,
    device,
    Some("Mesh BindGroup"),
);
let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
    label: Some("Tangent and Bitangent Calc"),
});
{
    let mut pass = encoder.begin_compute_pass();
    pass.set_pipeline(&self.pipeline);
    pass.set_bind_group(0, &calc_bind_group, &[]);
    pass.dispatch(binding.num_elements as u32 / 3, 1, 1);
}
queue.submit(std::iter::once(encoder.finish()));
device.poll(wgpu::Maintain::Wait);
The shader is supposed to loop through all the triangles in the mesh and compute the tangent and bitangent using the position and uv coordinates of the vertices of that triangle. I'm guessing that vertices shared by multiple triangles are getting written to at the same time, causing this memory corruption.
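As an aside on the indexing (a sketch of one possible intent, not the fix arrived at below): with local_size_x = 3 and a dispatch of num_elements / 3 workgroups, gl_GlobalInvocationID.x takes every value from 0 to num_elements - 1, so the three-index windows read by neighbouring invocations overlap. One invocation per whole triangle would index like this:

#version 450
layout(local_size_x = 1) in;
layout(std140, set = 0, binding = 2) buffer IndexBuffer {
    uint Indices[];
};
void main() {
    // one invocation per triangle: dispatch(index_count / 3, 1, 1)
    uint triangle = gl_GlobalInvocationID.x;
    uint i0 = Indices[3u * triangle + 0u];
    uint i1 = Indices[3u * triangle + 1u];
    uint i2 = Indices[3u * triangle + 2u];
    // ... compute the tangent/bitangent for this triangle only ...
}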
I don't think it's a problem with shaders elsewhere, as I'm using the same model for the light, and the vertex shader responsible for that doesn't use the tangent and bitangent at all.
#version 450
layout(location=0) in vec3 a_position;
layout(location=0) out vec3 v_color;

layout(set=0, binding=0)
uniform Uniforms {
    vec3 u_view_position;
    mat4 u_view_proj;
};

layout(set=1, binding=0)
uniform Light {
    vec3 u_position;
    vec3 u_color;
};

// Let's keep our light smaller than our other objects
float scale = 0.25;

void main() {
    vec3 v_position = a_position * scale + u_position;
    gl_Position = u_view_proj * vec4(v_position, 1);
    v_color = u_color;
}
Looking at the vertex data in RenderDoc shows that the position data is getting messed up.
Also, here's what the cubes look like if I set the tangent and bitangent to vec3(0, 1, 0).
My only guess is that storage buffers have a byte-alignment rule that I'm unaware of. I know that's the case for uniform buffers, but I'm using storage buffers for my instancing code, and that doesn't seem to have any issues.
It turns out that Vulkan-style GLSL aligns struct members to the largest field in the struct when using std430.
https://github.com/KhronosGroup/glslang/issues/264
In my case that's vec3. The vec2 tex_coords is throwing it off, causing the shader to pull data from the wrong parts of the vertex buffer.
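To make the mismatch concrete, these are the offsets the GPU reads under those rules (vec3 and the struct align to 16 bytes, vec2 to 8) next to the tightly packed layout the CPU wrote; std140, as declared in the shader, gives the same offsets for this struct:

struct ModelVertex {
    vec3 position;   // GPU offset  0   CPU offset  0
    vec2 tex_coords; // GPU offset 16   CPU offset 12
    vec3 normal;     // GPU offset 32   CPU offset 20
    vec3 tangent;    // GPU offset 48   CPU offset 32
    vec3 bitangent;  // GPU offset 64   CPU offset 44
};                   // GPU stride 80   CPU stride 56

With an 80-byte GPU stride against a 56-byte CPU stride, a write to one vertex's tangent lands in bytes that belong to a different vertex's data on the CPU side, which is consistent with the corrupted positions seen in RenderDoc.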
The fix was to change the struct in model_load.comp to specify the individual components instead.
struct ModelVertex {
    float x; float y; float z;
    float uv; float uw;
    float nx; float ny; float nz;
    float tx; float ty; float tz;
    float bx; float by; float bz;
};
Now the base alignment is a float (4 bytes), and the shader reads the vertex buffer data properly.
I'm aware there's a packed layout, but shaderc doesn't allow me to use it, for reasons beyond me. Honestly I find this quite annoying and cumbersome, but it works.
There's still a flaw in the result: there's some banding on the edge faces of the cube. My guess is that it's due to a single vertex being shared by multiple triangles, but that's another problem I'll have to look into later.

Smoothing pixel-by-pixel drawing in Processing

I picked up Processing today and wrote a program to generate a double-slit interference pattern. After tweaking the values a little, it works, but the pattern generated is fuzzier than what is possible in some other programs. Here's a screenshot:
As you can see, the fringes are not as smooth at the edges as I believe is possible. I expect them to look like this or this.
This is my code:
// All quantities in mm
float slit_separation = 0.005;
float screen_dist = 50;
float wavelength = 5e-4f;
PVector slit1, slit2;
float scale = 1e+1f;

void setup() {
  size(500, 500);
  colorMode(HSB, 360, 100, 1);
  noLoop();
  background(255);
  slit_separation *= scale;
  screen_dist *= scale;
  wavelength *= scale;
  slit1 = new PVector(-slit_separation / 2, 0, -screen_dist);
  slit2 = new PVector(slit_separation / 2, 0, -screen_dist);
}

void draw() {
  translate(width / 2, height / 2);
  for (float x = -width / 2; x < width / 2; x++) {
    for (float y = -height / 2; y < height / 2; y++) {
      PVector pos = new PVector(x, y, 0);
      float path_diff = abs(PVector.sub(slit1, pos).mag() - PVector.sub(slit2, pos).mag());
      float parameter = map(path_diff % wavelength, 0, wavelength, 0, 2 * PI);
      stroke(100, 100, pow(cos(parameter), 2));
      point(x, y);
    }
  }
}
My code is mathematically correct, so I am wondering if there's something wrong in how I transform the physical values to pixels on screen.
I'm not totally sure what you're asking: what exactly do you expect it to look like? Would it be possible to narrow this down to a single line that's misbehaving instead of the nested for loop?
But just taking a guess at what you're talking about: keep in mind that Processing enables anti-aliasing by default. To disable it, you have to call the noSmooth() function. You can call it in your setup() function:
void setup() {
  size(500, 500);
  noSmooth();
  //rest of your code
}
It's pretty obvious if you compare them side by side:
If that's not what you're talking about, please post an MCVE of just one or two lines instead of a nested for loop. It would also be helpful to include a mockup of what you'd expect versus what you're getting. Good luck!

Is there a faked antialiasing algorithm using the depth buffer?

Lately I implemented the FXAA algorithm in my OpenGL application. I haven't fully understood this algorithm yet, but I know that it uses contrast data from the final image to selectively apply blurring. As a post-processing effect, that makes sense. But since I use deferred shading in my application, I already have a depth texture of the scene. Using that, it might be much easier and more precise to find the edges for applying blur there.
So is there a known anti-aliasing algorithm that uses the depth texture instead of the final image to find the edges? By faked I mean an anti-aliasing algorithm that works on a per-pixel basis instead of a vertex basis.
After some research I found out that my idea is already widely used in deferred renderers. I decided to post this answer because I came up with my own implementation, which I want to share with the community.
Based on the gradient changes of the depth and the angle changes of the normals, blurring is applied to the pixel.
// GLSL fragment shader
#version 330

in vec2 coord;
out vec4 image;

uniform sampler2D image_tex;
uniform sampler2D position_tex;
uniform sampler2D normal_tex;
uniform vec2 frameBufSize;

void depth(out float value, in vec2 offset)
{
    value = texture2D(position_tex, coord + offset / frameBufSize).z / 1000.0f;
}

void normal(out vec3 value, in vec2 offset)
{
    value = texture2D(normal_tex, coord + offset / frameBufSize).xyz;
}

void main()
{
    // depth
    float dc, dn, ds, de, dw;
    depth(dc, vec2( 0,  0));
    depth(dn, vec2( 0, +1));
    depth(ds, vec2( 0, -1));
    depth(de, vec2(+1,  0));
    depth(dw, vec2(-1,  0));
    float dvertical   = abs(dc - ((dn + ds) / 2));
    float dhorizontal = abs(dc - ((de + dw) / 2));
    float damount = 1000 * (dvertical + dhorizontal);

    // normals
    vec3 nc, nn, ns, ne, nw;
    normal(nc, vec2( 0,  0));
    normal(nn, vec2( 0, +1));
    normal(ns, vec2( 0, -1));
    normal(ne, vec2(+1,  0));
    normal(nw, vec2(-1,  0));
    float nvertical   = dot(vec3(1), abs(nc - ((nn + ns) / 2.0)));
    float nhorizontal = dot(vec3(1), abs(nc - ((ne + nw) / 2.0)));
    float namount = 50 * (nvertical + nhorizontal);

    // blur
    const int radius = 1;
    vec3 blur = vec3(0);
    int n = 0;
    for (float u = -radius; u <= +radius; ++u)
        for (float v = -radius; v <= +radius; ++v)
        {
            blur += texture2D(image_tex, coord + vec2(u, v) / frameBufSize).rgb;
            n++;
        }
    blur /= n;

    // result
    float amount = mix(damount, namount, 0.5);
    vec3 color = texture2D(image_tex, coord).rgb;
    image = vec4(mix(color, blur, min(amount, 0.75)), 1.0);
}
For comparison, this is the scene without any anti-aliasing.
This is the result with anti-aliasing applied.
You may need to view the images at their full resolution to judge the effect. In my view the result is adequate for such a simple implementation. The best thing is that there are nearly no jagged artifacts when the camera moves.
