Constructing a Line in the Vertex Shader - Removing Perspective Scaling

I'm trying to implement a line renderer that expands the line vertices in the vertex shader, so that they expand in screen space and all segments of a line are exactly the same thickness, regardless of how far they are from the camera, each other, or the origin.
I first tried implementing my own version, but could not seem to cancel out the perspective scaling that happens automatically in the graphics pipeline. I then adapted some components from this website: https://mattdesl.svbtle.com/drawing-lines-is-hard (see the Screen-Space Projected Lines section). See their full vertex shader here: https://github.com/mattdesl/webgl-lines/blob/master/projected/vert.glsl
In my version, the C++ code uploads two vertices for each end of the line, along with a direction vector pointing from line point A to B, and a scalar sign (-1 or +1) used to expand the line in opposite perpendicular directions to give it thickness. The vertex shader then constructs two screen-space coordinates, generates a screen-space direction, then generates a perpendicular direction (using the signed scalar) from that.
In the website's code, they upload 3 positions (prev, cur, next) - I believe so that they can generate joints. But in my case, I just want a simple segment, so I upload the current position, along with a world-space direction to the next position (all vertices of a segment get the same world space line direction). Then in my vertex shader, I construct the "next world position" by adding the world line direction to the current world/vertex position, then transform both into screen space. I probably could have just transformed the world space direction into screen space, but I'm currently trying to rule out all sources of unknowns.
Here is the code I have so far. I believe I've transformed and scaled my vectors just as they have, but my lines are still scaling as they change depth. I'm not sure if I've missed something from the webpage, or if this is the result they were after. But since they are dividing their projected xy coordinates by their projected w coordinate, it sure seems like they were trying to cancel out the scaling.
The closest I've come to achieving the result I want (constant thickness) was to override the w component of all projected positions with the Scene.ViewProj[3][3] component. It almost seemed to work that way, but there was still some strange scaling when the view was rotated. Anyway, here is the code trying to emulate the logic from the website. Any advice on how to make this work would be very much appreciated:
struct sxattrScene
{
    float4x4 Eye;      // world space transform of the camera
    float4x4 View;     // view transform - inverted camera
    float4x4 Proj;     // projection transform for camera perspective/FOV
    float4x4 ViewProj; // view * projection transform for camera
    float4x4 Screen;   // screen projection transform for 2D blitting
    float2 Display;    // size of display
    float Aspect;      // aspect ratio of display sizes
    float TimeStep;    // time that advanced from last frame to this one, in milliseconds
};
ConstantBuffer<sxattrScene> Scene; // constant buffer scene
// input vertex
struct vinBake
{
    // mesh inputs
    float4 Position : ATTRIB0; // world position of the center of the line (2 verts at each end)
    float4 Color    : ATTRIB1; // color channels
    float3 TexCoord : ATTRIB2; // x=sign, y=thickness, z=feather
    // enhanced logic
    float4 Prop : ATTRIB3; // xyz contains direction of line (from end points A -> B)
    float4 Attr : ATTRIB4; // not used here
};
// 3D line drawing interpolator
struct lerpLine3D
{
    float4 ClipPos : SV_POSITION; // projected clip-space screen position of vertex
    float4 Diffuse : COLOR0;      // diffuse color
    float3 ScrPos  : TEXCOORD0;   // screen-space position of this point
    float Factor   : TEXCOORD1;   // factor value of this position (0->1)
    float Feather  : TEXCOORD2;   // falloff of line
};
// vertex shader
lerpLine3D vs(vinBake vin)
{
    // prepare output
    lerpLine3D lerp;
    // float ww = Scene.ViewProj[3][3];
    // generate projected screen position
    lerp.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz, 1.0 ) );
    // generate a fake "next position" using the line direction, then transform into screen space
    float4 next_proj = mul( Scene.ViewProj, float4( vin.Position.xyz + vin.Prop.xyz, 1.0 ) );
    // remove perspective from both positions
    float2 curr_screen = lerp.ClipPos.xy / lerp.ClipPos.w;
    float2 next_screen = next_proj.xy / next_proj.w;
    // correct for aspect ratio
    curr_screen.x *= Scene.Aspect;
    next_screen.x *= Scene.Aspect;
    // generate a direction between these two screen positions
    float2 dir = normalize( next_screen - curr_screen );
    // extract sign direction .. -1 (neg side) to +1 (pos side)
    float sign = vin.TexCoord.x;
    // extract line size
    float thickness = vin.TexCoord.y;
    // extract alpha falloff (used in pixel shader)
    lerp.Feather = vin.TexCoord.z;
    // remap sign (-1 to +1) into line factor (0 to 1) - used in ps
    lerp.Factor = ( sign + 1.0 ) * 0.5;
    // compute our expanse, defining how far to push our line vertices out from the starting center point
    float expanse = thickness * sign;
    // compute our offset vector
    float4 offset = float4( -dir.y * expanse / Scene.Aspect, dir.x * expanse, 0.0, 1.0 );
    // push our projected position by this offset
    lerp.ClipPos += offset;
    // copy diffuse color
    lerp.Diffuse = vin.Color;
    // return lerp data
    return lerp;
}
// compute a slope for the alpha falloff of a line draw
float ComputeLineAlpha(float t, float feather)
{
    // slope feather to make it more useful
    float ft = 1.0 - feather;
    float ft4 = ft*ft*ft*ft;
    // compute slope
    return min( 1.0, t * 40.0 * ( 1.0 - t ) * ( 0.1 + ft4 ) );
}
// pixel shader
float4 ps(lerpLine3D lerp) : SV_TARGET
{
    // compute line slope alpha
    float alpha = ComputeLineAlpha( lerp.Factor, lerp.Feather );
    // return the finished color while scaling the curve with alpha
    return float4( lerp.Diffuse.rgb, lerp.Diffuse.a * alpha );
}
Edit:
I think I'm really close to figuring this out. I have things set up so that the lines are scaled correctly as long as all parts of a visible line are in front of the camera. Here is the updated vertex shader code, which is simpler than before:
lerpLine3D main(vinBake vin)
{
    // prepare output
    lerpLine3D lerp;
    // generate projected screen position
    lerp.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz, 1.0 ) );
    // generate fake clip-space point in the direction of the line
    // + vin.Prop.xyz contains the world space direction of the line itself (A->B)
    float4 next_proj = mul( Scene.ViewProj, float4( vin.Position.xyz + vin.Prop.xyz, 1.0 ) );
    // generate a direction between these two screen positions
    float2 dir = normalize( next_proj.xy - lerp.ClipPos.xy );
    // extract sign direction .. -1 (neg side) to +1 (pos side)
    float sign = vin.TexCoord.x;
    // extract line size from input
    float thickness = vin.TexCoord.y;
    // extract alpha falloff from input
    lerp.Feather = vin.TexCoord.z;
    // remap sign (-1 to +1) into line factor (0 to 1)
    lerp.Factor = ( sign + 1.0 ) * 0.5;
    // compute our expanse, defining how far to push our line vertices out from the starting center point
    float expanse = thickness * sign;
    // compute our offset vector
    float2 offset = float2( -dir.y * expanse, dir.x * expanse * Scene.Aspect );
    lerp.ClipPos.xy += offset * abs( lerp.ClipPos.w * 0.001 ); // <----- important part
    // copy diffuse color
    lerp.Diffuse = vin.Color;
    // return lerp data
    return lerp;
}
However, there is one serious problem I could use some help with, if anyone knows how to pull it off. Notice the line in the updated code above with the "important part" comment. The reason I placed an abs() there is that sometimes the end-points of a single line segment can cross through the camera/screen plane. In fact, this is pretty common when drawing long lines, such as for a grid.
Also notice the 0.001 on that same line, which is an arbitrary number that I plugged in to make the scale similar to pixel scaling. But I'm pretty sure there is an exact way to calculate this scaling that will take things like lines crossing the screen plane into account.
The updated code above seems to work really well as long as both ends of the line segment are in front of the camera. But when one end is behind the camera, the line is expanded incorrectly. My understanding of the w component and perspective scaling is very limited, beyond knowing that things that are further away are smaller. The w component seems to be heavily derived from the z/depth component after transforming into clip space, but I'm not sure what its min/max range would be under normal 3D circumstances. I'm wondering if just having the correct scalar on that line of code might fix the problem - something like this:
lerp.ClipPos.xy += offset * ((lerp.ClipPos.w-MIN_W_VALUE)/ENTIRE_W_RANGE);
But I'm honestly not familiar with these concepts enough to figure this out. Would anyone be able to point me in the right direction?
Edit: Well, in my engine at least, the w component seems to literally just be world-space depth, relative to the camera. So if something is 100 units in front of the camera, its w value will be 100. And if it's -100 units behind the camera, then it will be -100. Unfortunately, that seems to leave no fixed range to lock it into. So it's possible I'm going about this the wrong way. Anyway, I would really appreciate any advice.
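For what it's worth, there is an exact scale factor for this. Below is a minimal sketch, assuming Scene.Display holds the viewport size in pixels (as declared in the constant buffer above) and that dir is the screen-space line direction computed after dividing both endpoints by their own w (as in the first shader). After the hardware divides ClipPos.xy by ClipPos.w, the resulting [-1,1] NDC range is stretched across the viewport, so one pixel spans 2.0 / Display NDC units; multiplying the offset by w before output pre-cancels that divide:
// NDC units covered by one pixel on each axis (no separate aspect
// correction needed, since x and y are scaled independently here)
float2 ndc_per_pixel = 2.0 / Scene.Display;
// perpendicular offset, in pixels
float2 offset = float2( -dir.y, dir.x ) * thickness * sign;
// scaling by w cancels the later perspective divide, so the vertex moves
// by exactly 'thickness' pixels on screen at any depth
lerp.ClipPos.xy += offset * ndc_per_pixel * lerp.ClipPos.w;
This replaces the arbitrary 0.001, but it cannot fix an endpoint behind the camera, where w is negative: no per-vertex scale factor can recover the correct expansion direction there. The usual fix is to clip the segment against the near plane (on the CPU, or in the shader before any divide) so that both endpoints always have positive w; see the note on near-plane clipping under the gl_FragCoord question below.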

Related

Is it possible to test if an arbitrary pixel is modifiable by the shader?

I am writing a spatial shader in Godot to pixelate an object.
Previously, I tried to write outside of an object; however, that is only possible in CanvasItem shaders, and now I am going back to 3D shaders due to rendering annoyances (I am unable to selectively hide items without using the culling mask, which, being limited to 20 layers, is not an extensible solution).
My naive approach:
Define a pixel "cell" resolution (ie. 3x3 real pixels)
For each fragment:
If the entire "cell" of real pixels is within the models draw bounds, color the current pixel as per the lower-left (where the pixel that has coordinates that are the multiple of the cell resolution).
If any pixel of the current "cell" is out of the draw bounds, set alpha to 1 to erase the entire cell.
psuedo-code for people asking for code of the likely non-existant functionality that I am seeking:
int cell_size = 3;
fragment {
    // origin (lower-left pixel) of the cell this fragment belongs to
    vec2 cell_origin = vec2(FRAGCOORD.x - mod(FRAGCOORD.x, float(cell_size)),
                            FRAGCOORD.y - mod(FRAGCOORD.y, float(cell_size)));
    // check every pixel of the cell to see if all are part of the object being drawn to
    int erase_pixel = 0;
    for (int y = 0; y < cell_size; y++) {
        for (int x = 0; x < cell_size; x++) {
            if (uv_in_model(cell_origin + vec2(x, y)) == false) {
                erase_pixel = 1;
            }
        }
    }
    albedo.a = float(erase_pixel);
}
tl;dr, is it possible to know if any given point will be called by the fragment function?
On your object's material there should be a property called Next Pass. Add a new Spatial Material in this section, open up flags and check transparent and unshaded, and then right-click it to bring up the option to convert it to a Shader Material.
Now, open up the new Shader Material's Shader. The last process should have created a Shader formatted with a fragment() function containing the line vec4 albedo_tex = texture(texture_albedo, base_uv);
In this line, you can replace "texture_albedo" with "SCREEN_TEXTURE" and "base_uv" with "SCREEN_UV". This should make the new shader look like nothing has changed, because the next pass material is just sampling the screen from the last pass.
Above that, make a variable called something along the lines of "pixelated" and set it to the following expression:
vec2 pixelated = floor(SCREEN_UV * scale) / scale; where scale is a float or vec2 containing the pixel size. Finally replace SCREEN_UV in the albedo_tex definition with pixelated.
After this, you can have a float depth which samples DEPTH_TEXTURE with pixelated like this:
float depth = texture(DEPTH_TEXTURE, pixelated).r;
This depth value will be very large for pixels that are just trying to render the background onto your object. So, add a conditional statement:
if (depth > 100000.0f) { ALPHA = 0.0f; }
As long as the flags on this new next pass shader were set correctly (transparent and unshaded) you should have a quick-and-dirty pixelator. I say this because it has some minor artifacts around the edges, but you can make scale a uniform variable and set it from the editor and scripts, so I think it works nicely.
"Testing if a pixel is modifiable" in your case means testing if the object should be rendering it at all with that depth conditional.
Here's the full shader with my modifications from the comments
// NOTE: Shader automatically converted from Godot Engine 3.4.stable's SpatialMaterial.
shader_type spatial;
render_mode blend_mix,depth_draw_opaque,cull_back,unshaded;
//the size of pixelated blocks on the screen relative to pixels
uniform int scale;
void vertex() {
}
//vec2 representation of one used for calculation
const vec2 one = vec2(1.0f, 1.0f);
void fragment() {
    //scale SCREEN_UV up to the size of the viewport over the pixelation scale
    //assure scale is a multiple of 2 to avoid artefacts
    vec2 pixel_scale = VIEWPORT_SIZE / float(scale * 2);
    vec2 pixelated = SCREEN_UV * pixel_scale;
    //truncate the decimal place from the pixelated uvs and then shift them over by half a pixel
    pixelated = pixelated - mod(pixelated, one) + one / 2.0f;
    //scale the pixelated uvs back down to the screen
    pixelated /= pixel_scale;
    vec4 albedo_tex = texture(SCREEN_TEXTURE, pixelated);
    ALBEDO = albedo_tex.rgb;
    ALPHA = 1.0f;
    float depth = texture(DEPTH_TEXTURE, pixelated).r;
    if (depth > 10000.0f)
    {
        ALPHA = 0.0f;
    }
}

Converting X, Z coords to RGB using GLSL shaders

I have a Three.js scene that contains a 100x100 plane centred at the origin (i.e. min coord: (-50,-50), max coord: (50,50)). I am trying to have the plane appear as a colour wheel by using the x and z coords in a custom GLSL shader. Using this guide (see HSB in polar coordinates, towards the bottom of the page) I have gotten my shader code working with my Three.js scene, but it is not quite right.
I have played around tweaking all the variables that make sense to me, but as you can see in the screenshot, the colours change twice as often as they should. My math intuition says to just divide the angle by 2, but when I tried that it was completely incorrect.
I know the solution is very simple but I have tried for a couple hours and I haven't got it.
How do I turn my shader that I currently have into one that makes exactly 1 full colour rotation in 2pi radians?
EDIT: here is the relevant shader code in plain text
varying vec3 vColor;
const float PI = 3.1415926535897932384626433832795;
uniform float delta;
uniform float scale;
uniform float size;
vec3 hsb2rgb( in vec3 c ){
    vec3 rgb = clamp(abs(mod(c.x*6.0+vec3(0.0,4.0,2.0), 6.0)-3.0)-1.0, 0.0, 1.0);
    rgb = rgb*rgb*(3.0-2.0*rgb);
    return c.z * mix( vec3(1.0), rgb, c.y);
}
void main()
{
    vec4 worldPosition = modelMatrix * vec4(position, 1.0);
    float r = 0.875;
    float g = 0.875;
    float b = 0.875;
    if (worldPosition.y > 0.06 || worldPosition.y < -0.06) {
        vec2 toCenter = vec2(0.5) - vec2((worldPosition.z+50.0)/100.0, (worldPosition.x+50.0)/100.0);
        float angle = atan(worldPosition.z/worldPosition.x);
        float radius = length(toCenter) * 2.0;
        vColor = hsb2rgb(vec3((angle/(PI))+0.5,radius,1.0));
    } else {
        vColor = vec3(r,g,b);
    }
    vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
    gl_PointSize = size * (scale/length(mvPosition.xyz));
    gl_Position = projectionMatrix * mvPosition;
}
I have discovered that the guide I was following was incorrect. I wasn't thinking about my math properly, but I now know what the problem was.
atan has a range from -PI/2 to PI/2, which only accounts for half of a circle. When worldPosition.x is negative, atan will not return the correct angle, since the result falls outside the function's range. The angle needs to be adjusted based on which quadrant of the plane it is in:
Q1: do nothing
Q2: add PI to the angle
Q3: add PI to the angle
Q4: add 2PI to the angle
After this, normalize the angle (divide by 2PI), then pass it to the hsb2rgb function.
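As an aside, the two-argument arctangent does this quadrant bookkeeping automatically: GLSL's atan(y, x) overload (atan2 in HLSL) accounts for the signs of both inputs and returns an angle over the full (-PI, PI] range. A minimal sketch in HLSL syntax, reusing the names from the shader above:
// the two-argument form covers the whole circle, so the per-quadrant
// adjustments listed above become unnecessary
float angle = atan2(worldPosition.z, worldPosition.x); // range (-PI, PI]
float hue = angle / (2.0 * PI) + 0.5;                  // normalize to [0, 1)
vColor = hsb2rgb(float3(hue, radius, 1.0));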

equivalent to gl_FragCoord in glsl vertex shader

I'm trying to get the screen position of a vertex, in pixels, inside a vertex shader.
I saw some other posts here, but I can't find an answer that works for me.
This is what I've got in my vertex shader:
#version 400
layout (location = 0) in vec3 inPosition;
uniform mat4 MVP; // modelViewProjection
uniform vec2 window;
void main()
{
    // vertex in screen space
    vec2 fake_frag_coord = (MVP * vec4(inPosition,1.0)).xy;
    float X = (fake_frag_coord.x*window.x/2.0) + window.x;
    float Y = (fake_frag_coord.y*window.y/2.0) + window.y;
}
It's not working very well, and I know it's a strange thing to do inside a vertex shader, but I want to multiply my vertex offset by a 2D texture, so I need to find the pixel the vertex is on top of to be able to multiply it by the pixel of the texture.
thanks!
Luiz
I have corrected your vertex shader with proper terms, and shown you the exact sequence of transformations that actually happens when GL computes gl_FragCoord (window-space).
#version 400
layout (location = 0) in vec4 inPosition; // Always use vec4, it makes life easier!
uniform mat4 MVP; // modelViewProjection
uniform vec2 window;
void main()
{
    // Vertex in clip-space
    vec4 fake_frag_coord = (MVP * inPosition);               // Range: [-w,w]^4
    // Vertex in NDC-space
    fake_frag_coord.xyz /= fake_frag_coord.w;                // Rescale: [-1,1]^3
    fake_frag_coord.w    = 1.0 / fake_frag_coord.w;          // Invert W
    // Vertex in window-space
    fake_frag_coord.xyz  = fake_frag_coord.xyz * 0.5 + 0.5;  // Rescale: [0,1]^3
    fake_frag_coord.xy  *= window;                           // Scale and Bias for Viewport
    // Assume depth range: [0,1] --> No need to adjust fake_frag_coord.z
    [...]
}
Texture coordinates and window-space coordinates are very different things, however. Generally you need normalized coordinates for traditional texture fetches, that means you want the coordinates in the range [0,1].
Luckily window-space and texture-space share the same origin convention (0,0) = bottom-left, so you can cut out the line below to get the appropriate texture coordinates:
fake_frag_coord.xy *= window; // Scale and Bias for Viewport
I think Andon M. Coleman's answer is fine. However, I would like to point out a more general issue with the approach discussed in the question: there might be no meaningful screen-space position for a vertex at all.
The vertex might lie outside the viewing frustum. This will not be a problem if the vertices you draw are guaranteed to lie in the frustum, or if you are drawing only points.
But it will fail if you have primitives intersecting the near plane. You might think that in such a case, you just get some coordinates which are outside [-1,1] in NDC space, and if you just use them to assign some output value for the vertex, the clipping stage will make it right. But that assumption is wrong. You might get values which are perfectly inside [-1,1] in NDC space even for vertices which actually lie behind the camera, and it will appear as if those vertices lie in front of the camera. And no subsequent clipping stage is able to fix this.
The only way to get this right would be to actually carry out the clipping operation, before doing the divide by w. And this is something you don't want to do in a vertex shader.
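For completeness, here is a minimal sketch of that clip step in HLSL-style syntax, assuming Direct3D's 0 <= z <= w clip volume (under OpenGL's -w <= z <= w convention, test z + w instead of z):
// clip the segment (p0, p1), given in clip space, against the near plane
// before dividing by w; moves p0 up to the plane if it lies behind it
float4 ClipSegmentToNearPlane(float4 p0, float4 p1)
{
    if (p0.z < 0.0)
    {
        // parameter where the segment's z crosses the near plane at z = 0
        float t = p0.z / (p0.z - p1.z);
        p0 = lerp(p0, p1, t);
    }
    return p0;
}
Only after such an adjustment does the divide by w (and any screen-space math built on it) become meaningful for segments that cross the near plane.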
If you want to get this working on the js part of things, this is how I adapted Andon M. Coleman's reply:
var winW = window.innerWidth;
var winH = window.innerHeight;
camera.updateProjectionMatrix();
// Not sure about the order of these! I was using an orthographic camera so it
// didn't matter, but double-check the order if it doesn't work!
// (multiplyMatrices avoids mutating the camera's own projection matrix)
var MVP = new THREE.Matrix4().multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
// position to vertex clip-space ('position' is a THREE.Vector4 with w = 1)
var fake_frag_coord = position.applyMatrix4(MVP); // Range: [-w,w]^4
// vertex to NDC-space
fake_frag_coord.x /= fake_frag_coord.w; // Rescale: [-1,1]^3
fake_frag_coord.y /= fake_frag_coord.w;
fake_frag_coord.z /= fake_frag_coord.w;
fake_frag_coord.w = 1.0 / fake_frag_coord.w; // Invert W
// NDC-space to [0,1]
fake_frag_coord.x = fake_frag_coord.x * 0.5 + 0.5;
fake_frag_coord.y = fake_frag_coord.y * 0.5 + 0.5;
fake_frag_coord.z = fake_frag_coord.z * 0.5 + 0.5;
// Scale and bias for the viewport to get window-space pixels
fake_frag_coord.x *= winW;
fake_frag_coord.y *= winH;

Rotating object relative to mouse position

At the moment I'm using the dot product of the mouse position and (0, 1) to generate radians to rotate an object in three.js.
The code below works OK, but the object 'jumps' because the radian angle flips from positive to negative when the clientX value crosses window.innerWidth / 2.
onDocumentMouseMove : function(event) {
    // rotate circle relative to current mouse pos
    var oldPos = new THREE.Vector2(0, 1);
    Template.Main.mouseCurrPos = new THREE.Vector2((event.clientX / window.innerWidth ) * 2 - 1, - (event.clientY / window.innerHeight) * 2 + 1);
    Template.Main.mouseCurrPos.normalize();
    //Template.Main.projector.unprojectVector(Template.Main.mouseCurrPos, Template.Main.scene);
    var angle = oldPos.dot(Template.Main.mouseCurrPos);
    Template.Main.mousePrevPos.x = event.clientX;
    Template.Main.mousePrevPos.y = event.clientY;
    if (event.clientX < window.innerWidth / 2) {
        Template.Main.circle.rotation.z = -angle;
    }
    else {
        Template.Main.circle.rotation.z = angle;
    }
    console.log(Template.Main.circle.rotation.z);
}
However if I add this to assign the value to oldPos:
if (event.clientX < window.innerWidth / 2) {
    oldPos = new THREE.Vector2(0, -1);
}
else {
    oldPos = new THREE.Vector2(0, 1);
}
Then the "jumping" goes but the effect of rotation is inverted when the mouse is on the left of the window.
I.e. mouse going up rotates anti-clockwise and vice-versa which is not desired.
It's frustrating.
Also if I keep the oldPos conditional assignment and leave out the conditional negation of the angle instead, the jumping comes back.
You can see a demo here: http://theworldmoves.me/rotation-demo/
Many thanks for any tips.
Why are you using the result of the dot product as the angle (radians)? The dot product gives you the cosine of the angle (times the magnitude of the vectors, but these are a unit vector and a normalized vector, so that doesn't matter).
You could change your angle computation to
var angle = Math.acos(oldPos.dot(Template.Main.mouseCurrPos));
However, you may get the wrong quadrant, since there can be two values of theta that satisfy cos(theta) = n. The usual way to get the angle of a vector (origin to mouse position) in the right quadrant is to use atan2():
var angle = Math.atan2(Template.Main.mouseCurrPos.y,
Template.Main.mouseCurrPos.x);
This should give the angle of the mouse position vector, going counterclockwise from (1, 0). A little experimentation can determine for sure where the zero angle is, and which direction is positive rotation.

MFC: can anyone help with an algorithm for airbrush? I just can't understand how to do it

Is there any way to fill an ellipse or a rect point by point, like the airbrush tool in MS Paint?
I could not find a way to create an empty rect or ellipse and then fill it up pixel by pixel, or set random pixels on screen in a circular pattern....
Can I tell SetPixel to fill inside a DC ellipse, or anything like that?
Thanks
You need to create a region with CRgn, then select that as the clipping region in your CDC with SelectClipRgn. Then you can use CDC::SetPixel to set random pixels anywhere within the bounding rectangle of your shape, and only the ones within the clipping region will be painted.
Be aware that this will be slow, and will need to be redone every time the window paints (such as when another window is dragged over it).
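A minimal sketch of that sequence, assuming you are drawing inside OnPaint with a CPaintDC named dc (the rectangle, pixel count, and color here are arbitrary placeholders):
// select an elliptic region as the clipping region, then spray random pixels
// across its bounding box - anything outside the ellipse is clipped away
CRgn rgn;
rgn.CreateEllipticRgn(100, 100, 200, 180); // bounding box of the ellipse
dc.SelectClipRgn(&rgn);
for (int i = 0; i < 50; ++i)
{
    int x = 100 + rand() % 100; // random point inside the bounding box
    int y = 100 + rand() % 80;
    dc.SetPixel(x, y, RGB(0, 0, 0)); // only painted if inside the region
}
dc.SelectClipRgn(NULL); // restore the default clipping region
rgn.DeleteObject();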
In your "make random pixels" loop, just exclude the pixel if it's outside your desired circle.
num_pixels = 20; // how many pixels
circle_radius = 32; // 32-pixel radius, or whatever you'd like
circle_radius2 = circle_radius * circle_radius;
while (num_pixels-- > 0)
{
    // get a random point within (-circle_radius, circle_radius) of the center on each axis
    pixel_x = center_x + rand(2 * circle_radius) - circle_radius;
    pixel_y = center_y + rand(2 * circle_radius) - circle_radius;
    // compute squared distance between generated pixel and center,
    // exclude if out of range
    if ( (center_x - pixel_x) * (center_x - pixel_x) +
         (center_y - pixel_y) * (center_y - pixel_y) > circle_radius2 )
        continue; // generate another pixel
    // do stuff with pixel
}
