Is there the notion of a generalized vertex and fragment shader? - graphics

I am going to create a simple 2D (maybe 3D down the road) game system like Pixi.js. I notice that they have shaders for each type of effect, and a generic projection matrix shader, but other than that, everything else happens in regular-code-land.
gl_Position = projection * model * vec4(position, 1.0);
Are things like ShaderToy just that, toys, seeing how much you can do with shaders alone? Or do real game engines need to implement significant functionality directly in shaders? Basically, is there the notion of a generic standard shader you can use for all rendering in a game engine, or do you have to write one-off shaders for this and that?
I am trying to get a sense of whether I can just find that keystone shader for the game engine, the one shader pair I need for a high-performance 2D engine in WebGL, rather than imagining I need to slowly figure out, case by case, where shaders will come into play in the engine.
For example, this is the default shader in Pixi.js:
attribute vec2 aVertexPosition;
attribute vec2 aTextureCoord;
uniform mat3 projectionMatrix;
varying vec2 vTextureCoord;
void main(void)
{
    gl_Position = vec4((projectionMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
    vTextureCoord = aTextureCoord;
}
For another example, here are the shaders from the four engine: https://github.com/allotrop3/four/tree/master/src/shaders

The short answer is no, you cannot make one generic shader unless your game is very simple and only needs one fixed set of features in all situations.
Game engines like Unity and Unreal generate thousands of shaders based on the features used by the game developers. Even three.js, which is not quite as sophisticated as those other engines, generates a different shader for each combination of features used: lights, textures, skinning, blend shapes, environment mapping, etc.
There is a notion of an "uber shader" that tries to do a lot of stuff. Usually it's something a game dev uses to experiment, because they know it's too slow for production. It's less common in modern engines because those engines are designed to generate the shaders either at runtime or at build time, so it's easy to specify the features you want and have the engine generate a matching shader. For engines that don't have a shader-generating system, a dev might make one shader that implements all the features. Once they get the look they want, they'll pare it down to only the features they need, and/or they'll add lots of conditional compilation macros to turn features on and off and then compile the shader into a different version for each combination of features they need.
You can get an idea of this by looking at three.js's shaders. Here is the shader three.js generates for this program (I used this helper to view it). I'd have pasted it here, but it is 44k and S.O. only allows 30k per post. First off, it was assembled from a large number of snippets. Second, you'll notice various conditional compilation directives throughout the code. For example:
#ifdef DITHERING
vec3 dithering( vec3 color ) {
    float grid_position = rand( gl_FragCoord.xy );
    vec3 dither_shift_RGB = vec3( 0.25 / 255.0, -0.25 / 255.0, 0.25 / 255.0 );
    dither_shift_RGB = mix( 2.0 * dither_shift_RGB, -2.0 * dither_shift_RGB, grid_position );
    return color + dither_shift_RGB;
}
#endif
#ifdef USE_COLOR
varying vec3 vColor;
#endif
#if ( defined( USE_UV ) && ! defined( UVS_VERTEX_ONLY ) )
varying vec2 vUv;
#endif
#if defined( USE_LIGHTMAP ) || defined( USE_AOMAP )
varying vec2 vUv2;
#endif
#ifdef USE_MAP
uniform sampler2D map;
#endif
#ifdef USE_ALPHAMAP
uniform sampler2D alphaMap;
#endif
#ifdef USE_AOMAP
uniform sampler2D aoMap;
uniform float aoMapIntensity;
#endif
#ifdef USE_LIGHTMAP
uniform sampler2D lightMap;
uniform float lightMapIntensity;
#endif
#ifdef USE_EMISSIVEMAP
uniform sampler2D emissiveMap;
#endif
#ifdef USE_ENVMAP
uniform float envMapIntensity;
uniform float flipEnvMap;
uniform int maxMipLevel;
#ifdef ENVMAP_TYPE_CUBE
uniform samplerCube envMap;
#else
uniform sampler2D envMap;
#endif
#endif
If you start turning on those features, for example by setting material.envMap in JavaScript, you'd see three.js insert #define USE_ENVMAP at the top of the shader, in addition to assembling the shader from only the snippets those features need.
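To make that mechanism concrete, here is a minimal sketch of the same idea (not three.js's actual code; the feature flags and uniform names are made up for illustration): a shared fragment shader guarded by feature defines, where the engine prepends the #define lines it needs before compiling.
// Prepended by the engine at compile time for this particular material:
// #define USE_MAP
// #define USE_COLOR
precision mediump float;
#ifdef USE_MAP
varying vec2 vUv;
uniform sampler2D map;
#endif
#ifdef USE_COLOR
uniform vec3 diffuse;
#endif
void main() {
    vec4 color = vec4(1.0);
#ifdef USE_MAP
    color *= texture2D(map, vUv);
#endif
#ifdef USE_COLOR
    color.rgb *= diffuse;
#endif
    gl_FragColor = color;
}
Each combination of defines compiles to a different, minimal program, which is why an engine ends up with many shaders rather than one.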
This also shows the amount of work you save by using an existing engine. 44k of shader code is not a small amount to reproduce if you want all of the features three.js gives you. If you're set on doing things from scratch, it's at least good to be aware that it can be a ton of work. Of course, if you're making something that only needs a small set of features and no combinations of them, you can get by with just a few hand-written shaders.
You also mentioned
if I can just find that keystone shader for the game engine, the one shader pair I need for a high-performance 2D engine in WebGL
There is arguably no such thing as a keystone shader for a high-performance 2D engine. If you want performance, you need each shader to do as little as possible, which is the opposite of a keystone shader.
That said, it depends on the 2D game. If you want to make Angry Birds, the vertex shader you posted in your question is possibly the only vertex shader you need. Angry Birds has no special effects; it just draws simple textured quads. So just this vertex shader
attribute vec2 aVertexPosition;
attribute vec2 aTextureCoord;
uniform mat3 projectionMatrix;
varying vec2 vTextureCoord;
void main(void)
{
    gl_Position = vec4((projectionMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
    vTextureCoord = aTextureCoord;
}
and a fragment shader like
precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D texture;
uniform vec4 colorMult;
void main()
{
    gl_FragColor = texture2D(texture, vTextureCoord) * colorMult;
}
would be enough for almost all 2D games made before 2010. 2D games since then (I just picked an arbitrary date) often use custom shaders to achieve special effects or to optimize. For example, certain kinds of particle effects are easy to make with custom shaders. Every particle effect in this game is made with this shader. If you skip to 00:50 you'll see 3 examples: the 2 portals under the cake, the candles on the cake, and the fireworks. Also, if you look closely in parts of the video, you can see particles where characters land on the ground after jumping. They all use the same stateless particle shader, since running the particles in JavaScript and uploading their state individually would arguably be slow. Another example: the backgrounds are drawn with a tiling shader like this one. That was easier, IMO, than using the shader above and generating a mesh of vertices for the tiles.
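To give a rough idea of what such a stateless particle shader looks like, here is a minimal sketch (not the actual shader from that game; all attribute and uniform names are made up). Each particle's position is computed entirely from per-particle launch data and a single time uniform, so nothing needs to be re-uploaded per frame:
// Per-particle attributes, uploaded once when the effect is spawned.
attribute vec2 a_startPosition;
attribute vec2 a_velocity;
attribute float a_startTime;
attribute float a_lifeTime;
// Shared uniforms; only u_time changes per frame.
uniform mat3 u_projectionMatrix;
uniform vec2 u_gravity;
uniform float u_time;
varying float v_life;  // 0.0 at birth, 1.0 at death, e.g. for fading in the fragment shader
void main() {
    float age = u_time - a_startTime;
    v_life = clamp(age / a_lifeTime, 0.0, 1.0);
    // Simple ballistic motion computed on the GPU, no per-particle CPU work.
    vec2 pos = a_startPosition + a_velocity * age + 0.5 * u_gravity * age * age;
    gl_Position = vec4((u_projectionMatrix * vec3(pos, 1.0)).xy, 0.0, 1.0);
    gl_PointSize = mix(8.0, 0.0, v_life);  // shrink as the particle dies
}
Drawn as points, the only per-frame CPU work is updating u_time; everything else was uploaded when the effect started.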
Shadertoy is for the most part a toy. See this

Related

How to combine world environment post processing with custom post processing shader in a 3D world, Godot 4.0

I am trying to use the in-built post processing effects attached to a Camera3D while also applying a custom post processing effect to run in combination with the other effects.
I have read tutorials on how to create custom post processing effects, like the one found on the official docs. It tells me to create a MeshInstance with a QuadMesh (well, in Godot 4.0, it is actually now a PlaneMesh) and transform it into clip space.
For one, the transformation explained in the docs did not work; the quad just disappeared when I applied the following vertex shader and set a large value for extra_cull_margin:
shader_type spatial;
render_mode cull_disabled, unshaded;
void vertex() {
    POSITION = vec4(VERTEX, 1.0);
}
Then I managed to work around this by manually rotating the plane so that it faces the camera, with a Z offset that is small but larger than the camera's near clipping distance.
The issue is that with this plane in front, none of the world environment post processing effects work. I think it might work better if I could get the clip-space transform of the quad working, but it doesn't work for me.
Has anyone tried this yet for Godot 4.0 beta 1?
Okay, so reading up on how to do this in general, I stumbled upon this question.
Based on the answer from derhass, I wrote the following vertex shader code:
shader_type spatial;
render_mode cull_disabled, unshaded;
const vec2 vertices[3] = {vec2(-1,-1), vec2(3,-1), vec2(-1, 3)};
void vertex() {
    POSITION = vec4(vertices[VERTEX_ID], 0.0, 1.0);
}
This draws a triangle and it also transforms it successfully into clip space.
Now the world environment effects are working together with the custom post processing shader:
(Screenshots: with the shader / without the shader.)

Dynamic array of uniforms (GLSL OpenGL ES 2.0)

In a shader (using OpenGL ES 2.0) I want to have an array with a dynamic size.
I can declare an array with fixed size:
uniform vec2 vertexPositions[4];
But now I want the size to be dynamic, matching the number of points I will pass in.
I thought about doing a string replacement in the shader source before compiling it, but then I would have to recompile it every time I draw a different element. That seems CPU-intensive.
The typical approach would be to size the uniform array to the maximum number of elements you expect to use, and then only update the subset of it that you're actually using. You can then pass in the effective size of the array as a separate uniform.
uniform vec2 arr[MAX_SIZE];
uniform int arr_size;
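A minimal sketch of how that looks in a GLSL ES 2.0 fragment shader (MAX_SIZE, the uniform names, and the summing logic are placeholders for illustration). Loops in ES 2.0 need a constant bound, so you loop to MAX_SIZE and break once you pass the live count:
precision mediump float;
#define MAX_SIZE 16
uniform vec2 arr[MAX_SIZE];
uniform int arr_size;  // number of entries actually in use
void main() {
    vec2 sum = vec2(0.0);
    for (int i = 0; i < MAX_SIZE; ++i) {  // constant bound required by ES 2.0
        if (i >= arr_size) break;         // stop at the live element count
        sum += arr[i];
    }
    gl_FragColor = vec4(sum, 0.0, 1.0);
}
On the application side you then upload only the live portion with glUniform2fv and update arr_size with glUniform1i (or their WebGL equivalents) whenever the count changes.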

Having texture and color shader at the same time in WebGL

Let's say I wanted to have two different shapes, one with a color buffer and another with a texture buffer. How would I write that code out in the shaders? The tutorials made it seem like you could only have one or the other, but not both.
So, like in the following code, I have something for the texture and something to set a solid color on another line of code. How would I make that differentiation in this language? I tried using ints to symbolize the choice between the two, but it didn't work out very well...
<script id="shader-fs" type="x-shader/x-fragment">
precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D uSampler;
void main(void) {
    gl_FragColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
    // gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
</script>
There are many, many different ways you can do this.
You could write an if statement or ? : expression in the shader to choose one or the other based on some parameter. (It sounds like you tried this but got it wrong somehow; I can't tell what the problem might have been from your question. However, I would avoid using an int parameter unless you have a specific need for one, because integer computations are not what GPUs are optimized for.)
You could multiply the texture and uniform color together, and give textured objects a white color and colored objects a white texture. (This is like how the classic “fixed-function pipeline” rendering operates; a sketch of this approach appears below, after the other options.)
You could use two different shaders, one which does only textures and another which does only colors (and switch between them using gl.useProgram between objects).
You could use only textures, and color objects using a 1×1 texture containing the desired color (if you want a uniform color, not a per-vertex color).
All of these and more are perfectly fine ways to solve the problem. Do whichever one is most convenient for the rest of your program. If you're concerned about performance, then try them and choose the fastest.
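As an example, here is a minimal sketch of the multiply approach (uColor is a made-up uniform name here). Textured objects get uColor set to white; solid-color objects get a 1×1 white texture bound and their color in uColor:
precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D uSampler;
uniform vec4 uColor;  // white for textured objects, the desired color otherwise
void main(void) {
    // One code path handles both cases; the unused source is neutralized to white.
    gl_FragColor = texture2D(uSampler, vTextureCoord) * uColor;
}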

HLSL: Getting texture dimensions in a pixel shader

I have a texture and I need to know its dimensions within a pixel shader. This seems like a job for GetDimensions. Here's the code:
Texture2D t: register(t4);
...
float w;
float h;
t.GetDimensions(w, h);
However, this results in an error:
X4532: cannot map expression to pixel shader instruction set
This error doesn't seem to be documented anywhere. Am I using the function incorrectly? Is there a different technique that I should use?
I'm working in shader model 4.0 level 9_1, via DirectX.
This error usually occurs if a function is not available in the calling shader stage.
Is there a different technique that I should use?
Use shader constants for texture width and height. It saves instructions in the shader, which may also be better performance-wise.

How to use shaders in OpenGL ES with iPhone SDK

I have this obsession with doing realtime character animations based on inverse kinematics and morph targets.
I got a fair way with Animata, an open source (FLTK-based, sadly) IK-chain-style animation program. I even ported their rendering code to a variety of platforms (Java/Processing and iPhone); video of the Animata renderers: http://ats.vimeo.com/612/732/61273232_100.jpg
However, I've never been convinced that their code is particularly optimised and it seems to take a lot of simulation on the CPU to render each frame, which seems a little unnecessary to me.
I am now starting a project to make an app on the iPad that relies heavily on realtime character animation, and, leafing through the iOS documentation, I discovered a code snippet for a 'two bone skinning shader':
// A vertex shader that efficiently implements two bone skinning.
attribute vec4 a_position;
attribute float a_joint1, a_joint2;
attribute float a_weight1, a_weight2;
uniform mat4 u_skinningMatrix[JOINT_COUNT];
uniform mat4 u_modelViewProjectionMatrix;
void main(void)
{
    vec4 p0 = u_skinningMatrix[int(a_joint1)] * a_position;
    vec4 p1 = u_skinningMatrix[int(a_joint2)] * a_position;
    vec4 p = p0 * a_weight1 + p1 * a_weight2;
    gl_Position = u_modelViewProjectionMatrix * p;
}
Does anybody know how I would use such a snippet? It is presented with very little context. I think it's what I need to be doing to do the IK chain bone-based animation I want to do, but on the GPU.
I have done a lot of research and now feel like I almost understand what this is all about.
The first important lesson I learned is that OpenGL ES 1.1 is very different from OpenGL ES 2.0. In 2.0, the principle seems to be that arrays of data are fed to the GPU and shaders are used for the rendering details. This is distinct from 1.1, where more is done in normal application code with pushmatrix/popmatrix and various inline drawing commands.
An excellent series of blog posts introducing the modern approach to OpenGL is available here: Joe's Blog: An intro to modern OpenGL
The vertex shader I quoted above runs a transformation on a set of vertex positions. 'attribute' members are per-vertex and 'uniform' members are common across all vertices.
To make this code work you would feed in an array of vertex positions (the original, unposed positions, I guess) and corresponding arrays of joints and weights (the other attribute variables), and this shader would reposition the input vertices according to their attached joints.
The uniform variables are the array of skinning matrices (one per joint) and the model-view-projection matrix, which transforms the skinned positions from model space into clip space.
Relating this back to iPhone development, the best thing to do is to create an OpenGL ES template project and pay attention to the two different rendering classes: one for the older, more linear OpenGL ES 1.1 and one for OpenGL ES 2.0. Personally I'm throwing out the ES 1.1 code, given that it applies mainly to older iPhone devices; since I'm targeting the iPad it's not relevant any more, and I can get better performance with shaders on the GPU using ES 2.0.

Resources