I would like a very simple explanation of WHY these variables don't work in "Android Studio", and how to solve my problem (some work in "TheBookOfShaders", some work in "Atom", others work in both, some work only in "ShaderToy", and some only in "Android Studio").
* To make this concrete, here is a sample (from a "fragment.glsl" file) *
uniform vec2 resolution; // [-] work on...
uniform vec2 uresolution; // [-] work on...
uniform vec2 iresolution; // [Y] works only on "ShaderToy"
uniform vec2 u_resolution; // [Y] works on "Atom" and "WebGL"
i.e.
* Sample Conversion FROM "ShaderToy" TO "Atom" (live coding)*
uniform vec2 iresolution; // is used on: "ShaderToy"
uniform vec2 u_resolution; // is used on: "Atom", "WebGL", etc.
so: [iresolution = u_resolution] * OK It works *
* Well, now, why does none of these work in "Android Studio" (Java code + fragment.glsl)? *
uniform vec2 resolution; // doesn't work on "Android Studio"
uniform vec2 uresolution; // doesn't work on "Android Studio"
uniform vec2 iresolution; // doesn't work on "Android Studio"
uniform vec2 u_resolution; // doesn't work on "Android Studio"
uniform vec2 vresolution; // doesn't work on "Android Studio"
uniform vec2 v_resolution; // doesn't work on "Android Studio"
and obviously:
vec2 A = (gl_FragCoord.xy / u_resolution); // doesn't work on "Android Studio"
vec2 A = (gl_FragCoord.xy / uresolution); // doesn't work on "Android Studio"
vec2 A = (gl_FragCoord.xy / *SOME*resolution); // doesn't work on "Android Studio"
etc.
The same situation applies to the time variable: time, utime, u_time, itime, vtime, v_time, globalTime, etc.
* Where can I find the exact keyword to access the RESOLUTION/TIME/other system variables in an "Android Studio" GLSL shader file? *
"resolution" is there a currently defined reference table to understand how to convert system variables?
"resolution" is it a system-lib variable or not?
"Xresolution" is there a simple final real scheme to understand something in this confusion?
Atom-Editor - "u_resolution" using
in this sample, we can see the ONLY work version of Xresolution - try at home
Atom-Editor - "OTHERSresolution" using
in this other sample, we can see the ALL THE OTHERS yellow-failure versions of Xresolution - try at home
The "fragment.glsl" test-file work 100% on Atom-Editor (try at home please)
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 resolution; // not a system var
uniform vec2 uresolution; // not a system var
uniform vec2 iResolution; // system var, WORKS 100% on ShaderToy
uniform vec2 vresolution; // not a system var
uniform vec2 u_resolution; // system var, WORKS 100% in the Atom-Editor but NOT in Android Studio
uniform vec2 i_resolution; // not a system var
uniform vec2 v_resolution; // not a system var
void main()
{
vec2 A = (gl_FragCoord.xy / u_resolution);
gl_FragColor = vec4(A.x, A.y, 0.0, 1.0);
}
* SOLUTION | WORKS 100% ONLY IN ANDROID STUDIO *
#ifdef GL_ES
precision highp float;
#endif
vec2 u_resolution; // note: you could also name it "Pacman"...
// declared as a plain global rather than a uniform (a uniform is
// read-only inside the shader and cannot be assigned to), so you can
// invent your own variable name for the window viewport
void main()
{
// ---------------------------------------------------------------------------------
u_resolution = vec2(1920.0, 1080.0); // this assignment works because the variable is not a uniform
// ---------------------------------------------------------------------------------
// --------------------------------------------------------------------------
vec2 A = (gl_FragCoord.xy / u_resolution); // solution 1
// vec2 A = (gl_FragCoord.xy / vec2(1920.0, 1080.0)); // solution 2
// vec2 A = vec2(gl_FragCoord.x / 1920.0, gl_FragCoord.y / 1080.0); // solution 3
// --------------------------------------------------------------------------
gl_FragColor = vec4(A.x, A.y, 0.0, 1.0);
}
Finally we found the solution; it was in front of our eyes all along.
We start from a window whose X and Y dimensions are set to 1920x1080 (in our case we need nothing else), and I point out 3 ways of setting the variable "u_resolution". WARNING: this approach works ONLY in Android Studio and answers my questions above. The problem has been solved. Felipe showed his commitment to solving the problem by getting involved. Of course we could also set this value from the main code via Java or C++ or another language; but in this post we were only interested in setting/reading "u_resolution" directly via/from GLSL.
The solution adopted perfectly meets the original needs, and I hope it will be helpful to all those who come after me.
The 3 solution lines are equivalent: enable your preferred one (only one declaration of A at a time).
A special thanks to #felipe-gutierrez for his kind cooperation.
NONE of the GLSL variables you mentioned are system vars.
They are user-made-up variables.
uniform vec2 resolution;
has absolutely no more meaning than:
uniform vec2 foobar;
Those are variables chosen by you.
You set them by looking up their location
In WebGL/JavaScript
const resolutionLocation = gl.getUniformLocation(someProgram, "resolution");
const foobarLocation = gl.getUniformLocation(someProgram, "foobar");
In Java
int resolutionLocation = GLES20.glGetUniformLocation(mProgram, "resolution");
int foobarLocation = GLES20.glGetUniformLocation(mProgram, "foobar");
You set them in WebGL/JavaScript
gl.useProgram(someProgram);
gl.uniform2f(resolutionLocation, yourVariableForResolutionX, yourVariableForResolutionY);
gl.uniform2f(foobarLocation, yourVariableForFoobarX, yourVariableForFoobarY);
or Java
GLES20.glUseProgram(someProgram);
GLES20.glUniform2f(resolutionLocation, yourVariableForResolutionX, yourVariableForResolutionY);
GLES20.glUniform2f(foobarLocation, yourVariableForFoobarX, yourVariableForFoobarY);
There are no magic system vars; they are 100% your app's variables. iResolution is a variable that the programmers of ShaderToy made up. u_resolution is a variable that some plugin author for Atom made up. They could just as easily have chosen renderSize or gamenHirosa (Japanese for screen width), or anything else. Again, they are not system vars; they are variables chosen by the programmer. In your app you also make up your own variables.
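To make this concrete for Android, here is a minimal, hypothetical sketch of a GLSurfaceView.Renderer that feeds "u_resolution" and "u_time" uniforms to a shader every frame. The program handle mProgram and its compilation/linking are assumed to exist elsewhere; the uniform names are simply the ones this thread has been using:
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
public class MyRenderer implements GLSurfaceView.Renderer {
    private int mProgram;        // assumed: shader program linked in onSurfaceCreated
    private int mWidth, mHeight; // current viewport size
    private long mStartTime;     // for the elapsed-time uniform
    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        mStartTime = System.currentTimeMillis();
        // ... compile, attach and link mProgram here ...
    }
    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
        mWidth = width;   // remember the real viewport size for the uniform
        mHeight = height;
    }
    @Override
    public void onDrawFrame(GL10 unused) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        GLES20.glUseProgram(mProgram);
        // look up the locations of OUR variables (any names we chose in the shader)
        int resLoc  = GLES20.glGetUniformLocation(mProgram, "u_resolution");
        int timeLoc = GLES20.glGetUniformLocation(mProgram, "u_time");
        // feed them the current viewport size and the elapsed time in seconds
        GLES20.glUniform2f(resLoc, (float) mWidth, (float) mHeight);
        GLES20.glUniform1f(timeLoc, (System.currentTimeMillis() - mStartTime) / 1000.0f);
        // ... issue the draw call here ...
    }
}
In a real app you would look the uniform locations up once after linking and cache them, rather than every frame; the sketch keeps it inline for brevity.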
I suggest you read some tutorials on WebGL
According to the Khronos site: "A uniform is a global GLSL variable declared with the "uniform" storage qualifier. These act as parameters that the user of a shader program can pass to that program. They are stored in a program object.
Uniforms are so named because they do not change from one execution of a shader program to the next within a particular rendering call. This makes them unlike shader stage inputs and outputs, which are often different for each invocation of a program stage."
So, in other words, it's a variable that you create on the host that you can access in your OpenGL program (vertex and fragment shaders) but that you can't modify directly from within the shader. For example, you get the resolution of your window in Java or C++ or JavaScript or any other language and pass it in under Shadertoy's convention as iResolution, or you take your mouse position and left click (iMouse.xyz) and pass it as a uniform to your fragment shader.
They are useful for input that isn't too heavy. As you may have seen in Shadertoy, videos are passed as textures, like your webcam or the Van Damme clip; you can even pass sound as input. For more advanced effects you can pass one shader program into another, for things like additive blending or ping-pong buffering, as Buffer A, B, C or D in Shadertoy.
You can see what they stand for from the inputs to the shader at the top part of the editor on the Shadertoy site, and here you can check how I got many of the same inputs that Shadertoy uses in C++, and unfortunately not in plain Java but in Processing.
If you want to test that you have the resolution uniform hooked up correctly (u_resolution here), you can type:
#ifdef GL_ES
precision mediump float;
#endif
uniform vec2 u_resolution;
void main()
{
    vec2 uv = gl_FragCoord.xy / u_resolution;
    vec3 col = vec3( smoothstep( 0.1, 0.1 - 0.005, length( uv - 0.5 ) ) );
    gl_FragColor = vec4( col, 1.0 );
}
And you should see an ellipse at the center of the screen.
I have used this website to create a shader that displays a snowman and some snowflakes:
http://glslsandbox.com/e#54840.8
In case the link doesn't work, here's the code:
#ifdef GL_ES
precision mediump float;
#endif
#extension GL_OES_standard_derivatives : enable
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
uniform sampler2D backbuffer;
#define PI 3.14159265
vec2 p;
float bt;
float seed=0.1;
float rand(){
seed+=fract(sin(seed)*seed*1000.0)+.123;
return mod(seed,1.0);
}
//No, I don't know why he looks so creepy
float thicc=.003;
vec3 color=vec3(1.);
vec3 border=vec3(.4);
void diff(float p){
if( (p)<thicc)
gl_FragColor.rgb=color;
}
void line(vec2 a, vec2 b){
vec2 q=p-a;
vec2 r=normalize(b-a);
if(dot(r,q)<0.){
diff(length(q));
return;
}
if(dot(r,q)>length(b-a)){
diff(length(p-b));
return;
}
vec2 rr=vec2(r.y,-r.x);
diff(abs(dot(rr,q)));
}
void circle(vec2 m,float r){
vec2 q=p-m;
vec3 c=color;
diff(length(q)-r);
color=border;
diff(abs(length(q)-r));
color=c;
}
void main() {
p=gl_FragCoord.xy/resolution.y;
bt=mod(time,4.*PI);
gl_FragColor.rgb=vec3(0.);
vec2 last;
//Body
circle(vec2(1.,.250),.230);
circle(vec2(1.,.520),.180);
circle(vec2(1.,.75),.13);
//Nose
color=vec3(1.,.4,.0);
line(vec2(1,.720),vec2(1.020,.740));
line(vec2(1,.720),vec2(.980,.740));
line(vec2(1,.720),vec2(.980,.740));
line(vec2(1.020,.740),vec2(.980,.740));
border=vec3(0);
color=vec3(1);
thicc=.006;
//Eyes
circle(vec2(.930,.800),.014);
circle(vec2(1.060,.800),.014);
color=vec3(.0);
thicc=0.;
//mouth
for(float x=0.;x<.1300;x+=.010)
circle(vec2(.930+x,.680+cos(x*40.0+.5)*.014),.005);
//buttons
for(float x=0.02;x<.450;x+=.070)
circle(vec2(1.000,.150+x),0.01);
color=vec3(0.9);
thicc=0.;
//snowflakes
for(int i=0;i<99;i++){
circle(vec2(rand()*2.0,mod(rand()-time,1.0)),0.01);
}
gl_FragColor.a=1.0;
}
The way it works is that, for each pixel on the screen, the shader checks for each element (button, body, head, eyes, mouth, carrot, snowflake) whether the pixel is inside its area, in which case it replaces the current color at that position with the current draw color.
So we have a complexity of O(pixel_width * pixel_height * elements), which leads to the shader slowing down when too many snowflakes are on screen.
So now I was wondering: how can this code be optimized? I already thought about using bounding boxes or even a 3D octree (I guess here that would be a quadtree) to quickly discard elements that are outside a certain pixel (or fragment) area.
Does anyone have another idea how to optimize this shader code? Keep in mind that every shader execution is completely independent of all the others, and I can't use any overarching structure.
You would need to break up your screen into regions, "tiles" and compute the snowflakes per tile. Tiles would have the same number of snowflakes and share the same seed, so that one particle leaving the tile's boundary would have an identical particle entering the next tile, making it look seamless. The pattern might still appear depending on your settings, but you could consider adding an extra uniform transformation, potentially based on the final screen position.
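A rough, hypothetical sketch of the tiling idea, reusing the p, time and circle() globals from the shader above (TILES, FLAKES_PER_TILE and the hash constants are made up for illustration; it also ignores flakes that overlap a tile border, which a full version would handle by evaluating the 8 neighbouring tiles too):
const float TILES = 8.0; // tiles across one unit of p
const int FLAKES_PER_TILE = 4; // fixed work per pixel, independent of the total flake count
float hash(vec2 s){
    // cheap stateless hash: every pixel in a tile agrees on the same flakes
    return fract(sin(dot(s, vec2(12.9898, 78.233))) * 43758.5453);
}
void snowflakes(){
    vec2 tileId = floor(p * TILES); // which tile this pixel lives in
    for(int i = 0; i < FLAKES_PER_TILE; i++){
        vec2 seed = tileId + float(i) * 0.371; // per-tile, per-flake seed
        vec2 offset = vec2(hash(seed), mod(hash(seed.yx) - time * 0.2, 1.0)); // falling motion
        circle((tileId + offset) / TILES, 0.01); // only a few flakes checked per pixel
    }
}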
On a side note, your method for drawing circles could be more efficient by removing all the conditional branching (and it would look anti-aliased in the process), and it could get rid of the square root hidden inside length().
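For example, a hypothetical branchless, sqrt-free circle (the function name and the anti-aliasing band are made up, and it only handles the fill, not the border):
vec3 circleBranchless(vec2 q, vec2 m, float r, vec3 bg, vec3 fg){
    vec2 d = q - m;
    float d2 = dot(d, d); // squared distance, avoids the sqrt in length()
    float aa = r * 0.02; // anti-aliasing band, tune to taste
    float t = 1.0 - smoothstep(r * r - aa, r * r + aa, d2); // 1 inside, 0 outside, smooth edge
    return mix(bg, fg, t); // blend instead of branching with if()
}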
I'm making a blend-mode shader in Love2D (version 0.9.2, which I cannot update). Since it is broken anyway, I have cut it down to this:
[[
extern Image base;
vec4 effect(vec4 tint, sampler2D tex, vec2 tex_coords, vec2 pos) {
vec4 color = texture2D(tex, tex_coords);
return color;
}
]]
Problem is, the moment I use
shader:send("base", image)
In love.draw(), it results in a black (empty) screen.
What could I possibly be doing wrong here?
I found the problems:
A. I was not USING the 'base' variable in the shader.
B. The console library 'Cupid' swallows certain graphical errors, so I was not getting any error message.
To fix the shader, simply add something like the following to the 'effect' function:
vec4 baseColor = Texel(base, tex_coords);
This way the extern Image base is actually used, and is kept rather than being optimized away during compilation.
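Put together, a minimal sketch of the corrected shader; the multiply at the end is just a placeholder blend, swap in whatever blend mode you are actually after:
[[
extern Image base;
vec4 effect(vec4 tint, sampler2D tex, vec2 tex_coords, vec2 pos) {
    vec4 color = texture2D(tex, tex_coords);
    vec4 baseColor = Texel(base, tex_coords); // actually use 'base' so it survives compilation
    return color * baseColor; // placeholder blend; replace with your blend mode
}
]]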
I have finished adding light to my object, but most of it comes from an example from the internet and I want to understand what I'm doing. Can somebody explain to me in detail what every step in the code does?
Fragment program:
lightColor: the light color we want (I took red here as an example)
shininess: how much light we want to use; it can also turn the picture into a dark one
gl_FragColor = writes the total color. But why do we compute texture2D(..) + facingRatio * ...?
Vertex program:
- Why gl_MultiTexCoord0.xy?
- And can someone explain how the light direction is calculated?
varying vec2 Texcoord;
uniform sampler2D baseMap;
uniform vec4 lightColor;
uniform float shininess;
varying vec3 LightDirection;
varying vec3 Normal;
void main(void)
{
float facingRatio = dot(normalize(Normal), normalize(LightDirection));
gl_FragColor = texture2D(baseMap, Texcoord) + facingRatio * lightColor * shininess;
}
varying vec2 Texcoord;
uniform mat4 modelView;
uniform vec3 lightPos;
varying vec3 Normal;
varying vec3 LightDirection;
void main(void)
{
gl_Position = gl_ProjectionMatrix * modelView * gl_Vertex;
Texcoord = gl_MultiTexCoord0.xy;
Normal = normalize( gl_NormalMatrix * gl_Normal);
vec4 objectPosition = gl_ModelViewMatrix * gl_Vertex;
LightDirection = (gl_ModelViewMatrix * vec4(lightPos, 1)).xyz - objectPosition.xyz;
}
Vertex shader
I guess an old version of OpenGL is being used, which is why gl_MultiTexCoord0.xy appears. Have a look at this page to understand multitexturing and that built-in variable.
The variable LightDirection is easy to calculate, because it is just the vector from the object towards the light source. So you subtract the current position (here: objectPosition) from the position of the light. In this example this is done in eye space, which has the advantage that the view direction is easy to calculate (it's just vec3(0.0, 0.0, 1.0)).
The other lines are just the usual graphics transformations from the object space of a vertex into eye space. To transform the normal into eye space, the normal matrix is needed instead of the modelview matrix: it is the transpose of the inverse of the modelview matrix.
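For reference, in newer GLSL versions (1.40+, where gl_NormalMatrix no longer exists) you would build that matrix yourself; a hypothetical snippet, reusing the modelView uniform from the vertex shader above and assuming an inNormal vertex attribute:
// transpose of the inverse of the upper-left 3x3 of the modelview matrix
mat3 normalMatrix = transpose(inverse(mat3(modelView)));
vec3 eyeNormal = normalize(normalMatrix * inNormal); // inNormal: assumed vertex attribute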
Fragment shader
The facingRatio defines how strongly the current fragment should be lit. The scalar (dot) product between the normal and the light direction is used to measure it.
In the next line, the fragment color is calculated by reading the color for this fragment from a texture and adding the color contributed by the light to it.
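One common refinement, not present in the original code: clamp the dot product so that fragments facing away from the light do not get a negative contribution added to their texture color:
float facingRatio = max(dot(normalize(Normal), normalize(LightDirection)), 0.0);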
I'm trying to emulate Photoshop's Overlay blend mode on a point sprite. Is this possible in OpenGL ES?
EDIT - This might help you along:
Please note: I do not take credit for the code below; I found it on the powervr forums: http://www.imgtec.com/forum/forum_posts.asp?TID=949
uniform sampler2D s_renderTexture;
uniform sampler2D s_overlayMap;
varying mediump vec2 myTexCoord;
void main()
{
//Get the Texture colour values
lowp vec3 baseColor = texture2D(s_renderTexture, myTexCoord).rgb;
lowp float overlayTexture = texture2D(s_overlayMap, myTexCoord).r;
lowp vec3 finalMix = baseColor + (overlayTexture - 0.5) * (1.0 - abs(2.0 * baseColor - 1.0));
//Set the Fragments colour
gl_FragColor = vec4( finalMix, 1.0 );
}
Sure, call this before rendering the point sprites:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
This should result in additive blending.
Here's a visual reference on the different blending mode combinations:
http://zanir.wz.cz/?p=60&lang=en
It's an old page, but it's a nice reference.
For more on opengl-es blending : http://www.khronos.org/opengles/sdk/docs/man/xhtml/glBlendFunc.xml
I made some simple checkerboard shading in GLSL:
f(P) = (floor(P.x) + floor(P.y) + floor(P.z)) mod 2
It seems to work well, except that I can see the interior of the objects, but I want to see only the front faces.
Any ideas how to fix this? Thanks!
Teapot (glutSolidTeapot()):
Cube (glutSolidCube()):
The vertex shader file is:
varying float x,y,z;
void main(){
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
x = gl_Position.x;
y = gl_Position.y;
z = gl_Position.z;
}
And the fragment shader file is:
varying float x,y,z;
void main(){
float _x=x;
float _y=y;
float _z=z;
_x=floor(_x);
_y=floor(_y);
_z=floor(_z);
float sum = (_x+_y+_z);
sum = mod(sum,2.0);
gl_FragColor = vec4(sum,sum,sum,1.0);
}
The shaders are not the problem - the face culling is.
You should either disable face culling (not recommended, since it's bad for performance):
glDisable(GL_CULL_FACE);
or use glCullFace and glFrontFace to set the culling mode, i.e.:
glEnable(GL_CULL_FACE); // enables face culling
glCullFace(GL_BACK); // tells OpenGL to cull back faces (the sane default setting)
glFrontFace(GL_CW); // tells OpenGL which faces are considered 'front' (use GL_CW or GL_CCW)
The argument to glFrontFace depends on your application's conventions, i.e. the vertex winding order of your geometry (which in turn interacts with the matrix handedness).