OpenGL ES 2.0: attribute not bound on vertex shader

I'm developing an Android application.
I have the following vertex shader.
"attribute vec4 vertexPosition;
attribute vec4 vertexNormal;
attribute vec2 vertexTexCoord;
varying vec2 texCoord;
varying vec4 normal;
uniform mat4 modelViewProjectionMatrix;
void main()
{
gl_Position = modelViewProjectionMatrix * vertexPosition;
normal = vertexNormal;
texCoord = vertexTexCoord;
}
";
And this is the fragment shader:
precision mediump float;
varying vec2 texCoord;
varying vec4 normal;
uniform sampler2D texSampler2D;
void main()
{
    gl_FragColor = texture2D(texSampler2D, texCoord);
}
Is there any problem if I leave vertexTexCoord unbound? I think I must use different vertex and fragment shaders if my model doesn't have a texture, don't I?
Thanks.

Yes, you should have another shader for models without a texture. Otherwise, I think you will experience implementation-dependent behavior.
Related to that, OpenGL documentation says:
Active attributes that are not explicitly bound will be bound by the linker when glLinkProgram is called. The locations assigned can be queried by calling glGetAttribLocation.
So if enough vertex attribute arrays are enabled, the shader will try to fetch vertexTexCoord from one of them. I'm not sure what happens if only the attributes needed for the untextured model are enabled, and you shouldn't rely on behavior like that. Use another shader.
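For example, a minimal untextured variant of the pair above drops vertexTexCoord entirely and replaces the texture lookup with a color uniform (a sketch; materialColor is an assumed name, not something from the original code):
// Untextured vertex shader: only position and normal attributes.
attribute vec4 vertexPosition;
attribute vec4 vertexNormal;
varying vec4 normal;
uniform mat4 modelViewProjectionMatrix;
void main()
{
    gl_Position = modelViewProjectionMatrix * vertexPosition;
    normal = vertexNormal;
}

// Untextured fragment shader: a flat color instead of a sampler.
precision mediump float;
varying vec4 normal;
uniform vec4 materialColor; // assumed uniform replacing texSampler2D
void main()
{
    gl_FragColor = materialColor;
}
Either way, after glLinkProgram you can verify where the remaining active attributes ended up by calling glGetAttribLocation on each of them.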


How to draw lines and circles in a shader efficiently

I have used this website to create a shader that displays a snowman and some snowflakes:
http://glslsandbox.com/e#54840.8
In case the link doesn't work, here's the code:
#ifdef GL_ES
precision mediump float;
#endif
#extension GL_OES_standard_derivatives : enable
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
uniform sampler2D backbuffer;
#define PI 3.14159265
vec2 p;
float bt;
float seed=0.1;
float rand(){
    seed+=fract(sin(seed)*seed*1000.0)+.123;
    return mod(seed,1.0);
}
//No I don't know why he looks so creepy
float thicc=.003;
vec3 color=vec3(1.);
vec3 border=vec3(.4);
void diff(float p){
    if( (p)<thicc)
        gl_FragColor.rgb=color;
}
void line(vec2 a, vec2 b){
    vec2 q=p-a;
    vec2 r=normalize(b-a);
    if(dot(r,q)<0.){
        diff(length(q));
        return;
    }
    if(dot(r,q)>length(b-a)){
        diff(length(p-b));
        return;
    }
    vec2 rr=vec2(r.y,-r.x);
    diff(abs(dot(rr,q)));
}
void circle(vec2 m,float r){
    vec2 q=p-m;
    vec3 c=color;
    diff(length(q)-r);
    color=border;
    diff(abs(length(q)-r));
    color=c;
}
void main() {
    p=gl_FragCoord.xy/resolution.y;
    bt=mod(time,4.*PI);
    gl_FragColor.rgb=vec3(0.);
    vec2 last;
    //Body
    circle(vec2(1.,.250),.230);
    circle(vec2(1.,.520),.180);
    circle(vec2(1.,.75),.13);
    //Nose
    color=vec3(1.,.4,.0);
    line(vec2(1,.720),vec2(1.020,.740));
    line(vec2(1,.720),vec2(.980,.740));
    line(vec2(1,.720),vec2(.980,.740));
    line(vec2(1.020,.740),vec2(.980,.740));
    border=vec3(0);
    color=vec3(1);
    thicc=.006;
    //Eyes
    circle(vec2(.930,.800),.014);
    circle(vec2(1.060,.800),.014);
    color=vec3(.0);
    thicc=0.;
    //mouth
    for(float x=0.;x<.1300;x+=.010)
        circle(vec2(.930+x,.680+cos(x*40.0+.5)*.014),.005);
    //buttons
    for(float x=0.02;x<.450;x+=.070)
        circle(vec2(1.000,.150+x),0.01);
    color=vec3(0.9);
    thicc=0.;
    //snowflakes
    for(int i=0;i<99;i++){
        circle(vec2(rand()*2.0,mod(rand()-time,1.0)),0.01);
    }
    gl_FragColor.a=1.0;
}
The way it works is that, for each pixel on the screen, the shader checks for each element (button, body, head, eyes, mouth, carrot, snowflake) whether it's inside an area, in which case it replaces the current color at that position with the current draw color.
So we have a complexity of O(pixels_width * pixels_height * elements), which leads to the shader slowing down when too many snowflakes are on screen.
So now I was wondering: how can this code be optimized? I already thought about using bounding boxes or even an octree (in 2D that would be a quadtree) to quickly discard elements that are outside a certain pixel (or fragment) area.
Does anyone have another idea how to optimize this shader code? Keep in mind that every shader execution is completely independent of all the others and I can't use any overarching structure.
You would need to break up your screen into regions ("tiles") and compute the snowflakes per tile. Tiles would have the same number of snowflakes and share the same seed, so that a particle leaving one tile's boundary has an identical particle entering the neighboring tile, making it look seamless. The repeating pattern might still be visible depending on your settings, but you could add an extra per-tile transformation, potentially based on the final screen position. A sketch of the idea is shown below.
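A minimal sketch of that idea, meant to replace the snowflake loop in main() (the tile size, flake count, and hash function are assumptions, not taken from the original shader):
// Each fragment only evaluates the snowflakes of its own tile,
// so the per-pixel cost stays constant no matter how many tiles fit on screen.
const float TILE = 0.25;          // assumed tile size, in the same units as p
const int FLAKES_PER_TILE = 8;    // assumed flake count per tile

float hash(vec2 s){               // assumed per-tile hash, replacing rand()
    return fract(sin(dot(s, vec2(12.9898, 78.233))) * 43758.5453);
}

void snowflakes(){
    vec2 tile = floor(p / TILE);  // which tile this fragment lives in
    for(int i = 0; i < FLAKES_PER_TILE; i++){
        // Every fragment in a tile derives the same flake positions.
        vec2 s = tile + vec2(float(i) * 0.17, float(i) * 0.31);
        vec2 local = vec2(hash(s), mod(hash(s.yx) - time, 1.0));
        circle((tile + local) * TILE, 0.01);
    }
}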
On a side note, your method for drawing circles could be more efficient: you can remove all conditional branching (and get anti-aliasing in the process) and get rid of the square root generated by length() by comparing squared distances instead.
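A branchless, anti-aliased alternative to circle() could look like this (a sketch: it returns a coverage mask to blend with instead of writing gl_FragColor directly, and it relies on the derivatives extension the shader already enables):
// Coverage is 1.0 inside the circle, 0.0 outside, with a smooth edge.
// Comparing squared distances avoids the sqrt inside length().
float circleMask(vec2 m, float r){
    vec2 q = p - m;
    float d2 = dot(q, q);         // squared distance to the center
    float w = fwidth(d2);         // screen-space edge width for anti-aliasing
    return 1.0 - smoothstep(r * r - w, r * r + w, d2);
}

// Usage inside main(), accumulating into a local color instead of branching:
// col = mix(col, color, circleMask(vec2(1., .250), .230));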

Apply light to an object in OpenGL

I finished adding light to my object, but most of it comes from an example on the internet and I want to understand what I'm doing. Can somebody explain in detail what every step in the code does?
Fragment program:
- lightColor: the light color we want (I took red here as an example)
- shininess: how much light we want to use; it can also darken the whole picture
- gl_FragColor: writes the total color. But why do we do texture2D(..) + facingRatio * ...?
Vertex program:
- Why gl_MultiTexCoord0.xy?
- And can someone explain how LightDirection is calculated?
varying vec2 Texcoord;
uniform sampler2D baseMap;
uniform vec4 lightColor;
uniform float shininess;
varying vec3 LightDirection;
varying vec3 Normal;
void main(void)
{
    float facingRatio = dot(normalize(Normal), normalize(LightDirection));
    gl_FragColor = texture2D(baseMap, Texcoord) + facingRatio * lightColor * shininess;
}
varying vec2 Texcoord;
uniform mat4 modelView;
uniform vec3 lightPos;
varying vec3 Normal;
varying vec3 LightDirection;
void main(void)
{
    gl_Position = gl_ProjectionMatrix * modelView * gl_Vertex;
    Texcoord = gl_MultiTexCoord0.xy;
    Normal = normalize(gl_NormalMatrix * gl_Normal);
    vec4 objectPosition = gl_ModelViewMatrix * gl_Vertex;
    LightDirection = (gl_ModelViewMatrix * vec4(lightPos, 1)).xyz - objectPosition.xyz;
}
Vertex shader
I guess an old version of OpenGL is used, and that's why gl_MultiTexCoord0.xy appears: it is the built-in per-vertex texture coordinate of texture unit 0 from the fixed-function pipeline. Have a look at this page to understand multitexturing and the built-in variable.
The variable LightDirection is easy to calculate, because it is just a vector pointing from the object towards the light source. So you subtract the current position (here: objectPosition) from the position of the light. In this example this is done in eye-space, which has the advantage that the view direction is easy to calculate (it's just vec3(0.0, 0.0, 1.0)).
The other lines are the usual transformations of a vertex from object-space into eye-space. To transform the normal into eye-space, the normal matrix is needed instead of the modelview matrix; it is the transposed inverse of the (upper 3x3 of the) modelview matrix.
Fragment shader
The facingRatio defines how strongly the current fragment should be lit. The dot product of the (normalized) normal and light direction is used to measure it: it is 1 when the surface faces the light head-on and decreases as the angle between them grows.
In the next line, the fragment color is calculated by reading this fragment's color from the texture and adding the light's contribution on top.
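As a side note, adding the light term like this can push colors past white and even subtract light where facingRatio is negative. A more conventional diffuse formulation clamps the ratio and modulates the texture color instead of adding to it; a minimal sketch (the ambient term 0.1 is an assumption, not from the original code):
float facingRatio = max(dot(normalize(Normal), normalize(LightDirection)), 0.0);
vec4 base = texture2D(baseMap, Texcoord);
// Modulate the texture by the light so surfaces facing away go dark;
// the 0.1 is an assumed ambient floor so they don't go fully black.
gl_FragColor = base * (0.1 + facingRatio * lightColor * shininess);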

How to swap the current fragment shader color with neighbors?

I have a shader with a texture sampler. Is it possible to swap the color of the current fragment with any of its neighbors? If so, how?
uniform sampler2D map;
varying vec2 vuv;
void main() {
    gl_FragColor = texture2D(map, vuv);
}
A fragment shader only knows about the current fragment. The only way to swap colors would be to render everything to a texture in one pass, and then run a post-processing pass that reads the neighboring texels, as sketched below. Hope this helps.
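A sketch of what that post-processing pass could look like (the uniform names, the one-texel offset, and the even/odd pairing scheme are assumptions, not an established recipe):
// Post-processing pass: the whole scene was rendered into sceneTex beforehand.
uniform sampler2D sceneTex;   // assumed: the scene rendered to a texture
uniform vec2 texelSize;       // assumed: 1.0 / render target resolution
varying vec2 vuv;
void main() {
    // Pair each texel with its horizontal neighbor: even columns read from
    // the right, odd columns read from the left, so each pair swaps colors.
    float odd = mod(floor(vuv.x / texelSize.x), 2.0);
    vec2 offset = vec2(texelSize.x, 0.0);
    gl_FragColor = texture2D(sceneTex, vuv + (odd < 0.5 ? offset : -offset));
}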

OpenGL ES Overlay Blend Mode with Point Sprites

I'm trying to emulate Photoshop's Overlay blend mode on a point sprite. Is this possible in OpenGL ES?
EDIT - This might help you along:
Please note: I do not take credit for the code below; I found it on the powervr forums: http://www.imgtec.com/forum/forum_posts.asp?TID=949
uniform sampler2D s_renderTexture;
uniform sampler2D s_overlayMap;
varying mediump vec2 myTexCoord;
void main()
{
    //Get the texture colour values
    lowp vec3 baseColor = texture2D(s_renderTexture, myTexCoord).rgb;
    lowp float overlayTexture = texture2D(s_overlayMap, myTexCoord).r;
    lowp vec3 finalMix = baseColor + (overlayTexture - 0.5) * (1.0 - abs(2.0 * baseColor - 1.0));
    //Set the fragment's colour
    gl_FragColor = vec4(finalMix, 1.0);
}
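For reference, Photoshop's actual Overlay formula branches per channel on the base color: it multiplies where the base is dark and screens where it is bright. A direct GLSL translation of that formula (a sketch; step() is just a branchless way to pick between the two cases per channel):
// Overlay: 2*b*o where base < 0.5, and 1 - 2*(1-b)*(1-o) where base >= 0.5.
lowp vec3 overlayBlend(lowp vec3 base, lowp vec3 blend)
{
    lowp vec3 multiplied = 2.0 * base * blend;
    lowp vec3 screened = 1.0 - 2.0 * (1.0 - base) * (1.0 - blend);
    return mix(multiplied, screened, step(0.5, base));
}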
Sure, call this before rendering the point sprites:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
This should result in additive blending.
Here's a visual reference on the different blending mode combinations:
http://zanir.wz.cz/?p=60&lang=en
It's an old page, but it's a nice reference.
For more on opengl-es blending : http://www.khronos.org/opengles/sdk/docs/man/xhtml/glBlendFunc.xml

Can't pass float value to GLSL?

I'm trying to send values to GLSL. An int works just fine, but a float comes out strange.
Ubuntu 10.04LTS
Graphics card: G105M
Here is my vertex shader:
#version 110
attribute vec4 a_vertex;
attribute vec3 a_texCoord;
varying vec2 v_texCoord;
uniform float u_time;
void main()
{
    gl_Position=vec4(a_vertex.x+u_time,a_vertex.y,a_vertex.z,1);
    v_texCoord=a_texCoord.xy;
}
Here is my C code:
GLint timeLoc = glGetUniformLocation(splash_screen.proHandle, "u_time");
glUniform1f(timeLoc, 1.0);
Here is the strange thing: if I change u_time to an int, it works fine, but if I use a float it behaves very strangely. With an int, the vertex x is offset by +1; with a float, the vertex x doesn't change at all.
I finally found it: I ported my program to Android and it works well there, so it's my computer's problem (90% sure it's the graphics card driver).
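For anyone hitting the same symptom: before blaming the driver, it's worth ruling out the usual suspects around uniform updates. A minimal sanity check (assuming splash_screen.proHandle is the linked program object; the error messages are just placeholders):
#include <stdio.h>
/* glUniform* affects the currently bound program, so bind it first. */
glUseProgram(splash_screen.proHandle);
GLint timeLoc = glGetUniformLocation(splash_screen.proHandle, "u_time");
if (timeLoc == -1)
    fprintf(stderr, "u_time not found (misspelled, or optimized out)\n");
glUniform1f(timeLoc, 1.0f);
if (glGetError() != GL_NO_ERROR)
    fprintf(stderr, "glUniform1f failed (wrong type, or no program bound?)\n");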
