A Question on OpenGL ES 2.0 and Alpha / Stencil Tests

I have a quad covering the area between -0.5, 0.5 and 0.5, -0.5 on a cleared viewport with a stencil and alpha buffer. In the fragment shader I apply a texture which happens to have a shape -- in this case a circle -- outside of which it is fully transparent.
I am trying to figure out how to essentially "cut" that non-alpha textured shape out of the next draw of the shape: I draw the first quad, then draw it again offset to some degree (say, covering -0.3, 0.5 to 0.8, -0.5), and of the second quad's texture only the part that does not overlap the first quad's opaque shape should be rendered.
It is easy enough to do this with a stencil buffer such that it applies to the quad and is blind to the texture; however, I would like to apply it to the texture.
So, as an example, what I actually want rendered of the conceptual circle texture in that case is a crescent. I am not sure which tests I should be using for this.

I think you want to stick with the stencil buffer, but the alpha test isn't available in ES 2.0, per the philosophy that anything that can be done in a shader isn't supplied as fixed functionality.
Instead, you can implement one of your own choosing inside the fragment shader, thanks to the discard keyword. Suppose you had the most trivial textured fragment shader:
varying mediump vec2 texCoordVarying;
uniform sampler2D tex2D;

void main()
{
    gl_FragColor = texture2D(tex2D, texCoordVarying);
}
You could throw in an alpha test, so that pixels with an alpha of less than 0.1 don't proceed down the pipeline and hence don't affect the stencil buffer:
varying mediump vec2 texCoordVarying;
uniform sampler2D tex2D;

void main()
{
    mediump vec4 colour = texture2D(tex2D, texCoordVarying);
    if (colour.a > 0.1)
        gl_FragColor = colour;
    else
        discard;
}
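For completeness, a minimal sketch of the stencil state that pairs with that shader (drawQuad is a hypothetical draw helper): the first draw marks its surviving (alpha > 0.1) pixels in the stencil buffer, and the second draw is rejected wherever those marks are.

glEnable(GL_STENCIL_TEST);

// First quad: fragments that survive the discard write 1 into the stencil.
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawQuad(firstOffset);   // hypothetical draw helper

// Second quad: draw only where the stencil is still 0, which cuts the
// first shape out of the second quad's texture.
glStencilFunc(GL_EQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawQuad(secondOffset);  // hypothetical draw helper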

Related

Tween the texture on a TextureButton / TextureRect. Fade out Image1 while simultaneously fading in Image2

Character portrait selection: clicking Next loads the next image in an array, clicking Back loads the previous one. Instead of a sharp change from one image to another, I want a variable-speed fading out of the current image and fading in of the new image. Dissolve/render effects would be nice, but even an opacity tween 100 -> 0 / 0 -> 100 over x seconds would do.
I would really prefer not to stack multiple objects on top of each other and alternate between them for the "current texture".
Is this possible?
We can do fade-in and fade-out by animating modulate, which is the simple solution.
For dissolve we can use shaders, and there is a lot we can do with shaders. There are plenty of dissolve shaders you can find online; I'll explain some useful variations, favoring ones that are easy to tinker with.
Fade-in and Fade-out
We can do this with a Tween object and either the modulate or self-modulate properties.
I would go ahead and create a Tween in code:
var tween:Tween

func _ready():
    tween = Tween.new()
    add_child(tween)
Then we can use interpolate_property to manipulate modulate:
var duration_seconds = 2
tween.interpolate_property(self, "modulate",
    Color.white, Color.transparent, duration_seconds)
Don't forget to call start:
tween.start()
We can take advantage of yield, to add code that will execute when the tween is completed:
yield(tween, "tween_completed")
Then we change the texture:
self.texture = target_texture
And then interpolate modulate in the opposite direction:
tween.interpolate_property(self, "modulate",
    Color.transparent, Color.white, duration_seconds)
tween.start()
Note that I'm using self but you could be manipulating another node. Also target_texture is whatever texture you want to transition into, loaded beforehand.
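Putting those steps together, a minimal sketch reusing the tween from _ready() above (crossfade_to is a hypothetical name; the script is assumed to sit on the node being faded):

func crossfade_to(target_texture:Texture):
    var duration_seconds = 2
    # Fade out the current texture.
    tween.interpolate_property(self, "modulate",
        Color.white, Color.transparent, duration_seconds)
    tween.start()
    yield(tween, "tween_completed")
    # Swap the texture while the node is fully transparent.
    self.texture = target_texture
    # Fade the new texture back in.
    tween.interpolate_property(self, "modulate",
        Color.transparent, Color.white, duration_seconds)
    tween.start()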
Dissolve Texture
For any effect that requires both textures to be partially visible, use a custom shader. Go ahead and add a ShaderMaterial to your TextureRect (or similar), and give it a new Shader file.
This will be our starting point:
shader_type canvas_item;

void fragment()
{
    COLOR = texture(TEXTURE, UV);
}
That is a shader that simply shows the texture. Your TextureRect should look the same as it does without this shader material. Let us add the second texture with a uniform:
shader_type canvas_item;

uniform sampler2D target_texture;

void fragment()
{
    COLOR = texture(TEXTURE, UV);
}
You should see a new entry under Shader Param in the Inspector panel for the new texture.
We also need another parameter to interpolate: it will be 0 to display the original texture, and 1 for the alternative texture. In Godot we can add a hint for the range:
shader_type canvas_item;

uniform sampler2D target_texture;
uniform float weight: hint_range(0, 1);

void fragment()
{
    COLOR = texture(TEXTURE, UV);
}
Under Shader Param in the Inspector panel you should now see the new float, with a slider that goes from 0 to 1.
It does nothing, of course. We still need the code to mix the textures:
shader_type canvas_item;

uniform sampler2D target_texture;
uniform float weight: hint_range(0, 1);

void fragment()
{
    vec4 color_a = texture(TEXTURE, UV);
    vec4 color_b = texture(target_texture, UV);
    COLOR = mix(color_a, color_b, weight);
}
That will do. However, I'll do a little refactoring for ease of modification later in this answer:
shader_type canvas_item;

uniform sampler2D target_texture;
uniform float weight: hint_range(0, 1);

float adjust_weight(float input, vec2 uv)
{
    return input;
}

void fragment()
{
    vec4 color_a = texture(TEXTURE, UV);
    vec4 color_b = texture(target_texture, UV);
    float adjusted_weight = adjust_weight(weight, UV);
    COLOR = mix(color_a, color_b, adjusted_weight);
}
And now we manipulate it, again with Tween. I'll assume you have a Tween created the same way as before. Also that you already have your target_texture loaded.
We will start by setting weight to 0 and assigning target_texture:
self.material.set("shader_param/weight", 0)
self.material.set("shader_param/target_texture", target_texture)
We can tween weight:
var duration_seconds = 4
tween.interpolate_property(self.material, "shader_param/weight",
    0, 1, duration_seconds)
tween.start()
yield(tween, "tween_completed")
And then change the texture:
self.texture = target_texture
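For convenience, the whole transition can be wrapped in one function. A minimal sketch, again reusing the tween from before (dissolve_to is a hypothetical name):

func dissolve_to(target_texture:Texture):
    var duration_seconds = 4
    # Start fully on the original texture and hand the shader its target.
    self.material.set("shader_param/weight", 0)
    self.material.set("shader_param/target_texture", target_texture)
    # Animate the mix from the original texture to the target.
    tween.interpolate_property(self.material, "shader_param/weight",
        0, 1, duration_seconds)
    tween.start()
    yield(tween, "tween_completed")
    # The shader now shows the target fully; make the swap permanent.
    self.texture = target_texture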
Making Dissolve Fancy
We can get fancy with our dissolve effect. For example, we can add another texture to control how fast different parts transition from one texture to the other:
uniform sampler2D transition_texture;
Set it to a new NoiseTexture (and don't forget to set the Noise property of the NoiseTexture). I'll be using the red channel of the texture.
A simple solution looks like this:
float adjust_weight(float input, vec2 uv)
{
    float transition = texture(transition_texture, uv).r;
    return min(1.0, input * (transition + 1.0));
}
Here the interpolation is always linear, and transition controls the slope.
We can also do something like this:
float adjust_weight(float input, vec2 uv)
{
    float transition = texture(transition_texture, uv).r;
    float input_2 = input * input;
    return input_2 + (input - input_2) * transition;
}
This ensures that an input of 0 returns 0 and an input of 1 returns 1, but transition controls the curve in between.
If you plot x * x + (x - x * x) * y in the range from 0 to 1 in both axis, you will see that when y (transition) is 1, you have a line, but when y is 0 you have a parabola.
Alternatively, we can change adjust_weight to a step function:
float adjust_weight(float input, vec2 uv)
{
    float transition = texture(transition_texture, uv).r;
    return smoothstep(transition, transition, input);
}
(Using smoothstep instead of step avoids artifacts near 0.)
This version does not interpolate between the textures; instead, each pixel flips from one texture to the other at a different instant. If your noise texture is continuous, you will see the dissolve advance through the gradient.
Ah, but it does not have to be a noise texture! Any gradient will do. You can create a texture defining how you want the dissolve to happen (example, under MIT license).
You can probably come up with other versions of that function.
Making Dissolve Edgy
We could also add an edge color. We need, of course, to add a color parameter:
uniform vec4 edge_color: hint_color;
And we will add that color at an offset of where we transition. We need to define that offset:
uniform float edge_weight_offset: hint_range(0, 1);
Now you can add this code:
float adjusted_weight = adjust_weight(max(0.0, weight - edge_weight_offset * (1.0 - step(1.0, weight))), UV);
float edge_weight = adjust_weight(weight, UV);
color_a = mix(color_a, edge_color, edge_weight);
Here the factor (1.0 - step(1.0, weight)) makes sure that when weight is 0 we pass 0, and when weight is 1 we pass 1. Sadly, we also need to make sure the difference does not result in a negative value. There must be another way to do it… How about this:
float weight_2 = weight * weight;
float adjusted_weight = adjust_weight(weight_2, UV);
float edge_weight = adjust_weight(weight_2 + (weight - weight_2) * edge_weight_offset, UV);
color_a = mix(color_a, edge_color, edge_weight);
OK, feel free to inline adjust_weight, whichever version you are using (this makes edges with the smoothstep version; with the others it blends a color into the transition).
Dissolve Alpha
It is not hard to modify the above shader to dissolve to alpha instead of dissolving to another texture. First of all, remove target_texture, and also remove color_b, which we no longer need. Instead of mix, we can do this:
COLOR = vec4(color_a.rgb, 1.0 - adjusted_weight);
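Assembled, a minimal sketch of the full alpha-dissolve shader, here using the first (linear) variant of adjust_weight (any of the variants above works):

shader_type canvas_item;

uniform float weight: hint_range(0, 1);
uniform sampler2D transition_texture;

float adjust_weight(float input, vec2 uv)
{
    float transition = texture(transition_texture, uv).r;
    return min(1.0, input * (transition + 1.0));
}

void fragment()
{
    vec4 color_a = texture(TEXTURE, UV);
    float adjusted_weight = adjust_weight(weight, UV);
    // Drive the alpha down as weight rises, dissolving to transparent.
    COLOR = vec4(color_a.rgb, 1.0 - adjusted_weight);
}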
And to use it, do the same as before to transition out:
self.material.set("shader_param/weight", 0)
var duration_seconds = 2
tween.interpolate_property(self.material, "shader_param/weight",
    0, 1, duration_seconds)
tween.start()
yield(tween, "tween_completed")
This will make it transparent, so you can change the texture:
self.texture = target_texture
And transition in (with the new texture):
tween.interpolate_property(self.material, "shader_param/weight",
    1, 0, duration_seconds)
tween.start()

How to write a shader to convert to an azimuthal equidistant projection

I have a 360° texture in equirectangular projection.
With what GLSL shader can I convert it into an azimuthal equidistant projection?
See also:
http://earth.nullschool.net/#current/wind/isobaric/500hPa/azimuthal_equidistant=24.64,98.15,169
I would do it in the fragment shader:
bind the equirectangular texture as a 2D texture
bind the projection shader
draw a quad covering the screen or target texture
store or use the result.
In the vertex shader I would:
just pass the vertex coordinates as a varying to the fragment shader (no point using matrices here; you can directly use the x,y coordinates in the range <-1,+1>)
In the fragment shader I would:
compute the azimuth and distance of the interpolated vertex from the point (0,0) (a simple length and atan2 call)
then convert them to the (u,v) coordinates of the texture (just scaling...)
and lastly render the fragment with the selected texel, or throw it out if out of range...
[edit1] I just put together a small example:
GL draw
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
GLint id;
glUseProgram(prog_id);
id=glGetUniformLocation(prog_id,"txr"); glUniform1i(id,0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glDisable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,txrmap);
glBegin(GL_QUADS);
glColor3f(1,1,1);
glVertex2f(-1.0,-1.0);
glVertex2f(-1.0,+1.0);
glVertex2f(+1.0,+1.0);
glVertex2f(+1.0,-1.0);
glEnd();
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,0);
glUseProgram(0);
glFlush();
SwapBuffers(hdc);
Vertex:
varying vec2 pos;

void main()
{
    pos = gl_Vertex.xy;
    gl_Position = gl_Vertex;
}
Fragment:
uniform sampler2D txr;
varying vec2 pos;

void main()
{
    const float pi2 = 6.283185307179586476925286766559;
    vec4 c = vec4(0.0, 0.0, 0.0, 1.0);
    vec2 uv;    // texture coord = scaled spherical coordinates
    float a, d; // azimuth, distance

    d = length(pos);
    if (d < 1.0) // inside projected sphere surface
    {
        a = atan(-pos.x, pos.y);
        if (a < 0.0) a += pi2;
        if (a > pi2) a -= pi2;
        uv.x = a / pi2;
        uv.y = d;
        c = texture2D(txr, uv);
    }
    gl_FragColor = c;
}
Input texture:
Output render:
[notes]
The vertical line is caused by not using GL_CLAMP_TO_EDGE on the source texture. It can be repaired by shifting the texture coordinate range by 1 pixel on both sides, or by using GL_CLAMP_TO_EDGE if present.
The weird atan() operands are the result of rotating left by 90 degrees so that the North azimuth points up.
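For reference, assuming a GL version where GL_CLAMP_TO_EDGE is available, the fix is two texture parameters set once when the source texture is created:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // stop the horizontal wrap bleeding
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);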

Blended lines do not look as expected

I use the following fragment shader, which uses the fog effect, to draw my scene:
precision mediump float;

uniform int EnableFog;
uniform float FogMinDist;
uniform float FogMaxDist;
varying lowp vec4 DestinationColor;
varying float EyeToVertexDist;

float computeFogFactor()
{
    float fogFactor = 1.0;
    if (EnableFog != 0)
    {
        // Use a slightly lower value than FogMaxDist to get a better fog effect - it will make the far end disappear quicker.
        float fogMaxDistABitCloser = FogMaxDist * 0.98;
        fogFactor = (fogMaxDistABitCloser - EyeToVertexDist) / (fogMaxDistABitCloser - FogMinDist);
        fogFactor = clamp(fogFactor, 0.0, 1.0);
    }
    return fogFactor;
}

void main(void)
{
    float fogFactor = computeFogFactor();
    gl_FragColor = DestinationColor * fogFactor;
}
And I enable alpha blending:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
The result is the following scene:
My problem is with the places where the lines overlap - the color there looks darker than the color of either line:
How can I fix it?
As already described in the comment, you are blending the newly drawn line with the background, which may already contain colours from another object at certain pixels - in your case, where lines overlap. To solve this you will either have to draw your lines without overlap or make your drawing independent of the current buffer state.
In your specific case you may pass the background colour to your fragment shader via some uniform or even a texture and then do your blending manually in the fragment shader.
In general you might want to draw the grid to a frame buffer object (FBO) with an attached texture, and then draw the whole texture in a single draw call using your fog shader and blending. Drawing into the FBO should then be done with blending disabled.
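A minimal sketch of that FBO route (width, height, drawGrid and drawFullScreenQuad are hypothetical; error checking omitted):

/* One-time setup: a colour texture and an FBO that renders into it. */
GLuint fbo, gridTex;
glGenTextures(1, &gridTex);
glBindTexture(GL_TEXTURE_2D, gridTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, gridTex, 0);

/* Per frame: draw the grid into the FBO with blending disabled... */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_BLEND);
drawGrid();                      /* hypothetical draw call */

/* ...then composite the result once, with blending and the fog shader. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullScreenQuad(gridTex);     /* hypothetical draw call */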
There are other ways such as drawing the grid to a stencil buffer first and then redraw a full-screen rect applying a colour with your shader and blending.

How can I display multiple separate textures (not multi-texturing) with OpenGL ES 2.0?

My iOS 4 app uses OpenGL ES 2.0 and renders elements with a single texture. I would like to draw elements using multiple different textures and am having problems getting things to work.
I added a variable to my vertex shader to indicate which texture to apply:
...
attribute float TextureIn;
varying float TextureOut;
void main(void)
{
    ...
    TextureOut = TextureIn;
}
I use that value in the fragment shader to select the texture:
...
varying lowp float TextureOut;
uniform sampler2D Texture0;
uniform sampler2D Texture1;
void main(void)
{
    if (TextureOut == 1.0)
    {
        gl_FragColor = texture2D(Texture1, TexCoordOut);
    }
    else // 0
    {
        gl_FragColor = texture2D(Texture0, TexCoordOut);
    }
}
Compile shaders:
...
_texture = glGetAttribLocation(programHandle, "TextureIn");
glEnableVertexAttribArray(_texture);
_textureUniform0 = glGetUniformLocation(programHandle, "Texture0");
_textureUniform1 = glGetUniformLocation(programHandle, "Texture1");
Init/Setup:
...
GLuint _texture;
GLuint _textureUniform0;
GLuint _textureUniform1;
...
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D); // ?
glBindTexture(GL_TEXTURE_2D, _textureUniform0);
glUniform1i(_textureUniform0, 0);
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D); // ?
glBindTexture(GL_TEXTURE_2D, _textureUniform1);
glUniform1i(_textureUniform1, 1);
glActiveTexture(GL_TEXTURE0);
Render:
...
glVertexAttribPointer(_texture, 1, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*) (sizeof(float) * 13));
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _textureUniform0);
glUniform1i(_textureUniform0, 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _textureUniform1);
glUniform1i(_textureUniform1, 1);
glActiveTexture(GL_TEXTURE0);
glDrawElements(GL_TRIANGLES, indicesCountA, GL_UNSIGNED_SHORT, (GLvoid*) (sizeof(GLushort) * 0));
glDrawElements(GL_TRIANGLES, indicesCountB, GL_UNSIGNED_SHORT, (GLvoid*) (sizeof(GLushort) * indicesCountA));
glDrawElements(GL_TRIANGLES, indicesCountC, GL_UNSIGNED_SHORT, (GLvoid*) (sizeof(GLushort) * (indicesCountA + indicesCountB)));
My hope was to dynamically apply the texture associated with a vertex but it seems to only recognize GL_TEXTURE0.
The only way I have been able to change textures is to associate each texture with GL_TEXTURE0 and then draw:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _textureUniformX);
glUniform1i(_textureUniformX, 0);
glDrawElements(GL_TRIANGLES, indicesCountA, GL_UNSIGNED_SHORT, (GLvoid*) (sizeof(GLushort) * 0));
...
In order to render all the textures, I would need a separate glDrawElements() call for each texture, and I have read that glDrawElements() calls are a big performance hit and their number should be minimized. That's why I was trying to dynamically specify which texture to use for each vertex.
It's entirely possible that my understanding is wrong or I am missing something important. I'm still new to OpenGL and the more I learn the more I feel I have more to learn.
It must be possible to use textures other than just GL_TEXTURE0 but I have yet to figure out how.
Any guidance or direction would be greatly appreciated.
Could it be that you're just experiencing floating point rounding issues? There shouldn't be any (except if a single primitive shares vertices with different textures), but just to be sure, replace TextureOut == 1.0 with TextureOut > 0.5 or something like that.
As general advice, you are correct that the number of draw calls should be reduced as much as possible, but your approach is quite odd. You are buying draw call reduction with fragment shader branching. Your approach also doesn't scale well with the overall number of textures, since you always need all textures bound in separate texture units.
The usual approach to reduce texture switches is to put all the textures into a single large texture, a so-called texture atlas, and use the texture coordinates to select the appropriate subregion in this texture. This also has some pitfalls (which are an entirely different question), but nothing comes for free.
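For illustration, a minimal sketch of the atlas lookup, where atlas, tile_offset and tile_scale are hypothetical uniforms describing the packed texture and the subregion to sample:

precision mediump float;

uniform sampler2D atlas;  // all the individual textures packed into one
uniform vec2 tile_offset; // lower-left corner of the subregion, in [0,1]
uniform vec2 tile_scale;  // size of the subregion, in [0,1]
varying vec2 TexCoordOut;

void main(void)
{
    // Remap the element's local 0..1 coordinates into the atlas subregion.
    gl_FragColor = texture2D(atlas, tile_offset + TexCoordOut * tile_scale);
}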
EDIT: Oh wait, I see what you're actually doing wrong
glBindTexture(GL_TEXTURE_2D, _textureUniform0);
You're binding a texture to the current texture unit, but instead of the texture object you give this function a uniform location, which is complete rubbish (though it might even appear to work in some weird circumstances, since both uniform locations and texture object names are just integers). Of course, you have to bind the actual texture object.
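In other words, keep the names returned by glGenTextures around and bind those; a sketch with hypothetical _textureObject0/_textureObject1 variables:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _textureObject0); // a texture object name, not a uniform location
glUniform1i(_textureUniform0, 0);              // the sampler uniform receives the unit index

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _textureObject1);
glUniform1i(_textureUniform1, 1);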

OpenGL ES 2.0 shader: light position and camera movement?

I tried to add lighting to my OpenGL ES 2.0 application following the tutorial at http://www.learnopengles.com/android-lesson-two-ambient-and-diffuse-lighting/
Unlike in the above tutorial, I have FPS camera movement. In the vertex shader I have hard-coded the light position (u_LightPos) in world coordinates, but it gives weird lighting effects when I move the camera. Do I have to transform this position using the projection/view matrix?
uniform mat4 u_MVPMatrix;
uniform mat4 u_MVMatrix;
attribute vec4 a_Position;
attribute vec4 a_Color;
attribute vec3 a_Normal;
varying vec4 v_Color;
void main()
{
    vec3 u_LightPos = vec3(0, 0, -20.0);
    vec3 modelViewVertex = vec3(u_MVMatrix * a_Position);
    vec3 modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
    float distance = length(u_LightPos - modelViewVertex);
    // Get a lighting direction vector from the light to the vertex.
    vec3 lightVector = normalize(u_LightPos - modelViewVertex);
    // Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
    // pointing in the same direction then it will get max illumination.
    float diffuse = max(dot(modelViewNormal, lightVector), 0.1);
    // Attenuate the light based on distance.
    diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));
    // Multiply the color by the illumination level. It will be interpolated across the triangle.
    v_Color = a_Color * diffuse;
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
}
When performing arithmetic on vectors, they must be in the same coordinate space. You're subtracting modelViewVertex (view space) from u_LightPos (world space), which will give you a bogus result.
You need to decide if you want to do lighting calculations in world space, or view space (either should be valid), but you must transform all of the inputs to the same space.
That means either getting the vertex/normal/lightpos in world space, or the vertex/normal/lightpos in view space.
Try multiplying your light position by the view matrix (not the model-view matrix), and then using that in your computation instead of u_LightPos; I think it should work.
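For example, a minimal sketch of the view-space route, assuming a u_ViewMatrix uniform (not in the original shader) is uploaded alongside the existing matrices:

uniform mat4 u_ViewMatrix; // assumed: the camera's view matrix, uploaded by the application

// Inside main(), instead of the hard-coded u_LightPos:
vec3 lightPosWorld = vec3(0.0, 0.0, -20.0);                        // light position in world space
vec3 lightPosView = vec3(u_ViewMatrix * vec4(lightPosWorld, 1.0)); // now in view space
vec3 lightVector = normalize(lightPosView - modelViewVertex);      // both operands in view space
float distance = length(lightPosView - modelViewVertex);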
