I use the following fragment shader, which applies a fog effect, to draw my scene:
precision mediump float;
uniform int EnableFog;
uniform float FogMinDist;
uniform float FogMaxDist;
varying lowp vec4 DestinationColor;
varying float EyeToVertexDist;
float computeFogFactor()
{
float fogFactor = 1.0;
if (EnableFog != 0)
{
//Use a slightly lower value of FogMaxDist to get a better fog effect - it will make the far end disappear quicker.
float fogMaxDistABitCloser = FogMaxDist * 0.98;
fogFactor = (fogMaxDistABitCloser - EyeToVertexDist) / (fogMaxDistABitCloser - FogMinDist);
fogFactor = clamp(fogFactor, 0.0, 1.0);
}
return fogFactor;
}
void main(void)
{
float fogFactor = computeFogFactor();
gl_FragColor = DestinationColor * fogFactor;
}
And I enable alpha blending:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
The result is the following scene:
My problem is with the places where the lines overlap - the resulting color appears darker than the color of either line:
How can I fix it?
As already described in the comments, you are blending the newly drawn line with the background, which may already contain colours from another object at certain pixels - in your case, where the lines overlap. To solve this you will either have to draw your lines without overlapping or make your drawing independent of the current buffer state.
In your specific case you may pass the background colour to your fragment shader via a uniform or even a texture and then do the blending manually in the fragment shader.
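For illustration, a minimal sketch of that idea (the BackgroundColor uniform is an assumption here - your application would have to fill it with the colour the grid is drawn over; EnableFog is omitted for brevity):
precision mediump float;
uniform vec4 BackgroundColor; // assumed uniform: the colour behind the grid
uniform float FogMinDist;
uniform float FogMaxDist;
varying lowp vec4 DestinationColor;
varying float EyeToVertexDist;
void main(void)
{
    float fogMaxDistABitCloser = FogMaxDist * 0.98;
    float fogFactor = clamp((fogMaxDistABitCloser - EyeToVertexDist) / (fogMaxDistABitCloser - FogMinDist), 0.0, 1.0);
    // Fade towards the known background colour inside the shader, so overlapping
    // lines converge to the same colour instead of darkening each other.
    gl_FragColor = vec4(mix(BackgroundColor.rgb, DestinationColor.rgb, fogFactor), 1.0);
}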
In general you might want to draw the grid to a frame buffer object (FBO) with an attached texture and then draw the whole texture in a single draw call using your fog shader and blending. The drawing to the FBO should be done with blending disabled.
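A rough sketch of that setup in plain GL ES 2.0 calls could look like this (fbo, gridTexture, width and height are placeholder names, and completeness/error checks are omitted):
// One-time setup: a colour texture attached to an FBO.
GLuint fbo, gridTexture;
glGenTextures(1, &gridTexture);
glBindTexture(GL_TEXTURE_2D, gridTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gridTexture, 0);

// Each frame: draw the grid into the FBO with blending disabled ...
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glDisable(GL_BLEND);
// ... draw the grid lines here ...

// ... then draw one textured full-screen quad to the default framebuffer with blending enabled.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... draw a quad sampling gridTexture with the fog shader ...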
There are other ways, such as drawing the grid to a stencil buffer first and then redrawing a full-screen rect, applying a colour with your shader and blending.
Related
Character portrait selection. Clicking next loads the next image in an array, clicking back loads the previous image. Instead of a sharp change from one image to another, I want a variable-speed fading out of the current image and fading in of the new image. Dissolve/render effects would be nice, but even an opacity tween 100->0 / 0->100 over x seconds would do.
I'd really prefer not to use multiple objects on top of each other, alternating between them for the "current texture".
Is this possible?
We can do fade-in and fade-out by animating modulate, which is the simple solution.
For dissolve we can use shaders, and there is a lot we can do with shaders. There are plenty of dissolve shaders you can find online... I'll explain some useful variations, favoring those that are easy to tinker with.
Fade-in and Fade-out
We can do this with a Tween object and either the modulate or self-modulate properties.
I would go ahead and create a Tween in code:
var tween:Tween
func _ready():
tween = Tween.new()
add_child(tween)
Then we can use interpolate_property to manipulate modulate:
var duration_seconds = 2
tween.interpolate_property(self, "modulate",
Color.white, Color.transparent, duration_seconds)
Don't forget to call start:
tween.start()
We can take advantage of yield, to add code that will execute when the tween is completed:
yield(tween, "tween_completed")
Then we change the texture:
self.texture = target_texture
And then interpolate modulate in the opposite direction:
tween.interpolate_property(self, "modulate",
Color.transparent, Color.white, duration_seconds)
tween.start()
Note that I'm using self, but you could be manipulating another node. Also, target_texture is whatever texture you want to transition into, loaded beforehand.
Dissolve Texture
For any effect that requires both textures to be partially visible, use a custom shader. Go ahead and add a ShaderMaterial to your TextureRect (or similar), and give it a new Shader file.
This will be our starting point:
shader_type canvas_item;
void fragment()
{
COLOR = texture(TEXTURE, UV);
}
That is a shader that simply shows the texture. Your TextureRect should look the same as it does without this shader material. Let us add the second texture with a uniform:
shader_type canvas_item;
uniform sampler2D target_texture;
void fragment()
{
COLOR = texture(TEXTURE, UV);
}
You should see a new entry under Shader Param in the Inspector panel for the new texture.
We also need another parameter to interpolate. It will be 0 to display the original Texture, and 1 for the alternative texture. In Godot we can add a hint for the range:
shader_type canvas_item;
uniform sampler2D target_texture;
uniform float weight: hint_range(0, 1);
void fragment()
{
COLOR = texture(TEXTURE, UV);
}
In Shader Param on the Inspector Panel you should now see the new float, with a slider that goes from 0 to 1.
It does nothing, of course. We still need the code to mix the textures:
shader_type canvas_item;
uniform sampler2D target_texture;
uniform float weight: hint_range(0, 1);
void fragment()
{
vec4 color_a = texture(TEXTURE, UV);
vec4 color_b = texture(target_texture, UV);
COLOR = mix(color_a, color_b, weight);
}
That will do. However, I'll do a little refactoring for ease of modification later in this answer:
shader_type canvas_item;
uniform sampler2D target_texture;
uniform float weight: hint_range(0, 1);
float adjust_weight(float input, vec2 uv)
{
return input;
}
void fragment()
{
vec4 color_a = texture(TEXTURE, UV);
vec4 color_b = texture(target_texture, UV);
float adjusted_weight = adjust_weight(weight, UV);
COLOR = mix(color_a, color_b, adjusted_weight);
}
And now we manipulate it, again with Tween. I'll assume you have a Tween created the same way as before, and that you already have your target_texture loaded.
We will start by setting weight to 0 and assigning target_texture:
self.material.set("shader_param/weight", 0)
self.material.set("shader_param/target_texture", target_texture)
We can tween weight:
var duration_seconds = 4
tween.interpolate_property(self.material, "shader_param/weight",
0, 1, duration_seconds)
tween.start()
yield(tween, "tween_completed")
And then change the texture:
self.texture = target_texture
Making Dissolve Fancy
We can get fancy with our dissolve effect. For example, we can add another texture to control how fast different parts transition from one texture to the other:
uniform sampler2D transition_texture;
Set it to a new NoiseTexture (and don't forget to set the Noise property of the NoiseTexture). I'll be using the red channel of the texture.
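If you prefer to set it from code instead of the Inspector, something like this should work in Godot 3.x (a sketch, assuming the uniform is named transition_texture as above):
var noise_tex = NoiseTexture.new()
noise_tex.noise = OpenSimplexNoise.new()
self.material.set("shader_param/transition_texture", noise_tex)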
A simple solution looks like this:
float adjust_weight(float input, vec2 uv)
{
float transition = texture(transition_texture, uv).r;
return min(1.0, input * (transition + 1.0));
}
Here the interpolation is always linear, and the transition controls the slope.
We can also do something like this:
float adjust_weight(float input, vec2 uv)
{
float transition = texture(transition_texture, uv).r;
float input_2 = input * input;
return input_2 + (input - input_2) * transition;
}
This ensures that an input of 0 returns 0 and an input of 1 returns 1, while transition controls the curve in between.
If you plot x * x + (x - x * x) * y in the range from 0 to 1 on both axes, you will see that when y (transition) is 1 you have a line, but when y is 0 you have a parabola.
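To spell that out: with transition = 0 the formula reduces to input * input (the parabola), and with transition = 1 it reduces to input * input + (input - input * input) = input (the straight line, i.e. a plain linear mix).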
Alternatively, we can change adjust_weight to a step function:
float adjust_weight(float input, vec2 uv)
{
float transition = texture(transition_texture, uv).r;
return smoothstep(transition, transition, input);
}
Using smoothstep instead of step to avoid artifacts near 0.
This will not interpolate between the textures; instead, each pixel will change from one texture to the other at a different instant. If your noise texture is continuous, you will see the dissolve advance through the gradient.
Ah, but it does not have to be a noise texture! Any gradient will do. You can create a texture defining how you want the dissolve to happen (example, under MIT license).
You probably can come up with other versions for that function.
Making Dissolve Edgy
We could also add an edge color. We need, of course, to add a color parameter:
uniform vec4 edge_color: hint_color;
And we will add that color at an offset of where we transition. We need to define that offset:
uniform float edge_weight_offset: hint_range(0, 1);
Now you can add this code:
float adjusted_weight = adjust_weight(max(0.0, weight - edge_weight_offset * (1.0 - step(1.0, weight))), UV);
float edge_weight = adjust_weight(weight, UV);
color_a = mix(color_a, edge_color, edge_weight);
Here the factor (1.0 - step(1.0, weight)) makes sure that when weight is 0 we pass 0, and when weight is 1 we pass 1. Sadly, we also need to make sure the difference does not result in a negative value. There must be another way to do it… How about this:
float weight_2 = weight * weight;
float adjusted_weight = adjust_weight(weight_2, UV);
float edge_weight = adjust_weight(weight_2 + (weight - weight_2) * edge_weight_offset, UV);
color_a = mix(color_a, edge_color, edge_weight);
OK, feel free to inline adjust_weight, whichever version you are using (this makes edges with the smoothstep version; with the other it blends a color into the transition).
Dissolve Alpha
It is not hard to modify the above shader to dissolve to alpha instead of dissolving to another texture. First of all, remove target_texture and color_b, which we no longer need. And instead of mix, we can do this:
COLOR = vec4(color_a.rgb, 1.0 - adjusted_weight);
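Putting it together, the whole alpha-dissolve shader could look like this (a sketch; it keeps the trivial adjust_weight so you can swap in any of the variations shown earlier):
shader_type canvas_item;
uniform float weight: hint_range(0, 1);
// Replace the body of this function with the noise or smoothstep variations
// from above to change how the fade advances across the image.
float adjust_weight(float input, vec2 uv)
{
    return input;
}
void fragment()
{
    vec4 color_a = texture(TEXTURE, UV);
    float adjusted_weight = adjust_weight(weight, UV);
    COLOR = vec4(color_a.rgb, 1.0 - adjusted_weight);
}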
And to use it, do the same as before to transition out:
self.material.set("shader_param/weight", 0)
var duration_seconds = 2
tween.interpolate_property(self.material, "shader_param/weight",
0, 1, duration_seconds)
tween.start()
yield(tween, "tween_completed")
That will make it transparent, so you can change the texture:
self.texture = target_texture
And transition in (with the new texture):
tween.interpolate_property(self.material, "shader_param/weight",
1, 0, duration_seconds)
tween.start()
I am developing a data-viewing application in which time-series data from multiple sensors is plotted. The data is always gigabytes in size, so I opted to use OpenGL for this purpose.
Now I want the data to scroll vertically for better analysis. I have managed this with two methods so far, but neither seems complete. The first method I tried was to simply change the index of the sensors periodically with the motion of the scroll wheel (using np.roll). This meets my requirement, but it looks like the data is jumping; I want a smooth scrolling transition.
The second method was to apply a position change in the vertex shader. I simply pass a float value between -1 and 1 on the Y axis, and the shader wraps the positional data above it to the bottom. The problem with this method is that whenever a sensor's data sits across both ends (-1 and 1), the line gets joined across the whole screen.
I am unable to figure out what else I can do in the shader to make this happen.
Below are the shaders I am using and the image result I am getting.
I pass the data using a VBO and call gl.glDrawArrays(gl.GL_LINE_STRIP, i * self.count, self.count) in a loop to draw the lines.
VS = '''#version 450
uniform mat4 projection;
uniform mat4 modelview;
layout(location = 0) in vec2 position;
layout(location = 1) in vec3 color;
uniform float level;
void main() {
    if (position.y > -level) {
        gl_Position = vec4(position.x, position.y - (1.0 - level), 0., 1.);
    } else {
        gl_Position = vec4(position.x, position.y + (1.0 + level), 0., 1.);
    }
}'''
FS = '''#version 450
// Output variable of the fragment shader, which is a 4D vector containing the
// RGBA components of the pixel color.
out vec4 outColor;
uniform vec3 triangleColor;
void main()
{
    outColor = vec4(triangleColor, 1.0);
}'''
So I was hoping to find a method in which I first plot the data using the shader and then rotate the whole pixel array index.
I have used this website to create a shader that displays a snowman and some snowflakes:
http://glslsandbox.com/e#54840.8
In case the link doesn't work, here's the code:
#ifdef GL_ES
precision mediump float;
#endif
#extension GL_OES_standard_derivatives : enable
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
uniform sampler2D backbuffer;
#define PI 3.14159265
vec2 p;
float bt;
float seed=0.1;
float rand(){
seed+=fract(sin(seed)*seed*1000.0)+.123;
return mod(seed,1.0);
}
//No I don't know why he looks so creepy
float thicc=.003;
vec3 color=vec3(1.);
vec3 border=vec3(.4);
void diff(float p){
if( (p)<thicc)
gl_FragColor.rgb=color;
}
void line(vec2 a, vec2 b){
vec2 q=p-a;
vec2 r=normalize(b-a);
if(dot(r,q)<0.){
diff(length(q));
return;
}
if(dot(r,q)>length(b-a)){
diff(length(p-b));
return;
}
vec2 rr=vec2(r.y,-r.x);
diff(abs(dot(rr,q)));
}
void circle(vec2 m,float r){
vec2 q=p-m;
vec3 c=color;
diff(length(q)-r);
color=border;
diff(abs(length(q)-r));
color=c;
}
void main() {
p=gl_FragCoord.xy/resolution.y;
bt=mod(time,4.*PI);
gl_FragColor.rgb=vec3(0.);
vec2 last;
//Body
circle(vec2(1.,.250),.230);
circle(vec2(1.,.520),.180);
circle(vec2(1.,.75),.13);
//Nose
color=vec3(1.,.4,.0);
line(vec2(1,.720),vec2(1.020,.740));
line(vec2(1,.720),vec2(.980,.740));
line(vec2(1,.720),vec2(.980,.740));
line(vec2(1.020,.740),vec2(.980,.740));
border=vec3(0);
color=vec3(1);
thicc=.006;
//Eyes
circle(vec2(.930,.800),.014);
circle(vec2(1.060,.800),.014);
color=vec3(.0);
thicc=0.;
//mouth
for(float x=0.;x<.1300;x+=.010)
circle(vec2(.930+x,.680+cos(x*40.0+.5)*.014),.005);
//buttons
for(float x=0.02;x<.450;x+=.070)
circle(vec2(1.000,.150+x),0.01);
color=vec3(0.9);
thicc=0.;
//snowflakes
for(int i=0;i<99;i++){
circle(vec2(rand()*2.0,mod(rand()-time,1.0)),0.01);
}
gl_FragColor.a=1.0;
}
The way it works is that, for each pixel on the screen, the shader checks for each element (button, body, head, eyes, mouth, carrot, snowflake) whether it's inside its area, in which case it replaces the current color at that position with the current draw color.
So we have a complexity of O(pixels_width * pixels_height * elements), which leads to the shader slowing down when too many snowflakes are on screen.
So now I was wondering, how can this code be optimized? I already thought about using bounding boxes or even a 3D octree (I guess in 2D that would be a quadtree) to quickly discard elements that are outside a certain pixel's (or fragment's) area.
Does anyone have another idea how to optimize this shader code? Keep in mind that every shader execution is completely independent of all the others and I can't use any overarching structure.
You would need to break up your screen into regions ("tiles") and compute the snowflakes per tile. Tiles would have the same number of snowflakes and share the same seed, so that a particle leaving one tile's boundary has an identical particle entering the next tile, making it look seamless. The repeating pattern might still be visible depending on your settings, but you could consider adding an extra uniform transformation, potentially based on the final screen position.
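As a rough sketch of that idea, reusing p, time and circle() from the shader above (the tile size, the hash22 helper and one flake per cell are illustrative choices, not part of the original code):
vec2 hash22(vec2 c){
    // cheap per-cell pseudo-random pair; any hash will do
    c = fract(c * vec2(123.34, 345.45));
    c += dot(c, c + 34.345);
    return fract(vec2(c.x * c.y, c.x + c.y));
}
void snowflakes(){
    float tileSize = 0.1;
    vec2 cell = floor(p / tileSize);
    // Only the pixel's own cell and its 8 neighbours are checked,
    // instead of looping over all 99 flakes for every pixel.
    for(int dx = -1; dx <= 1; dx++){
        for(int dy = -1; dy <= 1; dy++){
            vec2 c = cell + vec2(float(dx), float(dy));
            vec2 rnd = hash22(c);
            // one flake per cell, falling and wrapping inside its own cell
            circle((c + vec2(rnd.x, fract(rnd.y - time))) * tileSize, 0.01);
        }
    }
}
Calling snowflakes() from main() in place of the 99-iteration loop keeps the per-pixel cost constant no matter how many flakes are on screen.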
On a side note, your method for drawing circles could be made more efficient by removing all conditional branching (and it could look anti-aliased in the process), and you could get rid of the square root generated by length().
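For example, the filled part of the circle could be written roughly like this (a sketch, not a drop-in replacement for circle() since it skips the border; aa is an illustrative anti-aliasing width):
void circleFast(vec2 m, float r){
    vec2 q = p - m;
    float d2 = dot(q, q);          // squared distance: no sqrt from length()
    float aa = 0.002;              // anti-aliasing band, tune to taste
    float inside = 1.0 - smoothstep(r * r - aa, r * r + aa, d2);
    gl_FragColor.rgb = mix(gl_FragColor.rgb, color, inside);  // no if, smooth edge
}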
I want to have a fullscreen mode that keeps the aspect ratio by adding black bars on either side. I tried just creating a display mode, but I can't make it fullscreen unless it's a pre-approved resolution, and when I use a bigger display than the native resolution the pixels become messed up, and lines appear between all of the tiles in the game for some reason.
I think I need to use FBOs to render the scene to a texture instead of the window, and then use a fullscreen-approved resolution and render the texture properly stretched out in the center of the screen, but I just don't understand how to render to a texture in order to do that, or how to stretch an image. Could someone please help me?
EDIT
I got fullscreen working, but it makes everything look all broken. There are random lines on the edges of anything that's drawn to the window. There are no glitchy lines when it's at the native resolution, though. Here's my code:
Display.setTitle("Mega Man");
try{
Display.setDisplayMode(Display.getDesktopDisplayMode());
Display.create();
}catch(LWJGLException e){
e.printStackTrace();
}
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,WIDTH,HEIGHT,0,1,-1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
try{Display.setFullscreen(true);}catch(Exception e){}
int sh=Display.getHeight();
int sw=WIDTH*sh/HEIGHT;
GL11.glViewport(Display.getWidth()/2-sw/2, 0, sw, sh);
Screenshot of the glitchy fullscreen here: http://sta.sh/021fohgnmxwa
EDIT
Here is the texture rendering code that I use to draw everything:
public static void DrawQuadTex(Texture tex, int x, int y, float width, float height, float texWidth, float texHeight, float subx, float suby, float subd, String mirror){
if (tex==null){return;}
if (mirror==null){mirror = "";}
//subx, suby, and subd are to grab sprites from a sprite sheet. subd is the measure of both the width and length of the sprite, as only images with dimensions that are the same and are powers of 2 are properly displayed.
int xinner = 0;
int xouter = (int) width;
int yinner = 0;
int youter = (int) height;
if (mirror.indexOf("h")>-1){
xinner = xouter;
xouter = 0;
}
if (mirror.indexOf("v")>-1){
yinner = youter;
youter = 0;
}
tex.bind();
glTranslatef(x,y,0);
glBegin(GL_QUADS);
glTexCoord2f(subx/texWidth,suby/texHeight);
glVertex2f(xinner,yinner);
glTexCoord2f((subx+subd)/texWidth,suby/texHeight);
glVertex2f(xouter,yinner);
glTexCoord2f((subx+subd)/texWidth,(suby+subd)/texHeight);
glVertex2f(xouter,youter);
glTexCoord2f(subx/texWidth,(suby+subd)/texHeight);
glVertex2f(xinner,youter);
glEnd();
glLoadIdentity();
}
Just to keep it clean, I'll give you a real answer and not just a comment.
The aspect ratio problem can be solved with the help of glViewport. Using this method you can decide which area of the surface will be rendered to. The default viewport covers the whole surface.
Since the second problem with the corrupt rendering (also described here: https://stackoverflow.com/questions/28846531/sprite-game-in-full-screen-aliasing-issue) appeared after changing the viewport, I will give my thoughts about it in this answer as well.
Without knowing exactly how the rendering code for the tile background looks, I would guess that the problem is due to differences in resolution between the glViewport and glOrtho calls.
Example: if the glOrtho resolution is half the viewport resolution, then each OpenGL unit is actually 2 pixels. If you then render a tile between x=0 and x=9 and the next one between x=10 and x=19, you will get an empty space between them.
To solve this you can change the resolutions so that they are the same, or you can render the tiles to overlap: the first one from x=0 to x=10, the second one from x=10 to x=20, and so on.
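As a sketch of the overlap idea using the DrawQuadTex helper from the question (tileCount, tileSize, y, tileTexture and the texture parameters are placeholders):
for (int i = 0; i < tileCount; i++) {
    int x0 = i * tileSize;
    int x1 = (i + 1) * tileSize; // the next tile starts exactly where this one ends
    DrawQuadTex(tileTexture, x0, y, x1 - x0, tileSize, texWidth, texHeight, subx, suby, subd, null);
}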
Without seeing the tile rendering code I can't verify that this is the problem, though.
I have a quad covering the area between -0.5, 0.5 and 0.5, -0.5 on a cleared viewport with a stencil and alpha buffer. In the fragment shader I apply a texture which happens to have a shape -- in this case a circle -- outside of which it is fully transparent.
I am trying to figure out how I can essentially "cut" that non-alpha textured shape out of the next draw of the shape: I draw the first quad, then draw it again offset to some degree (say between -0.3, 0.5 and 0.8, -0.5), and only the part of the second quad's texture that does not overlap the first quad's opaque shape should be drawn.
It is easy enough to do this with a stencil buffer such that it applies to the quad but is blind to the texture; however, I would like to apply it to the texture.
So, as an example of what I want: what actually gets rendered of the conceptual circle texture would be a crescent in that case. I am not sure which tests I should be using for this.
I think you want to stick with the stencil buffer, but the alpha test isn't available in ES 2.0 per the philosophy that anything that can be done in a shader isn't supplied as fixed functionality.
Instead, you can insert one of your own choosing inside the fragment shader, thanks to the discard keyword. Supposing you had the most trivial textured fragment shader:
varying mediump vec2 texCoordVarying;
uniform sampler2D tex2D;
void main()
{
gl_FragColor = texture2D(tex2D, texCoordVarying);
}
You could throw in an alpha test so that pixels with an alpha of less than 0.1 don't proceed down the pipeline, and hence don't affect the stencil buffer with:
varying mediump vec2 texCoordVarying;
uniform sampler2D tex2D;
void main()
{
vec4 colour = texture2D(tex2D, texCoordVarying);
if(colour.a > 0.1)
gl_FragColor = colour;
else
discard;
}