I'm a beginner in OSL and got quite confused about its "radiance closures".
Take the diffuse closure as an example. We can directly write
Ci = diffuse(N)
in an .osl file to use the diffuse closure. The documentation says "the internals of the closure are left to the implementation in the renderer". But I know diffuse is a built-in closure in OSL, and
OSL has already implemented the eval_reflect(), eval_transmit() and sample() interfaces for diffuse in bsdf_diffuse.cpp. For example, eval_reflect() is as follows:
Color3 eval_reflect (const Vec3 &omega_out, const Vec3 &omega_in, float& pdf) const
{
    float cos_pi = std::max(m_N.dot(omega_in),0.0f) * (float) M_1_PI;
    pdf = cos_pi;
    return Color3 (cos_pi, cos_pi, cos_pi);
}
So it seems there is nothing left for the host renderer to do. What exactly does "the internals of the closure are left to the implementation in the renderer" mean, then?
Any explanation will be appreciated! Thanks!
The question has been open for some time, but I'll give it a shot anyway.
In bsdf_diffuse.cpp, and in fact in every bsdf_*.cpp file under the oslexec folder, you'll find classes that inherit from BSDFClosure, meaning each one of them is itself a closure.
The methods
Color3 eval_reflect (const Vec3 &omega_out, const Vec3 &omega_in, float& pdf) const;
Color3 eval_transmit (const Vec3 &omega_out, const Vec3 &omega_in, float& pdf) const;
ustring sample (const Vec3 &Ng,
                const Vec3 &omega_out, const Vec3 &domega_out_dx, const Vec3 &domega_out_dy,
                float randu, float randv,
                Vec3 &omega_in, Vec3 &domega_in_dx, Vec3 &domega_in_dy,
                float &pdf, Color3 &eval) const;
are called later, and as many times as needed, by the host renderer. Hence the phrase about the renderer's internals: it's up to the renderer to decide when to actually call these (for example, while evaluating or importance-sampling the BSDF during its own light-transport loop).
I have used this website to create a shader that displays a snowman and some snowflakes:
http://glslsandbox.com/e#54840.8
In case the link doesn't work, here's the code:
#ifdef GL_ES
precision mediump float;
#endif
#extension GL_OES_standard_derivatives : enable

uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
uniform sampler2D backbuffer;

#define PI 3.14159265

vec2 p;
float bt;

float seed=0.1;
float rand(){
    seed+=fract(sin(seed)*seed*1000.0)+.123;
    return mod(seed,1.0);
}

//No I don't know why he looks so creepy
float thicc=.003;
vec3 color=vec3(1.);
vec3 border=vec3(.4);

// paint the current draw color if the distance passed in is below the line thickness
void diff(float p){
    if( (p)<thicc)
        gl_FragColor.rgb=color;
}

// distance from the current pixel to the segment a-b, painted via diff()
void line(vec2 a, vec2 b){
    vec2 q=p-a;
    vec2 r=normalize(b-a);
    if(dot(r,q)<0.){
        diff(length(q));
        return;
    }
    if(dot(r,q)>length(b-a)){
        diff(length(p-b));
        return;
    }
    vec2 rr=vec2(r.y,-r.x);
    diff(abs(dot(rr,q)));
}

// filled disc of radius r around m, outlined with the border color
void circle(vec2 m,float r){
    vec2 q=p-m;
    vec3 c=color;
    diff(length(q)-r);
    color=border;
    diff(abs(length(q)-r));
    color=c;
}

void main() {
    p=gl_FragCoord.xy/resolution.y;
    bt=mod(time,4.*PI);
    gl_FragColor.rgb=vec3(0.);
    vec2 last;
    //Body
    circle(vec2(1.,.250),.230);
    circle(vec2(1.,.520),.180);
    circle(vec2(1.,.75),.13);
    //Nose
    color=vec3(1.,.4,.0);
    line(vec2(1,.720),vec2(1.020,.740));
    line(vec2(1,.720),vec2(.980,.740));
    line(vec2(1,.720),vec2(.980,.740));
    line(vec2(1.020,.740),vec2(.980,.740));
    border=vec3(0);
    color=vec3(1);
    thicc=.006;
    //Eyes
    circle(vec2(.930,.800),.014);
    circle(vec2(1.060,.800),.014);
    color=vec3(.0);
    thicc=0.;
    //mouth
    for(float x=0.;x<.1300;x+=.010)
        circle(vec2(.930+x,.680+cos(x*40.0+.5)*.014),.005);
    //buttons
    for(float x=0.02;x<.450;x+=.070)
        circle(vec2(1.000,.150+x),0.01);
    color=vec3(0.9);
    thicc=0.;
    //snowflakes
    for(int i=0;i<99;i++){
        circle(vec2(rand()*2.0,mod(rand()-time,1.0)),0.01);
    }
    gl_FragColor.a=1.0;
}
The way it works is that, for each pixel on the screen, the shader checks for each element (button, body, head, eyes, mouth, carrot, snowflake) whether the pixel is inside its area, in which case it replaces the current color at that position with the current draw color.
So we have a complexity of O(pixels_width * pixels_height * elements), which leads to the shader slowing down when too many snowflakes are on screen.
So now I was wondering: how can this code be optimized? I already thought about using bounding boxes, or even an octree (in 2D, I guess that would be a quadtree), to quickly discard elements that are outside a certain pixel (or fragment) area.
Does anyone have another idea how to optimize this shader code? Keep in mind that every shader invocation is completely independent of all the others, so I can't use any overarching data structure.
You would need to break up your screen into regions ("tiles") and compute the snowflakes per tile. All tiles would have the same number of snowflakes and share the same seed, so that a particle leaving one tile's boundary would have an identical particle entering the next tile, making the result look seamless. The repeating pattern might still be noticeable depending on your settings, but you could consider adding an extra uniform transformation, potentially based on the final screen position.
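A minimal sketch of that idea, assuming it is dropped into the question's shader above main() (it reuses the existing time uniform); the tile size, flake count and hash function are arbitrary choices of mine, not taken from the original:

// Cheap stateless hash in [0, 1); any per-flake hash would do.
float hash(vec2 v) {
    return fract(sin(dot(v, vec2(12.9898, 78.233))) * 43758.5453);
}

// Returns 1.0 on a snowflake and 0.0 elsewhere, with a soft edge.
// Every tile holds the same few flakes, so the pattern tiles seamlessly.
float snowMask(vec2 uv) {
    const float TILE = 0.25;          // tile size in the shader's uv units (assumed)
    vec2 local = fract(uv / TILE);    // position of this fragment inside its tile
    float d = 1e9;
    for (int i = 0; i < 4; i++) {     // only a handful of flakes per tile
        vec2 flake = vec2(hash(vec2(float(i), 1.0)),
                          fract(hash(vec2(float(i), 2.0)) - time * 0.1)); // falls over time
        vec2 dv = abs(local - flake);
        dv = min(dv, 1.0 - dv);       // wrap-around distance so flakes cross tile edges seamlessly
        d = min(d, length(dv));
    }
    // soft-edged disc instead of an if(), roughly the size of the original flakes
    return 1.0 - smoothstep(0.008, 0.012, d * TILE);
}

In main(), the 99-circle loop could then be replaced by something like gl_FragColor.rgb = mix(gl_FragColor.rgb, vec3(0.9), snowMask(p));, so the per-fragment cost no longer grows with the number of visible snowflakes.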
On a side note, your method for drawing circles could be made more efficient by removing all conditional branching (and it would look anti-aliased in the process), and it could also get rid of the square root hidden inside length().
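A possible branch-free, anti-aliased variant of the circle fill, written as a sketch against the question's globals p and color (it only does the fill, not the border ring):

// 1.0 well inside the disc, 0.0 well outside, with a smooth edge instead of an if().
// Working on the squared distance avoids the square root inside length().
float fillCircle(vec2 centre, float radius) {
    vec2 q = p - centre;
    float d2 = dot(q, q);             // squared distance to the centre
    float r2 = radius * radius;
    float aa = 2.0 * radius * 0.003;  // about 0.003 uv units of edge smoothing, matching the original thicc
    return 1.0 - smoothstep(r2 - aa, r2 + aa, d2);
}

It would be used as a coverage value rather than an overwrite, e.g. gl_FragColor.rgb = mix(gl_FragColor.rgb, color, fillCircle(vec2(1.0, .250), .230));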
I finished adding lighting to my object, but most of it comes from an example I found on the internet, and I want to understand what I'm doing. Can somebody explain to me in detail what every step in the code does?
Fragment Program:
lightColor: the light color that we want (I took red as an example here)
shininess: how much light we want to use; it can also make the picture darker
gl_FragColor: writes the total color. But why do we compute texture2D(..) + facingRatio * ...?
Vertex Program:
- Why gl_MultiTexCoord0.xy?
- And can someone explain how the light direction is calculated?
varying vec2 Texcoord;
uniform sampler2D baseMap;
uniform vec4 lightColor;
uniform float shininess;
varying vec3 LightDirection;
varying vec3 Normal;
void main(void)
{
    float facingRatio = dot(normalize(Normal), normalize(LightDirection));
    gl_FragColor = texture2D(baseMap, Texcoord) + facingRatio * lightColor * shininess;
}
varying vec2 Texcoord;
uniform mat4 modelView;
uniform vec3 lightPos;
varying vec3 Normal;
varying vec3 LightDirection;
void main(void)
{
    gl_Position = gl_ProjectionMatrix * modelView * gl_Vertex;
    Texcoord = gl_MultiTexCoord0.xy;
    Normal = normalize( gl_NormalMatrix * gl_Normal);
    vec4 objectPosition = gl_ModelViewMatrix * gl_Vertex;
    LightDirection = (gl_ModelViewMatrix * vec4(lightPos, 1)).xyz - objectPosition.xyz;
}
Vertex shader
I guess an old version of OpenGL is used and that's why gl_MultiTexCoord0.xy is used. Have a look at this page to understand multitexturing and the built-in variable.
The variable LightDirection is easy to calculate, because it is just the vector pointing from the object towards the light source. So you subtract the current position (here: objectPosition) from the position of the light. In this example this is done in eye space, which has the advantage that the view direction is easy to approximate (it's just vec3(0.0, 0.0, 1.0)).
The other lines are the usual graphics transformations of a vertex from object space into eye space. To transform the normal into eye space, the normal matrix is needed instead of the modelview matrix; it is the transpose of the inverse of (the upper-left 3x3 of) the modelview matrix.
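For shaders that pass their own matrices (like the modelView uniform here) instead of relying on the built-in gl_NormalMatrix, the same matrix could be built in the shader itself. This is only a sketch; inverse() requires GLSL 1.40 or newer, so on older versions the matrix is normally computed on the CPU and uploaded as a uniform:

// normal matrix = transpose of the inverse of the upper-left 3x3 of the modelview
mat3 normalMatrix = transpose(inverse(mat3(modelView)));
vec3 eyeNormal = normalize(normalMatrix * gl_Normal);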
Fragment shader
The facingRatio defines how strongly the current fragment should be lit, and the dot product between the normal and the light direction is used to measure it.
In the next line, the fragment color is calculated by reading the color for this fragment out of the texture and adding the light's contribution (facingRatio * lightColor * shininess) to it.
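Note that adding the light term to the texture color, as this shader does, mostly brightens the image. A more conventional Lambert-style variant (just a sketch reusing the question's uniforms and varyings, not what the original code does) would clamp the facing ratio and modulate the texture instead:

// clamp so back-facing fragments receive no (negative) light,
// then scale the texture color by an ambient term plus the diffuse term
float facingRatio = max(dot(normalize(Normal), normalize(LightDirection)), 0.0);
vec4 baseColor = texture2D(baseMap, Texcoord);
gl_FragColor = baseColor * (0.2 + facingRatio * lightColor * shininess); // 0.2 = assumed ambient level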
I'm trying to compile a program (I previously ported it from the Cg language). The fragment shader is:
precision mediump float;
precision mediump int;
uniform float time;
uniform float aspect;
uniform sampler2D sampler_main;
varying vec4 v_texCoord;
void main()
{
    vec3 ret;
    vec2 uv = v_texCoord.xy;
    float rad=sqrt((uv.x-0.5)*(uv.x-0.5)*4.0+(uv.y-0.5)*(uv.y-0.5)*4.0)*.7071067;
    float ang=atan(((uv.y-0.5)*2.0),((uv.x-0.5)*2.0));
    vec2 uv1 = (uv-0.5)*aspect.xy;
    float rad1 = .1/(length(uv1) + .1)) ;
    vec2 uv2 = vec2 (ang/3.14, rad1);
    uv2.y = uv2.y +0.1*time;
    uv2.x = uv2.x +.0*time;
    vec2 uv3 = vec2 (ang/3.14, rad1*1.5);
    uv3.y = uv3.y + 0.08*time ;
    uv3.x = uv3.x + time/32;
    vec3 crisp = 2*texture2D(sampler_main, uv2).xyz;
    vec3 lay1 = vec3 (0,0,1)*uv.y*pow(1-rad,8);
    crisp = 3*crisp * pow(rad,1);
    float mask = saturate(1-4*rad);
    ret = crisp + lay1*mask + mask * texture2D(sampler_main, uv).xyz;
    gl_FragColor.xyz = ret;
    gl_FragColor.w = 1.0;
}
I get an error on the line
uv3.x = uv3.x + time/32;
When I change it to
uv3.x = uv3.x + time/32.0;
the problem is solved, but I don't understand the root cause.
The same problem occurs with these lines:
float mask = saturate(1-4*rad); => float mask = saturate(1.0-4.0*rad);
vec3 crisp = 2*texture2D(sampler_main, uv2).xyz; => vec3 crisp = 2.0*texture2D(sampler_main, uv2).xyz;
vec3 lay1 = vec3 (0,0,1)*uv.y*pow(1-rad,8); => vec3 lay1 = vec3 (0,0,1)*uv.y*pow(1.0-rad,8.0);
crisp = 3*crisp * pow(rad,1); => crisp = 3.0*crisp * pow(rad,1.0);
Could someone explain:
Why can't I mix float and int constants in the same expression?
Is there any workaround that allows me to mix float and int constants?
Implicit casts are not allowed in early GLSL. So try an explicit cast:
uv3.x = uv3.x + time/float(32);
The GLSL 1.1 Spec says in Chapter 4 (page 16):
The OpenGL Shading Language is type safe. There are no implicit conversions between types
Recent GLSL allows implicit type casts. The GLSL 4.4 Spec says in Chapter 4 (page 25):
The OpenGL Shading Language is type safe. There are some implicit conversions between types.
Exactly how and when this can occur is described in section 4.1.10 “Implicit Conversions” and as
referenced by other sections in this specification.
And later on starting at page 39 there is a list of possible implicit conversions.
Since your question has the opengl-es-2.0 tag, the relevant spec is the corresponding OpenGL ES Shading Language spec.
It says in section "5.8 Assignments" (page 46):
The lvalue-expression and rvalue-expression must have the same type. All desired type-conversions must be specified explicitly via a constructor.
and in section "5.9 Expressions" (page 48):
The arithmetic binary operators add (+), subtract (-), multiply (*), and divide (/) operate on integer and floating-point typed expressions (including vectors and matrices). The two operands must be the same type, or one can be a scalar float and the other a float vector or matrix, or one can be a scalar integer and the other an integer vector.
All you need to do is use float constants in float expressions. In your first example, use 32.0 instead of 32. Subtle detail, if you're used to writing 32.0f from C/C++: The f postfix is not supported in the GLSL version that goes with ES 2.0. So writing 32.0f is an error. It's allowed in ES 3.0.
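Applied to the question's shader, the accepted and rejected forms look like this (a short sketch of the ES 2.0 rules quoted above):

uv3.x = uv3.x + time / 32.0;        // float literal: fine
uv3.x = uv3.x + time / float(32);   // explicit constructor: also fine
// uv3.x = uv3.x + time / 32;       // error: float divided by int
// uv3.x = uv3.x + time / 32.0f;    // error in ES 2.0: the 'f' suffix is not supported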
While I'm sure that some people will violently disagree with my point of view: I think not supporting these automatic type conversions is a good feature. I believe it's useful to always be aware of what type you're operating on, and use the correct types, without relying on automatic conversions. Type safety is valuable, and the loose typing in languages like C and C++ is a frequent source of errors.
I need to modify the appearance of textures at runtime.
Some examples would be rendering them in grayscale to indicate deactivation, with an orange tint for selection, and so on.
Right now my fragment shader is pretty simple:
#version 330
in vec2 fragmentUV;
out vec4 outputColor;
uniform sampler2D textureNode;
void main()
{
    outputColor = texture(textureNode, fragmentUV).rgba;
}
I thought I could control these few cases by setting a uniform variable to some hard-coded values...
That's how you can convert an image into grayscale: http://glsl.heroku.com/e#18369.1
float grayScale = dot(imageColor.rgb, vec3(0.299, 0.587, 0.114));
if (IsGrayScale){
    gl_FragColor = vec4(grayScale, grayScale, grayScale, 1.0);
} else {
    gl_FragColor = imageColor;
}
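Building on that, one way to cover all of the cases from a single uniform could look like the sketch below; the names u_mode and u_tint (and the mode numbering) are my assumptions, not an existing API:

#version 330

in vec2 fragmentUV;
out vec4 outputColor;

uniform sampler2D textureNode;
uniform int u_mode;   // assumed: 0 = normal, 1 = deactivated (grayscale), 2 = selected (tinted)
uniform vec4 u_tint;  // assumed: e.g. vec4(1.0, 0.5, 0.0, 1.0) for an orange selection

void main()
{
    vec4 texel = texture(textureNode, fragmentUV);
    if (u_mode == 1) {
        // luma-weighted grayscale, as in the linked example
        float gray = dot(texel.rgb, vec3(0.299, 0.587, 0.114));
        outputColor = vec4(vec3(gray), texel.a);
    } else if (u_mode == 2) {
        // blend halfway towards the tint color to mark a selection
        outputColor = vec4(mix(texel.rgb, u_tint.rgb, 0.5), texel.a);
    } else {
        outputColor = texel;
    }
}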
I tried to add lighting to my OpenGL ES 2 application following the tutorial at http://www.learnopengles.com/android-lesson-two-ambient-and-diffuse-lighting/
Unlike in the above tutorial, I have FPS camera movement. In the vertex shader I have hard-coded the light position (u_LightPos) in world coordinates. But it gives weird lighting effects when I move the camera. Do I have to transform this position using the projection/view matrix?
uniform mat4 u_MVPMatrix;
uniform mat4 u_MVMatrix;
attribute vec4 a_Position;
attribute vec4 a_Color;
attribute vec3 a_Normal;
varying vec4 v_Color;
void main()
{
    vec3 u_LightPos=vec3(0,0,-20.0);
    vec3 modelViewVertex = vec3(u_MVMatrix * a_Position);
    vec3 modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
    float distance = length(u_LightPos - modelViewVertex);
    // Get a lighting direction vector from the light to the vertex.
    vec3 lightVector = normalize(u_LightPos - modelViewVertex);
    // Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
    // pointing in the same direction then it will get max illumination.
    float diffuse = max(dot(modelViewNormal, lightVector), 0.1);
    // Attenuate the light based on distance.
    diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));
    // Multiply the color by the illumination level. It will be interpolated across the triangle.
    v_Color = a_Color * diffuse;
    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
}
When performing arithmetic on vectors, they must be in the same coordinate space. You're subtracting modelViewVertex (view space) from u_LightPos (world space), which will give you a bogus result.
You need to decide if you want to do lighting calculations in world space, or view space (either should be valid), but you must transform all of the inputs to the same space.
That means either getting the vertex/normal/lightpos in world space, or the vertex/normal/lightpos in view space.
Try multiplying your light position by the view matrix (not the modelview matrix) and then using that in your computation instead of u_LightPos; I think it should work.
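A sketch of what that could look like, based on the question's vertex shader; the extra u_ViewMatrix uniform (the camera's view matrix on its own) is an assumption and would have to be supplied by the application:

uniform mat4 u_MVPMatrix;
uniform mat4 u_MVMatrix;
uniform mat4 u_ViewMatrix;   // assumed extra uniform: view matrix only, no model part

attribute vec4 a_Position;
attribute vec4 a_Color;
attribute vec3 a_Normal;

varying vec4 v_Color;

void main()
{
    // world-space light position, same constant as in the question
    vec3 worldLightPos = vec3(0.0, 0.0, -20.0);
    // bring the light into eye space so it lives in the same space as the vertex and normal below
    vec3 eyeLightPos = vec3(u_ViewMatrix * vec4(worldLightPos, 1.0));

    vec3 modelViewVertex = vec3(u_MVMatrix * a_Position);
    vec3 modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));

    float distance = length(eyeLightPos - modelViewVertex);
    vec3 lightVector = normalize(eyeLightPos - modelViewVertex);

    float diffuse = max(dot(modelViewNormal, lightVector), 0.1);
    diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));

    v_Color = a_Color * diffuse;
    gl_Position = u_MVPMatrix * a_Position;
}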