texelFetch works on NVIDIA driver but not on Mesa (Linux)

I have a 3D texture uploaded to a shader (see code below). I need to perform some bitwise operations on that texture to get bit-by-bit data. The fragment shader I wrote works under Linux with NVIDIA drivers, whose version string is:
OpenGL version string: 4.5.0 NVIDIA 367.57
but it does not work on another computer with an Intel integrated GPU and Mesa drivers, whose version information is:
OpenGL version string: 3.0 Mesa 11.2.0
OpenGL shading language version string: 1.30
What is the reason this does not work on that system?
I know it supports version 130, and the compilation yields no errors.
What could be wrong, or, alternatively, how can I change this shader so that it does NOT require version 130?
Here's the code:
// Fragment Shader
#version 130

in vec4 texcoord;

uniform uint width;
uniform uint height;
uniform usampler3D textureA;

void main() {
    uint x = uint(texcoord.x * float(width));
    uint y = uint(texcoord.y * float(height));
    uint shift = x % 8u;
    uint mask = 1u << shift;
    uint octet = texelFetch(textureA, ivec3(x / 8u, y % 256u, y / 256u), 0).r;
    uint value = (octet & mask) >> shift;
    if (value > 0u)
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    else
        gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
}
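For reference, a quick way to confirm what the Mesa context actually exposes and to dump the full compile and link logs is sketched below in desktop C++; it assumes a loader such as GLEW is already initialized, and fs/prog stand in for the existing fragment shader and program objects.

#include <cstdio>
#include <GL/glew.h>   // any loader that exposes the GL 2.0+ entry points will do

void dumpGlDiagnostics(GLuint fs, GLuint prog)
{
    // What the driver actually reports for this context
    printf("GL_VERSION:   %s\n", (const char*)glGetString(GL_VERSION));
    printf("GLSL version: %s\n", (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION));

    // Full compiler and linker messages for the shader and the program
    char log[4096];
    GLsizei len = 0;
    glGetShaderInfoLog(fs, sizeof(log), &len, log);
    printf("shader log:   %.*s\n", (int)len, log);
    glGetProgramInfoLog(prog, sizeof(log), &len, log);
    printf("program log:  %.*s\n", (int)len, log);
}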

Related

GLSL vars conversion problem from ShaderToy to Android Studio GLSL

I would like a very simple explanation of WHY these variables don't work in "Android Studio" and how to solve my problem (some work in "TheBookOfShaders", some work in "Atom", others work in both, some work only in "ShaderToy" and some only work in "Android Studio").
* To make this concrete, here is a sample (from a "fragment.glsl" file) *
uniform vec2 resolution; // [-] work on...
uniform vec2 uresolution; // [-] work on...
uniform vec2 iresolution; // [Y] work only on "ShaderToy"
uniform vec2 u_resolution; // [Y] work on "Atom" and "WebGL"
i.e.
* Sample Conversion FROM "ShaderToy" TO "Atom" (live coding)*
uniform vec2 iresolution; // is used on: "ShaderToy"
uniform vec2 u_resolution; // is used on: "Atom", "WebGL", etc.
so: [iresolution = u_resolution] * OK, it works *
* Well, now, why does none of these work in "Android Studio" (Java code + fragment.glsl)? *
uniform vec2 resolution; // doesn't work on "Android Studio"
uniform vec2 uresolution; // doesn't work on "Android Studio"
uniform vec2 iresolution; // doesn't work on "Android Studio"
uniform vec2 u_resolution; // doesn't work on "Android Studio"
uniform vec2 vresolution; // doesn't work on "Android Studio"
uniform vec2 v_resolution; // doesn't work on "Android Studio"
and obviously:
vec2 A = (gl_FragCoord.xy / u_resolution); // doesn't work on "Android Studio"
vec2 A = (gl_FragCoord.xy / uresolution); // doesn't work on "Android Studio"
vec2 A = (gl_FragCoord.xy / *SOME*resolution); // doesn't work on "Android Studio"
etc.
The same situation applies to the time variable: time, utime, u_time, itime, vtime, v_time, globalTime, etc.
* Where can I find the exact keyword to use the RESOLUTION/TIME/other system vars in an "Android Studio" GLSL shader file? *
"resolution" is there a currently defined reference table to understand how to convert system variables?
"resolution" is it a system-lib variable or not?
"Xresolution" is there a simple final real scheme to understand something in this confusion?
Atom-Editor - "u_resolution" using
in this sample, we can see the ONLY work version of Xresolution - try at home
Atom-Editor - "OTHERSresolution" using
in this other sample, we can see the ALL THE OTHERS yellow-failure versions of Xresolution - try at home
The "fragment.glsl" test-file work 100% on Atom-Editor (try at home please)
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 resolution; // not-system var
uniform vec2 uresolution; // not-system var
uniform vec2 iResolution; // system-var WORK 100% on ShaderToy
uniform vec2 vresolution; // not-system var
uniform vec2 u_resolution; // system-var WORK 100% on Atom-Editor but NOT on Android Studio
uniform vec2 i_resolution; // not-system var
uniform vec2 v_resolution; // not-system var
void main()
{
vec2 A = (gl_FragCoord.xy / u_resolution);
gl_FragColor = vec4(A.x, A.y, 0.0, 1.0);
}
* SOLUTION | WORKS 100% ONLY ON ANDROID STUDIO *
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 u_resolution; // note: you could also name it "Pacman"...
                           // this way you can create your own variable
                           // name for accessing the window viewport
void main()
{
// ---------------------------------------------------------------------------------
u_resolution = vec2(1920, 1080); // this assignment works 100% ONLY on Android Studio
// ---------------------------------------------------------------------------------
// --------------------------------------------------------------------------
vec2 A = (gl_FragCoord.xy / u_resolution);                            // solution 1
// vec2 A = (gl_FragCoord.xy / vec2(1920, 1080));                     // solution 2
// vec2 A = vec2(gl_FragCoord.x / 1920.0, gl_FragCoord.y / 1080.0);   // solution 3
// (the three are equivalent - enable only one, otherwise A is redeclared)
// --------------------------------------------------------------------------
gl_FragColor = vec4(A.x, A.y, 0.0, 1.0);
}
Finally we found the solution, which was right before our eyes.
We start from a window whose X and Y dimensions are set to 1920x1080 (in our case we do not need anything else), and I point out 3 ways of setting the variable "u_resolution". WARNING: this feature works ONLY in Android Studio and is able to answer my questions above. The problem has been solved. Felipe showed his commitment to solving the problem by getting involved. Of course we can also set this value from the main code via Java, C++ or other languages; but in this post we were only interested in setting/retrieving "u_resolution" directly via/from GLSL.
The solution adopted perfectly meets the original needs, and I hope it will be helpful to all those who come after me.
The 3 solution lines are equivalent: choose your preferred one.
A special thanks to #felipe-gutierrez for his kind cooperation.
NONE of the GLSL variables you mentioned are system vars.
They are variables made up by the user.
uniform vec2 resolution;
has absolutely no more meaning than:
uniform vec2 foobar;
Those are variables chosen by you.
You set them by looking up their location
In WebGL/JavaScript
const resolutionLocation = gl.getUniformLocation(someProgram, "resolution");
const foobarLocation = gl.getUniformLocation(someProgram, "foobar");
In Java
int resolutionLocation = GLES20.glGetUniformLocation(mProgram, "resolution");
int foobarLocation = GLES20.glGetUniformLocation(mProgram, "foobar");
You set them in WebGL/JavaScript
gl.useProgram(someProgram);
gl.uniform2f(resolutionLocation, yourVariableForResolutionX, yourVariableForResolutionY);
gl.uniform2f(foobarLocation, yourVariableForFoobarX, yourVariableForFoobarY);
or Java
GLES20.glUseProgram(someProgram);
GLES20.glUniform2f(resolutionLocation, yourVariableForResolutionX, yourVariableForResolutionY);
GLES20.glUniform2f(foobarLocation, yourVariableForFoobarX, yourVariableForFoobarY);
There are no magic system vars; they are 100% your app's variables. iResolution is a variable that the programmers of ShaderToy made up. u_resolution is a variable that some plugin author for Atom made up. They could just as easily have chosen renderSize or gamenHirosa (Japanese for screen width), or anything. Again, they are not system vars, they are variables chosen by the programmer. In your app you also make up your own variables.
I suggest you read some tutorials on WebGL.
According to the Khronos site: "A uniform is a global GLSL variable declared with the "uniform" storage qualifier. These act as parameters that the user of a shader program can pass to that program. They are stored in a program object.
Uniforms are named so because they do not change from one execution of a shader program to the next within a particular rendering call. This makes them unlike shader stage inputs and outputs, which are often different for each invocation of a program stage."
So in other words it's a variable that you create in your host application and can access in your OpenGL program (vertex and fragment shaders), but that you can't modify directly from the shader. So, for example, you get the resolution of your window in Java or C++ or JavaScript or ***, then you pass it in with Shadertoy's convention as iResolution, or your mouse position and left click (iMouse.xyz), and you pass it as a uniform to your fragment shader.
They are useful for input that isn't too heavy; as you may have seen in Shadertoy, videos are passed as textures, like your webcam or the Van Damme clip, and you can even pass sound as input. For more advanced effects you can pass one shader program into another for things like additive blending or ping-pong rendering, as BufferA or B or C or D in Shadertoy.
You can see what they stand for from the inputs to the shader in the top part of the editor on the Shadertoy site, and here you can check how I got many of the same inputs that Shadertoy uses in C++, and unfortunately not plain Java but Processing.
If you want to test that you have the correct resolution uniform (u_resolution here), then you can type:
void main()
{
vec2 uv = gl_FragCoord.xy/u_resolution;
vec3 col = vec3( smoothstep( 0.1, 0.1 - 0.005, length( uv - 0.5 ) ) );
gl_FragColor = vec4( col, 1 );
}
And you should see the ellipse at the center of the screen.
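For completeness, the desktop GL/C++ equivalent of the WebGL and Java snippets above would look like the sketch below; prog, winW and winH are placeholders for your own program object and current framebuffer size, not names from any framework.

glUseProgram(prog);
GLint resLoc = glGetUniformLocation(prog, "u_resolution"); // must match the name declared in the shader
glUniform2f(resLoc, (float)winW, (float)winH);             // re-send whenever the window is resized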

GLSL won't accept my implicit cast

I'm learning OpenGL 3.3, using some tutorials (http://opengl-tutorial.org). In the tutorial I'm using, there is a vertex shader which does the following:
Tutorial Shader source
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
}
Yet, when I try to emulate the same behavior in my application, I get the following:
error: implicit cast from "vec4" to "vec3".
After seeing this, I wasn't sure if it was because I was using version 4.2 shaders as opposed to 3.3, so I changed everything to match what the author had been using, but still received the same error afterward.
So, I changed my shader to do this:
My (latest) Source
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
void main()
{
vec4 a = vec4(vertexPosition_modelspace, 1);
gl_Position.xyz = MVP * a;
}
Which, of course, still produces the same error.
Does anyone know why this is the case, and what a solution might be? I'm not sure if it could be my calling code (which I've posted, just in case).
Calling Code
static const GLfloat T_VERTEX_BUF_DATA[] =
{
// x, y, z
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f
};
static const GLushort T_ELEMENT_BUF_DATA[] =
{ 0, 1, 2 };
void TriangleDemo::Run(void)
{
glClear(GL_COLOR_BUFFER_BIT);
GLuint matrixID = glGetUniformLocation(mProgramID, "MVP");
glUseProgram(mProgramID);
glUniformMatrix4fv(matrixID, 1, GL_FALSE, &mMVP[0][0]); // This sends our transformation to the MVP uniform matrix, in the currently bound vertex shader
const GLuint vertexShaderID = 0;
glEnableVertexAttribArray(vertexShaderID);
glBindBuffer(GL_ARRAY_BUFFER, mVertexBuffer);
glVertexAttribPointer(
vertexShaderID, // Specify the ID of the shader to point to (in this case, the shader is built in to GL, which will just produce a white triangle)
3, // Specify the number of indices per vertex in the vertex buffer
GL_FLOAT, // Type of value the vertex buffer is holding as data
GL_FALSE, // Normalized?
0, // Amount of stride
(void*)0 ); // Offset within the array buffer
glDrawArrays(GL_TRIANGLES, 0, 3); //0 => start index of the buffer, 3 => number of vertices
glDisableVertexAttribArray(vertexShaderID);
}
void TriangleDemo::Initialize(void)
{
glGenVertexArrays(1, &mVertexArrayID);
glBindVertexArray(mVertexArrayID);
glGenBuffers(1, &mVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, mVertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(T_VERTEX_BUF_DATA), T_VERTEX_BUF_DATA, GL_STATIC_DRAW );
mProgramID = LoadShaders("v_Triangle", "f_Triangle");
glm::mat4 projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f); // field of view, aspect ratio (4:3), 0.1 units near, to 100 units far
glm::mat4 view = glm::lookAt(
glm::vec3(4, 3, 3), // Camera is at (4, 3, 3) in world space
glm::vec3(0, 0, 0), // and looks at the origin
glm::vec3(0, 1, 0) // this is the up vector - the head of the camera is facing upwards. We'd use (0, -1, 0) to look upside down
);
glm::mat4 model = glm::mat4(1.0f); // set model matrix to identity matrix, meaning the model will be at the origin
mMVP = projection * view * model;
}
Notes
I'm in Visual Studio 2012
I'm using Shader Maker for the GLSL editing
I can't say what's wrong with the tutorial code.
In "My latest source" though, there's
gl_Position.xyz = MVP * a;
which looks weird because you're assigning a vec4 to a vec3.
EDIT
I can't reproduce your problem.
I have used a trivial fragment shader for testing...
#version 330 core
void main()
{
}
Testing "Tutorial Shader source":
3.3.11762 Core Profile Context
Log: Vertex shader was successfully compiled to run on hardware.
Log: Fragment shader was successfully compiled to run on hardware.
Log: Vertex shader(s) linked, fragment shader(s) linked.
Testing "My latest source":
3.3.11762 Core Profile Context
Log: Vertex shader was successfully compiled to run on hardware.
WARNING: 0:11: warning(#402) Implicit truncation of vector from size 4 to size 3.
Log: Fragment shader was successfully compiled to run on hardware.
Log: Vertex shader(s) linked, fragment shader(s) linked.
And the warning goes away after replacing gl_Position.xyz with gl_Position.
What's your setup? Do you have a correct version of OpenGL context? Is glGetError() silent?
Finally, are your GPU drivers up-to-date?
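If it helps, checking that can be as simple as the sketch below: GL_NO_ERROR is 0, so looping on glGetError() drains every queued error (a GL header/loader is assumed to be included already).

#include <cstdio>

void drainGlErrors(const char* where)
{
    // glGetError() returns GL_NO_ERROR (0) once the error queue is empty
    while (GLenum err = glGetError())
        fprintf(stderr, "GL error 0x%04X after %s\n", err, where);
}

// usage: drainGlErrors("glDrawArrays");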
I've had problems with some GPUs (ATI ones, I believe) not liking integer literals when they expect a float. Try changing
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
To
gl_Position = MVP * vec4(vertexPosition_modelspace, 1.0);
I just came across this error message on an ATI Radeon HD 7900 with the latest drivers installed, while compiling some sample code associated with the book "3D Engine Design for Virtual Globes" (http://www.virtualglobebook.com).
Here is the original fragment shader line:
fragmentColor = mix(vec3(0.0, intensity, 0.0), vec3(intensity, 0.0, 0.0), (distanceToContour < dF));
The solution is to cast the offending Boolean expression into float, as in:
fragmentColor = mix(vec3(0.0, intensity, 0.0), vec3(intensity, 0.0, 0.0), float(distanceToContour < dF));
The manual for mix (http://www.opengl.org/sdk/docs/manglsl) states:
For the variants of mix where a is genBType, elements for which a[i] is false, the result for that element is taken from x, and where a[i] is true, it will be taken from y.
So, since a Boolean blend value should be accepted by the compiler without comment, I think this should go down as an AMD/ATI driver issue.

OpenGL ES Overlay Blend Mode with Point Sprites

I'm trying to emulate Photoshop's Overlay blend mode on a point sprite. Is this possible in OpenGL ES?
EDIT - This might help you along:
Please note: I do not take credit for the code below; I found it on the powervr forums: http://www.imgtec.com/forum/forum_posts.asp?TID=949
uniform sampler2D s_renderTexture;
uniform sampler2D s_overlayMap;
varying mediump vec2 myTexCoord;
void main()
{
//Get the Texture colour values
lowp vec3 baseColor = texture2D(s_renderTexture, myTexCoord).rgb;
lowp float overlayTexture = texture2D(s_overlayMap, myTexCoord).r;
lowp vec3 finalMix = baseColor + (overlayTexture - 0.5) * (1.0 - abs(2.0 * baseColor - 1.0));
//Set the Fragments colour
gl_FragColor = vec4( finalMix, 1.0 );
}
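As a side note, the two samplers in that shader are just texture units chosen on the host side; a minimal sketch of wiring them up (prog, baseTex and overlayTex are placeholder names, not from the forum post):

glUseProgram(prog);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, baseTex);     // unit 0 -> s_renderTexture
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, overlayTex);  // unit 1 -> s_overlayMap
glUniform1i(glGetUniformLocation(prog, "s_renderTexture"), 0);
glUniform1i(glGetUniformLocation(prog, "s_overlayMap"), 1);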
Sure, call this before rendering the point sprites:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
This should result in additive blending.
Here's a visual reference on the different blending mode combinations:
http://zanir.wz.cz/?p=60&lang=en
It's an old page, but it's a nice reference.
For more on OpenGL ES blending: http://www.khronos.org/opengles/sdk/docs/man/xhtml/glBlendFunc.xml

OpenGL color/alpha output slightly dimmed

I'm seeing slightly dimmed color/alpha output from OpenGL on Linux. Instead of a red component value of 1.0, I'm seeing ~0.96988. For example, I have a fully red rectangle (red component = 1.0, alpha = 1.0, green and blue are zero). This dimming happens whether I enable my vertex/fragment shaders or not.
Lighting is disabled so no ambient or other light should be included in the color calculation.
glBegin(GL_POLYGON);
glColor4f(1.0, 0.0, 0.0, 1.0);
glVertex2f(0.0, 0.0);
glVertex2f(1.0, 0.0);
glVertex2f(1.0, 1.0);
glVertex2f(0.0, 1.0);
glEnd();
I take a screen-shot of the resulting window and then load the image into a paint program and examine any particular pixel. I see a red component integer value of 247 instead of 255 as I would expect. When I run this with the vertex shader enabled I see the gl_Color.r component is already < 1.0 and the gl_Color.a component is as well.
All OpenGL states are at the default values. What am I missing?
Edit due to question:
I determined that the value of the red component was ~0.96988 by a crude and iterative process: inspecting it in the vertex shader and altering the blue component to signal when the red component was above a threshold value. I kept reducing the constant threshold value until I no longer saw purple. This did the trick:
if(gl_Color.r > 0.96988)
{
gl_Color.b = 1.0; // show purple instead of the slightly dimmed red.
}
Edit:
//VERTEX SHADER
varying vec2 texture_coordinate;
void main()
{
gl_Position = ftransform();
texture_coordinate = vec2(gl_MultiTexCoord0);
gl_FrontColor = gl_Color;
}
//FRAGMENT SHADER
varying vec2 texture_coordinate;
uniform sampler2D Texture0;
void main(void)
{
gl_FragColor = texture2D(Texture0, texture_coordinate) * gl_Color;
}
Texture0 in this instance is a fully saturated RED rectangle: Red = 1.0, Alpha = 1.0. Without the texture, using the vertex color, I get the same results: a slightly diminished red and alpha component.
One more thing: the red and alpha channels are "dimmed" by the same amount, so something is dimming the entire color. And as I stated in the main question, this occurs whether I use shaders or the fixed-function pipeline.
Just for fun I performed a similar test on Windows using DirectX, and this resulted in a rectangle with a red component of 254; still slightly dimmed, but just barely.
I'm answering my own question because I resolved the issue, and I was the cause. It turns out that I was incorrectly calculating the color channels, including alpha, for the vertices in my models when converting from 8-bit integers to floating point. A silly error that introduced this slight dimming.
For instance:
currentColor = m_pVertices[i].clr; // color format ARGB
float a = (1.0 / 256) * (m_pVertices[i].clr >> 24);
float r = (1.0 / 256) * ((m_pVertices[i].clr >> 16) % 256);
float g = (1.0 / 256) * ((m_pVertices[i].clr >> 8) % 256);
float b = (1.0 / 256) * (m_pVertices[i].clr % 256);
glColor4f(r, g, b, a);
I should be dividing by 255. Doh!
It seems the only dimming is in my brain and not in openGL.
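For clarity, the corrected conversion (identical to the snippet above, only with the divisor changed to 255 so that a channel value of 255 maps to exactly 1.0):

currentColor = m_pVertices[i].clr; // color format ARGB
float a = (1.0 / 255) * (m_pVertices[i].clr >> 24);           // 255, not 256: an 8-bit channel tops out at 255
float r = (1.0 / 255) * ((m_pVertices[i].clr >> 16) % 256);
float g = (1.0 / 255) * ((m_pVertices[i].clr >> 8) % 256);
float b = (1.0 / 255) * (m_pVertices[i].clr % 256);
glColor4f(r, g, b, a);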

GLSL - Front vs. Back faces of polygons

I made some simple shading in GLSL of a checkerboard:
f(P) = [ floor(Px)+floor(Py)+floor(Pz) ] mod 2
It seems to work well except for the fact that I see the interior of the objects, but I want to see only the front faces.
Any ideas how to fix this? Thanks!
Teapot (glutSolidTeapot()):
Cube (glutSolidCube):
The vertex shader file is:
varying float x,y,z;
void main(){
gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
x = gl_Position.x;
y = gl_Position.y;
z = gl_Position.z;
}
And the fragment shader file is:
varying float x,y,z;
void main(){
float _x=x;
float _y=y;
float _z=z;
_x=floor(_x);
_y=floor(_y);
_z=floor(_z);
float sum = (_x+_y+_z);
sum = mod(sum,2.0);
gl_FragColor = vec4(sum,sum,sum,1.0);
}
The shaders are not the problem - the face culling is.
You should either disable face culling (which is not recommended, since it is bad for performance):
glDisable(GL_CULL_FACE);
or use glCullFace and glFrontFace to set the culling mode, i.e.:
glEnable(GL_CULL_FACE); // enables face culling
glCullFace(GL_BACK); // tells OpenGL to cull back faces (the sane default setting)
glFrontFace(GL_CW); // tells OpenGL which faces are considered 'front' (use GL_CW or GL_CCW)
The argument to glFrontFace depends on application conventions, i.e. the matrix handedness.
