I'm trying to build a simple pass-through geometry shader, but I can't make it work with
glDrawElements(GL_TRIANGLES, fooSize, GL_UNSIGNED_INT, NULL);
but it does work with
glDrawArrays(GL_TRIANGLES, 0, foo_INDEX);
The geometry shader is:
#version 400
#extension GL_EXT_geometry_shader4 : enable
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
void main() {
    for (int i = 0; i < gl_VerticesIn; i++) {
        gl_Position = gl_PositionIn[i];
        EmitVertex();
    }
    EndPrimitive();
}
So, does anyone have any idea why this geometry shader works with glDrawArrays but not with glDrawElements? Please.
You have written a very confused and confusing geometry shader.
Your first line indicates that you're using GLSL version 4.00. This language has geometry shaders as a core feature; no extensions required. Your second line then enables the EXT_geometry_shader4 extension.
The problem comes from the fact that core geometry shaders and EXT_geometry_shader4 are very different things.
For example, there is no gl_VerticesIn in core geometry shaders. The number of vertices you get is defined by your layout() in; statement. You used triangles, so the answer is 3. You can also find it by using gl_in.length(), where gl_in is an interface block array containing the default vertex input values. Similarly, gl_PositionIn doesn't exist in core geometry shaders; you use gl_in[index].gl_Position.
So your geometry shader's code seems to be designed for EXT_geometry_shader4... except for these lines:
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
These are purely core geometry shader constructs; in EXT_geometry_shader4, these parameters are specified at link time in OpenGL code, not in the shader itself.
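For comparison, under the EXT extension that information would be supplied from the API before linking, roughly like this (a minimal sketch using the extension's glProgramParameteriEXT entry point; the program object name is illustrative):

// EXT_geometry_shader4 style: primitive types and vertex count are
// program parameters set before linking, not layout() qualifiers.
glProgramParameteriEXT(program, GL_GEOMETRY_INPUT_TYPE_EXT, GL_TRIANGLES);
glProgramParameteriEXT(program, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
glProgramParameteriEXT(program, GL_GEOMETRY_VERTICES_OUT_EXT, 3);
glLinkProgram(program);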
So you've thoroughly confused the compiler by having half of it use the EXT version and half use the core version.
A proper core version of what you're trying to do would be this:
#version 400
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
void main() {
    for (int i = 0; i < 3; i++) { // You used triangles, so it's always 3
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
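With the core version, the geometry shader stage is simply compiled and attached like any other shader; no program parameters are involved. A minimal sketch of the host side (the program and source-string pointer names are illustrative):

GLuint gs = glCreateShader(GL_GEOMETRY_SHADER);   // core since OpenGL 3.2, no extension needed
glShaderSource(gs, 1, &geometrySourcePtr, NULL);  // geometrySourcePtr: the GLSL text above
glCompileShader(gs);
glAttachShader(program, gs);                      // attach alongside the vertex and fragment shaders
glLinkProgram(program);                           // layout() qualifiers replace the EXT program parameters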
I have no idea if this will fix your problem, but these are problems with your shader.
Related
I have this problem on a practice midterm that I don't understand.
void main(void){
    int i;
    for(i = 0; i < gl_VerticesIn; i++){
        gl_Position = gl_PositionIn[i];
        EmitVertex();
    }
    EndPrimitive();
    for(i = 0; i < gl_VerticesIn; i++){
        gl_Position = gl_PositionIn[i];
        gl_Position.xy = gl_Position.yx;
        EmitVertex();
    }
    EndPrimitive();
}
I have been reading documentation, and I think this is part of a geometry shader that swaps the x and y coordinates of each point, but I don't have any way to verify this. I tried it in a program and it made slight differences in the coloring of the scene, but it didn't seem to change the geometry at all, so if someone could help explain this, that would be awesome. Thanks!
This is indeed a part of a geometry shader.
The first part of the shader (ending with the first EndPrimitive()) is the simplest possible pass-through geometry shader; it does absolutely nothing to the geometry.
The second part is almost the same, except for the xy swizzle. It duplicates the geometry but swaps the x and y coordinates of every vertex, which mirrors the copy across the line connecting the lower-left and upper-right corners of the screen (for example, a vertex at (0.25, 0.75) ends up at (0.75, 0.25)).
So the geometry is duplicated and mirrored across that diagonal of the screen.
My program receives line segments as input and expands them into cylinder-like objects (like the PipeGS project in the DX SDK sample browser).
I added an array of radius scaling parameters for the pipes and modify them procedurally, but the radii of the pipes just don't change.
I'm pretty sure the scaling parameters are updated every frame, because I also write them out as the pixel color: when I modify them, the pipes change color while their radii stay unchanged.
So I am wondering whether there is any limitation on using global variables in a GS; I couldn't find anything about it on the internet (or maybe I just used the wrong keywords).
The shader code looks like this:
cbuffer {
    .....
    float scaleParam[10];
    .....
}
// Pass 1
VS_1 { // pass through }
// Tessellation stages
// Hull shader, domain shader and patch constant function
GS_1 {
    pipeRadius = MaxRadius * scaleParam[PipeID];
    ....
    // calculate pipe positions based on the line segments and pipeRadius
    ....
    OutputStream.Append( ... );
}
// Pixel shader is disabled in the first pass
// Pass 2
VS_2 { // pass through }
// Tessellation stages
// Hull shader, domain shader and patch constant function
// Transform the vertices and normals to world coordinates in the DS
// No geometry shader in the second pass
PS_2
{
    return float4( scaleParam[0], scaleParam[1], scaleParam[2], 0.0f );
}
Edit:
I have narrowed down the problem.
There are two passes in my program. In the first pass I expand the line segments in the geometry shader and stream the results out.
In the second pass, the program receives the pipe positions from the first pass, tessellates the pipes, and applies displacement mapping to them so they can be more detailed.
I can change the surface tessellation factor and pixel color (both used in the second pass) and see the result on screen immediately.
When I modify scaleParam, the pipes change color while their radii stay unchanged. That means I am updating scaleParam and passing it into the shader correctly, but something is wrong in the first pass.
Second edit:
I modified the shader code above and am posting some code from the cpp file here.
In the cpp file:
void DrawScene()
{
    // Update view matrix, TessFactor, scaleParam etc.
    ....
    ....

    // Bind stream-output buffer
    ID3D11Buffer* bufferArray[1] = {mStreamOutBuffer};
    md3dImmediateContext->SOSetTargets(1, bufferArray, 0);

    // Two-pass rendering
    D3DX11_TECHNIQUE_DESC techDesc;
    mTech->GetDesc( &techDesc );
    for(UINT p = 0; p < techDesc.Passes; ++p)
    {
        mTech->GetPassByIndex(p)->Apply(0, md3dImmediateContext);

        // First pass
        if (p == 0)
        {
            md3dImmediateContext->IASetVertexBuffers(0, 1, &mVertexBuffer, &stride, &offset);
            md3dImmediateContext->Draw(mVertexCount, 0);

            // Unbind stream-output buffer
            bufferArray[0] = NULL;
            md3dImmediateContext->SOSetTargets( 1, bufferArray, 0 );
        }
        // Second pass
        else
        {
            md3dImmediateContext->IASetVertexBuffers(0, 1, &mStreamOutBuffer, &stride, &offset);
            md3dImmediateContext->DrawAuto();
        }
    }

    HR(mSwapChain->Present(0, 0));
}
Check whether you are using a float4 position: the w value of the vector is a scale applied to the final position in the scene. For example:
float4 pos0 = float4(5, 5, 5, 1);
// is equivalent to:
float4 pos1 = float4(10, 10, 10, 2);
To scale a position correctly, you must change only the .xyz part of the position vector.
I solved this problem by rebuilding the vertex buffer and stream-output buffer every time after I modified the parameters, but I still don't know what exactly causes this problem.
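For reference, that rebuild boils down to recreating the stream-output target with both vertex-buffer and stream-output bind flags before running the first pass again; a rough sketch only, where the device pointer name and byte width are assumptions rather than code taken from above:

// Sketch: recreate the stream-output buffer (names and size illustrative).
mStreamOutBuffer->Release();
mStreamOutBuffer = NULL;

D3D11_BUFFER_DESC desc = {};
desc.ByteWidth = streamOutByteWidth;   // large enough for the expanded geometry
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_VERTEX_BUFFER | D3D11_BIND_STREAM_OUTPUT;
HR(md3dDevice->CreateBuffer(&desc, NULL, &mStreamOutBuffer));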
I'm learning OpenGL 3.3, using some tutorials (http://opengl-tutorial.org). In the tutorial I'm using, there is a vertex shader which does the following:
Tutorial Shader source
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
void main(){
    // Output position of the vertex, in clip space : MVP * position
    gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
}
Yet, when I try to emulate the same behavior in my application, I get the following:
error: implicit cast from "vec4" to "vec3".
After seeing this, I wasn't sure if it was because I was using version 4.2 shaders as opposed to 3.3, so I changed everything to match what the author had been using, but I still received the same error afterward.
So, I changed my shader to do this:
My (latest) Source
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
void main()
{
    vec4 a = vec4(vertexPosition_modelspace, 1);
    gl_Position.xyz = MVP * a;
}
Which, of course, still produces the same error.
Does anyone know why this is the case, as well as what a solution might be to this? I'm not sure if it could be my calling code (which I've posted, just in case).
Calling Code
static const GLfloat T_VERTEX_BUF_DATA[] =
{
    // x,     y,    z
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f
};

static const GLushort T_ELEMENT_BUF_DATA[] =
{ 0, 1, 2 };

void TriangleDemo::Run(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    GLuint matrixID = glGetUniformLocation(mProgramID, "MVP");
    glUseProgram(mProgramID);
    glUniformMatrix4fv(matrixID, 1, GL_FALSE, &mMVP[0][0]); // This sends our transformation to the MVP uniform in the currently bound program

    const GLuint vertexShaderID = 0;
    glEnableVertexAttribArray(vertexShaderID);
    glBindBuffer(GL_ARRAY_BUFFER, mVertexBuffer);
    glVertexAttribPointer(
        vertexShaderID, // Attribute index (matches layout(location = 0) in the vertex shader)
        3,              // Number of components per vertex
        GL_FLOAT,       // Type of the data in the vertex buffer
        GL_FALSE,       // Normalized?
        0,              // Stride
        (void*)0 );     // Offset within the array buffer

    glDrawArrays(GL_TRIANGLES, 0, 3); // 0 => start index of the buffer, 3 => number of vertices
    glDisableVertexAttribArray(vertexShaderID);
}
void TriangleDemo::Initialize(void)
{
    glGenVertexArrays(1, &mVertexArrayID);
    glBindVertexArray(mVertexArrayID);

    glGenBuffers(1, &mVertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, mVertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(T_VERTEX_BUF_DATA), T_VERTEX_BUF_DATA, GL_STATIC_DRAW);

    mProgramID = LoadShaders("v_Triangle", "f_Triangle");

    glm::mat4 projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f); // field of view, aspect ratio (4:3), 0.1 units near, 100 units far
    glm::mat4 view = glm::lookAt(
        glm::vec3(4, 3, 3), // Camera is at (4, 3, 3) in world space
        glm::vec3(0, 0, 0), // and looks at the origin
        glm::vec3(0, 1, 0)  // this is the up vector - the head of the camera is facing upwards. We'd use (0, -1, 0) to look upside down
    );
    glm::mat4 model = glm::mat4(1.0f); // set the model matrix to the identity matrix, meaning the model will be at the origin

    mMVP = projection * view * model;
}
Notes
I'm in Visual Studio 2012
I'm using Shader Maker for the GLSL editing
I can't say what's wrong with the tutorial code.
In "My latest source" though, there's
gl_Position.xyz = MVP * a;
which looks weird because you're assigning a vec4 to a vec3.
EDIT
I can't reproduce your problem.
I have used a trivial fragment shader for testing...
#version 330 core
void main()
{
}
Testing "Tutorial Shader source":
3.3.11762 Core Profile Context
Log: Vertex shader was successfully compiled to run on hardware.
Log: Fragment shader was successfully compiled to run on hardware.
Log: Vertex shader(s) linked, fragment shader(s) linked.
Testing "My latest source":
3.3.11762 Core Profile Context
Log: Vertex shader was successfully compiled to run on hardware.
WARNING: 0:11: warning(#402) Implicit truncation of vector from size 4 to size 3.
Log: Fragment shader was successfully compiled to run on hardware.
Log: Vertex shader(s) linked, fragment shader(s) linked.
And the warning goes away after replacing gl_Position.xyz with gl_Position.
What's your setup? Do you have a correct version of OpenGL context? Is glGetError() silent?
Finally, are your GPU drivers up-to-date?
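Regarding glGetError(), a quick way to check is to drain the error queue right after the suspect calls; a minimal sketch (printing via fprintf, so <cstdio> is needed):

// Report any queued GL errors; glGetError returns GL_NO_ERROR once the queue is empty.
for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
{
    fprintf(stderr, "GL error: 0x%04X\n", err);
}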
I've had problems with some GPUs (ATi ones, I believe) not liking integer literals where a float is expected. Try changing
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
To
gl_Position = MVP * vec4(vertexPosition_modelspace, 1.0);
I just came across this error message on an ATI Radeon HD 7900 with latest drivers installed while compiling some sample code associated with the book "3D Engine Design for Virtual Globes" (http://www.virtualglobebook.com).
Here is the original fragment shader line:
fragmentColor = mix(vec3(0.0, intensity, 0.0), vec3(intensity, 0.0, 0.0), (distanceToContour < dF));
The solution is to cast the offending Boolean expression into float, as in:
fragmentColor = mix(vec3(0.0, intensity, 0.0), vec3(intensity, 0.0, 0.0), float(distanceToContour < dF));
The manual for mix (http://www.opengl.org/sdk/docs/manglsl) states
For the variants of mix where a is genBType, elements for which a[i] is false, the result for that
element is taken from x, and where a[i] is true, it will be taken from y.
So, since a Boolean blend value should be accepted by the compiler without comment, I think this should go down as an AMD/ATI driver issue.
My iOS 4 app uses OpenGL ES 2.0 and renders elements with a single texture. I would like to draw elements using multiple different textures and am having problems getting things to work.
I added a variable to my vertex shader to indicate which texture to apply:
...
attribute float TextureIn;
varying float TextureOut;
void main(void)
{
    ...
    TextureOut = TextureIn;
}
I use that value in the fragment shader to select the texture:
...
varying lowp float TextureOut;
uniform sampler2D Texture0;
uniform sampler2D Texture1;
void main(void)
{
    if (TextureOut == 1.0)
    {
        gl_FragColor = texture2D(Texture1, TexCoordOut);
    }
    else // 0
    {
        gl_FragColor = texture2D(Texture0, TexCoordOut);
    }
}
Compile shaders:
...
_texture = glGetAttribLocation(programHandle, "TextureIn");
glEnableVertexAttribArray(_texture);
_textureUniform0 = glGetUniformLocation(programHandle, "Texture0");
_textureUniform1 = glGetUniformLocation(programHandle, "Texture1");
Init/Setup:
...
GLuint _texture;
GLuint _textureUniform0;
GLuint _textureUniform1;
...
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D); // ?
glBindTexture(GL_TEXTURE_2D, _textureUniform0);
glUniform1i(_textureUniform0, 0);
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D); // ?
glBindTexture(GL_TEXTURE_2D, _textureUniform1);
glUniform1i(_textureUniform1, 1);
glActiveTexture(GL_TEXTURE0);
Render:
...
glVertexAttribPointer(_texture, 1, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*) (sizeof(float) * 13));
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _textureUniform0);
glUniform1i(_textureUniform0, 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _textureUniform1);
glUniform1i(_textureUniform1, 1);
glActiveTexture(GL_TEXTURE0);
glDrawElements(GL_TRIANGLES, indicesCountA, GL_UNSIGNED_SHORT, (GLvoid*) (sizeof(GLushort) * 0));
glDrawElements(GL_TRIANGLES, indicesCountB, GL_UNSIGNED_SHORT, (GLvoid*) (sizeof(GLushort) * indicesCountA));
glDrawElements(GL_TRIANGLES, indicesCountC, GL_UNSIGNED_SHORT, (GLvoid*) (sizeof(GLushort) * (indicesCountA + indicesCountB)));
My hope was to dynamically apply the texture associated with a vertex but it seems to only recognize GL_TEXTURE0.
The only way I have been able to change textures is to associate each texture with GL_TEXTURE0 and then draw:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _textureUniformX);
glUniform1i(_textureUniformX, 0);
glDrawElements(GL_TRIANGLES, indicesCountA, GL_UNSIGNED_SHORT, (GLvoid*) (sizeof(GLushort) * 0));
...
In order to render all the textures, I would need a separate glDrawElements() call for each texture, and I have read that glDrawElements() calls are a big hit to performance and that the number of calls should be minimized. That's why I was trying to dynamically specify which texture to use for each vertex.
It's entirely possible that my understanding is wrong or I am missing something important. I'm still new to OpenGL and the more I learn the more I feel I have more to learn.
It must be possible to use textures other than just GL_TEXTURE0 but I have yet to figure out how.
Any guidance or direction would be greatly appreciated.
Could it be that you're just experiencing floating-point rounding issues? There shouldn't be any (except if a single primitive shares vertices with different textures), but just to be sure, replace TextureOut == 1.0 with TextureOut > 0.5 or something similar.
As general advice, you are correct that the number of draw calls should be reduced as much as possible, but your approach is quite odd. You are buying draw-call reduction with fragment shader branching. Your approach also doesn't scale well with the overall number of textures, since you always need all textures bound to separate texture units.
The usual approach to reduce texture switches is to put all the textures into a single large texture, a so-called texture atlas, and use the texture coordinates to select the appropriate subregion in this texture. This also has some pitfalls (which are an entirely different question), but nothing comes for free.
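As a rough illustration of the atlas idea (a sketch only; the uniform grid layout and all names here are assumptions, not part of the code above): if the atlas is divided into an N×N grid of equally sized tiles, each vertex's texture coordinates can be remapped into its tile when the vertex buffer is built:

// Remap a [0,1] UV into tile (tileX, tileY) of an N x N texture atlas.
struct UV { float u, v; };

UV atlasUV(UV uv, int tileX, int tileY, int tilesPerSide)
{
    float scale = 1.0f / tilesPerSide;
    UV result;
    result.u = (tileX + uv.u) * scale;
    result.v = (tileY + uv.v) * scale;
    return result;
}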
EDIT: Oh wait, I see what you're actually doing wrong
glBindTexture(GL_TEXTURE_2D, _textureUniform0);
You're binding a texture to the current texture unit, but instead of the texture object you give this function a uniform location, which is complete rubbish (but might even work in some weird circumstances, since both uniform locations and texture objects are themselves just integers). Of course you have to bind the actual texture.
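In other words, bind the texture object names returned by glGenTextures (called _textureObject0/_textureObject1 here purely for illustration), and use the uniform locations only with glUniform1i; roughly:

// Bind the actual texture objects, not uniform locations.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _textureObject0); // texture name from glGenTextures
glUniform1i(_textureUniform0, 0);              // sampler Texture0 reads from unit 0

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _textureObject1);
glUniform1i(_textureUniform1, 1);              // sampler Texture1 reads from unit 1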
I made some simple shading in GLSL of a checkers board:
f(P) = [ floor(Px)+floor(Py)+floor(Pz) ] mod 2
It seems to work well, except for the fact that I see the interior of the objects, but I want to see only the front faces.
Any ideas how to fix this? Thanks!
Teapot (glutSolidTeapot()):
Cube (glutSolidCube):
The vertex shader file is:
varying float x, y, z;
void main(){
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    x = gl_Position.x;
    y = gl_Position.y;
    z = gl_Position.z;
}
And the fragment shader file is:
varying float x, y, z;
void main(){
    float _x = x;
    float _y = y;
    float _z = z;
    _x = floor(_x);
    _y = floor(_y);
    _z = floor(_z);
    float sum = (_x + _y + _z);
    sum = mod(sum, 2.0);
    gl_FragColor = vec4(sum, sum, sum, 1.0);
}
The shaders are not the problem - the face culling is.
You should either disable face culling (not recommended, since it hurts performance):
glDisable(GL_CULL_FACE);
or use glCullFace and glFrontFace to set the culling mode, i.e.:
glEnable(GL_CULL_FACE); // enables face culling
glCullFace(GL_BACK); // tells OpenGL to cull back faces (the sane default setting)
glFrontFace(GL_CW); // tells OpenGL which faces are considered 'front' (use GL_CW or GL_CCW)
The argument to glFrontFace depends on application conventions, i.e. the matrix handedness.