Hi, I've been studying WebGL.
I've been reading the book Real-Time 3D Graphics with WebGL 2, which says: "Vertex array objects allow us to store all of the vertex/index binding information for a set of buffers in a single, easy to manage object."
And it provides this example for a VAO:
function initBuffers() {
  /*
    V0                    V3
    (-0.5, 0.5, 0)        (0.5, 0.5, 0)
     X---------------------X
     |                     |
     |                     |
     |        (0, 0)       |
     |                     |
     |                     |
     X---------------------X
    V1                    V2
    (-0.5, -0.5, 0)       (0.5, -0.5, 0)
  */
  const vertices = [
    -0.5, 0.5, 0,
    -0.5, -0.5, 0,
    0.5, -0.5, 0,
    0.5, 0.5, 0
  ];

  // Indices defined in counter-clockwise order
  indices = [0, 1, 2, 0, 2, 3];

  // Create VAO instance
  squareVAO = gl.createVertexArray();
  // Bind it so we can work on it
  gl.bindVertexArray(squareVAO);

  const squareVertexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);

  // Provide instructions for VAO to use data later in draw
  gl.enableVertexAttribArray(program.aVertexPosition);
  gl.vertexAttribPointer(program.aVertexPosition, 3, gl.FLOAT, false, 0, 0);

  // Setting up the IBO
  squareIndexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, squareIndexBuffer);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indices), gl.STATIC_DRAW);

  // Clean
  gl.bindVertexArray(null);
  gl.bindBuffer(gl.ARRAY_BUFFER, null);
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null);
}
// We call draw to render to our canvas
function draw() {
  // Clear the scene
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

  // Bind the VAO
  gl.bindVertexArray(squareVAO);
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, squareIndexBuffer);

  // Draw to the scene using triangle primitives
  gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);

  // Clean
  gl.bindVertexArray(null);
  gl.bindBuffer(gl.ARRAY_BUFFER, null);
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null);
}
// Entry point to our application
function init() {
  // Retrieve the canvas
  const canvas = utils.getCanvas('webgl-canvas');

  // Set the canvas to the size of the screen
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;

  // Retrieve a WebGL context
  gl = utils.getGLContext(canvas);
  // Set the clear color to be black
  gl.clearColor(0, 0, 0, 1);

  // Call the functions in an appropriate order
  initProgram();
  initBuffers();
  draw();
}
My question is: do we need the gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, squareIndexBuffer) call after we bind the VAO in draw()?
I looked at this answer, What are Vertex Arrays in OpenGL & WebGL2?, which says:
"At draw time it then only takes one call to gl.bindVertexArray to setup all the attributes and the ELEMENT_ARRAY_BUFFER."
So I suppose there is no need for the gl.bindBuffer() call after we bind the VAO in draw()?
Is the code from the textbook misleading?
No, you do not need to rebind the buffer.
The ELEMENT_ARRAY_BUFFER binding is part of the current vertex array's state, as the answer you linked points out.
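For example, with the VAO recorded in initBuffers(), draw() can be reduced to this (a minimal sketch reusing the squareVAO and indices globals from your code):

// Binding the VAO restores both the attribute setup and the
// ELEMENT_ARRAY_BUFFER binding recorded in initBuffers().
function draw() {
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
  gl.bindVertexArray(squareVAO);
  gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);
}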
These lines in your example are also irrelevant.
In initBuffers:
// Clean
gl.bindVertexArray(null);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null);
None of these lines is truly needed; only the first has any real point, and even it is optional.
This line:
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null);
effectively does nothing because, as stated above, the ELEMENT_ARRAY_BUFFER binding is part of the current vertex array's state, so just changing the current vertex array with gl.bindVertexArray already changed that binding.
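You can check this yourself in the console (a sketch; vaoA, vaoB, and indexBuffer are illustrative names, not from your code):

const vaoA = gl.createVertexArray();
const vaoB = gl.createVertexArray();
const indexBuffer = gl.createBuffer();

gl.bindVertexArray(vaoA);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);

// Switching VAOs swaps the ELEMENT_ARRAY_BUFFER binding along with them.
gl.bindVertexArray(vaoB);
console.log(gl.getParameter(gl.ELEMENT_ARRAY_BUFFER_BINDING));                 // null
gl.bindVertexArray(vaoA);
console.log(gl.getParameter(gl.ELEMENT_ARRAY_BUFFER_BINDING) === indexBuffer); // true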
This line:
gl.bindBuffer(gl.ARRAY_BUFFER, null);
has no real point either because, as far as I know, almost no program ever assumes the current ARRAY_BUFFER binding is already set to anything; programs always bind a buffer before operating on it. It's not bad to have it, and I'm sure you could find some convoluted way to make it matter, but in real life I haven't seen one.
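The usual idiom looks like this (a sketch; someBuffer and newData are illustrative names):

// Bind immediately before operating on the buffer, so whatever was
// previously bound to ARRAY_BUFFER never matters.
gl.bindBuffer(gl.ARRAY_BUFFER, someBuffer);
gl.bufferSubData(gl.ARRAY_BUFFER, 0, newData);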
This line, however, does have a point:
gl.bindVertexArray(null);
It is common to set up vertex buffers separately from vertex attributes. If you are making one vertex array per thing you draw and your pattern is like this:

// at init time
for each thing I plan to draw
  (1) create buffers and fill with positions/normals/texcoords/indices
  (2) create/bind vertex array
  (3) setup attributes and ELEMENT_ARRAY_BUFFER

then if you don't bind null after step 3, step 1 of the next iteration will end up changing the ELEMENT_ARRAY_BUFFER binding of the previously bound vertex array.
In other words, maybe this line:

gl.bindVertexArray(null);

has a point. Still, it's arguable: if you swapped steps 1 and 2 and changed your initialization to

// at init time
for each thing I plan to draw
  (1) create/bind vertex array
  (2) create buffers and fill with positions/normals/texcoords/indices
  (3) setup attributes

then the problem goes away.
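As a concrete sketch of that reordered pattern (meshes, positionLoc, and the typed-array fields are illustrative assumptions, not from your code):

for (const mesh of meshes) {
  // (1) create/bind the vertex array first
  mesh.vao = gl.createVertexArray();
  gl.bindVertexArray(mesh.vao);

  // (2) create and fill buffers; the ELEMENT_ARRAY_BUFFER binding is
  // recorded into mesh.vao, not into a previously bound vertex array
  mesh.positionBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, mesh.positionBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, mesh.positions, gl.STATIC_DRAW);

  mesh.indexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, mesh.indexBuffer);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, mesh.indices, gl.STATIC_DRAW);

  // (3) set up attributes; this state is captured by mesh.vao
  gl.enableVertexAttribArray(positionLoc);
  gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
}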
The same three lines appear in draw:
// Clean
gl.bindVertexArray(null);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null);
where, again, they serve no purpose.
I'm trying to use VAO (vertex array object) functionality in an OpenGL context. My non-VAO buffer objects draw fine, but when I bind my VAO and draw, no object is drawn. I am basically using some example code and feel it should work. However, I have an aging hybrid dual-graphics-card setup that has been the source of deep-seated grief and regret several times in the past, and my research has unearthed hints that the problem could be related to it, or to synchronization with the GPU and resource calls. I need an expert to sort things out for me and define the lay of the land.
I am using OpenGL 3.3 (Core Profile) with Mesa 18.2.8 on Ubuntu Linux. I have turned off all other code and run the setup and do-while draw calls quite diligently. I have set error callbacks and played with the values of
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
which I assume is the source of my grief. I have read that these are not enabled by default and are activated in the draw call and initialization, which I will show below.
This is where I create my VAO:
void createSquare() {
    float vertices1[] = {
         0.5f,  0.5f, 0.0f,  // top right
         0.5f, -0.5f, 0.0f,  // bottom right
        -0.5f, -0.5f, 0.0f,  // bottom left
        -0.5f,  0.5f, 0.0f   // top left
    };
    unsigned int indices1[] = {  // note that we start from 0!
        0, 1, 3,  // first triangle
        1, 2, 3   // second triangle
    };
    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &VBO);
    glGenBuffers(1, &EBO);
    // Bind the Vertex Array Object first, then bind and set vertex buffer(s),
    // and then configure vertex attribute(s).
    glBindVertexArray(VAO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices1), vertices1, GL_STATIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices1), indices1, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);
    // Note that this is allowed: the call to glVertexAttribPointer registered VBO
    // as the vertex attribute's bound vertex buffer object, so afterwards we can safely unbind.
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    // Remember: do NOT unbind the EBO while a VAO is active, as the bound element
    // buffer object IS stored in the VAO; keep the EBO bound.
    //glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    // You can unbind the VAO afterwards so other VAO calls won't accidentally modify
    // this VAO, but this rarely happens. Modifying other VAOs requires a call to
    // glBindVertexArray anyway, so we generally don't unbind VAOs (nor VBOs) when
    // it's not directly necessary.
    glBindVertexArray(0);
}
And this is the totality of what's in my draw function:
glUseProgram(programID);
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
As I said before, this error is thrown: GL_INVALID_VALUE in glVertexAttribPointerARB(idx), and the square is not drawn.
Update: it appears I am not in a core context, so that is likely the answer to this mess. Thanks for the quick responses.
I am making an object move in its Update() and turn left/right/up/down according to user input. All I want is to make a spotlight follow the object.
Object's Rotation: 0,180,0
SpotLight's Rotation: 90,0,0
Since the rotations are different (and they need to be like that), I cannot make the light follow the object.
Code:

function Update () {
    SetControl(); // Input stuff...
    transform.Translate(0, 0, objectSpeed * Time.deltaTime);
    lightView.transform.eulerAngles = this.transform.eulerAngles;
    lightView.transform.Rotate = this.transform.eulerAngles;
    lightView.transform.Translate(snakeSpeed * Time.deltaTime, 0, 0); // THIS IS INCORRECT
}
lightView is simply pointing to the SpotLight.
What you're looking for is the Unity method Transform.LookAt.
Place the following script on the spotlight. This code will make the object it is attached to look at another object.
// Drag another object onto it to make the camera look at it.
var target : Transform;

// Rotate the camera every frame so it keeps looking at the target
function Update() {
    transform.LookAt(target);
}
All I want is to make a spotlight follow the object.
This is a two-step process. First, find the coordinate position (in world coordinates) of your target. Second, apply that position plus an offset to your spotlight. Since your light is rotated 90° along the x-axis, I assume your light is above and looking down.
var offset = new Vector3(0, 5, 0);

function Update()
{
    // Move this object
    transform.Translate(0, 0, objectSpeed * Time.deltaTime);

    // Move the light to this transform's position + offset.
    // Note that the light's rotation has already been set and does
    // not need to be re-set each frame.
    lightView.transform.position = transform.position + offset;
}
If you want a smoother "following" action, do a linear interpolation over time. Replace
lightView.transform.position = transform.position + offset;
with
lightView.transform.position = Vector3.Lerp(lightView.transform.position, transform.position + offset, Time.deltaTime * smoothingFactor);
where smoothingFactor is a float you tune to control how quickly the light catches up.
As an aside, it is near-deadly for performance to call transform.* in any kind of recurring game loop, because GameObject.transform is actually a get property that performs a component search. Most Unity documentation recommends caching the transform in a variable first.
Better code:

var myTrans = transform; // Cache the transform
var lightTrans = lightView.transform;
var offset = new Vector3(0, 5, 0);

function Update()
{
    // Move this object
    myTrans.Translate(0, 0, objectSpeed * Time.deltaTime);

    // Move the light to this transform's position + offset.
    // Note that the light's rotation has already been set and does
    // not need to be re-set each frame.
    lightTrans.position = myTrans.position + offset;
}
My program receives line segments as input and expands them into cylinder-like objects (like the PipeGS project in the DX SDK sample browser).
I added an array of radius-scaling parameters for the pipes and modify them procedurally, but the radii of the pipes just don't change.
I'm pretty sure the scaling parameters are updated every frame, because I also output them as the pixel color: when I modify them, the pipes change color while their radii stay unchanged.
So I am wondering whether there is some limitation on using global variables in a geometry shader; I couldn't find anything about it on the internet (or maybe I just used the wrong keywords).
The shader code looks like this:
cbuffer {
    .....
    float scaleParam[10];
    .....
}

// Pass 1
VS_1 { /* pass through */ }

// Tessellation stages
// Hull shader, domain shader and patch constant function

GS_1 {
    pipeRadius = MaxRadius * scaleParam[PipeID];
    ....
    // calculate pipe positions based on the line segments and pipeRadius
    ....
    OutputStream.Append( ... );
}

// Pixel shader is disabled in the first pass

// Pass 2
VS_2 { /* pass through */ }

// Tessellation stages
// Hull shader, domain shader and patch constant function
// Transform the vertices and normals to world coordinates in the DS

// No geometry shader in the second pass

PS_2
{
    return float4( scaleParam[0], scaleParam[1], scaleParam[2], 0.0f );
}
Edit:
I have narrowed down the problem.
There are two passes in my program. In the first pass, I calculate the line-segment expansion in the geometry shader and stream out the result.
In the second pass, the program receives the pipe positions from the first pass, tessellates the pipes, and applies displacement mapping on them so they can be more detailed.
I can change the surface tessellation factor and the pixel color, which belong to the second pass, and see the result on screen immediately.
When I modify scaleParam, the pipes change color while their radii stay unchanged. That means I did change scaleParam and pass it into the shader correctly, but something is wrong in the first pass.
Second edit:
I modified the shader code above and am posting some code from the .cpp file here.
In the .cpp file:
void DrawScene()
{
    // Update view matrix, TessFactor, scaleParam, etc.
    ....
    ....

    // Bind stream-output buffer
    ID3D11Buffer* bufferArray[1] = { mStreamOutBuffer };
    md3dImmediateContext->SOSetTargets(1, bufferArray, 0);

    // Two-pass rendering
    D3DX11_TECHNIQUE_DESC techDesc;
    mTech->GetDesc(&techDesc);
    for (UINT p = 0; p < techDesc.Passes; ++p)
    {
        mTech->GetPassByIndex(p)->Apply(0, md3dImmediateContext);
        // First pass
        if (p == 0)
        {
            md3dImmediateContext->IASetVertexBuffers(0, 1, &mVertexBuffer, &stride, &offset);
            md3dImmediateContext->Draw(mVertexCount, 0);

            // Unbind stream-output buffer
            bufferArray[0] = NULL;
            md3dImmediateContext->SOSetTargets(1, bufferArray, 0);
        }
        // Second pass
        else
        {
            md3dImmediateContext->IASetVertexBuffers(0, 1, &mStreamOutBuffer, &stride, &offset);
            md3dImmediateContext->DrawAuto();
        }
    }
    HR(mSwapChain->Present(0, 0));
}
Check whether you are using a float4 position: the w component of the vector acts as a divisor for the final position in the scene (these are homogeneous coordinates). For example:

float4 pos0 = float4(5, 5, 5, 1);
// is equivalent to:
float4 pos1 = float4(10, 10, 10, 2);

To scale a position correctly, you must change only the .xyz components of the position vector.
I solved this problem by rebuilding the vertex buffer and stream-output buffer every time after modifying the parameters, but I still don't know what exactly causes it.
I'm learning OpenGL 3.3, using some tutorials (http://opengl-tutorial.org). In the tutorial I'm using, there is a vertex shader which does the following:
Tutorial shader source:

#version 330 core

// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;

// Values that stay constant for the whole mesh.
uniform mat4 MVP;

void main(){
    // Output position of the vertex, in clip space: MVP * position
    gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
}
Yet, when I try to emulate the same behavior in my application, I get the following:
error: implicit cast from "vec4" to "vec3"
After seeing this, I wasn't sure whether it was because I was using version 4.2 shaders as opposed to 3.3, so I changed everything to match what the author had been using, but I still received the same error afterward.
So, I changed my shader to do this:
My (latest) source:

#version 330 core

layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 MVP;

void main()
{
    vec4 a = vec4(vertexPosition_modelspace, 1);
    gl_Position.xyz = MVP * a;
}
Which, of course, still produces the same error.
Does anyone know why this happens, and what a solution might be? I'm not sure whether the problem is in my calling code (which I've posted, just in case).
Calling code:

static const GLfloat T_VERTEX_BUF_DATA[] =
{
    // x,    y,    z
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f
};

static const GLushort T_ELEMENT_BUF_DATA[] =
{ 0, 1, 2 };

void TriangleDemo::Run(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    GLuint matrixID = glGetUniformLocation(mProgramID, "MVP");
    glUseProgram(mProgramID);
    glUniformMatrix4fv(matrixID, 1, GL_FALSE, &mMVP[0][0]); // Send our transformation to the MVP uniform in the currently bound program

    const GLuint vertexShaderID = 0;
    glEnableVertexAttribArray(vertexShaderID);
    glBindBuffer(GL_ARRAY_BUFFER, mVertexBuffer);
    glVertexAttribPointer(
        vertexShaderID,  // Attribute index (despite the name, this matches layout(location = 0) in the vertex shader)
        3,               // Number of components per vertex
        GL_FLOAT,        // Type of value the vertex buffer is holding as data
        GL_FALSE,        // Normalized?
        0,               // Stride
        (void*)0);       // Offset within the array buffer
    glDrawArrays(GL_TRIANGLES, 0, 3); // 0 => start index of the buffer, 3 => number of vertices
    glDisableVertexAttribArray(vertexShaderID);
}
void TriangleDemo::Initialize(void)
{
    glGenVertexArrays(1, &mVertexArrayID);
    glBindVertexArray(mVertexArrayID);
    glGenBuffers(1, &mVertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, mVertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(T_VERTEX_BUF_DATA), T_VERTEX_BUF_DATA, GL_STATIC_DRAW);
    mProgramID = LoadShaders("v_Triangle", "f_Triangle");

    glm::mat4 projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f); // field of view, aspect ratio (4:3), 0.1 units near, 100 units far
    glm::mat4 view = glm::lookAt(
        glm::vec3(4, 3, 3), // Camera is at (4, 3, 3) in world space
        glm::vec3(0, 0, 0), // and looks at the origin
        glm::vec3(0, 1, 0)  // this is the up vector; the head of the camera is facing upwards. We'd use (0, -1, 0) to look upside down
    );
    glm::mat4 model = glm::mat4(1.0f); // set the model matrix to the identity matrix, meaning the model will be at the origin
    mMVP = projection * view * model;
}
Notes:
I'm in Visual Studio 2012.
I'm using Shader Maker for the GLSL editing.
I can't say what's wrong with the tutorial code.
In "My latest source" though, there's
gl_Position.xyz = MVP * a;
which looks weird because you're assigning a vec4 to a vec3.
EDIT
I can't reproduce your problem.
I have used a trivial fragment shader for testing...
#version 330 core
void main()
{
}
Testing "Tutorial Shader source":
3.3.11762 Core Profile Context
Log: Vertex shader was successfully compiled to run on hardware.
Log: Fragment shader was successfully compiled to run on hardware.
Log: Vertex shader(s) linked, fragment shader(s) linked.
Testing "My latest source":
3.3.11762 Core Profile Context
Log: Vertex shader was successfully compiled to run on hardware.
WARNING: 0:11: warning(#402) Implicit truncation of vector from size 4 to size 3.
Log: Fragment shader was successfully compiled to run on hardware.
Log: Vertex shader(s) linked, fragment shader(s) linked.
And the warning goes away after replacing gl_Position.xyz with gl_Position.
What's your setup? Do you have a correct OpenGL context version? Is glGetError() silent?
Finally, are your GPU drivers up-to-date?
I've had problems with some GPUs (ATI ones, I believe) not liking integer literals where a float is expected. Try changing
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
to
gl_Position = MVP * vec4(vertexPosition_modelspace, 1.0);
I just came across this error message on an ATI Radeon HD 7900 with the latest drivers installed, while compiling some sample code associated with the book "3D Engine Design for Virtual Globes" (http://www.virtualglobebook.com).
Here is the original fragment shader line:
fragmentColor = mix(vec3(0.0, intensity, 0.0), vec3(intensity, 0.0, 0.0), (distanceToContour < dF));
The solution is to cast the offending Boolean expression into float, as in:
fragmentColor = mix(vec3(0.0, intensity, 0.0), vec3(intensity, 0.0, 0.0), float(distanceToContour < dF));
The manual for mix (http://www.opengl.org/sdk/docs/manglsl) states:
"For the variants of mix where a is genBType, elements for which a[i] is false, the result for that element is taken from x, and where a[i] is true, it will be taken from y."
So, since a Boolean blend value should be accepted by the compiler without comment, I think this should go down as an AMD/ATI driver issue.
So I am coding in DirectX 9, and whenever I place a sprite inside my 2D world, a white "halo" appears around the sprite image. I am using PNGs, and the background behind the sprite is transparent. I have also tried using a pink background. The halo seems to appear only on straight runs of pixels, and only on some edges. Any help is greatly appreciated!
m_d3d = Direct3DCreate9(D3D_SDK_VERSION);    // create the Direct3D interface

D3DPRESENT_PARAMETERS d3dpp;                 // create a struct to hold various device information
ZeroMemory(&d3dpp, sizeof(d3dpp));           // clear out the struct for use
d3dpp.Windowed = windowed;                   // is the program windowed, not fullscreen?
d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;    // discard old frames
d3dpp.hDeviceWindow = hWnd;                  // set the window to be used by Direct3D
d3dpp.BackBufferFormat = D3DFMT_X8R8G8B8;    // set the back buffer format to 32-bit
d3dpp.BackBufferWidth = screenWidth;         // set the width of the buffer
d3dpp.BackBufferHeight = screenHeight;       // set the height of the buffer
d3dpp.EnableAutoDepthStencil = TRUE;         // automatically run the z-buffer for us
d3dpp.AutoDepthStencilFormat = D3DFMT_D16;   // 16-bit pixel format for the z-buffer

// create a device class using this information and the info from the d3dpp struct
m_d3d->CreateDevice(D3DADAPTER_DEFAULT,
                    D3DDEVTYPE_HAL,
                    hWnd,
                    D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                    &d3dpp,
                    &m_d3ddev);

D3DXCreateSprite(m_d3ddev, &m_d3dspt);       // create the Direct3D sprite object

LPDIRECT3DTEXTURE9 texture;
D3DXCreateTextureFromFileEx(m_d3ddev, "DC.png", D3DX_DEFAULT, D3DX_DEFAULT,
                            D3DX_DEFAULT, NULL, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, D3DX_DEFAULT,
                            D3DX_DEFAULT, D3DCOLOR_XRGB(255, 0, 255), NULL, NULL, &texture);

m_d3ddev->BeginScene();
m_d3dspt->Begin(D3DXSPRITE_ALPHABLEND);      // begin sprite drawing with transparency

D3DXVECTOR3 center(0.0f, 0.0f, 0.0f), position((appropriate x), (appropriate y), 1);
m_d3dspt->Draw(texture, NULL, &center, &position, D3DCOLOR_ARGB(255, 255, 255, 255));

m_d3dspt->End();                             // end sprite drawing
m_d3ddev->EndScene();
m_d3ddev->Present(NULL, NULL, NULL, NULL);
Thanks
Peter
This occurs when you get your texture coordinates wrong during sprite atlasing and accidentally sample off the edge of your sprite, or onto a neighboring texture; texture filtering then blends those neighboring texels into your sprite's edges, which shows up as a halo.
Most commonly, anyway, AFAIK.